Apple is bringing agentic coding to Xcode. On Tuesday, the company announced the release of Xcode 26.3, which lets developers use agentic tools like Anthropic’s Claude Agent and OpenAI’s Codex directly in Apple’s official app development suite.
The Xcode 26.3 release candidate is currently available to all Apple developers from the developer website and will be published to the App Store shortly.
This latest update follows last year’s Xcode 26 release, which introduced support for ChatGPT and Claude within Apple’s integrated development environment (IDE), used by developers building apps for iPhone, iPad, Mac, Apple Watch, and Apple’s other hardware platforms.
The integration of agentic coding tools allows AI models to further leverage the power of Xcode to perform tasks and carry out more complex automation.
Models also have access to Apple’s current developer documentation to ensure they are using the latest APIs and following best practices when building.
At startup, the agent helps developers explore the project, understand its structure and metadata, build the project, run tests to surface any errors, and fix the errors it finds.
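The explore, build, test, and fix cycle described above can be sketched abstractly. This is a hypothetical illustration only, not Xcode’s actual agent implementation; the tool names (`explore`, `build_and_test`, `fix`) are stand-ins for whatever tools the IDE actually exposes:

```python
# Hypothetical sketch of the explore/build/test/fix loop described above.
# The tool functions are stand-ins, not Xcode's real agent API.

def run_agent_loop(project, tools, max_iterations=5):
    """Drive the agent until the build is clean or the budget runs out."""
    tools["explore"](project)          # read structure and metadata first
    for _ in range(max_iterations):
        errors = tools["build_and_test"](project)
        if not errors:
            return True                # build is clean: done
        for error in errors:           # iterate over a snapshot of the errors
            tools["fix"](project, error)
    return False                       # iteration budget exhausted with errors left

# Toy stand-in tools: a fake project whose listed errors get "fixed" in place.
project = {"errors": ["missing import", "type mismatch"]}
tools = {
    "explore": lambda p: None,
    "build_and_test": lambda p: list(p["errors"]),
    "fix": lambda p, e: p["errors"].remove(e),
}
print(run_agent_loop(project, tools))  # True
```

The point of the loop is that test results, not the model’s own confidence, decide when the agent stops iterating.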

In preparation for this launch, Apple said it worked closely with both Anthropic and OpenAI to design the new experience. Specifically, the company said it did substantial work to optimize token usage and tool calls so that agents run efficiently in Xcode.
Xcode leverages the Model Context Protocol (MCP) to expose its functionality and tools to agents. This means Xcode can now work with external MCP-compatible agents for project discovery, modification, file management, previews and snippets, access to the latest documentation, and more.
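MCP is an open protocol built on JSON-RPC 2.0: a client (the agent) asks a server (here, Xcode) which tools it exposes, then calls them by name. The sketch below shows the general shape of a `tools/list` exchange; the tool name `build_project` is a hypothetical illustration, not one of Apple’s actual MCP tool identifiers:

```python
import json

def make_request(request_id, method, params=None):
    """Serialize a JSON-RPC 2.0 request, the message format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# The agent asks the MCP server which tools it exposes.
request = make_request(1, "tools/list")

# A plausible response: per the MCP spec, each tool carries a name, a
# description, and a JSON Schema for its input. The tool shown here is
# a made-up example, not Xcode's real tool list.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "build_project",  # hypothetical name
                "description": "Build the current project and report diagnostics.",
                "inputSchema": {"type": "object", "properties": {}},
            }
        ]
    },
}

tool_names = [t["name"] for t in response["result"]["tools"]]
print(tool_names)  # ['build_project']
```

Because the protocol is standardized, any MCP-compatible agent can discover and call Xcode’s tools this way without Xcode-specific integration code.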
Developers who want to try out the agentic coding features must first download the agent they want to use from Xcode’s settings, then connect an account with the AI provider by signing in or adding an API key. An in-app drop-down menu lets developers select which model version to use (e.g., GPT-5.2-Codex vs. GPT-5.1 mini).
In a prompt box on the left side of the screen, developers can use natural-language commands to tell the agent what kind of project they want to build or what changes they want made to their code. For example, a developer can ask Xcode to add a feature that uses one of Apple’s frameworks and describe how that feature should look and behave.

Once the agent starts working, it breaks the task down into small steps so developers can easily see what’s happening and how the code is changing. It also finds the documentation it needs before it starts coding. Changes are visually highlighted in the code, and an activity record on the side of the screen lets developers see what’s happening under the hood.
Apple believes this transparency could be especially helpful for new developers who are learning to code. To that end, the company is hosting a “code-along” workshop on its developer site on Thursday, where users can watch and learn how to use the agentic coding tools while writing code in real time in their own copy of Xcode.
At the end of the process, the AI agent validates that the code it wrote works as expected. Armed with those test results, the agent can iterate further on the project to fix errors and other issues as needed. (Apple noted that asking agents to think through their plans before writing code can help improve the process.)
Additionally, if developers aren’t satisfied with the results, they can easily revert the code at any time because Xcode creates milestones every time the agent makes a change.
