Build, test, and submit your app using Xcode, Apple's integrated development environment.

Xcode Documentation

Posts under Xcode subtopic


A Summary of the WWDC25 Group Lab - Developer Tools
At WWDC25 we launched a new type of Lab event for the developer community: Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Developer Tools.

Will my project codebase be used for training when I use Xcode's intelligent assistant powered by cloud-based models?
When using ChatGPT without logging in, your data will not be used to improve any models. If you log in to a ChatGPT account, this depends on your ChatGPT account settings, which allow you to opt out (it defaults to on). When using Xcode with accounts for other model providers, check the policies of your provider. And finally, at no point will any portion of your codebase be used to train or improve any Apple models.

We'd love to make our SwiftUI Previews (and soon, Playgrounds) as snappy as possible. Is there any way to skip certain build steps, such as running linters? It seems the build environment is exactly the same as a debug build, but maybe there's a trick.
Starting with Xcode 16, SwiftUI previews use the exact same build artifacts as the regular build, and the new Playgrounds support in Xcode 26 uses these build artifacts too. Shell script build phases are the most common source of extra build time, so as a first step, try turning off all shell script build phases (like linters) to see whether that's the issue. If those phases add significant time to your build, consider moving some of them into asynchronous steps, such as running linters before committing instead of on every build. If you do need a shell script build phase to run during your build, make sure to explicitly define its input and output files; that is a huge way to improve your build performance.

Are we able to provide additional context for the models, like coding standards? Documentation for third-party dependencies? Documentation on your own codebase that explains things like architecture and more?
In general, Xcode automatically searches for the right context based on the question and the evolving answer, since the model can interact with your project multiple times as it develops an answer. This automatically picks up the coding style of the code it sees, and can include files that contain architecture comments, etc. Beyond automatic context, you can manually attach other documents, even if they aren't in your project. For example, you could make a file with rules and ideas and attach it, and it will influence the response. We are very aware of other kinds of automatic context, like rule files, though Xcode does not support these at this time.

Once ChatGPT is enabled for Coding Intelligence in Xcode 26, and I sign in to my existing ChatGPT account, will the ChatGPT Coding Intelligence model in Xcode know about my previous chat conversations on Xcode development in the ChatGPT Mac app?
Xcode does not use information from other conversations, and conversations started in Xcode are not accessible in the web UI or the ChatGPT app.

Is there a plan to make SwiftUI views easier to locate and understand in the view hierarchy, like UIKit views?
SwiftUI uses a declarative paradigm to define your user interface: you specify what you want, and the system translates that into an efficient representation at runtime. Unlike traditional AppKit and UIKit, seeing the runtime representation of SwiftUI views isn't sufficient to understand why they aren't doing what you want. This year, we introduced a SwiftUI instrument that shows why things are happening, like view re-rendering.

Is it possible to use the AI chat with ChatGPT Enterprise? My company doesn't allow us to use the general ChatGPT, only the enterprise version they have set up that prevents data from being leaked.
Yes, Xcode 26 supports logging in to any existing ChatGPT account, including enterprise accounts. If that does not meet your needs, you can also set up a local server that implements the popular chat completions REST API to talk to your enterprise account however you need.

Now that Icon Composer is here, how does it complement or replace existing vector design tools such as Sketch for icon design?
Icon Composer complements your existing vector design tools. You should continue to create your shapes, gradients, and layers in another tool like Sketch, and compose the exported SVG layers in Icon Composer. Once you bring your layers into Icon Composer, you can use it to influence the translucency, blur, and specular highlights of your icon.

What's one feature or improvement in the new Xcode that you personally think developers will love, but might not immediately discover? Maybe something tucked away or quietly powerful that's flown under the radar so far?
One feature we're particularly excited about is the new power profiler for iOS, which gives you further insight into your app's energy consumption beyond what was previously possible with the Energy instrument. You can learn more about this instrument and how it can help you greatly reduce your app's battery usage in the documentation, as well as in the session "Profile and optimize power usage in your app". There were also accessibility improvements this year with Voice Control: you can naturally speak your Swift code to Xcode, and it understands Swift syntax as you speak. To see it in action, take a look at the demonstration in "What's new in Xcode 26".

We have a software advisory council that is very sensitive to having our private information go to the cloud in any form. What information do you have to help me guide Xcode and Apple Intelligence through the acceptance process?
One thing you can do is configure a proxy for your enterprise that implements the popular Chat Completions API endpoint protocol. When using a model provider via URL, you can point Xcode at your proxy endpoint, inspect the network traffic for anything you do not want sent outside your enterprise, and then forward the traffic through the proxy to your chosen model provider.

Is there a list of recommended LLMs to use with Xcode via Intelligence/Local? I've tried Gemma3-12B, but I hope there are better options.
Apple doesn't have a published list of recommended local models. This is a fast-moving space, and a recommendation would become out of date very quickly as new models are released. We encourage you to try out the local model support in Xcode 26 with models that you find meet your needs, and let us and the community know! (continued below)
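Since two of the answers above point at the chat completions REST API as the integration surface for local servers and enterprise proxies, here is a rough sketch of the request and response shapes such a server would handle. The JSON field names follow the widely used chat completions convention; the Swift type names are illustrative only, not an Apple or OpenAI API.

import Foundation

// Illustrative Codable models for the chat completions wire format.
// Only the JSON field names (model, messages, role, content, choices,
// finish_reason) come from the common convention; the types are made up.
struct ChatMessage: Codable {
    let role: String   // "system", "user", or "assistant"
    let content: String
}

struct ChatCompletionRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

struct ChatCompletionResponse: Codable {
    struct Choice: Codable {
        let index: Int
        let message: ChatMessage
        let finishReason: String?

        enum CodingKeys: String, CodingKey {
            case index, message
            case finishReason = "finish_reason"
        }
    }
    let id: String
    let model: String
    let choices: [Choice]
}

// A local server or proxy would decode a ChatCompletionRequest from the
// incoming POST body, apply any filtering or auditing, forward it
// upstream, and return JSON shaped like ChatCompletionResponse.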
Replies: 1 · Boosts: 0 · Views: 810 · Jul ’25
Xcode simulator & preview not loading
Hello, I recently decided to start learning how to code for iOS. I don't have much coding experience, but I still wanted to explore it for fun at least. I downloaded Xcode on my MacBook and created a new iOS project after downloading iOS 18.1, so I could run the simulator and get a preview of my code. Even though I only had the basic "Hello World!" that is auto-generated in the code, the preview never showed; it sat at a loading screen for multiple hours, saying "Preparing (Automatic) iPhone Simulator" at the top. There is probably a simple solution that I'm missing. I would appreciate any tips! Thanks.
Replies: 2 · Boosts: 0 · Views: 935 · Dec ’24
Preview won't load on macOS/Multiplatform Swift app
Hey, I wanted to create a macOS application. Normally I only code iPhone apps. But as soon as I want to display anything in the preview, it loads as normal, then the throbber/progress indicator disappears and the preview canvas stays gray, just as it was before. I also don't get any error messages. Only once, after trying different things, did I get a message saying: "Could not launch Preview Shell." and "Could not create FBSOpenApplicationService." I also searched for solutions and tried some, but none of them seemed to work. There were some files from that time in DiagnosticReports, but I didn't find anything helpful in them, and they don't reappear when I reopen my project or switch the preview destination from "My Mac" to "iPhone 16 Pro". When I launch the app on a simulator it works perfectly fine, but this is quite annoying. Thanks for trying to help me!
Replies: 1 · Boosts: 0 · Views: 382 · Jan ’25
My app stops at 90% on TestFlight and doesn't download
I am creating an iMessage sticker pack and I am having an issue with an update of my app. It publishes to TestFlight, but I can't download it to test. My app has an iMessage extension. I have tried checking the size of the animated stickers; all are below 500 KB. My version number is 4.1.1, so no 0. I am really not sure what to do. Any suggestions would be appreciated.
Replies: 0 · Boosts: 0 · Views: 212 · Jan ’25
CoreData in Swift Packages
I am having issues loading my model from a Swift package with the following structure:

Package.swift
Sources
  SamplePackage
    Core
      SamplePackageDataStack.swift
      DataModel.xcdatamodeld
        Model.xcdatamodel  (<- is this new?)

As mentioned, I am not required to list the xcdatamodeld as a resource in my Package manifest. When trying to load the model in the main app, I am getting:

CoreData: error: Failed to load model named DataModel

Code in my Swift package:

import CoreData

public class SamplePackageDataStack: NSObject {
    public static let shared = SamplePackageDataStack()
    private override init() {}

    public lazy var persistentContainer: NSPersistentContainer = {
        let container = NSPersistentContainer(name: "DataModel")
        container.loadPersistentStores(completionHandler: { (storeDescription, error) in
            if let error = error as NSError? {
                fatalError("Unresolved error \(error), \(error.userInfo)")
            }
        })
        return container
    }()

    /// The managed object context associated with the main queue. (read-only)
    public var context: NSManagedObjectContext {
        return self.persistentContainer.viewContext
    }

    public func saveContext() {
        if context.hasChanges {
            do {
                try context.save()
            } catch {
                let nserror = error as NSError
                fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
            }
        }
    }
}

Main app:

import UIKit
import SamplePackage

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let container = SamplePackageDataStack.shared.persistentContainer
        print(container)
    }
}
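A common cause worth checking here, offered as an assumption rather than a confirmed diagnosis: NSPersistentContainer(name:) looks for the compiled model in Bundle.main, while a model compiled into a Swift package lands in Bundle.module. A minimal sketch of loading the model explicitly from the package bundle (the helper name makePackageContainer is hypothetical; it would replace the lazy property's body inside the package target):

import CoreData

// Sketch (assumption, not a confirmed fix): load the compiled model
// ("DataModel.momd") from the package's resource bundle instead of
// relying on the Bundle.main lookup that NSPersistentContainer(name:)
// performs by default. Assumes the model is compiled into Bundle.module.
func makePackageContainer() -> NSPersistentContainer {
    guard let modelURL = Bundle.module.url(forResource: "DataModel", withExtension: "momd"),
          let model = NSManagedObjectModel(contentsOf: modelURL) else {
        fatalError("Could not load DataModel from the package bundle")
    }
    let container = NSPersistentContainer(name: "DataModel", managedObjectModel: model)
    container.loadPersistentStores { _, error in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    }
    return container
}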
Replies: 5 · Boosts: 0 · Views: 5.5k · Jul ’25
dyld: Symbol not found ... certain MeshResource APIs on iOS 17.x
I submitted feedback as FB16463501; posting here for others to see, or maybe for Apple to share any help if there are workarounds, etc.

Targets below iOS 18.x fail to launch the app due to dyld "Symbol not found" errors when referencing:

MeshResource.init(from:) async - https://developer.apple.com/documentation/realitykit/meshresource/init(from:)-b7hb
i.e. dyld[61511]: Symbol not found: _$s10RealityKit12MeshResourceC0A10FoundationE4fromACSayAD0C10DescriptorVG_tYaKcfC

MeshResource.replace(with:) async - https://developer.apple.com/documentation/realitykit/meshresource/replace(with:)-8uvri
i.e. dyld[78830]: Symbol not found: _$s10RealityKit12MeshResourceC0A10FoundationE7replace4withyAcDE8ContentsV_tYaKF

Targets tested that exhibit the issue (dyld errors):
Device: iOS 17.7.2, iPhone 14 Pro Max
Simulator: iOS 17.5 (21F79), iPhone 15

System information:
macOS Version 15.3 (Build 24D60)
Xcode 16.2 (23507) (Build 16C5032a)

MRE -- include this code in your app (no need to invoke, just reference):

static func addOrUpdateEntityModel_MRE(_ entity: ModelEntity) async {
    let descriptor = MeshDescriptor(name: "MyDescriptor")
    do {
        if let modelComponent = entity.model {
            // Update the existing ModelComponent.
            if let model = try? MeshResource.Model(id: "MyModelId", descriptors: [descriptor]) {
                var contents = MeshResource.Contents()
                contents.models = .init([model])
                try await modelComponent.mesh.replace(with: contents)
                // dyld[78830]: Symbol not found: _$s10RealityKit12MeshResourceC0A10FoundationE7replace4withyAcDE8ContentsV_tYaKF
            }
        } else {
            // Create a new ModelComponent.
            // Comment out the two lines below == dyld error for MeshResource.replace(with:) above.
            let meshRes = try await MeshResource(from: [descriptor])
            // dyld[61511]: Symbol not found: _$s10RealityKit12MeshResourceC0A10FoundationE4fromACSayAD0C10DescriptorVG_tYaKcfC
            entity.model = .init(mesh: meshRes, materials: [SimpleMaterial()])
        }
    } catch {
        fatalError()
    }
}
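A possible mitigation while the feedback is open, offered strictly as an assumption rather than a confirmed workaround: avoid referencing the async variants on iOS 17 and use their older synchronous counterparts instead. A minimal sketch (the helper names are hypothetical):

import RealityKit

// Sketch (assumption, not a confirmed fix): reference only the older
// synchronous APIs instead of the async variants whose symbols are
// reported missing below iOS 18.
func makeMesh(from descriptors: [MeshDescriptor]) throws -> MeshResource {
    // Synchronous counterpart of MeshResource.init(from:) async.
    try MeshResource.generate(from: descriptors)
}

func replaceMesh(_ mesh: MeshResource, with contents: MeshResource.Contents) throws {
    // Synchronous counterpart of MeshResource.replace(with:) async.
    try mesh.replace(with: contents)
}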
Replies: 0 · Boosts: 0 · Views: 503 · Feb ’25
How to Handle Identically Named Files in Different Folders in Xcode 16
Hello everyone =) I'm currently working on a project in Xcode 16 where I need multiple files with the exact same name, stored in different folders (for example, separate quest data files). When I compile, Xcode seems to treat these identically named files as duplicates, causing build errors. How can I properly organize (or configure) Xcode so that it recognizes these files as distinct? Any advice or best practices would be greatly appreciated. Thank you! ;)
Replies: 0 · Boosts: 0 · Views: 244 · Dec ’24
Vision Framework Causes EXC_BREAKPOINT Error in Xcode App Playground (.swiftpm) File
I’m trying to use the Vision framework in a Swift Playground to perform face detection on an image. The following code works perfectly when I run it in a regular Xcode project, but in an App Playground, I get the error:

Thread 12: EXC_BREAKPOINT (code=1, subcode=0x10321c2a8)

Here's the code:

import SwiftUI
import Vision

struct ContentView: View {
    var body: some View {
        VStack {
            Text("Face Detection")
                .font(.largeTitle)
                .padding()
            Image("me")
                .resizable()
                .aspectRatio(contentMode: .fit)
                .onAppear {
                    detectFace()
                }
        }
    }

    func detectFace() {
        guard let cgImage = UIImage(named: "me")?.cgImage else { return }
        let request = VNDetectFaceRectanglesRequest { request, error in
            if let results = request.results as? [VNFaceObservation] {
                print("Detected \(results.count) face(s).")
                for face in results {
                    print("Bounding Box: \(face.boundingBox)")
                }
            } else {
                print("No faces detected.")
            }
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request]) // This line causes the error.
        } catch {
            print("Failed to perform Vision request: \(error)")
        }
    }
}

The error occurs on this line:

try handler.perform([request])

Details:
This code runs fine in a normal Xcode project (.xcodeproj).
I'm using an App Playground instead (.swiftpm).
The image is included in the .xcassets folder.

Is there any way I can mitigate this issue? Please do not recommend switching to .xcodeproj, as I am making a submission for Apple's Swift Student Challenge, and they require that I use .swiftpm.
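One thing that may be worth trying, offered as an assumption rather than a known fix: Vision requests are generally recommended to run off the main thread, so a variant of detectFace that moves perform(_:) onto a background queue (same imports as the post above) at least rules out a main-thread issue:

// Sketch of a mitigation to try, not a verified fix for the
// App Playground crash.
func detectFace() {
    guard let cgImage = UIImage(named: "me")?.cgImage else { return }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        print("Detected \(faces.count) face(s).")
    }
    // Perform the request on a background queue, as recommended for
    // Vision work, instead of synchronously from the SwiftUI onAppear.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request])
        } catch {
            print("Failed to perform Vision request: \(error)")
        }
    }
}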
Replies: 1 · Boosts: 0 · Views: 459 · Dec ’24
Issue with CBConnectPeripheralOptionNotifyOnConnectionKey Not Triggering Alert When Reconnecting in Background
I'm working with CBConnectPeripheralOptionNotifyOnConnectionKey, and my understanding is that it should trigger an alert when a reconnection occurs while the central app is in the background. To test this, I've set up two separate iPhone devices, one acting as the peripheral and the other as the central. The process I'm using is as follows:

1. The central app connects to the peripheral app.
2. I switch to a different app on the central device, which sends the central app to the background.
3. I manually disconnect and reconnect Bluetooth on the central device, which should trigger the peripheral app to reestablish the connection.

However, despite the central app being in the background, I don't see the expected alert on the central side. The connection reestablishes correctly, but no alert appears. I would appreciate any insights into what might be causing this issue, or whether I'm misunderstanding the behavior of CBConnectPeripheralOptionNotifyOnConnectionKey. I'd be happy to provide more specific code or logs if needed. Thanks in advance! I'm relatively new to Core Bluetooth and feel like I've explored most of the options, but I'm still encountering this issue.
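For comparison, here is a minimal sketch of how this option is typically supplied when initiating the connection. Note that the documented behavior is that the system alert appears only if the app is suspended at the moment the connection completes, which may matter if the backgrounded central is still running (for example, with a bluetooth-central background mode) rather than suspended:

import CoreBluetooth

// Minimal sketch of the expected setup. The option is passed per
// connection via connect(_:options:).
final class Central: NSObject, CBCentralManagerDelegate {
    private var manager: CBCentralManager!

    override init() {
        super.init()
        manager = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        // Wait for .poweredOn before scanning or connecting.
    }

    func connect(to peripheral: CBPeripheral) {
        manager.connect(peripheral, options: [
            CBConnectPeripheralOptionNotifyOnConnectionKey: true
        ])
    }
}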
Replies: 3 · Boosts: 0 · Views: 425 · Jan ’25
Troubleshooting Apple Vision Framework Errors
While working through the project "Analyzing a Selfie and Visualizing Its Content" from Apple's documentation, I downloaded the project and opened it in Xcode. However, I encountered the following errors:

VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=com.apple.Vision Code=9 \"Could not create inference context\" UserInfo={NSLocalizedDescription=Could not create inference context}")
VTEST: error: DetectFaceRectanglesRequest was cancelled.
VTEST: error: DetectFaceRectanglesRequest was cancelled.
Error Domain=com.apple.Vision Code=9 "Could not create inference context" UserInfo={NSLocalizedDescription=Could not create inference context}

How can I resolve this issue? Thanks in advance!
Replies: 1 · Boosts: 0 · Views: 379 · Feb ’25
"could not build module" errors on Xcode 16.2
Hello all :) I am using Xcode 16.2, React Native 0.76, an iPhone 15 (17.0) simulator, and an M1 Mac on Sequoia 15.2. Many "could not build module" errors appear while building files inside iPhoneSimulator18.2.sdk. The thing is that I don't even use this simulator, and if I try to delete it, Xcode hides all other simulator options and requires 18.2 to be downloaded. Of course, I have already tried cleaning, deleting, and reinstalling everything, but nothing works. Any help is welcome :) Thanks!
Replies: 1 · Boosts: 0 · Views: 489 · Feb ’25
Adding Fonts in Xcode 16.1 Causes XIB Files to Malfunction
I encountered a problem when adding a new custom font in Xcode 16.1. After including the font and opening my XIB files, the interface preview became blank and the application seemed to come under heavy load. To troubleshoot, I removed all custom fonts, and everything returned to normal. However, even after reinstalling Xcode, the issue persisted when I added the font again. (Screenshots attached: first, the XIB preview loading correctly; second, the XIB preview blank and unresponsive.)
Replies: 0 · Boosts: 0 · Views: 527 · Dec ’24