Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics

Posts under Machine Learning & AI topic
iOS 18's new RecognizeTextRequest DEADLOCKS if more than 2 are run in parallel
Following the recommendations of the WWDC24 video "Discover Swift enhancements in the Vision framework" (at 10:41), I used the code below to run multiple of the new iOS 18 `RecognizeTextRequest`s in parallel. Problem: if more than 2 requests run in parallel, they hang, leaving the app in a state where no more requests can be started -> deadlock. I tried other ways to run the requests, but no matter the method employed or the device I use, no more than 2 requests can ever be run in parallel.

```swift
import Vision

func triggerDeadlock() async throws {
    try await withThrowingTaskGroup(of: Void.self) { group in
        // See: WWDC24 "Discover Swift enhancements in the Vision framework" at 10:41

        // ############## THIS IS KEY
        // On a real device, if more than 2 RecognizeTextRequests are launched
        // in parallel using tasks, the requests hang.
        let maxOCRTasks = 5
        // ############## THIS IS KEY

        for _ in 0..<maxOCRTasks {
            let url = ... // URL to some image
            group.addTask {
                // Perform OCR
                let _ = try await performOCRRequest(on: url)
            }
        }

        var nextIndex = maxOCRTasks
        for try await _ in group { // Wait for the result of the next child task that finished
            if nextIndex < pageCount {
                group.addTask {
                    let url = ... // URL to some image
                    // Perform OCR
                    let _ = try await performOCRRequest(on: url)
                }
                nextIndex += 1
            }
        }
    }
}

// MARK: - ASYNC/AWAIT version with iOS 18

@available(iOS 18, *)
func performOCRRequest(on url: URL) async throws -> [RecognizedText] {
    // Create request
    var request = RecognizeTextRequest() // Single request: no need for ImageRequestHandler

    // Configure request
    request.recognitionLevel = .accurate
    request.automaticallyDetectsLanguage = true
    request.usesLanguageCorrection = true
    request.minimumTextHeightFraction = 0.016

    // Perform request
    let textObservations: [RecognizedTextObservation] = try await request.perform(on: url)

    // Convert [RecognizedTextObservation] to [RecognizedText]
    return textObservations.compactMap { observation in
        observation.topCandidates(1).first
    }
}
```

I also found a Swift forums post mentioning something very similar, and I opened a feedback: FB17240843.
7 replies · 0 boosts · 242 views · Aug ’25
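Until this is fixed, a practical mitigation is to keep at most 2 requests in flight. Below is a minimal sketch of an async limiter built around the `performOCRRequest` function from the post; `OCRLimiter` is an illustrative helper, not an Apple API, and the limit of 2 simply matches the behavior the poster observed:

```swift
import Vision

// Sketch: cap concurrent RecognizeTextRequest executions at 2.
actor OCRLimiter {
    private var available = 2 // the level observed to be safe in the post above
    private var waiters: [CheckedContinuation<Void, Never>] = []

    func acquire() async {
        if available > 0 {
            available -= 1
            return
        }
        // No slot free: suspend until release() hands one over.
        await withCheckedContinuation { waiters.append($0) }
    }

    func release() {
        if waiters.isEmpty {
            available += 1
        } else {
            waiters.removeFirst().resume() // pass the slot to the next waiter
        }
    }
}

let ocrLimiter = OCRLimiter()

@available(iOS 18, *)
func limitedOCRRequest(on url: URL) async throws -> [RecognizedText] {
    await ocrLimiter.acquire()
    do {
        let result = try await performOCRRequest(on: url)
        await ocrLimiter.release()
        return result
    } catch {
        await ocrLimiter.release()
        throw error
    }
}
```

Child tasks call `limitedOCRRequest` instead of `performOCRRequest`; the task group can then enqueue as many tasks as it likes while only 2 Vision requests ever run at once.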
Swift playgrounds (.swiftpm) and CoreML
Hey guys, I've been having difficulty transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. I keep getting these errors, and none of the views can find the model in scope:

"TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language."

Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb

I've gone through multiple posts about these exact problems, but they all seem to be talking about ".playground" files and their "Resources" folder (mind you, I did try exactly what they said). Can anyone help? (Quick side note: why does the SSC require a .swiftpm file? Why can't we just send a zip of our Xcode project?)
2 replies · 0 boosts · 780 views · Feb ’25
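The "No predominant language detected" message comes from Core ML's code generation step, which does not run for models inside Swift packages; that is also why the views cannot find a generated model class in scope. A sketch of the usual workaround (an assumption, not a confirmed fix for this project; it presumes the .mlmodel is declared as a package resource, and the function name is illustrative): skip codegen and load the model manually.

```swift
import CoreML

// Sketch: load a bundled .mlmodel without relying on generated classes.
@available(iOS 16, *)
func loadTrashDetector() async throws -> MLModel {
    guard let modelURL = Bundle.main.url(forResource: "TrashDetector 1",
                                         withExtension: "mlmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    // Compile the model on device, then load the resulting .mlmodelc.
    let compiledURL = try await MLModel.compileModel(at: modelURL)
    return try MLModel(contentsOf: compiledURL)
}
```

Predictions then go through `MLModel`'s feature-provider API instead of the generated convenience class.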
Vision Framework - Testing RecognizeDocumentsRequest
How do I test the new RecognizeDocumentsRequest API? Reference: https://www.youtube.com/watch?v=H-GCNsXdKzM. I am running the Xcode beta, but I only have one primary device, and I cannot install beta software on it. Please suggest a testing strategy. Will the simulator work? The new capability is critical to my application; it is just what I need for structuring document scans and extraction. Thank you.
1 reply · 0 boosts · 196 views · Jun ’25
Metal GPU Work Won't Stop
Is there any way to stop GPU work that was scheduled using Metal? Long shader calculations don't stop when the application is stopped in Xcode; they continue to take up GPU time and affect the display. Why isn't this functionality available, when Swift Tasks can be cancelled?
2 replies · 0 boosts · 743 views · Feb ’25
FoundationModels and Core Data
Hi, I have an app that uses Core Data to store user information and display it in various views. I want to know if it's possible to easily integrate this setup with FoundationModels to make it easier for the user to query and manipulate the information, and if so, how would I go about it? Can the model be pointed to the database schema file and the SQLite file sitting in the user's app group container to parse out the information needed? And/or should the NSManagedObjects be made @Generable for better output? Any guidance about this would be useful.
1 reply · 0 boosts · 199 views · Jun ’25
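One pattern worth considering for the question above (a sketch under assumptions, not a confirmed Apple recommendation): the on-device model cannot parse a SQLite file or a schema file directly, but Core Data fetches can be exposed to it as a Tool, so the model asks for data instead of reading the store. The entity name "Note" and its "text" attribute below are hypothetical placeholders:

```swift
import CoreData
import FoundationModels

// Sketch: let the model query Core Data through a tool call.
struct FetchNotesTool: Tool {
    let name = "fetchNotes"
    let description = "Finds the user's notes whose text contains a search term."

    let context: NSManagedObjectContext

    @Generable
    struct Arguments {
        @Guide(description: "A word or phrase to search for.")
        var searchTerm: String
    }

    func call(arguments: Arguments) async throws -> String {
        try await context.perform {
            let request = NSFetchRequest<NSManagedObject>(entityName: "Note")
            request.predicate = NSPredicate(format: "text CONTAINS[cd] %@",
                                            arguments.searchTerm)
            let notes = try context.fetch(request)
            return notes.compactMap { $0.value(forKey: "text") as? String }
                        .joined(separator: "\n")
        }
    }
}

// Usage sketch:
// let session = LanguageModelSession(tools: [FetchNotesTool(context: viewContext)])
```

Making the NSManagedObject subclasses themselves @Generable may help for structured output, but tools keep the fetch logic (and predicates) under the app's control.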
Cannot find type ToolOutput in scope
My sample app has been working with the following code:

```swift
func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore:
        temp = Int.random(in: 30..<40)
    case .china:
        temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}
```

However, in 26 beta 5, ToolOutput is no longer available. Please advise what has changed.
3 replies · 0 boosts · 243 views · Aug ’25
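In case it helps others migrating: ToolOutput appears to have been removed in later seeds in favor of tools returning their output directly. A hedged sketch of that shape (an assumption based on the change described above, not confirmed release notes):

```swift
// Sketch: return the generated content directly instead of wrapping it
// in ToolOutput (which no longer exists in beta 5).
func call(arguments: Arguments) async throws -> GeneratedContent {
    let temp: Int
    switch arguments.city {
    case .singapore:
        temp = Int.random(in: 30..<40)
    case .china:
        temp = Int.random(in: 10..<30)
    }
    return GeneratedContent(temp)
}
```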
Unavailable error is wrong?
This is my code:

```swift
switch SystemLanguageModel.default.availability {
case .available:
    ContentView()
        .popover(isPresented: $showSettings) {
            SettingsView().presentationCompactAdaptation(.popover)
        }
case .unavailable(.modelNotReady):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please come back later."))
case .unavailable(.appleIntelligenceNotEnabled):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please turn on Apple Intelligence."))
case .unavailable(.deviceNotEligible):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("This device is not eligible for Apple Intelligence."))
case .unavailable:
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark")
}
```

When I switch off Apple Intelligence, I expect "Please turn on Apple Intelligence.", but instead I get "Please come back later." Is this the wrong error?
1 reply · 0 boosts · 275 views · Jul ’25
Can't apply compression techniques on my CoreML Object Detection model.
```python
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

# Load the full-precision model
model_fp32 = ct.models.MLModel(modelPath)

model_fp16 = quantization_utils.quantize_weights(model_fp32, nbits=16)
model_fp16.save("reduced-model.mlmodel")
```

I'm testing it with the model from one of Apple's sample projects (GameBoardDetector), and it works fine, reducing the model size by half. But there are several problems with my model (trained in the Create ML app using Full Network): quantizing to float16 does not work (a new file gets created, but it is only about 0.1 MB smaller), and quantizing below 16 bits causes errors, so no file gets created. Here are the additional metadata and precision details of the two models (attached as screenshots): the working model's, and mine.
2 replies · 0 boosts · 622 views · Jan ’25
App Shortcuts Limit (10 per app) — Can This Be Increased?
Hi Apple team, When using AppShortcutsProvider, I hit the hard limit: Each app may have at most 10 App Shortcuts. This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration. Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities? AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful. Thanks!
1 reply · 0 boosts · 156 views · Jun ’25
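Until the cap changes, one mitigation for the request above (a sketch with hypothetical names, not an Apple-endorsed workaround) is to fold several workflows into a single parameterized App Shortcut: one shortcut with an AppEnum parameter covers every case while counting only once against the limit of 10.

```swift
import AppIntents

// Hypothetical workflows exposed through one parameterized shortcut.
enum Workflow: String, AppEnum {
    case backup, sync, export

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Workflow"
    static var caseDisplayRepresentations: [Workflow: DisplayRepresentation] = [
        .backup: "Backup", .sync: "Sync", .export: "Export"
    ]
}

struct RunWorkflowIntent: AppIntent {
    static var title: LocalizedStringResource = "Run Workflow"

    @Parameter(title: "Workflow")
    var workflow: Workflow

    func perform() async throws -> some IntentResult {
        // Run the selected workflow here.
        return .result()
    }
}

struct MyShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        // The \(\.$workflow) interpolation makes Siri accept every enum case.
        AppShortcut(
            intent: RunWorkflowIntent(),
            phrases: ["Run \(\.$workflow) in \(.applicationName)"],
            shortTitle: "Run Workflow",
            systemImageName: "gearshape"
        )
    }
}
```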
LanguageModelStream and collecting the final output
I have a Generable type with many elements. I am using a stream() to incrementally process the output (Generable.PartiallyGenerated?) content. At the end, I want to pass the final version (not partially generated) to another function. I cannot seem to find a good way to convert from a MyGenerable.PartiallyGenerated to a MyGenerable. Am I missing some functionality in the APIs?
4 replies · 0 boosts · 584 views · Jul ’25
Creating .mlmodel with Create ML Components
I have rewatched the WWDC22 session a few times, but I still don't fully understand how to get a .mlmodel file out of Create ML Components. The banana-ripeness example is cool, but what needs to be added to actually output a .mlmodel? Is there full sample code for this kind of modular project somewhere? The code is from https://developer.apple.com/videos/play/wwdc2022/10019:

```swift
import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
    static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // File name example: banana-5.jpg
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ Float($0)! })

        let (training, validation) = data.randomSplit(by: 0.8)
        let transformer = try await estimator.fitted(to: training, validateOn: validation)

        try estimator.write(transformer, to: parametersURL)

        return transformer
    }
}
```

I have tried to run it in a macOS command-line app and in SwiftUI, but the most I got as output was a .pkg containing "pipeline.json, parameters, optimizer.json, optimizer".
3 replies · 0 boosts · 541 views · Mar ’25
missing CreateML frameworks
I have reinstalled everything, including the command line tools, but the Create ML frameworks fail to install. I need the framework to train my auto-categorization model, which predicts a category based on descriptions, and I want to use revision 4. Please advise on how to proceed.
4 replies · 0 boosts · 739 views · Mar ’25
Memory Attribution for Foundation Models in iOS 26
Hi, I'm developing an app targeting iOS 26 that uses the new FoundationModels framework to perform on-device LLM inference, and I'm currently testing memory usage. Does the memory used by FoundationModels, including model weights, the KV cache, and any inference-related buffers, count toward my app's Jetsam memory limit, or is any of it managed separately by the system? I may need to run two concurrent inferences, each with a 4096-token context window. Is this explicitly supported or allowed by FoundationModels on iOS 26, and would it significantly increase the risk of memory-based termination? Thanks in advance for any clarification.
1 reply · 0 boosts · 412 views · Jul ’25
Create ML app seems to stop testing without error
I have a smallish image classifier I've been working on in the Create ML app. For a while everything was going fine, but lately, as the dataset has gotten larger, Create ML seems to stop during the testing phase with no error and no test results. You can see that there is no score in the result box, even though there are "testing started" and "testing completed" messages. No error message is shown in the Create ML app, but I do see these messages in the log:

```
default 14:25:36.529887-0500 MLRecipeExecutionService [0x6000012bc000] activating connection: mach=false listener=false peer=false name=com.apple.coremedia.videodecoder
default 14:25:36.529978-0500 MLRecipeExecutionService [0x41c5d34c0] activating connection: mach=false listener=true peer=false name=(anonymous)
default 14:25:36.530004-0500 MLRecipeExecutionService [0x41c5d34c0] Channel could not return listener port.
default 14:25:36.530364-0500 MLRecipeExecutionService [0x429a88740] activating connection: mach=false listener=false peer=true name=com.apple.xpc.anonymous.0x41c5d34c0.peer[1167].0x429a88740
default 14:25:36.534523-0500 MLRecipeExecutionService [0x6000012bc000] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534537-0500 MLRecipeExecutionService [0x41c5d34c0] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534544-0500 MLRecipeExecutionService [0x429a88740] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
error 14:25:36.558788-0500 MLRecipeExecutionService CreateWithURL:342: *** ERROR: err=24 (Too many open files) - could not open '<CFURL 0x60000079b540 [0x1fdd32240]>{string = file:///Users/kevin/Library/Mobile%20Documents/com~apple~CloudDocs/Binary%20Formations/Under%20My%20Roof/Core%20ML%20Training%20Data/Household%20Items/Output/2025.01.23_12.55.16/Test/Stove/Test480.webp, encoding = 134217984, base = (null)}'
default 14:25:36.559030-0500 MLRecipeExecutionService Error: <private>
default 14:25:36.559077-0500 MLRecipeExecutionService Error: <private>
```

Of particular interest is the "Too many open files" message from MLRecipeExecutionService referencing one of the test images. There are a total of 2,555 test images, which I wouldn't think is a very large set, and the system doesn't seem to be running out of memory or anything like that. Near the end of the test run, the MLRecipeExecutionService process had 2,934 file descriptors open according to lsof. Has anyone else run into this, or does anyone know of a workaround? So far I've tried rebooting and recreating the Create ML project. I'm currently using Create ML Version 6.1 (150.3) on macOS 15.2 (24C101), running on a Mac Studio.
1 reply · 0 boosts · 487 views · Jan ’25
BNNS random number generator for Double value types
I generate an array of random floats using the code shown below. However, I would like to do this with Double instead of Float. Are there any BNNS random number generators for double values, something like BNNSRandomFillUniformDouble? If not, is there a way I can convert a BNNSNDArrayDescriptor from float to double?

```swift
import Accelerate

let n = 100_000_000

let result = Array<Float>(unsafeUninitializedCapacity: n) { buffer, initCount in
    var descriptor = BNNSNDArrayDescriptor(data: buffer, shape: .vector(n))!
    let randomGenerator = BNNSCreateRandomGenerator(BNNSRandomGeneratorMethodAES_CTR, nil)
    BNNSRandomFillUniformFloat(randomGenerator, &descriptor, 0, 1)
    initCount = n
}
```
3 replies · 0 boosts · 112 views · Jun ’25
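If no BNNSRandomFillUniformDouble turns up, one workaround (a sketch, assuming single-precision randomness widened to Double is acceptable for the use case) is to fill a Float buffer with BNNS as above and then convert it with vDSP_vspdp, Accelerate's single-to-double precision vector conversion:

```swift
import Accelerate

let n = 100_000_000

// Fill a Float buffer using BNNS, as in the post above.
let floats = Array<Float>(unsafeUninitializedCapacity: n) { buffer, initCount in
    var descriptor = BNNSNDArrayDescriptor(data: buffer, shape: .vector(n))!
    let randomGenerator = BNNSCreateRandomGenerator(BNNSRandomGeneratorMethodAES_CTR, nil)
    BNNSRandomFillUniformFloat(randomGenerator, &descriptor, 0, 1)
    initCount = n
}

// Widen the Float values to Double with vDSP.
let doubles = Array<Double>(unsafeUninitializedCapacity: n) { buffer, initCount in
    vDSP_vspdp(floats, 1, buffer.baseAddress!, 1, vDSP_Length(n))
    initCount = n
}
```

Note this still draws from the Float distribution; values land on single-precision grid points rather than filling Double's full mantissa.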
Issue with #Playground and Foundation Model
Hi all, I'm encountering an issue when trying to run Apple Foundation Models in a blank project targeting iOS 26. Below are the details:

Xcode: latest version with the iOS 26 SDK
macOS: macOS 26 Tahoe (installed on the main disk)
Mac: 16" MacBook Pro with M2 Pro chip
Apple Intelligence: available and functional on this machine

Problem: I created a new blank iOS project, set the deployment target to iOS 26, and ran the following minimal code using Foundation Models. However, I get no response at all in the output, not even an error. The app runs, but the model does not produce any output.

```swift
#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Tell me a story")
}
```

Then I tried to catch an error with this code:

```swift
#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    } catch {
        print("Failed to get response:", error)
    }
    print("This line never gets executed")
}
```

Neither print produced output; as the comment says, the final line never gets executed.

I've done further testing and discovered something important: I tried running the Code Along sample project, and there the #Playground macro worked without issues. The only significant difference I noticed was the Canvas run destination:

In my original project, I was using iPhone 16 Pro (iOS 26) as the run target in Canvas. Apple Intelligence was enabled on the simulator, but no response was returned when executing the prompt.
In the sample project, the Canvas was running on My Mac. I attempted to match that setup, but at first my destination was My Mac (Designed for iPad), which still didn't work. The macro finally executed properly once I switched to My Mac (AppKit).

So the question is: does it seem that, for now, Foundation Models and the #Playground macro only run correctly when the Canvas destination is set to My Mac (AppKit)?
7 replies · 0 boosts · 491 views · Jul ’25
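For anyone debugging a similar silent hang, a small diagnostic sketch (an assumption, not a confirmed fix): check SystemLanguageModel.default.availability inside the playground before prompting, so a model that is unavailable on the current run destination shows up explicitly instead of as missing output.

```swift
import FoundationModels
import Playgrounds

#Playground {
    // Print availability first, to tell "model unavailable on this run
    // destination" apart from a request that silently never returns.
    let model = SystemLanguageModel.default
    print("Availability:", model.availability)

    if case .available = model.availability {
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    }
}
```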
AI-Powered Feed Customization via User-Defined Algorithm
Hey guys 👋 I’ve been thinking about a feature idea for iOS that could totally change the way we interact with apps like Twitter/X. Imagine if we could define our own recommendation algorithm, and have an AI on the iPhone that replaces the suggested tweets in the feed with ones that match our personal interests — based on public tweets, and without hacking anything. Kinda like a personalized "AI skin" over the app that curates content you actually care about. Feels like this would make content way more relevant and less algorithmically manipulative. Would love to know what you all think — and if Apple could pull this off 🔥
1 reply · 0 boosts · 71 views · Jun ’25