Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic (each entry lists replies, boosts, views, and latest activity)
NLTagger.requestAssets hangs indefinitely
When calling NLTagger.requestAssets with some languages, it hangs indefinitely, both in the simulator and on a device. This happens consistently for certain languages, such as Greek; an example call is NLTagger.requestAssets(for: .greek, tagScheme: .lemma). Other languages, such as French, return immediately. I captured some logs from Console and found what look like repeated attempts to download the asset. I would expect the call to eventually terminate, either loading the asset or failing with an error (a timeout workaround is sketched after this entry).
1 reply · 0 boosts · 175 views · May ’25
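A minimal sketch of one way to keep the app responsive while this happens, assuming the standard NaturalLanguage completion-handler API; the wrapper function and the 30-second timeout are illustrative choices, not part of the original post, and this does not fix the underlying download loop.

import Foundation
import NaturalLanguage

/// Requests tagger assets but gives up after a timeout instead of waiting forever.
/// Returns the asset result, or nil if the request did not complete in time.
func requestAssetsWithTimeout(language: NLLanguage,
                              scheme: NLTagScheme,
                              timeout: TimeInterval = 30) async -> NLTagger.AssetsResult? {
    await withCheckedContinuation { (continuation: CheckedContinuation<NLTagger.AssetsResult?, Never>) in
        let lock = NSLock()
        var didResume = false

        func resumeOnce(_ value: NLTagger.AssetsResult?) {
            lock.lock(); defer { lock.unlock() }
            guard !didResume else { return }
            didResume = true
            continuation.resume(returning: value)
        }

        NLTagger.requestAssets(for: language, tagScheme: scheme) { result, error in
            resumeOnce(error == nil ? result : nil)
        }

        // Treat a hang as a failure after the timeout instead of blocking forever.
        DispatchQueue.global().asyncAfter(deadline: .now() + timeout) {
            resumeOnce(nil)
        }
    }
}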
Max tokens for Foundation Models
Do we know what a safe max token limit is? After some iterating, I have come to believe 4096 might be the limit on device. Could you help me out by answering any of these questions: Is 4096 the correct limit? Do all devices have the same limit? Will the limit change over time or by device? The errors I get when going over the limit do not seem to say "hey, you are over the limit," so it's just by trial and error that I figure these issues out (a sketch of probing the limit follows this entry). Thanks for the fun new toys. Regards, Rob
2 replies · 0 boosts · 262 views · Jul ’25
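Since the thread does not surface a documented way to query the limit, a minimal sketch of probing it empirically by catching the context-window error; the exceededContextWindowSize case matches the error named in the next post, but treat the exact error shape as an assumption and adjust the catch if your SDK spells it differently.

import FoundationModels

/// Sends a prompt and reports whether it fit in the model's context window.
func promptFits(_ prompt: String) async -> Bool {
    let session = LanguageModelSession()
    do {
        _ = try await session.respond(to: prompt)
        return true
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        return false   // the prompt plus instructions exceeded the window
    } catch {
        return false   // some other failure; inspect the error in practice
    }
}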
Keep getting exceededContextWindowSize with Foundation Models
I'm a bit new to LLMs and to Foundation Models. My understanding is that there is a token limit of around 4K. I want to process the contents of files, which may be quite large. I first tried going the Tool route, but that didn't work out, so I then tried manually chunking the text to keep things under the limit. It mostly works, except that every now and then it'll exceed the limit. This happens even when the chunks are less than 100 characters. The instructions themselves are about 500 characters, but overall that is still well below 1000 characters per prompt, which, in my limited understanding, should not amount to 4K tokens. Any ideas on what is going on here? (A chunking sketch follows this entry.)
2 replies · 0 boosts · 312 views · Aug ’25
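One likely explanation (an assumption, not confirmed in the thread) is that a LanguageModelSession keeps the whole transcript, so tokens accumulate across turns even when each individual prompt is tiny. A minimal sketch that starts a fresh session per chunk; the chunk size, instructions, and function name are illustrative.

import FoundationModels

/// Summarizes a large document by splitting it into chunks and using a fresh
/// session per chunk, so earlier turns don't accumulate in the context window.
/// Chunking by character count is a rough stand-in for real token counting.
func summarizeInChunks(_ fullText: String, chunkSize: Int = 2_000) async throws -> [String] {
    var summaries: [String] = []
    var remaining = fullText[...]

    while !remaining.isEmpty {
        let chunk = String(remaining.prefix(chunkSize))
        remaining = remaining.dropFirst(chunk.count)

        // New session per chunk: nothing from previous chunks is carried over.
        let session = LanguageModelSession(instructions: "Summarize the text you are given in two sentences.")
        let response = try await session.respond(to: chunk)
        summaries.append(response.content)
    }
    return summaries
}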
ML models failed to decrypt and load
We have suddenly encountered a serious issue: our local ML models are no longer being decrypted. Everything was set up according to the guide at https://developer.apple.com/documentation/coreml/generating-a-model-encryption-key and had been working in production, but yesterday we started receiving the following error:

Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID.}

We haven’t changed anything in our code. This started spontaneously affecting users of the release version as of yesterday. It also no longer works locally; we receive the same error the moment the autogenerated function is called:

class func load(configuration: MLModelConfiguration = MLModelConfiguration(), completionHandler handler: @escaping (Swift.Result<ZingPDModel, Error>) -> Void)

I assume that I can generate a new key through Xcode, integrate it in place of the old one, and it might start working again. However, this won’t help existing users until they update the app. Could the issue be on Apple’s infrastructure side? (A retry sketch follows this entry.)
1 reply · 0 boosts · 365 views · Jul ’25
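While waiting on an answer about the server side, a minimal sketch of retrying the generated loader with backoff, since the decryption key fetch goes over the network and a transient failure is at least possible; ZingPDModel is the class named in the post, while the retry count and delays are illustrative assumptions.

import CoreML
import Foundation

/// Tries to load the encrypted model, retrying a few times with backoff in case
/// the key-fetch failure is transient. Surfaces the last error otherwise.
func loadModelWithRetry(attempts: Int = 3,
                        completion: @escaping (Result<ZingPDModel, Error>) -> Void) {
    func attempt(_ remaining: Int, delay: TimeInterval) {
        ZingPDModel.load(configuration: MLModelConfiguration()) { result in
            switch result {
            case .success(let model):
                completion(.success(model))
            case .failure where remaining > 1:
                // The key fetch is a network call; wait and try again.
                DispatchQueue.global().asyncAfter(deadline: .now() + delay) {
                    attempt(remaining - 1, delay: delay * 2)
                }
            case .failure(let error):
                completion(.failure(error))
            }
        }
    }
    attempt(attempts, delay: 2)
}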
Metal GPU Work Won't Stop
Is there any way to stop GPU work that has been scheduled using Metal? Long shader calculations don't stop when the application is stopped in Xcode; they continue to take up GPU time and affect the display. Why is this functionality not available when Swift Tasks can be cancelled? (A work-splitting sketch follows this entry.)
2 replies · 0 boosts · 784 views · Feb ’25
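There is no general way to kill already-committed GPU work from the CPU, so a common mitigation (a suggestion, not from the thread) is to split a long computation into many short command buffers and stop committing new ones once a flag flips. A minimal sketch; the pipeline, buffer, chunk count, and dispatch sizes are placeholders for real setup.

import Metal

/// Runs a long compute job as a series of small command buffers so the CPU can
/// stop scheduling further GPU work when `shouldCancel` returns true.
func runCancellableCompute(device: MTLDevice,
                           pipeline: MTLComputePipelineState,
                           data: MTLBuffer,
                           chunks: Int,
                           shouldCancel: () -> Bool) {
    guard let queue = device.makeCommandQueue() else { return }

    for chunk in 0..<chunks {
        if shouldCancel() { break }          // no new GPU work once cancelled

        guard let commandBuffer = queue.makeCommandBuffer(),
              let encoder = commandBuffer.makeComputeCommandEncoder() else { break }

        encoder.setComputePipelineState(pipeline)
        encoder.setBuffer(data, offset: 0, index: 0)
        var chunkIndex = UInt32(chunk)
        encoder.setBytes(&chunkIndex, length: MemoryLayout<UInt32>.size, index: 1)

        // A small grid per chunk keeps each command buffer short-lived on the GPU.
        let threads = MTLSize(width: 1024, height: 1, depth: 1)
        let group = MTLSize(width: pipeline.threadExecutionWidth, height: 1, depth: 1)
        encoder.dispatchThreads(threads, threadsPerThreadgroup: group)
        encoder.endEncoding()

        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()   // simplest form; completion handlers also work
    }
}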
Will Apple Intelligence Support Third-Party LLMs or Custom AI Agent Integrations?
Hi everyone, I’m an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence. I was impressed by Apple’s integration of ChatGPT and its privacy-first design, but I’m curious to know:

• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?

I’m particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front end for more complex autonomous systems (possibly local or cloud-based). This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases while aligning with Apple’s system architecture. Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today? Thanks in advance for any thoughts or pointers!
4 replies · 0 boosts · 498 views · May ’25
Is there an API to check if a Core ML compiled model is already cached?
Hello Apple Developer Community, I'm investigating Core ML model-loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load". This seems to impact user experience with unnecessary delays. Question: does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached by the system? If such an API exists, we'd like to use it in pre-loading decision logic: only perform a background pre-load when the model isn't cached. Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated! (A pre-warming sketch follows this entry.)
2 replies · 0 boosts · 240 views · May ’25
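I'm not aware of a public API that reports whether a compiled model is already cached, so treat the following as a workaround sketch rather than an answer: load the model once in the background shortly after launch or update, so the expensive uncached load does not happen on the user's critical path. The function name and the timing heuristic are illustrative.

import CoreML
import Foundation

/// Pre-warms a compiled model in the background and reports how long the load
/// took; a long duration is only a rough signal that the load was uncached.
func prewarmModel(at compiledModelURL: URL) async {
    let start = CFAbsoluteTimeGetCurrent()
    do {
        _ = try await MLModel.load(contentsOf: compiledModelURL, configuration: MLModelConfiguration())
        let elapsed = CFAbsoluteTimeGetCurrent() - start
        // Heuristic only: there is no public "is it cached?" query, so we just
        // log the cost; subsequent loads should be fast once this completes.
        print("Model pre-warm took \(elapsed) s")
    } catch {
        print("Pre-warm failed: \(error)")
    }
}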
Localizing prompts that has string interpolated generable objects
I'm working on localizing my prompts to support multiple languages, and some of my prompts have string-interpolated Generable objects, for example: "Given the following workout routine: \(routine), suggest one additional exercise to complement it." In the Strings dictionary, I'm only able to select String, Int, or Double parameters using %@ and %lld. Has anyone found a way to accomplish this? (A workaround sketch follows this entry.)
1 reply · 0 boosts · 367 views · Jul ’25
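A minimal workaround sketch, assuming the Generable type can be rendered to a plain String first; once it is a String, the localized format string can use %@ as usual. The WorkoutRoutine type, its properties, and the key name are hypothetical.

import Foundation
import FoundationModels

@Generable
struct WorkoutRoutine {
    @Guide(description: "Exercises in the routine")
    var exercises: [String]
}

/// Renders the Generable to plain text, then interpolates that String into a
/// localized format string, so the Strings catalog only ever sees a %@.
func localizedPrompt(for routine: WorkoutRoutine) -> String {
    // Hypothetical textual rendering; any deterministic description works.
    let routineText = routine.exercises.joined(separator: ", ")

    // The catalog entry is "Given the following workout routine: %@, suggest
    // one additional exercise to complement it."
    return String(
        format: NSLocalizedString("prompt.suggestExercise", comment: "LLM prompt"),
        routineText
    )
}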
Issue with #Playground and Foundation Model
Hi all, I’m encountering an issue when trying to run Apple Foundation Models in a blank project targeting iOS 26. Below are the details:

Xcode: Latest version with iOS 26 SDK
macOS: macOS 26 Tahoe (installed on main disk)
Mac: 16” MacBook Pro with M2 Pro chip
Apple Intelligence: Available and functional on this machine

Problem: I created a new blank iOS project, set the deployment target to iOS 26, and ran the following minimal code using Foundation Models. However, I get no response at all in the output, not even an error. The app runs, but the model does not produce any output.

#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Tell me a story")
}

Then, I tried to catch an error with this code:

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    } catch {
        print("Failed to get response:", error)
    }
    print("This line never gets executed")
}

And got these results: I’ve done further testing and discovered something important. I tried running the Code Along sample project, and there the #Playground macro worked without issues. The only significant difference I noticed was the Canvas run destination:

In my original project, I was using iPhone 16 Pro (iOS 26) as the run target in Canvas. Apple Intelligence was enabled on the simulator, but no response was returned when executing the prompt.
In the sample project, the Canvas was running on My Mac. I attempted to match that setup, but at first my destination was My Mac (Designed for iPad), which still didn’t work. The macro finally executed properly once I switched to My Mac (AppKit).

So the question is: does it seem that, for now, Foundation Models and the #Playground macro only run correctly when the canvas or destination is set to “My Mac (AppKit)”? (An availability-check sketch follows this entry.)
7 replies · 0 boosts · 533 views · Jul ’25
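When the model silently returns nothing, one thing worth checking before prompting is whether the system model is actually available on the selected run destination. A minimal sketch, assuming the SystemLanguageModel availability API from FoundationModels; the exact unavailability reasons may differ between SDK seeds.

import FoundationModels
import Playgrounds

#Playground {
    // Check availability first so a missing or disabled model shows up as a
    // printed reason instead of silent, missing output.
    switch SystemLanguageModel.default.availability {
    case .available:
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Tell me a story")
        print(response.content)
    case .unavailable(let reason):
        print("Model unavailable:", reason)
    }
}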
Is it allowed for an iOS app to download machine learning model files (e.g., .mlmodel, .onnx) from a separate cloud server?
Hello, I am developing an iOS app that uses machine learning models. To improve accuracy and user experience, I would like to download .mlmodel files (compiled and compressed as zip files) from our own server after the app is installed, and use them for inference within the app. No executable code, scripts, or dynamic libraries will be downloaded; only model data files are used. According to App Store Review Guideline 2.5.2, I understand that apps may not download or execute code which introduces or changes features or functionality. In this case, are compiled and zip-compressed .mlmodel files considered "data" rather than "code", and is it allowed to download and use them in the app? If there are any restrictions or best practices related to this, please let me know. Thank you. (The loading mechanics are sketched after this entry.)
1 reply · 0 boosts · 378 views · Jul ’25
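Setting the review-policy question aside (only App Review can answer that part), the mechanics of using a downloaded model are straightforward. A minimal sketch assuming the server delivers an uncompiled .mlmodel file at a known URL; the unzip step and the endpoint are left out as assumptions about your setup.

import CoreML
import Foundation

/// Downloads an .mlmodel file, compiles it on device, and loads it for inference.
func downloadAndLoadModel(from modelURL: URL) async throws -> MLModel {
    // 1. Download the raw model file to a temporary location.
    let (tempFile, _) = try await URLSession.shared.download(from: modelURL)

    // 2. Compile it on device; this produces an .mlmodelc directory.
    let compiledURL = try await MLModel.compileModel(at: tempFile)

    // 3. In a real app, move compiledURL somewhere permanent (e.g. Application
    //    Support) before loading, since temporary locations can be purged.

    // 4. Load the compiled model.
    return try await MLModel.load(contentsOf: compiledURL, configuration: MLModelConfiguration())
}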
Model w/ Guardrails Disabled Still Frequently Refuses to Summarize Text
Foundation Models are driving me up the wall. My use case: a news app where I want to summarize news articles. Sounds like a perfect use for the added-in-beta-5 "no guardrails" mode for text-to-text transformations... and it's true, I don't get guardrails exceptions anymore, but now the model itself frequently refuses to summarize things, which in a way is even worse: I have to parse the output text to figure out if it failed instead of getting an exception. I mostly worked that out with my system instructions, but the refusals still make it really tough to use. I instructed the model to tell me why it failed if that happens. Examples of various refusals for news articles from major sources:

"The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26."
"The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines."
"the article contains unsafe content."
"The article is biased and opinionated and focuses on the author's opinion." (this is despite the instructions specifically asking for a neutral summary; I ask it to not use bias in the output, but it still refuses)

I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine, so right now I have to do two passes on much of the content since I never know which one will work (a fallback sketch follows this entry). Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place, some of these stories are a bummer) is near worthless.
8 replies · 0 boosts · 934 views · Sep ’25
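A minimal sketch of the two-pass fallback the post describes: ask for a plain-text summary first, detect a refusal via an instructed sentinel, and only then retry with a @Generable structured summary. The sentinel string and the ArticleSummary type are illustrative, and the beta-specific "no guardrails" session setup is intentionally omitted since its exact API may change.

import FoundationModels

@Generable
struct ArticleSummary {
    @Guide(description: "A neutral three-sentence summary of the article")
    var text: String
}

/// First pass: free-form text with a sentinel prefix on refusal.
/// Second pass: structured generation, which the post reports often succeeds
/// where the plain-text pass refuses.
func summarize(article: String) async throws -> String {
    let instructions = """
    Summarize the article neutrally in three sentences. \
    If you cannot summarize it, reply starting with REFUSED: and a short reason.
    """

    let textSession = LanguageModelSession(instructions: instructions)
    let first = try await textSession.respond(to: article)
    if !first.content.hasPrefix("REFUSED:") {
        return first.content
    }

    // Fallback: structured pass with a fresh session.
    let structuredSession = LanguageModelSession(instructions: instructions)
    let second = try await structuredSession.respond(to: article, generating: ArticleSummary.self)
    return second.content.text
}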
Difference between compiling a Model using CoreML and Swift-Transformers
Hello, I was successfully able to compile TKDKid1000/TinyLlama-1.1B-Chat-v0.3-CoreML using Core ML, and it's working well. However, I’m now trying to compile the same model using Swift Transformers. With the limited documentation available on the swift-chat and Hugging Face repositories, I’m finding it difficult to understand the correct process for compiling a model via Swift Transformers. I attempted the following approach, but I’m fairly certain it’s not the recommended or correct method. Could someone guide me on the proper way to compile and use models like TinyLlama with Swift Transformers? Any official workflow, example, or best practice would be very helpful. Thanks in advance! This is the approach I have used:

import Foundation
import CoreML
import Tokenizers

@main
struct HopeApp {
    static func main() async {
        print(" Running custom decoder loop...")
        do {
            let tokenizer = try await AutoTokenizer.from(pretrained: "PY007/TinyLlama-1.1B-Chat-v0.3")
            var inputIds = tokenizer("this is the test of the prompt")
            print("🧠 Prompt token IDs:", inputIds)
            let model = try float16_model(configuration: .init())
            let maxTokens = 30

            for _ in 0..<maxTokens {
                let input = try MLMultiArray(shape: [1, 128], dataType: .int32)
                let mask = try MLMultiArray(shape: [1, 128], dataType: .int32)

                for i in 0..<inputIds.count {
                    input[i] = NSNumber(value: inputIds[i])
                    mask[i] = 1
                }
                for i in inputIds.count..<128 {
                    input[i] = 0
                    mask[i] = 0
                }

                let output = try model.prediction(input_ids: input, attention_mask: mask)
                let logits = output.logits // shape: [1, seqLen, vocabSize]

                let lastIndex = inputIds.count - 1
                let lastLogitsStart = lastIndex * 32003 // vocab size = 32003
                var nextToken = 0
                var maxLogit: Float32 = -Float.greatestFiniteMagnitude

                for i in 0..<32003 {
                    let logit = logits[lastLogitsStart + i].floatValue
                    if logit > maxLogit {
                        maxLogit = logit
                        nextToken = i
                    }
                }

                inputIds.append(nextToken)
                if nextToken == 32002 { break }

                let partialText = try await tokenizer.decode(tokens: inputIds)
                print(partialText)
            }
        } catch {
            print("❌ Error: \(error)")
        }
    }
}
1 reply · 0 boosts · 193 views · Jun ’25
Ways I can leverage AI when the user asks Siri, "What does this word mean"
I'm the creator of an app that helps users learn Arabic. Inside the app, users can save words, engage in lessons specific to certain grammar concepts, etc. I'm looking for a way for Siri to suggest my app when the user asks to define any Arabic word. There are other questions I would like Siri to suggest my app for, but I figure that's a good start. What framework am I looking for here? I think App Intents? I remember I played with it a bit last year but didn't get far. Any suggestions would be great (an App Intents sketch follows this entry). Would the new Foundation Models framework be any help here?
2 replies · 0 boosts · 137 views · Jun ’25
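App Intents is the framework the post is reaching for. A minimal sketch of an intent plus an App Shortcut phrase; the intent name, the lookUpDefinition helper, and the phrases are hypothetical, and note that an open-ended String parameter cannot be spoken inside the Siri phrase itself, so Siri prompts for the word after the shortcut is invoked.

import AppIntents

/// A hypothetical intent that defines an Arabic word the user asks about.
struct DefineArabicWordIntent: AppIntent {
    static var title: LocalizedStringResource = "Define Arabic Word"

    @Parameter(title: "Word")
    var word: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // lookUpDefinition(_:) is a placeholder for the app's own dictionary.
        let definition = lookUpDefinition(word) ?? "No definition found."
        return .result(dialog: "\(word): \(definition)")
    }
}

/// Registers phrases so Siri can route "define a word in <app name>" here;
/// Siri then asks for the specific word as a follow-up.
struct ArabicAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: DefineArabicWordIntent(),
            phrases: [
                "Define a word in \(.applicationName)",
                "Look up a word in \(.applicationName)"
            ],
            shortTitle: "Define Word",
            systemImageName: "character.book.closed"
        )
    }
}

// Placeholder lookup used by the sketch above.
func lookUpDefinition(_ word: String) -> String? { nil }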
ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
Just tried to write a very simple test using Foundation Models, but it gave me an error like this:

"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference"
"establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"

The simple code is listed below:

let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")

So what's the problem here?
2 replies · 0 boosts · 257 views · Jul ’25