Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Creating .mlmodel with Create ML Components
I have rewatched the WWDC22 session a few times, but I still don't fully understand how to get an .mlmodel file out of Create ML Components. The banana-ripeness example is nice, but what needs to be added to actually produce an .mlmodel? Is there full sample code anywhere for this kind of modular project? The code below is from https://developer.apple.com/videos/play/wwdc2022/10019

import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
    static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // File name example: banana-5.jpg
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ Float($0)! })

        let (training, validation) = data.randomSplit(by: 0.8)
        let transformer = try await estimator.fitted(to: training, validateOn: validation)

        try estimator.write(transformer, to: parametersURL)
        return transformer
    }
}

I have tried running it in a macOS command-line app and in SwiftUI, but the most I got as output was a .pkg containing "pipeline.json, parameters, optimizer.json, optimizer".
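A minimal sketch of the missing step, assuming the Core ML export that CreateMLComponents offers for supported transformers via an export(to:) method (the exact method name, signature, and supported input/output types are an assumption here and should be checked against the current CreateMLComponents documentation):

import CoreImage
import CreateMLComponents

// Assumption: the fitted transformer supports Core ML export via export(to:),
// as referenced in the WWDC22 session; verify the exact API in the docs.
let transformer = try await ImageRegressor.train()
let modelURL = URL(fileURLWithPath: "/Users/me/Desktop/BananaRipeness.mlmodel") // hypothetical path
try transformer.export(to: modelURL)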
3 replies · 0 boosts · 622 views · Mar ’25
Does ImageRequestHandler(data:) include depth data from AVCapturePhoto?
Hi all, I'm capturing a photo using AVCapturePhotoOutput, and I've set:

let photoSettings = AVCapturePhotoSettings()
photoSettings.isDepthDataDeliveryEnabled = true

Then I create the handler like this:

let data = photo.fileDataRepresentation()
let handler = try ImageRequestHandler(data: data, orientation: .right)

Now I'm wondering: if depth data delivery is enabled, is it actually included and used when I pass the Data to ImageRequestHandler? Or do I need to explicitly pass the depth data using the other initializer?

let handler = try ImageRequestHandler(
    cvPixelBuffer: photo.pixelBuffer!,
    depthData: photo.depthData,
    orientation: .right
)

In short: does ImageRequestHandler(data:) make use of embedded depth info from AVCapturePhoto.fileDataRepresentation(), or are the pixel buffer and explicit depth data required? Thanks for any clarification!
1 reply · 0 boosts · 267 views · Jul ’25
Getting FoundationModels running in the Simulator
I have a Mac (M4 MacBook Pro) running the Tahoe 26.0 beta and the Xcode beta. I can run code that uses the LLM in a #Preview { }, but when I try to run the same code in the simulator, I get the 'device not ready' error and see the following in the Settings app. Is there anything I can do to get the simulator past this point so I can test against Apple's LLM?
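Not a fix, but a small sketch that can help narrow down which unavailability reason the simulator is reporting, assuming the availability API from the FoundationModels documentation (the reason names in the comment are the documented cases as I recall them, so treat them as an assumption):

import FoundationModels

// Prints why the on-device model is unavailable (e.g. deviceNotEligible,
// appleIntelligenceNotEnabled, modelNotReady) instead of just "device not ready".
let model = SystemLanguageModel.default
if case .unavailable(let reason) = model.availability {
    print("Model unavailable: \(reason)")
} else {
    print("Model is available")
}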
3 replies · 0 boosts · 367 views · Jul ’25
Crash when testing Speech sample app with FoundationModels on macOS 26.0 beta and iOS 26.0 beta
Hello, I am testing the sample project provided here: Bringing advanced speech-to-text capabilities to your app. On both macOS 26.0 beta and iOS 26.0 beta, the app crashes immediately on launch with a dyld "Symbol not found" error related to FoundationModels.framework. It feels like the sample may have been tested primarily on newer Apple Silicon devices, as I am seeing consistent crashes on an Intel MacBook and on an older iPhone. I would appreciate any insight, confirmation, or guidance on whether this is a known limitation or if there is a workaround. Is it planned to be resolved soon?

Environment
macOS: MacBook Pro (Intel), 2 GHz Quad-Core Intel Core i5, Intel Iris Plus Graphics 1536 MB, 16 GB 3733 MHz LPDDR4X, macOS Tahoe Version 26.0 Beta (25A5338b)
iOS: iPhone 11 (Model Number MHDD3HN/A), iOS 26.0
Xcode: Version 26.0 beta 3 (17A5276g)

Crash (macOS)
Abort signal received. Excerpt from the crash dump:
dyld`__abort_with_payload:
0x7ff80e3ad4a0 <+0>:  movl $0x2000209, %eax
0x7ff80e3ad4a5 <+5>:  movq %rcx, %r10
0x7ff80e3ad4a8 <+8>:  syscall
-> 0x7ff80e3ad4aa <+10>: jae 0x7ff80e3ad4b4
Console:
dyld[9819]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /Users/userx/Library/Developer/Xcode/DerivedData/SwiftTranscriptionSampleApp-*/Build/Products/Debug/SwiftTranscriptionSampleApp.app/Contents/MacOS/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels

Crash (iOS)
Abort signal received. Excerpt from the crash dump:
dyld`__abort_with_payload:
0x18f22b4b0 <+0>: mov x16, #0x209
0x18f22b4b4 <+4>: svc #0x80
-> 0x18f22b4b8 <+8>: b.lo 0x18f22b4d8
Console:
dyld[2080]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /private/var/containers/Bundle/Application/.../SwiftTranscriptionSampleApp.app/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/FoundationModels

Questions
Is this crash expected on Intel Macs and older iPhone models with the beta SDKs?
Is there an official statement on whether macOS 26.x releases support Intel, or does support exist only until macOS 26.1?
Any suggested workarounds for testing this sample project on current hardware?
Is this a known limitation of the 26.0 beta, and if so, should we expect a fix in 26.0 or only in subsequent releases?

Attaching screenshots for reference. Thank you in advance.
4 replies · 0 boosts · 556 views · Aug ’25
Foundation Models inside of DeviceActivityReport?
Pretty much as per the title, and I suspect I know the answer. Given that Foundation Models run on device, is it possible to use the Foundation Models framework inside a DeviceActivityReport extension? I've been tinkering with it, and all I get is errors and "Sandbox restrictions". Am I missing something? It seems like a missed trick not to be able to combine on-device AI/ML with other frameworks.
1 reply · 0 boosts · 465 views · Oct ’25
Using Past Versions of Foundation Models As They Progress
Has Apple made any commitment to versioning the on-device Foundation Models? What if you build a feature that works great on 26.0, but the model or guardrails change in 26.1 and break your feature? Is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version, as in all of the server-based LLM provider APIs? If not, it sounds risky to build on.
7 replies · 0 boosts · 367 views · Jul ’25
UI Guidelines for Apple Intelligence?
Are there any guidelines for using Foundation Models to generate text for users in response to some canned queries? Should we use a special icon or text to let the user know that Apple Intelligence is generating the text? Should there be a disclaimer like "Apple Intelligence can make mistakes, please check for accuracy"?
1 reply · 0 boosts · 677 views · Sep ’25
missing CreateML frameworks
I have reinstalled everything, including the command line tools, but the Create ML frameworks fail to install. I need the framework so that I can train my auto-categorization model, which predicts a category based on descriptions, and I want to use revision 4. Please advise on how I should proceed.
4 replies · 0 boosts · 803 views · Mar ’25
App Shortcuts Limit (10 per app) — Can This Be Increased?
Hi Apple team, When using AppShortcutsProvider, I hit the hard limit: Each app may have at most 10 App Shortcuts. This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration. Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities? AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful. Thanks!
1 reply · 0 boosts · 190 views · Jun ’25
Inference Provider crashed with 2:5
I am trying to create a slightly different version of the content tagging code in the documentation: https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel/usecase/contenttagging In the playground I am getting an "Inference Provider crashed with 2:5" error. I have no idea what that means or how to address the error. Any assistance would be appreciated.
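For comparison, a minimal sketch of the baseline content-tagging flow that the documentation page describes; the Tags type and its single field are illustrative assumptions rather than the exact sample code:

import FoundationModels
import Playgrounds

// Illustrative output schema; not the exact shape from the documentation sample.
@Generable
struct Tags {
    @Guide(description: "Up to three main topics mentioned in the text")
    let topics: [String]
}

#Playground {
    // Dedicated content-tagging use case of the on-device model.
    let model = SystemLanguageModel(useCase: .contentTagging)
    let session = LanguageModelSession(model: model)

    let response = try await session.respond(
        to: "Went hiking with friends and cooked dinner together afterwards.",
        generating: Tags.self
    )
    print(response.content.topics)
}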
1 reply · 0 boosts · 539 views · Jul ’25
Converting GenerableContent to JSON string
Hey, I receive GenerableContent as follows:

let response = try await session.respond(to: "", schema: generationSchema)

It wraps GeneratedJSON, which seems to be private. What is the best way to get a string / raw value out of it? I noticed it could theoretically be accessed via transcriptEntries, but that's not ideal.
6 replies · 0 boosts · 229 views · Jul ’25
Difference between compiling a Model using CoreML and Swift-Transformers
Hello, I was successfully able to compile TKDKid1000/TinyLlama-1.1B-Chat-v0.3-CoreML using Core ML, and it's working well. However, I'm now trying to compile the same model using Swift Transformers. With the limited documentation available in the swift-chat and Hugging Face repositories, I'm finding it difficult to understand the correct process for compiling a model via Swift Transformers. I attempted the following approach, but I'm fairly certain it's not the recommended or correct method. Could someone guide me on the proper way to compile and use models like TinyLlama with Swift Transformers? Any official workflow, example, or best practice would be very helpful. Thanks in advance! This is the approach I have used:

import Foundation
import CoreML
import Tokenizers

@main
struct HopeApp {
    static func main() async {
        print(" Running custom decoder loop...")
        do {
            let tokenizer = try await AutoTokenizer.from(pretrained: "PY007/TinyLlama-1.1B-Chat-v0.3")
            var inputIds = tokenizer("this is the test of the prompt")
            print("🧠 Prompt token IDs:", inputIds)
            let model = try float16_model(configuration: .init())
            let maxTokens = 30
            for _ in 0..<maxTokens {
                let input = try MLMultiArray(shape: [1, 128], dataType: .int32)
                let mask = try MLMultiArray(shape: [1, 128], dataType: .int32)
                for i in 0..<inputIds.count {
                    input[i] = NSNumber(value: inputIds[i])
                    mask[i] = 1
                }
                for i in inputIds.count..<128 {
                    input[i] = 0
                    mask[i] = 0
                }
                let output = try model.prediction(input_ids: input, attention_mask: mask)
                let logits = output.logits // shape: [1, seqLen, vocabSize]
                let lastIndex = inputIds.count - 1
                let lastLogitsStart = lastIndex * 32003 // vocab size = 32003
                var nextToken = 0
                var maxLogit: Float32 = -Float.greatestFiniteMagnitude
                for i in 0..<32003 {
                    let logit = logits[lastLogitsStart + i].floatValue
                    if logit > maxLogit {
                        maxLogit = logit
                        nextToken = i
                    }
                }
                inputIds.append(nextToken)
                if nextToken == 32002 { break }
                let partialText = try await tokenizer.decode(tokens: inputIds)
                print(partialText)
            }
        } catch {
            print("❌ Error: \(error)")
        }
    }
}
1 reply · 0 boosts · 192 views · Jun ’25
How can I give Foundation Models access to my documents
I would like to write a macOS application that uses on-device AI (FoundationModels). I don't understand how, practically, to give it access to my documents, photos, or contacts so that I can ask it a question like: "Find the document that talks about this topic." Do I need to manually retrieve the data and provide it in the form of a prompt? Or is FoundationModels capable of accessing it on its own? Thanks
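A minimal sketch of the first option (retrieving the data yourself and putting it in the prompt); the framework does not browse files, photos, or contacts on its own, so the document path and the pre-selection step here are assumptions for illustration:

import Foundation
import FoundationModels

// Assumption: you have already located a candidate document (via Spotlight, your own
// index, or a user selection) and have permission to read it. The path is hypothetical.
func documentMentionsTopic(_ topic: String) async throws -> String {
    let documentURL = URL(fileURLWithPath: "/Users/me/Documents/notes/meeting-notes.txt")
    let documentText = try String(contentsOf: documentURL, encoding: .utf8)

    let session = LanguageModelSession()
    let response = try await session.respond(to: """
        Here is a document:

        \(documentText)

        Does this document talk about \(topic)? Answer yes or no, \
        and quote the relevant sentence if yes.
        """)
    return response.content
}

The Tool API in FoundationModels is the other direction worth looking at, where the model calls back into your code to fetch data on demand instead of you front-loading it into the prompt.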
1 reply · 0 boosts · 590 views · Oct ’25
Automated Testing and Performance Validation for Foundation Models Framework
I've been successfully integrating the Foundation Models framework into my healthcare app using structured generation with @Generable schemas. While my initial testing (20-30 iterations) shows promising results, I need to validate consistency and reliability at scale before production deployment.

Question
Is there a recommended approach for automated, large-scale testing of Foundation Models responses? Specifically, I'm looking to:
Automate 1000+ test iterations with consistent prompts and structured schemas
Measure response consistency across identical inputs
Validate structured output reliability (proper schema adherence, no generation failures)
Collect performance metrics (TTFT, TPS) for optimization

Specific Questions
Framework limitations: Are there any undocumented rate limits or thermal-throttling considerations for rapid session creation/destruction?
Performance tools: Can Xcode's Foundation Models instrument be used programmatically, or only through the Instruments UI?
Automation integration: Any recommendations for integrating with testing frameworks?
Session reuse: Is it better to reuse a single LanguageModelSession or create a fresh session for each test iteration?

Use Case Context
My wellness app provides medically safe activity recommendations based on user health profiles. The Foundation Models framework processes health context and generates structured recommendations for exercises, nutrition, and lifestyle activities. Given the safety implications of providing health-related guidance, I need rigorous validation to ensure the model consistently produces appropriate, well-formed recommendations across diverse user scenarios and health conditions.

Has anyone in the community built similar large-scale testing infrastructure for Foundation Models? Any insights on best practices or potential pitfalls would be greatly appreciated.
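Not an official answer, but a rough sketch of the kind of plain async loop one could start from; the Recommendation type is a hypothetical stand-in for the real @Generable schema, failures are counted from thrown errors, and timing is coarse wall-clock latency rather than the TTFT/TPS the Instruments template reports:

import Foundation
import FoundationModels

// Hypothetical stand-in for the app's real structured schema.
@Generable
struct Recommendation {
    @Guide(description: "A short, medically safe activity suggestion")
    let activity: String
}

func runConsistencyTest(iterations: Int, prompt: String) async -> (failures: Int, averageSeconds: Double) {
    // One reused session; move this inside the loop to compare fresh-session behavior.
    // A reused session accumulates a transcript, which can eventually hit the context limit.
    let session = LanguageModelSession()
    let clock = ContinuousClock()
    var failures = 0
    var elapsed: Duration = .zero

    for _ in 0..<iterations {
        let start = clock.now
        do {
            _ = try await session.respond(to: prompt, generating: Recommendation.self)
        } catch {
            failures += 1   // guardrail violations, generation/schema failures, etc.
        }
        elapsed += start.duration(to: clock.now)
    }

    let seconds = Double(elapsed.components.seconds) + Double(elapsed.components.attoseconds) / 1e18
    return (failures, seconds / Double(iterations))
}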
1 reply · 0 boosts · 207 views · Jul ’25
Model w/ Guardrails Disabled Still Frequently Refuses to Summarize Text
Foundation Models are driving me up the wall. My use case: a news app where I want to summarize news articles. Sounds like a perfect fit for the "no guardrails" mode for text-to-text transformations added in beta 5... and it's true, I don't get guardrails exceptions anymore. But now the model itself frequently refuses to summarize things, which in a way is even worse: I have to parse the output text to figure out whether it failed instead of getting an exception. I mostly worked around that with my system instructions, but the refusals still make it really tough to use. I instructed the model to tell me why it failed when that happens. Examples of various refusals for news articles from major sources:

"The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26."
"The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines."
"the article contains unsafe content."
"The article is biased and opinionated and focuses on the author's opinion." (this is despite the instructions specifically asking for a neutral summary; I am asking it not to use bias in the output, but it still refuses)

I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine, so right now I have to do two passes on much of the content, since I never know which one will work. Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place; some of these stories are a bummer) is near worthless.
8 replies · 0 boosts · 934 views · Sep ’25
Foundation model adapter assets are invalid
I've tried creating a LoRA adapter using the example dataset and scripts that are part of adapter_training_toolkit_v26_0_0 (the latest available) on macOS 26 beta 6.

import SwiftUI
import FoundationModels
import Playgrounds

#Playground {
    // The absolute path to your adapter.
    let localURL = URL(filePath: "/Users/syl/Downloads/adapter_training_toolkit_v26_0_0/train/test-lora.fmadapter")

    // Initialize the adapter by using the local URL.
    let adapter = try SystemLanguageModel.Adapter(fileURL: localURL)

    // An instance of the system language model using your adapter.
    let customAdapterModel = SystemLanguageModel(adapter: adapter)

    // Create a session and prompt the model.
    let session = LanguageModelSession(model: customAdapterModel)
    let response = try await session.respond(to: "hello")
}

I get an "Adapter assets are invalid" error. I've added the entitlements. Is adapter_training_toolkit_v26_0_0 up to date?
2 replies · 0 boosts · 241 views · Aug ’25
The asset pack with the ID “testVideoAssetPack” couldn’t be looked up: Could not connect to the server.
On macOS Tahoe 26.0 and iOS 26.0 (23A5287g) on a real device (not the simulator), with Xcode 26.0 beta 3 (17A5276g), I am following the tutorial Testing your asset packs locally.

I start the test server with this command line: xcrun ba-serve --host 192.168.0.109 test.aar

The terminal shows:
Loading asset packs…
Loading the asset pack at "test.aar"…
Listening on port 63125…
Choose an identity in the panel to continue.
Listening on port 63125…

When I run the project, Xcode reports an error: Download failed: Could not connect to the server.

If I visit https://192.168.0.109:63125 in Safari on the iPhone, the page displays "Hello, world!"

There are too few error messages in both of the above cases, and I have no idea what the specific causes are. I hope someone can offer some guidance. Best regards.

This is my Manifest.json:

{
    "assetPackID": "testVideoAssetPack",
    "downloadPolicy": {
        "prefetch": {
            "installationEventTypes": ["firstInstallation", "subsequentUpdate"]
        }
    },
    "fileSelectors": [
        { "file": "video/test.mp4" }
    ],
    "platforms": ["iOS"]
}
1 reply · 0 boosts · 361 views · Jul ’25
BNNS random number generator for Double value types
I generate an array of random floats using the code shown below. However, I would like to do this with Double instead of Float. Are there any BNNS random number generators for double values, something like BNNSRandomFillUniformDouble? If not, is there a way I can convert a BNNSNDArrayDescriptor from float to double?

import Accelerate

let n = 100_000_000

let result = Array<Float>(unsafeUninitializedCapacity: n) { buffer, initCount in
    var descriptor = BNNSNDArrayDescriptor(data: buffer, shape: .vector(n))!
    let randomGenerator = BNNSCreateRandomGenerator(BNNSRandomGeneratorMethodAES_CTR, nil)
    BNNSRandomFillUniformFloat(randomGenerator, &descriptor, 0, 1)
    initCount = n
}
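Not a direct answer about BNNS, but one workaround sketch: generate the floats as above and widen them to Double afterwards with vDSP's single-to-double conversion (vDSP_vspdp). This only widens the values, so it does not give genuinely double-precision randomness; whether that is acceptable depends on your use case.

import Accelerate

// Widen an existing [Float] to [Double] in one vectorized pass.
func widenToDouble(_ floats: [Float]) -> [Double] {
    Array<Double>(unsafeUninitializedCapacity: floats.count) { buffer, initCount in
        floats.withUnsafeBufferPointer { src in
            vDSP_vspdp(src.baseAddress!, 1, buffer.baseAddress!, 1, vDSP_Length(floats.count))
        }
        initCount = floats.count
    }
}

let doubles = widenToDouble(result)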
3 replies · 0 boosts · 136 views · Jun ’25