Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic
Each post below is listed with its reply count, boost count, view count, and most recent activity date.

Vision Framework VNTrackObjectRequest: Minimum Valid Bounding Box Size Causing Internal Error (Code=9)
I'm developing a tennis ball tracking feature using the Vision framework in Swift, specifically VNDetectedObjectObservation and VNTrackObjectRequest. Occasionally (but not always), I receive the following runtime error:

Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size}

From my investigation, I suspect the issue arises when the bounding box from the initial VNDetectedObjectObservation is too small. However, Apple's documentation doesn't define the minimum bounding box size that VNTrackObjectRequest considers valid. Could someone clarify:

1. What is the minimum acceptable normalized bounding box width and height that VNTrackObjectRequest expects?
2. Is there any recommended practice or official guidance for validating bounding box size before creating a tracking request?

This information would be extremely helpful for reliably avoiding this internal error. Thank you!
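For illustration, a minimal sketch of the kind of pre-flight check being asked about. The 0.02 threshold and the helper name are assumptions for the example only, not a documented Vision limit:

    import CoreGraphics
    import Vision

    // Hypothetical guard: skip tracking when the initial detection's normalized
    // bounding box is smaller than an assumed minimum. The 0.02 value is an
    // illustration only; the real limit is what this post is asking about.
    let assumedMinimumNormalizedSide: CGFloat = 0.02

    func makeTrackingRequest(from observation: VNDetectedObjectObservation) -> VNTrackObjectRequest? {
        let box = observation.boundingBox
        guard box.width >= assumedMinimumNormalizedSide,
              box.height >= assumedMinimumNormalizedSide else {
            return nil // too small: fall back to re-detection instead of tracking
        }
        let request = VNTrackObjectRequest(detectedObjectObservation: observation)
        request.trackingLevel = .accurate
        return request
    }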
0 replies · 0 boosts · 116 views · Apr ’25

CoreML GPU NaN bug with fused QKV attention on macOS Tahoe
Problem: CoreML produces NaN on the GPU (it works fine on the CPU) when running transformer attention with a fused QKV projection on macOS 26.2.

Root cause: The common::fuse_transpose_matmul optimization pass triggers a Metal kernel bug when sliced tensors feed into matmul(transpose_y=True).

Workaround:

    pipeline = ct.PassPipeline.DEFAULT
    pipeline.remove_passes(['common::fuse_transpose_matmul'])
    mlmodel = ct.convert(model, ..., pass_pipeline=pipeline)

Minimal repro: https://github.com/imperatormk/coreml-birefnet/blob/main/apple_bug_repro.py

Affected: Any ViT/Swin/transformer with fused QKV attention (BiRefNet, etc.)

Has anyone else hit this? I filed an FB report too.
0 replies · 0 boosts · 140 views · 22h

Answers about "apple" now trigger guardrailViolation?
I have been using "apple" to test Foundation Models. I thought this was all local, but today the answer changed: halfway through the explanation, a guardrailViolation error was suddenly raised! And as of yesterday, every reference to "Apple II" or "Apple III" now refers me to consult apple.com. Do Foundation Models connect to the Internet for answers? I'm using beta 3.
3 replies · 0 boosts · 173 views · Jul ’25

Context window 90% of adapter model full after single user prompt
I have been able to train an adapter in Google Colaboratory, and I can start a LanguageModelSession and load it with my adapter. The problem is that after one simple prompt, the context window is 90% full. If I start the session without the adapter, the same simple prompt consumes only 1% of the context window. Has anyone encountered this? I asked Claude and it seems to think my training script needs adjusting; Grok, on the other hand, is (wrongly, I tried) convinced that I just need to tweak some parameters of LanguageModelSession or SystemLanguageModel. Thanks for any tips.
11 replies · 0 boosts · 3.0k views · 1w

AppShortcuts.xcstrings does not translate each invocation phrase option separately, just the first
Due to our minimum iOS version, this is my first time using .xcstrings instead of .strings for AppShortcuts. When using Xcode's "migrate .strings to .xcstrings" context menu option, an .xcstrings catalog is produced that, as expected, has each invocation phrase as a separate string key. However, after compilation, the catalog changes to group all invocation phrases under the first phrase listed for each intent (see attached screenshot). It is possible to hover in blank space on the right and add more translations, but there is no 1:1 key matching requirement to the phrases on the left, nor a requirement that there are the same number of keys in one language as in another. (The lines just happen to align due to my window size.)

What does that mean, practically?

1. Do all sub-phrases in each language in AppShortcuts.xcstrings get processed during compilation, even if there isn't an equivalent phrase key declared in the AppShortcut (e.g., the ja translation has more phrases than the English)? (That makes some logical sense, as these phrases need not be 1:1 across languages.)
2. In the AppShortcut declaration, if I delete all but the top invocation phrase, does nothing change with Siri?
3. Is there something I'm doing incorrectly?

    struct WatchShortcuts: AppShortcutsProvider {
        static var appShortcuts: [AppShortcut] {
            AppShortcut(
                intent: QuickAddWaterIntent(),
                phrases: [
                    "\(.applicationName) log water",
                    "\(.applicationName) log my water",
                    "Log water in \(.applicationName)",
                    "Log my water in \(.applicationName)",
                    "Log a bottle of water in \(.applicationName)",
                ],
                shortTitle: "Log Water",
                systemImageName: "drop.fill"
            )
        }
    }
0 replies · 0 boosts · 307 views · Aug ’25

Image Playground files suddenly not available
My app lets users create images with Image Playground. When the user approves an image, I move it from temporary storage to the Documents directory. With over a year of usage I've created a lot of images. Out of nowhere, the app stopped loading my custom creations from Image Playground, saying it couldn't find the files. It still has the VoiceOver strings I had added for each image and still has the custom categories I assigned them. Debug code that looks in the Documents directory doesn't find them. I downloaded the app's container and only see the images I created as a test after the problem started. But my ~70 MB app is still taking up 300 MB on my iPhone, so it feels like the files are there but not accessible. Is there anything else I can try?
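One further debugging step worth trying, sketched below with standard FileManager APIs only: enumerate the Documents directory (or any other container folder) and print every file with its size, to confirm whether the images are actually still on disk.

    import Foundation

    // Walk the app's Documents directory and report each file's size, to check
    // whether the Image Playground files still exist where they were saved.
    let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    if let enumerator = FileManager.default.enumerator(at: docs,
                                                       includingPropertiesForKeys: [.fileSizeKey]) {
        for case let fileURL as URL in enumerator {
            let size = (try? fileURL.resourceValues(forKeys: [.fileSizeKey]).fileSize) ?? 0
            print(fileURL.path, size)
        }
    }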
2 replies · 0 boosts · 779 views · 3w

Correct JSON format for CoreMotion data for ActivityClassification purposes
I’m developing an activity classifier that I’d like to input using the JSON format of CoreMotion data. I am getting the error: Unable to parse /Users/DewG/Downloads/Testing/Step1/Testing.json. It does not appear to be in JSON record format. A SequenceType of dictionaries is expected I've verified that the format I am using is JSON via various JSON validators, so I am expecting I'm just holding it wrong. Is there an example of a JSON file with CoreMotion data that I can model after?
2 replies · 0 boosts · 170 views · Jul ’25

Memory stride warning when loading CoreML models on ANE
When I do an uncached load of a CoreML model on the ANE, I receive this warning in the Xcode console:

Type of hiddenStates in function main's I/O contains unknown strides. Using unknown strides for MIL tensor buffers with unknown shapes is not recommended in E5ML. Please use row_alignment_in_bytes property instead. Refer to https://e5-ml.apple.com/more-info/memory-layouts.html for more information.

However, the web link does not seem to be working. Where can I find more information about this, and how can I fix it?
1 reply · 0 boosts · 254 views · Jul ’25

Pre-inference AI Safety Governor for FoundationModels (Swift, On-Device)
Greetings, and Happy Holidays. I've been building an on-device AI safety layer called Newton Engine, designed to validate prompts before they reach FoundationModels (or any LLM). I wanted to share v1.3 and get feedback from the community.

The Problem
Current AI safety is post-training: baked into the model, probabilistic, not auditable. When Apple Intelligence ships with FoundationModels, developers will need a way to catch unsafe prompts before inference, with deterministic results they can log and explain.

What Newton Does
Newton validates every prompt pre-inference and returns:
- Phase (0/1/7/8/9)
- Shape classification
- Confidence score
- Full audit trace
If validation fails, generation is blocked. If it passes (Phase 9), the prompt proceeds to the model.

v1.3 Detection Categories (14 total)
- Jailbreak / prompt injection
- Corrosive self-negation ("I hate myself")
- Hedged corrosive ("Not saying I'm worthless, but...")
- Emotional dependency ("You're the only one who understands")
- Third-person manipulation ("If you refuse, you're proving nobody cares")
- Logical contradictions ("Prove truth doesn't exist")
- Self-referential paradox ("Prove that proof is impossible")
- Semantic inversion ("Explain how truth can be false")
- Definitional impossibility ("Square circle")
- Delegated agency ("Decide for me")
- Hallucination-risk prompts ("Cite the 2025 CDC report")
- Unbounded recursion ("Repeat forever")
- Conditional unbounded ("Until you can't")
- Nonsense / low semantic density

Test Results
94.3% catch rate on 35 adversarial test cases (33/35 passed).

Architecture
User Input → [ Newton ] validates the prompt and assigns a Phase.
Phase 9 → [ FoundationModels ] → Response
Phase 1/7/8 → Blocked with explanation

Key Properties
- Deterministic (same input → same output)
- Fully auditable (ValidationTrace on every prompt)
- On-device (no network required)
- Native Swift / SwiftUI
- String Catalog localization (EN/ES/FR)
- FoundationModels-ready (#if canImport)

Code Sample — Validation

    let governor = NewtonGovernor()
    let result = governor.validate(prompt: userInput)

    if result.permitted {
        // Proceed to FoundationModels
        let session = LanguageModelSession()
        let response = try await session.respond(to: userInput)
    } else {
        // Handle block
        print("Blocked: Phase \(result.phase.rawValue) — \(result.reasoning)")
        print(result.trace.summary) // Full audit trace
    }

Questions for the Community
1. Is anyone else building pre-inference validation for FoundationModels?
2. Thoughts on the Phase system (0/1/7/8/9) vs. simple pass/fail?
3. Interest in Shape Theory classification for prompt complexity?
4. Best practices for integrating with LanguageModelSession?

Links
GitHub: https://github.com/jaredlewiswechs/ada-newton
Technical overview: parcri.net

Happy to share more implementation details. I'm looking for feedback, collaborators, and anyone else thinking about deterministic AI safety on-device.
1 reply · 0 boosts · 316 views · Dec ’25

CoreML multifunction model runtime memory cost
Recently I've been trying to deploy some third-party LLMs to Apple devices. The methodology is similar to https://github.com/Anemll/Anemll. The biggest issue I'm having now is runtime memory usage: when there are multiple functions in a model (mlpackage or mlmodelc), the runtime memory usage for the weights is duplicated when I load all of them. Here's the detail:

I created my multifunction mlpackage following https://apple.github.io/coremltools/docs-guides/source/multifunction-models.html, then loaded each of the functions using the generated Swift class:

    let config = MLModelConfiguration()
    config.computeUnits = MLComputeUnits.cpuAndNeuralEngine

    config.functionName = "infer_512"
    let ffn1_infer_512 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)

    config.functionName = "infer_1024"
    let ffn1_infer_1024 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)

    config.functionName = "infer_2048"
    let ffn1_infer_2048 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)

I observed that RAM usage increases linearly as I load each of the functions. Using Instruments, I see that multiple HWX files are generated and loaded, each of which contains all of the weight data.

My understanding of what's happening: the CoreML framework does some MIL-to-MIL preprocessing before further compilation, which includes separating the CPU workload from the ANE workload. The ANE part of each function is moved into a separate MIL file and then compiled into its own HWX file. The problem is that the weight data of these HWX files is duplicated. Since the weight data of LLMs is huge, this causes out-of-memory issues on mobile devices.

The improvement I'm hoping for from Apple: merge the processed MIL files back into one before calling ANECCompile(), so that the weights can be shared. I don't have control over that from user space and I'm not sure whether it is feasible, so I'm asking for help here. Thanks.
1 reply · 0 boosts · 188 views · Apr ’25

CoreML Unified Memory failure/silent exit on long video tasks (M1 Mac 32GB)
Hi Apple Engineers, I am experiencing a potential memory management bug with CoreML on M1 Mac (32GB Unified Memory). When processing long video files (approx. 12,000 frames) using a CoreML execution provider, the system often completes the 'Analysing' phase but fails to transition into 'Processing'. It simply exits silently or hits an import error (scipy). However, if I split the same task into small 20-frame segments, it works perfectly at high speeds (~40 FPS). This suggests the hardware is capable, but there is an issue with memory fragmentation or resource cleanup during long-running CoreML sessions. Is there a way to force a VRAM/Unified Memory flush via CLI, or is this a known limitation for large frame indexing?
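If any part of the pipeline drives Core ML from Swift (the report may well be Python-based, so this may not apply directly), one commonly used pattern for long runs is to drain an autoreleasepool per prediction so per-frame buffers don't accumulate across thousands of frames. A sketch only; the "image" feature name is a placeholder:

    import CoreML
    import CoreVideo

    // Wrap each prediction in an autoreleasepool so transient Core ML buffers are
    // released as the loop progresses rather than piling up over ~12,000 frames.
    func runLongSequence(frames: [CVPixelBuffer], model: MLModel) throws {
        for frame in frames {
            try autoreleasepool {
                let input = try MLDictionaryFeatureProvider(
                    dictionary: ["image": MLFeatureValue(pixelBuffer: frame)])
                _ = try model.prediction(from: input)
            }
        }
    }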
0 replies · 0 boosts · 455 views · Dec ’25

RDMA API Documentation
With the newest version of Tahoe, MLX now supports RDMA. Is there a documentation link describing how to use the libdrma dylib, as well as which functions are available? I'm currently assuming it mostly follows the standard Linux InfiniBand library, but I would like the Apple-specific details.
0 replies · 0 boosts · 231 views · Dec ’25

Error with guardrailViolation and underlyingErrors
Hi, I am a new iOS developer trying to learn to integrate the Apple Foundation Models framework. My setup is:

- Mac M1 Pro
- macOS 26 Beta, Version 26.0 beta 3
- Apple Intelligence & Siri: On

Here is the code:

    func generate() {
        Task {
            isGenerating = true
            output = "⏳ Thinking..."
            do {
                let session = LanguageModelSession(
                    instructions: """
                    Extract time from a message.
                    Example
                    Q: Golfing at 6PM
                    A: 6PM
                    """)
                let response = try await session.respond(to: "Go to gym at 7PM")
                output = response.content
            } catch {
                output = "❌ Error: \(error)"
                print(output)
            }
            isGenerating = false
        }
    }

And I get this error:

guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))

Can you help me get past this?
4 replies · 0 boosts · 552 views · Dec ’25

Foundation Models Adaptors for Generable output?
Is it possible to train an adapter for the Foundation Models framework to produce Generable output? If so, what would the response part of the training data need to look like? Presumably, under the hood, the model outputs JSON (or some similar structure) that can be decoded into a Generable type. Would the response part of the training data for an adapter need to be in that structured format?
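For context, a minimal sketch of the inference side with guided generation, assuming the standard FoundationModels API; whether the adapter's training responses must mirror this structure is exactly the open question, and the type below is purely illustrative:

    import FoundationModels

    // Illustrative Generable type; the question is what serialized form the
    // adapter's training responses would need so the model can fill this in.
    @Generable
    struct TimeExtraction {
        @Guide(description: "The time mentioned in the message, e.g. 7PM")
        var time: String
    }

    func extractTime(from message: String) async throws -> String {
        let session = LanguageModelSession()
        let response = try await session.respond(to: message,
                                                  generating: TimeExtraction.self)
        return response.content.time
    }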
2 replies · 0 boosts · 245 views · Jun ’25

Deterministic AI Safety Governor for iOS — Seeking Feedback on App Review Approach
I've built an iOS app with a novel approach to AI safety: a deterministic, pre-inference validation layer called Newton Engine. Instead of relying on the LLM to self-moderate, Newton validates every prompt BEFORE it reaches the model. It uses shape theory and semantic analysis to detect:

• Corrosive frames (self-harm language patterns)
• Logical contradictions (requests that undermine themselves)
• Delegation attempts (asking AI to make human decisions)
• Jailbreak patterns (prompt injection, role-play escapes)
• Hallucination triggers (requests for fabricated citations)

The system achieves a 96% adversarial catch rate across 847 test cases, with zero false positives on benign prompts.

Key technical details:
• Pure Swift/SwiftUI, no external dependencies
• Runs entirely on-device (no server calls for validation)
• Deterministic (same input always produces same output)
• Auditable (full trace logging for every validation)

I'm preparing to submit to the App Store and wanted to ask:

1. Are there specific App Review guidelines I should reference for AI safety claims?
2. Is there interest from Apple in deterministic governance layers for Apple Intelligence integration?
3. Any recommendations for demonstrating safety compliance during review?

The app is called Ada, and the engine is open source at github.com/jaredlewiswechs/ada-newton. Happy to share technical documentation or discuss the architecture with anyone interested. See: parcri.net
1 reply · 0 boosts · 424 views · 4w

Getting CoreML to run inference on already allocated gpu buffers
I am running some experiments with WebGPU using the wgpu crate in Rust. I have some buffers already allocated on the GPU. Is it possible to use those existing buffers directly as inputs to a predict call in CoreML? I want to avoid GPU-to-CPU transfer time as much as possible. Or is there another way to do something like this? Is this only possible using the latest Tensor object that came out with Metal 4?
0 replies · 0 boosts · 548 views · Nov ’25

jax-metal failing due to incompatibility with jax 0.5.1 or later.
Hello, I am interested in using jax-metal to train ML models on Apple Silicon. I understand this is experimental. After installing jax-metal according to https://developer.apple.com/metal/jax/, my Python code fails with the following error:

JaxRuntimeError: UNKNOWN: -:0:0: error: unknown attribute code: 22 -:0:0: note: in bytecode version 6 produced by: StableHLO_v1.12.1

My issue is identical to the one reported at https://github.com/jax-ml/jax/issues/26968#issuecomment-2733120325, and it is fixed by pinning to jax-metal 0.1.1, jax 0.5.0, and jaxlib 0.5.0. Thank you!
0 replies · 0 boosts · 560 views · Dec ’25

Train adapter with tool calling
The documentation on adapter training lacks any details about training on a dataset that includes tool calling, and the page about tool calling itself only explains how to use it from Swift, without any internal details useful for training. My question is: what should the schema look like for including tool calls in the training dataset?
1 reply · 0 boosts · 251 views · Jun ’25