Hi,
I have been trying to integrate a Core ML model into Xcode. The model was built with TensorFlow layers. I have included both the model info and a link to the app repository. I am mainly confused about why it's not working: it seems to only ever print the result for case 1 (there are 4 labeled cases: case 0, case 1, case 2, and case 3).
If someone could help me work through this error, that would be great!
Here is the link to the repository: https://github.com/ShivenKhurana1/Detect-to-Protect-App
The file with the model code is SecondView.swift, and here is the model info:
Input: conv2d_input-> image (color 224x224)
Output: Identity -> MultiArray (Float32 1x4)
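For reference, here's roughly how I'm trying to read the 1x4 output and pick the winning case (a sketch only; SecondModel is a placeholder for the Xcode-generated model class, and the generated property/method names follow the input/output names above but may differ slightly):
import CoreML
import CoreVideo

// Sketch: SecondModel stands in for the auto-generated model class.
func classify(pixelBuffer: CVPixelBuffer) throws -> Int {
    let model = try SecondModel(configuration: MLModelConfiguration())
    // The generated prediction method is named after the input feature "conv2d_input".
    let output = try model.prediction(conv2d_input: pixelBuffer)
    // The generated property is named after the output feature "Identity" (MLMultiArray, 1x4).
    let scores = output.Identity
    var bestIndex = 0
    var bestScore = -Float.infinity
    for i in 0..<scores.count {
        let value = scores[i].floatValue
        if value > bestScore {
            bestScore = value
            bestIndex = i
        }
    }
    return bestIndex  // 0...3, matching case 0 through case 3
}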
Problem:
We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."
TAMM v0.2.0 contains export/constants.py with: BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Findings:
Adapter Export Process:
In export_fmadapter.py:
def write_metadata(...):
    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # Hardcoded value
The Compatibility Check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model
- If they don't match: compatibleAdapterNotFound error
- The error doesn't reveal the expected signature
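For reference, here is roughly how we load the adapter on device and hit the error (a minimal sketch; it assumes the FoundationModels Adapter(fileURL:) initializer and a local .fmadapter file):
import Foundation
import FoundationModels

// Sketch only: load a locally exported adapter and run one request against it.
func testAdapter(at url: URL) async {
    do {
        let adapter = try SystemLanguageModel.Adapter(fileURL: url)
        let model = SystemLanguageModel(adapter: adapter)
        let session = LanguageModelSession(model: model)
        let response = try await session.respond(to: "Hello")
        print(response.content)
    } catch {
        // This is where we see "Adapter is not compatible with the
        // current system base model." / compatibleAdapterNotFound on beta 4.
        print("Adapter load failed: \(error)")
    }
}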
Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with updated signature?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Core ML
Create ML
tensorflow-metal
Apple Intelligence
Hi Apple team,
When using AppShortcutsProvider, I hit the hard limit:
Each app may have at most 10 App Shortcuts.
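For context, this is the kind of provider I mean (a sketch; StartWorkoutIntent and LogMealIntent are placeholders for my real intents):
import AppIntents

// Sketch: an AppShortcutsProvider that would benefit from more than 10 entries.
struct MyAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartWorkoutIntent(),
            phrases: ["Start a workout in \(.applicationName)"],
            shortTitle: "Start Workout",
            systemImageName: "figure.run"
        )
        AppShortcut(
            intent: LogMealIntent(),
            phrases: ["Log a meal in \(.applicationName)"],
            shortTitle: "Log Meal",
            systemImageName: "fork.knife"
        )
        // ...up to 10 total are accepted; the 11th hits the hard limit.
    }
}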
This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration.
Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities?
AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful.
Thanks!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Shortcuts
App Intents
Apple Intelligence
Is Private Cloud Compute allowed, or are only on-device Foundation Models allowed? Thanks!
When I initialize a session with an existing transcript using this initializer:
public convenience init(model: SystemLanguageModel = .default, guardrails: LanguageModelSession.Guardrails = .default, tools: [any Tool] = [], transcript: Transcript)
The tools get ignored: the model never uses them. When inspecting the transcript, I can see that the instructions entry does not have any tools available to it.
I tried this with transcripts that already include an instructions entry and with ones that don't; both yield the same result.
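For reference, a minimal sketch of my repro (the Tool conformance below is reconstructed from memory and stands in for my real tool; the transcript comes from a previous session):
import FoundationModels

// Placeholder tool standing in for my real one.
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Returns the current weather for a city."

    @Generable
    struct Arguments {
        let city: String
    }

    func call(arguments: Arguments) async throws -> String {
        "Sunny and 24°C in \(arguments.city)"
    }
}

func resumeSession(with transcript: Transcript) async throws {
    let session = LanguageModelSession(
        model: .default,
        tools: [WeatherTool()],
        transcript: transcript
    )
    // Expected: the model can call WeatherTool when relevant.
    // Observed: the tool is never invoked, and the transcript's
    // instructions entry lists no tools.
    let response = try await session.respond(to: "What's the weather in Paris?")
    print(response.content)
}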
Is this the intended behavior / am I missing something here?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm working on my Swift Student Challenge submission and developing a Vision framework-based image classifier. I want to ensure I'm following best practices for training data and the guidelines for which images I can use to train my image classifier.
What types of images can I use for training my model?
Are there specific image databases or resources recommended by Apple that are safe to use for Swift Student Challenge submissions?
I'm currently considering images from Wikipedia along with my own images.
I'm new to Swift and was hoping Playgrounds would support loading adapters. When I tried, I got a permissions error; I assume that's because the adapter isn't in the project and Playgrounds don't allow access outside the project?
A tutorial and some sample code would be helpful.
Also, some benchmarks on how long it's expected to take would be useful. Selfishly, I'm on an M2 Mac mini.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I am trying to create a slightly different version of the content tagging code in the documentation:
https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel/usecase/contenttagging
In the playground I am getting an "Inference Provider crashed with 2:5" error.
I have no idea what that means or how to address the error. Any assistance would be appreciated.
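For context, here is roughly the shape of what I'm running in the playground (a sketch; TopicTags is my own @Generable type, not something taken from the documentation):
import FoundationModels
import Playgrounds

// My own output type for the tagging variation.
@Generable
struct TopicTags {
    @Guide(description: "The main topics mentioned in the text")
    let topics: [String]
}

#Playground {
    let session = LanguageModelSession(
        model: SystemLanguageModel(useCase: .contentTagging)
    )
    let response = try await session.respond(
        to: "Spent the weekend hiking the coast and trying the local seafood.",
        generating: TopicTags.self
    )
    print(response.content.topics)
}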
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm working on generating images using Image Playground. The code works fine for other styles but fails when using an external provider (documented here: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2). I don't see any other requirements mentioned in the documentation. Has anyone else encountered a similar issue?
The error message is also not very helpful; it simply states that the creation failed.
Note: I have ChatGPT Plus enabled, and image generation using the ChatGPT styles works fine in the Image Playground app.
Here's the relevant code snippet:
do {
    // Create an image creator and describe the concept to generate.
    let creator = try await ImageCreator()
    let concept = ImagePlaygroundConcept.text("Love")

    // Request a single image in the external-provider style.
    let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
    for try await image in images {
        // Handle image
        break
    }
} catch {
    // Handle error
}
I'm using the iOS 26 RC, and when I print creator.availableStyles, the external provider isn't included:
[ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I'm working on localizing my prompts to support multiple languages, and in some cases my prompts have string-interpolated Generable objects, for example:
"Given the following workout routine: \(routine), suggest one additional exercise to complement it."
In the Strings dictionary, I'm only able to select String, Int or Double parameters using %@ and %lld.
Has anyone found a way to accomplish this?
Introduced in the Keynote were the 3D Lock Screen images with the kangaroo:
https://9to5mac.com/wp-content/uploads/sites/6/2025/06/3d-lock-screen-2.gif
I can't see any mention of whether this effect is available to developers via an API for converting flat 2D photos into the same 3D-feeling images.
Does anyone know if there is an API?
Topic:
Machine Learning & AI
SubTopic:
General
Hi everyone 👋
I'd like to use coremltools to see how well a model performs on a remote device as part of a CI/CD pipeline. According to the Core ML Tools "Debugging and Performance Utilities" guide, remote devices must be in a "connected" state in order for coremltools to install the ModelRunner application.
The devices in our system are in a "paired" state, and I'm unable to set them as "connected." The only way I know to connect a device is to physically plug it into a computer and open Xcode. I don't have physical access to the devices in the CI/CD system, and the host computer that interacts with them doesn't have Xcode installed.
Here are some questions I've been looking into and would love some help answering:
Has anyone managed to use the coremltools performance utilities in a similar system?
Can you put a device in a "connected" state if you don't have physical access to the device and if you only have access to Xcode command line tools and not the Xcode app?
Is it at all possible to install the coremltools ModelRunner application on a "paired" device, for example, by manually building the app and installing it with devicectl? Would other utilities, such as the MLModelBenchmarker work as expected if the app is installed this way?
Thank you!
I've been successfully integrating the Foundation Models framework into my healthcare app using structured generation with @Generable schemas. While my initial testing (20-30 iterations) shows promising results, I need to validate consistency and reliability at scale before production deployment.
Question
Is there a recommended approach for automated, large-scale testing of Foundation Models responses?
Specifically, I'm looking to:
Automate 1000+ test iterations with consistent prompts and structured schemas
Measure response consistency across identical inputs
Validate structured output reliability (proper schema adherence, no generation failures)
Collect performance metrics (TTFT, TPS) for optimization
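For concreteness, here is a rough sketch of the kind of harness I have in mind (ActivityRecommendation is a hypothetical stand-in for my real schema; not production code):
import FoundationModels

// Hypothetical @Generable type standing in for my real recommendation schema.
@Generable
struct ActivityRecommendation {
    let title: String
    let rationale: String
}

// Run N identical structured requests, count generation failures, and
// record per-iteration latency as a crude performance signal.
func runConsistencyCheck(iterations: Int) async -> (succeeded: Int, failed: Int) {
    var succeeded = 0
    var failed = 0
    let clock = ContinuousClock()
    for iteration in 0..<iterations {
        // Open question: fresh session per iteration vs. reusing one.
        let session = LanguageModelSession(model: .default)
        let start = clock.now
        do {
            _ = try await session.respond(
                to: "Suggest one low-impact activity for a user recovering from a knee injury.",
                generating: ActivityRecommendation.self
            )
            succeeded += 1
        } catch {
            failed += 1
        }
        print("iteration \(iteration): \(clock.now - start)")
    }
    return (succeeded, failed)
}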
Specific Questions
Framework Limitations: Are there any undocumented rate limits or thermal throttling considerations for rapid session creation/destruction?
Performance Tools: Can Xcode's Foundation Models Instrument be used programmatically, or only through Instruments UI?
Automation Integration: Any recommendations for integrating with testing frameworks?
Session Reuse: Is it better to reuse a single LanguageModelSession or create fresh sessions for each test iteration?
Use Case Context
My wellness app provides medically safe activity recommendations based on user health profiles. The Foundation Models framework processes health context and generates structured recommendations for exercises, nutrition, and lifestyle activities. Given the safety implications of providing health-related guidance, I need rigorous validation to ensure the model consistently produces appropriate, well-formed recommendations across diverse user scenarios and health conditions.
Has anyone in the community built similar large-scale testing infrastructure for Foundation Models? Any insights on best practices or potential pitfalls would be greatly appreciated.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm trying to use Apple's new Visual Intelligence API to recommend content through screenshot image search. The problem I've encountered is that the SemanticContentDescriptor labels are either completely empty or very misleading, making it impossible to query for similar content in my app. Even the closest match was inaccurate, returning a single label ["cardigan"] for a Supreme T-shirt.
I see other apps, like Etsy, using this API, and I'm wondering whether they're using the input pixel buffer to query for similar content rather than the labels.
If anyone has had a similar experience or knows something that wasn't called out in the documentation, please let me know! Thanks.
On macOS Tahoe 26.0 and iOS 26.0 (23A5287g) on a physical device (not the Simulator), with Xcode 26.0 beta 3 (17A5276g):
I followed the tutorial "Testing your asset packs locally". To start the test server, I use this command line:
xcrun ba-serve --host 192.168.0.109 test.aar
The terminal displays:
Loading asset packs…
Loading the asset pack at "test.aar"…
Listening on port 63125…
Choose an identity in the panel to continue.
Listening on port 63125…
When I run the project, Xcode reports an error: "Download failed: Could not connect to the server." If I visit https://192.168.0.109:63125 in Safari on the iPhone, the page displays "Hello, world!"
There are too few error messages in both of the issues above, so I have no idea what the specific cause is. I hope someone can offer some guidance. Best regards.
This is my Manifest.json:
{
    "assetPackID": "testVideoAssetPack",
    "downloadPolicy": {
        "prefetch": {
            "installationEventTypes": ["firstInstallation", "subsequentUpdate"]
        }
    },
    "fileSelectors": [
        {
            "file": "video/test.mp4"
        }
    ],
    "platforms": [
        "iOS"
    ]
}
While training a text classifier model with a few thousand samples completes in seconds, when using 100,000 or 1 million samples, CreateML's training time increases exponentially (to hours or days). During these hours/days, GPU usage is low and almost every CPU core is idle. When using the Swift APIs for model training, resource utilization does not increase. I'm using Xcode 16.2, macOS 15.2 on either an M2 Ultra 64 GB or an M3 Max 48 GB laptop (both using built-in SSD with ~500 GB free) running no other applications.
Is there a setting I've missed to allow training to take over more of my computing resources? Is this expected of CreateML (i.e., when looking to exploit a larger corpus, I should move to other tooling)? I'd love to speed up my iteration cycle time.
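For reference, this is roughly the Swift-API training run I'm comparing against (the file path and column names are placeholders):
import CreateML
import Foundation

// Minimal text-classifier training run via the Create ML Swift API.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/train.csv"))
let classifier = try MLTextClassifier(
    trainingData: data,
    textColumn: "text",
    labelColumn: "label"
)
print(classifier.trainingMetrics.classificationError)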
Topic:
Machine Learning & AI
SubTopic:
Create ML
When trying to open an encrypted Core ML model file on a system with SIP disabled, the error message is:
Failed to generate key request for <...> with error: -42187
This message should state that SIP is disabled and needs to be enabled.
I can’t seem to find a way to include an image when prompting the new on-device model in Xcode, even though Apple explicitly states that the model was trained and tested with image data (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates).
Has anyone managed to get this working, or are VLM-style capabilities simply not exposed yet?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
When I am doing an uncached load of CoreML model on ANE, I received this warning in Xcode console
Type of hiddenStates in function main's I/O contains unknown strides. Using unknown strides for MIL tensor buffers with unknown shapes is not recommended in E5ML. Please use row_alignment_in_bytes property instead. Refer to https://e5-ml.apple.com/more-info/memory-layouts.html for more information.
However, the web link does not seem to be working. Where can I find more information about this, and how can I fix it?
Topic:
Machine Learning & AI
SubTopic:
Core ML
I am using the iPhone 17 Pro simulator included with Xcode 26.0.1, and my Mac is running macOS 26. When I started the simulator for the first time, I got the "Ready for Apple Intelligence" notification, but when I access Image Playground in my app it says it is not available on this iPhone. Is there any way to get it working in the simulator?