I was generating models using the following code:
import Foundation
import CreateML
import TabularData
import CoreML
....
func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
    var returnmodels = [MLLinearRegressor]()
    // Note: `i`, `pms`, `colstouse`, and `result` are currently unused,
    // and `0...numberofmodels` runs numberofmodels + 1 times.
    var result = 0.0
    for i in 0...numberofmodels {
        let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
        do {
            let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)
            returnmodels.append(tm)
        } catch let error as NSError {
            print("Error: \(error.localizedDescription)")
        }
    }
    return returnmodels
}
This worked absolutely fine on Sonoma, but after upgrading the OS to 15.3.1 it does absolutely nothing.
I get no error messages; the code just pauses. If I watch CPU usage, as soon as it hits the line let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict), the CPU usage drops to 0%.
What am I doing wrong? Is there a flag I need to set somewhere in Xcode?
This is on an M1 MacBook Pro
Any help would be greatly appreciated
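For completeness, here is what I think the loop was meant to do, with the ModelParameters actually passed in and colstouse used as the feature columns. This is only a sketch of my intent (I'm assuming the DataFrame initializer accepts featureColumns: and parameters:, like the MLDataTable one), not a fix for the hang:
import CreateML
import TabularData

// Sketch only: passes the validation parameters and feature columns that the
// original loop builds but never uses.
func makeOneModel(columntopredict: String, training: DataFrame, colstouse: [String]) throws -> MLLinearRegressor {
    let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
    return try MLLinearRegressor(trainingData: training,
                                 targetColumn: columntopredict,
                                 featureColumns: colstouse,
                                 parameters: pms)
}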
Hi,
Are there rules around using Foundation Models:
In a background task/session?
Concurrently, i.e. a bunch simultaneously using Swift Concurrency?
I couldn't find this in the docs (sorry if I missed it), so I'm wondering what's supported and what the best practice is here; a sketch of the kind of concurrent use I mean is below.
In case it matters, my primary platform is Vision Pro (so, M2).
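Here is a minimal sketch of what I mean, assuming each request gets its own session so prompts don't share a transcript; the function and prompts are placeholders, not from the docs:
import FoundationModels

// Sketch: run several independent sessions concurrently with a task group.
// Results come back in completion order, not prompt order.
func respondToAll(_ prompts: [String]) async throws -> [String] {
    try await withThrowingTaskGroup(of: String.self) { group in
        for prompt in prompts {
            group.addTask {
                // One session per concurrent request.
                let session = LanguageModelSession()
                let response = try await session.respond(to: prompt)
                return response.content
            }
        }
        var results: [String] = []
        for try await result in group {
            results.append(result)
        }
        return results
    }
}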
It's been 4-5 days and my Image Playground has been showing "Downloading Support for Image Playground".
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents an image playground in a fixed size. Especially on macOS, this modal is rather small and doesn't use the available space efficiently.
Is there a way to make the modal bigger, or allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system...
I guess this question applies to other system-provided sheets like the photo picker as well.
Is it possible to train an Adaptor for the Foundation Models to produce Generable output? If so what would the response part of the training data need to look like? Presumably, under the hood, the model is outputting JSON (or some other similar structure) that can be decoded to a Generable type. Would the response part of the training data for an Adaptor need to be in that structured format?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
When I use the FoundationModels framework to generate long text, it always hits an error:
"Passing along Client rate limit exceeded, try again later in response to ExecuteRequest"
and stops generating.
For example, for the prompt "Write a long story", it will almost certainly hit that error after 17 seconds of generation.
do {
    let session = LanguageModelSession()
    let prompt: String = "Write a long story"
    let response = try await session.respond(to: prompt)
} catch {}
If possible, I want to know how to prevent that error or at least how to handle it.
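For now, this is roughly how I'm planning to handle it; just a sketch, and the idea of backing off and retrying (or chunking the prompt) is my guess rather than documented behaviour:
import FoundationModels

// Sketch: catch the framework's GenerationError separately so a rate-limit style
// failure can be inspected and retried instead of being silently swallowed.
func respondWithHandling(_ session: LanguageModelSession, prompt: String) async -> String? {
    do {
        let response = try await session.respond(to: prompt)
        return response.content
    } catch let error as LanguageModelSession.GenerationError {
        // Inspect the concrete case here; back off and retry later,
        // or break the request into shorter prompts.
        print("Generation failed: \(error)")
        return nil
    } catch {
        print("Unexpected error: \(error)")
        return nil
    }
}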
Hello, is it allowed to use the Foundation Models framework in a submission app for WWDC26? The thing is that Apple Intelligence needs to be enabled in Settings, so does that mean the jury won't be able to fully use the app's AI functionality?
After more than a year since the announcement, I'm still unable to get this feature working properly and wondering if there are known issues or missing implementation details.
Current Setup:
Device: iPhone 16 Pro Max
iOS: 26 beta 3
Development: Tested on both Xcode 16 and Xcode 26
Implementation: Following the official documentation examples
The Problem:
Semantic search simply doesn't work. Lexical search functions normally, but enabling semantic search produces identical results to having it disabled. It's as if the feature isn't actually processing.
Error Output (Xcode 26):
[QPNLU][qid=5] Error Domain=com.apple.SpotlightEmbedding.EmbeddingModelError Code=-8007 "Text embedding generation timeout (timeout=100ms)"
[CSUserQuery][qid=5] got a nil / empty embedding data dictionary
[CSUserQuery][qid=5] semanticQuery failed to generate, using "(false)"
In Xcode 16, there are no error messages at all - the semantic search just silently fails.
Missing Resources:
The sample application mentioned during the WWDC24 presentation doesn't appear to have been released, which makes it difficult to verify if my implementation is correct.
Would really appreciate any guidance or clarification on the current status of this feature. Has anyone in the community successfully implemented this?
I couldn't find information about this in the documentation. Could someone clarify if this API is available and how to access it?
I'd love to add a feature based on FoundationModels to the Mac Catalyst version of my iOS app. Unfortunately I get an error when importing FoundationModels: No such module 'FoundationModels'.
Documentation says Mac Catalyst is supported: https://developer.apple.com/documentation/foundationmodels
I can create iOS builds using the FoundationModels framework without issues.
Hope this will be fixed soon!
Config:
Xcode 26.0 beta (17A5241e)
macOS 26.0 Beta (25A5279m)
15-inch, M4, 2025 MacBook Air
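In the meantime I'm guarding the import so the Catalyst target keeps building; this is just a sketch of the conditional-compilation approach, and foundationModelsIsUsable() is a name I made up, not an Apple API:
#if canImport(FoundationModels)
import FoundationModels
#endif

// Sketch: compile the feature only where the module exists, then gate it at runtime.
func foundationModelsIsUsable() -> Bool {
    #if canImport(FoundationModels)
    if #available(iOS 26.0, macOS 26.0, macCatalyst 26.0, *) {
        return true
    }
    #endif
    return false
}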
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I have images that I annotated with a polygon (really a simple trapezoid, so 4 points). I have been trying and trying but can't get Create ML to work. I am trying Object Detection. I am not a real programmer, so I would greatly appreciate some guidance to help get this model created. I think I made a Detectron2 model and tried to get that converted into the mlmodel I need for Xcode, but had trouble there as well. Thank you. My current annotation is below, followed by a sketch of how I understand the expected format.
{
  "annotation": "IMG_1803.JPG",
  "annotations": [
    {
      "label": "court",
      "coordinates": {
        "x": [187, 3710, 2780, 929],
        "y": [1689, 1770, 478, 508]
      }
    }
  ]
},
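From the Create ML docs, my understanding is that Object Detection wants one axis-aligned bounding box per object (the box center x and y plus width and height, in pixels) rather than four corner points, so the trapezoid would need to be collapsed into a box. Here is a small sketch of that conversion using the numbers above; this is my assumption about the format, so please correct me if it's wrong:
import CoreGraphics

// Sketch: collapse the 4 polygon corners into one bounding box
// (center x/y, width, height) using the values from the annotation above.
let xs: [CGFloat] = [187, 3710, 2780, 929]
let ys: [CGFloat] = [1689, 1770, 478, 508]

let width = xs.max()! - xs.min()!        // 3523
let height = ys.max()! - ys.min()!       // 1292
let centerX = xs.min()! + width / 2      // 1948.5
let centerY = ys.min()! + height / 2     // 1124
print("coordinates: x=\(centerX), y=\(centerY), width=\(width), height=\(height)")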
Topic:
Machine Learning & AI
SubTopic:
Create ML
I'm seeing this error a lot in my console log of my iPhone 15 Pro (Apple Intelligence enabled):
com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1
Are there entitlements / permissions I need to enable in Xcode that I forgot to do?
Code example
Here's how I'm initializing the language model session:
private func setupLanguageModelSession() {
    if #available(iOS 26.0, *) {
        let instructions = """
        my instructions
        """
        do {
            languageModelSession = try LanguageModelSession(instructions: instructions)
            print("Foundation Models language model session initialized")
        } catch {
            print("Error creating language model session: \(error)")
            languageModelSession = nil
        }
    } else {
        print("Device does not support Foundation Models (requires iOS 26.0+)")
        languageModelSession = nil
    }
}
Just tried to write a very simple test using Foundation Models, but it gave me an error like this:
"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"
The simple code is listed below:
let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: (response)")
So what's the problem of this one?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
In an App Playground Xcode project there is no Targets menu in the UI. When I try to use the model, it says the model is not in scope. When I did this in a regular project, Xcode automatically generated a Swift class and there were no errors because the project had a target, but I see no place to add a target in an App Playground.
For example:
I have a list of to-dos, each with a unique id (a GUID). I want to feed them to the LLM model and have the model rewrite the items so they start with an action verb.
I'd like to get them back and identify which rewritten item corresponds to which original item. I obviously can't compare the text, as it has changed.
I've tried passing the original GUIDs in with each to-do, but the extra GUID characters pollute the input and confuse the model.
I've tried numbering them in order and adding an originalSortOrder field to my generable type, but it doesn't work reliably.
Any suggestions?
I could do them one at a time, but I also have a use case where I'm asking for them to be organized in sections, and while I've instructed the model not to rename anything, it still happens. It's just all very nondeterministic.
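For context, this is roughly the shape I've been trying; a minimal sketch only, and the type and field names (RewrittenToDo, originalIndex) are mine, not from the docs:
import FoundationModels

// Sketch of the numbering approach: feed the items as "0: ...", "1: ...",
// keep the GUIDs out of the prompt, and map back to them by index afterwards.
@Generable
struct RewrittenToDo {
    @Guide(description: "The zero-based index of the original to-do item this rewrite corresponds to")
    var originalIndex: Int

    @Guide(description: "The to-do item rewritten to start with an action verb")
    var text: String
}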
Hi everyone,
I've been struggling for a few weeks to integrate my Core ML Image Classifier model into my .swiftpm project, and I’m hoping someone can help.
Here’s what I’ve done so far:
I converted my .mlmodel file to .mlmodelc manually via the terminal.
In my Package.swift file, I tried both "copy" and "process" options for the resource.
The issues I’m facing:
When using "process", Xcode gives me the error:
"multiple resources named 'coremldata.bin' in target 'AppModule'."
When using "copy", the app runs, but the model doesn’t work, and the terminal shows:
"A valid manifest does not exist at path: .../Manifest.json."
I even tried creating a Manifest.json manually to test, but this led to more errors, such as:
"File format version must be in the form of major.minor.patch."
"Failed to look up root model."
To check if the problem was specific to my model, I tested other Core ML models in the same setup, but none of them worked either.
I feel stuck and unsure of how to resolve these issues. Any guidance or suggestions would be greatly appreciated. Thanks in advance! :)
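For reference, this is roughly how I'm declaring the resource at the moment; a minimal sketch only, with placeholder names (ImageClassifier.mlmodelc, AppModule), and the comment about "process" reflects my understanding of why the duplicate coremldata.bin error appears:
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "AppModule",
    targets: [
        .executableTarget(
            name: "AppModule",
            resources: [
                // "copy" keeps the compiled .mlmodelc folder intact; "process"
                // flattens it, which seems to be where the duplicate
                // coremldata.bin error comes from.
                .copy("Resources/ImageClassifier.mlmodelc")
            ]
        )
    ]
)
The model is then loaded with Bundle.module.url(forResource: "ImageClassifier", withExtension: "mlmodelc") and MLModel(contentsOf:).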
Topic:
Machine Learning & AI
SubTopic:
Core ML
Tags:
Swift Packages
Swift Student Challenge
Swift Playground
Core ML
WWDC25: Combine Metal 4 machine learning and graphics
The session demonstrated a way to combine a neural network with the graphics pipeline directly in the shaders, using texture compression as an example. However, there is no mention of which ML technique is used to compress the texture.
Can anyone point me to some well-known model(s) for this particular use case shown at WWDC25?
Based on the documentation, it appears that MLTensor can be used to perform tensor operations using the ANE (Apple Neural Engine) by wrapping the tensor operations with withMLTensorComputePolicy with a MLComputePolicy initialized with MLComputeUnits.cpuAndNeuralEngine (it can also be initialized with MLComputeUnits.all to let the OS spread the load between the Neural Engine, GPU and CPU).
However, when using the Instruments app, it appears that the tensor operations never get executed on the Neural Engine.
It would be helpful if someone could guide me on the correct way to ensure that the Neural Engine is used to perform the tensor operations (not as part of a Core ML model file).
Based on this example, I've created some simple code to try it:
import Foundation
import CoreML

print("Starting...")

let semaphore = DispatchSemaphore(value: 0)

Task {
    await withMLTensorComputePolicy(.init(MLComputeUnits.cpuAndNeuralEngine)) {
        let v1 = MLTensor([1.0, 2.0, 3.0, 4.0])
        let v2 = MLTensor([5.0, 6.0, 7.0, 8.0])
        let v3 = v1.matmul(v2)
        await v3.shapedArray(of: Float.self) // is 70.0

        let m1 = MLTensor(shape: [2, 3], scalars: [
            1, 2, 3,
            4, 5, 6
        ], scalarType: Float.self)
        let m2 = MLTensor(shape: [3, 2], scalars: [
            7, 8,
            9, 10,
            11, 12
        ], scalarType: Float.self)
        let m3 = m1.matmul(m2)
        let result = await m3.shapedArray(of: Float.self) // is [[58, 64], [139, 154]]

        // Supports broadcasting
        let m4 = MLTensor(randomNormal: [3, 1, 1, 4], scalarType: Float.self)
        let m5 = MLTensor(randomNormal: [4, 2], scalarType: Float.self)
        let m6 = m4.matmul(m5)
        print("Done")
        return result
    }
    semaphore.signal()
}
semaphore.wait()
Here's what I get in the Instruments app:
Notice how the Neural Engine line shows no usage.
I've run this test on an M1 Max MacBook Pro.
If users turn off Apple Intelligence, what happens to apps that leverage the Foundation Models framework?
Would a popup automatically be shown telling the user to enable Apple Intelligence if they have the toggle turned off? Just curious how that experience looks for both developers and users.
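For what it's worth, the pattern I'm planning to use is to check availability up front rather than rely on any system prompt; just a sketch, aiFeatureIsAvailable() is a name I made up, and I'm assuming the SystemLanguageModel availability API behaves as described at WWDC:
import FoundationModels

// Sketch: gate the AI feature on model availability.
func aiFeatureIsAvailable() -> Bool {
    switch SystemLanguageModel.default.availability {
    case .available:
        return true
    case .unavailable(.appleIntelligenceNotEnabled):
        // Apple Intelligence is switched off for this user; surface your own
        // hint to enable it in Settings.
        return false
    case .unavailable(_):
        // Device not eligible, model still downloading, and so on.
        return false
    }
}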
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hello everybody!
I’m encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn’t occur on Beta 1 or Beta 2 using the same codebase.
Reproduction Context
I’m developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic.
To isolate the issue, I opened the official WWDC sample project from "Adding intelligent app features with generative models", and the same guardrailViolation error appeared without any modifications.
Simplified Working Example
I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:
import Foundation
import Playgrounds
import FoundationModels

@Generable
struct GeneableLandmark {
    @Guide(description: "Name of the landmark to visit")
    var name: String
}

final class LandmarkSuggestionGenerator {
    var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
    private var session: LanguageModelSession

    init() {
        self.session = LanguageModelSession(
            instructions: Instructions {
                """
                generate a list of landmarks to visit
                """
            }
        )
    }

    func createLandmarkSuggestion(location: String) async throws {
        let stream = session.streamResponse(
            generating: GeneableLandmark.self,
            options: GenerationOptions(sampling: .greedy),
            includeSchemaInPrompt: false
        ) {
            """
            Generate a list of landmarks to visit in \(location)
            """
        }
        for try await partialResponse in stream {
            landmarkSuggestion = partialResponse
        }
    }
}
#Playground {
    let generator = LandmarkSuggestionGenerator()
    Task {
        do {
            try await generator.createLandmarkSuggestion(location: "New york")
            if let suggestion = generator.landmarkSuggestion {
                print("Suggested landmark: \(suggestion)")
            } else {
                print("No suggestion generated.")
            }
        } catch {
            print("Error generating landmark suggestion: \(error)")
        }
    }
}
But as soon as I use the Sample ItineraryPlanner:
#Playground {
    // Example landmark for demonstration
    let exampleLandmark = Landmark(
        id: 1,
        name: "San Francisco",
        continent: "North America",
        description: "A vibrant city by the bay known for the Golden Gate Bridge.",
        shortDescription: "Iconic Californian city.",
        latitude: 37.7749,
        longitude: -122.4194,
        span: 0.2,
        placeID: nil
    )
    let planner = ItineraryPlanner(landmark: exampleLandmark)
    Task {
        do {
            try await planner.suggestItinerary(dayCount: 3)
            if let itinerary = planner.itinerary {
                print("Suggested itinerary: \(itinerary)")
            } else {
                print("No itinerary generated.")
            }
        } catch {
            print("Error generating itinerary: \(error)")
        }
    }
}
The error pops up:
Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))
Based on my tests:
The error may not be tied to structure complexity (since more nested structures work)
The issue may stem from the tools or prompt content used inside the ItineraryPlanner
The guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas
Thank you in advance for your help. Let me know if more details or reproducible code samples are needed - I’m happy to provide them.
Best,
Sasha Morozov