Keep getting this error:
Prediction Failed: The VNCoreMLTransform request failed
I have tried the Picker for File and for the Photo Library; both give the same result.
I have been debugging the resize to 360x360 but still hit this error.
The model I'm trying to implement was created with CreateMLComponents.
The process follows the WWDC 2022 Banana Ripeness example; I used an index for each .jpg.
Is there a way to solve this, or is the error somewhere in the training of the model?
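For context, the prediction call looks roughly like this (a simplified sketch; RipenessClassifier stands in for my generated model class, and the crop option is just what I'm experimenting with):
import Vision
import CoreML
import UIKit

// Simplified sketch of how I run the request; "RipenessClassifier" is a placeholder for the generated model class.
func classify(_ image: UIImage) throws {
    let coreMLModel = try RipenessClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel) { request, error in
        if let error {
            print("Prediction Failed: \(error.localizedDescription)")
            return
        }
        let results = request.results as? [VNClassificationObservation]
        print(results?.first?.identifier ?? "no result")
    }
    // Let Vision scale/crop the input instead of resizing to 360x360 manually.
    request.imageCropAndScaleOption = .centerCrop

    guard let cgImage = image.cgImage else { return }
    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
    try handler.perform([request])
}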
Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage

@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
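From reading the documentation, my understanding is that SystemLanguageModel lives in the FoundationModels framework and that availability is meant to be checked roughly like this (a sketch of what I'm aiming for, not verified on my setup):
import FoundationModels

@available(iOS 26.0, macOS 26.0, *)
func checkSystemModel() {
    let model = SystemLanguageModel.default
    switch model.availability {
    case .available:
        print("Model is ready")
    case .unavailable(let reason):
        print("Model unavailable: \(reason)")
    }
}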
While building an app with large language model inferencing on device, I got gibberish output. After carefully examining every detail, I found it's caused by the fused scaledDotProductAttention operation. I switched back to the discrete operations and problem solved. To reproduce the bug, please check https://github.com/zhoudan111/MPSGraph_SDPA_bug
Topic:
Machine Learning & AI
SubTopic:
General
Hi, I just upgraded to macOS Tahoe Beta 2 and now I'm getting this error when I try to initialize my Foundation Models' session:
Error Resource (Local Sanitizer Asset) unavailable error.
import FoundationModels
import Playgrounds // the #Playground macro comes from the Playgrounds framework

#Playground {
    let session = LanguageModelSession()
    do {
        let result = try await session.respond(to: "Tell me 3 colors")
        print(result.content)
    } catch {
        print("Error", error)
    }
}
I couldn't find any resource guiding me on how to solve this. Any help/workaround?
Thank you!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
During testing the “Bringing advanced speech-to-text capabilities to your app” sample app demonstrating the use of iOS 26 SpeechAnalyzer, I noticed that the language model for the English locale was presumably already downloaded. Upon checking the documentation of AssetInventory, I found out that indeed, the language model can be preinstalled on the system.
Can someone from the dev team share more info about what assets are preinstalled by the system? For example, can we safely assume that the English language model will almost certainly be already preinstalled by the OS if the phone has the English locale?
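For reference, this is roughly the check-and-download flow from the sample that my question is based on (a simplified sketch; treat the exact calls as approximate):
import Speech

// Roughly the flow from the sample app: only ask the system to download the
// language model if the locale isn't already installed.
func ensureModel(for locale: Locale) async throws {
    let transcriber = SpeechTranscriber(locale: locale,
                                        transcriptionOptions: [],
                                        reportingOptions: [.volatileResults],
                                        attributeOptions: [])

    let installed = await SpeechTranscriber.installedLocales
    guard !installed.map(\.identifier).contains(locale.identifier) else {
        return // Already on the system, possibly preinstalled, which is what I'm asking about.
    }
    if let request = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
        try await request.downloadAndInstall()
    }
}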
Is it possible to expose a custom VirtIO device to a Linux guest running inside a VM, likely using QEMU backed by Hypervisor.framework? The guest would see this device as something like /dev/npu0 and would use a kernel driver + userspace library to submit inference requests.
On the macOS host, these requests would be executed using CoreML, MPSGraph, or BNNS. The results would be passed back to the guest via IPC.
Does macOS allow this kind of "fake" NPU / GPU device?
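The host-side Core ML piece seems straightforward; something like this sketch is what I have in mind (handleInferenceRequest, the input feature name, and the model URL are hypothetical placeholders):
import CoreML

// Hypothetical host-side handler: bytes arrive from the guest over IPC,
// get wrapped in an MLMultiArray, and the prediction result is sent back.
func handleInferenceRequest(inputBytes: [Float], modelURL: URL) throws -> MLFeatureProvider {
    let model = try MLModel(contentsOf: modelURL)
    let input = try MLMultiArray(shape: [1, inputBytes.count] as [NSNumber], dataType: .float32)
    for (i, value) in inputBytes.enumerated() {
        input[i] = NSNumber(value: value)
    }
    // "input" is a placeholder for whatever the model's input feature is named.
    let provider = try MLDictionaryFeatureProvider(dictionary: ["input": MLFeatureValue(multiArray: input)])
    return try model.prediction(from: provider)
}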
Hello!
I have a TrackNet model that I have converted to CoreML (.mlpackage) using coremltools, and the conversion process appears to go smoothly as I get the .mlpackage file I am looking for with the weights and model.mlmodel file in the folder. However, when I drag it into Xcode, it just shows up as 4 script tags (as pictured) instead of the model "interface" that is typically expected. I initially was concerned that my model was not compatible with CoreML, but upon logging the conversions, everything seems to be converted properly.
Here is some code that may be relevant to debugging this issue. How I use the model:
model = BallTrackerNet() # this is the model architecture which will be referenced later
device = self.device # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device)) # balltrackerbest is the weights
model = model.to(device)
model.eval()
Here is the BallTrackerNet() model itself:
import torch.nn as nn
import torch


class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)


class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)
Here is also the metadata of my model:
[
{
"metadataOutputVersion" : "3.0",
"storagePrecision" : "Float16",
"outputSchema" : [
{
"hasShapeFlexibility" : "0",
"isOptional" : "0",
"dataType" : "Float32",
"formattedType" : "MultiArray (Float32 1 × 256 × 230400)",
"shortDescription" : "",
"shape" : "[1, 256, 230400]",
"name" : "var_462",
"type" : "MultiArray"
}
],
"modelParameters" : [
],
"specificationVersion" : 6,
"mlProgramOperationTypeHistogram" : {
"Cast" : 2,
"Conv" : 18,
"Relu" : 18,
"BatchNorm" : 18,
"Reshape" : 1,
"UpsampleNearestNeighbor" : 3,
"MaxPool" : 3
},
"computePrecision" : "Mixed (Float16, Float32, Int32)",
"isUpdatable" : "0",
"availability" : {
"macOS" : "12.0",
"tvOS" : "15.0",
"visionOS" : "1.0",
"watchOS" : "8.0",
"iOS" : "15.0",
"macCatalyst" : "15.0"
},
"modelType" : {
"name" : "MLModelType_mlProgram"
},
"userDefinedMetadata" : {
"com.github.apple.coremltools.source_dialect" : "TorchScript",
"com.github.apple.coremltools.source" : "torch==2.5.1",
"com.github.apple.coremltools.version" : "8.1"
},
"inputSchema" : [
{
"hasShapeFlexibility" : "0",
"isOptional" : "0",
"dataType" : "Float32",
"formattedType" : "MultiArray (Float32 1 × 9 × 360 × 640)",
"shortDescription" : "",
"shape" : "[1, 9, 360, 640]",
"name" : "input_frames",
"type" : "MultiArray"
}
],
"generatedClassName" : "BallTracker",
"method" : "predict"
}
]
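In case it helps, this is roughly how I expect to exercise the model from Swift once Xcode recognizes the package (a sketch only; compiling at runtime is just my way of checking that the package itself is valid):
import CoreML

// Sketch: compile the .mlpackage at runtime as a sanity check, then run a dummy
// prediction with the (1, 9, 360, 640) input from the metadata above.
func smokeTestBallTracker(packageURL: URL) async throws {
    let compiledURL = try await MLModel.compileModel(at: packageURL)
    let model = try MLModel(contentsOf: compiledURL)

    let input = try MLMultiArray(shape: [1, 9, 360, 640], dataType: .float32)
    let provider = try MLDictionaryFeatureProvider(
        dictionary: ["input_frames": MLFeatureValue(multiArray: input)]
    )
    let output = try model.prediction(from: provider)
    print(output.featureNames) // Expecting "var_462" per the metadata.
}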
I have been struggling with this conversion for almost 2 weeks now so any help, ideas or pointers would be greatly appreciated! Let me know if any other information would be helpful to see as well.
Thanks!
Michael
Hi all,
I'm capturing a photo using AVCapturePhotoOutput, and I've set:
let photoSettings = AVCapturePhotoSettings()
photoSettings.isDepthDataDeliveryEnabled = true
Then I create the handler like this:
let data = photo.fileDataRepresentation()
let handler = try ImageRequestHandler(data: data, orientation: .right)
Now I’m wondering:
If depth data delivery is enabled, is it actually included and used when I pass the Data to ImageRequestHandler?
Or do I need to explicitly pass the depth data using the other initializer?
let handler = try ImageRequestHandler(
    cvPixelBuffer: photo.pixelBuffer!,
    depthData: photo.depthData,
    orientation: .right
)
In short:
Does ImageRequestHandler(data:) make use of embedded depth info from AVCapturePhoto.fileDataRepresentation() — or is the pixel buffer + explicit depth data required?
Thanks for any clarification!
I have an app that streams in data from the Foundation Model and I have a card that shows one of the outputs. I want my card to accept a partially generated model but I keep getting a nonsensical error.
The error I get on line 59 is:
Cannot convert value of type 'FrostDate.VegetableSuggestion.PartiallyGenerated' (aka 'FrostDate.VegetableSuggestion') to expected argument type 'FrostDate.VegetableSuggestion.PartiallyGenerated'
Here is my card with preview:
import SwiftUI
import FoundationModels

struct VegetableSuggestionCard: View {
    let vegetableSuggestion: VegetableSuggestion.PartiallyGenerated

    init(vegetableSuggestion: VegetableSuggestion.PartiallyGenerated) {
        self.vegetableSuggestion = vegetableSuggestion
    }

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            if let name = vegetableSuggestion.vegetableName {
                Text(name)
                    .font(.headline)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startIndoors = vegetableSuggestion.startSeedsIndoors {
                Text("Start indoors: \(startIndoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startOutdoors = vegetableSuggestion.startSeedsOutdoors {
                Text("Start outdoors: \(startOutdoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let transplant = vegetableSuggestion.transplantSeedlingsOutdoors {
                Text("Transplant: \(transplant)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let tips = vegetableSuggestion.tips {
                Text("Tips: \(tips)")
                    .foregroundStyle(.secondary)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
        }
        .padding(16)
        .frame(maxWidth: .infinity, alignment: .leading)
        .background(
            RoundedRectangle(cornerRadius: 16, style: .continuous)
                .fill(.background)
                .overlay(
                    RoundedRectangle(cornerRadius: 16, style: .continuous)
                        .strokeBorder(.quaternary, lineWidth: 1)
                )
                .shadow(color: Color.black.opacity(0.05), radius: 6, x: 0, y: 2)
        )
    }
}

#Preview("Vegetable Suggestion Card") {
    let sample = VegetableSuggestion.PartiallyGenerated(
        vegetableName: "Tomato",
        startSeedsIndoors: "6–8 weeks before last frost",
        startSeedsOutdoors: "After last frost when soil is warm",
        transplantSeedlingsOutdoors: "1–2 weeks after last frost",
        tips: "Harden off seedlings; provide full sun and consistent moisture."
    )
    VegetableSuggestionCard(vegetableSuggestion: sample)
        .padding()
        .previewLayout(.sizeThatFits)
}
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Pretty much as per the title, and I suspect I know the answer. Given that Foundation Models run on device, is it possible to use the Foundation Models framework inside of a DeviceActivityReport? I've been tinkering with it, and all I get is errors and "Sandbox restrictions". Am I missing something? It seems like a missed trick not to be able to utilise on-device AI/ML with other frameworks.
I'm working on localizing my prompts to support multiple languages, and in some cases my prompts have String-interpolated Generable objects. For example:
"Given the following workout routine: \(routine), suggest one additional exercise to complement it."
In the Strings dictionary, I'm only able to select String, Int or Double parameters using %@ and %lld.
Has anyone found a way to accomplish this?
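The workaround I'm considering is to convert the Generable object to a plain String before interpolating it, so the catalog sees an ordinary %@ parameter. A rough sketch (WorkoutRoutine here is a stand-in for my actual Generable type):
import Foundation

// Stand-in for my Generable type; only the stringification matters for localization.
struct WorkoutRoutine: CustomStringConvertible {
    let exercises: [String]
    var description: String { exercises.joined(separator: ", ") }
}

func localizedPrompt(for routine: WorkoutRoutine) -> String {
    // Interpolating a String (not the Generable itself) exposes a %@ slot in the catalog.
    let routineText = String(describing: routine)
    return String(localized: "Given the following workout routine: \(routineText), suggest one additional exercise to complement it.")
}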
I am currently training a Tabular Classification model in Create ML. The dataset comprises 30 features, with 1,000,000 training data points and 1,000,000 validation data points. Could you please estimate the approximate training time on an M4 Max MacBook Pro?
During the training process, Create ML has been displaying the "Processing" status, but there is no progress bar. I would like to know whether the training is still ongoing, as I often suspect that it has stalled.
I’ve been testing silent Siri engagement via typing on iOS 18 and also on iOS 26 beta 1 and beta 2. While normal typing works perfectly in type-to-Siri mode, I’ve noticed that swipe-to-type gestures don’t work within Siri’s input field. Interestingly, you still feel the usual haptic feedback associated with swipe typing, but no text appears in the Siri text box. Swipe-to-type continues to work flawlessly in other apps like Messages and Notes, so this seems to be an issue specific to Siri’s typing input handler in these betas. Hopefully, it will be fixed in the next release because swipe typing is essential to my silent Siri workflow.
Topic:
Machine Learning & AI
SubTopic:
Core ML
I've checked on pypi.org and it appears to only have arm64 packages, has x86 with AMD been deprecated?
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Machine Learning and AI Frameworks.
What are you most excited about in the Foundation Models framework?
The Foundation Models framework provides access to an on-device Large Language Model (LLM), enabling entirely on-device processing for intelligent features. This allows you to build features such as personalized search suggestions and dynamic NPC generation in games. The combination of guided generation and streaming capabilities is particularly exciting for creating delightful animations and features with reliable output. The seamless integration with SwiftUI and the new design material Liquid Glass is also a major advantage.
When should I still bring my own LLM via CoreML?
It's generally recommended to first explore Apple's built-in system models and APIs, including the Foundation Models framework, as they are highly optimized for Apple devices and cover a wide range of use cases. However, Core ML is still valuable if you need more control or choice over the specific model being deployed, such as customizing existing system models or augmenting prompts. Core ML provides the tools to get these models on-device, but you are responsible for model distribution and updates.
Should I migrate PyTorch code to MLX?
MLX is an open-source, general-purpose machine learning framework designed for Apple Silicon from the ground up. It offers a familiar API, similar to PyTorch, and supports C, C++, Python, and Swift. MLX emphasizes unified memory, a key feature of Apple Silicon hardware, which can improve performance. It's recommended to try MLX and see if its programming model and features better suit your application's needs. MLX shines when working with state-of-the-art, larger models.
Can I test Foundation Models in Xcode simulator or device?
Yes, you can use the Xcode simulator to test Foundation Models use cases. However, your Mac must be running macOS Tahoe. You can test on a physical iPhone running iOS 26 by connecting it to your Mac and running Playgrounds or live previews directly on the device.
Which on-device models will be supported? any open source models?
The Foundation Models framework currently supports Apple's first-party models only. This allows for platform-wide optimizations, improving battery life and reducing latency. While Core ML can be used to integrate open-source models, it's generally recommended to first explore the built-in system models and APIs provided by Apple, including those in the Vision, Natural Language, and Speech frameworks, as they are highly optimized for Apple devices. For frontier models, MLX can run very large models.
How often will the Foundational Model be updated? How do we test for stability when the model is updated?
The Foundation Model will be updated in sync with operating system updates. You can test your app against new model versions during the beta period by downloading the beta OS and running your app. It is highly recommended to create an "eval set" of golden prompts and responses to evaluate the performance of your features as the model changes or as you tweak your prompts. Report any unsatisfactory or satisfactory cases using Feedback Assistant.
Which on-device model/API can I use to extract text data from images such as: nutrition labels, ingredient lists, cashier receipts, etc? Thank you.
The Vision framework offers the RecognizeDocumentRequest which is specifically designed for these use cases. It not only recognizes text in images but also provides the structure of the document, such as rows in a receipt or the layout of a nutrition label. It can also identify data like phone numbers, addresses, and prices.
What is the context window for the model? What are max tokens in and max tokens out?
The context window for the Foundation Model is 4,096 tokens. The split between input and output tokens is flexible. For example, if you input 4,000 tokens, you'll have 96 tokens remaining for the output. The API takes in text, converting it to tokens under the hood. When estimating token count, a good rule of thumb is 3-4 characters per token for languages like English, and 1 character per token for languages like Japanese or Chinese. Handle potential errors gracefully by asking for shorter prompts or starting a new session if the token limit is exceeded.
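For example, a minimal sketch of recovering from a context-window overrun (assuming the documented exceededContextWindowSize error case) might look like this:
import FoundationModels

// Sketch: retry in a fresh session when the 4,096-token context window is exceeded.
func respondSafely(_ session: LanguageModelSession, to prompt: String) async -> String? {
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Start over with a fresh (empty) session, or retry with a shorter prompt.
        let fresh = LanguageModelSession()
        return try? await fresh.respond(to: prompt).content
    } catch {
        return nil
    }
}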
Is there a rate limit for Foundation Models API that is limited by power or temperature condition on the iPhone?
Yes, there are rate limits, particularly when your app is in the background. A budget is allocated for background app usage, but exceeding it will result in rate-limiting errors. In the foreground, there is no rate limit unless the device is under heavy load (e.g., camera open, game mode). The system dynamically balances performance, battery life, and thermal conditions, which can affect the token throughput. Use appropriate quality of service settings for your tasks (e.g., background priority for background work) to help the system manage resources effectively.
Do the foundation models support languages other than English?
Yes, the on-device Foundation Model is multilingual and supports all languages supported by Apple Intelligence. To get the model to output in a specific language, prompt it with instructions indicating the user's preferred language using the locale API (e.g., "The user's preferred language is en-US"). Putting the instructions in English, but then putting the user prompt in the desired output language is a recommended practice.
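A minimal sketch of that pattern (the exact instruction wording is up to you):
import Foundation
import FoundationModels

// Instructions stay in English; the user prompt itself can be in the target language.
let session = LanguageModelSession(
    instructions: "The user's preferred language is \(Locale.current.identifier(.bcp47)). Respond in that language."
)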
Are larger server-based models available through Foundation Models?
No, the Foundation Models API currently only provides access to the on-device Large Language Model at the core of Apple Intelligence. It does not support server-side models. On-device models are preferred for privacy and for performance reasons.
Is it possible to run Retrieval-Augmented Generation (RAG) using the Foundation Models framework?
Yes, it is possible to run RAG on-device, but the Foundation Models framework does not include a built-in embedding model. You'll need to use a separate database to store vectors and implement nearest neighbor or cosine distance searches. The Natural Language framework offers simple word and sentence embeddings that can be used. Consider using a combination of Foundation Models and Core ML, using Core ML for your embedding model.
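As a rough illustration, a brute-force on-device search using the NaturalLanguage sentence embedding could look like this (a sketch, not a production vector store):
import NaturalLanguage

// Cosine similarity between two embedding vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let magA = (a.reduce(0) { $0 + $1 * $1 }).squareRoot()
    let magB = (b.reduce(0) { $0 + $1 * $1 }).squareRoot()
    return dot / (magA * magB)
}

// Brute-force nearest-neighbor search over a small document set.
func bestMatch(for query: String, in documents: [String]) -> String? {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english),
          let queryVector = embedding.vector(for: query) else { return nil }
    return documents.max { lhs, rhs in
        let l = embedding.vector(for: lhs).map { cosineSimilarity($0, queryVector) } ?? -1
        let r = embedding.vector(for: rhs).map { cosineSimilarity($0, queryVector) } ?? -1
        return l < r
    }
}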
Topic:
Machine Learning & AI
SubTopic:
General
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test the app on a physical device, the model returns the same prediction 100% of the time. I even changed the label names around and it still resulted in the same case. I have listed my labels as cases, and it is just stuck on the same case (case 1).
My code is below:
https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift
I couldn't add the code as it was too long so I hope github link is fine!
Can't import data into a Create ML word tagging project.
The training data is 100% correct, I guarantee it. I mean, look, this one has just one entry in it:
[
{
"tokens": [
"a", "august", "gruters"
],
"labels": [
"BUILDER", "BUILDER", "BUILDER"
]
}
]
Topic:
Machine Learning & AI
SubTopic:
Create ML
Recently, I've been trying to deploy some third-party LLMs to Apple devices.
The methodology is similar to https://github.com/Anemll/Anemll.
The biggest issue I'm having now is the runtime memory usage.
When there are multiple functions in a model (mlpackage or mlmodelc), the runtime memory usage for weights is somehow duplicated when I load all of them. Here's the detail:
I created my multifunction mlpackage following https://apple.github.io/coremltools/docs-guides/source/multifunction-models.html
I loaded each of the functions using the generated swift class:
let config = MLModelConfiguration()
config.computeUnits = MLComputeUnits.cpuAndNeuralEngine
config.functionName = "infer_512";
let ffn1_infer_512 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
config.functionName = "infer_1024";
let ffn1_infer_1024 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
config.functionName = "infer_2048";
let ffn1_infer_2048 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
I observed that RAM usage increases linearly as I load each of the functions.
Using instruments, I see that there are multiple HWX files generated and loaded, each of which contains all the weight data.
My understanding of what's happening here:
The Core ML framework does some MIL->MIL preprocessing before further compilation, which includes separating the CPU workload from the ANE workload.
The ANE part of each function is moved into a separate MIL file and then compiled separately into its own HWX file.
The problem is that the weight data in these HWX files is duplicated. Since the weight data of an LLM is huge, this causes out-of-memory issues on mobile devices.
The improvement I'm hoping for from Apple:
I hope the processed MIL files can be merged back into one before calling ANECCompile(), so that the weights can be shared. I don't have control over that from user space and I'm not sure whether it is feasible, so I'm asking for help here.
Thanks.
Topic:
Machine Learning & AI
SubTopic:
Core ML
At WWDC25, Metal 4 introduced quite a few exciting new features for machine learning optimization. As we all know, the PyTorch Metal Performance Shaders (MPS) backend is one of the most important tools for machine learning on the Mac, but the MPS introduction page does not show any support information for Metal 4.
In my quantization code, the line:
compressed_model_a8 = cto.coreml.experimental.linear_quantize_activations(
    model, activation_config, [{'img': np.random.randn(1, 13, 1024, 1024)}]
)
has taken 90 minutes to run so far and is still not completed. From debugging, I can see that the line it's stuck on is line 261 in _model_debugger.py:
model = ct.models.MLModel(
    cloned_spec,
    weights_dir=self.weights_dir,
    compute_units=compute_units,
    skip_model_load=False,  # Don't skip model load as we need model prediction to get activations range.
)
Is this expected behaviour? Would it be quicker to run on another computer with more RAM?