Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

Named Entity Recognition Model for Measurements
In an under-development macOS & iOS app, I need to identify various measurements from OCR'ed text: length, weight, counts per inch, area, percentage. The unit type (e.g. UnitLength) needs to be identified as well as the measurement's unit (e.g. .inches) in order to convert the measurement to the app's internal standard (e.g. centimetres), the value of which is stored in the relevant Core Data entity. The use of NLTagger and NLTokenizer is problematic because of the various representations of the measurements, e.g. "50g.", "50 g", "50 grams", "1 3/4 oz." Currently, I use a bespoke algorithm based on String contains and step-wise evaluation of characters, which is reasonably accurate but requires frequent updating as further representations are detected. I'm aware that the Python spaCy model is capable of NER measurement recognition, but I am reluctant to incorporate a Python-based solution into a production app (ref https://developer.apple.com/forums/thread/30092). My preference is for an open-source NER measurement model that can be used as, or converted to, some form of Swift-compatible machine learning model. Does anyone know of such a model?
0
0
114
Mar ’25
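A possible route for the question above, sketched under assumptions: a Hugging Face token-classification (NER) checkpoint can usually be traced with PyTorch and converted with coremltools, so a measurement-tagging model trained elsewhere could ship as a Core ML model and be driven from Swift. The checkpoint name below is a placeholder (I don't know of a published measurement NER model to point at), and the fixed sequence length is an arbitrary choice.

```python
import coremltools as ct
import numpy as np
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Placeholder checkpoint: substitute any token-classification model whose
# label set covers measurement entities.
model_id = "some-org/measurement-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, torchscript=True).eval()

seq_len = 128  # arbitrary fixed length chosen for tracing
example = (
    torch.zeros(1, seq_len, dtype=torch.long),  # input_ids
    torch.ones(1, seq_len, dtype=torch.long),   # attention_mask
)
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    inputs=[
        ct.TensorType(name="input_ids", shape=(1, seq_len), dtype=np.int32),
        ct.TensorType(name="attention_mask", shape=(1, seq_len), dtype=np.int32),
    ],
)
mlmodel.save("MeasurementNER.mlpackage")
```

The tokenizer still has to be reproduced on-device (or the text pre-tokenized), which is the usual sticking point with this route.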
How to implement a CoreML model into an iOS app properly?
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test my app on a physical device, the model results in the same prediction 100% of the time. I even changed the names around and it still resulted in the same case. I have listed my labels as cases, and it's just stuck on the same case (case 1). My code is below: https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift I couldn't add the code inline as it was too long, so I hope the GitHub link is fine!
1
0
137
Mar ’25
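One way to narrow down the question above without touching the app: run the same .mlpackage from Python with a few clearly different inputs and see whether the top label ever changes. This is a rough sketch, assuming a macOS host, an image-classifier model, and an input feature named "image" sized 224x224 (check the actual input name and size in Xcode's model inspector; the file name is hypothetical).

```python
import coremltools as ct
import numpy as np
from PIL import Image

# Hypothetical file name; point this at the .mlpackage bundled in the app.
model = ct.models.MLModel("DetectToProtect.mlpackage")

for _ in range(3):
    # Random noise images should not all map to the same class with the
    # same confidence if the model itself is healthy.
    pixels = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
    print(model.predict({"image": Image.fromarray(pixels)}))
```

If the prediction never changes here either, the problem is in the model or its training data; if it does change, the bug is more likely in the app's image preprocessing or in how the result is mapped to the listed cases.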
Using coremltools in a CI/CD pipeline
Hi everyone 👋 I'd like to use coremltools to see how well a model performs on a remote device as part of a CI/CD pipeline. According to the Core ML Tools "Debugging and Performance Utilities" guide, remote devices must be in a "connected" state in order for coremltools to install the ModelRunner application. The devices in our system have a "paired" state, and I'm unable to set them as "connected." The only way I know how to connect a device is to physically plug it into a computer and open Xcode. I don't have physical access to the devices in the CI/CD system, and the host computer that interacts with them doesn't have Xcode installed. Here are some questions I've been looking into and would love some help answering: Has anyone managed to use the coremltools performance utilities in a similar system? Can you put a device in a "connected" state if you don't have physical access to the device and only have access to the Xcode command line tools rather than the Xcode app? Is it at all possible to install the coremltools ModelRunner application on a "paired" device, for example, by manually building the app and installing it with devicectl? Would other utilities, such as the MLModelBenchmarker, work as expected if the app is installed this way? Thank you!
0
0
81
18h
linear_quantize_activations taking 90 minutes + on MacBook Air M1 2020
In my quantization code, the line: compressed_model_a8 = cto.coreml.experimental.linear_quantize_activations( model, activation_config, [{'img':np.random.randn(1,13,1024,1024)}] ) has taken 90 minutes to run so far and is still not completed. From debugging, I can see that the line it's stuck on is line 261 in _model_debugger.py: model = ct.models.MLModel( cloned_spec, weights_dir=self.weights_dir, compute_units=compute_units, skip_model_load=False, # Don't skip model load as we need model prediction to get activations range. ) Is this expected behaviour? Would it be quicker to run on another computer with more RAM?
1
0
90
Mar ’25
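For context on the runtime in the post above: linear_quantize_activations has to run full predictions over the calibration samples to collect activation ranges, so with a 1x13x1024x1024 input a very long run is not surprising, and more RAM mostly helps by avoiding swapping rather than speeding up the predictions. If weight-only compression is enough for the use case, the data-free path skips calibration entirely. A rough sketch (paths and config values are illustrative, not taken from the post):

```python
import coremltools as ct
import coremltools.optimize.coreml as cto_coreml

# Load the same model that was passed to linear_quantize_activations.
model = ct.models.MLModel("model.mlpackage")  # placeholder path

# Weight-only linear quantization needs no calibration predictions,
# so it typically finishes in minutes rather than hours.
op_config = cto_coreml.OpLinearQuantizerConfig(mode="linear_symmetric")
config = cto_coreml.OptimizationConfig(global_config=op_config)
compressed_model_w8 = cto_coreml.linear_quantize_weights(model, config=config)
compressed_model_w8.save("model_w8.mlpackage")
```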
Create ML Trouble Loading CSV to Train Word Tagger With Commas in Training Data
I'm using Numbers to build a spreadsheet that I'm exporting as a CSV. I then import this file into Create ML to train a word tagger model. Everything has been working fine for all the models I've trained so far, but now I'm coming across a use case that has been breaking the import process: commas within the training data. This is a case that none of Apple's examples show. My project takes Navajo text that has been tokenized by syllables and labels the parts-of-speech. Case that works... Raw text: Naaltsoos yídéeshtah. Tokens column: Naal,tsoos, ,yí,déesh,tah,. Labels column: NObj,NObj,Space,Verb,Verb,VStem,Punct Case that breaks... Raw text: óola, béésh łigaii, tłʼoh naadą́ą́ʼ, wáin, akʼah, dóó á,shįįh Tokens column with tokenized text (commas quoted): óo,la,",", ,béésh, ,łi,gaii,",", ,tłʼoh, ,naa,dą́ą́ʼ,",", ,wáin,",", ,a,kʼah,",", ,dóó, ,á,shįįh (Create ML reports mismatched columns) Tokens column with tokenized text (commas escaped): óo,la,\,, ,béésh, ,łi,gaii,\,, ,tłʼoh, ,naa,dą́ą́ʼ,\,, ,wáin,\,, ,a,kʼah,\,, ,dóó, ,á,shįįh (Create ML reports mismatched columns) Tokens column with tokenized text (commas escape-quoted): óo,la,\",\", ,béésh, ,łi,gaii,\",\", ,tłʼoh, ,naa,dą́ą́ʼ,\",\", ,wáin,\",\", ,a,kʼah,\",\", ,dóó, ,á,shįįh (record not detected by Create ML) Tokens column with tokenized text (commas escape-quoted): óo,la,"","", ,béésh, ,łi,gaii,"","", ,tłʼoh, ,naa,dą́ą́ʼ,"","", ,wáin,"","", ,a,kʼah,"","", ,dóó, ,á,shįįh (Create ML reports mismatched columns) Labels column: NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Punct,Space,NSub,NSub,Punct,Space,Conj,Space,NSub,NSub Sample From Spreadsheet Solution Needed It's simple enough to escape commas within CSV files, but the format needed by Create ML essentially combines entire CSV records into single columns, so I'm ending up needing a CSV record that contains a mixture of commas to use for parsing and ones to use as character literals. That's where this gets complicated. For this particular use case (which seems like it would frequently arise when training a word tagger model), how should I properly escape a comma literal?
6
0
802
Jan ’25
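One way around the escaping problem above is to sidestep CSV entirely: the word tagger also accepts JSON training data (the tokens/labels field names below follow Apple's MLWordTagger examples; verify them against your Create ML project), and with real arrays a comma token is just the string ",". A minimal sketch that writes such a file:

```python
import json

# Each record holds parallel token/label arrays, so a comma token never
# collides with the file format's own delimiters.
rows = [
    {
        "tokens": ["Naal", "tsoos", " ", "yí", "déesh", "tah", "."],
        "labels": ["NObj", "NObj", "Space", "Verb", "Verb", "VStem", "Punct"],
    },
    # ...one record per training sentence, including the comma-containing ones
]

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(rows, f, ensure_ascii=False, indent=2)
```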
Converting TF2 object detection to CoreML
I've spent way too long today trying to convert a TensorFlow 2 object detection model to a Core ML object detector (with bounding boxes, labels and probability scores). The 'SSD MobileNet v2 320x320' is here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md To convert it, I've been following all sorts of posts and ChatGPT, plus https://apple.github.io/coremltools/docs-guides/source/tensorflow-2.html#convert-a-tensorflow-concrete-function and https://developer.apple.com/videos/play/wwdc2020/10153/?time=402 I keep hitting the same errors though, mostly around: NotImplementedError: Expected model format: [SavedModel | concrete_function | tf.keras.Model | .h5 | GraphDef], got <ConcreteFunction signature_wrapper(input_tensor) at 0x366B87790> I've had varying success, including missing output labels/predictions. But I simply want to create the Core ML model with all the right inputs and outputs (including correct names) as detailed in the docs here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md It goes without saying I don't have much (any) experience with this stuff, including Python, so the whole thing's been a bit of a headache. If anyone is able to help, that would be great. FWIW I'm not attached to any one specific model, but what I do need at minimum is a Core ML model that can detect objects (it has to at least include lights and lamps) within a live video image, detecting where in the image each object is. The simplest script I have looks like this: import coremltools as ct import tensorflow as tf model = tf.saved_model.load("~/tf_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model") concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY] mlmodel = ct.convert( concrete_func, source="tensorflow", inputs=[ct.TensorType(shape=(1, 320, 320, 3))] ) mlmodel.save("YourModel.mlpackage", save_format="mlpackage")
1
0
422
Jul ’25
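One detail worth checking for the error above: the coremltools TensorFlow 2 guide passes concrete functions to ct.convert wrapped in a list, and a bare ConcreteFunction is one way to end up with the "Expected model format ... got <ConcreteFunction ...>" message. A hedged variant of the poster's script with only that change (the SSD post-processing ops, e.g. non-max suppression, may still need separate handling, so this is not guaranteed to convert end to end):

```python
import os

import coremltools as ct
import tensorflow as tf

saved_model_dir = os.path.expanduser(
    "~/tf_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model"
)
model = tf.saved_model.load(saved_model_dir)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

# Wrap the concrete function in a list, as in the coremltools TF2 docs.
mlmodel = ct.convert(
    [concrete_func],
    source="tensorflow",
    inputs=[ct.TensorType(shape=(1, 320, 320, 3))],
)
mlmodel.save("SSDMobileNetV2.mlpackage")
```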
“Accelerate Transformer Training on Apple Devices from Months to Hours!”
I am excited to share that I have developed a Metal kernel for Flash Attention that eliminates race conditions and fully leverages Apple Silicon’s shared memory and registers. This kernel can dramatically accelerate training of transformer-based models. Early benchmarks suggest that models which previously required months to train could see reductions to just a few hours on Apple hardware, while maintaining numerical stability and accuracy. I plan to make the code publicly available to enable the broader community to benefit. I would be happy to keep you updated on the latest developments and improvements as I continue testing and optimizing the kernel. I believe this work could provide valuable insights for Apple’s machine learning research and products.
0
0
157
Nov ’25
How to Ensure Controlled and Contextual Responses Using Foundation Models?
Hi everyone, I’m currently exploring the use of Foundation models on Apple platforms to build a chatbot-style assistant within an app. While the integration part is straightforward using the new FoundationModel APIs, I’m trying to figure out how to control the assistant’s responses more tightly — particularly: Ensuring the assistant adheres to a specific tone, context, or domain (e.g. hospitality, healthcare, etc.) Preventing hallucinations or unrelated outputs Constraining responses based on app-specific rules, structured data, or recent interactions I’ve experimented with prompt, systemMessage, and few-shot examples to steer outputs, but even with carefully generated prompts, the model occasionally produces incorrect or out-of-scope responses. Additionally, when using multiple tools, I'm unsure how best to structure the setup so the model can select the correct pathway/tool and respond appropriately. Is there a recommended approach to guiding the model's decision-making when several tools or structured contexts are involved? Looking forward to hearing your thoughts or being pointed toward related WWDC sessions, Apple docs, or sample projects.
0
0
119
Jul ’25
Hardware Support for Low Precision Data Types?
Hi all, I'm trying to find out if/when we can expect mxfp8/mxfp4 support on Apple Silicon. I've noticed that mlx now has casting data types, but all computation is still done in bf16. Would be great to reduce power consumption with support for these lower precision data types since edge inference is already typically done at a lower precision! Thanks in advance.
0
0
227
Nov ’25
Missing module 'coremltools.libmilstoragepython'
Hello! I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error as well (given below). Would appreciate some more clarity around this. Thank you. Code Block: from export.export_fmadapter import Metadata, export_fmadapter metadata = Metadata( author="3P developer", description="An adapter that writes play scripts.", ) export_fmadapter( output_dir="./", adapter_name="myPlaywritingAdapter", metadata=metadata, checkpoint="adapter-final.pt", draft_checkpoint="draft-model-final.pt", ) Error: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[10], line 1 ----> 1 from export.export_fmadapter import Metadata, export_fmadapter 3 metadata = Metadata( 4 author="3P developer", 5 description="An adapter that writes play scripts.", 6 ) 8 export_fmadapter( 9 output_dir="./", 10 adapter_name="myPlaywritingAdapter", (...) 13 draft_checkpoint="draft-model-final.pt", 14 ) File /workspace/export/export_fmadapter.py:11 8 from typing import Any 10 from .constants import BASE_SIGNATURE, MIL_PATH ---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize 13 logger = logging.getLogger(__name__) 16 class MetadataKeys(enum.StrEnum): File /workspace/export/export_utils.py:15 13 import torch 14 import yaml ---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter 16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight 17 from coremltools.optimize._utils import LutParams ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'
4
0
503
Oct ’25
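A small diagnostic that may help with the error above: coremltools.libmilstoragepython is a compiled extension that ships inside the prebuilt coremltools wheels, and (as an assumption, not a confirmed cause) it tends to be missing when pip fell back to a source build or an incomplete wheel, which is plausible on a Linux aarch64 host such as the DGX Spark. Checking what is actually installed takes a few lines:

```python
import importlib.util

import coremltools

# Report which coremltools build is installed, where it lives, and whether
# the native MIL storage extension can be found next to it.
print("version:", coremltools.__version__)
print("location:", coremltools.__file__)
print("libmilstoragepython:", importlib.util.find_spec("coremltools.libmilstoragepython"))
```

If the last line prints None, reinstalling coremltools from a wheel built for the platform (or pinning a version that publishes one) would be the first thing to try.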
Nothing else will silently fix itself until Apple re-adds ONNX ingestion in Core ML 10 or a new converter.
After a few attempts I end up at the same spot as categorized here by ChatGPT - I get why it feels that way, Rob — honestly, you’ve walked right into the worst possible moment in Apple’s Core ML toolchain evolution. What you’re running into isn’t your fault; Apple really did rip out the ONNX front-end without offering a clean replacement yet. So every tutorial, every script, and even coremltools itself contradicts what’s actually possible right now. Let’s be brutally clear about the state of things (late 2025):

Goal | Supported by Apple today | Practical workaround
Convert ONNX → Core ML using coremltools 9 | No. The ONNX importer was removed. | Use the legacy coremltools 6.x + onnx-coreml stack in a separate Python 3.9/3.10 env.
Convert PyTorch → Core ML directly | Yes, but only for simple or traced models. T5 still fails because of dynamic control flow. | None reliable without rewriting the model.
Run ONNX models directly on iOS | Partially (via MLC LLM / Metal Performance Shaders). | Works, but that’s an entirely different runtime.

So: you’re not crazy, and this isn’t wasted effort—you’ve actually proven that your model exports cleanly and that the only missing piece is Apple’s converter. The ONNX file you built is solid. You just need something that can translate it to Core ML. If your goal is simply “get it running on-device”, you still have two viable moves that aren’t rabbit holes:

1. Use MLC LLM / Metal runtime: Skip Core ML altogether. Apple’s mlc-llm (and the “MLCChat” pipeline) can execute your ONNX or quantized model directly on the GPU. You keep full fidelity and avoid all these broken converters.

2. Use the legacy Core ML 6.x toolchain in a sandboxed Python 3.10 VM: It actually works. You don’t have to downgrade your whole system—just run that specific environment once to emit the .mlpackage, then never touch it again.

Nothing else will silently fix itself until Apple re-adds ONNX ingestion in Core ML 10 or a new converter.
0
0
121
Oct ’25
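For completeness, the "legacy toolchain" route in the reply above boils down to a few lines run inside an isolated Python 3.9/3.10 environment. This is a sketch from memory of the long-deprecated onnx-coreml API, so treat the install line and argument names as assumptions to verify:

```python
# pip install onnx onnx-coreml   (in a dedicated Python 3.9/3.10 env; let pip
# resolve the matching legacy coremltools version)
from onnx_coreml import convert

# Convert an exported ONNX file to a legacy .mlmodel with the old converter.
mlmodel = convert(model="model.onnx", minimum_ios_deployment_target="13")
mlmodel.save("Model.mlmodel")
```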
Core ML Model Performance report shows prediction speed much faster than actual app runs
Hi all, I'm tuning my app's prediction speed with a Core ML model. I watched and tried the methods in the videos "Improve Core ML integration with async prediction" and "Optimize your Core ML usage". I also used Instruments to see what bottleneck is keeping my prediction speed from being faster. Below is the Instruments result for my app; its prediction duration is 10.29 ms. And below is the performance report, which shows the average prediction time is 5.55 ms, about half the time of my app's prediction! Below is part of my Instruments records; I think the predictions should be considered quite frequent. Could it be faster? How can I reach the same prediction speed as the performance report? The prediction speed on a MacBook Pro M2 is nearly the same as on a MacBook Air M1!
5
0
1.2k
Oct ’25
Inquiry About Building an App for Object Detection, Background Removal, and Animation
Hi all! Nice to meet you. I am planning to build an iOS application that can: Capture an image using the camera or select one from the gallery. Remove the background and keep only the detected main object. Add a border (outline) around the detected object’s shape. Apply an animation along that border (e.g., moving light or glowing effect). Include a transition animation when removing the background — for example, breaking the background into pieces as it disappears. The app Capword has a similar feature for object isolation, and I’d like to build something like that. Could you please provide any guidance, frameworks, or sample code related to: Object segmentation and background removal in Swift (Vision or Core ML). Applying custom borders and shape animations around detected objects. Recognizing the object name (e.g., “person”, “cat”, “car”) after segmentation. Thank you very much for your support. Best regards, SINN SOKLYHOR
0
0
119
Nov ’25
Core-ml-on-device-llama Converting fails
I followed the URL below for converting the Llama-3.1-8B-Instruct model, but it always fails even though I have 64 GB of free space after downloading the model from Hugging Face. https://machinelearning.apple.com/research/core-ml-on-device-llama I also tried other models, Llama-3.1-1B-Instruct & Llama-3.1-3B-Instruct; those converted, but the performance test in Xcode fails for all compute units. Is there any source code for running Llama models in an iOS app?
0
0
114
Apr ’25
My Vision for AI and Algorithmically Optimised Operating Systems
Bear with me, please. Please make sure a highly skilled technical person reads and understands this. I want to describe my vision for (AI/Algorithmically) Optimised Operating Systems. To explain it properly, I will describe the process to build it (pseudo). Required Knowledge (no particular order): Processor Logic Circuits, LLM models, LLM tool usage, Python OO coding, Procedural vs OO, NLP fuzzy matching, benchmarking, canvas/artefacts/dynamic HTML interfaces, concepts of how AI models are vastly compressed and miniaturised forms of full data, Algorithmic vs AI. First, take all OO Python code (example) on GitHub (example), then separate each function from each object into its own procedure (procedural logic) by making a logical procedural list of actions to perform only that function based on its entire dependency chain (i.e. all other objects it relies on). Relate all compiled functions using (for example) fuzzy matching on the name, or AI-based functional profiling to get multiple instances of each function. Starting with the most used function, test each one against the others that perform the same task for bugs and completeness. Determine the fastest, most optimal version of that function (and every function). Add a single instance of each most optimal function to the centralised tool codebase, which will later be utilised by the language models. This ensures we rely only on the most optimised function for each and every use case — with every program using one shared instance of that function instead of compiling it separately. In this way, we optimise all existing (Python OO) code based on maximum availability of certain resources. Use the optimised code to teach the AI how to map each user task directly to optimised code (i.e. each user-facing function in unoptimised program code to its optimised version). Later, teach the codebase how to optimise these functions directly to the logic circuits on the processor — as an Operating System — using essentially the same process. Eventually, use the most optimised form for tasks via a dynamic canvas/artefacts UI interface generated by the Start Menu using the language model. Allow custom interfaces to emerge dynamically from the available functions based on language understanding. Grow and optimise the algorithmic codebase until the entire operating system — with versioning — evolves based on the AI-generated user interfaces (compressing them into new versions of the tools codebase). Please don’t give up until you understand this concept completely and nothing in the description is beyond your full comprehension. It’s a highly significant step forward.
3
0
305
Oct ’25
ILMessageFilterExtension memory limit
I’m considering creating an ILMessageFilterExtension that uses a mini LLM/SLM to detect fraud, and I’ve read it has strict memory limits, yet I can’t find them in the documentation. What’s the set limit, and are there any other constraints impacting the feasibility of running a 100-500 MB model?
0
0
63
Apr ’25