I am writing to inquire about content exclusion capabilities within Apple Intelligence, particularly regarding the use of configuration files such as .aiignore or .aiexclude—similar to what exists in other AI-assisted coding tools. These mechanisms are highly valuable in managing what content AI systems can access, especially in environments that involve sensitive code or proprietary frameworks.
I would appreciate it if anyone could clarify whether Apple Intelligence currently supports any exclusion configuration for AI-assisted features. If so, could you kindly provide documentation or guidance on how developers can implement these controls?
If not, are there any plans to include such a feature in future updates?
I have reinstalled everything, including the command line tools, but the CreateML framework still fails to install. I need the framework so that I can train my auto-categorization model, which predicts a category based on descriptions; specifically, I want to use revision 4.
Please advise on how I should proceed.
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions.
Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens.
That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
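To make the question concrete, this is the retrieval-then-ask flow I have in mind, shown as a minimal sketch; searchDocs(for:) is a hypothetical helper that would return only the few most relevant documentation excerpts:
import FoundationModels

// Hypothetical helper: keyword or embedding search over the in-app docs,
// returning just the excerpts relevant to this one question.
func searchDocs(for question: String) -> [String] { [] }

func answer(question: String) async throws -> String {
    // Keep the instructions small: only the excerpts that matter for this question.
    let context = searchDocs(for: question).joined(separator: "\n\n")
    let session = LanguageModelSession(instructions: """
        You answer questions about this app using only the documentation excerpts below.
        If the excerpts don't contain the answer, say so.

        \(context)
        """)
    let response = try await session.respond(to: question)
    return response.content
}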
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hi friends,
I have just found that the inference speed dropped to only about 1/10 of that of the original model.
Has anyone encountered this?
Thank you.
Topic: Machine Learning & AI
SubTopic: Core ML
I am using a contacts tool to fetch contacts from my address book, but the model isn't invoking my tool's call method. I even tried a much simpler tool, and the outcome is the same: the tool is never invoked.
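For reference, the tool I'm registering is roughly shaped like the sketch below; the names are illustrative, and the exact Tool protocol requirements (in particular the return type of call(arguments:)) have shifted between seeds, so treat it as a sketch rather than exact code:
import FoundationModels

struct FindContactTool: Tool {
    let name = "findContact"
    let description = "Finds a contact in the user's address book by name."

    @Generable
    struct Arguments {
        @Guide(description: "The name to search for")
        var query: String
    }

    func call(arguments: Arguments) async throws -> String {
        // A real implementation would query the address book here.
        "No contact found for \(arguments.query)"
    }
}

// The tool is handed to the session up front; the model decides when to call it.
let session = LanguageModelSession(
    tools: [FindContactTool()],
    instructions: "Use the findContact tool whenever the user asks about a contact."
)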
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hello
I’m experimenting with Apple’s on‑device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I’d like to confirm a few points:
• Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open‑source model?
• Is this on‑device model available on iOS (and iPadOS), or is it limited to macOS?
• When I write code in Xcode, is code completion powered by this same local model? If so, why isn’t the same model available in the left‑hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)?
• Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply?
Thanks
No matter what, the LanguageModelSession always returns very lengthy, verbose responses. I set the maximumResponseTokens option to various small numbers, but it doesn't appear to have any effect. I've even written instructions asking it to keep responses between 3 and 8 words, but it still returns multiple paragraphs. Is there a way to manage LLM response length? Thanks.
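For context, this is roughly the call I'm making; a minimal sketch, with the instruction wording just as an example:
import FoundationModels

let session = LanguageModelSession(
    instructions: "Answer in a single short sentence of 3 to 8 words."
)
// Cap the response length via generation options.
let options = GenerationOptions(maximumResponseTokens: 30)
let response = try await session.respond(to: "Summarize today's weather.", options: options)
print(response.content) // Still comes back as multiple paragraphs for me.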
It seems there was an undocumented change that made the Transcript.init(entries: [Transcript.Entry]) initializer private, which broke my application; it relies on manually reconstructing Transcript entries.
It worked fine on beta 1; on beta 2 there's this error:
dyld[72381]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <44342398-591C-3850-9889-87C9458E1440> /Users/mika/experiments/apple-on-device-ai/fm
Expected in: <66A793F6-CB22-3D1D-A560-D1BD5B109B0D> /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
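For reference, the reconstruction is nothing exotic; it's essentially just the call below (savedEntries stands in for the entries my app rebuilds), which compiled against beta 1 and now fails at runtime with the missing symbol above:
import FoundationModels

// savedEntries stands in for the entries my app reconstructs from storage.
let savedEntries: [Transcript.Entry] = []
let transcript = Transcript(entries: savedEntries) // built against beta 1, missing symbol on beta 2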
Is this part of an API transition? If so, Apple, please update your documentation.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I'm using Python 3.9.6, tensorflow 2.20.0, and tensorflow-metal 1.2.0, and when I try to run
import tensorflow as tf
It gives
Traceback (most recent call last):
File "/Users/haoduoyu/Code/demo.py", line 1, in <module>
import tensorflow as tf
File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/__init__.py", line 438, in <module>
_ll.load_library(_plugin_dir)
File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file)
If I uninstall tensorflow-metal, everything works fine. How can I fix this problem?
Somehow I'm not able to decrypt our Core ML models on my machine. It does not matter whether:
I clean the build / delete the build folder
it's a local build or a build downloaded from our build server
I log in as a different user
I reboot my system (macOS 15.4.1 (24E263))
I use a different network
I re-generate the encryption keys
I'm the only one in my team confronted with this issue. Using the encrypted models works fine for everyone else.
As soon as our application tries to load the bundled Core ML model, the following error is logged and returned:
Could not create persistent key blob for CD49E04F-1A42-4FBE-BFC1-2576B89EC233 : error=Error Domain=com.apple.CoreML Code=9 "Failed to generate key request for CD49E04F-1A42-4FBE-BFC1-2576B89EC233 with error: -42908"
Error code 9 points to a decryption issue, but offers no useful pointers and suggests that some sort of network request needs to be made in order to decrypt our models.
/*! Core ML throws/returns this error when the framework encounters an error in the model
decryption subsystem.
The typical cause for this error is in the key server configuration and the client application
cannot do much about it.
For example, a model loading method will throw/return the error when it uses incorrect model
decryption key.
*/
MLModelErrorModelDecryption API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0)) = 9,
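For completeness, the load path on our side is a plain async load; a minimal sketch of how the failure surfaces (the model URL here is illustrative):
import CoreML

func loadEncryptedModel() async {
    // Illustrative URL; in the app this is the bundled, encrypted compiled model.
    let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc")!
    do {
        let model = try await MLModel.load(contentsOf: url, configuration: MLModelConfiguration())
        print("Loaded:", model)
    } catch {
        let nsError = error as NSError
        // For us this logs the com.apple.CoreML domain with code 9 (model decryption) and the -42908 underlying error.
        print("Model load failed:", nsError.domain, nsError.code, nsError.userInfo)
    }
}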
I could not find a reference to error '-42908' anywhere.
ChatGPT just lied to me, as usual...
How can I resolve this, or diagnose it further?
Thanks.
Hi,
I've been modifying the Camera sample app found here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview
When processing the preview images, I call into the Vision APIs to either detect a person or an object, then use the segmentation mask to extract the person and composite them onto a different background with some other filters. I am using Core Image to filter the CIImages, then converting the result to a SwiftUI Image for display. When running on my iPhone it works fine; when running on my iPhone with the debugger attached, it crashes within a few seconds. Attached is a screenshot. At the top is an EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>:
This was working fine a couple of days ago; I'm not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
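For context, the per-frame work is essentially person segmentation plus a masked composite; a simplified sketch of what runs on each preview frame (the filter chain is trimmed):
import Vision
import CoreImage.CIFilterBuiltins

func composite(person pixelBuffer: CVPixelBuffer, over background: CIImage) throws -> CIImage? {
    // Ask Vision for a person segmentation mask for this frame.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    // Scale the mask up to the frame size, then blend the person over the new background.
    let frame = CIImage(cvPixelBuffer: pixelBuffer)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    let scaleX = frame.extent.width / mask.extent.width
    let scaleY = frame.extent.height / mask.extent.height
    mask = mask.transformed(by: .init(scaleX: scaleX, y: scaleY))

    let blend = CIFilter.blendWithMask()
    blend.inputImage = frame
    blend.backgroundImage = background
    blend.maskImage = mask
    return blend.outputImage
}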
I generate an array of random floats using the code shown below. However, I would like to do this with Double instead of Float. Are there any BNNS random number generators for double values, something like BNNSRandomFillUniformDouble? If not, is there a way I can convert BNNSNDArrayDescriptor from float to double?
import Accelerate

let n = 100_000_000
let result = Array<Float>(unsafeUninitializedCapacity: n) { buffer, initCount in
    // Wrap the uninitialized buffer in a BNNS descriptor and fill it with uniform random floats.
    var descriptor = BNNSNDArrayDescriptor(data: buffer, shape: .vector(n))!
    let randomGenerator = BNNSCreateRandomGenerator(BNNSRandomGeneratorMethodAES_CTR, nil)
    BNNSRandomFillUniformFloat(randomGenerator, &descriptor, 0, 1)
    initCount = n
}
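If there is no double-precision fill function, the only workaround I can think of is to fill a Float buffer as above and then widen the values to Double; a sketch using vDSP for the conversion:
import Accelerate

// Widen the Float results to Double (vDSP_vspdp converts single precision to double precision).
let doubles = [Double](unsafeUninitializedCapacity: n) { buffer, initCount in
    result.withUnsafeBufferPointer { src in
        vDSP_vspdp(src.baseAddress!, 1, buffer.baseAddress!, 1, vDSP_Length(n))
    }
    initCount = n
}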
Lately I am getting this error.
GenerativeModelsAvailability.Parameters: Initialized with invalid language code: en-GB. Expected to receive two-letter ISO 639 code. e.g. 'zh' or 'en'. Falling back to: en
Does anyone know what this is and how it can be resolved? The error does not crash the app.
Hello Apple Developer Community,
I'm exploring the integration of Apple Intelligence features into my mobile application and have a couple of questions regarding the current and upcoming API capabilities:
Custom Prompt Support: Is there a way to pass custom prompts to Apple Intelligence to generate specific inferences? For instance, can we provide a unique prompt to the Writing Tools or Image Playground APIs to obtain tailored outputs?
Direct Inference Capabilities: Beyond the predefined functionalities like text rewriting or image generation, does Apple Intelligence offer APIs that allow for more generalized inference tasks based on custom inputs?
I understand that Apple has provided APIs such as Writing Tools, Image Playground, and Genmoji. However, I'm interested in understanding the extent of customization and flexibility these APIs offer, especially concerning custom prompts and generalized inference.
Additionally, are there any plans or timelines for expanding these capabilities, perhaps with the introduction of new SDKs or frameworks that allow deeper integration and customization?
Any insights, documentation links, or experiences shared would be greatly appreciated.
Thank you in advance for your assistance!
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
I am writing an app that parses text and conducts some actions. I don't want to give too much away ;)
However, I am having a huge problem with token sizes. LanguageModelSession will of course give me the on-device model's 4096-token window, but when I go over 4096 tokens, my code doesn't seem to be falling back to Private Cloud Compute, or even to the system-configured ChatGPT. Can anyone assist me with this? Even after reading the docs, it's very unclear to me how the transition between the three takes place.
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
When using Foundation Models, is it possible to ask the model to produce output in a specific language, apart from giving an instruction like "Provide answers in ." ? (I tried that and it kind of worked, but it seems fragile.)
I haven't noticed an API to do so and have a use-case where the output should be in a user-selectable language that is not the current system language.
Hi everyone, I'm working on an iOS app that uses a Core ML model to run live image recognition. I've run into a persistent issue where the .mlpackage is not being turned into a Swift class: in carDetection.mlpackage, Xcode says the model class has not been generated yet, and the code that references that class shows the corresponding error.
What I’ve tried:
Verified Target Membership is checked for carDetectionModel.mlpackage
Confirmed the file is listed under Copy Bundle Resources (and removed from Compile Sources)
Cleaned the build folder (Shift + Cmd + K) and rebuilt
Renamed and re-added the .mlpackage file
Restarted Xcode and re-added the file
Logged bundle contents at runtime, but the .mlpackage still doesn’t appear
The .mlpackage is in Copy Bundle Resources and is not in Compile Sources; I just don't know why a Swift class is not being generated for it.
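For now, the only fallback I can think of is loading the compiled model by hand instead of relying on the generated class; a rough sketch (the resource name is just my package's name):
import CoreML

// Fallback idea: load the compiled model (.mlmodelc) directly instead of using the generated class.
// "carDetectionModel" is my package's name; Xcode compiles an .mlpackage into an .mlmodelc at build time.
if let url = Bundle.main.url(forResource: "carDetectionModel", withExtension: "mlmodelc"),
   let model = try? MLModel(contentsOf: url) {
    print(model.modelDescription)
} else {
    print("Compiled model not found in bundle or failed to load")
}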
Could someone please give me some guidance on what to do to resolve this issue?
Sorry if my question is a bit naive; I'm pretty new to iOS app development.
Topic: Machine Learning & AI
SubTopic: Core ML
I attempted to download the Adapter Toolkit linked from https://developer.apple.com/apple-intelligence/foundation-models-adapter/. It failed on all attempts with a "403 Forbidden" error. I had accepted the agreement on the first attempt. How can we get access, please?
I'm testing Foundation Models on my iPad Pro (5th gen) running iOS 26. As of late this morning, I can no longer load SystemLanguageModel.default. I'm not doing anything interesting; something as basic as the following only reports unavailable, specifically unavailable reason: modelNotReady.
let model = SystemLanguageModel.default
...
switch model.availability {
case .available:
    print("LM available")
case .unavailable(let reason):
    print("unavailable reason: ", String(describing: reason))
}
I also ran the FoundationModelsTripPlanner app, same thing. It was working yesterday, I have not modified that project either.
Why is the Model not ready? How do I fix this? Yes, I tried restarting both my laptop and iPad, no luck.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I have rewatched WWDC22 a few times, but I'm still not getting a full understanding of how to get a .mlmodel file from Create ML Components.
The banana-ripeness example is cool, but what needs to be added to actually output a .mlmodel? Is there a full sample project somewhere for this type of modular setup?
The code is from https://developer.apple.com/videos/play/wwdc2022/10019
import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
    static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // File name example: banana-5.jpg
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ Float($0)! })

        let (training, validation) = data.randomSplit(by: 0.8)
        let transformer = try await estimator.fitted(to: training, validateOn: validation)
        try estimator.write(transformer, to: parametersURL)
        return transformer
    }
}
I have tried to run it in a macOS command-line app and in SwiftUI, but all I got as output was a package containing "pipeline.json", "parameters", "optimizer.json", and "optimizer".