How reliable are the models when used for comparison against reference values, such as a cholesterol test, to inform, for example, whether it is worth seeing a doctor?
I would like to use a Tool to attach simple blood-test data to the session, so that the model can analyze it and make a simple suggestion about whether it is necessary to see a doctor.
P.S.: This is with the local (on-device) model.
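To make this concrete, here is the rough shape I have in mind. Every name below is hypothetical, and returning a String assumes a recent beta where a tool's output can be any PromptRepresentable:

import FoundationModels

// Hypothetical sketch of a Tool that hands blood-test values to the
// on-device model; in a real app the data would come from HealthKit
// or a user-entered record.
struct BloodTestTool: Tool {
    let name = "bloodTestResults"
    let description = "Provides the user's latest blood test values."

    @Generable
    struct Arguments {
        @Guide(description: "The marker to look up, e.g. total cholesterol")
        let marker: String
    }

    func call(arguments: Arguments) async throws -> String {
        "Total cholesterol: 210 mg/dL (reference range: below 200 mg/dL)"
    }
}

// The session would then be created with this tool attached:
// let session = LanguageModelSession(tools: [BloodTestTool()])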
Hey,
I've been trying to write an AI agent for OpenAI's GPT-5, but using the @Generable Tool types from the FoundationModels framework, which is super awesome btw!
I'm having trouble implementing the tool calling, though. When I receive a tool call from the OpenAI API, I do the following:
1. Find the tool in my [any Tool] array via the tool name I get from the model:
if let tool = tools.first(where: { $0.name == functionCall.name }) {
    // ...
}
2. Parse the arguments of the tool call via GeneratedContent(json:):
let generatedContent = try GeneratedContent(json: functionCall.arguments)
3. Pass the tool and arguments to a function that calls tool.call(arguments:) and returns the tool's output type:
private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> T.Output {
    let arguments = try T.Arguments(generatedContent)
    return try await tool.call(arguments: arguments)
}
Up to this point, everything works as expected. However, the tool's output type is any PromptRepresentable, and I have no idea how to turn that into something I can encode and send back to the model. I assumed there might be a way to turn it into a GeneratedContent, but there is no fitting initializer.
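The best workaround I've come up with so far is to special-case the concrete types I know about and fall back to a plain description; clearly not a general solution, and the jsonString property is my assumption about how to round-trip a GeneratedContent:

// Workaround sketch: downcast to types I can serialize, otherwise
// fall back to the textual description.
private func encodeOutput(_ output: any PromptRepresentable) -> String {
    if let content = output as? GeneratedContent {
        return content.jsonString  // assuming jsonString round-trips the content
    }
    if let string = output as? String {
        return string
    }
    return String(describing: output)
}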
Am I missing something, or is this not supported? Without a way to return the output to an external provider, it wouldn't really be possible to use the FoundationModels Tool type, I think. That would be unfortunate, because it's implemented so elegantly.
Thanks!
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hello Apple Team,
Thank you for the recent Group Lab and for your continued work on advancing Xcode and developer tools.
I’d like to submit a feature request:
Are there any plans to introduce support for Agentic AI Mode (MCP protocol) in future versions of iOS or Xcode?
As developer tools evolve toward more intelligent and context-aware environments, the integration of agentic AI capabilities could significantly enhance productivity and unlock new creative workflows.
Looking forward to your consideration, and thank you again for the excellent session.
Best regards
Hi,
One can configure the languages of a (VN)RecognizeTextRequest in either of two ways (sketched below):
.automatic: the language is detected automatically
a specific language, say Spanish
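For reference, this is how I set up the two configurations, using the classic VNRecognizeTextRequest API (the newer RecognizeTextRequest struct exposes equivalent options):

import Vision

let request = VNRecognizeTextRequest { request, _ in
    let observations = request.results as? [VNRecognizedTextObservation] ?? []
    for observation in observations {
        print(observation.topCandidates(1).first?.string ?? "")
    }
}

// Configuration 1: automatic language detection.
request.automaticallyDetectsLanguage = true

// Configuration 2: pin the request to Spanish.
request.automaticallyDetectsLanguage = false
request.recognitionLanguages = ["es-ES"]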
If the request is configured with .automatic and successfully detects Spanish, will the results be exactly equivalent compared to a request made with Spanish set as language?
I could not find any information about this, and this is very important for the core architecture of my app.
Thanks!
I'm using Xcode 26 Beta 5 and get errors on any generation I try, however harmless, when wrapped in the #Playground macro.
import FoundationModels
import Playgrounds

#Playground {
    let session = LanguageModelSession()
    let topic = "pandas"
    let prompt = "Write a safe and respectful story about \(topic)."
    let response = try await session.respond(to: prompt)
}
I'm not seeing any issues when running on the simulator or a device. Is anyone else seeing this, or does anyone have any ideas?
Thanks for any help!
Version 26.0 beta 5 (17A5295f)
macOS 26.0 Beta (25A5316i)
Topic: Machine Learning & AI
SubTopic: Foundation Models
When using MPS to accelerate machine learning, are there sometimes compatibility issues with torch?
I'm developing a macOS application using the FoundationModels framework
(LanguageModelSession) and encountering issues with the content sanitizer
blocking legitimate text input.
**Issue Description:**
The content sanitizer is flagging text strings that contain certain
substrings, even when they represent legitimate technical content. For
example:
F_SEEL_SEX1S.wav (sE Electronics SEX1S microphone model)
Technical product identifiers
Serial numbers and version codes
**Broader Concern:**
The content sanitizer appears to apply restrictions that are inappropriate for user-owned content. Even if a filename were something like "human sex.wav", users should have the right to process their own legitimate files on their own devices without content-filtering interference.
**Error Messages:**
SensitiveContentSettings: Sanitizer model found unsafe content in value
FoundationModels.LanguageModelSession.GenerationError error 2
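For context, this is roughly how the error surfaces for me; interpreting "error 2" as a guardrail violation is my own reading:

import FoundationModels

// Minimal reproduction sketch.
let session = LanguageModelSession()
do {
    let response = try await session.respond(
        to: "Describe the audio file F_SEEL_SEX1S.wav"
    )
    print(response.content)
} catch let error as LanguageModelSession.GenerationError {
    // The sanitizer surfaces here as a GenerationError.
    print("Generation failed: \(error)")
}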
**Questions:**
1. Is there a way to disable content sanitization for processing user-owned content?
2. What's the recommended approach for applications that need to handle arbitrary user text?
3. Are there APIs to process personal content without filtering restrictions?
**Environment:**
macOS 26.0
FoundationModels framework
LanguageModelSession
Any guidance would be appreciated.
Hello,
We have been encountering a persistent crash in our application, which is deployed exclusively on iPad devices. The crash occurs in the following code block:
let requestHandler = ImageRequestHandler(paddedImage)
var request = CoreMLRequest(model: model)
request.cropAndScaleAction = .scaleToFit
let results = try await requestHandler.perform(request)
The client using this code is wrapped inside an actor, following Swift concurrency principles.
The issue has been consistently reproduced across multiple iPadOS versions, including:
iPadOS 18.4.0
iPadOS 18.4.1
iPadOS 18.5.0
This is the crash log:
Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
0 libobjc.A.dylib 0x7b98 objc_retain + 16
1 libobjc.A.dylib 0x7b98 objc_retain_x0 + 16
2 libobjc.A.dylib 0xbf18 objc_getProperty + 100
3 Vision 0x326300 -[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148
4 Vision 0x3273b0 -[VNCoreMLTransformer processRegionOfInterest:croppedPixelBuffer:options:qosClass:warningRecorder:error:progressHandler:] + 748
5 Vision 0x2ccdcc __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_5 + 132
6 Vision 0x14600 VNExecuteBlock + 80
7 Vision 0x14580 __76+[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:]_block_invoke + 56
8 libdispatch.dylib 0x6c98 _dispatch_block_sync_invoke + 240
9 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
10 libdispatch.dylib 0x11728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 libdispatch.dylib 0x7fac _dispatch_sync_block_with_privdata + 452
12 Vision 0x14110 -[VNControlledCapacityTasksQueue dispatchSyncByPreservingQueueCapacity:] + 60
13 Vision 0x13ffc +[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:] + 324
14 Vision 0x2ccc80 __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_4 + 336
15 Vision 0x14600 VNExecuteBlock + 80
16 Vision 0x2cc98c __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_3 + 256
17 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x6ab0 _dispatch_block_invoke_direct + 284
19 Vision 0x2cc454 -[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 632
20 Vision 0x2cd14c __111-[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke + 124
21 Vision 0x14600 VNExecuteBlock + 80
22 Vision 0x2ccfbc -[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 340
23 Vision 0x125410 __swift_memcpy112_8 + 4852
24 libswift_Concurrency.dylib 0x5c134 swift::runJobInEstablishedExecutorContext(swift::Job*) + 292
25 libswift_Concurrency.dylib 0x5d5c8 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 156
26 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
27 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
28 libsystem_pthread.dylib 0x9d0 _pthread_wqthread + 232
29 libsystem_pthread.dylib 0xaac start_wqthread + 8
We found a similar issue at https://developer.apple.com/forums/thread/770771, but the crash logs are quite different, so we believe this warrants further investigation to better understand the root cause and potential mitigation strategies.
Please let us know if any additional information would help diagnose this issue.
I'm a bit new to LLMs and to Foundation Models. My understanding is that there is a token limit of around 4K.
I want to process the contents of files which may be quite large. I first tried going the Tool route, but that didn't work out, so I then tried manually chunking the text to keep things under the limit.
It mostly works, except that every now and then a request still exceeds the limit. This happens even when the chunks are under 100 characters. The instructions themselves are about 500 characters, so each prompt is well below 1,000 characters all told, which, in my limited understanding, should not come anywhere near 4K tokens.
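For reference, this is roughly the shape of my chunking loop (simplified; fileContents is a stand-in for my real input). One thing I'm unsure about: the session transcript presumably grows with every turn, so earlier chunks may still count toward the context window.

import FoundationModels

// Simplified chunking loop (fileContents is hypothetical).
let session = LanguageModelSession(
    instructions: "Summarize each passage in one sentence."
)
let chunks = fileContents.split(separator: "\n\n").map(String.init)
for chunk in chunks {
    // Note: every respond(to:) turn is appended to the session
    // transcript, so the effective context keeps growing.
    let response = try await session.respond(to: chunk)
    print(response.content)
}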
Any ideas on what is going on here?
Topic: Machine Learning & AI
SubTopic: Foundation Models
My sample app has been working with the following code:
func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}
However, in 26 beta 5 ToolOutput is no longer available. Please advise on what has changed.
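From poking at the beta 5 headers, it looks like Tool now uses an associated Output type (any PromptRepresentable) instead of ToolOutput, so something like the following compiles for me. Treat this as my guess, not a confirmed migration path:

// Guess at the beta 5 shape: return a PromptRepresentable value
// (GeneratedContent, or even a plain String) directly.
func call(arguments: Arguments) async throws -> GeneratedContent {
    let temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    return GeneratedContent(temp)
}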
Hello,
My app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app.
When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:
On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.
This exactly matches the hardware requirements for Apple Intelligence.
However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated, and my app is treated as a game.
My questions are:
1. Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
2. If not, is there another way to restrict installation to devices meeting Apple Intelligence requirements?
3. Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?
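In the meantime, this is the runtime check I'm using; it cannot gate App Store installation, but it lets the app degrade gracefully on unsupported devices:

import FoundationModels

// Runtime availability check for Apple Intelligence / Foundation Models.
switch SystemLanguageModel.default.availability {
case .available:
    break // Foundation Models are ready to use.
case .unavailable(let reason):
    // e.g. device not eligible, Apple Intelligence not enabled.
    print("Apple Intelligence unavailable: \(reason)")
}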
Thanks in advance for your help.
I have a Generable type with many elements, and I am using stream() to incrementally process the partially generated output (Generable.PartiallyGenerated?).
At the end, I want to pass the final version (not the partially generated one) to another function.
I cannot seem to find a good way to convert a MyGenerable.PartiallyGenerated into a MyGenerable.
Am I missing some functionality in the APIs?
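The closest thing I've found is collecting the stream's final response; I'm not certain collect() is the intended path here, so corrections welcome:

// Sketch: stream snapshots for the UI, then take the final, fully
// generated value from collect() (my assumption about the intended API).
let stream = session.streamResponse(generating: MyGenerable.self) {
    "Generate the data"
}

// Incremental UI updates would consume the stream:
// for try await partial in stream { ... }

// Final, fully generated value:
let final: MyGenerable = try await stream.collect().content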
I was just wondering why the Foundation Models documentation is no longer available, thanks!
https://developer.apple.com/documentation/FoundationModels
Topic: Machine Learning & AI
SubTopic: Foundation Models
Problem:
We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."
TAMM v0.2.0 contains export/constants.py with: BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Findings:
Adapter Export Process:
In export_fmadapter.py:
def write_metadata(...):
    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # hardcoded value
The Compatibility Check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model
- If they don't match: compatibleAdapterNotFound error
- The error doesn't reveal the expected signature
Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with updated signature?
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Core ML, Create ML, tensorflow-metal, Apple Intelligence
I have encountered a few cases where the answer gets "stuck" (I am now on beta 6).
This is an example.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I'm not finding a lot on the Swift Assist technology announced at WWDC 2024. Does anyone know its latest status? Also, I currently use OpenAI's macOS app and its 'Work With...' functionality to assist with Xcode development. This is okay, and it certainly saves copying code back and forth, but it seems like AI should be able to do a lot more to help with Xcode app development.
I see what people are doing with AI in Visual Studio, Cline, Cursor, and other IDEs and tools, and I feel a bit left out working in Xcode. Please let me know if there are AI tools or techniques you use to help with your Xcode projects.
Thanks in advance!
Hi Apple product owners.
I am missing a unified concept that ties together the use cases for mail categories and spam filtering in the Mail app on Mac.
I need a recommendation on how to use categories in combination with the spam filter to get the most out of both.
I was looking at the use cases of the two feature areas in order to figure out how to organize my mail with as much automation as possible before I additionally start creating smart folders.
Where would you recommend I get this information? I don't want to guess or read a lot of forum contributions that are themselves based on guesses.
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
From tensorflow-metal example:
Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
I know that Apple silicon uses UMA, and that memory copies are typical of CUDA, but wouldn't the GPU memory still be faster overall?
I have an iMac Pro with a Radeon Pro Vega 64 16 GB GPU and an Intel iMac with a Radeon Pro 5700 8 GB GPU.
But using tensorflow-metal is still WAY faster than using the CPUs. Thanks for that. I am surprised the 5700 is twice as fast as the Vega though.
Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage

@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
Hi everyone! 👋
I'm working on a C++ project using TensorFlow Lite and was wondering if anyone has a prebuilt TensorFlow Lite C++ library (libtensorflowlite) for macOS (Apple Silicon M1/M2) that they’d be willing to share.
I’m looking specifically for the TensorFlow Lite C++ API — something that lets me use tflite::Interpreter, tflite::FlatBufferModel, etc. Building it from source using Bazel on macOS has been quite challenging and time-consuming, so a ready-to-use .dylib or .a build along with the required headers would be incredibly helpful.
TensorFlow Lite version: v2.18.0 preferred
Target: macOS arm64 (Apple Silicon)
What I need:
libtensorflowlite.dylib or .a
Corresponding headers (ideally organized in a clean include/ folder)
If you have one available or know where I can find a reliable prebuilt version, I’d be super grateful. Thanks in advance! 🙏