Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

Model Rate Limits?
Trying the Foundation Models framework: when I run several sessions in a loop, an error is thrown saying I'm hitting a rate limit. Are these rate limits documented? What's the best practice here? I'm trying to run the model against new content downloaded from a web service, where a single download might contain ~200 items. They're relatively small, but there can be that many that need to be processed in a loop.
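Since the limits aren't documented, one workaround is to process the items strictly one at a time and back off when a request fails. The sketch below assumes only that pattern; the function name, prompt wording, retry count, and delay values are illustrative, not an official best practice.

import FoundationModels

// Sketch: serial processing with simple backoff. Retry/delay values are guesses.
func summaries(for items: [String]) async -> [String] {
    var results: [String] = []
    for item in items {
        for attempt in 1...3 {
            do {
                // A fresh, short-lived session per item keeps each context small.
                let session = LanguageModelSession()
                let response = try await session.respond(to: "Summarize briefly: \(item)")
                results.append(response.content)
                break
            } catch {
                // Assumption: treat failures (including rate limiting) as transient and wait.
                try? await Task.sleep(for: .seconds(Double(attempt) * 2))
            }
        }
    }
    return results
}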
4
1
581
Jun ’25
The YOLO11 object detection model I exported to Core ML stopped working in the macOS 15.2 beta.
After updating to the macOS 15.2 beta, the YOLO11 object detection model I exported to Core ML outputs incorrect and abnormal bounding boxes. It also doesn't work in iOS apps built on a 15.2 Mac. The same model worked fine on macOS 14.1. When training a YOLO11 custom model in Python, exporting it to Core ML, and testing it in the preview tab of the mlpackage on macOS 15.2 and Xcode 16.0, the above result is obtained.
6
1
1.5k
Feb ’25
FoundationModels tool calling not working (iOS 26, beta 6)
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've then created a tool which, for these locations, finds their latitude/longitude on a map and populates that in the response. However, I cannot get the language model session to see/use my tool. I have code like this passing the tool to my prompt:

class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }
        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)
        let locationsModel = LocationsModels()
        do {
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing")
        }
    }
}

And then the tool that looks something like this:

import Foundation
import FoundationModels
import MapKit

struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )
        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate
        if let coordinate = coordinate {
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }
        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}

But trying a bunch of different prompts has not triggered the tool - instead, what appear to be totally random locations are filled in my resulting model, and at no point does a breakpoint hit my tool code. Has anybody successfully gotten a tool to be called?
2
1
476
Aug ’25
Xcode Beta 1 and FoundationModels access
I downloaded Xcode Beta 1 on my Mac (did not upgrade the OS). The target OS level of iOS 26 and the device simulator for iOS 26 is downloaded and selected as the target. When I try a simple Playground in Xcode (#Playground) I get a session error.

#Playground {
    let avail = SystemLanguageModel.default.availability
    if avail != .available {
        print("SystemLanguageModel not available")
        return
    }
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Create a recipe for apple pie")
    } catch {
        print(error)
    }
}

The error I get is: Asset com.apple.gm.safety_deny_input.foundation_models.framework.api not found in Model Catalog
Is there a way to test drive the FoundationModels code without upgrading to macOS 26?
1
1
363
Jun ’25
Xcode 26.1 RC ( RC1 ?) Apple Intelligence using GPT (with account or without) or Sonnet (via OpenRouter) much slower
I didn't run benchmarks before the update, but it seems at least 5x slower. Of course all the LLM work is on remote servers, so it's non-intuitive to me that this should be happening. I updated macOS and Xcode to 26.1 RC at the same time, so I can't even say whether I think it is macOS or Xcode. Before the update, the progress indicator for each piece of code might seem to get stuck at the very end, and toggling between the Navigators and the Coding Assistant in the Xcode UI seemed to refresh the UI and confirm coding was complete... but now progress races to 50%, then often gets stuck at 75%... well earlier than it used to get stuck. And it seems like something is legitimately processing, not just a UI glitch. I'm wondering if this is somehow tied to the visual rendering of the code in the little white window? CMD-TAB into Xcode seems laggy. Xcode is pinning a CPU core. Why, when this is all remote LLM work? MacBook Pro 2021 M1, 64GB RAM. Went from 26.0.1 to 26.1 RC. Didn't touch any of the betas until RC1.
1
1
296
Oct ’25
Core ML model decryption on Intel chips
About the Core ML model encryption mentioned in: https://developer.apple.com/documentation/coreml/encrypting-a-model-in-your-app
When I encrypt the model, it loads perfectly on machines with an M-series chip. On the other hand, when I test the executable on an Intel-chip MacBook, there is an error: Error Domain=com.apple.CoreML Code=9 "Operation not supported on this platform." UserInfo={NSLocalizedDescription=Operation not supported on this platform.}
The Intel test machine is a 2019 MacBook Air with CPU: Intel i5-8210Y, OS: 14.7.6 (23H626), with an Apple T2 Security Chip. The encrypted model does load on M2 and M4 MacBook Airs. If the model is NOT encrypted, it also loads on the Intel test machine. I did not find anything in the Core ML documentation stating whether encryption/decryption supports Intel chips. May I check whether decryption indeed does NOT support Intel chips?
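For reference, one way to surface this at load time rather than deeper in the app is to load the compiled model asynchronously and inspect the thrown error. This is only a sketch and does not answer the Intel question; the resource name is a placeholder.

import CoreML

// Sketch: load an encrypted, compiled model and report the decryption error, if any.
func loadEncryptedModel() async -> MLModel? {
    guard let url = Bundle.main.url(forResource: "MyEncryptedModel", withExtension: "mlmodelc") else {
        return nil
    }
    do {
        return try await MLModel.load(contentsOf: url, configuration: MLModelConfiguration())
    } catch {
        // On the Intel machine described above, this is where
        // com.apple.CoreML Code=9 "Operation not supported on this platform." surfaces.
        print("Model load failed: \(error)")
        return nil
    }
}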
2
1
339
6d
FoundationModels not supported on Mac Catalyst?
I'd love to add a feature based on FoundationModels to the Mac Catalyst version of my iOS app. Unfortunately I get an error when importing FoundationModels: No such module 'FoundationModels'. Documentation says Mac Catalyst is supported: https://developer.apple.com/documentation/foundationmodels I can create iOS builds using the FoundationModels framework without issues. Hope this will be fixed soon!
Config:
Xcode 26.0 beta (17A5241e)
macOS 26.0 Beta (25A5279m)
15-inch, M4, 2025 MacBook Air
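Until the SDK issue is resolved, one stopgap (a sketch, not an official workaround) is to gate the feature at compile time so the Catalyst target still builds; the function name is illustrative.

#if canImport(FoundationModels)
import FoundationModels
#endif

// Sketch: compile the feature only where the framework is present.
func generateTagline(for topic: String) async -> String? {
#if canImport(FoundationModels)
    guard SystemLanguageModel.default.availability == .available else { return nil }
    let session = LanguageModelSession()
    return try? await session.respond(to: "Write a short tagline about \(topic)").content
#else
    return nil // FoundationModels unavailable in this build (e.g. Mac Catalyst with this beta SDK).
#endif
}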
2
1
309
Jun ’25
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
3
1
1.5k
5d
Overly strict foundation model rate limit when used in app extension
I am calling into an app extension from a Safari Web Extension (sendNativeMessage, which in turn results in a call to NSExtensionRequestHandling's beginRequest). My Safari extension aims to make use of the new foundation models for some of the features it provides. In my testing, I hit the rate limit by sending 4 requests, waiting 30 seconds between each. This makes the FoundationModels framework (which would otherwise serve my use case perfectly well) unusable in this context, because the model is called in response to user input, and this rate of user input is perfectly plausible in a real world scenario. The error thrown as a result of the rate limit is "Safety guardrail was triggered after consecutive failures during streaming.", but looking at the system logs in Console.app shows the rate limit as the real culprit.
My suggestions:
Please introduce sensible rate limits for app extensions, through an entitlement if need be. If it is rate limited to 1 request per every couple of seconds, that would already fix the issue for me.
Please document the rate limit.
Please make the thrown error reflect that it is the result of a rate limit and not a generic guardrail violation.
IMPORTANT: please indicate in the thrown error when it is safe to try again.
Filed a feedback here: FB18332004
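As a stopgap while the limit is undocumented, the extension could pace its own calls so user-triggered requests never arrive faster than some minimum interval. This is only a sketch under that assumption; the 5-second gap and the type name are made up, and it does not remove the underlying limit.

import Foundation
import FoundationModels

// Sketch: an actor that serializes model calls and enforces a minimum gap between them.
actor PacedModelClient {
    private var lastCall = Date.distantPast
    private let minimumGap: TimeInterval = 5 // Guess; tune once the real limit is documented.

    func respond(to prompt: String) async throws -> String {
        let wait = minimumGap - Date().timeIntervalSince(lastCall)
        if wait > 0 {
            try await Task.sleep(for: .seconds(wait))
        }
        lastCall = Date()
        let session = LanguageModelSession()
        return try await session.respond(to: prompt).content
    }
}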
3
1
214
Jun ’25
iOS 26 beta breaking my model
I just recently updated to iOS 26 beta (23A5336a) to test an app I am developing that runs an MLModel loaded from a .mlmodelc file. On the current iOS version 18.6.2 the model runs as expected with no issues. However, on iOS 26 I am now getting an error when trying to perform an inference with the model, where I pass a camera frame into it. Below is the error I am seeing when I attempt to run an inference. At the bottom it says "Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model". Does this indicate I need to convert my model or something? I don't understand, since it runs as normal on iOS 18. Any help getting this to run again would be greatly appreciated. Thank you.
processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3 :
[Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error}
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
1
0
1.2k
Sep ’25
Is MCP (Model Context Protocol) supported on iOS/macOS?
Hi team, I’m exploring the Model Context Protocol (MCP), which is used to connect LLMs/AI agents to external tools in a structured way. It's becoming a common standard for automation and agent workflows. Before I go deeper, I want to confirm: Does Apple currently provide any official MCP server, API surface, or SDK on iOS/macOS? From what I see, only third-party MCP servers exist for iOS simulators/devices, and Apple’s own frameworks (Foundation Models, Apple Intelligence) don’t expose MCP endpoints. Is there any chance Apple might introduce MCP support—or publish recommended patterns for safely integrating MCP inside apps or developer tools? I would like to see if I can share my app's data to the MCP server to enable other third-party apps/services to integrate easily
1
1
441
Dec ’25
DockKit .track() has no effect using VNDetectFaceRectanglesRequest
Hi, I'm testing DockKit with a very simple setup: I use VNDetectFaceRectanglesRequest to detect a face and then call dockAccessory.track(...) using the detected bounding box. The stand is correctly docked (state == .docked) and dockAccessory is valid. I'm calling .track(...) with a single observation and valid CameraInformation (including size, device, orientation, etc.). No errors are thrown. To monitor this, I added a logging utility – track(...) is being called 10–30 times per second, as recommended in the documentation. However: the stand does not move at all. There is no visible reaction to the tracking calls. Is there anything I'm missing or doing wrong? Is VNDetectFaceRectanglesRequest supported for DockKit tracking, or are there hidden requirements? Would really appreciate any help or pointers – thanks! That's my complete code:

extension VideoFeedViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        detectFace(image: frame)

        func detectFace(image: CVPixelBuffer) {
            let faceDetectionRequest = VNDetectFaceRectanglesRequest() { vnRequest, error in
                guard let results = vnRequest.results as? [VNFaceObservation] else { return }
                guard let observation = results.first else { return }
                let boundingBoxHeight = observation.boundingBox.size.height * 100
                #if canImport(DockKit)
                if let dockAccessory = self.dockAccessory {
                    Task {
                        try? await trackRider(
                            observation.boundingBox,
                            dockAccessory,
                            frame,
                            sampleBuffer
                        )
                    }
                }
                #endif
            }
            let imageResultHandler = VNImageRequestHandler(cvPixelBuffer: image, orientation: .up)
            try? imageResultHandler.perform([faceDetectionRequest])

            func combineBoundingBoxes(_ box1: CGRect, _ box2: CGRect) -> CGRect {
                let minX = min(box1.minX, box2.minX)
                let minY = min(box1.minY, box2.minY)
                let maxX = max(box1.maxX, box2.maxX)
                let maxY = max(box1.maxY, box2.maxY)
                let combinedWidth = maxX - minX
                let combinedHeight = maxY - minY
                return CGRect(x: minX, y: minY, width: combinedWidth, height: combinedHeight)
            }

            #if canImport(DockKit)
            func trackObservation(_ boundingBox: CGRect, _ dockAccessory: DockAccessory, _ pixelBuffer: CVPixelBuffer, _ cmSampelBuffer: CMSampleBuffer) throws {
                // Count the call
                TrackMonitor.shared.trackCalled()

                let invertedBoundingBox = CGRect(
                    x: boundingBox.origin.x,
                    y: 1.0 - boundingBox.origin.y - boundingBox.height,
                    width: boundingBox.width,
                    height: boundingBox.height
                )

                guard let device = captureDevice else {
                    fatalError("Camera not available")
                }

                let size = CGSize(width: Double(CVPixelBufferGetWidth(pixelBuffer)), height: Double(CVPixelBufferGetHeight(pixelBuffer)))

                var cameraIntrinsics: matrix_float3x3? = nil
                if let cameraIntrinsicsUnwrapped = CMGetAttachment(
                    sampleBuffer,
                    key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                    attachmentModeOut: nil
                ) as? Data {
                    cameraIntrinsics = cameraIntrinsicsUnwrapped.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
                }

                Task {
                    let orientation = getCameraOrientation()
                    let cameraInfo = DockAccessory.CameraInformation(
                        captureDevice: device.deviceType,
                        cameraPosition: device.position,
                        orientation: orientation,
                        cameraIntrinsics: cameraIntrinsics,
                        referenceDimensions: size
                    )
                    let observation = DockAccessory.Observation(
                        identifier: 0,
                        type: .object,
                        rect: invertedBoundingBox
                    )
                    let observations = [observation]
                    guard let image = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                        print("no image")
                        return
                    }
                    do {
                        try await dockAccessory.track(observations, cameraInformation: cameraInfo)
                    } catch {
                        print(error)
                    }
                }
            }
            #endif

            func clearDrawings() {
                boundingBoxLayer?.removeFromSuperlayer()
                boundingBoxSizeLayer?.removeFromSuperlayer()
            }
        }
    }
}

@MainActor
private func getCameraOrientation() -> DockAccessory.CameraOrientation {
    switch UIDevice.current.orientation {
    case .portrait: return .portrait
    case .portraitUpsideDown: return .portraitUpsideDown
    case .landscapeRight: return .landscapeRight
    case .landscapeLeft: return .landscapeLeft
    case .faceDown: return .faceDown
    case .faceUp: return .faceUp
    default: return .corrected
    }
}
1
1
391
Dec ’25
Core ML Model performance far lower on iOS 17 vs iOS 16 (iOS 17 not using Neural Engine)
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16, but I'm posting it here just in case.
TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.
Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and iPhone 13 Pro (both use the A15 Bionic).
iOS 16 - iPhone SE 3rd Gen (A15 Bionic): iOS 16 uses the ANE and results in fast prediction, load and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, thus the prediction, load and compilation times are all slower.
Code to reproduce: The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS (e.g. macOS version or Linux type): Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 2.0

Additional context: This happens across "neuralnetwork" and "mlprogram" type models; neither uses the ANE on iOS 17, but both use the ANE on iOS 16. If anyone has a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong with the exporting of models for iOS 17+, please let me know. Thank you!
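When investigating this kind of regression on-device, it can help to time the same prediction under different compute-unit settings in Swift; a large gap between .all and .cpuAndGPU (or .cpuOnly) hints at whether the ANE is actually being used. The helper below is only a sketch; the model URL and input are placeholders.

import CoreML

// Sketch: time one prediction with a given compute-unit setting.
func predictionTime(modelURL: URL, units: MLComputeUnits, input: MLFeatureProvider) throws -> TimeInterval {
    let config = MLModelConfiguration()
    config.computeUnits = units // .all, .cpuAndGPU, or .cpuOnly
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    let start = Date()
    _ = try model.prediction(from: input)
    return Date().timeIntervalSince(start)
}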
1
1
1.9k
Mar ’25
What is the proper way to integrate a CoreML app into Xcode
Hi, I have been trying to integrate a Core ML model into Xcode. The model was made using TensorFlow layers. I have included both the model info and a link to the app repository. I am mainly just really confused about why it's not working. It seems to only be printing the result for case 1 (there are 4 cases labeled case 0, case 1, case 2, and case 3). If someone could help work me through this error that would be great! Here is the link to the repository: https://github.com/ShivenKhurana1/Detect-to-Protect-App The file with the model code is called SecondView.swift and here is the model info:
Input: conv2d_input -> image (color 224x224)
Output: Identity -> MultiArray (Float32 1x4)
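Since the model's output is a raw Float32 1x4 MultiArray rather than a classifier dictionary, the prediction has to be read as a feature value and reduced with an argmax; always seeing "case 1" often means a fixed index is being read instead. A sketch of that pattern (the model URL and scaling option are assumptions, not taken from the repository):

import CoreML
import Vision

// Sketch: run the model on an image and return the index (0-3) of the highest score.
func predictCase(for image: CGImage, modelURL: URL) throws -> Int? {
    let vnModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
    let request = VNCoreMLRequest(model: vnModel)
    request.imageCropAndScaleOption = .scaleFill // model expects 224x224 color input
    try VNImageRequestHandler(cgImage: image).perform([request])

    guard let observation = request.results?.first as? VNCoreMLFeatureValueObservation,
          let scores = observation.featureValue.multiArrayValue else { return nil }

    // Argmax over the 4 class scores.
    var best = 0
    for i in 1..<scores.count where scores[i].doubleValue > scores[best].doubleValue {
        best = i
    }
    return best
}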
1
1
211
Apr ’25
A Summary of the WWDC25 Group Lab - Machine Learning and AI Frameworks
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Machine Learning and AI Frameworks.

What are you most excited about in the Foundation Models framework?
The Foundation Models framework provides access to an on-device Large Language Model (LLM), enabling entirely on-device processing for intelligent features. This allows you to build features such as personalized search suggestions and dynamic NPC generation in games. The combination of guided generation and streaming capabilities is particularly exciting for creating delightful animations and features with reliable output. The seamless integration with SwiftUI and the new design material Liquid Glass is also a major advantage.

When should I still bring my own LLM via Core ML?
It's generally recommended to first explore Apple's built-in system models and APIs, including the Foundation Models framework, as they are highly optimized for Apple devices and cover a wide range of use cases. However, Core ML is still valuable if you need more control or choice over the specific model being deployed, such as customizing existing system models or augmenting prompts. Core ML provides the tools to get these models on-device, but you are responsible for model distribution and updates.

Should I migrate PyTorch code to MLX?
MLX is an open-source, general-purpose machine learning framework designed for Apple Silicon from the ground up. It offers a familiar API, similar to PyTorch, and supports C, C++, Python, and Swift. MLX emphasizes unified memory, a key feature of Apple Silicon hardware, which can improve performance. It's recommended to try MLX and see if its programming model and features better suit your application's needs. MLX shines when working with state-of-the-art, larger models.

Can I test Foundation Models in the Xcode simulator or on device?
Yes, you can use the Xcode simulator to test Foundation Models use cases. However, your Mac must be running macOS Tahoe. You can test on a physical iPhone running iOS 26 by connecting it to your Mac and running Playgrounds or live previews directly on the device.

Which on-device models will be supported? Any open source models?
The Foundation Models framework currently supports Apple's first-party models only. This allows for platform-wide optimizations, improving battery life and reducing latency. While Core ML can be used to integrate open-source models, it's generally recommended to first explore the built-in system models and APIs provided by Apple, including those in the Vision, Natural Language, and Speech frameworks, as they are highly optimized for Apple devices. For frontier models, MLX can run very large models.

How often will the Foundation Model be updated? How do we test for stability when the model is updated?
The Foundation Model will be updated in sync with operating system updates. You can test your app against new model versions during the beta period by downloading the beta OS and running your app. It is highly recommended to create an "eval set" of golden prompts and responses to evaluate the performance of your features as the model changes or as you tweak your prompts. Report any unsatisfactory or satisfactory cases using Feedback Assistant.

Which on-device model/API can I use to extract text data from images such as nutrition labels, ingredient lists, cashier receipts, etc.?
The Vision framework offers RecognizeDocumentRequest, which is specifically designed for these use cases. It not only recognizes text in images but also provides the structure of the document, such as rows in a receipt or the layout of a nutrition label. It can also identify data like phone numbers, addresses, and prices.

What is the context window for the model? What are max tokens in and max tokens out?
The context window for the Foundation Model is 4,096 tokens. The split between input and output tokens is flexible. For example, if you input 4,000 tokens, you'll have 96 tokens remaining for the output. The API takes in text, converting it to tokens under the hood. When estimating token count, a good rule of thumb is 3-4 characters per token for languages like English, and 1 character per token for languages like Japanese or Chinese. Handle potential errors gracefully by asking for shorter prompts or starting a new session if the token limit is exceeded.

Is there a rate limit for the Foundation Models API that is limited by power or temperature conditions on the iPhone?
Yes, there are rate limits, particularly when your app is in the background. A budget is allocated for background app usage, but exceeding it will result in rate-limiting errors. In the foreground, there is no rate limit unless the device is under heavy load (e.g., camera open, game mode). The system dynamically balances performance, battery life, and thermal conditions, which can affect the token throughput. Use appropriate quality of service settings for your tasks (e.g., background priority for background work) to help the system manage resources effectively.

Do the foundation models support languages other than English?
Yes, the on-device Foundation Model is multilingual and supports all languages supported by Apple Intelligence. To get the model to output in a specific language, prompt it with instructions indicating the user's preferred language using the locale API (e.g., "The user's preferred language is en-US"). Putting the instructions in English, but then putting the user prompt in the desired output language, is a recommended practice.

Are larger server-based models available through Foundation Models?
No, the Foundation Models API currently only provides access to the on-device Large Language Model at the core of Apple Intelligence. It does not support server-side models. On-device models are preferred for privacy and for performance reasons.

Is it possible to run Retrieval-Augmented Generation (RAG) using the Foundation Models framework?
Yes, it is possible to run RAG on-device, but the Foundation Models framework does not include a built-in embedding model. You'll need to use a separate database to store vectors and implement nearest neighbor or cosine distance searches. The Natural Language framework offers simple word and sentence embeddings that can be used. Consider using a combination of Foundation Models and Core ML, using Core ML for your embedding model.
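To make the RAG answer above concrete, here is a rough sketch of that pattern: rank stored text chunks with the Natural Language sentence embedding, then pass the closest ones to the on-device model. The chunk store, the "top 3" cutoff, and the prompt wording are all illustrative.

import NaturalLanguage
import FoundationModels

// Sketch: retrieval with NLEmbedding, generation with the on-device Foundation Model.
func answer(_ question: String, chunks: [String]) async throws -> String {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else {
        throw CocoaError(.featureUnsupported)
    }
    // Smaller cosine distance = more similar; keep the three closest chunks.
    let context = chunks
        .sorted { embedding.distance(between: question, and: $0) < embedding.distance(between: question, and: $1) }
        .prefix(3)
        .joined(separator: "\n")

    let session = LanguageModelSession()
    let prompt = """
    Answer the question using only this context:
    \(context)

    Question: \(question)
    """
    return try await session.respond(to: prompt).content
}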
1
0
1.3k
Jun ’25
FoundationModels guardrailViolation on Beta 3
Hello everybody! I'm encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn't occur on Beta 1 or Beta 2 using the same codebase.
Reproduction context: I'm developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic. To isolate the issue, I opened the official WWDC sample project from "Adding intelligent app features with generative models" and the same guardrailViolation error appeared without any modifications.
Simplified working example: I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:

import Foundation
import Playgrounds
import FoundationModels

@Generable
struct GeneableLandmark {
    @Guide(description: "Name of the landmark to visit")
    var name: String
}

final class LandmarkSuggestionGenerator {
    var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
    private var session: LanguageModelSession

    init() {
        self.session = LanguageModelSession(
            instructions: Instructions {
                """
                generate a list of landmarks to visit
                """
            }
        )
    }

    func createLandmarkSuggestion(location: String) async throws {
        let stream = session.streamResponse(
            generating: GeneableLandmark.self,
            options: GenerationOptions(sampling: .greedy),
            includeSchemaInPrompt: false
        ) {
            """
            Generate a list of landmarks to viist in \(location)
            """
        }
        for try await partialResponse in stream {
            landmarkSuggestion = partialResponse
        }
    }
}

#Playground {
    let generator = LandmarkSuggestionGenerator()
    Task {
        do {
            try await generator.createLandmarkSuggestion(location: "New york")
            if let suggestion = generator.landmarkSuggestion {
                print("Suggested landmark: \(suggestion)")
            } else {
                print("No suggestion generated.")
            }
        } catch {
            print("Error generating landmark suggestion: \(error)")
        }
    }
}

But as soon as I use the sample ItineraryPlanner:

#Playground {
    // Example landmark for demonstration
    let exampleLandmark = Landmark(
        id: 1,
        name: "San Francisco",
        continent: "North America",
        description: "A vibrant city by the bay known for the Golden Gate Bridge.",
        shortDescription: "Iconic Californian city.",
        latitude: 37.7749,
        longitude: -122.4194,
        span: 0.2,
        placeID: nil
    )
    let planner = ItineraryPlanner(landmark: exampleLandmark)
    Task {
        do {
            try await planner.suggestItinerary(dayCount: 3)
            if let itinerary = planner.itinerary {
                print("Suggested itinerary: \(itinerary)")
            } else {
                print("No itinerary generated.")
            }
        } catch {
            print("Error generating itinerary: \(error)")
        }
    }
}

the error pops up:

Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))

Based on my tests:
The error may not be tied to structure complexity (since more nested structures work).
The issue may stem from the tools or prompt content used inside the ItineraryPlanner.
The guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas.
Thank you in advance for your help. Let me know if more details or reproducible code samples are needed - I'm happy to provide them.
Best, Sasha Morozov
2
1
410
Jul ’25
Tone, Sentiment, language analysis on iPhone - Ideas
Hi everyone, I'm exploring ideas around on-device analysis of user typing behavior on iPhone, and I'd love input from others who've worked in this area or thought about similar problems.
Conceptually, I'm interested in things like:
High-level sentiment or tone inferred from what a user types over time, using ML models
Identifying a user's most important or frequent topics over a recent window (e.g., "last week")
Aggregated insights rather than raw text (privacy-preserving summaries: e.g., your typo rate by hour to infer highly efficient time slots, or a "take-a-break" warning when typing errors increase)
I understand the significant privacy restrictions around keyboard input on iOS, especially for third-party keyboards and system text fields. I'm not trying to bypass those constraints - rather, I'm curious about what's realistically possible within Apple's frameworks and policies. (For instance, Grammarly as a correction tool includes some information about tone.)
Questions I'm thinking through (see the sketch after this post for the kind of scoped, on-device analysis I mean):
Are there any recommended approaches for on-device text analysis that don't rely on capturing raw keystrokes?
Has anyone used NLP / Core ML / Natural Language successfully for similar summarization or sentiment tasks, scoped only to user-explicit input?
For custom keyboards, what kinds of derived or transient signals (if any) are acceptable to process and summarize locally?
Any design patterns that balance usefulness with Apple's privacy expectations?
If you've built something adjacent - journaling, writing analytics, well-being apps, etc. - I'd appreciate hearing what worked, what didn't, and what Apple reviewers were comfortable with. Thanks in advance for any ideas or references 🙏
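On the sentiment question specifically, the Natural Language framework can score explicit user text entirely on-device; a tiny sketch of the aggregate-only idea follows (the "keep only aggregates" policy is just an illustration).

import NaturalLanguage

// Sketch: score a piece of user-provided text; roughly -1 (negative) ... +1 (positive).
func sentimentScore(for text: String) -> Double {
    let tagger = NLTagger(tagSchemes: [.sentimentScore])
    tagger.string = text
    let (tag, _) = tagger.tag(at: text.startIndex, unit: .paragraph, scheme: .sentimentScore)
    return Double(tag?.rawValue ?? "0") ?? 0
}

// Store only aggregates (e.g. a daily average of these scores), never the raw text.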
1
1
242
13h
Is Image Playground on-device + Private Cloud?
Apple's Image Playground primarily performs image generation on-device, but can use secure Private Cloud Compute for more complex requests that require larger models.
Private Cloud Compute (PCC): For more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud:
Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment.
No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request.
Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
3
0
852
Dec ’25