Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic


Issue with tracking multiple images using ARKit on visionOS
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. The image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, it has difficulty detecting and tracking the other images. Is this the expected behavior in visionOS, or is there something we need to do to resolve this issue?
4 replies · 1 boost · 499 views · last activity: Mar ’25
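For reference, here is a minimal sketch of the kind of multi-image tracking setup described above, using ARKitSession and ImageTrackingProvider. It assumes the three images are registered in an asset catalog group named "AR Resources" (the group name and the print statement are illustrative, not taken from the post); how many images are tracked simultaneously is decided by visionOS, not by this code.

import ARKit

func runImageTracking() async throws {
    let session = ARKitSession()
    // Load every pre-registered reference image from the asset catalog group.
    let provider = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
    )
    try await session.run([provider])

    // Each detected image is delivered as its own ImageAnchor update.
    for await update in provider.anchorUpdates {
        print(update.anchor.id, update.anchor.isTracked)
    }
}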
Enterprise API with Education Account
Hello, I am trying to develop an app that broadcasts what the user sees through Apple Vision Pro. I am a graduate student at a university, and I have two questions. First, if I want to include passthrough in a screen capture on visionOS, do I have to join the Apple Developer Enterprise Program to get access to the Enterprise APIs? Second, can I purchase the Apple Developer Enterprise Program (and thus the Enterprise APIs) with my university account? Has anyone here been able to do this? Thank you.
2 replies · 1 boost · 236 views · last activity: Jul ’25
Can developers use spatial images in a third-party app's spatial widget?
Spatial widgets are a new feature of visionOS 26. I noticed that the system Photos app can show a spatial image in its widget. Can third-party apps use a spatial image, or any other 3D content, in their own widgets? I tried using RealityView in a widget and it crashed. Are spatial images in widgets only supported by the system Photos app and not available to developers right now?
1 reply · 1 boost · 724 views · last activity: Jul ’25
What causes the »ARSessionDelegate is retaining X ARFrames« console warning?
Hi, since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped«, even for rather simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate on what causes this warning and what can be done to avoid it? If I remember correctly, I didn't even assign an ARSessionDelegate. Thank you!
4 replies · 1 boost · 3.4k views · last activity: Feb ’25
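For context, this warning is usually triggered by code that keeps ARFrame references (or the pixel buffers inside them) alive beyond the delegate callback, since ARKit recycles a small pool of capture buffers. Below is a minimal sketch of the recommended pattern, assuming a custom delegate; the class name and stored property are illustrative, and the same rule applies to any other code that receives ARFrames.

import ARKit
import simd

final class FrameObserver: NSObject, ARSessionDelegate {
    // Keep only lightweight values copied out of the frame, never the
    // ARFrame itself; retaining whole frames starves ARKit's buffer pool
    // and produces the "retaining X ARFrames" warning.
    private var latestCameraTransform: simd_float4x4?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        latestCameraTransform = frame.camera.transform
        // Avoid storing `frame` or `frame.capturedImage` in properties,
        // and avoid capturing them in long-lived closures.
    }
}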
Reality Composer to Reality Composer Pro?
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode. However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro. Am I missing something obvious here? This seems like a huge oversight. If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro. Thanks, Stan
2 replies · 1 boost · 172 views · last activity: Jun ’25
AudioPlaybackController stops playing when a .plain window is closed
Suppose there is an ImmersiveSpace, and an Entity() is added to the space as a child entity of the content. This entity is responsible for playing background music by calling prepareAudio, obtaining a controller, and playing the music (see the basic code below). While the music is playing, a .plain window and the immersive space are both presented. I believe the immersive space is holding the controller, so as long as the immersive space is open the music should not stop. However, if I close the .plain window (using the system-level close button), the music stops even though the immersive space is still open. If I then check controller.isPlaying, it is still true, but you cannot hear the music anymore.
To reproduce, open a visionOS App template project, selecting Volume and Full immersive, and replace some code in ImmersiveView.swift with the code below. Also drag in any .mp3 file and replace the AudioFileResource's name. This reproduces the bug.

RealityView { content in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)

        // Put skybox here. See example in World project available at
        // https://developer.apple.com/
        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}

I wonder why this happens. How should I keep the music playing when I close the .plain window? Thanks!
1 reply · 1 boost · 535 views · last activity: Jan ’25
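Not an answer to the question above, but for comparison, one workaround sketch is to keep the AudioPlaybackController in app-level state rather than letting a window-owned view hold the only reference to it. Whether this actually avoids the reported behavior is unverified, and the class below is purely illustrative.

import RealityKit

@MainActor
final class BackgroundMusicPlayer {
    // Holding the controller here ties its lifetime to this object rather
    // than to any single window's view hierarchy.
    private var controller: AudioPlaybackController?

    func start(on entity: Entity, resourceNamed name: String) async {
        guard let resource = try? await AudioFileResource(named: name) else { return }
        controller = entity.prepareAudio(resource)
        controller?.play()
    }

    func stop() {
        controller?.stop()
    }
}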
RoomPlan exceeded scene size limit error (RoomCaptureSession.CaptureError.exceedSceneSizeLimit)
Error: RoomCaptureSession.CaptureError.exceedSceneSizeLimit
Apple documentation explanation: an error that indicates when the scene size grows past the framework's limitations.
Issue: This error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done, and it shows up even if the room is small. It occurs immediately after I start the RoomCaptureSession, following relocalization of the previous AR session (in a world-tracking configuration). I am having trouble understanding exactly why this error appears and how to debug or solve it. Does anyone have an idea of how to approach this issue?
2 replies · 1 boost · 969 views · last activity: 21h
[xrsimulator] Exception thrown during compile: cannotGetRkassetsContents
My friend cannot build my visionOS project in the simulator. He gets the following error:
[xrsimulator] Exception thrown during compile: cannotGetRkassetsContents(path: "/Users/path/to/Packages/RealityKitContent/Sources/RealityKitContent/RealityKitContent.rkassets")
In Xcode, he is able to open the RealityKitContent package in Reality Composer Pro by clicking on the Package.realitycomposerpro file. No warnings related to this error show up in RCP either, and all scenes appear to be usable and navigable in RCP. The error only comes up when he builds the project in Xcode (Command-B). There is no other information about this error in the Report Navigator's build logs. It is always followed by this next error:
Tool exited with code 1
Yikes, please help!
1 reply · 1 boost · 499 views · last activity: Feb ’25
Please provide access to face tracking blend shapes on visionOS
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS. You have the best eye and face tracking implementation on the market; please let us use it. There is a sizable audience who will buy the headset just for this. I personally know multiple people who are not buying the headset simply because you locked those features out. No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
1 reply · 1 boost · 128 views · last activity: Jun ’25
ShaderGraphMaterial with Occlusion Surface Output fails to load on iOS and macOS
A ShaderGraphMaterial with an Occlusion Surface Output generated with Reality Composer 2 fails to load on iOS 18 and macOS 15 with the following error:
RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)
This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit).

RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1), materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial", from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4), materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)

Feedback ID: FB15081296
2 replies · 1 boost · 1.3k views · last activity: 2w
RoomCaptureSession with ARSCNView crashes when scanning multiple hotspots across different rooms
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance, with 3D dots placed in the scanning environment to guide users through the capture process.

Bug description: The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
- User successfully scans the first room with multiple hotspots (working correctly)
- User stops scanning and moves to a new room
- In the new room, the first 1-2 hotspots work correctly
- The application crashes when attempting to scan additional hotspots

Technical details:
- Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
- Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
- The error suggests invalid positioning when placing AR anchors

Steps to reproduce:
1. Start a room scan
2. Complete multiple hotspot captures in the first room
3. Stop scanning
4. Start a new room scan
5. Capture 1-2 hotspots successfully
6. Attempt additional hotspot captures -> crashes

Attempted solutions:
- Implemented anchor cleanup between sessions
- Added position validation before anchor placement
- Implemented ARSession error handling
- Added proper thread management for AR operations

Environment:
- Device: iPhone 14 Pro (LiDAR equipped)
- iOS version: 18.1.1 (22B91)
- Testing through TestFlight

Crash log details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor

Question: Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides. Code and full crash logs can be provided if needed.
2 replies · 1 boost · 629 views · last activity: Feb ’25
Why VideoMaterial can't show transparency on Apple Vision Pro
https://developer.apple.com/documentation/realitykit/videomaterial
The documentation says: "Video materials support transparency if the source video’s file format also supports transparency." I have a transparent video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro simulator, but on a physical device the video has a black background. I'm sure the video format is fine, because I can get the texture from the video and display it on an UnlitMaterial. How can I show the transparent video correctly with RealityKit's VideoMaterial?
2 replies · 0 boosts · 193 views · last activity: 6d
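For reference, a minimal sketch of the kind of VideoMaterial setup the post describes, assuming the HEVC-with-alpha movie is bundled with the app (the file name and plane size are illustrative):

import RealityKit
import AVFoundation

func makeTransparentVideoEntity() -> ModelEntity? {
    // Assumes an HEVC-with-alpha movie named "Hand.mov" in the app bundle.
    guard let url = Bundle.main.url(forResource: "Hand", withExtension: "mov") else { return nil }
    let player = AVPlayer(url: url)
    // Per the documentation, transparency should carry through when the
    // source file format supports it.
    let material = VideoMaterial(avPlayer: player)
    let entity = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.3), materials: [material])
    player.play()
    return entity
}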
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality of visionOS from Unity. The framework is written in Swift, and I create an Objective-C wrapper with the following declarations:

...
void _initTTS(int);
...

I build the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:

public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...
    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}

I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:

DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)

If I run the generated project from Xcode, I can see the app on the AVP, but I keep getting a loading error:

DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0

I can see in the generated project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success. Any hints or suggestions are much appreciated.
1 reply · 0 boosts · 283 views · last activity: Sep ’25
RoomPlan Framework v2 - Stairs missing
Hello Community, I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version 2. In iOS 16, when using RoomPlan version 1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version 2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app's functionality to have stairs displayed accurately, and we're currently at a loss as to how to address this issue. Thank you in advance for any assistance you can provide. Best regards
4 replies · 1 boost · 1.1k views · last activity: Sep ’25
Human pose detection failing on Vision Pro
Hi! I attempted to run a sample project for detecting human body poses in photos, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision
The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting the photo, an endless loading screen is presented and the following output is produced in the console:
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout
It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running the Vision framework on visionOS that I might be missing?
3 replies · 1 boost · 173 views · last activity: Mar ’25
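For reference, a minimal sketch of how VNDetectHumanBodyPose3DRequest is typically run against a still image on other platforms; whether visionOS has additional requirements is exactly the open question here (the image URL parameter is illustrative):

import Vision

func detectBodyPose3D(in imageURL: URL) throws {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])

    // One observation is returned per detected person.
    if let observation = request.results?.first {
        print("Detected a person:", observation)
    }
}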