Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic


RealityView doesn't free up memory after disappearing
Basically, take just the Xcode 26 AR App template and put the ContentView at the detail end of a NavigationStack. On opening, the app uses < 20 MB of memory. Tapping Open AR, memory usage goes up to ~700 MB for the AR scene. Tapping back, memory stays at ~700 MB. Checking with the Debug Memory Graph, I can still see all the RealityKit classes in memory, such as ARView, ARRenderView, and ARSessionManager. Here's the sample app to illustrate the issue. PS: To keep memory pressure on the system low, there should be a way of freeing all the memory AR uses, for apps that only occasionally show AR scenes.
0 replies · 0 boosts · 120 views · Sep ’25
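
A possible mitigation for the post above, assuming the template hosts ARView inside a UIViewRepresentable (the type and names below are illustrative, not from the template): pause the session and tear the view down explicitly when SwiftUI dismantles the wrapper. In practice this releases most of the AR allocations, though some RealityKit state can persist for the life of the process.

    import SwiftUI
    import RealityKit
    import ARKit

    struct ARViewContainer: UIViewRepresentable {
        func makeUIView(context: Context) -> ARView {
            ARView(frame: .zero) // the template's AR setup is elided here
        }

        func updateUIView(_ uiView: ARView, context: Context) {}

        // Called when the view leaves the hierarchy (e.g. tapping back):
        // stop camera/tracking and drop scene content explicitly.
        static func dismantleUIView(_ uiView: ARView, coordinator: ()) {
            uiView.session.pause()
            uiView.scene.anchors.removeAll()
            uiView.removeFromSuperview()
        }
    }
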
White gap between objects in RealityView
I want to display a huge image in 3D space in a RealityView on Vision Pro. Instead of one giant file, I'm using a lot of big images: I generate multiple planes exactly beside each other and put one image on each. Although the planes are exactly adjacent, there is still a white gap between them (image below). Does anybody know how to fix this issue?
0 replies · 0 boosts · 135 views · May ’25
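
Two common culprits for seams like this, assuming plain RealityKit planes: the sampler's default address mode blending edge texels toward the background at tile borders, and floating-point gaps between neighbouring meshes. A hedged sketch of both fixes (the function name and epsilon value are illustrative):

    import RealityKit
    import Metal
    import UIKit

    // Builds one tile of the image wall; call once per image and lay the
    // tiles out on a grid.
    func makeTile(texture: TextureResource, size: Float) -> ModelEntity {
        // Clamp sampling so edge texels are not blended with anything
        // outside the image.
        let sampler = MTLSamplerDescriptor()
        sampler.sAddressMode = .clampToEdge
        sampler.tAddressMode = .clampToEdge

        var material = UnlitMaterial()
        material.color = .init(tint: .white,
                               texture: .init(texture, sampler: .init(sampler)))

        // Oversize each plane by a hairline so rounding can't open a gap
        // between neighbours.
        let overlap: Float = 0.0005
        let mesh = MeshResource.generatePlane(width: size + overlap,
                                              height: size + overlap)
        return ModelEntity(mesh: mesh, materials: [material])
    }
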
The participantIdentifier of Shared Coordinate Space is invalid in the visionOS 26 Enterprise API
The visionOS 26 Enterprise API has a new feature, Shared Coordinate Space: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network. When a shared coordinate space is established with nearby participants, the event connectedParticipantIdentifiers(participants: [UUID]) is received. But Event.participantIdentifier is still an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF) at that point. I wonder when or how I can get a valid event.participantIdentifier, or is there some other way to get the local participantIdentifier? If it's a bug, please fix it in a later beta release. Thank you.
0 replies · 0 boosts · 203 views · Jul ’25
Vision Pro SharePlay nearby codes expire
It looks like the nearby pairing expires one week after accepting another AVP device as nearby... Since we provide our clients with a permanently available app for walking through architecture, it's a shame that non-technical staff have to reconnect five devices every week to work together. Is there any workaround for this issue, or should it go straight to the wishlist? Thanks for the support!
0 replies · 0 boosts · 53 views · Sep ’25
How to Implement a Curved Surface Effect for Video Playback and Allow Dynamic Width Adjustment in visionOS?
Dear Apple Engineers, I am working on a visionOS project and need to implement a curved-surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface. I have the following specific questions:
1. How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
2. How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
3. Is it possible to achieve more complex curved-surface effects (such as scroll unfolding or bending) using shaders or other techniques?
Thank you very much for your help!
0 replies · 0 boosts · 529 views · Dec ’24
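
For the first two questions, one workable RealityKit approach is to generate a partial-cylinder mesh with MeshDescriptor, space the video UVs evenly along the arc (which keeps the content undistorted), and apply a VideoMaterial; regenerating the mesh when the user drags a width control gives the adjustable surface. A hedged sketch, with all parameter names illustrative:

    import RealityKit
    import AVFoundation

    // A vertical strip of quads bent along a circular arc. Regenerate the
    // mesh when the user adjusts `widthAngle` (the arc width in radians).
    func curvedScreen(radius: Float, height: Float, widthAngle: Float,
                      segments: Int, player: AVPlayer) throws -> ModelEntity {
        var positions: [SIMD3<Float>] = []
        var uvs: [SIMD2<Float>] = []
        var indices: [UInt32] = []

        for i in 0...segments {
            let t = Float(i) / Float(segments)
            let angle = (t - 0.5) * widthAngle
            // Points on the arc, centred in front of the viewer (-z).
            let x = radius * sin(angle)
            let z = -radius * cos(angle)
            positions.append(.init(x, -height / 2, z))
            positions.append(.init(x, height / 2, z))
            // Even arc-length spacing of u keeps the video undistorted;
            // swap the v values if the video appears upside down.
            uvs.append(.init(t, 0))
            uvs.append(.init(t, 1))
        }
        for i in 0..<segments {
            let a = UInt32(i * 2)
            // Two triangles per quad, wound to face the viewer.
            indices += [a, a + 2, a + 1, a + 1, a + 2, a + 3]
        }

        var desc = MeshDescriptor(name: "curvedScreen")
        desc.positions = .init(positions)
        desc.textureCoordinates = .init(uvs)
        desc.primitives = .triangles(indices)

        let mesh = try MeshResource.generate(from: [desc])
        return ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
    }

For the third question, a ShaderGraphMaterial with a geometry-modifier (vertex offset) node is one commonly suggested route to scroll-unfolding-style deformation without rebuilding the mesh each frame.
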
Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to 2.4.1 (from 1.0) my Vision Pro is stuck on the "Retrieving configuration" screen. The Apple Store didn't take my case, since the unit was sold in the USA and the product is not yet available on the Italian market. I don't have a Developer Strap; how can I resolve the issue? Thank you.
0 replies · 0 boosts · 110 views · May ’25
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment: Xcode 16.2, visionOS SDK 2.4, Swift 6.1. Targets: Apple Vision Pro (immersive space). Frameworks: ARKit, RealityKit, SwiftUI.

What I'm trying to do: I have a view-model class PlacementManager that holds two AR providers:

    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's happening: If I declare them as constants,

    private let worldTracking = WorldTrackingProvider()
    private let planeDetection = PlaneDetectionProvider()

I get compile errors when I later do:

    self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant

If I change them to uninitialized vars,

    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider

then in my init() I get: self used in property access 'worldTracking' before all stored properties are initialized.

Code snippet:

    @Observable final class PlacementManager: ObservableObject {
        private var worldTracking: WorldTrackingProvider
        private var planeDetection: PlaneDetectionProvider
        // … other props …

        @MainActor init() {
            // error: self.worldTracking used before init…
            planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
            persistenceManager = PersistenceManager(worldTracking: worldTracking,
                                                    rootEntity: root)
            // …
        }

        @MainActor func setEnvironment(env: Environnement) async {
            let newWorldTracking = WorldTrackingProvider()
            let newPlaneDetection = PlaneDetectionProvider()
            try await appState!.arkitSession.run([newWorldTracking, newPlaneDetection])
            self.worldTracking = newWorldTracking
            self.planeDetection = newPlaneDetection
            // …
        }
    }

What I've tried: giving them default values at declaration (= WorldTrackingProvider()); initializing them at the top of init() before any use; passing the new providers into arkitSession.run(...).

My question: what is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that they're fully initialized before use in init(), and I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads / docs) would be greatly appreciated!
0 replies · 0 boosts · 108 views · May ’25
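
The usual pattern for the question above is to give both properties default instances at their declaration and keep them as vars: a default value satisfies definite initialization before init() runs, and var keeps the property reassignable. Since ARKit data providers are single-use, setEnvironment(...) should build fresh instances, run them, then reassign. A minimal sketch (the alignments argument and the session plumbing are assumptions):

    import ARKit
    import Observation

    @Observable
    @MainActor
    final class PlacementManager {
        // Defaults mean `self` is fully initialized before init() touches
        // any other property; `var` allows swapping later.
        private var worldTracking = WorldTrackingProvider()
        private var planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

        func setEnvironment(session: ARKitSession) async throws {
            // Providers can only be run once, so create fresh ones per restart.
            let newWorldTracking = WorldTrackingProvider()
            let newPlaneDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
            try await session.run([newWorldTracking, newPlaneDetection])
            worldTracking = newWorldTracking
            planeDetection = newPlaneDetection
        }
    }

Note also that a class using the @Observable macro generally should not conform to ObservableObject as well; mixing the two observation systems is a common source of confusion in exactly this kind of view model.
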
VisionOS Unity - VR Mode - Control Render Resolution?
I'm using Unity 2022.3.56f with the Apple visionOS App Mode set to 'Virtual Reality - Fully Immersive Space'. The render resolution of my game on the Apple Vision Pro when I build seems to be well below the native resolution of the AVP displays. I can't see a setting in the XR Plug-in Management Apple visionOS options, or in Quality settings, to increase the render resolution. Is this possible? I tried setting, for example, UnityEngine.XR.XRSettings.eyeTextureResolutionScale = 2.0f, but this doesn't seem to do anything to the render resolution in the build.
0 replies · 0 boosts · 349 views · Feb ’25
Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation and I am trying to embrace it as a feature.) Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro? I'm trying to get the USDZ to play before the virtual content disappears (due to the reference object not being located), so as to smooth out the vanishing of the content. Nearly everything is set up in Reality Composer Pro, with my Immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene, so my immersive view just does this:

    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
    }

and this:

    .onAppear { appModel.immersiveSpaceState = .open }
    .onDisappear { appModel.immersiveSpaceState = .closed }

I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene, and/or whether this is the right way to go about it. I have also implemented this at the beginning of object tracking: all I had to do was add an onAppear behavior to the object to play a USDZ, and that works. Doing it on disappearance (due to loss of the reference object) seems to be a lot harder.
0 replies · 0 boosts · 102 views · Apr ’25
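
One way to do the "on loss" half in code rather than in Reality Composer Pro, assuming a visionOS 2 ObjectTrackingProvider is running and that `content` and `worldRoot` are entities from the RCP scene (both names are assumptions): watch anchorUpdates, and when isTracked flips to false, re-parent the content so it holds its last known pose while the USDZ animation plays.

    import ARKit
    import RealityKit

    func watchForTrackingLoss(provider: ObjectTrackingProvider,
                              content: Entity,
                              worldRoot: Entity) async {
        for await update in provider.anchorUpdates {
            guard !update.anchor.isTracked else { continue }
            // Pin the content where the object was last seen instead of
            // letting it vanish with the anchor.
            content.setParent(worldRoot, preservingWorldTransform: true)
            // Play the animation authored in the USDZ as the vanish effect.
            if let vanish = content.availableAnimations.first {
                content.playAnimation(vanish, transitionDuration: 0.2)
            }
        }
    }
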
AVQueuePlayer and AVPlayerLooper implementation.
I am trying to loop my VideoMaterial. I have researched AVQueuePlayer and AVPlayerLooper and tried to implement them in my code (please see attached). There are no errors showing up, but the VideoMaterial no longer plays. Please see the attached working code that plays the VideoMaterial. I am stumped; can anyone help me solve this? Thank you.
0 replies · 0 boosts · 93 views · Jun ’25
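
A frequent cause of exactly this symptom is that the AVPlayerLooper is not retained: created as a local variable, it deallocates immediately and playback stops without any error. A minimal sketch, with the type name and URL handling assumed:

    import RealityKit
    import AVFoundation

    final class LoopingVideo {
        private let player = AVQueuePlayer()
        // The looper must be stored; if it deallocates, looping (and
        // playback) silently stops.
        private var looper: AVPlayerLooper?

        func makeMaterial(url: URL) -> VideoMaterial {
            looper = AVPlayerLooper(player: player,
                                    templateItem: AVPlayerItem(url: url))
            let material = VideoMaterial(avPlayer: player)
            player.play()
            return material
        }
    }
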
iOS ARKit blendshapes on visionOS
I work on motion-capture systems for VTubing. I can't seem to find any information on gaining access to the iOS face-tracking features (ARKit blendshapes) while developing for visionOS. I would love to bring VStreamer Live to visionOS.
0 replies · 0 boosts · 49 views · Jun ’25
Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration. The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.

In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera's view as expected. However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.

Steps I have taken:
- Created a new layer (CameraFeed) and assigned the relevant objects to it.
- Set the secondary camera's culling mask to render only the CameraFeed layer.
- Assigned the RenderTexture as the camera's target texture.
- Applied the RenderTexture to an Unlit/Texture material on a quad.
- Confirmed the camera is active and correctly positioned relative to the object.

From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.

My questions are:
- Is there any official way to enable multiple-camera rendering of RealityKit-managed objects through PolySpatial?
- Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
- Has anyone found alternative design patterns or methods for this kind of interaction?

Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4. Any insight or suggestions would be greatly appreciated. Thank you.
0 replies · 0 boosts · 105 views · Apr ’25
Apple Vision Pro - hand tracking with gloves
I am considering adding finger-pad haptics (the data flow for haptic feedback is directed from the AVP to the fingers, not vice versa): simple piezos wired to a wrist connection holding the driver/battery. But I'm concerned this will impact hand tracking. Any guidance regarding gloves and/or the size of peripherals attached to the fingers? Or, if anyone knows another (inexpensive) low-profile option on the market, please let me know. Thanks.
0 replies · 0 boosts · 226 views · Feb ’25
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I'm using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets. I've noticed that regardless of the image content (whether it's a highly detailed photo or a completely black image), once compressed with the same ASTC block size (e.g., ASTC_8x8) the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging, so low-complexity images are compressed more aggressively and the final packaged file size varies based on content complexity. Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there's no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices for optimizing .ktx sizes based on image complexity? Thanks!
0 replies · 0 boosts · 122 views · May ’25
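
The observation above matches how ASTC works: it is a fixed-rate codec, so every block (for example, 8×8 texels) occupies exactly 128 bits regardless of content, and the .ktx payload size depends only on resolution and block footprint. Content-aware savings generally come from a separate lossless or supercompression stage layered on top, not from ASTC itself. A small worked example of the size arithmetic:

    // ASTC is fixed-rate: each block compresses to 128 bits (16 bytes)
    // no matter the content, which is why a detailed photo and an
    // all-black image produce identically sized .ktx files.
    func astcPayloadSize(width: Int, height: Int,
                         blockW: Int = 8, blockH: Int = 8) -> Int {
        let blocksX = (width + blockW - 1) / blockW
        let blocksY = (height + blockH - 1) / blockH
        return blocksX * blocksY * 16
    }

    // e.g. one 2048x2048 cubemap face at ASTC_8x8:
    // 256 * 256 * 16 = 1,048,576 bytes (~1 MB), for any image content.
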
A question about running two ARViews together
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the first scan with captureSession.stop(pauseARSession: false); I believe the ARSession keeps running at that point. Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan did not detect). But at this point the second ARView (there is already an ARView inside RoomPlan, I think) always shows a black screen and doesn't work. This is the problem I want to resolve; please help me get the second ARView working.
0 replies · 0 boosts · 109 views · Mar ’25
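
One pattern that may help, sketched under two assumptions (that RoomCaptureSession exposes its underlying session via the arSession property, and that ARView's session can be replaced on iOS): since only one ARSession can own the camera at a time, hand RoomPlan's still-running session to the second ARView instead of letting it configure its own; a second, unstarted session is a classic cause of a black ARView.

    import RoomPlan
    import RealityKit
    import ARKit

    // Assumes the scan was stopped with
    // captureSession.stop(pauseARSession: false), so its ARSession is live.
    func makeSecondARView(sharing captureSession: RoomCaptureSession) -> ARView {
        let arView = ARView(frame: .zero,
                            cameraMode: .ar,
                            automaticallyConfigureSession: false)
        // Reuse RoomPlan's running session rather than starting a second
        // one; two sessions cannot hold the camera at once.
        arView.session = captureSession.arSession
        return arView
    }
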
Pass Video Frames to a Shader Graph?
Wondering if this is even possible without using CVImageBuffer and passing each frame as an image, which I imagine would be very expensive. I have a PoC of a shader graph that applies a radial zoom effect to an image. In RealityKit I'm passing the image as a resource:

    if let textureResource = try? await TextureResource(named: "fuji") {
        let value = MaterialParameters.Value.textureResource(textureResource)
        try? material.setParameter(name: "MyImage", value: value)
        model.model?.materials = [material]
    }

Thanks in advance.
0 replies · 0 boosts · 99 views · Apr ’25
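
Passing each frame as a new TextureResource is indeed expensive; the usual pattern for feeding video into a ShaderGraphMaterial is TextureResource.DrawableQueue, which lets you write frames into a Metal texture the material already samples. A hedged sketch reusing the post's "fuji" and "MyImage" names; the per-frame blit from the video output is elided:

    import RealityKit
    import Metal

    // Redirects the material's "MyImage" parameter to a drawable queue so
    // frames can be written with Metal instead of new TextureResources.
    func attachVideoQueue(to material: inout ShaderGraphMaterial,
                          width: Int, height: Int) async throws -> TextureResource.DrawableQueue {
        let desc = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: width, height: height,
            usage: [.shaderRead, .renderTarget],
            mipmapsMode: .none)
        let queue = try TextureResource.DrawableQueue(desc)

        // Any texture works as the initial resource; once its contents are
        // replaced by the queue, the material samples whatever is presented.
        let texture = try await TextureResource(named: "fuji")
        texture.replace(withDrawables: queue)
        try material.setParameter(name: "MyImage",
                                  value: .textureResource(texture))
        return queue
    }

    // Per video frame (e.g. from AVPlayerItemVideoOutput):
    //   let drawable = try queue.nextDrawable()
    //   ...encode a Metal blit or compute pass into drawable.texture...
    //   drawable.present()
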