Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post · Replies · Boosts · Views · Activity

Cursor display issue on attachment view in immersive space
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on the attachment view. Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view? Any help would be appreciated, thanks.
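For reference, here is a minimal sketch of how an attachment view is typically added to a RealityView in an immersive space; the identifier, content, and placement are illustrative assumptions, not taken from the project described above.

```swift
import SwiftUI
import RealityKit

struct ImmersiveAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            // Place the SwiftUI attachment ("panel" is an illustrative id) into the immersive scene.
            if let panel = attachments.entity(for: "panel") {
                panel.position = [0, 1.2, -1]   // roughly eye height, about 1 m in front of the user
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                Text("Attachment view")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```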
Replies: 1 · Boosts: 0 · Views: 90 · Apr ’25
Is `ParticleEmitterComponent` implemented for `RealityKit` on iOS?
Hi there, I was looking to add a particle emitter to the augmented reality app I'm developing with RealityKit, targeting iOS. The documentation for the ParticleEmitterComponent indicates that iOS 18.0+ is supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?
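For context, here is a minimal availability-gated usage sketch, assuming the component really is available on iOS 18 as documented; the addParticles(to:) helper, the entity, and the values are placeholders, not part of the original post.

```swift
import RealityKit

func addParticles(to entity: Entity) {
    // ParticleEmitterComponent is documented as iOS 18.0+, so gate it at runtime.
    if #available(iOS 18.0, *) {
        var particles = ParticleEmitterComponent()
        particles.emitterShape = .sphere          // emit from a spherical volume
        particles.mainEmitter.birthRate = 500     // particles spawned per second
        entity.components.set(particles)
    }
}
```

The #available check only helps at runtime; the symbol still has to resolve at compile time, which assumes the project is built against an SDK that includes it.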
Replies: 1 · Boosts: 0 · Views: 134 · May ’25
WebXR and PSVR2
We got very excited when we saw support for the PSVR2 announced at WWDC! WebXR is particularly interesting to us, so we got the controllers to give it a try. Unfortunately they only register as gamepads in the navigator, not as XRInputDevices. We went through the experimental flags and didn't find anything directly related. Is there a flag we missed? If not, when is PSVR2 support planned for WebXR?
Replies: 1 · Boosts: 5 · Views: 255 · Jun ’25
CapturedRoom.Section is missing a lot of information
The Section struct only publicly exposes the center property, but this is a SIMD3 that doesn't seem to line up with the rest of the model. All other objects have a 4x4 transform matrix that accurately gives each position and rotation. When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why are these not exposed? The transform in particular seems necessary to make any real use of the Sections.
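A small sketch of the asymmetry being described; the loop bodies are illustrative, and only the properties named in the post are used.

```swift
import RoomPlan
import simd

func inspect(_ room: CapturedRoom) {
    // Captured objects expose a full 4x4 transform (position and rotation).
    for object in room.objects {
        let transform: simd_float4x4 = object.transform
        print("object:", object.category, transform)
    }
    // Sections publicly expose only a center point, with no transform.
    for section in room.sections {
        let center: simd_float3 = section.center
        print("section center:", center)
    }
}
```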
Replies: 1 · Boosts: 0 · Views: 353 · Sep ’25
Run to Device (on WLAN)?
I was trying to use a local Wi-Fi router to connect the Vision Pro and the Mac for development. From Unity, the host on the Vision Pro wouldn't open; from Xcode I can see the Vision Pro paired, but no device is listed next to the Run button... Any ideas? Thanks. /Ruiying
Replies: 1 · Boosts: 0 · Views: 322 · Jan ’25
UICollectionViewDataSourcePrefetching does not work when wrapped in SwiftUI on visionOS
The prefetching logic for UICollectionView does not work on visionOS. I have set up a standalone test repo to demonstrate the issue; it is essentially a visionOS version of Apple's guide project on implementing prefetching. In the repo you will see a simple ViewController that owns a UICollectionView, wrapped inside UIViewControllerRepresentable. On scroll, it should print "🕊️ prefetch start" to the console to demonstrate that func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) is called. However, it never happens on visionOS devices, while the same code behaves correctly on iOS devices.
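A condensed, self-contained sketch of the setup the post describes; the class name, item count, and cell are illustrative and not taken from the linked repo.

```swift
import UIKit

final class GridViewController: UIViewController,
                                UICollectionViewDataSource,
                                UICollectionViewDataSourcePrefetching {
    private var collectionView: UICollectionView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let layout = UICollectionViewFlowLayout()
        layout.itemSize = CGSize(width: 120, height: 120)
        collectionView = UICollectionView(frame: view.bounds, collectionViewLayout: layout)
        collectionView.dataSource = self
        collectionView.prefetchDataSource = self   // opt in to prefetching callbacks
        collectionView.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
        view.addSubview(collectionView)
    }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { 1_000 }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath)
    }

    // Expected to fire as the user scrolls; per the post, it is never called on visionOS.
    func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
        print("🕊️ prefetch start", indexPaths)
    }
}
```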
Replies: 1 · Boosts: 0 · Views: 159 · Jul ’25
Collision Detection Fails After Anchoring ModelEntity to Hand in visionOS
I'm starting my journey in developing an immersive app for visionOS. I've been making steady progress, but I've encountered a specific challenge that I haven't been able to resolve. I created two ModelEntity objects (a sphere and a cube) and added a DragGesture to the cube. When I drag the cube over the sphere, the two collide correctly, and the collision is logged in the console. So far, everything works as expected. However, when I try to anchor the cube to my hand, the collision stops working. It's as if the cube loses its ability to detect collisions once it's anchored. Any guidance or clarification on this behavior would be greatly appreciated.

```swift
//  ImmersiveView.swift
//  estudos_vision
//
//  Created by Lailan Rogerio Rodrigues Matos on 15/05/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @State private var session: SpatialTrackingSession?
    @State private var box = ModelEntity()
    @State private var subs: [EventSubscription] = []
    @State private var ballEntity: Entity?

    var body: some View {
        RealityView { content in
            // Load initial content from the RealityKit scene.
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Create and run a spatial tracking session.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            _ = await session.run(configuration)
            self.session = session

            // Create a red box.
            let boxMesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .red, isMetallic: false)
            box = ModelEntity(mesh: boxMesh, materials: [material])
            box.position.y += 0.15 // Position the box slightly above the origin.

            // Configure the box for user interaction and physics.
            box.components.set(InputTargetComponent(allowedInputTypes: .indirect)) // Make it interactive.
            box.generateCollisionShapes(recursive: false) // Generate collision shapes for physics.
            box.components.set(PhysicsBodyComponent( // Add physics behavior.
                massProperties: .default,
                material: .default,
                mode: .kinematic // Use kinematic mode so it can be moved by user interaction.
            ))
            box.components.set(GroundingShadowComponent(castsShadow: true)) // Add a shadow.
            //content.add(box) // Commented out to add the box to the hand anchor instead.

            // Create a left hand anchor and add the box as a child.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
            handAnchor.addChild(box)
            content.add(handAnchor) // Add the hand anchor to the scene.

            // Create a sphere.
            let ball = ModelEntity(mesh: .generateSphere(radius: 0.15))
            ball.position = [0.0, 1.5, -1.0] // Initial position of the ball.
            ball.generateCollisionShapes(recursive: false) // Add collision.
            ball.name = "Sphere"
            content.add(ball)
            ballEntity = ball

            // Subscribe to collision events between the box and other entities.
            let event = content.subscribe(to: CollisionEvents.Began.self, on: box) { ce in
                print("Collision between \(ce.entityA.name) and \(ce.entityB.name) occurred")
                //ce.entityA.removeFromParent() // removes the colliding object
                //ce.entityB.removeFromParent()
            }
            Task { subs.append(event) }
        }
        // Add a drag gesture to the box, allowing the user to move it.
        .gesture(
            DragGesture()
                .targetedToEntity(box) // Target the drag gesture to the box.
                .onChanged({ value in
                    // Update the position of the box based on the drag gesture.
                    box.position = value.convert(value.location3D, from: .local, to: box.parent!)
                })
        )
    }
}

#Preview(immersionStyle: .full) {
    ImmersiveView()
        .environment(AppModel())
}
```
Replies: 1 · Boosts: 0 · Views: 96 · May ’25
Creating spatial video with one camera
Hello everyone, I would like to create my own spatial video on my Apple Vision Pro. According to Apple's documentation, this requires two camera angles to produce the spatial effect. I have purchased the Enterprise license with main camera access for this purpose. However, this only gives me access to the left main camera of the headset. Is there a way to access the right camera as well? Or is a single camera image enough to create a spatial video, for example by splitting the image? I am open to any help and ideas. My goal is to create the video with the cameras on the headset, not externally.
Replies: 1 · Boosts: 0 · Views: 291 · Jun ’25
How to handle tasks when the Vision Pro is taken off?
I have a gRPC server running inside a Task. When the user takes the headset off, the gRPC server no longer works once they put the headset back on. I would like to detect this so that I can cancel the task (which effectively shuts down the gRPC server). I am also using a visual indicator to let the user know whether the server is running, but it does not accurately reflect the state of the server across taking the headset off and putting it back on.
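One possible approach, sketched under the assumption that removing the headset deactivates the scene: watch scenePhase and cancel the task whenever the scene leaves the active state. runGRPCServer() is a hypothetical stand-in for the poster's server entry point.

```swift
import SwiftUI

struct ServerStatusView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?
    @State private var isRunning = false

    var body: some View {
        Label(isRunning ? "Server running" : "Server stopped",
              systemImage: isRunning ? "checkmark.circle" : "xmark.circle")
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .active, serverTask == nil {
                    // (Re)start the server whenever the scene becomes active again.
                    serverTask = Task {
                        isRunning = true
                        await runGRPCServer()   // hypothetical stand-in for the actual server loop
                        isRunning = false
                    }
                } else if newPhase != .active {
                    // Taking the headset off typically sends the scene to inactive/background.
                    serverTask?.cancel()
                    serverTask = nil
                    isRunning = false
                }
            }
    }

    private func runGRPCServer() async {
        // Placeholder: run until the surrounding Task is cancelled.
        while !Task.isCancelled {
            try? await Task.sleep(for: .seconds(1))
        }
    }
}
```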
Replies: 1 · Boosts: 0 · Views: 290 · Mar ’25
The multiview video screen turns blank when returning
I am encountering an issue with the multiview video demo at https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/. Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
Replies: 1 · Boosts: 0 · Views: 368 · Feb ’25
Issue: Closing Bounded Volume Never Re-Opens
Greetings. I am having an issue with a Unity PolySpatial visionOS app. We have a main Bounded Volume for our app, plus other native UI windows that appear when the user interacts with objects in the Bounded Volume. If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't. If we go back to the Home Screen and reopen the app, our main Bounded Volume doesn't always appear; only the native UI windows we left open are visible, yet we can sometimes still hear sounds that are playing in our Bounded Volume. What solutions are there to make sure our Bounded Volume always appears when the app is open?
Replies: 1 · Boosts: 0 · Views: 101 · Jun ’25
AudioPlaybackController stops playing when .plain window is closed
Suppose there is an immersive space, and an Entity() added to the space as a child entity of the content. This entity is responsible for playing background music by calling prepareAudio, obtaining a controller, and playing the music (see the basic code below). While the music is playing, a .plain window and the immersive space are both presented. I believe the immersive space is holding the handle of the controller, so as long as the immersive space is open, the music shouldn't stop. However, if I close the .plain window (using the system-level close button), the music just stops, even though the immersive space is still open. If I then check the value of controller.isPlaying, it is still true, but you can no longer hear the music. To reproduce, open a visionOS template app project, select Volume and Full immersive, and replace some code in ImmersiveView.swift with the code below. Also drag in any .mp3 file and replace the AudioFileResource's name, and you can reproduce this bug.

```swift
RealityView { content in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
        // Put skybox here. See example in World project available at
        // https://developer.apple.com/
        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}
```

I wonder why this happens. How should I keep the music playing when I close the .plain window? Thanks!
Replies: 1 · Boosts: 1 · Views: 535 · Jan ’25
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework, written in Swift, to use the text-to-speech functionality of visionOS from Unity. I create an Objective-C wrapper with the following declarations:

```objc
...
void _initTTS(int);
...
```

I build the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:

```csharp
public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...

    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}
```

I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:

```
DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)
```

If I run the generated project from Xcode, I can see the app on the AVP, but I keep getting a loading error:

```
DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
  at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
  at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0
```

I can see in the generated project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success. Any hints or suggestions are much appreciated.
Replies: 1 · Boosts: 0 · Views: 283 · Sep ’25