Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Gaze / eye-tracking responsive content
I am wondering: is it possible to configure a 3D object to respond to a person's gaze, for example by changing the color of the parts of the model where the gaze lands on its surface? Say there is a 3D model of a cool dragon 🐉 placed in the person's physical space, seen through the mixed reality view of an Apple Vision Pro. It would be really cool to change the color, or make specific parts of the dragon's skin shimmer, only in the areas the person is looking at. Is this possible? Is it doable with the Vision Pro's eye tracking? Any advice would be appreciated. 🤝 I am new to iOS and visionOS development.
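For context on what's possible: visionOS keeps gaze data private, so an app can't read where on a surface the user is looking, but RealityKit's HoverEffectComponent lets the system highlight an entity while it is being looked at. Limiting the effect to parts of the dragon means giving each part its own entity. A minimal sketch (the function name and the box collision shape are illustrative assumptions):

import RealityKit

func makeGazeResponsive(_ part: ModelEntity) {
    // The system hit-tests gaze against collision shapes, so the part needs
    // a collision shape and an input target. A box around the visual bounds
    // is a rough but simple approximation.
    let bounds = part.visualBounds(relativeTo: nil)
    part.components.set(CollisionComponent(shapes: [.generateBox(size: bounds.extents)]))
    part.components.set(InputTargetComponent())
    // System-drawn highlight while the user looks at this entity.
    part.components.set(HoverEffectComponent())
}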
1 reply · 0 boosts · 227 views · Mar ’25
Dynamically assigning a texture resource to ShaderGraphMaterial on visionOS
I implemented a ShaderGraphMaterial and load it from my .usda scene with ShaderGraphMaterial.init(name:in:bundle:). I want to set a TextureResource on that material dynamically, so I wanted to expose the texture as a uniform input of the ShaderGraphMaterial. But Reality Composer Pro's Shader Graph doesn't appear to support a texture input as a promoted parameter, as the attached image shows. At the code level, ShaderGraphMaterial doesn't seem to expose a way to set TextureResources either: its parameterNames is an empty array if I haven't set any custom input parameters. The texture comes from my backend, so saving it to a file and loading it again isn't really an option. Is there something I'm missing?
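For what it's worth, if the graph can be made to expose an image input as a material parameter, ShaderGraphMaterial.setParameter(name:value:) accepts a .textureResource value, and a TextureResource can be generated in memory from a CGImage, so nothing has to be written to disk. A sketch under that assumption (the parameter name "baseTexture" is hypothetical):

import CoreGraphics
import RealityKit

// Sketch: push a backend-delivered CGImage into a ShaderGraphMaterial parameter.
func applyBackendImage(_ cgImage: CGImage, to material: inout ShaderGraphMaterial) throws {
    // Build a texture directly from the in-memory image.
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    // "baseTexture" must match the promoted input's parameter name in the graph.
    try material.setParameter(name: "baseTexture", value: .textureResource(texture))
}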
1 reply · 0 boosts · 545 views · Jan ’25
Loading USDZ with particle system crashes on Intel Macs
Hello, we have a RealityKit app that also runs on macOS via Catalyst. For specific USD assets containing particle systems we have observed a reproducible crash.

Steps to reproduce:
1. Open Reality Composer Pro
2. Create a new file
3. Create a simple particle system (the default one is fine)
4. Export it as USDZ
5. Create a project in Xcode
6. Call Entity.load(…) and pass in your USD

Running this on an Intel iMac with macOS Sequoia 15.3 leads to a crash with the following console log (repeated several times):

validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.'

Xcode version: 16.2.0
iMac 2020, 3.8 GHz Intel Core i7
macOS Sequoia 15.3
FB16477373

It would be great if this could be fixed quickly, or a workaround provided, since it affects our production app. Thank you!
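For reference, the load call in step 6 looks roughly like this (the asset name "Particles" is assumed):

import RealityKit

func loadParticleUSDZ() async {
    do {
        // On the Intel/Catalyst configuration above, this crashes when the
        // USDZ contains a particle emitter.
        let entity = try await Entity(named: "Particles")
        print("Loaded \(entity.name)")
    } catch {
        print("Failed to load USDZ: \(error)")
    }
}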
1 reply · 0 boosts · 424 views · Mar ’25
Cannot remove the final WorldAnchor
I've been having some issues removing anchors. I can add anchors with no issue; they are there the next time I run the scene, and I get updates when ARKit sends them. I can also remove anchors, but not reliably. The method I'm using is to call removeAnchor() on the data provider:

worldTracking.removeAnchor(forID: uuid) // Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`

This works when there is more than one anchor in the scene. When I'm down to the last remaining anchor, removal appears to succeed (it does not throw), but the next time I run the scene the removed anchor is back. This only happens when there is a single remaining anchor.

do {
    // This always runs, but the removal doesn't seem to persist when there is only one anchor left.
    try await worldTracking.removeAnchor(forID: uuid)
} catch {
    // I have never seen this block fire!
    print("Failed to remove world anchor \(uuid) with error: \(error).")
}

I posted a video on my website if you want to see it happening: https://stepinto.vision/labs/lab-051-issues-with-world-tracking/

Here is the full code. Can you see if I'm doing something wrong? Is this a bug?

import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct Lab051: View {
    @State var session = ARKitSession()
    @State var worldTracking = WorldTrackingProvider()
    @State var worldAnchorEntities: [UUID: Entity] = [:]
    @State var placement = Entity()
    @State var subject: ModelEntity = {
        let subject = ModelEntity(
            mesh: .generateSphere(radius: 0.06),
            materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
        subject.setPosition([0, 0, 0], relativeTo: nil)
        let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
        let input = InputTargetComponent()
        subject.components.set([collision, input])
        return subject
    }()

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
            content.add(scene)
            if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                placement = placementEntity
            }
        } update: { content in
            for (_, entity) in worldAnchorEntities {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .modifier(DragGestureImproved())
        .gesture(tapGesture)
        .task {
            try! await setupAndRunWorldTracking()
        }
    }

    var tapGesture: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                if value.entity.name == "PlacementPreview" {
                    // If we tapped the placement preview cube, create an anchor
                    Task {
                        let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                        try await worldTracking.addAnchor(anchor)
                    }
                } else {
                    Task {
                        // Get the UUID we stored on the entity
                        let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                        do {
                            try await worldTracking.removeAnchor(forID: uuid)
                        } catch {
                            print("Failed to remove world anchor \(uuid) with error: \(error).")
                        }
                    }
                }
            }
    }

    func setupAndRunWorldTracking() async throws {
        if WorldTrackingProvider.isSupported {
            do {
                try await session.run([worldTracking])
                for await update in worldTracking.anchorUpdates {
                    switch update.event {
                    case .added:
                        let subjectClone = subject.clone(recursive: true)
                        subjectClone.isEnabled = true
                        subjectClone.name = update.anchor.id.uuidString
                        subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        worldAnchorEntities[update.anchor.id] = subjectClone
                        print("🟢 Anchor added \(update.anchor.id)")
                    case .updated:
                        guard let entity = worldAnchorEntities[update.anchor.id] else {
                            print("No entity found to update for anchor \(update.anchor.id)")
                            return
                        }
                        entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        print("🔵 Anchor updated \(update.anchor.id)")
                    case .removed:
                        worldAnchorEntities[update.anchor.id]?.removeFromParent()
                        worldAnchorEntities.removeValue(forKey: update.anchor.id)
                        print("🔴 Anchor removed \(update.anchor.id)")
                        if let remainingAnchors = await worldTracking.allAnchors {
                            print("Remaining Anchors: \(remainingAnchors.count)")
                        }
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }
}
1 reply · 2 boosts · 182 views · May ’25
How to programmatically update Model Position Offset of GeometryModifier?
Is it possible to dynamically update the ModelPositionOffset of a GeometryModifier with a depth map image? In my code I set up the parameter for a "DepthMapTexture" universal input node and tried setting the depth map on its depthTextureResource. I have two DrawableQueues: one for setting InputTexture and one for setting DepthMapTexture. The snippet only shows the part that concerns setting DepthMapTexture; this is where I define the plane entity, and this is the shader graph. What I noticed with GeometryModifier is that the depth map image has to have the same dimensions as the input image. When I applied this material to a .usdz file with a pre-assigned image and depth map in Reality Composer Pro, and loaded that Entity from code, the depth map was applied correctly. What I'm unsure about is whether it is impossible to define a model entity in code, apply the ShaderGraphMaterial from RCP, and dynamically update the image used in the GeometryModifier. Maybe I'm missing something when defining the Entity, something that allows geometric modifications?
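If the shader graph exposes "DepthMapTexture" as a promoted material parameter, one way to push a new depth texture from code is ShaderGraphMaterial.setParameter(name:value:) followed by re-assigning the material to the model. A sketch under that assumption (the parameter name follows the post; the function itself is illustrative):

import RealityKit

func applyDepthMap(_ depthTexture: TextureResource, to planeEntity: ModelEntity) {
    // Grab the ShaderGraphMaterial currently on the plane.
    guard var material = planeEntity.model?.materials.first as? ShaderGraphMaterial else { return }
    do {
        // As noted above, the depth map must match the input image's dimensions.
        try material.setParameter(name: "DepthMapTexture", value: .textureResource(depthTexture))
        planeEntity.model?.materials = [material]
    } catch {
        print("Failed to set DepthMapTexture: \(error)")
    }
}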
1 reply · 0 boosts · 295 views · Mar ’25
Significant deviation of depth map values captured in ARKit framework
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene. The problem is that the depth map values captured in ARFrame deviate significantly from the real distances, and nonlinearly: for distances below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values. For example, the generated depthmap.tiff reads 1.9 m where the real distance is 3 m. Below is my code for generating the TIFF file that records the depth map data.

Generated TIFF file (captured from ARKit): as shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 m. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.

Generated TIFF file (captured from AVFoundation): in comparison, the depth map captured from traditional AVFoundation on the same hardware is much clearer, though the values don't seem to be in meters.
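The TIFF-generation snippet referenced above isn't included in this listing; for context, one way to dump ARFrame.sceneDepth as a float TIFF looks roughly like this (illustrative, not the poster's code):

import ARKit
import CoreImage

func writeDepthTIFF(from frame: ARFrame, to url: URL) throws {
    // sceneDepth.depthMap holds 32-bit float distances in meters.
    guard let depthMap = frame.sceneDepth?.depthMap else { return }
    let ciImage = CIImage(cvPixelBuffer: depthMap)
    let context = CIContext()
    try context.writeTIFFRepresentation(
        of: ciImage,
        to: url,
        format: .Lf, // single-channel 32-bit float
        colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!)
}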
1 reply · 0 boosts · 493 views · Feb ’25
Missing Properties in BillboardComponent
In an earlier beta, BillboardComponent had rotationAxis and upDirection properties, which allowed fine-grained control over how an entity rotates toward the camera. Currently it is only possible to orient the entity's z axis. Looking at the robot in the documentation, rotating its z axis causes its feet to lift off the ground. Before, it was possible to restrict the rotation to one axis (y, for example) so that the robot's feet stayed on the ground:

billboard.upDirection = [0, 1, 0]
billboard.rotationAxis = [0, 1, 0]

Is there an alternative way to achieve this? Are these properties (or something similar) coming back?
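One manual alternative while those properties are gone: rotate the entity around y only, toward a target position you obtain elsewhere (for example from WorldTrackingProvider.queryDeviceAnchor). A sketch, with the function name and sign conventions as assumptions (RealityKit entities face -Z):

import RealityKit
import simd

func faceTargetAroundY(entity: Entity, targetWorldPosition: SIMD3<Float>) {
    var toTarget = targetWorldPosition - entity.position(relativeTo: nil)
    toTarget.y = 0 // constrain the rotation to the y axis, so the "feet" stay put
    guard simd_length(toTarget) > 1e-5 else { return }
    // Yaw that points the entity's -Z axis at the target.
    let yaw = atan2(-toTarget.x, -toTarget.z)
    entity.setOrientation(simd_quatf(angle: yaw, axis: [0, 1, 0]), relativeTo: nil)
}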
1 reply · 0 boosts · 294 views · Mar ’25
What's the relationship between SwiftUI frame sizes and RealityKit entity sizes?
I currently want to recreate a window similar to a system window in an ImmersiveSpace, but RealityKit only uses meters. I create a plane entity, and I don't know what size in meters to give it so that it matches the size of a system window exactly. I would also like to know the y and z position of the system window in the immersive space.
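On the size question: SwiftUI on visionOS exposes a PhysicalMetricsConverter in the environment that converts points to meters, which is the bridge between SwiftUI frame sizes and RealityKit's meter-based meshes. A minimal sketch of sizing a plane to its own view's frame (view structure and material are illustrative; this doesn't give you another window's position):

import SwiftUI
import RealityKit

struct WindowSizedPlane: View {
    @Environment(\.physicalMetrics) private var physicalMetrics

    var body: some View {
        GeometryReader { proxy in
            RealityView { content in
                // Convert the view's point size to meters and size the plane to match.
                let width = Float(physicalMetrics.convert(proxy.size.width, to: .meters))
                let height = Float(physicalMetrics.convert(proxy.size.height, to: .meters))
                let plane = ModelEntity(
                    mesh: .generatePlane(width: width, height: height),
                    materials: [SimpleMaterial(color: .white, isMetallic: false)])
                content.add(plane)
            }
        }
    }
}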
1 reply · 0 boosts · 342 views · Jan ’25
Is there any way to use Object Tracking in an iOS (iPhone) AR app?
I have been referencing the Object Tracking tutorial from WWDC 2024 for visionOS, which shows how Create ML is used to create a reference object that can then be tracked in an ARKit session. I'm looking forward to building this feature in an AR app for iPhone (I'm using an iPhone 13 Pro Max), and I have created a couple of reference objects with Create ML.
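One thing to keep in mind: the Create ML reference-object (.referenceobject) workflow shown in that session is aimed at visionOS object tracking; on iPhone, ARKit's longstanding object-detection path uses scanned ARReferenceObject (.arobject) assets instead. A sketch of that iOS path ("DetectionObjects" is an assumed asset-catalog resource group name):

import ARKit

func startObjectDetection(in sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    // Load .arobject reference objects bundled in the asset catalog.
    configuration.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "DetectionObjects", bundle: nil) ?? []
    sceneView.session.run(configuration)
    // Detected objects arrive as ARObjectAnchor instances via the session delegate.
}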
1 reply · 0 boosts · 322 views · Mar ’25
Apple Vision Pro Developer Strap: Video Out and Recording Time Limit
Is there any way to extend the video recording time limit in Reality Composer Pro from 3:00 to a longer value, such as by editing preferences in Terminal or some other workaround? Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to QuickTime or some other video capture tool? I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
1 reply · 0 boosts · 141 views · Jun ’25
ARDepthData.confidenceMap only returns low confidence on certain devices
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low-confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results, as expected, so that is not a viable option. Other LiDAR-based apps have been tested with this device and the results are the same: no points, or noisy point clouds in apps that allow a lower confidence threshold. On devices that exhibit this behavior, Apple's "Displaying a point cloud using scene depth" sample app can be used to visualize the issue. The first reports of this behavior occurred as early as iOS 18.4. We're looking for recommendations on which team(s) at Apple to reach out to with these findings, since the behavior manifests on only a small sample of devices.
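For anyone trying to reproduce this, the confidence map can be inspected directly; a small sketch that counts samples at or above a given confidence level (function name is illustrative):

import ARKit

func sampleCount(in frame: ARFrame, atLeast minimum: ARConfidenceLevel) -> Int {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return 0 }
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return 0 }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)

    // The confidence map is one byte per pixel holding ARConfidenceLevel raw values.
    var count = 0
    for y in 0..<height {
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where row[x] >= UInt8(minimum.rawValue) {
            count += 1
        }
    }
    return count
}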
1 reply · 0 boosts · 225 views · Jun ’25
How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app using Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble real-world visuals as captured by the camera, similar to the results shown in this repository. In the referenced repository the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just at the final result. To clarify, I'm not focused on scanning individual objects but rather larger environments like rooms, houses, or outdoor spaces such as streets. Here's what I'm aiming for:

- Realistic colors and textures that match what the camera sees during the scan.
- Continuous texture rendering during the scanning process, not just in the final exported model.

Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated. Thank you!
1 reply · 0 boosts · 107 views · May ’25
Object Capture app getting rejected for not supporting non-Pro models
Hi, I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones; that's a limitation of the API provided by Apple itself. I submitted the app for review, but it has been rejected (twice) because it doesn't work on non-Pro models. Even though I explained that capturing requires LiDAR and is supported only on Pro models, it still gets rejected after being tested on non-Pro models. Is there a way out?
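Not App Review guidance, but one common mitigation is to keep the app installable everywhere and gate only the capture flow at runtime; ObjectCaptureSession.isSupported makes that check. A sketch (the view and its copy are illustrative):

import RealityKit
import SwiftUI

struct CaptureEntryView: View {
    var body: some View {
        if ObjectCaptureSession.isSupported {
            // Placeholder for the real capture flow.
            Text("Start a new LiDAR capture")
        } else {
            // Devices without LiDAR still get a useful experience.
            Text("Capturing requires a Pro device with LiDAR. You can still view and share existing scans.")
        }
    }
}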
1 reply · 0 boosts · 350 views · Feb ’25
Implementing multi-pass rendering in visionOS
I'm working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output. What's the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on handling this efficiently in RealityKit or Metal? Thanks!
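On the single-command-buffer question, the usual pattern is two encoders in one command buffer: the first renders into an offscreen texture, the second samples it while drawing into the final render target. A sketch with assumed pipeline states, sizes, and a caller-provided final pass descriptor:

import Metal

func encodeTwoPassFrame(device: MTLDevice,
                        commandQueue: MTLCommandQueue,
                        scenePipeline: MTLRenderPipelineState,
                        postPipeline: MTLRenderPipelineState,
                        finalPassDescriptor: MTLRenderPassDescriptor) {
    // Offscreen color target for the first pass (size is an assumption).
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                        width: 1920, height: 1080,
                                                        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]
    guard let offscreen = device.makeTexture(descriptor: desc),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Pass 1: render the scene into the offscreen texture.
    let firstPass = MTLRenderPassDescriptor()
    firstPass.colorAttachments[0].texture = offscreen
    firstPass.colorAttachments[0].loadAction = .clear
    firstPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: firstPass) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... draw scene geometry here ...
        encoder.endEncoding()
    }

    // Pass 2: post-process by sampling the offscreen texture into the final target.
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: finalPassDescriptor) {
        encoder.setRenderPipelineState(postPipeline)
        encoder.setFragmentTexture(offscreen, index: 0)
        // ... draw a full-screen triangle here ...
        encoder.endEncoding()
    }

    commandBuffer.commit()
}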
1 reply · 0 boosts · 363 views · Mar ’25
Rendering Order with ModelSortGroup
I have a huge sphere with the camera inside it, and I turn on front-face culling in the ShaderGraphMaterial applied to that sphere so that I can place other 3D content inside. However, when it comes to attachments, object occlusion never works as I expect: my attachments are occluded by the sphere (some are not, so the behavior is not deterministic). I suspected a depth-testing issue and started using ModelSortGroup to reorder the rendering sequence, but it doesn't work. Searching the internet, the comments on this post indicate that ModelSortGroup simply doesn't work on attachments. So how should I tackle this issue now, so that my attachments appear inside my sphere? OS/SDK: visionOS 2.3 / Xcode 16.3
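For reference, this is roughly what the ModelSortGroup attempt looks like; as the linked comments suggest, it reportedly has no effect on RealityView attachments, and the component needs to sit on entities that actually carry models (names here are illustrative):

import RealityKit

func applySortOrder(sphere: ModelEntity, innerContent: ModelEntity) {
    // Draw the enclosing sphere first, then the content meant to appear inside it.
    let group = ModelSortGroup(depthPass: nil)
    sphere.components.set(ModelSortGroupComponent(group: group, order: 0))
    innerContent.components.set(ModelSortGroupComponent(group: group, order: 1))
}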
1 reply · 0 boosts · 493 views · Feb ’25