Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

Accessing pupil diameter in visionOS
Previously I developed software using SMI eye trackers, both screen-mounted and their mobile glasses, for unique therapeutic and physiology applications. Sadly, after SMI was bought by Apple, their hardware and software were taken off the market, and it is now very difficult to find systems secondhand. The Apple Vision Pro integrates the SMI hardware. While I can use ARKit to get gaze position, I do not see a way to access information that the SMI hardware previously exposed, particularly dwell time and pupil diameter. I am hoping (or asking) that, if a user has a properly set up Optic ID and opts in, it might be possible, either in the present or a future version of visionOS, to get access to the data streams for dwell times and pupil diameter. Pupil diameter is particularly important, as it is a very good physiological measure of how much stress a person is experiencing, which is critical to some of the therapeutic applications for which we formerly used SMI hardware. Any ideas would be appreciated, or, if this is not possible, please consider proposing this to the visionOS team!
2
0
248
Jul ’25
Control mirroring of Apple Vision Pro on other devices?
Hi! I'm new on this forum, so if I need to update this post with more info, or anything else, please let me know. I'm using the Apple Vision Pro to develop an app (with Unity). To demonstrate what the user sees in the headset, I would like to mirror the view to another device (an iPad in this case). I managed to do this without any issue. My problem is that, in the Vision Pro, I have an interface that the user can interact with, but I would like to manage the interface myself from the iPad. What I mean is that the user can (or can't, it doesn't matter) see the interface in the headset, while the interface is controlled by me on the iPad. Is there any way to do this? Or is this a question I should ask on Unity's forum? (I don't think so, because it seems related to the mirroring function, no?)
2
0
351
Mar ’25
Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, the guidance was: "As a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." (https://developer.apple.com/videos/play/wwdc2024/10186/?time=147) Is there a revised recommendation for the M5 Apple Vision Pro?
2
1
755
Nov ’25
Metal Compositor Service & Persona (VisionOS)
Hello, I'm currently trying to make a collaborative app. It only works in RealityView; when I tried to use a CompositorLayer as below, the Personas disappeared.

ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}

Is there any potential solution to see Personas in a Metal view? Thanks in advance!
2
0
747
Sep ’25
Automatic Plane Measurements like in the Apple Measure App
I'm working on an iOS app that needs to measure the area of planes or surfaces, like the length and width of objects, just as the Apple Measure app does. I've been exploring ARKit, but I'm curious whether there are any APIs or techniques that can help automate the process of detecting and measuring planes. Specifically, I'm looking for a way to automatically detect and measure planes (e.g., from a top-down view), for example measuring a box's width and length. I have attached a screenshot and a video of the Apple Measure app doing it. Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I'd love to hear from anyone who's tackled something similar. Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
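For reference, ARKit's built-in plane detection does report the size of each detected plane. A minimal sketch, assuming iOS 16's ARPlaneAnchor.planeExtent API (the class and method names around it are illustrative, not from the original post):

import ARKit

final class PlaneMeasurer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    // ARKit refines each plane's extent as it gathers more data,
    // so the reported size converges over time.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            let width = plane.planeExtent.width   // meters
            let length = plane.planeExtent.height // meters
            print("Plane \(plane.identifier): \(width) m x \(length) m")
        }
    }
}

Note this measures the detected plane itself, not an arbitrary object resting on it; Measure-style box measurement likely combines plane detection with raycasting or the LiDAR scene mesh.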
2
0
711
Jan ’25
CameraFrameProvider distortion correction
Hi, I'm trying to correct the lens distortion in frames provided by the Enterprise API camera frame provider. The frames seem to carry only intrinsics/extrinsics, but not a distortion lookup table. Is there some magic setting or function to do this (I can't seem to find anything like it)? Or is there a way to use AVCameraCalibrationData together with the provider?
2
0
401
Mar ’25
How to fix "Sample 0 missing LiDAR point cloud!" error?
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files. When I load the files, I always see the error "Sample 0 missing LiDAR point cloud!" for each individual sample. Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data which can be extracted using, e.g., heif-convert on my Mac. Comparing the .heic I created to one from ObjectCaptureSession (which doesn't trigger the LiDAR warning), I noticed the only difference being the HEIC information here: So my questions are:
1. Is this the missing information in my manual capture that causes the warning?
2. Can I somehow add this information in an AVCaptureSession?
3. Does this information allow better photogrammetry results?
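In case it helps others hitting the same warning: samples can also be fed to the session programmatically, with depth attached explicitly, rather than relying on what was embedded in the .heic files. A minimal sketch, assuming RealityKit's PhotogrammetrySample API (the pixel-buffer sources are placeholders):

import RealityKit
import CoreVideo

// Build PhotogrammetrySamples by hand so the depth map is attached
// explicitly to each sample.
func makeSamples(images: [CVPixelBuffer], depths: [CVPixelBuffer]) -> [PhotogrammetrySample] {
    zip(images, depths).enumerated().map { index, pair in
        var sample = PhotogrammetrySample(id: index, image: pair.0)
        sample.depthDataMap = pair.1 // expects kCVPixelFormatType_DepthFloat32
        return sample
    }
}

The resulting samples can then be wrapped in an AsyncSequence and passed to PhotogrammetrySession(input:configuration:). Whether this avoids the LiDAR point cloud warning specifically is unverified.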
2
0
206
1d
Performance drop when particle emitter is combined with video play
Hi all, we're a studio building an app, and one scene includes a 3D asset with a smoke particle emitter and a curved mesh that plays video. When the video alone plays, or the particle effect alone runs, the scene works fine, but the frame rate drops drastically when both are turned on. How do I solve this? It's an important storytelling feature.
2
0
275
Oct ’25
Nearby Sharing a Volume won't work
Hi, we've developed an app for Vision Pro that uses the GroupActivities SDK to provide shared experiences for our users. Remote participation works great, but we can't get nearby sharing to work. The behaviour we're observing:
1. User 1 brings up the share sheet from the Volume; the 2nd Vision Pro is visible.
2. User 1 starts nearby sharing.
3. Session initialisation runs for approx. 30 seconds, then fails.
Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once. As stated in the "Configure your visionOS app for sharing with people nearby" article, we didn't make any changes to our implementation to support nearby sharing. Any help would be greatly appreciated. Kind regards, David
2
0
232
3d
Version update in Vision Pro
Hi, I'm developing an app for Vision Pro using Xcode. After updating to the latest version, things that worked in my app suddenly didn't. In my app flow I tap spheres to get their positions; for some reason there is an offset between where I tap and where the marker for that position shows up. Here's the part of the code that does that, and the part responsible for the alignment that happens afterwards:

func loadMainScene(at position: SIMD3<Float>) async {
    guard let content = self.content else { return }
    do {
        let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
        rootEntity.scale = SIMD3<Float>(repeating: 0.5)
        rootEntity.generateCollisionShapes(recursive: true)
        self.modelRootEntity = rootEntity

        let bounds = rootEntity.visualBounds(relativeTo: nil)
        print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")

        let pivotEntity = Entity()
        pivotEntity.addChild(rootEntity)
        self.pivotEntity = pivotEntity

        let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
        modelAnchor.addChild(pivotEntity)
        content.add(modelAnchor)
        updateModelOpacity(0.5)
        self.modelAnchor = modelAnchor

        rootEntity.visit { entity in
            print("👀 Entity in model: \(entity.name)")
            if entity.name.lowercased().hasPrefix("focus") {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                print("🎯 Made tappable: \(entity.name)")
            }
        }
        print("✅ Model loaded with collisions")

        guard let sphere = placementSphere else { return }
        let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
        var newXform = sphereWorldXform
        newXform.columns.3.y += 0.1 // move up by 10 cm
        let gridAnchor = AnchorEntity(world: newXform)
        self.gridAnchor = gridAnchor
        content.add(gridAnchor)

        let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)
        let gridSizeX = 18
        let gridSizeY = 10
        let gridSizeZ = 10
        let spacing: Float = 0.05
        let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
        let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
        let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5

        for i in 0..<gridSizeX {
            for j in 0..<gridSizeY {
                for k in 0..<gridSizeZ {
                    if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
                    let cell = baseScene.clone(recursive: true)
                    cell.name = "Sphere"
                    cell.scale = .one * 0.02
                    cell.position = [
                        startX + Float(i) * spacing,
                        startY + Float(j) * spacing,
                        startZ + Float(k) * spacing
                    ]
                    cell.generateCollisionShapes(recursive: true)
                    gridCells.append(cell)
                    gridAnchor.addChild(cell)
                }
            }
        }
        content.add(gridAnchor)
        print("✅ Grid added")
    } catch {
        print("❌ Failed to load: \(error)")
    }
}

private func handleModelOrGridTap(_ tappedEntity: Entity) {
    guard let modelRootEntity = modelRootEntity else { return }
    let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
    let worldPosition = tappedEntity.position(relativeTo: nil)
    switch tapStep {
    case 0:
        modelPointA = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
        print("📍 Model point A: \(localPosition)")
        tapStep += 1
    case 1:
        modelPointB = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
        print("📍 Model point B: \(localPosition)")
        tapStep += 1
    case 2:
        targetPointA = worldPosition
        targetMarkerA = createMarker(at: worldPosition, color: [0, 1, 0])
        modelAnchor?.addChild(targetMarkerA!)
        print("✅ Target point A: \(worldPosition)")
        tapStep += 1
    case 3:
        targetPointB = worldPosition
        targetMarkerB = createMarker(at: worldPosition, color: [0, 0, 1])
        modelAnchor?.addChild(targetMarkerB!)
        print("✅ Target point B: \(worldPosition)")
        alignmentReady = true
        tapStep += 1
    default:
        print("⚠️ Unexpected tap on model helper at step \(tapStep)")
    }
}

func alignModel2Points() {
    guard let modelPointA = modelPointA,
          let modelPointB = modelPointB,
          let targetPointA = targetPointA,
          let targetPointB = targetPointB,
          let modelRootEntity = modelRootEntity,
          let pivotEntity = pivotEntity,
          let modelAnchor = modelAnchor else {
        print("❌ Missing data for alignment")
        return
    }
    let modelVec = modelPointB - modelPointA
    let targetVec = targetPointB - targetPointA
    let modelLength = length(modelVec)
    let targetLength = length(targetVec)
    let scale = targetLength / modelLength
    let modelDir = normalize(modelVec)
    let targetDir = normalize(targetVec)
    var axis = cross(modelDir, targetDir)
    let axisLength = length(axis)
    var rotation = simd_quatf()
    if axisLength < 1e-6 {
        if dot(modelDir, targetDir) > 0 {
            rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        } else {
            let up: SIMD3<Float> = [0, 1, 0]
            axis = cross(modelDir, up)
            if length(axis) < 1e-6 { axis = cross(modelDir, [1, 0, 0]) }
            rotation = simd_quatf(angle: .pi, axis: normalize(axis))
        }
    } else {
        let dotProduct = dot(modelDir, targetDir)
        let clampedDot = max(-1.0, min(dotProduct, 1.0))
        let angle = acos(clampedDot)
        rotation = simd_quatf(angle: angle, axis: normalize(axis))
    }
    modelRootEntity.scale = .one * scale
    modelRootEntity.orientation = rotation
    let transformedPointA = rotation.act(modelPointA * scale)
    pivotEntity.position = -transformedPointA
    modelAnchor.position = targetPointA
    alignedModelPosition = modelAnchor.position
    print("✅ Aligned with scale \(scale), rotation \(rotation)")
}
2
0
287
Oct ’25
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes video input from UVC, processes it with Metal, and eventually produces an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but MTLTexture -> CGImage -> TextureResource only reaches 7 fps. Is there any way I can make it faster?
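One approach worth trying is RealityKit's TextureResource.DrawableQueue, which lets you blit each frame GPU-to-GPU and skip the CGImage round trip entirely. A minimal sketch, assuming the DrawableQueue API (the helper names are made up; source and drawable textures must match in size and pixel format for a whole-texture blit):

import RealityKit
import Metal

// Attach a drawable queue once; RealityKit then samples whatever
// was last presented to it.
func attachQueue(to resource: TextureResource,
                 width: Int, height: Int) throws -> TextureResource.DrawableQueue {
    let desc = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(desc)
    resource.replace(withDrawables: queue)
    return queue
}

// Per frame: copy the processed MTLTexture straight into the queue.
func present(_ source: MTLTexture,
             via queue: TextureResource.DrawableQueue,
             commandQueue: MTLCommandQueue) {
    guard let drawable = try? queue.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: source, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()
}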
2
0
407
Oct ’25
Getting the world position of a QR code
Hi, I would love your help with this. I'm trying to get the position in space of two QR codes so I can align content to their positions. Detection runs, but the QR codes' position is always (0, 0, 0) and I don't understand why. Here's my code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @ObservedObject var qrCoordinator: QRCoordinator
    @ObservedObject var coordinator: ImmersiveCoordinator
    let qrName: String
    @Binding var startQRDetection: Bool
    @State private var anchor: AnchorEntity? = nil
    @State private var detectionTask: Task<Void, Never>? = nil

    var body: some View {
        RealityView { content in
            // Add the QR anchor once (must exist before detection starts)
            if anchor == nil {
                let imageAnchor = AnchorEntity(.image(group: "QRs", name: qrName))
                content.add(imageAnchor)
                anchor = imageAnchor
                print("📌 Created anchor for \(qrName)")
            }
        }
        .onChange(of: startQRDetection) { enabled in
            if enabled {
                startDetection()
            } else {
                stopDetection()
            }
        }
        .onDisappear {
            stopDetection()
        }
    }

    private func startDetection() {
        guard detectionTask == nil, let anchor = anchor else { return }
        detectionTask = Task {
            var detected = false
            while !Task.isCancelled && !detected {
                print("🔎 Checking \(qrName)... isAnchored=\(anchor.isAnchored)")
                if anchor.isAnchored {
                    // wait a short moment to let transform update
                    try? await Task.sleep(nanoseconds: 100_000_000)
                    let worldPos = anchor.position(relativeTo: nil)
                    if worldPos != .zero {
                        // relative to modelRootEntity if available
                        var posToSave = worldPos
                        if let modelEntity = coordinator.modelRootEntity {
                            posToSave = anchor.position(relativeTo: modelEntity)
                            print("converted to model position")
                        } else {
                            print("⚠️ modelRootEntity not available, using world position")
                        }
                        print("✅ \(qrName) detected at position: world=\(worldPos) saved=\(posToSave)")
                        if qrName == "reanchor1" {
                            qrCoordinator.qr1Position = posToSave
                            let marker = createMarker(color: [0, 1, 0])
                            marker.position = .zero // sits directly on QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker1 added")
                        } else if qrName == "reanchor2" {
                            qrCoordinator.qr2Position = posToSave
                            let marker = createMarker(color: [0, 0, 1])
                            marker.position = posToSave // sits directly on QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker2 added")
                        }
                        detected = true
                    } else {
                        print("⚠️ \(qrName) anchored but still at origin, retrying...")
                    }
                }
                try? await Task.sleep(nanoseconds: 500_000_000) // throttle loop
            }
            print("🛑 QR detection loop ended for \(qrName)")
            detectionTask = nil
        }
    }

    private func stopDetection() {
        detectionTask?.cancel()
        detectionTask = nil
    }

    private func createMarker(color: SIMD3<Float>) -> ModelEntity {
        let sphere = MeshResource.generateSphere(radius: 0.05)
        let material = SimpleMaterial(color: UIColor(
            red: CGFloat(color.x),
            green: CGFloat(color.y),
            blue: CGFloat(color.z),
            alpha: 1.0
        ), isMetallic: false)
        let marker = ModelEntity(mesh: sphere, materials: [material])
        marker.name = "marker"
        return marker
    }
}
2
0
479
Oct ’25
Ground shadow and visibility
Hey, I'm building an interior design app in visionOS 2.0. I fetch the planes detected by ARKit and add them with an OcclusionMaterial to make sure my objects are occluded accordingly. However, I'm facing two problems with this:
1. Ground shadows are completely disabled as soon as an occlusion material is added, even if I inset the occluding planes. I've looked into https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit), but when I tried to use it, it behaved exactly like OcclusionMaterial.
2. The planes also occlude all windows (mine and the system ones), which is behaviour I'd like to avoid; I only want to occlude the entities I added.
Is there a way to achieve this? Thanks in advance
2
1
477
Jan ’25
Shared/GroupImmersive Space – Query Local Device Transform
Hi, I am in the process of implementing SharePlay into our app. The shared experience opens an Immersive Space, and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true. visionOS then establishes a shared coordinate space for the immersive space. From the docs: "To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session". There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor.originFromAnchorTransform to position content in front of the user (plus some Z offset and smooth interpolation). This works fine in non-SharePlay instances, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, so I end up with a transform that might be offset. Here is a video of the issue in action: https://streamable.com/205r2p The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position. The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see, it's offset. Is there any way I can query this transform locally for the user during a FaceTime call? I would also like to know whether it's possible to disable this automatic entity transform syncing behaviour. Setting entity.synchronization = nil results in the entity not showing up at all. https://developer.apple.com/documentation/realitykit/synchronizationcomponent Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach? Thank you!
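For context, a minimal sketch of the placement logic described above, assuming visionOS ARKit's WorldTrackingProvider (the function name and offset value are illustrative):

import ARKit
import QuartzCore
import simd

// Query the current device pose and offset it along -Z so content lands
// in front of the wearer. originFromAnchorTransform is relative to the
// session origin; during SharePlay that origin becomes the shared space
// origin, which is where the offset described above comes from.
func frontOfUserTransform(provider: WorldTrackingProvider,
                          distance: Float = 0.7) -> simd_float4x4? {
    guard let device = provider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    var offset = matrix_identity_float4x4
    offset.columns.3.z = -distance
    return device.originFromAnchorTransform * offset
}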
2
0
277
Oct ’25
HoverEffectStyle in visionOS 26.0
This no longer highlights my entity when I look at it:

RealityView { content in
    let hoverComponent = HoverEffectComponent(.spotlight(
        HoverEffectComponent.SpotlightHoverEffectStyle(
            color: .white,
            strength: 2.0
        )
    ))
    entity.components.set(hoverComponent)
}

The entity is in a window. The same code works in an immersive view. The CollisionComponent and input type are set in RCP. It has also stopped working in my published app (built under visionOS 2.x) on my visionOS 26 device. If I use a 2.x simulator, it works. Is this a bug, or is there something I'm missing? Thanks.
2
0
843
Oct ’25
Building a custom render pipeline with RealityKit
Hello experts and question seekers, I have been trying to get Gaussian splats working with RealityKit, but it hasn't worked out for me. The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter My idea was to use the renderer provided by RealityKit (RealityRenderer, https://developer.apple.com/documentation/realitykit/realityrenderer) and the renderer provided by MetalSplatter (SplatRenderer, https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift). With a custom render pipeline I would then be able to compose the outputs of the two renderers, enabling, for example, immersive scenery with realistic environment scans as Gaussian splats, with RealityKit providing the features to build extra scenery around them, e.g. dynamic 3D models inside Gaussian splats. However, I am not able to do that with the current implementation of RealityRenderer. First, RealityRenderer appears to be an API that only renders colour information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil. Second, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender, due to the following error messages:

Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37

I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, letting me build a custom rendering pipeline using some of the Metal developer workflows (https://developer.apple.com/documentation/xcode/metal-developer-workflows/). Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:

guard let currentDrawable = view.currentDrawable else { return }
let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(
    deltaTime: 0.0,
    cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
    whenScheduled: { realityRenderer in
        print("Rendering scheduled")
    },
    onComplete: { realityRenderer in
        print("Rendering completed")
    })

Can you please tell me what I am doing wrong? Is there any solution that enables me to use RealityKit with, for example, Gaussian splats? Any help is greatly appreciated. All the best, Ethem Kurt
2
1
751
Feb ’25
SharePlay on visionOS with remote participants
Hi everyone, I'm building a visualization app for Vision Pro that uses SharePlay and GroupActivities to explore datasets collaboratively. I've successfully implemented the new SharedWorldAnchor feature, and everything works well with nearby, local participants. However, I'm stuck on one point: how can I share a world anchor with remote participants who join via FaceTime as Spatial Personas? Apple's demo app (where multiple users move a plane model around) seems to suggest that this is possible. For context, I'm building an immersive app with Metal rendering. Any guidance or examples would be greatly appreciated! Thanks, Jens
2
1
374
Sep ’25