Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


Body segmentation/occlusion on the Apple Vision Pro
Hello, I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of my virtual objects to occlude the virtual objects behind them, similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people. Unfortunately, I could not find any documentation about this for visionOS. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools? Thanks! Mehdi
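For context, the linked iOS behavior is enabled through ARKit frame semantics; a minimal sketch of that iOS-only API (not available on visionOS, as far as I can tell):

```swift
import ARKit

// iOS-only sketch: people occlusion via ARKit frame semantics.
// visionOS does not currently expose an equivalent public body-segmentation
// API; the closest built-in behavior is the system's handling of the user's
// own hands/arms (see SwiftUI's upperLimbVisibility modifier).
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
session.run(configuration)
```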
Replies: 1 · Boosts: 0 · Views: 417 · Activity: Feb ’25
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment: Xcode 16.2, visionOS SDK 2.4, Swift 6.1. Target: Apple Vision Pro (immersive space). Frameworks: ARKit, RealityKit, SwiftUI.

What I'm trying to do: I have a view-model class PlacementManager that holds two AR providers:

```swift
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
```

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's happening: If I declare them as constants:

```swift
private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()
```

I get a compile error when I later do:

```swift
self.worldTracking = newWorldTracking
// Cannot assign to property: 'worldTracking' is a 'let' constant
```

If I change them to uninitialized vars:

```swift
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
```

then in my init() I get: "self used in property access 'worldTracking' before all stored properties are initialized".

Code snippet:

```swift
@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run([newWorldTracking, newPlaneDetection])
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}
```

What I've tried:
- Giving them default values at declaration (= WorldTrackingProvider())
- Initializing them at the top of init() before any use
- Passing the new providers into arkitSession.run(...)

My question: What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that they're fully initialized before use in init(), and so I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads/docs) would be greatly appreciated!
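A minimal sketch of one common pattern (an assumption, since the full init isn't shown): give both properties default instances at declaration, which makes every stored property initialized before init() runs, and keep them as var so they can be replaced later. Here resetProviders() is a hypothetical stand-in for the poster's setEnvironment(_:), and the session property stands in for appState!.arkitSession:

```swift
import ARKit
import Observation

@Observable
final class PlacementManager {
    // Default instances: every stored property is initialized before init()
    // runs, so init() can freely pass them to helper objects.
    // `var` (not `let`) allows swapping in fresh providers later.
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider()
    private let arkitSession = ARKitSession() // hypothetical stand-in

    @MainActor
    func resetProviders() async throws {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        // Run the session with the fresh providers, then swap them in.
        try await arkitSession.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}
```

Creating fresh provider instances here matters because, as far as I know, a data provider can only be run once per ARKit session on visionOS.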
Replies: 0 · Boosts: 0 · Views: 129 · Activity: May ’25
Setting immersionStyle while in an immersive space breaks all entities.
I have my immersive space set up like this:

```swift
ImmersiveSpace(id: "Theater") {
    ImmersiveTeleopView()
        .environment(appModel)
        .onAppear() {
            appModel.immersiveSpaceState = .open
        }
        .onDisappear {
            appModel.immersiveSpaceState = .closed
        }
}
.immersionStyle(selection: .constant(appModel.immersionStyle.style), in: .mixed, .full)
```

which allows me to set the immersion style while in the space (from a Picker on a SwiftUI window). The scene responds correctly, but a lot of the functionality of my immersive space is gone after the change in style: I am no longer able to enable/disable entities (which I also have toggles for in the SwiftUI window). I have to exit and re-enter the immersive space to regain the ability to change the enabled state of my entities.

My appModel.immersionStyle is inspired by the Compositor Services demo (although I am using a RealityView) listed in https://developer.apple.com/documentation/CompositorServices/interacting-with-virtual-content-blended-with-passthrough and looks like this:

```swift
public enum IStyle: String, CaseIterable, Identifiable {
    case mixedStyle, fullStyle
    public var id: Self { self }

    var style: ImmersionStyle {
        switch self {
        case .mixedStyle: return .mixed
        case .fullStyle: return .full
        }
    }
}

/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    // Immersion style
    public var immersionStyle: IStyle = .mixedStyle
    // …
}
```
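Not a confirmed fix, just a sketch of an alternative worth trying: drive the selection from a real two-way Binding into the app model rather than .constant, so SwiftUI owns the current style across the switch. MixedImmersionStyle is the concrete SwiftUI type behind .mixed:

```swift
ImmersiveSpace(id: "Theater") {
    ImmersiveTeleopView()
        .environment(appModel)
}
.immersionStyle(
    selection: Binding(
        get: { appModel.immersionStyle.style },
        // Map the concrete ImmersionStyle back onto the app's enum.
        set: { appModel.immersionStyle = ($0 is MixedImmersionStyle) ? .mixedStyle : .fullStyle }
    ),
    in: .mixed, .full
)
```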
Replies: 1 · Boosts: 0 · Views: 230 · Activity: Oct ’25
Hand Tracking Latency When UITextView Becomes Active in Vision Pro Immersive Space
I'm placing a sphere at each fingertip and updating its position as the hand moves. Finger-joint tracking functions correctly, but I've observed noticeable latency in hand-tracking updates whenever a UITextView becomes active. This lag happens intermittently during app usage, lasting about 5-10 seconds, after which the latency disappears and the sphere starts following the finger joints immediately.

When I open the immersive space for the first time, the profiler shows a large performance spike, up to 328%. After that, it stabilizes and runs smoothly. Note: I don't observe any lag during the CPU spike to 300% on immersive-view load, yet the lag still occurs even when CPU usage remains below 100%.

I'm using the following code for hand tracking:

```swift
private func processHandTrackingUpdates() async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        if handAnchor.isTracked {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: leftHandJointEntities)
            case .right:
                rightHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: rightHandJointEntities)
            }
        } else {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = nil
                hideAllJoints(in: leftHandJointEntities)
            case .right:
                rightHandAnchor = nil
                hideAllJoints(in: rightHandJointEntities)
            }
        }
        await MainActor.run {
            handTrackingData.processNewHandAnchors(
                leftHand: self.leftHandAnchor,
                rightHand: self.rightHandAnchor
            )
        }
    }
}
```

And here's the function I'm using to update the joint positions:

```swift
private func updateHandJoints(
    for handAnchor: HandAnchor,
    with jointEntities: [HandSkeleton.JointName: Entity]
) {
    guard handAnchor.isTracked else {
        hideAllJoints(in: jointEntities)
        return
    }

    // Check if the little-finger tip and intermediate base are both tracked.
    if let tipJoint = handAnchor.handSkeleton?.joint(.littleFingerTip),
       let intermediateBaseJoint = handAnchor.handSkeleton?.joint(.littleFingerIntermediateTip),
       tipJoint.isTracked, intermediateBaseJoint.isTracked,
       let pinkySphere = jointEntities[.littleFingerTip] {

        // Convert joint transforms to world space.
        let tipTransform = handAnchor.originFromAnchorTransform * tipJoint.anchorFromJointTransform
        let intermediateBaseTransform = handAnchor.originFromAnchorTransform * intermediateBaseJoint.anchorFromJointTransform

        // Extract positions from the transforms.
        let tipPosition = SIMD3<Float>(tipTransform.columns.3.x, tipTransform.columns.3.y, tipTransform.columns.3.z)
        let intermediateBasePosition = SIMD3<Float>(intermediateBaseTransform.columns.3.x, intermediateBaseTransform.columns.3.y, intermediateBaseTransform.columns.3.z)

        // Calculate the midpoint.
        let midpointPosition = (tipPosition + intermediateBasePosition) / 2.0

        // Position the sphere at the midpoint and make it visible.
        pinkySphere.isEnabled = true
        pinkySphere.transform.translation = midpointPosition
    } else {
        // If either joint is not tracked, hide the sphere.
        jointEntities[.littleFingerTip]?.isEnabled = false
    }

    // Update the positions of all other hand-joint spheres.
    for (jointName, entity) in jointEntities {
        if jointName == .littleFingerTip {
            // Already handled the pinky above.
            continue
        }
        guard let joint = handAnchor.handSkeleton?.joint(jointName), joint.isTracked else {
            entity.isEnabled = false
            continue
        }
        entity.isEnabled = true
        let jointTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.transform.translation = SIMD3<Float>(jointTransform.columns.3.x, jointTransform.columns.3.y, jointTransform.columns.3.z)
    }
}
```

I've attached both a profiler trace and video recordings from Vision Pro that clearly demonstrate the issue.

Profiler: https://drive.google.com/file/d/1fDWyGj_fgxud2ngkGH_IVmuH_kO-z0XZ
Vision Pro recordings:
https://drive.google.com/file/d/17qo3U9ivwYBsbaSm26fjaOokkJApbkz-
https://drive.google.com/file/d/1LxTxgudMvWDhOqKVuhc3QaHfY_1x8iA0

Has anyone else experienced this behavior? My thought is that there might be some background calculations happening at the OS level causing this latency. Any guidance would be greatly appreciated. Thanks!
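As a small tidy-up unrelated to the latency itself, the repeated column-3 extraction could be factored into a helper; a sketch:

```swift
import simd

extension simd_float4x4 {
    /// Translation stored in the matrix's fourth column.
    var translation: SIMD3<Float> {
        SIMD3<Float>(columns.3.x, columns.3.y, columns.3.z)
    }
}

// Usage:
// entity.transform.translation =
//     (handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform).translation
```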
Replies: 0 · Boosts: 0 · Views: 396 · Activity: Sep ’25
Persistent Entity Position
I want to let users place 2D/3D "artworks" on detected walls and have them reappear in exactly the same real-world spot after quitting and relaunching the app (like widgets do, but for my own entities).

Environment: Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider. Entities are parented to a holder that's aligned to a wall via plane/mesh raycasts.

What I've tried:
- Create a WorldAnchor at placement; save its UUID plus the full 4×4 transform
- On next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
- Gate the restore on relocalization/mesh updates, and disable all raycast/search after restore

Issue: After relaunch, placement still resolves relative to the current device pose, not the same wall position.

Questions:
- Is there a public API in visionOS 2.0 to persist app-managed world anchors across sessions (room-fixed), e.g., AnchorStore or equivalent?
- If not, what's the recommended pattern to reliably restore wall-anchored content?
- Are the persistence features mentioned for widgets/windows available to third-party RealityKit entities?
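For what it's worth, the visionOS ARKit documentation describes WorldTrackingProvider as persisting the world anchors an app adds across runs; the usual pattern is to save your own mapping from anchor UUID to content, then rebuild from the provider's anchor updates after relocalization rather than re-applying a transform saved on a previous run. A sketch under that reading (names are illustrative):

```swift
import ARKit
import RealityKit

/// Re-attach saved artwork entities when their persisted world anchors
/// are reported back by the system after relaunch.
func restoreAnchoredEntities(worldTracking: WorldTrackingProvider,
                             entityForAnchorID: [UUID: Entity],
                             root: Entity) async {
    for await update in worldTracking.anchorUpdates {
        guard let entity = entityForAnchorID[update.anchor.id] else { continue }
        switch update.event {
        case .added, .updated:
            // Use the system-resolved pose rather than a saved transform;
            // the anchor is re-localized to the room after relaunch.
            entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
            entity.isEnabled = update.anchor.isTracked
            if entity.parent == nil { root.addChild(entity) }
        case .removed:
            entity.removeFromParent()
        }
    }
}
```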
Replies: 1 · Boosts: 0 · Views: 240 · Activity: Oct ’25
ManipulationComponent create parent/child crash
Hello, if you add a ManipulationComponent to a RealityKit entity and then keep issuing instructions, sooner or later you will encounter a crash with the following error message:

Attempting to move entity "%s" (%p) under "%s" (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.

CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro

The problem occurs precisely with this code:

```swift
ManipulationComponent.configureEntity(object)
```

I adapted Apple's ObjectPlacementExample and made my changes available via GitHub. The desired behavior is that I can add entities to ManipulationComponent and RealityKit then runs stably, without crashing randomly. GitHub repo. Thanks, Andre
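A hedged workaround sketch, based only on the error text: defer the reparent out of the event handler so it does not race the manipulation system's own parent changes. Here entity and newParent are illustrative names for whatever the handler was about to reparent:

```swift
// Inside the event handler: schedule the reparent for a later main-actor
// turn instead of performing it synchronously. Workaround sketch only,
// not a confirmed fix.
Task { @MainActor in
    guard entity.isActive else { return } // skip if it was removed meanwhile
    entity.setParent(newParent, preservingWorldTransform: true)
}
```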
Replies: 3 · Boosts: 0 · Views: 495 · Activity: Oct ’25
USDZ Security
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them on the cloud? I want to ensure no one can inject any sort of exploit through these file types.
Replies: 0 · Boosts: 0 · Views: 484 · Activity: Jul ’25
Getting the world position of a QR code
Hi, I'd love your help with this. I'm trying to get the positions in space of two QR codes so I can align content to them. Detection shows the QR codes' position as always (0, 0, 0), and I don't understand why. Here's my code:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @ObservedObject var qrCoordinator: QRCoordinator
    @ObservedObject var coordinator: ImmersiveCoordinator
    let qrName: String
    @Binding var startQRDetection: Bool
    @State private var anchor: AnchorEntity? = nil
    @State private var detectionTask: Task<Void, Never>? = nil

    var body: some View {
        RealityView { content in
            // Add the QR anchor once (must exist before detection starts)
            if anchor == nil {
                let imageAnchor = AnchorEntity(.image(group: "QRs", name: qrName))
                content.add(imageAnchor)
                anchor = imageAnchor
                print("📌 Created anchor for \(qrName)")
            }
        }
        .onChange(of: startQRDetection) { enabled in
            if enabled {
                startDetection()
            } else {
                stopDetection()
            }
        }
        .onDisappear {
            stopDetection()
        }
    }

    private func startDetection() {
        guard detectionTask == nil, let anchor = anchor else { return }
        detectionTask = Task {
            var detected = false
            while !Task.isCancelled && !detected {
                print("🔎 Checking \(qrName)... isAnchored=\(anchor.isAnchored)")
                if anchor.isAnchored {
                    // wait a short moment to let the transform update
                    try? await Task.sleep(nanoseconds: 100_000_000)
                    let worldPos = anchor.position(relativeTo: nil)
                    if worldPos != .zero {
                        // relative to modelRootEntity if available
                        var posToSave = worldPos
                        if let modelEntity = coordinator.modelRootEntity {
                            posToSave = anchor.position(relativeTo: modelEntity)
                            print("converted to model position")
                        } else {
                            print("⚠️ modelRootEntity not available, using world position")
                        }
                        print("✅ \(qrName) detected at position: world=\(worldPos) saved=\(posToSave)")
                        if qrName == "reanchor1" {
                            qrCoordinator.qr1Position = posToSave
                            let marker = createMarker(color: [0, 1, 0])
                            marker.position = .zero // sits directly on the QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker1 added")
                        } else if qrName == "reanchor2" {
                            qrCoordinator.qr2Position = posToSave
                            let marker = createMarker(color: [0, 0, 1])
                            marker.position = posToSave // sits directly on the QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker2 added")
                        }
                        detected = true
                    } else {
                        print("⚠️ \(qrName) anchored but still at origin, retrying...")
                    }
                }
                try? await Task.sleep(nanoseconds: 500_000_000) // throttle loop
            }
            print("🛑 QR detection loop ended for \(qrName)")
            detectionTask = nil
        }
    }

    private func stopDetection() {
        detectionTask?.cancel()
        detectionTask = nil
    }

    private func createMarker(color: SIMD3<Float>) -> ModelEntity {
        let sphere = MeshResource.generateSphere(radius: 0.05)
        let material = SimpleMaterial(color: UIColor(
            red: CGFloat(color.x),
            green: CGFloat(color.y),
            blue: CGFloat(color.z),
            alpha: 1.0
        ), isMetallic: false)
        let marker = ModelEntity(mesh: sphere, materials: [material])
        marker.name = "marker"
        return marker
    }
}
```
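One possibility worth checking (an assumption about the cause, not a confirmed diagnosis): on visionOS an AnchorEntity can be visually anchored while its transform is withheld from app code, whereas ARKit's ImageTrackingProvider reports image poses explicitly. A sketch of reading the pose that way:

```swift
import ARKit

let session = ARKitSession()
let imageTracking = ImageTrackingProvider(
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "QRs")
)

func trackQRCodes() async throws {
    try await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        // The anchor's transform is expressed relative to the world origin.
        let m = update.anchor.originFromAnchorTransform
        let worldPos = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
        print("\(update.anchor.referenceImage.name ?? "image") at \(worldPos)")
    }
}
```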
Replies: 2 · Boosts: 0 · Views: 491 · Activity: Oct ’25
Template Project Entity Overlapping and Sticking Issues
Hello, there are three issues I am running into with a default template project plus minimal additional code changes:

1. The Sphere_Left entity always overlaps the Sphere_Right entity.
2. When I release the Sphere_Left entity, it does not remain stuck to the Sphere_Right entity.
3. When I release the Sphere_Left entity, it distances itself from the Sphere_Right entity.

When I manipulate the Sphere_Right entity instead, none of these three issues occur: I get the correct, expected behavior.

These issues are simple to replicate:

1. Create a new project in Xcode.
2. Choose visionOS -> App, then click Next.
3. Name your project, and leave all other options as defaults: Initial Scene: Window; Immersive Space Renderer: RealityKit; Immersive Space: Mixed. Then click Next.
4. Save your project anywhere.
5. Replace the entire ImmersiveView.swift file with the code below. Run.
6. Try to manipulate the left sphere; you should get the same issues I mentioned above.

If you restart the project and manipulate only the right sphere, you should get the correct expected behaviors and no issues. I am running this on macOS 26, Xcode 26, and visionOS 26, all recently released.

ImmersiveView code:

```swift
//
// ImmersiveView.swift
//

import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView")
    @State var collisionBeganUnfiltered: EventSubscription?

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add manipulation components
                setupManipulationComponents(in: immersiveContentEntity)

                collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                    Task { @MainActor in
                        handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB)
                    }
                }
            }
        }
    }

    private func setupManipulationComponents(in rootEntity: Entity) {
        logger.info("\(#function) \(#line) ")
        let sphereNames = ["Sphere_Left", "Sphere_Right"]
        for name in sphereNames {
            guard let sphere = rootEntity.findEntity(named: name) else {
                logger.error("\(#function) \(#line) Failed to find \(name) entity")
                assertionFailure("Failed to find \(name) entity")
                continue
            }
            ManipulationComponent.configureEntity(sphere)
            var manipulationComponent = ManipulationComponent()
            manipulationComponent.releaseBehavior = .stay
            sphere.components.set(manipulationComponent)
        }
        logger.info("\(#function) \(#line) Successfully set up manipulation components")
    }

    private func handleCollision(entityA: Entity, entityB: Entity) {
        logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)")
        guard entityA !== entityB else { return }
        if entityB.isAncestor(of: entityA) {
            logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent")
            return
        }
        if entityA.isAncestor(of: entityB) {
            logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)")
            return
        }
        reparentEntities(child: entityA, parent: entityB)
        entityA.components[ParticleEmitterComponent.self]?.burst()
    }

    private func reparentEntities(child: Entity, parent: Entity) {
        let childBounds = child.visualBounds(relativeTo: nil)
        let parentBounds = parent.visualBounds(relativeTo: nil)
        let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x)
        let childPosition = child.position(relativeTo: nil)
        let parentPosition = parent.position(relativeTo: nil)
        let currentDistance = distance(childPosition, parentPosition)

        child.setParent(parent, preservingWorldTransform: true)
        logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)")
        child.components.remove(ManipulationComponent.self)
        logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)")

        if currentDistance > maxEntityWidth {
            let direction = normalize(childPosition - parentPosition)
            let newPosition = parentPosition + direction * maxEntityWidth
            child.setPosition(newPosition - parentPosition, relativeTo: parent)
            logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)")
        }
    }
}

fileprivate extension Entity {
    func isAncestor(of other: Entity) -> Bool {
        var current: Entity? = other.parent
        while let node = current {
            if node === self { return true }
            current = node.parent
        }
        return false
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}
```
Replies: 8 · Boosts: 0 · Views: 421 · Activity: Sep ’25
visionOS 26 - Rendering Issues related to Transparency
Summary: After updating to visionOS 26, we've encountered severe transparency rendering issues in RealityKit that did not exist in visionOS 2.6 and earlier. These regressions affect applications that dynamically control scene opacity (via OpacityComponent). Our app renders ultra-realistic apartment environments in real time, where users can walk or teleport inside 3D spaces. When the user moves above a speed threshold, we apply a global transparency effect to prevent physical collisions with real-world objects. Everything worked perfectly in visionOS 2.6; the problems appeared only after upgrading to 26.

Scene setup overview:
- The environment consists of multiple USDZ models (e.g., architecture, rooms, furniture).
- We manage LODs manually for performance (e.g., walls and floors always visible in full res, while rooms swap between low- and high-res versions based on user position and field of view).
- Transparency is achieved using OpacityComponent, applied dynamically when the user moves.
- Some meshes (e.g., portals to skyboxes, glass windows) use alpha materials.
- We also use OcclusionMaterials to prevent things from being seen through walls when the scene is transparent.

Observed behavior by scenario (I can share a video showing the results of each scenario if needed):

Scenario 1: severe flickering (root opacity)
- Setup: OpacityComponent applied to the root entity; no ModelSortGroupComponent used.
- Symptoms: strong flickering when transparency is active; triangles within the same mesh render at inconsistent opacity levels; appears as if per-triangle alpha sorting is broken.
- Workaround: moving the OpacityComponent from the root to each individual USDZ entity removes the per-triangle flicker.
- Pros: no conflicts with portals or alpha materials.

Scenario 2: partially stable, but alpha conflicts
- Setup: OpacityComponent applied per USDZ entity; ModelSortGroupComponent(planarUIAlwaysBehind) applied to portal meshes; other entities have no ModelSortGroupComponent.
- Symptoms: frequent alpha-blending conflicts. Transparent surfaces behind other transparent surfaces flicker or disappear; for example, with wine glasses behind glass doors, sometimes neither is rendered, or only one. Even opaque meshes behind glass flicker due to depth-buffer confusion. Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
- Analysis: appears related to internal changes in alpha sorting or depth pre-pass behavior introduced in visionOS 26.
- Pros: most stable setup so far. Cons: still unreliable when OpacityComponent is active.

Scenario 3: layer-separation attempt (regression)
- Setup: same as Scenario 2, but entities with alpha materials were moved to separate USDZs, with an explicit ModelSortGroupComponent order set (alpha surfaces rendered last).
- Symptoms: transparent surfaces behind other transparent surfaces flicker or disappear; depth is completely broken when there is a large transparent surface; alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
- Workaround attempt: re-ordering and further separating models did not solve it.
- Pros: none; this setup makes transparency unusable.

Conclusion: There appears to be a regression in RealityKit's handling of transparency and sorting in visionOS 26, particularly when OpacityComponent is applied dynamically and scenes rely on multiple overlapping transparent materials. These issues did not exist prior to 26, and the same project (no code changes) behaves correctly on previous versions.

Request: We'd appreciate any insight or confirmation from Apple engineers regarding:
- whether alpha sorting or opacity blending behavior changed in visionOS 26;
- whether there are new recommended practices for combining OpacityComponent with transparent materials;
- whether a bug report already exists for this regression.

Thanks in advance!
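For readers hitting the Scenario 1 flicker, the per-entity workaround the report describes looks roughly like this (sketch; loadedEnvironmentEntities is an illustrative name for the individually loaded USDZ entities):

```swift
// Per the report: applying OpacityComponent to each loaded USDZ entity,
// instead of once on the scene root, removes the per-triangle flicker.
for usdzEntity in loadedEnvironmentEntities {
    usdzEntity.components.set(OpacityComponent(opacity: 0.35))
}
```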
Replies: 0 · Boosts: 0 · Views: 194 · Activity: Nov ’25
visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS, and watchOS, but has not been updated for visionOS: https://developer.apple.com/design/human-interface-guidelines/widgets. My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. I would need, particularly for the new visionOS-specific extra-large widget size.
Replies: 0 · Boosts: 0 · Views: 576 · Activity: Jul ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation describes attaching a custom TabletopNetworkSessionCoordinator, which could be used in addition to TabletopGame.MultiplayerDelegate. But so far, we have been unable to create one of these custom coordinators or get a delegate that works. Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we haven't found a way to access and control, for example, the arbiter, seats, and player events, among other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
Replies: 2 · Boosts: 0 · Views: 270 · Activity: Apr ’25
version update in Vision Pro
Hi, I'm developing an app for Vision Pro using Xcode. After installing the latest update, things that worked in my app suddenly didn't. In my app flow I tap spheres to get their positions; for some reason there is an offset between where I tap and where the marker for that position shows up. Here's the part of the code that does that, plus the part responsible for the alignment that happens afterwards:

```swift
func loadMainScene(at position: SIMD3<Float>) async {
    guard let content = self.content else { return }
    do {
        let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
        rootEntity.scale = SIMD3<Float>(repeating: 0.5)
        rootEntity.generateCollisionShapes(recursive: true)
        self.modelRootEntity = rootEntity

        let bounds = rootEntity.visualBounds(relativeTo: nil)
        print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")

        let pivotEntity = Entity()
        pivotEntity.addChild(rootEntity)
        self.pivotEntity = pivotEntity

        let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
        modelAnchor.addChild(pivotEntity)
        content.add(modelAnchor)
        updateModelOpacity(0.5)
        self.modelAnchor = modelAnchor

        rootEntity.visit { entity in
            print("👀 Entity in model: \(entity.name)")
            if entity.name.lowercased().hasPrefix("focus") {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                print("🎯 Made tappable: \(entity.name)")
            }
        }
        print("✅ Model loaded with collisions")

        guard let sphere = placementSphere else { return }
        let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
        var newXform = sphereWorldXform
        newXform.columns.3.y += 0.1 // move up by 10 cm
        let gridAnchor = AnchorEntity(world: newXform)
        self.gridAnchor = gridAnchor
        content.add(gridAnchor)

        let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)
        let gridSizeX = 18
        let gridSizeY = 10
        let gridSizeZ = 10
        let spacing: Float = 0.05
        let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
        let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
        let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5

        for i in 0..<gridSizeX {
            for j in 0..<gridSizeY {
                for k in 0..<gridSizeZ {
                    if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
                    let cell = baseScene.clone(recursive: true)
                    cell.name = "Sphere"
                    cell.scale = .one * 0.02
                    cell.position = [
                        startX + Float(i) * spacing,
                        startY + Float(j) * spacing,
                        startZ + Float(k) * spacing
                    ]
                    cell.generateCollisionShapes(recursive: true)
                    gridCells.append(cell)
                    gridAnchor.addChild(cell)
                }
            }
        }
        content.add(gridAnchor)
        print("✅ Grid added")
    } catch {
        print("❌ Failed to load: \(error)")
    }
}

private func handleModelOrGridTap(_ tappedEntity: Entity) {
    guard let modelRootEntity = modelRootEntity else { return }
    let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
    let worldPosition = tappedEntity.position(relativeTo: nil)
    switch tapStep {
    case 0:
        modelPointA = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
        print("📍 Model point A: \(localPosition)")
        tapStep += 1
    case 1:
        modelPointB = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
        print("📍 Model point B: \(localPosition)")
        tapStep += 1
    case 2:
        targetPointA = worldPosition
        targetMarkerA = createMarker(at: worldPosition, color: [0, 1, 0])
        modelAnchor?.addChild(targetMarkerA!)
        print("✅ Target point A: \(worldPosition)")
        tapStep += 1
    case 3:
        targetPointB = worldPosition
        targetMarkerB = createMarker(at: worldPosition, color: [0, 0, 1])
        modelAnchor?.addChild(targetMarkerB!)
        print("✅ Target point B: \(worldPosition)")
        alignmentReady = true
        tapStep += 1
    default:
        print("⚠️ Unexpected tap on model helper at step \(tapStep)")
    }
}

func alignModel2Points() {
    guard let modelPointA = modelPointA,
          let modelPointB = modelPointB,
          let targetPointA = targetPointA,
          let targetPointB = targetPointB,
          let modelRootEntity = modelRootEntity,
          let pivotEntity = pivotEntity,
          let modelAnchor = modelAnchor else {
        print("❌ Missing data for alignment")
        return
    }

    let modelVec = modelPointB - modelPointA
    let targetVec = targetPointB - targetPointA
    let modelLength = length(modelVec)
    let targetLength = length(targetVec)
    let scale = targetLength / modelLength
    let modelDir = normalize(modelVec)
    let targetDir = normalize(targetVec)

    var axis = cross(modelDir, targetDir)
    let axisLength = length(axis)
    var rotation = simd_quatf()
    if axisLength < 1e-6 {
        if dot(modelDir, targetDir) > 0 {
            rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        } else {
            let up: SIMD3<Float> = [0, 1, 0]
            axis = cross(modelDir, up)
            if length(axis) < 1e-6 {
                axis = cross(modelDir, [1, 0, 0])
            }
            rotation = simd_quatf(angle: .pi, axis: normalize(axis))
        }
    } else {
        let dotProduct = dot(modelDir, targetDir)
        let clampedDot = max(-1.0, min(dotProduct, 1.0))
        let angle = acos(clampedDot)
        rotation = simd_quatf(angle: angle, axis: normalize(axis))
    }

    modelRootEntity.scale = .one * scale
    modelRootEntity.orientation = rotation
    let transformedPointA = rotation.act(modelPointA * scale)
    pivotEntity.position = -transformedPointA
    modelAnchor.position = targetPointA
    alignedModelPosition = modelAnchor.position
    print("✅ Aligned with scale \(scale), rotation \(rotation)")
}
```
Replies: 2 · Boosts: 0 · Views: 317 · Activity: Oct ’25
Is it possible to live render CMTaggedBuffer / MV-HEVC frames in visionOS?
Hey all, I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right). Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer: Converting Side-by-Side 3D Video to MV-HEVC My question: Is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file? I’d like to create a true stereoscopic (spatial) live video preview, not just render two images side-by-side. Any advice or insights would be greatly appreciated!
Replies: 2 · Boosts: 0 · Views: 250 · Activity: Aug ’25
Summon gesture
Can you help me write code that can pick up an element a bit far from me, bring it near to me, let me flick it a bit, and then send it back to its original position when I release it? Thanks a lot, Christophe
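Not a full implementation, but the travel part of a summon interaction can be sketched with RealityKit's move(to:) animation API, assuming some gesture supplies the grab and release moments and the original transform is captured beforehand (all names here are illustrative):

```swift
import RealityKit

/// Animate an entity to a point near the user (e.g. from a raycast
/// or a fixed offset from the head).
func summon(_ entity: Entity, to nearPoint: SIMD3<Float>) {
    var target = entity.transform
    target.translation = nearPoint
    entity.move(to: target, relativeTo: entity.parent,
                duration: 0.4, timingFunction: .easeInOut)
}

/// Send the entity back to its original transform, captured at grab time.
func sendBack(_ entity: Entity, homeTransform: Transform) {
    entity.move(to: homeTransform, relativeTo: entity.parent,
                duration: 0.4, timingFunction: .easeInOut)
}
```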
Replies: 1 · Boosts: 0 · Views: 68 · Activity: Apr ’25
How to transition from spatial3d to spatial3DImmersive?
Hi, when viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top-right corner to transition from the window presenting the image as spatial3d to an immersive photo scene with spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried to do it, but once I transition from spatial3d to spatial3DImmersive I can still see a rectangle around the spatial image. Thanks.
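If this is using visionOS 26's ImagePresentationComponent (an assumption; spatial3D and spatial3DImmersive are its viewing-mode names), the mode switch itself comes down to setting the desired viewing mode, and a remaining rectangle may be window chrome rather than the component. A sketch:

```swift
// Sketch, assuming `entity` already carries an ImagePresentationComponent
// created from a spatial scene.
if var presentation = entity.components[ImagePresentationComponent.self],
   presentation.availableViewingModes.contains(.spatial3DImmersive) {
    presentation.desiredViewingMode = .spatial3DImmersive
    entity.components.set(presentation)
}
```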
Replies: 1 · Boosts: 0 · Views: 863 · Activity: Nov ’25