Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration. The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.

In the Unity Editor, this setup works correctly: the RenderTexture updates in real time, and the quad displays the camera's view as expected. However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera; no scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.

Steps I have taken:
- Created a new layer (CameraFeed) and assigned the relevant objects to it.
- Set the secondary camera's culling mask to render only the CameraFeed layer.
- Assigned the RenderTexture as the camera's target texture.
- Applied the RenderTexture to an Unlit/Texture material on a quad.
- Confirmed the camera is active and correctly positioned relative to the object.

From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects; secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.

My questions are:
- Is there any official way to enable multiple-camera rendering of RealityKit-managed objects through PolySpatial?
- Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
- Has anyone found alternative design patterns or methods for this kind of interaction?

Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4

Any insight or suggestions would be greatly appreciated. Thank you.
0
0
118
Apr ’25
Getting the world position of a QR code
Hi, I would love your help with this. I am trying to get the position in space of two QR codes so I can align content to their positions in space. The detection shows that the QR codes' position is always 0,0,0 and I don't understand why. Here's my code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @ObservedObject var qrCoordinator: QRCoordinator
    @ObservedObject var coordinator: ImmersiveCoordinator
    let qrName: String
    @Binding var startQRDetection: Bool
    @State private var anchor: AnchorEntity? = nil
    @State private var detectionTask: Task<Void, Never>? = nil

    var body: some View {
        RealityView { content in
            // Add the QR anchor once (must exist before detection starts)
            if anchor == nil {
                let imageAnchor = AnchorEntity(.image(group: "QRs", name: qrName))
                content.add(imageAnchor)
                anchor = imageAnchor
                print("📌 Created anchor for \(qrName)")
            }
        }
        .onChange(of: startQRDetection) { enabled in
            if enabled {
                startDetection()
            } else {
                stopDetection()
            }
        }
        .onDisappear {
            stopDetection()
        }
    }

    private func startDetection() {
        guard detectionTask == nil, let anchor = anchor else { return }
        detectionTask = Task {
            var detected = false
            while !Task.isCancelled && !detected {
                print("🔎 Checking \(qrName)... isAnchored=\(anchor.isAnchored)")
                if anchor.isAnchored {
                    // wait a short moment to let transform update
                    try? await Task.sleep(nanoseconds: 100_000_000)
                    let worldPos = anchor.position(relativeTo: nil)
                    if worldPos != .zero {
                        // relative to modelRootEntity if available
                        var posToSave = worldPos
                        if let modelEntity = coordinator.modelRootEntity {
                            posToSave = anchor.position(relativeTo: modelEntity)
                            print("converted to model position")
                        } else {
                            print("⚠️ modelRootEntity not available, using world position")
                        }
                        print("✅ \(qrName) detected at position: world=\(worldPos) saved=\(posToSave)")
                        if qrName == "reanchor1" {
                            qrCoordinator.qr1Position = posToSave
                            let marker = createMarker(color: [0,1,0])
                            marker.position = .zero // sits directly on QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker1 added")
                        } else if qrName == "reanchor2" {
                            qrCoordinator.qr2Position = posToSave
                            let marker = createMarker(color: [0,0,1])
                            marker.position = posToSave // sits directly on QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker2 added")
                        }
                        detected = true
                    } else {
                        print("⚠️ \(qrName) anchored but still at origin, retrying...")
                    }
                }
                try? await Task.sleep(nanoseconds: 500_000_000) // throttle loop
            }
            print("🛑 QR detection loop ended for \(qrName)")
            detectionTask = nil
        }
    }

    private func stopDetection() {
        detectionTask?.cancel()
        detectionTask = nil
    }

    private func createMarker(color: SIMD3<Float>) -> ModelEntity {
        let sphere = MeshResource.generateSphere(radius: 0.05)
        let material = SimpleMaterial(
            color: UIColor(
                red: CGFloat(color.x),
                green: CGFloat(color.y),
                blue: CGFloat(color.z),
                alpha: 1.0
            ),
            isMetallic: false
        )
        let marker = ModelEntity(mesh: sphere, materials: [material])
        marker.name = "marker"
        return marker
    }
}
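One hedged suggestion to check, not a confirmed fix: on visionOS, anchor transforms read back through position(relativeTo:) can stay at the origin unless the app runs a RealityKit SpatialTrackingSession whose configuration includes the matching anchor capability. The sketch below assumes an .image capability exists in the current SDK and that the "QRs" reference-image group from the post is set up; verify both against the documentation before relying on it.

```swift
import RealityKit

/// Minimal sketch (assumption, not a confirmed fix): run a RealityKit
/// SpatialTrackingSession with image tracking so that image-based
/// AnchorEntity transforms can be read via position(relativeTo:).
/// The `.image` capability name is an assumption about the current SDK.
@MainActor
func startImageTracking() async -> SpatialTrackingSession {
    let session = SpatialTrackingSession()
    let configuration = SpatialTrackingSession.Configuration(tracking: [.image])
    let unavailable = await session.run(configuration)
    print("Unavailable tracking capabilities: \(String(describing: unavailable))")
    // Keep the returned session alive for as long as the anchors are needed.
    return session
}
```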
2
0
491
Oct ’25
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment:
- Xcode 16.2
- visionOS SDK 2.4
- Swift 6.1
- Target: Apple Vision Pro (immersive space)
- Frameworks: ARKit, RealityKit, SwiftUI

What I'm Trying to Do

I have a view-model class PlacementManager that holds two AR providers:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's Happening

If I declare them as:

private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()

I get compile errors when I later do:

self.worldTracking = newWorldTracking
// Cannot assign to property: 'worldTracking' is a 'let' constant

If I change them to uninitialized vars:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

then in my init() I get:

self used in property access 'worldTracking' before all stored properties are initialized

Code snippet

@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run(
            [newWorldTracking, newPlaneDetection]
        )
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}

What I've Tried
- Giving them default values at declaration (= WorldTrackingProvider())
- Initializing them at the top of init() before any use
- Passing the new providers into arkitSession.run(...)

My Question

What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that:
- they're fully initialized before use in init(), and
- I can swap them out later in setEnvironment(...) without compiler errors?

Any pointers (or links to forum threads / docs) would be greatly appreciated!
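For what it's worth, a minimal sketch of one conventional Swift pattern for this situation: keep the providers as plain vars, create the initial instances as local constants inside init, assign them to the stored properties before touching self anywhere else, and hand the same locals to any collaborators (a PersistenceManager, as in the post). The class below is illustrative only; the method name and the session parameter are assumptions, not the post's actual API.

```swift
import ARKit
import Observation
import RealityKit

@Observable
@MainActor
final class PlacementManagerSketch {
    // Plain `var`s so they can be swapped out later.
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    private let rootEntity: Entity

    init(rootEntity: Entity = Entity()) {
        // Create the providers as locals first, so `self` is never touched
        // before every stored property has a value.
        let worldTracking = WorldTrackingProvider()
        let planeDetection = PlaneDetectionProvider()
        self.worldTracking = worldTracking
        self.planeDetection = planeDetection
        self.rootEntity = rootEntity
        // Collaborators (e.g. a PersistenceManager, as in the post) would be
        // built here from the *local* constants, not from `self.worldTracking`.
    }

    func replaceProviders(using session: ARKitSession) async throws {
        // Fresh providers for the new environment; reassignment compiles
        // because the stored properties are `var`s, not `let`s.
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await session.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}
```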
0
0
129
May ’25
AVPlayer stutters when using AVPlayerItemVideoOutput
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes. For testing, we used this 8K video: https://deovr.com/nwfnq1

The video was played using the following code:

@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }
    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(
        pixelBufferAttributes: pixelBufferAttributes
    )
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}

When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall

When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s

We’ve tried different attribute settings for AVPlayerItemVideoOutput. We also removed all logic related to reading frame data, but the choppy playback still remained. Can you advise whether this is a player issue or if we’re doing something wrong?
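As a side note for anyone reading along, here is a minimal sketch of how frame reads from AVPlayerItemVideoOutput can be gated on hasNewPixelBuffer(forItemTime:), so the output is only asked for a pixel buffer when one is actually ready. The class name and the idea that a host render loop (e.g. the Unity side) calls it once per display refresh are assumptions; it does not address the buffering behavior described above.

```swift
import AVFoundation
import CoreVideo
import QuartzCore

final class FramePuller {
    private let output: AVPlayerItemVideoOutput

    init(output: AVPlayerItemVideoOutput) {
        self.output = output
    }

    /// Call once per display refresh from the host render loop.
    /// Returns nil when no new frame is ready, so no redundant copies happen.
    func copyLatestFrame() -> CVPixelBuffer? {
        // Map "now" on the host clock onto the item's timeline.
        let itemTime = output.itemTime(forHostTime: CACurrentMediaTime())
        guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
        return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
    }
}
```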
1
0
402
Oct ’25
Shared/GroupImmersive Space – Query Local Device Transform
Hi, I am in the process of implementing SharePlay into our app. The shared experience opens an Immersive Space, and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true. visionOS then establishes a shared coordinate space for the immersive space. From the docs: "To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session".

There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor(...).originFromAnchorTransform to position content in front of the user (plus some Z offset and smooth interpolation). This works fine in non-SharePlay instances, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and I end up with a transform that might be offset. Here is a video of the issue in action: https://streamable.com/205r2p

The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position. The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see, it's offset.

Is there any way I can query this transform locally for the user during a FaceTime call? I would also like to know whether it's possible to disable this automatic entity transform syncing behavior. Setting entity.synchronization = nil results in the entity not showing up at all. https://developer.apple.com/documentation/realitykit/synchronizationcomponent Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach?

Thank you!
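For reference, a minimal sketch of the head-anchored placement mentioned above, since, as described, .head anchoring keeps following the local device even during the call. The panel size, color, and the 1 m forward offset are illustrative assumptions, not values from the post.

```swift
import RealityKit

/// Places a panel roughly in front of the local user's head,
/// independent of any shared SharePlay coordinate space.
@MainActor
func makeHeadLockedContainer() -> AnchorEntity {
    // .head anchoring follows the local device, even during a FaceTime call
    // (the behavior observed with the "blue rect" in the post above).
    let headAnchor = AnchorEntity(.head, trackingMode: .continuous)

    let panel = ModelEntity(
        mesh: .generatePlane(width: 0.4, height: 0.25),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    // Push the panel 1 m forward along -Z so it sits in front of the user.
    panel.position = SIMD3<Float>(0, 0, -1.0)
    headAnchor.addChild(panel)
    return headAnchor
}
```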
2
0
330
Oct ’25
mipmapsMode trade-off?
I am building a 360 photo viewer in visionOS 26. It lets the user choose a 2:1 JPG and then renders it on a sphere mesh entity, loading the texture with TextureResource(contentsOf: url, options: options). I noticed two situations here in terms of the mipmaps options.

When setting mipmapsMode: .none:
- The graphic quality within the "gaze area" looks sharp and clear
- The two poles (top and bottom) are perfectly rendered
- Massive shimmer around the "gaze area"

When setting mipmapsMode: .allocateAndGenerateAll:
- The graphic looks slightly blurrier than with .none within the "gaze area"
- The two poles are very blurry and the texture is hard to recognize
- Much less shimmer around the "gaze area"

My question: is there a way to get the graphic quality of .none without the massive shimmer? Thank you!

Screenshots: mipmapsMode: .none, mipmapsMode: .allocateAndGenerateAll
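For context, a minimal sketch of how the two loading configurations compared above would look. This assumes TextureResource.CreateOptions exposes mipmapsMode as a settable property in the current SDK; treat it as a sketch to verify against the documentation, not an authoritative API listing.

```swift
import RealityKit

/// Loads the 2:1 panorama with or without mipmaps, matching the two
/// configurations compared in the post.
func loadPanoramaTexture(from url: URL, useMipmaps: Bool) async throws -> TextureResource {
    var options = TextureResource.CreateOptions(semantic: .color)
    // .none: sharpest in the gaze area but heavy shimmer;
    // .allocateAndGenerateAll: less shimmer but blurrier poles.
    options.mipmapsMode = useMipmaps ? .allocateAndGenerateAll : .none
    return try await TextureResource(contentsOf: url, options: options)
}
```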
0
0
230
Oct ’25
How to make .blur(radius:) visually affect RealityView content?
According to the official documentation, the .blur(radius:) modifier applies a Gaussian blur to a view, so I expected it to work on a RealityView as well. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred. Here's the test code:

struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}

My question: how can I make .blur(radius:) visually affect the content rendered in a RealityView? Can you provide a working example where .blur() visually affects any part of a RealityView? Thanks!
0
0
120
May ’25
white gap between objects in RealityView
I want to display a huge image in a RealityView in 3D space on Vision Pro. Of course, instead of one giant file I'm using a lot of big images. To achieve this, I'm generating multiple planes exactly beside each other and putting one image on each. Although the planes are exactly beside each other, there is still a white gap between them (image below). Does anybody know how to fix this issue?
0
0
153
May ’25
HoverEffectStyle in visionOS 26.0
This is no longer highlighting my entity when looking at it:

RealityView { content in
    let hoverComponent = HoverEffectComponent(.spotlight(
        HoverEffectComponent.SpotlightHoverEffectStyle(
            color: .white,
            strength: 2.0
        )
    ))
    entity.components.set(hoverComponent)
}

The entity is in a window. The same code works in an immersive view. The Collision Component and Input type are set in RCP. It has also stopped working in my published app (built under visionOS 2.x) when running on my visionOS 26 device; if I use a 2.x simulator, it works. Is this a bug, or is there something I'm missing? Thanks.
2
0
855
Oct ’25
Building a Full Space app that enables sharing a visionOS experience with nearby users.
Hello, I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.

Intended features:
- A Mixed Full Space app in which dozens of 3D models are placed in the space. These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
- The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space. Some media elements play automatically, while others are triggered by user interaction.

However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as:
- audio only
- spatial audio
- video without audio
- video with audio

I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.

Questions:
- Is it technically possible to implement the experience described above using visionOS?
- Are there any important implementation considerations or limitations that should be taken into account? For example, when two participants experience the app simultaneously, how is the content positioned for each participant? Is the spatial placement of content shared across participants, or is it positioned relative to each participant's viewpoint?
- For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible for nearby users during the experience; is this correct?
- When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant's contact information?

I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details: https://developer.apple.com/videos/play/wwdc2025/318/

Thank you.
1
0
278
3w
Multi-device coordination is cumbersome
During a livestream we have to operate the Vision Pro (capture), a Mac (streaming), and a control console (scene switching) at the same time. There is no unified control entry point, so adjusting 3D models, calibrating image quality, and similar operations require switching between multiple devices, which is error-prone and inefficient.

Request: for livestreaming scenarios, provide dedicated desktop control software that supports one-stop management of the Vision Pro's capture parameters, 3D model switching, mixed-reality compositing effects, and so on, making multi-device coordination visual and convenient.
0
0
304
Dec ’25
Video shake causes viewer motion sickness
When the wearer's head moves naturally, the footage captured by the device shakes noticeably, which makes livestream viewers feel dizzy and seriously hurts the immersive experience and the efficiency of purchase decisions.

Request: optimize the device's built-in stabilization algorithm to reduce the impact of ordinary head movement on image stability and improve the smoothness of the livestream footage.
0
0
116
Dec ’25
Captured image brightness is unstable (dynamic fluctuation)
The image brightness fluctuates irregularly (alternating brighter and darker) and there is no manual control entry point. This distorts product color reproduction and causes abnormal exposure of the presenter's face (overexposed or underexposed), seriously affecting the livestream presentation.

Requests:
- Optimize the auto-exposure algorithm in live-streaming mode to improve brightness stability in complex lighting conditions.
- Add a dedicated "live mode" brightness lock that supports manually setting and locking brightness parameters, so image quality stays controllable in livestreaming scenarios.
0
0
266
Dec ’25
Is it possible to raycast with PSVR2 controllers when developing in Unity for the Apple Vision Pro?
Hi all, I am currently developing a game in Unity for visionOS, and I'd prefer to use the PSVR2 controllers as the source of the raycast for menu selection instead of the default visionOS gaze, for my specific use case. Is there a way to access the IMU of the PSVR2 controllers to do this, instead of just using eye gaze plus controller click for selection? Is there a specific configuration for GCController from within Unity, maybe? Thank you!
1
0
539
Dec ’25
Setting clip shape of a RealityView
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos

I would also like to add a corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):

struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0
        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)
            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}

This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero Z scale, which means it isn't on the same "layer" as its SwiftUI containers and thus isn't clipped by the modifiers applied to the containers. How can I properly apply a clip shape to RealityViews like this? Thanks!
3
0
491
Feb ’25
Immersive environment learning material
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
1
0
84
Jul ’25
Hidden window/volume system overlays in Full Space
When I show a window while a sky sphere is shown, the handles to drag/close/resize the window are hidden. The colliders still work, so they are there; only the visuals are hidden. I already know from another project that this also happens to volumes. They only appear once you get closer to the window or if the sky sphere is removed. Is this a known issue, or is there a fix for it? .persistentSystemOverlays(.visible) does not fix it. Xcode 16.3.0 Beta, visionOS 2.4.
5
0
357
Mar ’25
Spatial-backdrop standards process
Apple's WWDC video What’s new for the spatial web says the spatial-backdrop markup may change as it goes through the standards process (at 27:26 mark). I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates by Apple and follow the standards progress. Is there any place I can keep an eye on this standards process? Has Apple announced any feature updates or news on spatial-backdrops?
0
0
121
Nov ’25