Hello Community,
I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro.
From this depth map, I generate a point cloud using the intrinsic camera parameters.
I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud.
For example, an object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud (around 60° instead of 90°).
My question is:
Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code?
To obtain the depth map, I’m using:
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
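For context, this is roughly how I turn the depth map into points; a simplified sketch (the helper name is mine), assuming a pinhole model and intrinsics taken from AVCameraCalibrationData, rescaled from their reference dimensions to the depth map resolution:

import AVFoundation
import simd

// Simplified sketch of my back-projection (pinhole model). The intrinsics come from
// photo.depthData?.cameraCalibrationData: intrinsicMatrix plus intrinsicMatrixReferenceDimensions.
func pointCloud(from depthMap: CVPixelBuffer,
                intrinsics: simd_float3x3,
                referenceDimensions: CGSize) -> [SIMD3<Float>] {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }

    // The intrinsics are reported for the reference (full photo) resolution,
    // so they are rescaled to the depth map resolution before unprojecting.
    let scaleX = Float(width) / Float(referenceDimensions.width)
    let scaleY = Float(height) / Float(referenceDimensions.height)
    let fx = intrinsics[0][0] * scaleX
    let fy = intrinsics[1][1] * scaleY
    let cx = intrinsics[2][0] * scaleX
    let cy = intrinsics[2][1] * scaleY

    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let z = row[x]                          // depth in meters (DepthFloat32)
            guard z.isFinite, z > 0 else { continue }
            let px = (Float(x) - cx) * z / fx       // back-project through the pinhole model
            let py = (Float(y) - cy) * z / fy
            points.append(SIMD3<Float>(px, py, z))
        }
    }
    return points
}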
Thanks in advance for your help!
I've been experimenting with the Muse pen and understand that it can be accessed by my app through a SpatialTrackingSession, but is there any current or planned support for using devices like this for general UI input, the way game controllers are supported? For example, using the button as a tap analogue for SwiftUI views.
I created a new Spatial Rendering App from the template in Xcode 26.0.1. When I run the app, click 'Show Immersive Space' and select my Vision Pro from the pop-up dialog, the content in the dialog flickers (which seems to indicate something crashed) and nothing appears on my Vision Pro.
I'm running the released macOS 26.0 (25A354) and visionOS 26.0 (23M336). Filed as FB20397093.
I want to display a huge image in RealityView in 3D space on Vision Pro. Of course, instead of one giant file I'm using a lot of large images.
To achieve this, I generate multiple planes exactly beside each other and put one image on each. Although the planes sit exactly side by side, there is still a white gap between them (image below).
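Roughly, this is how I lay the tiles out; a simplified sketch with illustrative names (makeTiledWall, tileSize), assuming each image has already been loaded as a TextureResource:

import RealityKit

// Builds a wall of adjacent plane tiles, one texture per tile.
func makeTiledWall(textures: [[TextureResource]], tileSize: Float) -> Entity {
    let wall = Entity()
    for (row, rowTextures) in textures.enumerated() {
        for (column, texture) in rowTextures.enumerated() {
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))
            let plane = ModelEntity(
                mesh: .generatePlane(width: tileSize, height: tileSize),
                materials: [material]
            )
            // Place each tile exactly one tile width/height away from its neighbor.
            plane.position = SIMD3<Float>(Float(column) * tileSize,
                                          Float(row) * tileSize,
                                          0)
            wall.addChild(plane)
        }
    }
    return wall
}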
Does anybody know how to fix this issue?
Hello,
I want to understand the current state of developing for Apple Vision Pro. I want to play a video streamed from a remote server in real time; it is a live stream, so I can't download it.
I want to offer both a low-quality stream and a high-resolution stream, where the server only sends the "box" the user is looking at. Is there any API to track where the user is looking in the experience?
Thanks,
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS and watchOS, but has not been updated for visionOS.
https://developer.apple.com/design/human-interface-guidelines/widgets
My potential widget use case is image based, so I'm looking to better understand the optimal size, resolution, etc. that I would need, particularly for the new visionOS-specific extra-large widget size.
Hello,
There are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets.
See below our simple code displaying the ImagePresentationComponent, along with images of the odd artifacts that appear briefly when the immersive space is dismissed.
import OSLog
import RealityKit
import SwiftUI

struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")

    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()

                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive
                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }

                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTime videos is not included in the install package for Pro Tools. Would it be possible to post a link?
I work on a game where I use timeline animations in Reality Composer Pro.
The game runs in an immersive space but can be paused; when paused, I move the whole level root entity from the immersive space to another RealityView in a WindowGroup. When the player continues, I do exactly the reverse and move the level root from the WindowGroup back to my immersive-space RealityView.
It seems that all animations get automatically stopped and restarted when the scene changes. The problem is that an animation does not resume where it stopped; instead it restarts from the beginning relative to the position where it was stopped, and therefore ends up with a wrong y offset, as visible in the picture.
For example in the picture, the yellow sphere loops the following animation:
0 to 100
100 to -100
-100 to 0
If I now pause the game (and basically switch scenes), the previous animation gets stopped and restarted at position y = 100. So now it loops:
100 to 200
200 to 0
0 to 100
I have already tried all kinds of setups, like:
Setting the animations relative to root, parent, local
Using behaviors (on Added to Scene, on Notification)
And finally, even accessing the availableAnimations directly and saving the animation's playback controller
There I saw that if I manually trigger the following code before switching the scene, everything works as expected:
Button("Reset") {
animationPlaybackController.time = 0
animationPlaybackController.pause()
animationPlaybackController.stop(blendOutDuration: 0.00001)
}
But if I set time = 0 and call .stop() directly, the time = 0 seems to be ignored, and I get the same behavior as before: the animation ends up at a wrong y offset. Hence my assumption that animations get stopped and invalidated once they change scenes.
I tried to call the code manually in ImmersiveSpace.onDisappear, WindowGroup.onAppear, and various SceneEvents subscriptions, but unfortunately nothing worked.
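For example, one of the onDisappear variants looked roughly like this (a sketch; the view name and stored controller are stand-ins for my own types):

import RealityKit
import SwiftUI

struct LevelImmersiveView: View {
    // Playback controller saved earlier from the level root's availableAnimations.
    let animationPlaybackController: AnimationPlaybackController

    var body: some View {
        RealityView { content in
            // ... level root entity gets added here ...
        }
        .onDisappear {
            // Reset the timeline animation before the level root is re-parented into the window scene.
            animationPlaybackController.time = 0
            animationPlaybackController.pause()
            animationPlaybackController.stop(blendOutDuration: 0.00001)
        }
    }
}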
So am I doing something wrong in general or is there a way to fix this?
I'm placing a sphere at the fingertip and updating its position as the hand moves.
Finger joint tracking functions correctly, but I’ve observed noticeable latency in hand tracking updates whenever a UITextView becomes active. This lag happens intermittently during app usage, lasting about 5–10 seconds, after which the latency disappears and the sphere starts following the finger joints immediately.
When I open the immersive space for the first time, the profiler shows a large performance spike up to 328%. After that, it stabilizes and runs smoothly.
Note: I don't observe any lag when CPU usage spikes to 300% (upon immersive view load), yet the lag still occurs even when CPU usage remains below 100%.
I’m using the following code for hand tracking:
private func processHandTrackingUpdates() async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        if handAnchor.isTracked {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: leftHandJointEntities)
            case .right:
                rightHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: rightHandJointEntities)
            }
        } else {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = nil
                hideAllJoints(in: leftHandJointEntities)
            case .right:
                rightHandAnchor = nil
                hideAllJoints(in: rightHandJointEntities)
            }
        }

        await MainActor.run {
            handTrackingData.processNewHandAnchors(
                leftHand: self.leftHandAnchor,
                rightHand: self.rightHandAnchor
            )
        }
    }
}
And here’s the function I’m using to update the joint positions:
private func updateHandJoints(
    for handAnchor: HandAnchor,
    with jointEntities: [HandSkeleton.JointName: Entity]
) {
    guard handAnchor.isTracked else {
        hideAllJoints(in: jointEntities)
        return
    }

    // Check if the little finger tip and intermediate base are both tracked.
    if let tipJoint = handAnchor.handSkeleton?.joint(.littleFingerTip),
       let intermediateBaseJoint = handAnchor.handSkeleton?.joint(.littleFingerIntermediateTip),
       tipJoint.isTracked,
       intermediateBaseJoint.isTracked,
       let pinkySphere = jointEntities[.littleFingerTip] {

        // Convert joint transforms to world space.
        let tipTransform = handAnchor.originFromAnchorTransform * tipJoint.anchorFromJointTransform
        let intermediateBaseTransform = handAnchor.originFromAnchorTransform * intermediateBaseJoint.anchorFromJointTransform

        // Extract positions from the transforms.
        let tipPosition = SIMD3<Float>(tipTransform.columns.3.x,
                                       tipTransform.columns.3.y,
                                       tipTransform.columns.3.z)
        let intermediateBasePosition = SIMD3<Float>(intermediateBaseTransform.columns.3.x,
                                                    intermediateBaseTransform.columns.3.y,
                                                    intermediateBaseTransform.columns.3.z)

        // Calculate the midpoint.
        let midpointPosition = (tipPosition + intermediateBasePosition) / 2.0

        // Position the sphere at the midpoint and make it visible.
        pinkySphere.isEnabled = true
        pinkySphere.transform.translation = midpointPosition
    } else {
        // If either joint is not tracked, hide the sphere.
        jointEntities[.littleFingerTip]?.isEnabled = false
    }

    // Update the positions of all other hand joint spheres.
    for (jointName, entity) in jointEntities {
        if jointName == .littleFingerTip {
            // Already handled the pinky above.
            continue
        }
        guard let joint = handAnchor.handSkeleton?.joint(jointName),
              joint.isTracked else {
            entity.isEnabled = false
            continue
        }
        entity.isEnabled = true
        let jointTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.transform.translation = SIMD3<Float>(jointTransform.columns.3.x,
                                                    jointTransform.columns.3.y,
                                                    jointTransform.columns.3.z)
    }
}
I’ve attached both a profiler trace and a video recording from Vision Pro that clearly demonstrate the issue.
Profiler: https://drive.google.com/file/d/1fDWyGj_fgxud2ngkGH_IVmuH_kO-z0XZ
Vision Pro Recordings:
https://drive.google.com/file/d/17qo3U9ivwYBsbaSm26fjaOokkJApbkz-
https://drive.google.com/file/d/1LxTxgudMvWDhOqKVuhc3QaHfY_1x8iA0
Has anyone else experienced this behavior? My thought is that there might be some background calculations happening at the OS level causing this latency. Any guidance would be greatly appreciated.
Thanks!
When I enter the mixed space, a model appears, but there is a WindowGroup behind the model that blocks a full view of it. When I tap to enter the mixed space, I want to move the WindowGroup to another position using code rather than moving it manually.

Is there any interest in this forum from those developing for the spatial web and Safari? I can't seem to find any relevant posts here.
Apple's WWDC video "What's new for the spatial web" says the spatial-backdrop markup may change as it goes through the standards process (at the 27:26 mark).
I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates by Apple and follow the standards progress.
Is there any place I can keep an eye on this standards process?
Has Apple announced any feature updates or news on spatial-backdrops?
When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity is called for that Entity.
Expected behavior: the Entity is not removed from the Scene (even temporarily), and no SceneEvents are triggered as a result of assigning a ManipulationComponent.
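For reference, a minimal sketch of the setup that triggers the event for me; the view, entity, and subscription handling here are illustrative only:

import RealityKit
import SwiftUI

struct ManipulationRepro: View {
    var body: some View {
        RealityView { content in
            let entity = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
            content.add(entity)

            // Log when RealityKit reports the entity is about to be removed from the scene.
            // (In a real app the returned subscription is stored so it stays alive.)
            _ = content.subscribe(to: SceneEvents.WillRemoveEntity.self, on: entity, componentType: nil) { event in
                print("WillRemoveEntity fired for \(event.entity)")
            }

            // Assigning the ManipulationComponent is what unexpectedly triggers the event above.
            entity.components.set(ManipulationComponent())
        }
    }
}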
FB20872220
Hello all, I saw this interesting visionOS app: https://apps.apple.com/us/app/splitscreen-multi-display/id6478007837
I was wondering if there was any documentation on the Swift APIs that were used to create this app.
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them on the cloud? I want to ensure no one can inject any sort of exploit through these file types.
Using 72MP 360° images taken with an Insta360 X3, I would like to load them on my Vision Pro and see them fully surrounding me, as you would expect of a 360° image. I was able to do this by following a tutorial.
The problem is the quality: in a 2D window the same image looks great.
Here is the code:
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            content.add(createImmersivePicture(imageName: appModel.activeSpace))
        }
    }

    func createImmersivePicture(imageName: String) -> Entity {
        let sphereRadius: Float = 1000
        let modelEntity = Entity()
        let texture = try? TextureResource.load(named: imageName, options: .init(semantic: .raw, compression: .none))
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture!))
        modelEntity.components.set(
            ModelComponent(
                mesh: .generateSphere(radius: sphereRadius),
                materials: [material]
            )
        )
        modelEntity.scale = .init(x: -1, y: 1, z: 1)
        modelEntity.transform.translation += SIMD3<Float>(0.0, 10.0, 0.0)
        return modelEntity
    }
}
Since the quality is a problem, I thought about reducing the radius of the sphere or decreasing the scale, but in both cases nothing changes.
I have tried modelEntity.scale = .init(x: -0.5, y: 0.5, z: 0.5), and also let sphereRadius: Float = 2000 and let sphereRadius: Float = 500, but nothing changed.
I also get the warning:
IOSurface creation failed: e00002c2 parentID: 00000000 properties: {
IOSurfaceAddress = 4651830624;
IOSurfaceAllocSize = 35478941;
IOSurfaceCacheMode = 0;
IOSurfaceMapCacheAttribute = 1;
IOSurfaceName = CMPhoto;
IOSurfacePixelFormat = 1246774599;
}
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceCacheMode
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfacePixelFormat
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceMapCacheAttribute
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAddress
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAllocSize
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceName
Is there anything I can do to reduce the radius or just to improve the quality itself?
Is there a way to use contentCaptureProtected with Quick Look on visionOS 26? Or is there a way to view a spatial photo with Quick Look without the sharing options?
I'm using Unity 2022.3.56f, with Apple VisionOS App Mode set to 'Virtual Reality - Fully Immersive Space'.
It seems that the render resolution of my game in the Apple Vision Pro when I build is well below the native resolution of the AVP displays.
I can't see a setting in XR Plug-in Management Apple visionOS options, or in Quality settings, to increase the render resolutions. Is this possible?
For example, I tried setting:
UnityEngine.XR.XRSettings.eyeTextureResolutionScale = 2.0f
but this doesn't seem to do anything to the render resolution in the build.
Hi,
I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content goes underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS?
Thanks!