I am encountering an issue with the multiview video demo provided at https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/. Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
The Section struct only publicly exposes its center property, but that is a SIMD3 that doesn't seem to line up with the rest of the model. All of the other captured objects carry a 4x4 transform matrix that accurately gives their position and rotation.
When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why aren't these public? The transform in particular seems necessary to make any real use of Sections.
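As a stopgap I've been building a translation-only matrix from center (a sketch; it assumes center is expressed in the same coordinate space as the other objects' transforms, and it necessarily drops whatever rotation the hidden transform carries):

import simd

// Translation-only 4x4 built from a section's center. The section's real
// rotation isn't publicly exposed, so identity rotation is assumed.
func sectionTransform(center: SIMD3<Float>) -> simd_float4x4 {
    var matrix = matrix_identity_float4x4
    matrix.columns.3 = SIMD4<Float>(center, 1)
    return matrix
}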
I am trying to run widgets on visionOS 26. Specifically I am trying to pin them to the simulator room's walls, however I am unable to do so.
Is this a limitation of the visionOS simulator right now, or am I missing a trick here?
In my visionOS app, I have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting from immersive space. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity and visibility, but neither worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
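One workaround sketch (all names here, AppModel, "MainWindow", "Player", are placeholders, not an official pattern): lift the window's state into an app-scoped observable model, so the window can be closed for the immersive transition and reopened without losing anything.

import SwiftUI
import RealityKit

@Observable
final class AppModel {
    var selectedVideoID: String?   // whatever listing state must survive
}

struct ContentView: View {
    @Environment(AppModel.self) private var model
    var body: some View {
        Text("Listing, selection: \(model.selectedVideoID ?? "none")")
    }
}

struct PlayerView: View {
    var body: some View {
        RealityView { _ in /* build the streaming scene here */ }
    }
}

@main
struct VideoApp: App {
    @State private var model = AppModel()   // outlives any single window

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView().environment(model)
        }
        ImmersiveSpace(id: "Player") {
            PlayerView().environment(model)
        }
    }
}

With the state lifted out of the window's views, calling dismissWindow and openWindow around the immersive transition no longer loses the user's place.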
Apple, please provide access to face-tracking blend shapes on visionOS, just like you do on iOS.
You have the best eye and face tracking implementation on the market, please let us use it. There is a sizable audience who will buy the headset just for it.
I personally know multiple people who are not buying the headset simply because you locked those features out.
No raw camera access is needed, just abstracted blend-shape values. You will make the headset so much more useful if you do this simple thing.
Posting this here in case this information is helpful to other developers:
As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues (FB21557639):
Memory Leak: When onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked.
Multiple Callbacks: When the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view body execution, including those of immersive space views that have been previously dismissed.
Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances.
It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution.
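A rough sketch of that workaround (it assumes onWorldRecenter is the trailing-closure view modifier described above; the notification name and relay view are mine):

extension Notification.Name {
    static let worldDidRecenter = Notification.Name("worldDidRecenter")
}

// Lives as an empty overlay in the primary WindowGroup, not in the space.
struct RecenterRelay: View {
    @State private var lastRelay = Date.distantPast

    var body: some View {
        Color.clear
            .onWorldRecenter {
                // Debounce the duplicate callbacks described above.
                let now = Date()
                guard now.timeIntervalSince(lastRelay) > 0.25 else { return }
                lastRelay = now
                NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
            }
    }
}

ImmersiveSpace views can then subscribe via onReceive with NotificationCenter.default.publisher(for: .worldDidRecenter).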
Hi,
When viewing a spatial photo in the Apple Vision Pro Photos app, you can tap the immersive icon in the top-right corner to transition from the window presenting the image as spatial3d to an immersive photo scene using spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried it, but once I transition from spatial3d to spatial3DImmersive I can still see a rectangle around the spatial image.
Thanks.
I want to display an image of a model in a WindowGroup window, and the model is not a single fixed one. How should I convert the model into an image for display?
Hi, I'm working on a VisionOS app and would like to integrate Background Assets to download large files after the app is installed.
I'm wondering what would happen if the user takes off the headset while a background asset is being downloaded. Would it continue downloading or would the download be stopped/paused?
I'm looking for a way to download large assets while the user is not wearing the Vision Pro. Is there any other alternative?
Thanks in advance.
Platform: visionOS 26
Frameworks: RealityKit, SwiftUI
Component: ImagePresentationComponent
I’m working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to .spatialStereoImmersive viewing mode within a WindowGroup context.
This is what I’m seeing:
Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace
Mode switching API: All mode transitions work correctly (logs confirm the component updates)
Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts.
This is where it’s breaking for me:
Window context: When the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive mode shows no visual change. The API calls succeed and state updates correctly, but the immersive content doesn't render.
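For reference, the mode switch itself looks roughly like this (a sketch; photoEntity is a placeholder for the entity that already carries the component):

if var presentation = photoEntity.components[ImagePresentationComponent.self],
   presentation.availableViewingModes.contains(.spatialStereoImmersive) {
    presentation.desiredViewingMode = .spatialStereoImmersive
    photoEntity.components.set(presentation)
}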
Apple’s Spatial Gallery demonstrates exactly what I’m trying to achieve:
Spatial photos displayed in a window with what feels like a horizontal scroll view, using the system window control bar, etc.
Tapping a spatial photo smoothly transitions it to immersive mode in-place.
The immersive content appears to “grow” from the original window position by just changing IPC viewing modes.
This proves the functionality should be possible, but I can’t determine the correct configuration.
So, my questions are:
Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts that you know of?
Are there bounds/clipping settings that need to be configured to allow immersive content to “break out” of window constraints?
Does .spatialStereoImmersive require a specific rendering context that’s not available in windowed RealityView instances?
How do you think Apple’s SG app achieves this functionality?
For a little more context:
All viewing modes are available: [.mono, .spatialStereo, .spatialStereoImmersive]
The spatial photos are valid and work correctly in pure immersive space
Mixed immersive space is active when testing window context
No errors or warnings in console beyond the successful mode switching logs I’m getting
Any insights into the proper configuration for window-hosted immersive content would be greatly appreciated.
Apple's new Spatial Personas use Gaussian Splatting, but I have not found any visionOS APIs for displaying a Gaussian Splat, such as a PLY file.
Am I just missing the Apple documentation? If not, are there common practices developers are using for displaying Gaussian Splats in visionOS?
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes.
For testing, we used this 8K video:
https://deovr.com/nwfnq1
The video was played using the following code:
@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }

    // Create the item and keep a strong reference to it.
    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    // Request Metal-compatible NV12 buffers so Unity can use them as textures.
    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    // Start playback once the item reports it is ready.
    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}
When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s
We’ve tried different attribute settings for AVPlayerItemVideoOutput, and we also removed all the logic that reads frame data, but playback remained choppy.
Can you advise whether this is a player issue or if we’re doing something wrong?
It's much easier to bridge a Metal shader onto a RealityKit model with a custom material than to do the same thing with LowLevelTexture.
Why doesn't visionOS support this material?
Can I combine FromToByAction and BindTarget.MaterialPath to animate my ShaderGraphMaterial? I don't know how to use BindTarget.MaterialPath.
I'm playing about with the hand-tracking systems in RealityKit on Vision Pro.
I thought it would be interesting if I could attach a virtual object to a hand while the hand is gripping (it seemed fun to attach a basic cylinder, to mimic a wand from Harry Potter).
I'm able to detect when the user is gripping, but I'm having trouble placing an object as though it's held within the hand.
The simplest version of this uses an AnchorEntity pointing to the user's palm, which kind of works, but the illusion quickly breaks when you rotate the wrist or hand.
It seems I'll have to roll my own anchor entity using the various joints of the user's hand. Calculating a midpoint between the thumb and little-finger tips seemed like a good start, but it has proven difficult because I need both rotation and position.
I'm already out of my depth with RealityKit and matrices; with some help from ChatGPT I have the code below, but as soon as I apply the position manually (as opposed to using a hand anchor entity) the entity fails to render on the user's hand.
It feels like someone must have looked into this already. Any ideas on what might be the issue here?
Note: HandTrackingSystem.handTracking is a HandTrackingProvider()
guard let anchors = HandTrackingSystem.handTracking.latestAnchors.leftHand else {
    return
}

if let thumb = anchors.handSkeleton?.joint(.thumbTip),
   let little = anchors.handSkeleton?.joint(.littleFingerTip) {
    // Joint positions taken from the translation column of each
    // anchorFromJointTransform (i.e. relative to the hand anchor).
    let thumbPos = simd_make_float3(thumb.anchorFromJointTransform.columns.3)
    let littlePos = simd_make_float3(little.anchorFromJointTransform.columns.3)

    // Midpoint between the two tips, plus a rotation that turns the
    // cylinder's Y axis toward the thumb-to-little-finger direction.
    let midPos = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    let rotation = simd_quatf(from: [0, 1, 0], to: direction)

    wandEntity.transform.translation = midPos
    wandEntity.transform.rotation = rotation
    content.add(wandEntity)
}
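One suspicion I have (unverified): anchorFromJointTransform is relative to the hand anchor, not the world, so the positions above may need to be composed with the anchor's originFromAnchorTransform before being applied to an entity that lives in world space. Roughly:

    let originFromAnchor = anchors.originFromAnchorTransform
    let thumbWorld = originFromAnchor * thumb.anchorFromJointTransform
    let littleWorld = originFromAnchor * little.anchorFromJointTransform
    let thumbPos = simd_make_float3(thumbWorld.columns.3)
    let littlePos = simd_make_float3(littleWorld.columns.3)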
Hello, since updating to beta 3 the sculpting sample app doesn't work; it crashes on launch.
It seems to be something in AnchorEntity or AccessoryAnchoringSource:
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
Hello,
I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual content behind them. Something similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people
I could unfortunately not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools?
Thanks!
Mehdi
I like the toolbar visionOS's Safari uses for back and forward navigation, sharing, and so on. It floats above the window.
My attempt to do this with ornaments isn't as satisfying, since they partially cover the window. My attempts with toolbar haven't produced visible results.
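For reference, my ornament attempt looked roughly like this (a sketch; the buttons are placeholders):

.ornament(attachmentAnchor: .scene(.bottom)) {
    HStack {
        Button("Back", systemImage: "chevron.left") { }
        Button("Forward", systemImage: "chevron.right") { }
        Button("Share", systemImage: "square.and.arrow.up") { }
    }
    .glassBackgroundEffect()
}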
Is this Safari-style toolbar exposed by Apple in the visionOS APIs? If so, could someone point me to documentation or sample code? Thanks!
Hi everyone,
I’m trying to verify something mentioned in the WWDC session “Explore enhancements to your spatial business app.”
At timestamp 3:36, the presenter states:
“You can now access your enterprise license files directly within your Apple Developer account.”
I’ve checked every section of my Developer account, including:
• Membership and Agreements
• Certificates, Identifiers & Profiles
• App Store Connect
• Additional Resources
• Account settings
…but no UI or section exposes these enterprise license files.
Since the Vision Entitlement Services framework actively checks these licenses (for example, mainCameraAccess entitlement approval), I need to confirm the location of the new license file.
Could someone from Apple or anyone who has seen this feature clarify:
1. Where exactly do these enterprise license files appear in the Developer account UI, or
2. Whether this feature has not rolled out yet?
Any guidance or screenshots from those who have access would be invaluable.
Thanks,