If I long-press on an element, the sidebar disappears and a Done button appears on the screen, but nothing else changes. So what exactly are the Environments in Vision Pro's Simulator?
General
I want to display a huge image in 3D space in a RealityView on Vision Pro. Of course, instead of one giant file I'm using a lot of large images.
To achieve this, I'm generating multiple planes exactly adjacent to each other and putting one image on each of them. Although the planes sit exactly side by side, there is still a white gap between them (image below).
**Does anybody know how to fix this issue?**
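For illustration, here is a minimal sketch of that kind of tiling (the function, texture names, and sizes are hypothetical; it assumes vertical plane meshes with UnlitMaterial and tiles loaded from the app bundle):

```swift
import RealityKit

/// Builds a row of plane entities, one per image tile, placed edge to edge.
func makeTiledWall(imageNames: [String], tileWidth: Float, tileHeight: Float) throws -> Entity {
    let wall = Entity()
    for (index, name) in imageNames.enumerated() {
        // Load the tile texture from the app bundle and put it on an unlit material.
        let texture = try TextureResource.load(named: name)
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture))

        let tile = ModelEntity(
            mesh: .generatePlane(width: tileWidth, height: tileHeight),
            materials: [material]
        )
        // Offset each tile by exactly one tile width so the edges touch.
        tile.position = [Float(index) * tileWidth, 0, 0]
        wall.addChild(tile)
    }
    return wall
}
```

One knob worth checking in a layout like this is the texture sampler's address mode, since sampling past the tile edge can show up as a light seam even when the plane positions are exact.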
Topic:
Spatial Computing
SubTopic:
General
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
visionOS
Hi everyone,
I’m working with RealityKit on visionOS and I’m seeing unexpected behavior when the user long-presses the Digital Crown, which recenters the world.
Observed behavior:
When the world is recentered via long-pressing the Crown, the models remain visually in the correct place (as expected).
However, if I query the model’s position or transform immediately after recentering (e.g. entity.position or similar), I still get the old values from before recenter.
As soon as I interact with the model using a gesture (drag/rotate/scale), the position updates and then querying it returns the correct, updated values.
So effectively:
Recenter happens
Visual position is correct
Programmatic position remains stale
First gesture causes the position to “snap” to the correct updated value
Questions:
Is there any event, notification, or callback that fires when the world is recentered due to a long press of the Crown button?
Is there a recommended way to get the updated world-space transform immediately after recenter, without waiting for a gesture?
Is this expected behavior due to deferred/lazy transform updates in RealityKit?
Right now it feels like recentering updates the coordinate system but doesn’t immediately commit new transform values to entities until some interaction occurs.
Any guidance or best-practice patterns for handling this would be appreciated.
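In case it helps narrow this down, here is a minimal diagnostic sketch (the component and system names are hypothetical) that logs the world-space position of tagged entities every frame from a RealityKit System, which makes it easy to see exactly when the queried value changes relative to the recenter:

```swift
import RealityKit

/// Tag component for the entities whose transforms should be logged.
struct TrackedComponent: Component {}

/// Logs each tracked entity's world-space position once per frame.
struct TransformLoggingSystem: System {
    static let query = EntityQuery(where: .has(TrackedComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            // position(relativeTo: nil) is the entity's position in world space.
            print(entity.name, entity.position(relativeTo: nil))
        }
    }
}
```

TrackedComponent.registerComponent() and TransformLoggingSystem.registerSystem() need to be called once at app launch, and the component added to the entities in question.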
Thanks!
Greetings. I am having this issue with a Unity PolySpatial visionOS app.
We have our main Bounded Volume for our app.
We have other Native UI windows that appear when we interact with objects in our Bounded Volume.
If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't.
If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear; only the Native UI windows we left open are visible. But we can sometimes still hear sounds that are playing in our Bounded Volume.
What solutions are there to make sure our Bounded Volume always appears when the app is open?
Following up on my previous question here: https://developer.apple.com/forums/thread/774262
Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency like glassBackgroundEffect() behind the RealityView in a ZStack causes the entire window to flicker.
Additionally, a SwiftUI attachment placed in front of the stereoscopic image plane is invisible when the user looks at it straight on, at 90 degrees. However, as the user looks at it from increasing angles off to the side, the attachment gradually becomes visible again.
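For reference, a stripped-down version of the layout being described (the view and overlay content are placeholders):

```swift
import SwiftUI
import RealityKit

struct OverlaidRealityView: View {
    var body: some View {
        ZStack {
            RealityView { content in
                // The stereoscopic image plane entity is added here (omitted).
            }
            // Content layered in front of the RealityView; with transparency
            // (e.g. glassBackgroundEffect) this is where the issue appears.
            Text("Overlay")
                .padding()
                .glassBackgroundEffect()
        }
    }
}
```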
Are these behaviors expected? What is a recommended approach to overlay content in front of a RealityView? Thanks!
The purpose is to create a simple web-based gallery of spatial photos and videos using static HTML files. I have successfully displayed spatial photos using the img tag and .heic files: I can tap and hold the image to bring up the contextual menu and select View Spatial Photo from there. Is there any way to add a control to the image, such as a link or an overlay on the image itself, that a user can simply tap to show the image in 3D? And how can I host a video file on a web page without going through a CDN/streaming service? Sample HTML would be much appreciated.
Topic:
Spatial Computing
SubTopic:
General
Hi,
I wanted to ask if you are familiar with a way of making the Logitech Muse sterile for operating room use?
Topic:
Spatial Computing
SubTopic:
General
The initial startup of visionOS 26 after install is glacially slow.
BUG IN CLIENT: For mixed reality experiences please use cp_drawable_compute_projection API
Type: Error | Timestamp: 2026-01-13 09:21:57.242191+02:00 | Process: Neuron XR | Library: CompositorNonUI | TID: 0x75e2c
Topic:
Spatial Computing
SubTopic:
General
After transmission, the resolution of the live stream drops significantly; image detail is lost and sharpness is insufficient, so key information about the 3D furniture products, such as texture and dimensions, cannot be presented accurately, which affects users' judgment of the products.
Expectation
Optimize the resolution-compression strategy used during stream transmission, reduce the quality loss in transit, and improve the clarity of the live stream received on the Mac so that it matches the high-precision requirements of 3D product display.
After switching, the two sources differ considerably in brightness, color saturation, contrast, and other image-quality parameters, which makes the picture feel visually disjointed, breaks the continuity of the live stream, and hurts the viewer's sense of immersion.
Expectation
· Benchmark against the image quality of the DSLR cameras typically used for live streaming, and improve the Vision Pro's brightness and color reproduction.
· Provide on-device or companion-software controls for custom image adjustment (brightness, contrast, color temperature, etc.), supporting manual calibration before the stream so that the output stays consistent with the DSLR's look.
Since only the user can take a screenshot using the Apple Vision Pro's top buttons, the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is to:
ask the user to take a screenshot,
wait until the user taps a button indicating the screenshot has been taken,
have the app open the PhotosPicker and ask the user to select the screenshot,
and, when the user presses Done, receive the screenshot from the picker (a sketch of this picker step follows below).
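For reference, a minimal sketch of that picker step (the view and property names are hypothetical):

```swift
import SwiftUI
import PhotosUI
import UIKit

struct ScreenshotPickerView: View {
    @State private var selection: PhotosPickerItem?
    @State private var screenshot: UIImage?

    var body: some View {
        PhotosPicker("Select the screenshot", selection: $selection, matching: .screenshots)
            .onChange(of: selection) { _, newItem in
                Task {
                    // Once the user taps Done, load the picked item as image data.
                    if let data = try? await newItem?.loadTransferable(type: Data.self) {
                        screenshot = UIImage(data: data)
                    }
                }
            }
    }
}
```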
One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as:
When called, the Apple API captures the screenshot in Apple-secured memory.
The API displays the screenshot to the user with appropriate privacy warnings and asks whether the user wants to:
a. share this screenshot with the app,
b. cancel, or
c. retake the screenshot.
If the user approves, the app receives the screenshot.
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV?
APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical.
If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency?
Thank you for any insights.
In Reality Composer, it is possible to create child components and manipulate them within the hierarchy of a ModelEntity. Is there a way to create child components in other 3D modeling programs, such as Blender?
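For context, when a model with a hierarchy is exported from a DCC tool as USD/USDZ, the named objects come through as child entities that can be looked up and manipulated at runtime; a minimal sketch (the file and node names are hypothetical):

```swift
import SwiftUI
import RealityKit

/// Loads a USDZ exported from Blender and adjusts one of its named children.
func placeImportedModel(in content: RealityViewContent) throws {
    // "Chair.usdz" in the app bundle; its object hierarchy becomes the entity hierarchy.
    let chair = try Entity.load(named: "Chair")

    // "Cushion" is a named node from the Blender scene, now a child entity.
    if let cushion = chair.findEntity(named: "Cushion") {
        cushion.position.y += 0.1
    }
    content.add(chair)
}
```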
Environment
visionOS 26.1, Xcode 26.1.1
Problem
When a WindowGroup opens an ImmersiveSpace and the user closes the window via the X button, the async Task in .onDisappear gets cancelled before dismissImmersiveSpace() completes, leaving the ImmersiveSpace active with no way to exit.
Steps
WindowGroup opens ImmersiveSpace in .onAppear
User clicks X to close window
.onDisappear fires but async cleanup cancelled
ImmersiveSpace remains active, user trapped
Expected
ImmersiveSpace dismissed when window closes
Actual
ImmersiveSpace remains active
Code
// openImmersiveSpace / dismissImmersiveSpace come from
// @Environment(\.openImmersiveSpace) and @Environment(\.dismissImmersiveSpace).
.onAppear {
    Task {
        await openImmersiveSpace(id: "VideoCallMainCamera")
    }
}
.onDisappear {
    Task {
        await dismissImmersiveSpace() // Gets cancelled
    }
}
What I've Tried
Task in .onDisappear ❌
scenePhase monitoring ❌
High priority Task ❌
.restorationBehavior(.disabled) + .defaultLaunchBehavior(.suppressed) ✅ (prevents restoration but doesn't fix immediate cleanup)
Question
What's the recommended pattern for ensuring ImmersiveSpace cleanup when WindowGroup closes? Is there a way to block window closure until async cleanup completes, or should ImmersiveSpaces automatically dismiss with their parent window?
Are there any changes to RotationSystem: System and RotationComponent: Component that I should be aware of, so I can tell whether I need to update how I use them in my visionOS app?
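For reference, a minimal sketch of the RotationComponent/RotationSystem pattern in question, written against the public API (the field name and rotation axis are illustrative):

```swift
import RealityKit

struct RotationComponent: Component {
    // Rotation speed in radians per second; the field name is illustrative.
    var speed: Float = 1.0
}

struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RotationComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let rotation = entity.components[RotationComponent.self] else { continue }
            // Rotate around the Y axis by speed * frame delta time.
            entity.transform.rotation *= simd_quatf(
                angle: rotation.speed * Float(context.deltaTime),
                axis: [0, 1, 0]
            )
        }
    }
}

// Registered once at app launch:
// RotationComponent.registerComponent()
// RotationSystem.registerSystem()
```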
Hello,
I want to understand the current state of developing for Apple Vision Pro. I want to stream a video from a remote server in real time; it is a live video stream, so I can't download it.
I want to serve both a low-quality stream and a high-resolution stream, where the server only sends the high-resolution "box" the user is looking at. Are there any APIs to track where the user is looking within the experience?
Thanks,
Hi all,
I am currently developing a game in Unity for visionOS, and for my specific use case I'd prefer to use the PSVR2 controllers as the source of the raycast for menu selection instead of the default visionOS gaze. Is there a way to access the IMU of the PSVR2 controllers to do this, instead of just using eye gaze + controller click for selection? Is there a specific configuration for GCController from within Unity, maybe?
Thank you!
I'm trying to develop an app that broadcasts what the user sees (previously we were using main camera access, but now we'd like to investigate this option).
I have set up the broadcast extension and added the picker. I click my button, I can see my broadcast extension in the options list in Control Center, and once I click Start it stops after roughly one second.
I'm not able to get anything in the console from my SampleHandler (no prints, logs, or anything).
However, I can see some contradictory messages in Console.app (one right after the other):
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
We have the enterprise license and the capability, and I added the capability to the extension target as well.
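For what it's worth, a minimal sample handler that does nothing but log its lifecycle with os.Logger (the subsystem and category strings are placeholders) can help confirm whether the extension process is being launched at all before it is torn down:

```swift
import ReplayKit
import os

class SampleHandler: RPBroadcastSampleHandler {
    // Subsystem and category strings are placeholders.
    private let logger = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        logger.info("broadcastStarted")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        logger.debug("received sample buffer of type \(sampleBufferType.rawValue)")
    }

    override func broadcastFinished() {
        logger.info("broadcastFinished")
    }
}
```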
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
Topic:
Spatial Computing
SubTopic:
General