It looks like about one week after another nearby AVP device is accepted, the pairing expires...
Since we are providing our clients with a long-running app to walk inside architecture, it's a shame that non-technical staff have to reconnect 5 devices every week to get them working together.
Is there any workaround for this issue, or should it go straight to the wishlist?
Thanks for the support!!
Hi guys,
In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness.
However, I noticed that in an app built by Apple, when viewing a glass background view from the side, it appears to have thickness.
I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value.
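For reference, a minimal sketch of what I tried (the content inside the ZStack is just a placeholder):

import SwiftUI

// Sketch of the attempt described above: a glass background plus .frame(depth:).
// As noted, this shows up as two separated layers rather than a solid slab.
struct GlassCardView: View {
    var body: some View {
        ZStack {
            Text("Hello, visionOS")
                .padding(40)
        }
        .glassBackgroundEffect()
        .frame(depth: 30)   // placeholder depth value, in points
    }
}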
My question is:
Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture?
Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
According to the official documentation, the .blur(radius:) modifier should be able to apply a Gaussian blur to a RealityView. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred.
Here’s the test code:
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}
My question:
How can I make .blur(radius:) visually affect the content rendered in a RealityView?
Can you provide a working example where .blur() visually affects any part of a RealityView?
Thanks!
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration.
The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.
In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected.
However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.
Steps I have taken:
Created a new layer (CameraFeed) and assigned the relevant objects to it.
Set the secondary camera’s culling mask to render only the CameraFeed layer.
Assigned the RenderTexture as the camera’s target texture.
Applied the RenderTexture to an Unlit/Texture material on a quad.
Confirmed the camera is active and correctly positioned relative to the object.
From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.
My questions are:
Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
Has anyone found alternative design patterns or methods for this kind of interaction?
Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4
Any insight or suggestions would be greatly appreciated.
Thank you.
Hi, I would like to train Gaussian splats from my object captures, so I need a point cloud and camera positions, together with the original photos, to train the splats in an app like postShot.
I could do this with Reality Capture, which supports exporting point clouds and camera positions, but it does not do well with turntable photogrammetry, while the Apple Object Capture API produces really solid results with turntable images.
So my question is: can I export camera data from my object captures to use in another application? Or is there maybe a plan to add this feature in the future?
It would be really helpful for creating ultra-realistic 3D objects in Gaussian splat format.
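For what it's worth, PhotogrammetrySession on macOS appears to offer .poses (and, on newer systems, .pointCloud) request types that return estimated camera data alongside the reconstruction; a rough, unverified sketch of what an export pass could look like (the folder path is a placeholder):

import Foundation
import RealityKit

// Unverified sketch: ask PhotogrammetrySession for camera poses and a point cloud
// from the original capture images, then convert them to whatever postShot expects.
func exportCameraData(from imagesFolder: URL) throws {
    let session = try PhotogrammetrySession(input: imagesFolder)

    Task {
        for try await output in session.outputs {
            switch output {
            case .requestComplete(let request, let result):
                // result should carry the poses or point cloud for the matching request.
                print("Finished \(request): \(result)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    try session.process(requests: [.poses, .pointCloud])
}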
Thanks for any suggestions…
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me).
So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures detect, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials of "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience in Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality.
Thanks for a complete response with actionable steps.
I'm trying to develop an app that broadcasts what the user sees (previously we were using main camera access), but now we'd like to investigate and try this option.
I have set up the BroadcastExtension and added the picker. When I click my button, I can see my broadcast extension in the options list in Control Center, but once I click Start, it stops after roughly 1 second.
I'm not able to get anything in the console from my SampleHandler (prints, logs, or anything).
I can, however, see some misleading information in Console.app (one line right after the other):
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
We have the enterprise license and the capability, and I did add the capability to the extension target as well.
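In case it's useful, a minimal SampleHandler sketch that logs with os.Logger instead of print, since print output from an upload extension often never reaches the Xcode console (the subsystem string is a placeholder); filtering on it in Console.app should at least show whether the extension's lifecycle callbacks fire before it stops:

import ReplayKit
import CoreMedia
import os

class SampleHandler: RPBroadcastSampleHandler {
    // Placeholder subsystem/category; filter on these in Console.app.
    private let log = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            log.debug("received video sample buffer")
        }
    }

    override func broadcastFinished() {
        log.info("broadcastFinished")
    }
}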
I noticed that when I drag the menu window in an Immersive View, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced.
May I ask how visionOS implements this effect? Is there any API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
I'm currently implementing 180° / 360° immersive video for my app.
I implemented 360° easily by just applying a VideoMaterial to a flipped sphere.
But I'm stuck at 180°. I'm trying to implement it by applying a VideoMaterial to a hemisphere (half sphere): I want the VideoMaterial to be visible on the front half of the sphere and the back half to be transparent / clear.
Is there any advice / information / idea for implementing this? Any help would be greatly appreciated.
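For context, a minimal sketch of the 360° case that already works (the video file name is a placeholder):

import RealityKit
import AVFoundation

// Sketch of the working 360° setup: a VideoMaterial on a sphere whose X scale
// is negated so the video is visible from the inside. "video.mp4" is a placeholder.
func makeVideoSphere() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "video", withExtension: "mp4") else { return nil }
    let player = AVPlayer(url: url)
    let sphere = ModelEntity(mesh: .generateSphere(radius: 10),
                             materials: [VideoMaterial(avPlayer: player)])
    sphere.scale = SIMD3<Float>(-1, 1, 1)   // flip the sphere so its faces point inward
    player.play()
    return sphere
}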
Hi, I'm currently implementing 180° / 360° support for immersive video in my app.
I was able to implement 360° easily by just giving a VideoMaterial to a flipped sphere.
However, I'm a bit stuck at 180°. I want to implement it by setting a VideoMaterial on a hemisphere mesh, but since RealityKit doesn't yet provide a default function such as MeshResource.generateHemisphere, I just want to apply the VideoMaterial so the front half is visible and the back half is transparent. I thought this would make my sphere look like a hemisphere.
But I can't find a way to implement this. I would appreciate any advice / idea / information that might help.
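One direction that might work (an untested sketch): build the hemisphere yourself with MeshDescriptor, spanning only 180° of azimuth, and put the VideoMaterial on it. The vertex layout, UV mapping, and winding below are assumptions and may need flipping:

import Foundation
import RealityKit

// Untested sketch of a hand-built hemisphere mesh (front half of a sphere).
// generateHemisphere doesn't exist in RealityKit, so positions, UVs, and
// triangle indices are produced manually with MeshDescriptor.
func makeHemisphereMesh(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)      // 0 (top) ... 1 (bottom)
        let phi = v * .pi                          // polar angle
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)   // 0 ... 1 across the front half
            let theta = u * .pi                    // only 180° of azimuth
            positions.append(SIMD3<Float>(radius * sin(phi) * cos(theta),
                                          radius * cos(phi),
                                          -radius * sin(phi) * sin(theta)))
            uvs.append(SIMD2<Float>(u, 1 - v))     // assumed video UV layout; may need flipping
        }
    }

    let columns = UInt32(slices + 1)
    for stack in 0..<stacks {
        for slice in 0..<slices {
            let a = UInt32(stack) * columns + UInt32(slice)
            let b = a + columns
            // Wound so the inside of the dome is the front face;
            // reverse the order if the mesh renders invisible.
            indices += [a, a + 1, b, a + 1, b + 1, b]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

The resulting MeshResource could then go on a ModelEntity with the same VideoMaterial as the 360° case.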
A spatial photo in a RealityView has a default corner radius. I made a parallax effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappeared on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo with a corner radius effect?
Currently I am using a mixed-style immersive space to place both my WindowView (plain style) and my ImmersiveView content together. The issue is that depth testing always lets the virtual content block my normal WindowView. Is it possible to manually make the windowed view always display in front of my virtual content in mixed immersion? (I know about modelSortGroup, but it doesn't quite fit here.)
Or can I dynamically change the .progressive value while the immersive space is open (setting the value to zero means .mixed itself, right)?
Summary
After updating to visionOS 26, we’ve encountered severe transparency rendering issues in RealityKit that did not exist in visionOS 2.6 and earlier.
These regressions affect applications that dynamically control scene opacity (via OpacityComponent).
Our app renders ultra-realistic apartment environments in real time, where users can walk or teleport inside 3D spaces. When the user moves above a speed threshold, we apply a global transparency effect to prevent physical collisions with real-world objects.
Everything worked perfectly in visionOS 2.6 — the problems appeared only after upgrading to 26.
Scene Setup Overview
The environment consists of multiple USDZ models (e.g., architecture, rooms, furniture).
We manage LODs manually for performance (e.g., walls and floors always visible in full-res, while rooms swap between low/high-res versions based on user position and field of view).
Transparency is achieved using OpacityComponent, applied dynamically when the user moves (see the sketch after this list).
Some meshes (e.g., portals to skyboxes, glass windows) use alpha materials.
We also use OcclusionMaterials to prevent things from being seen through walls when the scene is transparent.
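For illustration, a minimal sketch of the kind of dynamic opacity control described above (the threshold and opacity values are placeholders, not our production code):

import RealityKit

// Illustrative only: fade the scene root when the user moves faster than a threshold.
// `sceneRoot` and `userSpeed` stand in for the app's real state.
func updateSceneOpacity(sceneRoot: Entity, userSpeed: Float) {
    let speedThreshold: Float = 0.5                                   // m/s, placeholder
    let targetOpacity: Float = userSpeed > speedThreshold ? 0.3 : 1.0
    sceneRoot.components.set(OpacityComponent(opacity: targetOpacity))
}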
Observed Behavior by Scenario
(I can share a video showing the results of each scenario if needed.)
Scenario 1 — Severe Flickering (Root Opacity)
Setup:
OpacityComponent applied to the root entity
NO ModelSortGroupComponent used
Symptoms:
Strong flickering when transparency is active
Triangles within the same mesh render at inconsistent opacity levels
Appears as if per-triangle alpha sorting is broken
Workaround:
Moving the OpacityComponent from the root to each individual USDZ entity removes the per-triangle flicker
Pros:
No conflicts with portals or alpha materials
Scenario 2 — Partially Stable, But Alpha Conflicts
Setup:
OpacityComponent applied per USDZ entity
ModelSortGroupComponent(planarUIAlwaysBehind) applied to portal meshes
Other entities have NO ModelSortGroupComponent
Symptoms:
Frequent alpha blending conflicts:
Transparent surfaces behind other transparent surfaces flicker or disappear
Example: Wine glasses behind glass doors — sometimes neither is rendered, or only one
Even opaque meshes behind glass flicker due to depth buffer confusion
Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely
Analysis:
Appears related to internal changes in alpha sorting or depth pre-pass behavior introduced in visionOS 26
Pros:
Most stable setup so far
Cons:
Still unreliable when OpacityComponent is active
Scenario 3 — Layer Separation Attempt (Regression)
Setup:
Same as Scenario 2, but:
Entities with alpha materials moved to separate USDZs
Explicit ModelSortGroupComponent order set (alpha surfaces rendered last)
Symptoms:
Transparent surfaces behind other transparent surfaces flicker or disappear
Depth is completely broken when there's a large transparent surface
Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely
Workaround Attempt:
Re-ordering and further separating models did not solve it
Pros:
None — this setup makes transparency unusable
Conclusion
There appears to be a regression in RealityKit’s handling of transparency and sorting in visionOS 26, particularly when:
OpacityComponent is applied dynamically, and
Scenes rely on multiple overlapping transparent materials.
These issues did not exist prior to 26, and the same project (no code changes) behaves correctly on previous versions.
Request
We’d appreciate any insight or confirmation from Apple engineers regarding:
Whether alpha sorting or opacity blending behavior changed in visionOS 26
If there are new recommended practices for combining OpacityComponent with transparent materials
If a bug report already exists for this regression
Thanks in advance!
I have tested the MagnifyGesture code below on multiple devices:
Vision Pro - working
iPhone - working
iPad - working
macOS - not working
In Reality Composer Pro, I have also added the below components to the test model entity:
Input Target
Collision
For macOS, I tried the trackpad pinch gesture and the mouse scroll wheel, but neither approach works. How can I resolve this issue? Thank you.
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
        }
        .gesture(MagnifyGesture()
            .targetedToAnyEntity()
            .onChanged(onMagnifyChanged)
            .onEnded(onMagnifyEnded))
    }

    func onMagnifyChanged(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyChanged")
    }

    func onMagnifyEnded(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyEnded")
    }
}
I'm getting the following error message when compiling the Apple-provided Spaceship game sample for the Apple Vision Pro. I've already tried deleting the derived data, resetting the package cache, and restarting Xcode, but I'm still getting the following error: [xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
We use Unity 6 + visionOS 2.2 to develop an MR application.
App Mode: RealityKit with PolySpatial.
In actual testing, we found that when I move more than 80~100 meters away from the application's starting position, my current position gets reset to Vector.zero, which makes the application experience very bad. Is anyone experiencing the same problem? Is there a solution?
I am running the Spatial Rendering App template demo, and it shows “No People Found” / “There is no one nearby to share with”.
How can I stream videos rendered by my Mac to my Vision Pro?
I am using macOS 26.0, visionOS 26, and Xcode 26.
Environment Versions
・macOS 15.6.1
・visionOS 26.0.1
・Xcode 16.1 or 26.0.1
・Unity 6000.2.9f1
・Apple.Core 3.2.0
・Apple.PHASE 1.2.7
・PolySpatial 2.4.2
With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio plays and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when they are enabled and their parameters are adjusted.
What is required to make Early Reflection and Late Reverb take effect on a visionOS device build?
Actions taken:
・created a SoundEvent.
・in composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
・attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
・attached a PHASE Listener to the mainCamera and set the ReverbPreset to a value other than None.
・in project settings > Audio, set Spatializer plugin to PHASE Spatializer.
・from there, build for visionOS.
At a recent community meeting we were wondering how Apple creates this soft-edge effect around the occlusion cutouts. We see this effect on keyboard cutouts, iPhone cutouts, and in progressive spaces.
An example: notice the soft edges around the occlusion cutout for the keyboard.
One of our members created some Shader Graph materials to explore soft edges. These work by sending data into the opacity channel of the PreviewSurface node.
Unfortunately, the Occlusion Surface nodes lack any sort of input. If you know how to blend these concepts with RealityKit Occlusion, please let us know!
I noticed in the latest macOS beta 3 that there was this update:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
I am not noticing any difference from before with the reconstructions I tested, so I am assuming it's falling back to the old model, but in the logs there is no way to see whether the new model downloads successfully or fails.
Do you have any more information on what was improved here, with some examples of what we should be looking for? Also, how can I confirm that the download of the new model has not failed?