When I show a window while a sky sphere is shown, the handles to drag/close/resize the window are hidden. The colliders still work, so they are there, but only the visuals are hidden. I already know from another project that this also happens to volumes.
They only appear once you get closer to the window or if the sky sphere gets removed.
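For context, the sky sphere is a fairly standard setup along these lines (illustrative sketch, not the exact project code):
import RealityKit

// Illustrative sky sphere: a large sphere with an unlit texture,
// flipped inside out so it is visible from within.
func makeSkySphere(texture: TextureResource) -> ModelEntity {
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 1000),
        materials: [material]
    )
    sphere.scale = SIMD3<Float>(x: -1, y: 1, z: 1) // invert so the inside faces the viewer
    return sphere
}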
Is this a known issue or is there a fix for that?
.persistentSystemOverlays(.visible) does not fix it.
Xcode 16.3.0 Beta, visionOS 2.4
Hi all,
I'm working on an ARKit-based iOS app where I need to accurately determine the direction the device is facing to localize objects in the real world. I'm using:
let config = ARWorldTrackingConfiguration()
config.worldAlignment = .gravityAndHeading
Thus, I would expect the world alignment to behave as described on the gravityAndHeading documentation page.
The AR session is started after verifying that CLLocationManager.headingAccuracy <= 20, and the compass appears to be calibrated.
However, I'm seeing a major inconsistency:
When the rear camera is physically pointed toward true North, I would expect:
cameraTransform.columns.2.z ≈ -1 // (i.e. ARKit's -Z pointing North)
But instead, I'm consistently seeing:
cameraTransform.columns.2.z ≈ +0.97 // Implies camera is facing South
Meanwhile, the translation vector behaves as expected:
As I physically move North, cameraTransform.columns.3.z becomes more negative, matching the world’s +Z = South assumption.
For example, let's say I have the device in landscapeRight (or landscapeLeft for UIDeviceOrientation). Let's say the device rear camera is pointing towards True North, and I start moving towards True North. I get something like this:
Camera Transform = simd_float4x4(
[
[0.98446155, -0.030119859, 0.172998, 0.0],
[0.023979114, 0.9990097, 0.037477385, 0.0],
[-0.17395553, -0.032746706, 0.98420894, 0.0],
[0.024039675, -0.037087332, -0.22780673, 0.99999994]
])
As you can see, cameraTransform.columns.2.z is positive despite the rear camera pointing towards True North, while cameraTransform.columns.3.z is correctly negative as the device moves towards True North.
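For reference, a minimal sketch of the kind of check involved (the helper name is illustrative):
import ARKit
import simd

// Illustrative helper: the camera looks along its local -Z axis, so the
// world-space forward direction is the negated third column of the transform.
// With .gravityAndHeading, world -Z is true North and world +X is East.
func logOrientation(for frame: ARFrame) {
    let transform = frame.camera.transform
    let zAxis = simd_float3(transform.columns.2.x,
                            transform.columns.2.y,
                            transform.columns.2.z)
    let forward = -zAxis
    // Heading measured clockwise from North in the horizontal plane.
    let headingDegrees = atan2(forward.x, -forward.z) * 180 / .pi
    print("columns.2.z =", zAxis.z, "forward =", forward, "heading ≈", headingDegrees)
}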
So here is my question:
Why is cameraTransform.columns.2.z positive when the rear camera is physically facing North?
Any clarity would be deeply appreciated. I've read the documentation and tested with different heading accuracies and AR session resets, but I keep running into this orientation mismatch.
Thanks in advance!
Hello
When processing an ARPlaneAnchor's geometry using its ARPlaneGeometry, the triangleIndices property is an array of Int16. It's supposed to be an index buffer, which in Metal can only be uint16 or uint32. What am I supposed to do with negative indices? Negative indices are rare, but they do appear sometimes.
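My current guess (just an assumption, not documented behavior) is that the values are the raw bit pattern of unsigned 16-bit indices, so a bit-cast would recover them, something like:
import ARKit

// Guess (not documented behavior): the Int16 values may simply be the raw bit
// pattern of unsigned 16-bit indices, so e.g. -32768 would mean vertex index 32768.
func unsignedTriangleIndices(of planeAnchor: ARPlaneAnchor) -> [UInt16] {
    planeAnchor.geometry.triangleIndices.map { UInt16(bitPattern: $0) }
}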
Thank you
Topic:
Spatial Computing
SubTopic:
ARKit
So it seems that there is a contradiction between how ARKit describes UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight in the UIKit documentation.
In the ARKit documentation for ARCamera.transform, it says the following:
This transform creates a local coordinate space for the camera that is constant with respect to device orientation. In camera space, the x-axis points to the right when the device is in UIDeviceOrientation.landscapeRight orientation—that is, the x-axis always points along the long axis of the device, from the front-facing camera toward the Home button. The y-axis points upward (with respect to UIDeviceOrientation.landscapeRight orientation), and the z-axis points away from the device on the screen side.
Going through the same link, we see the definition of UIDeviceOrientation.landscapeRight given as:
The device is in landscape mode, with the device held upright and the front-facing camera on the right side.
There seems to be a conflict between the two definitions, one that has already been asked about and visualized in this StackOverflow thread.
That answer's resolution is that ARKit's landscapeRight, unlike what is given for UIDeviceOrientation.landscapeRight, has the Home button on the right, as stated in the ARCamera.transform documentation.
It says that more details are given in this StackOverflow thread, but that thread talks about the discrepancy between the definitions of landscapeRight in UIDeviceOrientation and UIInterfaceOrientation, not about anything related to ARKit.
So I am wondering: why does ARKit's definition of landscapeRight contradict that of UIDeviceOrientation despite explicitly referencing it? Is it just a mistake by Apple's developers that hasn't been resolved even after so long?
Hi, I'm trying to place an object in front of an AVPlayer that is docked in a VideoDockingRegion, but when launched in an immersive space, the video passes through the objects placed in front of it. How do I make sure these objects remain visible?
image for reference
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
ARKit
RealityKit
Reality Composer Pro
Shader Graph Editor
Can I apply .scrollInputBehavior(.enabled, for: .look) to a WebView (wrapped in a UIViewRepresentable) in a visionOS 26 app?
I tried it myself, but I couldn't get it to work, so I would like to know if there is any way to do this.
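For reference, the setup in question looks roughly like this (the wrapper below is illustrative, not my exact code):
import SwiftUI
import WebKit

// Illustrative wrapper around WKWebView.
struct WebView: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ webView: WKWebView, context: Context) {}
}

struct BrowserView: View {
    var body: some View {
        WebView(url: URL(string: "https://www.example.com")!)
            // This is the modifier I would like to take effect on the
            // web view's internal scroll view.
            .scrollInputBehavior(.enabled, for: .look)
    }
}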
Best regards.
Hello,
We are developing an AR app that requires the LiDAR meshes. Unfortunately, the ARMeshAnchors that allow us to retrieve the mesh data are very unreliable. Very often, the ARSession removes all ARMeshAnchors, and they take anywhere from 5 s to 30 s to reappear. Plane detection (ARPlaneAnchors) still works fine, and camera tracking also works normally.
I tried a basic ARKit sample app, and got the same behaviour as our own app.
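For reference, the mesh anchors come from a standard world-tracking setup with scene reconstruction enabled, roughly like this (minimal sketch, not our full configuration):
import ARKit

let session = ARSession()
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    // ARMeshAnchors are delivered via the session delegate as the
    // LiDAR reconstruction updates.
    configuration.sceneReconstruction = .mesh
}
configuration.planeDetection = [.horizontal, .vertical]
session.run(configuration)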
Is this a known issue? Is there anything we can do to mitigate it?
Thank you
Topic:
Spatial Computing
SubTopic:
ARKit
I would like to compose an APNs message (using FCM).
What should I do to accomplish this?
Topic:
Spatial Computing
SubTopic:
ARKit
I want to create a screenshot (a static image) of the current view on Apple Vision Pro from code in visionOS. Unfortunately, I currently can't find a way to achieve this. The only option I've found so far is through Reality Composer Pro. However, since I want to accomplish this directly from code, that approach is not an option for me.
Are there any changes to RotationSystem: System and RotationComponent: Component that I should be aware of to see if I need to update my use in my visionOS app?
Using Quick Look exits you from both your app and the Immersive Space. Is there a way to view immersive images within an Immersive Space?
Topic:
Spatial Computing
SubTopic:
General
I still can't find resources on how to replace hands in a fully immersive Space. I reached the goal by creating an ARKit session that detects the joints and connects a USDZ hand mesh to the hand-tracked joints, but I feel that's not the best solution. I really want to use RealityKit's potential to track and replace hands (with skinned USDZ ones) in an immersive environment, but the only resources I found are from November 2023. :(
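For context, the ARKit-based approach I mentioned is roughly along these lines (heavily simplified sketch; the skinned USDZ entity wiring is omitted):
import ARKit
import RealityKit

// Heavily simplified sketch of the hand-tracking approach described above.
func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }

        let wrist = skeleton.joint(.wrist)
        // World-space transform of the joint, to be copied onto the
        // corresponding joint of the skinned USDZ hand entity.
        let worldTransform = anchor.originFromAnchorTransform * wrist.anchorFromJointTransform
        _ = worldTransform
    }
}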
Can someone help me?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Can we constrain or clamp translation with the new ManipulationComponent? For example, allow free movement within certain bounds.
I work on motion capture systems for VTubing. I can't seem to find any information on gaining access to the Face Tracking features of iOS while developing for visionOS.
I would love to bring VStreamer Live to visionOS.
Topic:
Spatial Computing
SubTopic:
ARKit
In my visionOS app, I have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless it is manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.
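Roughly, the transition looks like this (simplified sketch; the window and space identifiers are illustrative, not my exact code):
import SwiftUI

// Simplified sketch of the current transition.
struct PlayButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Play") {
            Task {
                _ = await openImmersiveSpace(id: "ImmersivePlayer")
                // Closing the main window hides it, but also throws away its
                // state, so reopening it later behaves like a fresh launch.
                dismissWindow(id: "MainWindow")
            }
        }
    }
}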
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity and visibility, but neither worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
Hi,
since iOS 18, UnlitMaterial and ShaderGraphMaterial have had the option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:)
Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial where tone mapping is disabled, like so:
let unlitMat = UnlitMaterial(applyPostProcessToneMap: false)
let customMaterial = try CustomMaterial(
from: unlitMat,
surfaceShader: surfaceShader,
geometryModifier: geometryModifier
)
but that does not seem to work. The colors of my texture still look altered in comparison to a plain UnlitMaterial or a ShaderGraphMaterial where it's disabled.
Any hints? Thank you!
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the session with captureSession.stop(pauseARSession: false); my understanding is that the ARSession keeps running at that point.
Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan did not detect).
But at this point, the second ARView (RoomPlan has its own ARView internally, I think) always shows a black screen and doesn't work normally. This is the issue I want to resolve. Please help me get the second ARView working.
We are building an AR experience for deployment on iPhones. We are using Unity, but it looks as if Reality Composer Pro has better features for spatial audio. I am not sure whether Reality Composer Pro can only be used for Vision Pro, or whether it can also be used for deployment on iPhone or iPad.
The initial startup of visionOS 26 after install is glacially slow.
I need to loop my videoMaterial and I don't know how to make it happen in my code.
I have included an image of my videoMaterial code.
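For clarity, what I'm trying to end up with is something along these lines, if this is even the right direction (sketch only; the URL handling and names are illustrative):
import AVFoundation
import RealityKit

// Illustrative sketch: back the VideoMaterial with an AVQueuePlayer and an
// AVPlayerLooper so playback repeats automatically.
func makeLoopingVideoMaterial(url: URL) -> (material: VideoMaterial, looper: AVPlayerLooper) {
    let item = AVPlayerItem(asset: AVURLAsset(url: url))
    let queuePlayer = AVQueuePlayer()
    // Keep a strong reference to the looper, or looping stops.
    let looper = AVPlayerLooper(player: queuePlayer, templateItem: item)

    let material = VideoMaterial(avPlayer: queuePlayer)
    queuePlayer.play()
    return (material, looper)
}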
Any help making this happen will be greatly appreciated.
Thank you,
Christopher
Topic:
Spatial Computing
SubTopic:
ARKit