Spatial photos in a RealityView have a default corner radius. I built a parallax-style effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappears on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo so it keeps the corner radius effect?
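A minimal sketch of the setup described above, assuming each spatial photo is shown via an ImagePresentationComponent inside its own RealityView; the .clipShape call is the approach that reportedly loses the rounded corners on the off-center photos.

import SwiftUI
import RealityKit

struct SpatialPhotoCarousel: View {
    // Assumed to be prepared elsewhere (one component per spatial photo).
    let components: [ImagePresentationComponent]

    var body: some View {
        ScrollView(.horizontal) {
            HStack(spacing: 24) {
                ForEach(components.indices, id: \.self) { index in
                    RealityView { content in
                        let entity = Entity()
                        entity.components.set(components[index])
                        content.add(entity)
                    }
                    .frame(width: 400, height: 300)
                    // Attempted fix: clipping the hosting view. The rounded corners
                    // are reportedly lost on the left/right photos.
                    .clipShape(RoundedRectangle(cornerRadius: 24))
                }
            }
        }
    }
}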
Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
The AR-based app I am working on is experiencing an issue. Sometimes, the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error:
Error Domain=com.apple.arkit.error
Code=102 "Required sensor failed."
NSLocalizedFailureReason="A sensor failed to deliver the required input."
NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."
The underlying error seems to point to the CoreMotion framework:
Domain=CMErrorDomain
Code=102 "(null)
Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services.
For context, the ARWorldTrackingConfiguration.worldAlignment is set to .gravity
The thing is, it is already ON when I experience this issue.
I also noticed that this issue happens far more often on the iPhone 16e than on any other device.
Has anyone had similar experiences? I am looking for a way to prevent this error from happening (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
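If the error cannot be prevented, a hedged sketch of handling it is below: detect the sensorFailed case in the existing ARSessionObserver callback and re-run the session. Whether re-running the configuration actually recovers from the underlying CoreMotion failure is an assumption, not a confirmed fix.

import ARKit

// Inside the existing ARSessionObserver / ARSessionDelegate implementation:
func session(_ session: ARSession, didFailWithError error: Error) {
    guard let arError = error as? ARError, arError.code == .sensorFailed else { return }

    // Re-run the same configuration; reset tracking because .gravity alignment
    // depends on the motion sensor that just failed.
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravity
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}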
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity App (using Apple visionOS XR Plugin and not PolySpatial package). So far, I've tried using UnitySimpleFileBrowser and UnityStandaloneFileBrowser (both aren't made for the Vision Pro and don't work there), and then implemented my own naive file browser that at least allows me to view directories (that I can see from the App Sandbox). This is of course very limited:
Gray folders can't be accessed, and the only three available ones don't contain anything where a user would put files through the Files app.
I know that an app can request access to these "Files & Folders" permissions.
So my question is: Is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
Hello,
There are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space which is displaying an ImagePresentationComponent. Both artifacts look like widgets.
See below our simple code displaying the ImagePresentationComponent and the images of the odd artifacts that appear briefly when dismissing the immersive space.
import OSLog
import RealityKit
import SwiftUI

struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")

    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()

                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive

                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive

                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive

                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }

                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller using RealityKit.
However, none of the three options for Accessory.LocationName, which should be used to define the AnchorEntity target, seem to match the position on the controller which is being sent from ARKit.
The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform from AccessoryTrackingProvider.
They are not aligned at the same point.
As for the other Accessory.LocationName options, .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation.
I am wondering why there is not an option for Accessory.LocationName that actually matches the transform captured by ARKit?
Hello Community,
I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version-2. In iOS 16, when using RoomPlan version-1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version-2, the stairs are no longer visible.
Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part.
Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue.
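One diagnostic worth trying (a sketch, not a confirmed fix): after a capture finishes, list which object categories RoomPlan actually returned, to see whether stairs are missing from the captured data itself or only from the rendering.

import RoomPlan

func logDetectedCategories(in capturedRoom: CapturedRoom) {
    let categories = capturedRoom.objects.map { $0.category }
    print("Detected object categories:", categories)

    let stairs = capturedRoom.objects.filter { $0.category == .stairs }
    print("Stairs detected:", stairs.count)
}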
Thank you in advance for any assistance you can provide.
Best regards
Platform: visionOS 26
Frameworks: RealityKit, SwiftUI
Component: ImagePresentationComponent
I’m working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to .spatialStereoImmersive viewing mode within a WindowGroup context.
This is what I’m seeing:
Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace
Mode switching API: All mode transitions work correctly (logs confirm the component updates)
Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts.
This is where it’s breaking for me:
Window context: When the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive mode shows no visual change
The API calls succeed, state updates correctly, but the immersive content doesn’t render.
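For reference, this is a minimal sketch of the windowed setup being described, hosted in a plain WindowGroup; it assumes the ImagePresentationComponent has already been created elsewhere (for example in an app model, as in the earlier post) and only shows where the viewing-mode switch happens.

import SwiftUI
import RealityKit

struct SpatialPhotoWindow: View {
    @Environment(AppModel.self) private var appModel   // hypothetical app model holding the component
    @State private var immersive = false

    var body: some View {
        RealityView { content in
            guard var component = appModel.currentMedia?.imagePresentationComponent else { return }
            component.desiredViewingMode = .spatialStereo
            let entity = Entity()
            entity.components.set(component)
            content.add(entity)
        } update: { content in
            // Switching the desired viewing mode on the already-placed entity.
            guard let entity = content.entities.first(where: { $0.components.has(ImagePresentationComponent.self) }),
                  var component = entity.components[ImagePresentationComponent.self] else { return }
            component.desiredViewingMode = immersive ? .spatialStereoImmersive : .spatialStereo
            entity.components.set(component)
        }

        Toggle("Immersive", isOn: $immersive)
            .toggleStyle(.button)
    }
}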
Apple’s Spatial Gallery demonstrates exactly what I’m trying to achieve:
Spatial photos displayed in a window, with what feels like a horizontal scroll view, the system window control bar, etc.
Tapping a spatial photo smoothly transitions it to immersive mode in-place.
The immersive content appears to “grow” from the original window position by just changing IPC viewing modes.
This proves the functionality should be possible, but I can’t determine the correct configuration.
So, my questions are:
Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts that you know of?
Are there bounds/clipping settings that need to be configured to allow immersive content to “break out” of window constraints?
Does .spatialStereoImmersive require a specific rendering context that’s not available in windowed RealityView instances?
How do you think Apple's Spatial Gallery app achieves this functionality?
For a little more context:
All viewing modes are available: [.mono, .spatialStereo, .spatialStereoImmersive]
The spatial photos are valid and work correctly in pure immersive space
Mixed immersive space is active when testing window context
No errors or warnings in console beyond the successful mode switching logs I’m getting
Any insights into the proper configuration for window-hosted immersive content would be appreciated.
I would like to present content from a three.js-based web app as a 3D model in a volumetric window. Is it possible to do this in a manner similar to loading a web page in a WKWebView?
Hello,
Thank you for your time. I have a question regarding visionOS app development.
When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area.
However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments.
We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance.
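For context, a minimal sketch of the two placements being compared, using the RealityView attachments API; identifiers and positions are illustrative only.

import SwiftUI
import RealityKit

struct KeyboardPlacementDemo: View {
    @State private var text = ""

    var body: some View {
        VStack {
            // Regular SwiftUI layer: keyboard appears in front of the user as expected.
            TextField("Outside RealityView", text: $text)

            RealityView { content, attachments in
                if let attachment = attachments.entity(for: "field") {
                    attachment.position = [0, 1.2, -1]
                    content.add(attachment)
                }
            } attachments: {
                // Via RealityView.attachments: keyboard reportedly appears near the
                // user's lower abdomen instead of in front of them.
                Attachment(id: "field") {
                    TextField("Inside attachments", text: $text)
                        .frame(width: 300)
                }
            }
        }
    }
}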
Best regards,
Sadao Tokuyama
I am developing a visionOS app. I load a .usdz file as a RealityKit Entity (such as a cabbage), and I want the following effect:
When I turn on a desk lamp in real world near the Entity, the surface of the Entity will correctly respond to the light in the real world.
I want an effect like this:
https://www.reddit.com/r/virtualreality/comments/1as01mm/shiny_disco_ball_reflecting_my_room/
I looked up APIs such as ImageBasedLightComponent and VirtualEnvironmentProbeComponent in RealityKit, and EnvironmentLightEstimationProvider in ARKit, but I do not know how to write the code.
In addition, it would be even better if the shadow also responded correctly to the real-world light.
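As a starting point, here is a static image-based-lighting sketch wiring ImageBasedLightComponent and ImageBasedLightReceiverComponent together; "Environment" is an assumed image resource name, and a static IBL will not react to a real desk lamp by itself (that would require real-time estimation, e.g. via EnvironmentLightEstimationProvider, which is beyond this sketch).

import RealityKit

func applyImageBasedLight(to cabbage: Entity) async throws {
    // Assumed environment image in the app bundle.
    let environment = try await EnvironmentResource(named: "Environment")

    // The entity that provides the light...
    let lightSource = Entity()
    lightSource.components.set(ImageBasedLightComponent(source: .single(environment)))
    cabbage.addChild(lightSource)

    // ...and the entity that receives it.
    cabbage.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightSource))

    // Grounding shadow under the entity (not driven by the real lamp, but often wanted).
    cabbage.components.set(GroundingShadowComponent(castsShadow: true))
}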
I'm trying to implement a prototype that renders virtual objects in a mixed immersive space onto the camera frames captured by CameraFrameProvider.
Here are what I have done:
Get the camera's intrinsics from frame.primarySample.parameters.intrinsics
Get the camera's extrinsics from frame.primarySample.parameters.extrinsics
Get the device anchor by worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
Setup a RealityKit.RealityRenderer to render virtual objects on the captured camera frames
let realityRenderer = try RealityKit.RealityRenderer()
realityRenderer.cameraSettings.colorBackground = .outputTexture()
let cameraEntity = PerspectiveCamera()
// see https://developer.apple.com/forums/thread/770235
let cameraTransform = deviceAnchor.originFromAnchorTransform * extrinsics.inverse
cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)
cameraEntity.camera.near = 0.01
cameraEntity.camera.far = 100
cameraEntity.camera.fieldOfViewOrientation = .horizontal
// manually calculated based on camera intrinsics
cameraEntity.camera.fieldOfViewInDegrees = 105
realityRenderer.entities.append(cameraEntity)
realityRenderer.activeCamera = cameraEntity
Virtual objects, which should be seen in the camera frames, are clipped out by the camera transform.
If I use deviceAnchor.originFromAnchorTransform as the camera transform, virtual objects can be rendered on camera frames at wrong positions (I think it is because the camera extrinsics isn't used to adjust the camera to the correct position).
My question is how to use the camera extrinsic matrix for this purpose?
Do the camera extrinsics point in a similar orientation to the device anchor, with some minor rotation and position change? Here is an extrinsics matrix from a camera frame. It seems that the directions of the Y axis and Z axis are flipped by the extrinsics, so the camera points in the wrong direction.
simd_float4x4([[0.9914258, 0.012555369, -0.13006608, 0.0], // X-axis
[-0.0009778949, -0.9946325, -0.10346654, 0.0], // Y-axis
[-0.13066702, 0.10270659, -0.98609203, 0.0], // Z-axis
[0.024519, -0.019568002, -0.058280986, 1.0]]) // translation
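If this is the usual computer-vision versus RealityKit camera-convention mismatch (+Z forward / +Y down versus -Z forward / +Y up), one thing worth trying, purely as an assumption, is to append a 180-degree rotation about the camera's X axis when building the camera transform:

import simd

// Sketch under the assumption that the extrinsics use a computer-vision camera
// convention; flipYZ flips the Y and Z axes of the camera frame.
func cameraWorldTransform(originFromDevice: simd_float4x4,
                          extrinsics: simd_float4x4) -> simd_float4x4 {
    let flipYZ = simd_float4x4(simd_quatf(angle: .pi, axis: SIMD3<Float>(1, 0, 0)))
    return originFromDevice * extrinsics.inverse * flipYZ
}

// Usage, continuing the snippet above:
// cameraEntity.setTransformMatrix(
//     cameraWorldTransform(originFromDevice: deviceAnchor.originFromAnchorTransform,
//                          extrinsics: extrinsics),
//     relativeTo: nil)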
Hi,
I am creating an ECS (entity component system). With this ECS, I will need to register several DragGestures.
Question: Is it possible to define DragGestures in ECS? If yes, how do we do that? If not, what is the best way to do that?
Question: Is there a "gesture" method that takes an array of gestures as a parameter?
I am interested in any information that can help me, if possible with an example of code.
Regards
Tof
I have a MeshResource and I would like to create a collision component from it.
let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes on the following line instead of throwing a Swift error
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}
Based on this documentation: https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9
"Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar."
But the method crashes the app instead of throwing a Swift error.
Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key: FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model: Mac15,11
...
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd_sim [2057]
Coalition: com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]
Date/Time: 2025-01-26 16:13:17.5053 +0800
Launch Time: 2025-01-26 16:13:09.5755 +0800
OS Version: macOS 15.2 (24C101)
Release Type: User
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]
Triggered by Thread: 0
Thread 0 Crashed:
0 CoreRE 0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1 CoreRE 0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2 RealityFoundation 0x1d25613bc static ShapeResource.generateConvex(from:) + 148
Here is the message on the app console from Xcode
/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356) Bad parameters passed for convex mesh creation.
The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)).
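Until the crash itself is addressed, a defensive workaround sketch that slots into the snippet above is to skip the convex-hull attempt when the mesh is (almost) flat, using the visual bounds already computed; the 1 mm threshold is an arbitrary assumption.

let childBounds = child.visualBounds(relativeTo: self)
let minExtent = min(childBounds.extents.x, childBounds.extents.y, childBounds.extents.z)

var childShape: ShapeResource
if minExtent > 0.001 {
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} else {
    // Nearly coplanar geometry: fall back to a box to avoid the PhysX cook failure.
    childShape = ShapeResource.generateBox(size: childBounds.extents)
        .offsetBy(translation: childBounds.center)
}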
We're developing a visionOS application in which we would like to do product recognition (for things like food items).
We have enterprise entitlements and therefore also main camera access for visionOS. We send these live camera frames to a trained Core ML model, from which we receive 2D coordinates as the detection prediction.
Now, we would like to create a 3D anchor on the detected items so it can be visible for user. The 3D anchor is going to be the class name of the detected item.
How do we transform this 2D coordinate from the model prediction to a 3D anchor?
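A geometry-only sketch of the usual approach (no specific APIs assumed): back-project the 2D pixel into a world-space ray using the camera intrinsics and the camera-to-world transform, then place the anchor where that ray intersects scene geometry (for example a detected plane or the scene-reconstruction mesh).

import simd

struct PixelRay {
    var origin: SIMD3<Float>     // world-space camera position
    var direction: SIMD3<Float>  // world-space unit direction
}

// Assumes a pinhole model with the typical computer-vision convention
// (+Z forward, +Y down); converting into RealityKit's camera convention may
// still need a Y/Z flip.
func rayForPixel(_ pixel: SIMD2<Float>,
                 intrinsics K: simd_float3x3,
                 cameraToWorld: simd_float4x4) -> PixelRay {
    let fx = K[0][0], fy = K[1][1]
    let cx = K[2][0], cy = K[2][1]
    let camDir = simd_normalize(SIMD3<Float>((pixel.x - cx) / fx,
                                             (pixel.y - cy) / fy,
                                             1))

    let origin = SIMD3<Float>(cameraToWorld.columns.3.x,
                              cameraToWorld.columns.3.y,
                              cameraToWorld.columns.3.z)
    let worldDir4 = cameraToWorld * SIMD4<Float>(camDir.x, camDir.y, camDir.z, 0)
    let direction = simd_normalize(SIMD3<Float>(worldDir4.x, worldDir4.y, worldDir4.z))
    return PixelRay(origin: origin, direction: direction)
}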
I am using RealityKit's ObjectCaptureSession API to capture objects, presenting the process with ObjectCaptureView. During the object capture session, there is default background audio that plays automatically.
I noticed this same audio behavior in Apple's official Composer app, which seems to use the same API. I'd like to disable this audio in my app, but I have not been able to find any API or configuration option to do so.
However, the audio persists, and I cannot find a way to turn it off. Is there an official method or workaround to disable this default audio in the ObjectCaptureSession API?
Any guidance would be appreciated. Thank you!
I’m working on an iOS app that needs to measure the area of planes or surfaces, like the length and width of objects, just like the Apple Measure app does. I’ve been exploring ARKit, but I’m curious if there are any APIs or techniques that can help automate the process of detecting and measuring planes.
Specifically, I’m looking for a way to automatically detect and measure planes (e.g., from a top-down view). For example: Measuring a box width and length. I have attached a screenshot and a video of the Apple Measure App doing it.
Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I’d love to hear from anyone who’s tackled something similar.
Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
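Not an automated area measurement by itself, but ARKit's built-in plane detection exposes estimated plane sizes, which is roughly the kind of information the Measure app builds on; a minimal sketch:

import ARKit

final class PlaneMeasurer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // planeExtent gives the current estimated width and height in meters.
            print("Plane \(plane.identifier): \(plane.planeExtent.width) x \(plane.planeExtent.height) m")
        }
    }
}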
Hi, I'm trying to achieve a 3D photo effect using a photo and a depth map via a GeometryModifier. The offset is applied correctly in Reality Composer Pro and in Xcode, but when I launch the app, the model offset is not applied. Here are the screenshots of the shader graphs and how it looks in the app.
Topic: Spatial Computing > Reality Composer Pro. Tags: RealityKit, Reality Composer Pro, Shader Graph Editor
I still don't understand why no one has clarified this Apple video: https://developer.apple.com/videos/play/wwdc2023/10111
At the end of this video, there's an incomplete tutorial about connecting a USDZ with a mesh and skeleton structure to the hand-tracking system. No example project is linked, and no one has given the community any clarification. Could you please help us understand how to proceed?
Hey there,
since SceneView has been marked as deprecated for SwiftUI, I'm wondering which alternatives should be considered for the following situation:
I have a SwiftUI app (for iOS and iPadOS) where users can view (with rotate, scale, move gestures) 3D models (USDZ) in a scene. The models will be downloaded from web backend and called via local URL paths.
What I tested:
I've tried ARView in .nonAR mode and RealityView; however, I didn't get the expected behavior (the user being able to rotate and scale the 3D models in a virtual space).
ARView in .nonAR mode still shows the object as it does in normal AR mode, just without the camera stream.
I tried adding gestures to the RealityView on iOS; loading USDZ 3D models worked, but the gestures didn't.
Model3D is only available for visionOS (it would be amazing to have it for iOS).
I also checked QuickLook Preview; however, it works rather awkwardly via a file picker, etc., which is not how users should load the 3D models in my app.
Maybe I missed something, but I couldn't find anything that can help me. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps porting my app to visionOS.
Long story short 😃:
Does someone have an idea what is the alternative to SceneView for USDZ 3D models?
I appreciate your support!!
Thanks in advance!
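For the SceneView question above, a minimal sketch of the RealityView route, assuming iOS 18 / macOS 15 (where RealityView, InputTargetComponent and entity-targeted gestures became available outside visionOS); modelURL stands for the locally cached USDZ path mentioned in the post, and the gesture handling is deliberately simplified.

import SwiftUI
import RealityKit

struct ModelViewer: View {
    let modelURL: URL

    var body: some View {
        RealityView { content in
            // Non-AR rendering with a virtual camera (no camera passthrough).
            content.camera = .virtual
            guard let model = try? await Entity(contentsOf: modelURL) else { return }
            // Gestures only hit entities that have collision shapes and an input target.
            model.generateCollisionShapes(recursive: true)
            model.components.set(InputTargetComponent())
            content.add(model)
        }
        // Built-in orbit controls cover rotating/zooming around the model.
        .realityViewCameraControls(.orbit)
        // Entity-targeted gesture as a starting point for custom move/scale handling.
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    value.entity.scale *= 1.1
                }
        )
    }
}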
Hello, since updating to beta 3, the sculpting sample app doesn't work; it crashes on launch.
It seems to be something in AnchorEntity or AccessoryAnchoringSource:
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation