Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post · Replies · Boosts · Views · Activity

RotateGesture3D auto constrained to axis
Hi, on visionOS we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter we can even restrict rotation to the x, y, or z axis, or to combinations of them. What I want to know is whether it is possible to apply the axis constraint automatically. Let me explain: the functionality I would like to implement is to constrain the rotation to a single axis only once the user has started their gesture, with the initial motion of the gesture telling us which axis they want to rotate around. This would be equivalent to automatically activating one of the following constraints, as if the gesture had been defined on that axis from the start:

RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)

Is it possible to do this? If so, what would be the best way to do it? A code example would be greatly appreciated. Regards, Tof
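One possible direction, sketched below: run the gesture unconstrained, inspect the rotation axis reported once the gesture has moved a few degrees, lock onto the dominant component, and project every subsequent update onto that axis. This is a minimal sketch, not a confirmed API answer; `targetEntity`, the 5-degree threshold, and the axis-snapping logic are illustrative assumptions, and it assumes the entity has the usual InputTargetComponent and CollisionComponent so gestures can target it.

```swift
import SwiftUI
import RealityKit
import Spatial

struct AutoAxisRotateView: View {
    @State private var lockedAxis: RotationAxis3D?
    @State private var baseOrientation = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1)
    let targetEntity: Entity  // hypothetical entity being rotated

    var body: some View {
        RealityView { content in
            content.add(targetEntity)
        }
        .gesture(
            RotateGesture3D()  // unconstrained; we apply the constraint manually
                .targetedToAnyEntity()
                .onChanged { value in
                    let rotation = value.gestureValue.rotation
                    // Lock the axis once the gesture is unambiguous (~5 degrees).
                    if lockedAxis == nil, abs(rotation.angle.degrees) > 5 {
                        let a = rotation.axis
                        if abs(a.x) >= abs(a.y), abs(a.x) >= abs(a.z) {
                            lockedAxis = .x
                        } else if abs(a.y) >= abs(a.z) {
                            lockedAxis = .y
                        } else {
                            lockedAxis = .z
                        }
                    }
                    guard let axis = lockedAxis else { return }
                    // Keep only the angle component around the locked axis.
                    let projected = rotation.angle.radians *
                        (rotation.axis.x * axis.x + rotation.axis.y * axis.y + rotation.axis.z * axis.z)
                    let constrained = simd_quatf(
                        angle: Float(projected),
                        axis: SIMD3(Float(axis.x), Float(axis.y), Float(axis.z)))
                    value.entity.orientation = constrained * baseOrientation
                }
                .onEnded { value in
                    baseOrientation = value.entity.orientation
                    lockedAxis = nil  // re-detect the axis on the next gesture
                }
        )
    }
}
```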
Replies: 3 · Boosts: 0 · Views: 434 · Activity: Feb ’25
openImmersiveSpace works as expected but never returns Result
As in the title: openImmersiveSpace works as expected. The ImmersiveSpace with the given ID opens normally, but the function never resolves to a Result. Here is a workaround I used to make sure it ran on the main actor with an active scenePhase:

@MainActor
func openSpaceWithStateCheck() async {
    if scenePhase == .active {
        Task {
            switch await openImmersiveSpace(id: "RoomCaptureInteraction") {
            case .opened:
                isCapturingImagery = true
            case .error:
                print("!! An error occurred when trying to open the immersive space captureRoomImagery")
            case .userCancelled:
                print("!! The user declined opening immersive space captureRoomImagery")
            @unknown default:
                print("!! unknown default result of opening space")
            }
        }
    } else {
        print("Scene not active, deferring immersive space opening")
    }
}

I'm on visionOS 2.4 and SDK 2.2. I have tried uninstalling the app and rebuilding, and I have tried simply opening an empty ImmersiveSpace. At least the consistency of the ImmersiveSpace opening means I can work around it, and even dismissImmersiveSpace works normally and closes the immersive space. But a workaround seems ham-fisted.
Replies: 1 · Boosts: 0 · Views: 84 · Activity: Mar ’25
Background Assets in visionOS
Hi, I'm working on a visionOS app and would like to integrate Background Assets to download large files after the app is installed. I'm wondering what happens if the user takes off the headset while a background asset is downloading: does the download continue, or is it stopped or paused? I'm looking for a way to download large assets while the user is not wearing the Vision Pro; is there any other alternative? Thanks in advance.
Replies: 1 · Boosts: 0 · Views: 138 · Activity: Jun ’25
Creating spatial video with one camera
Hello everyone, I would like to create my own spatial video on my Apple Vision Pro. According to all the documentation from Apple, this requires two camera angles that create the spatial perception. I have purchased the Enterprise license with main-camera access for this purpose. However, this only gives me access to the left main camera of the headset. Is there a way to access the right camera as well? Or is the single camera image enough to create a spatial video, for example by splitting the image? I am open to any help and ideas. My goal is to create the video with the cameras on the headset, not externally.
Replies: 1 · Boosts: 0 · Views: 296 · Activity: Jun ’25
RealityKit entity.write(to:) generates fatal protection error
My app for framing and arranging pictures from Photos on visionOS allows users to write the arrangements they create to .reality files using RealityKit's entity.write(to:), which they then display to customers on their websites. This works perfectly on visionOS 2, but fails with a fatal protection error on visionOS 26 beta 1 and beta 2 when write(to:) attempts to write to its internal cache:

2025-06-29 14:03:04.688 Failed to write reality file Error Domain=RERealityFileWriterErrorDomain Code=10 "Could not create parent folders for file path /var/mobile/Containers/Data/Application/81E1DDC4-331F-425D-919B-3AB87390479A/Library/Caches/com.GeorgePurvis.Photography.FrameItVision/RealityFileBundleZippingTmp_A049685F-C9B2-479B-890D-CF43D13B60E9/41453BC9-26CB-46C5-ADBE-C0A50253EC27." UserInfo={NSLocalizedDescription=Could not create parent folders for file path /var/mobile/Containers/Data/Application/81E1DDC4-331F-425D-919B-3AB87390479A/Library/Caches/com.GeorgePurvis.Photography.FrameItVision/RealityFileBundleZippingTmp_A049685F-C9B2-479B-890D-CF43D13B60E9/41453BC9-26CB-46C5-ADBE-C0A50253EC27.}

Has anyone else encountered this problem? Do you have a workaround? Have you filed a feedback? ChatGPT's analysis of the error and my code reports:

Why there is no workaround:
• entity.write(to:) is a black box — you cannot override where it builds its staging bundle
• it always tries to create those random folders itself
• you cannot supply a parent or working directory to RealityFileWriter
• so if the system fails to create that folder, you cannot patch it
👉 This is why you see a fatal error with no recovery.

See also feedbacks: FB18494954, FB18036627, FB18063766
Replies: 10 · Boosts: 0 · Views: 501 · Activity: Jul ’25
How to request several models simultaneously
I am using HelloPhotogrammetry in Xcode. I can make one model with something like:

HelloPhotogrammetry.main([path_to_folder_of_images, path_to_output/model.usdz, "-d", "medium", "-o", "unordered", "-f", "high"])

But how would I request several models simultaneously? I only want to vary the detail:

[
    ("/Users/you/Desktop/model_medium.usdz", detail: .medium),
    ("/Users/you/Desktop/model_full.usdz", detail: .full),
    ("/Users/you/Desktop/model_raw.usdz", detail: .raw)
]
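For what it's worth, dropping below the HelloPhotogrammetry wrapper to RealityKit's PhotogrammetrySession lets a single session process several requests that differ only in detail level. A hedged sketch (the input and output paths are placeholders):

```swift
import RealityKit

func makeModels() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
    let session = try PhotogrammetrySession(input: inputFolder)

    // Listen for results before kicking off processing so no messages are missed.
    let watcher = Task {
        for try await output in session.outputs {
            switch output {
            case .requestComplete(let request, _):
                print("Finished \(request)")
            case .processingComplete:
                return  // all requests done
            default:
                break
            }
        }
    }

    // One session can run several model-file requests; each varies only in detail.
    try session.process(requests: [
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_medium.usdz"), detail: .medium),
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_full.usdz"), detail: .full),
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_raw.usdz"), detail: .raw),
    ])

    try await watcher.value
}
```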
Replies: 2 · Boosts: 0 · Views: 101 · Activity: Apr ’25
Imitating a grip on an object
I'm playing around with the hand-tracking systems in RealityKit / Vision Pro. I thought it would be interesting if I could attach a virtual object to a hand when the hand is gripping (I thought it would be fun to attach a basic cylinder to mimic a wand from Harry Potter). I'm able to detect when the user is gripping, but I'm having trouble placing an object as though it were within the hand. The simplest version of this uses an AnchorEntity pointing to the user's palm, which kind of works, but the illusion quickly breaks when you rotate the wrist or hand. It seems I will have to roll my own anchor entity using the various points of the user's hand, and I thought calculating a median point between the thumb and little-finger tips would be a good start, but it has proven a little difficult because we need both rotation and position. I'm already out of my depth with RealityKit and matrices, and (thanks to ChatGPT) I have some code, but as soon as I apply the position manually (as opposed to using a hand anchor entity) it fails to render on the user's hand. It feels like someone must already have looked into this; any ideas on what might be the issue here?

Note: HandTrackingSystem.handTracking is a HandTrackingProvider()

guard let anchors = HandTrackingSystem.handTracking.latestAnchors.leftHand else { return }

if let thumb = anchors.handSkeleton?.joint(.thumbTip),
   let little = anchors.handSkeleton?.joint(.littleFingerTip) {
    // Joint positions, taken from the translation column of each joint transform
    let thumbPos = simd_make_float3(thumb.anchorFromJointTransform.columns.3)
    let littlePos = simd_make_float3(little.anchorFromJointTransform.columns.3)

    // Midpoint between the fingertips, and the direction between them
    let midPos = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    let rotation = simd_quatf(from: [0, 1, 0], to: direction)

    wandEntity.transform.translation = midPos
    wandEntity.transform.rotation = rotation
    content.add(wandEntity)
}
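One plausible cause, sketched below under assumptions: anchorFromJointTransform is expressed in the hand anchor's coordinate space, not world space, so applying it directly as a world transform places the wand far from the hand. Converting each joint into world space through the anchor's originFromAnchorTransform before computing the midpoint may help; the helper name and entity follow the post's code.

```swift
import RealityKit
import ARKit

// Hypothetical helper: position a wand between thumb tip and little-finger tip.
func updateWand(_ wandEntity: Entity, from anchor: HandAnchor) {
    guard let skeleton = anchor.handSkeleton else { return }
    let thumb = skeleton.joint(.thumbTip)
    let little = skeleton.joint(.littleFingerTip)

    // Chain the 4x4 transforms: world <- hand anchor <- joint.
    let worldFromThumb = anchor.originFromAnchorTransform * thumb.anchorFromJointTransform
    let worldFromLittle = anchor.originFromAnchorTransform * little.anchorFromJointTransform

    let thumbPos = simd_make_float3(worldFromThumb.columns.3)
    let littlePos = simd_make_float3(worldFromLittle.columns.3)

    // Midpoint position, aligned with the thumb-to-little direction.
    let midPos = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    wandEntity.setPosition(midPos, relativeTo: nil)  // nil = world space
    wandEntity.setOrientation(simd_quatf(from: [0, 1, 0], to: direction), relativeTo: nil)
}
```

A second, smaller point: the original snippet calls content.add(wandEntity) on every update; adding the entity to the scene once and only updating its transform afterwards avoids re-adding churn.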
Replies: 1 · Boosts: 0 · Views: 567 · Activity: Jul ’25
Converting a Stop Motion Animation to usdz
Hello everyone, I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format. In Unreal Engine, I've already figured out how to turn the sequence of individual meshes into a smooth animation using the node system and arrays. Unfortunately, that node logic cannot be exported as USDZ animation logic from either Unreal or Blender. Because of this, I have tried several other methods to incorporate the animation logic. Here's what I've tried so far:

1. I attempted to create the animation in Blender with render/viewports and mapping it to keyframes. However, in my experience, viewports are not supported in the conversion.

2. I tried aligning the vertices of individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. However, because the individual meshes are too different, this produces artifacts, and manually editing each mesh is too difficult for me to handle.

3. I placed all the individual meshes at the same position and animated them sequentially by scaling them from 0 to 100 with keyframes (frame 1 is visible for 10 frames, then scales down at frame 11, while frame 2 becomes visible at frame 11, and so on). I also set the keyframe interpolation to "constant" rather than the default Bezier or linear interpolation. I then converted this animation to .abc, and the result initially looked good. However, some information is lost when converting it with OpenUSD: the animation does not keep its intended jump-like behavior in USDZ format, and the scaling of the individual meshes becomes visible in the animation instead.

4. I tried using a Blender add-on (StepMotion), which allows the animation to be exported as .abc, but the result can only be read in Blender or Unreal. Even in the preview the animation is not displayed correctly, so converting the animation logic does not work this way either.

Unfortunately, I have no alternative way to create the animation, as the individual frames were provided to me as meshes. So far I haven't found a way to make this work, and I would be very grateful for any tips or ideas, as I am running out of options. Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 192 · Activity: Apr ’25
visionOS beta 3 spatial sculpting Apple sample crash
Hello, since updating to beta 3 the spatial sculpting sample app doesn't work; it crashes on launch. It seems to be something in AnchorEntity or AccessoryAnchoringSource:

Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
Replies: 1 · Boosts: 0 · Views: 338 · Activity: Jul ’25
How to show only Spatial video using UIDocumentPickerViewController
Is there a suitable UTType for restricting UIDocumentPickerViewController to picking only spatial videos? I already know that PHPickerFilter in PHPickerViewController can do this, but not in UIDocumentPickerViewController. Our app needs to support both of these ways of picking spatial videos, so is there anything I can try in UIDocumentPickerViewController to achieve such picker functionality?
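As far as I know there is no spatial-video-specific UTType, so one hedged workaround is to let the document picker accept ordinary movies and then filter the picked files by checking for a stereo multiview (MV-HEVC) video track, which is how spatial videos are encoded. A sketch under that assumption:

```swift
import UIKit
import AVFoundation
import UniformTypeIdentifiers

// The picker itself can only be narrowed to movies in general.
let picker = UIDocumentPickerViewController(forOpeningContentTypes: [UTType.movie])

/// Returns true if the movie at `url` contains a stereo multiview video track.
func isSpatialVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    for track in try await asset.loadTracks(withMediaType: .video) {
        let characteristics = try await track.load(.mediaCharacteristics)
        if characteristics.contains(.containsStereoMultiviewVideo) {
            return true
        }
    }
    return false
}
```

The filtering step would run in documentPicker(_:didPickDocumentsAt:), rejecting non-spatial movies with a message to the user, since the picker cannot exclude them up front.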
Replies: 1 · Boosts: 0 · Views: 513 · Activity: Feb ’25
Unity on visionOS development - best practices for structuring a project
Hello, I am experimenting with Unity to develop a mixed reality (MR) application for visionOS. I would like to understand the best approach for structuring my project: Should I build the entire experience in Unity (both Windows and Volumes)? Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode? Also, how well do interactions (e.g., pinch, grab…) created in Unity integrate with Xcode? If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle or integrate part of them in Xcode? What's worked best for you? Please let me know if you have any recommendations. Thanks!
Replies: 3 · Boosts: 0 · Views: 166 · Activity: Apr ’25
Do you retain a reference to your content events in RealityView?
Do you retain a reference to your content (RealityViewContent) event subscriptions? For example, the Manipulation Events docs from Apple use _ to discard the result. In theory the event should keep working while the content is alive.

_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}

_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}

We could instead store these subscriptions in state. I've seen this in a few samples and apps.

@State var beginSubscription: EventSubscription?
...
beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}

The main advantage I see is that we can be more explicit about when we remove the subscription. Are there other reasons to keep a reference to these events?
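For reference, a stored subscription can be ended explicitly, which is the main thing the discarded form cannot do. A minimal sketch, continuing the post's @State property:

```swift
// Cancel the subscription when the effect is no longer wanted, e.g. in
// onDisappear; discarded (_) subscriptions live as long as the content does.
beginSubscription?.cancel()
beginSubscription = nil
```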
Replies: 1 · Boosts: 0 · Views: 584 · Activity: Sep ’25
Unable to Create a Fully Immersive Experience That Hides Other Windows in visionOS App
Description: I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience in which the main interface window and the Earth 3D globe window are hidden. I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.

Implementation Details: My app has three main components:

1. A main content window showing panorama thumbnails
2. A 3D globe window (volumetric) showing locations
3. An immersive space for viewing 360° panoramas

I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but the other windows remain visible.

Relevant Code:

@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()
    @State private var panoImageView: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            Globe3DView()
                .environmentObject(appModel)
                .onAppear {
                    appModel.isGlobeWindowOpen = true
                    appModel.globeWindowOpen = true
                }
                .onDisappear {
                    if !appModel.shouldCloseApp {
                        appModel.handleGlobeWindowClose()
                    }
                }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
        .immersionStyle(selection: $panoImageView, in: .full)
    }
}

Opening the Immersive Space:

func getPanoImageAndOpenImmersiveSpace() async {
    appModel.clearMemoryCache()
    do {
        let canView = appModel.canViewImage(image)
        if canView {
            let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                Task { @MainActor in
                    cardState = .loading(progress: progress)
                }
            }
            await MainActor.run {
                appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
            }
            if !appModel.immersiveSpaceOpened {
                try await openImmersiveSpace(id: "ImmersiveView")
                await MainActor.run {
                    appModel.immersiveSpaceOpened = true
                    cardState = .normal
                }
            } else {
                await MainActor.run {
                    appModel.updateImmersiveView = true
                    cardState = .normal
                }
            }
        } else {
            await MainActor.run {
                appModel.errorMessage = "You do not have permission to view this image."
                cardState = .normal
            }
        }
    } catch {
        // Error handling
    }
}

Immersive View Implementation:

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            content.add(rootEntity)
            Task {
                if let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage) {
                    await loadPanorama(for: rootEntity)
                }
            }
        } update: { content in
            if appModel.updateImmersiveView,
               let selectedImage = appModel.selectedImage,
               appModel.canViewImage(selectedImage),
               let rootEntity = content.entities.first {
                Task {
                    await loadPanorama(for: rootEntity)
                    appModel.updateImmersiveView = false
                }
            }
        }
        .onAppear {
            print("ImmersiveView appeared")
        }
        .onDisappear {
            appModel.resetImmersiveState()
        }
    }

    // loadPanorama implementation...
}

What I've Tried:

• Set immersionStyle to .full as recommended in the documentation
• Confirmed that the immersive space is properly opened and displays panoramas
• Verified that the state management for the immersive space is working correctly

Questions:

1. How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
2. Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
3. Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?

Any guidance or sample code would be greatly appreciated. Thank you!
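On question 3: a fully immersive space does not automatically dismiss the app's other windows, so one hedged approach is to close them explicitly with the dismissWindow environment action just before opening the space (and reopen them with openWindow afterwards). A sketch using the window IDs from the post; the button and view name are illustrative:

```swift
import SwiftUI

struct PanoramaControls: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("View Panorama") {
            Task {
                // Close the globe window first; the main window could be
                // dismissed the same way if its WindowGroup had an explicit ID.
                dismissWindow(id: "Earth")
                switch await openImmersiveSpace(id: "ImmersiveView") {
                case .opened:
                    break
                default:
                    print("Immersive space failed to open")
                }
            }
        }
    }
}
```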
Replies: 3 · Boosts: 0 · Views: 202 · Activity: Apr ’25
Volumetric window anchors
Hi, we would like to create something where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when I close and reopen the app the windows are in the same positions. We can't use immersive spaces because we also want to keep access to the Shared Space. Is it possible to do this with the current features and capabilities? If yes, do you have any advice on how we can achieve it? Alternatively, is it possible to open a virtual display in immersive spaces, or do we have the possibility of implementing our own virtual display?
Replies: 1 · Boosts: 0 · Views: 430 · Activity: Feb ’25
Portal crossing causes inconsistent lighting and visual artifacts between virtual and real spaces (visionOS 2.0)
Hello, I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode. Specifically:

• Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in.

• Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.

• IBL adjustment attempts: I've tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.

My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals. Has anyone else experienced similar issues? Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0? Any guidance would be greatly appreciated. Thank you!
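One direction that may reduce the lighting jump, sketched below under assumptions: attach the image-based light to the crossing entity itself (rather than only to the world entity) so the same IBL follows the model across the portal boundary. The environment resource name and the helper are placeholders, not a confirmed fix:

```swift
import RealityKit

// Hedged sketch: give the crossing model its own IBL so its lighting
// stays constant on both sides of the portal.
func applyConsistentLighting(to model: ModelEntity) async throws {
    // "PortalEnvironment" is a placeholder environment resource name.
    let environment = try await EnvironmentResource(named: "PortalEnvironment")

    model.components.set(ImageBasedLightComponent(source: .single(environment)))
    // The receiver points at the entity that carries the light component.
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: model))
    // Allow the entity to pass through the portal plane.
    model.components.set(PortalCrossingComponent())
}
```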
Replies: 5 · Boosts: 0 · Views: 285 · Activity: Jul ’25
How to use defaultSize with visionOS window restoration?
One of the most common ways to provide a window size in visionOS is the defaultSize scene modifier:

WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))

Starting in visionOS 26, using this has a side effect. visionOS 26 restores windows that have been locked in place or snapped to surfaces, but if a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.

Manual resize respected:
• Leaving a room and returning later
• Taking the headset off and putting it back on later

Manual resize NOT respected:
• Device restart. In this case, the window is reopened where it was locked, but its size is reset to the values passed to defaultSize. The manual resizing adjustments the user has made are lost.

This is counter to how all other windows and widgets work. I reported this last month (FB18429638), but haven't heard back whether this is a bug or intended behavior.

Questions:
1. What is the best way to provide a default window size that will only be used when opening new windows, and not during scene restoration?
2. Should we try to keep track of window sizes after users adjust them and save them somewhere?
3. If this is intended behavior, can someone please update the docs accordingly?
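On question 2, until there's an official answer, one hedged workaround is to observe the window's size from inside its root view, persist it, and feed the persisted value back into defaultSize so fresh launches and post-restart restoration agree. A minimal sketch; the app name and storage keys are illustrative:

```swift
import SwiftUI

@main
struct SomeApp: App {
    // Persisted last-known size; 600x600 is the first-launch fallback.
    @AppStorage("someID.width") private var width: Double = 600
    @AppStorage("someID.height") private var height: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            SomeView()
                // Record the size whenever the user resizes the window.
                .onGeometryChange(for: CGSize.self) { proxy in
                    proxy.size
                } action: { newSize in
                    width = newSize.width
                    height = newSize.height
                }
        }
        .defaultSize(CGSize(width: width, height: height))
    }
}
```

Note that defaultSize is only consulted when the scene is created, so the persisted value takes effect on the next launch; this papers over the restart case described above, but it is exactly the kind of bookkeeping the system normally handles for other window types.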
Replies: 1 · Boosts: 0 · Views: 453 · Activity: Jul ’25
visionOS 26 PresentationComponent not working
I am trying to get the new PresentationComponent working in visionOS 26, as seen in this WWDC video: https://developer.apple.com/videos/play/wwdc2025/274/?time=962 (18:29 minutes into the video). Here is some other example code, but it doesn't work either: https://stepinto.vision/devlogs/project-graveyard-devlog-002/ My simple Text view (which I am adding via a PresentationComponent) does not appear in my RealityView even though the entity is found. Here is a simple example built from an Xcode immersive-view default project:

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                if let materializedImmersiveContentEntity = try? await Entity(named: "Test", in: realityKitContentBundle) {
                    content.add(materializedImmersiveContentEntity)

                    var presentation = PresentationComponent(
                        configuration: .popover(arrowEdge: .bottom),
                        content: Text("Hello, World!")
                            .foregroundColor(.red)
                    )
                    presentation.isPresented = true
                    materializedImmersiveContentEntity.components.set(presentation)
                }
            }
        }
    }
}

Here is the Apple reference: https://developer.apple.com/documentation/realitykit/presentationcomponent
Replies: 1 · Boosts: 0 · Views: 587 · Activity: Jul ’25