Hi,
We are trying to port our Unity app from other XR devices to Apple Vision Pro, so it's much easier for us to use the Metal rendering layer, fully immersive. And to stay true to the platform, we want to keep the gaze/pinch interaction system.
But we just noticed that, unlike PolySpatial XR apps, visionOS XR in Metal does not provide gaze info unless the user is actively pinching... which prevents any attempt to give visual feedback on what they are looking at (buttons, etc.).
Is this on Apple's roadmap?
Thanks
If I update the Vision Pro window size, the text field and button do not resize to match the new size. It works on some screens but not on others.
Please refer to the screenshot below for reference.
I'm working on an iOS app that needs to measure the area of planes or surfaces (the length and width of objects), just like the Apple Measure app does. I've been exploring ARKit, but I'm curious whether there are any APIs or techniques that can help automate the process of detecting and measuring planes.
Specifically, I'm looking for a way to automatically detect and measure planes (e.g., from a top-down view), for example measuring a box's width and length. I have attached a screenshot and a video of the Apple Measure app doing it.
Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I'd love to hear from anyone who's tackled something similar.
Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
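For what it's worth, here's as far as I've gotten with ARKit's built-in plane detection (a minimal sketch; automatically isolating a box top and measuring it, the way Measure does, is the part I'm asking about):

import ARKit

// Minimal sketch: detect horizontal planes and read their measured size.
// ARPlaneAnchor.planeExtent (iOS 16+) reports width/height in meters.
final class PlaneMeasurer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // Width/length of the detected plane, in meters.
            print("Plane \(plane.identifier): \(plane.planeExtent.width) m x \(plane.planeExtent.height) m")
        }
    }
}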
We use Unity 6 + visionOS 2.2 to develop an MR application.
App mode: RealityKit with PolySpatial.
In actual testing, we found that when I move more than 80~100 meters away from the application's starting position, my current position gets reset to Vector3.zero, which makes the application experience very bad. Is anyone else experiencing this problem? Is there a solution?
Topic: Spatial Computing, SubTopic: General
Hi,
On visionOS, to manage entity rotation we can rely on RotateGesture3D. With the constrainedToAxis parameter we can even allow rotation only on the x, y, or z axis, or combinations of them.
What I want to know is whether it is possible to constrain the rotation axis automatically.
Let me explain: the functionality I would like to implement is to constrain the rotation to one axis only once the user has started their gesture. The initial gesture the user makes should tell us which axis they want to rotate on.
This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on that axis:
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
Is it possible to do this?
If so, what would be the best way to do it?
A code example would be greatly appreciated.
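Here's roughly what I have in mind (an untested sketch; the entity wiring and the 0.05 rad threshold are my own placeholders):

import Spatial
import SwiftUI

// Untested sketch: start unconstrained, infer the intended axis from the first
// meaningful update, then keep only the rotation about that axis.
struct AutoConstrainedRotation: ViewModifier {
    @State private var lockedAxis: RotationAxis3D?

    func body(content: Content) -> some View {
        content.gesture(
            RotateGesture3D()
                .onChanged { value in
                    // Lock the axis once the gesture is unambiguous.
                    if lockedAxis == nil, value.rotation.angle.radians > 0.05 {
                        lockedAxis = dominantAxis(of: value.rotation)
                    }
                    guard let axis = lockedAxis else { return }
                    // Approximation: reuse the full gesture angle about the locked axis.
                    let constrained = Rotation3D(angle: value.rotation.angle, axis: axis)
                    // ... apply `constrained` to the entity's orientation here.
                    _ = constrained
                }
                .onEnded { _ in lockedAxis = nil }
        )
    }

    private func dominantAxis(of rotation: Rotation3D) -> RotationAxis3D {
        let a = rotation.axis
        let (x, y, z) = (abs(a.x), abs(a.y), abs(a.z))
        if x >= y && x >= z { return .x }
        if y >= x && y >= z { return .y }
        return .z
    }
}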
Regards
Tof
PLATFORM AND VERSION
visionOS
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.3 (on a real device, not the simulator)
Please, can someone confirm I'm not crazy and that this issue is actually out of my control?
I spent hours trying to fix my app and running profiles because I thought it was a performance issue in my own app. I finally considered the chance that it was an issue with the API itself and made a sample app to isolate the problem, and the issue still existed there. The issue: when a model entity moves around in a full space that was launched while the system environment immersion was turned up, the entity looks very choppy as it moves around. If you take off the headset while still in the space and put it back on, this fixes it and the entity then moves smoothly as it should. Alternatively, you can leave the space and turn the system environment immersion all the way down before launching the full space again; this also makes the entity move smoothly as it should. If you launch with the mixed immersion style instead of the full immersion style, the issue never arises. The issue only arises if you launch the space with either the full or progressive style while the system immersion level is turned up.
STEPS TO REPRODUCE
https://github.com/nathan-707/ChoppyEntitySample
Open my test project; it's a small, modified visionOS project template that shows the issue clearly.
Otherwise:
Create an immersive space with either the full or progressive immersion style.
Set up an entity in kinematic mode and apply a velocity to it so it passes over your head when the space appears (see the sketch after these steps).
If you opened the space while the Apple Vision Pro's system environment was turned up, the entity will look choppy.
If you take the headset off while in the space and put it back on, the issue is fixed and the motion looks smooth.
Alternatively, if you open the space with the system immersion environment all the way down, you will not run into the issue. Again, the issue also does not happen if the space is launched in the mixed style.
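For reference, a minimal version of the moving entity from the repro (the mesh, start position, and velocity here are illustrative):

import RealityKit

// Minimal sketch of the repro entity: a kinematic sphere given a constant
// velocity so it passes overhead when the immersive space opens.
func makeMovingEntity() -> ModelEntity {
    let entity = ModelEntity(mesh: .generateSphere(radius: 0.1),
                             materials: [SimpleMaterial(color: .red, isMetallic: false)])
    entity.position = [0, 1.5, -3] // start in front of the viewer
    entity.components.set(PhysicsBodyComponent(mode: .kinematic))
    entity.components.set(PhysicsMotionComponent(linearVelocity: [0, 0.2, 1.0]))
    return entity
}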
Hello, I am currently developing a Vision Pro VR application with Unreal Engine 5.5. Is it possible to interact with objects (grabbing, clicking on buttons)? I cannot find any information on this. Thank you.
Topic: Spatial Computing, SubTopic: General
I implemented a ShaderGraphMaterial and tried to load it from my USDA scene with ShaderGraphMaterial.init(name:in:bundle:). I want to dynamically set a TextureResource on that material, so I wanted to expose the texture as a uniform input of the ShaderGraphMaterial. But apparently RCP's Shader Graph doesn't support a texture input as a parameter, as the attached image shows.
At the code level, ShaderGraphMaterial doesn't expose a way to set TextureResources either; its parameterNames is an empty array if I haven't set any custom input parameters. The texture comes from my backend, so it really can't be saved to a file and loaded again (that would be too weird).
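For reference, here's roughly what I'm doing (the material path, scene file, and parameter name are placeholders from my project):

import RealityKit
import RealityKitContent

// Sketch of the attempt: load the Shader Graph material, then try to hand it a
// runtime-generated texture. parameterNames comes back empty, so there is
// nothing to pass to setParameter(name:value:).
func configureMaterial() async throws {
    let material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    print(material.parameterNames) // prints [] (no exposed inputs)
    // What I want, if a texture input could be exposed in RCP:
    // var m = material
    // try m.setParameter(name: "baseTexture", value: .textureResource(myTexture))
}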
Is there something I am missing?
In visionOS, there are existing modifiers that can completely hide the hands. However, I am interested in learning how to achieve the effect of only one hand disappearing while the other remains visible.
.upperLimbVisibility(.hidden)
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.
import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    // Keep the event subscriptions alive until the fade animations complete.
    @State private var fadeCompleteSubscriptions: [EventSubscription] = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }
            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")
                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }
        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))
        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)
        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            let subscription = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity) { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }
            fadeCompleteSubscriptions.append(subscription)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error: \(error)")
        }
    }
}
I have a MeshResource and I would like to create a collision component from it.
let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes on the following line instead of throwing a Swift error
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}
According to the documentation (https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9):
"Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar."
But the method crashes the app instead of throwing a Swift error:
Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key: FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model: Mac15,11
...
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd_sim [2057]
Coalition: com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]
Date/Time: 2025-01-26 16:13:17.5053 +0800
Launch Time: 2025-01-26 16:13:09.5755 +0800
OS Version: macOS 15.2 (24C101)
Release Type: User
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]
Triggered by Thread: 0
Thread 0 Crashed:
0 CoreRE 0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1 CoreRE 0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2 RealityFoundation 0x1d25613bc static ShapeResource.generateConvex(from:) + 148
Here is the message from the Xcode app console:
/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356) Bad parameters passed for convex mesh creation.
The above crash happened on the visionOS simulator (visionOS 2.2 (22N840)).
I can generate a ShapeResource from a RealityKit entity's extents. Could I apply some scaling to the generated shape? Is there a way to do that?
// model is a ModelComponent and bounds is a BoundingBox
var shape = try await ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
The following API only provides rotation and translation support; I cannot find scale support.
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())
I can put the ShapeResource on an entity and scale the entity, but I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
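One workaround I'm considering (an untested sketch; I'm not certain the collection initializers behave this way in every RealityKit version) is to scale the mesh's vertex positions first and generate the convex shape from the scaled copy:

import RealityKit

// Untested sketch: bake a scale into the mesh data, then cook the convex shape.
func scaledConvexShape(from mesh: MeshResource, scale: SIMD3<Float>) async throws -> ShapeResource {
    var contents = mesh.contents
    contents.models = .init(contents.models.map { model in
        var model = model
        model.parts = .init(model.parts.map { part in
            var part = part
            // Scale every vertex position about the origin.
            part.positions = MeshBuffers.Positions(part.positions.elements.map { $0 * scale })
            return part
        })
        return model
    })
    let scaledMesh = try MeshResource.generate(from: contents)
    return try await ShapeResource.generateConvex(from: scaledMesh)
}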
Hi there!
I'm trying to make a 360 image carousel in RealityView/SwiftUI with very large textures. I've managed to load one 12K 360 image and show it on an inverted sphere with a ShaderGraphMaterial made in Reality Composer Pro. When I try to load the next image, I get an out-of-memory error. The carousel works fine with smaller textures.
My question is: how do I release the memory from the current texture before loading the next?
In theory the garbage collector should erase it eventually?
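For context, here's roughly how I swap textures (a sketch; the "ImageTexture" input name is from my own material):

import RealityKit

// Sketch: drop every strong reference to the old TextureResource before
// loading the next one, so the old 12K image can actually be deallocated.
func swapTexture(on sphere: ModelEntity, to url: URL) async throws {
    guard var material = sphere.model?.materials.first as? ShaderGraphMaterial else { return }
    let next = try await TextureResource(contentsOf: url, options: .init(semantic: .color))
    try material.setParameter(name: "ImageTexture", value: .textureResource(next))
    sphere.model?.materials = [material] // last reference to the old texture goes away here
}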
Hope someone can help =)
Thanks in advance!
Best regards,
Kim
I can add a WorldAnchor to a WorldTrackingProvider. The next time I start my app, the WorldAnchor is added back, and then is immediately removed:
dataProviderStateChanged(dataProviders: [WorldTrackingProvider(0x0000000300bc8370, state: running)], newState: running, error: nil)
AnchorUpdate(event: added, timestamp: 43025.248134708, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: true, originFromAnchorTransform: <translation=(0.048458 0.000108 -0.317565) rotation=(0.00° 15.44° -0.00°)>))
AnchorUpdate(event: removed, timestamp: 43025.348131208, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: false, originFromAnchorTransform: <translation=(0.000000 0.000000 0.000000) rotation=(-0.00° 0.00° 0.00°)>))
It always leaves me with zero anchors in .allAnchors... and the ARKitSession is still active at this point.
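For reference, the anchor is created and added roughly like this (a sketch; the transform comes from my app logic):

import ARKit

// Sketch: create a world anchor at a given pose and persist it via the provider.
func addWorldAnchor(to worldTracking: WorldTrackingProvider,
                    originFromAnchorTransform: simd_float4x4) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: originFromAnchorTransform)
    try await worldTracking.addAnchor(anchor)
}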
Topic: Spatial Computing, SubTopic: ARKit
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.
Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
User successfully scans first room with multiple hotspots (working correctly)
User stops scanning, moves to a new room
In the new room, the first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots
Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors
Steps to Reproduce:
Start room scan
Complete multiple hotspot captures in first room
Stop scanning
Start new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes
Attempted Solutions:
Implemented anchor cleanup between sessions
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations
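For reference, the restart between rooms currently looks roughly like this (simplified; the cleanup details are placeholders for our real code):

import ARKit
import RoomPlan
import SceneKit

// Simplified version of our restart path between rooms.
func startNextRoom(captureSession: RoomCaptureSession, arView: ARSCNView) {
    captureSession.stop() // end the previous scan
    // Remove our custom guidance anchors before the next run.
    arView.session.currentFrame?.anchors.forEach { arView.session.remove(anchor: $0) }
    captureSession.run(configuration: RoomCaptureSession.Configuration())
}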
Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight
Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.
Crash Log
Code and full crash logs can be provided if needed.
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene.
The problem: I found that the depth map values captured in ARFrame deviate significantly, even nonlinearly, from the real distances. For distances below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values; for example, I read 1.9 m from the generated depthmap.tiff where the real distance is 3 meters.
Below is a minimal sketch of how I generate the TIFF file to record the depth map data (simplified here; it assumes ARKit's sceneDepth and Core Image's TIFF writer):
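import ARKit
import CoreImage

// Minimal sketch (simplified from my app): dump the ARFrame's scene depth to a
// 32-bit float TIFF so the raw meter values can be inspected offline.
func writeDepthTIFF(from frame: ARFrame, to url: URL) throws {
    guard let depthMap = frame.sceneDepth?.depthMap else { return }
    let ciImage = CIImage(cvPixelBuffer: depthMap)
    let context = CIContext()
    try context.writeTIFFRepresentation(
        of: ciImage,
        to: url,
        format: .Lf, // single-channel 32-bit float
        colorSpace: CGColorSpaceCreateDeviceGray())
}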
Generated TIFF file (captured from ARKit):
As shown above, the maximum distance reads around 1.9 m, but the real distance to that wall is more than 3 meters. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.
Generated TIFF file (captured from AVFoundation):
In comparison, the depth map captured with traditional AVFoundation on the same hardware is much clearer, though the values don't seem to be in meters.
I have a huge sphere with the camera staying inside it, and front-face culling turned on in the ShaderGraphMaterial applied to that sphere, so that I can place other 3D content inside. However, when it comes to attachments, object occlusion never works as I expect: specifically, my attachments are occluded by my sphere (some are not, so the behavior is not deterministic).
I then suspected a depth-testing issue, so I started using ModelSortGroup to reorder the rendering sequence. However, it doesn't work. As I was searching the internet, the comments on this post show that ModelSortGroup simply doesn't work on attachments.
So how should I tackle this issue now, to let my attachments appear inside my sphere?
OS/Sys: visionOS 2.3 / Xcode 16.3
I am currently developing an app for visionOS and have encountered an issue involving a component and system that moves an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer.
Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behavior, such as freezing or becoming unresponsive. The freezing of the entity's movement is particularly noticeable the first time the audio plays. After that it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession.
Also, the issue is more noticeable on a real device than in the simulator.
//
//  IssueApp.swift
//  Issue
//
//  Created by Zhendong Chen on 2/1/25.
//

import SwiftUI

@main
struct IssueApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}
//
//  ContentView.swift
//  Issue
//
//  Created by Zhendong Chen on 2/1/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                if let sphere = scene.findEntity(named: "Sphere") {
                    sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                }
                if let button = attachments.entity(for: "Button") {
                    button.position.y -= 0.3
                    scene.addChild(button)
                }
                content.add(scene)
            }
        } attachments: {
            Attachment(id: "Button") {
                VStack {
                    Button {
                        SoundManager.instance.playSound(filePath: "apple_en")
                    } label: {
                        Text("Play audio")
                    }
                    .animation(.none, value: 0)
                    .fontWeight(.semibold)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .onAppear {
            UpAndDownSystem.registerSystem()
        }
    }
}
//
//  SoundManager.swift
//  LinguaBubble
//
//  Created by Zhendong Chen on 1/14/25.
//

import Foundation
import AVFoundation

class SoundManager {
    static let instance = SoundManager()
    private var audioPlayer: AVAudioPlayer?

    func playSound(filePath: String) {
        guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        } catch let error {
            print("Error playing sound. \(error.localizedDescription)")
        }
    }
}
//
//  UpAndDownComponent+System.swift
//  Issue
//
//  Created by Zhendong Chen on 2/1/25.
//

import RealityKit

struct UpAndDownComponent: Component {
    var speed: Float
    var axis: SIMD3<Float>
    var minY: Float
    var maxY: Float
    var direction: Float = 1.0 // 1 for up, -1 for down
    var initialY: Float?

    init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
        self.speed = speed
        self.axis = axis
        self.minY = minY
        self.maxY = maxY
    }
}

struct UpAndDownSystem: System {
    static let query = EntityQuery(where: .has(UpAndDownComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime) // Time between frames
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }
            // Ensure we have the initial Y value set
            if component.initialY == nil {
                component.initialY = entity.transform.translation.y
            }
            // Calculate the current position
            let currentY = entity.transform.translation.y
            // Move the entity up or down
            let newY = currentY + (component.speed * component.direction * deltaTime)
            // If the entity moves out of the allowed range, reverse the direction
            if newY >= component.initialY! + component.maxY {
                component.direction = -1.0 // Move down
            } else if newY <= component.initialY! + component.minY {
                component.direction = 1.0 // Move up
            }
            // Apply the new position
            entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)
            // Update the component with the new direction
            entity.components[UpAndDownComponent.self] = component
        }
    }
}
Could someone help me with this?
Is it possible to have a skydome that influences lighting in the scene but is otherwise invisible? In a raytracer, that would be something visible to secondary rays but invisible to primary rays.
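In RealityKit terms, I imagine something like image-based lighting with no visible sky geometry (an untested sketch; the resource name is a placeholder):

import RealityKit

// Untested sketch: drive scene lighting from an environment image without
// drawing a visible sky dome.
func applyInvisibleSkyLight(to root: Entity) async throws {
    let environment = try await EnvironmentResource(named: "Skydome") // bundled IBL image, name assumed
    root.components.set(ImageBasedLightComponent(source: .single(environment)))
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
}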
Cheers, thanks.
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Hi there.
Thanks to amazing help from you guys, I've managed to code a 360 image carousel where the user can browse 360 images located inside the project package.
Is there a way to access the filesystem on the AVP outside the app?
I know about FileManager, and I can get access to .documentsDirectory, but how do I access the Documents folder that shows up in the "Files" app on the AVP?
My goal is to read images from a hardcoded folder location on the AVP, so the user never has to select the images themselves.
I know this may not be the "right" way to do this. The app is supposed to be "foolproof" with a minimum of user interaction.
The only way to change the images should be to change the contents of the hardcoded image folder (see the sketch below).
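Here's the kind of thing I have in mind (a sketch; the "Panoramas" folder name is my own choice, and I'm assuming the UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace Info.plist keys to make Documents show up in the Files app):

import Foundation

// Sketch: read every image from a fixed subfolder of the app's Documents
// directory. With file sharing enabled in Info.plist, the folder appears under
// "On My Apple Vision Pro" in the Files app, so its contents can be swapped there.
func loadPanoramaURLs() throws -> [URL] {
    let folder = URL.documentsDirectory.appending(path: "Panoramas")
    try FileManager.default.createDirectory(at: folder, withIntermediateDirectories: true)
    return try FileManager.default
        .contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)
        .filter { ["jpg", "jpeg", "png", "heic"].contains($0.pathExtension.lowercased()) }
}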
I hope this makes sense =)
Thanks in advance!
Regards,
Kim
Topic: Spatial Computing, SubTopic: General