My coworkers and I are guessing at what data defines an anchor. I tried searching but struggled to find anything helpful.
Our best guess was a combination of Triangulated Irregular Networks (TIN), GPS, magnetic compass direction, and maybe elevation sensors.
Is this documented anywhere? If not, can a definition or description be provided?
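For what it's worth, from the app side the only anchor data ARKit on visionOS exposes is an identifier, a transform, and a tracking flag; whatever map data sits behind an anchor is opaque. A minimal sketch (my own illustration, not an answer from Apple) of reading those fields from world anchors:

import ARKit

// Sketch: observe world anchors and print the data an app can actually see.
// The persisted map behind an anchor is not exposed by the API.
func observeWorldAnchors() async throws {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    try await session.run([worldTracking])

    for await update in worldTracking.anchorUpdates {
        let anchor = update.anchor
        // Everything public about a WorldAnchor: a UUID, a 4x4 transform,
        // and whether it is currently tracked.
        print(anchor.id, anchor.originFromAnchorTransform, anchor.isTracked)
    }
}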
General: Discuss Spatial Computing on Apple Platforms.
When I am viewing an immersive space and open a spatial photo in Quick Look, the entire app interface is hidden to show the photo. Is there a memory limit? If the immersive space is not active, the application keeps its interface.
I want to open Control Center in the Vision Pro simulator in Xcode. Can I open it? If so, please tell me how. Thank you.
Hello, esteemed tech developer.
I am using the Apple Vision Pro to create an AR assist system for the da Vinci Surgical Robot in a surgical suite, and I would like to capture eye movement data consistently across testers. Although the Apple Vision Pro has a superb infrared sensor for monitoring eye movement, Apple does not seem to offer official access to it. (I'm aware of many existing discussions about this, but I was still wondering if there might be an option, particularly for research labs.) Here's my feedback number: FB16603687.
Entity.animate() makes entity animation much easier, but in many cases I want to interrupt the animation in progress in response to a gesture. I couldn't find any way to do this; I tried entity.stopAllAnimations(), but I still have to wait until Entity.animate() completes.
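I can't confirm a way to cancel Entity.animate() itself, but one possible workaround (an assumption, not documented behavior of Entity.animate()) is to drive the transform with playAnimation(_:) instead, since that returns an AnimationPlaybackController that can be stopped mid-flight:

import RealityKit

// Sketch: animate an entity's transform with an AnimationResource so the
// playback can be stopped from a gesture handler.
func slide(_ entity: Entity, to target: Transform) throws -> AnimationPlaybackController {
    let definition = FromToByAnimation(
        from: entity.transform,
        to: target,
        duration: 1.0,
        timing: .easeInOut,
        bindTarget: .transform
    )
    let resource = try AnimationResource.generate(with: definition)
    return entity.playAnimation(resource, transitionDuration: 0, startsPaused: false)
}

// Later, e.g. in a DragGesture's onChanged closure:
// controller.stop()   // halts the animation immediately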
iOS 26 / visionOS 26
Using Xcode v26 Beta 6 on macOS v26 Beta 25a5349a
When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room as I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace.
How to resolve: restart the simulator device.
See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
Hi All,
We're a studio building an app, and as part of a scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. I notice that when the video plays alone, or the particle effect runs alone, the scene works fine, but the frame rate drops drastically when both are turned on.
How do I solve this? It is an important storytelling feature.
Hi,
I am creating an ECS (entity component system). With this ECS, I will need to register several DragGestures.
Question: Is it possible to define DragGestures in an ECS? If yes, how do we do that? If not, what is the best way to do it?
Question: Is there a "gesture" method that takes an array of gestures as a parameter?
I am interested in any information that can help me, if possible with a code example (a sketch follows after this post).
Regards
Tof
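As a sketch of one possible pattern (not an official answer): I don't know of a gesture modifier that takes an array of gestures, but a single DragGesture can be targeted at every entity that carries a marker component, which fits an ECS-style design. Assuming a hypothetical DraggableComponent of your own:

import SwiftUI
import RealityKit

// Hypothetical marker component: attach it to any entity that should be draggable.
struct DraggableComponent: Component {}

struct DraggableSceneView: View {
    var body: some View {
        RealityView { content in
            // ... add entities, setting DraggableComponent() on the draggable ones ...
            // Note: entities also need InputTargetComponent and collision shapes
            // to receive input.
        }
        // One gesture serves every entity that has the component.
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(DraggableComponent.self))
                .onChanged { value in
                    let entity = value.entity
                    entity.position = value.convert(value.location3D,
                                                    from: .local,
                                                    to: entity.parent!)
                }
        )
    }
}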
Hi, I am trying to implement something simple: letting people share their spatial photos with others (just like this post). I ran into the same issue as that poster, but his answer doesn't help me here.
Briefly, I am using CGImageSource to extract a paired leftImage and rightImage from a fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ...
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below
I can fetch the left and right images from a native spatial photo (taken by Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to a generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched the net, and someone says the generated version has a depth image instead of a left/right pair. But I still cannot extract any depth image from the imageSource.
The full code is below; the image pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
        imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))
    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard
        let stereoGroup = groups.first(where: {
            let groupType = $0[kCGImagePropertyGroupType] as! CFString
            return groupType == kCGImagePropertyGroupTypeStereoPair
        })
    else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
Any suggestions? Thanks.
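One thing that might be worth checking (an assumption on my part, not something verified against a generated spatial photo) is whether the converted image stores its depth as auxiliary data rather than as an image group, which CGImageSource exposes through the auxiliary-data APIs:

import ImageIO

// Sketch: probe an image source for auxiliary depth/disparity data.
// Returns the raw auxiliary dictionary if present, nil otherwise.
func auxiliaryDepthInfo(from source: CGImageSource) -> [CFString: Any]? {
    let types = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
    for type in types {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type) as? [CFString: Any] {
            // The dictionary holds the depth map bytes plus its description and metadata
            // (kCGImageAuxiliaryDataInfoData, ...DataDescription, ...Metadata).
            return info
        }
    }
    return nil
}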
visionOS 2.4
Hi,
I'm developing an app for the Apple Vision Pro.
Inside the app the user should be able to load objects from the web into the scene and then be able to move them around (dragging and rotating) via gestures.
My question:
I'm working with RealityKit and use RealityView.
I have no issues loading one object and making it interactive by adding gestures to the entire RealityView via the .gesture() modifier.
I have also succeeded in loading multiple objects into the scene.
My problem is that I can't figure out how to add my gestures to multiple objects independently.
I can't use Reality Composer, since I'm loading the objects dynamically into the scene.
Using .gesture() doesn't work for multiple objects, since every gesture needs to be targeted to a specific entity, but I have multiple entities.
I also tried defining a GestureComponent and adding it to every newly loaded entity, but that doesn't seem to work: nothing happens, even though my gesture is targeted to every entity having my GestureComponent.
I only found solutions, like installGestures, that are not usable on visionOS / RealityView.
I also tried following this guide: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
But I feel like there are things missing and contradictory. An extension of RealityView is mentioned but never shown. The guide states that we don't need to store the translation values for every entity, and it creates an EntityGestureState.swift file to store those values, but that file is then never used; instead a GestureStateComponent is used, which was never mentioned and contradicts what was just said about not storing values per entity, because a new instance of it is created for every entity. In short, following the guide didn't lead me to a working solution.
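I can't say why the GestureComponent approach fails without seeing the code, but one pattern that may help is attaching a single gesture to the RealityView and letting it resolve the hit entity at runtime with targetedToAnyEntity(), so the same gesture serves every dynamically loaded object. A minimal sketch, assuming each loaded entity gets collision shapes and an InputTargetComponent:

import SwiftUI
import RealityKit

struct DynamicObjectsView: View {
    var body: some View {
        RealityView { content in
            // ... load entities from the web and add them to `content` ...
            // Each loaded entity needs, roughly:
            //   entity.generateCollisionShapes(recursive: true)
            //   entity.components.set(InputTargetComponent())
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity() // resolves whichever entity was hit
                .onChanged { value in
                    let entity = value.entity
                    entity.position = value.convert(value.location3D,
                                                    from: .local,
                                                    to: entity.parent!)
                }
        )
    }
}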
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or vertical) scrolling functionality that shows spatial photo and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use PreviewApplication to show a single preview, but I do not see anything for a collection of files like the Spatial Gallery app presents in its scrolling view. Any insights or direction on how this may be done are greatly appreciated.
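I don't know what Spatial Gallery uses internally, but if I recall the Quick Look API correctly, PreviewApplication can at least be handed several URLs at once, which gives Quick Look's own swipe-through navigation between them rather than a Gallery-style shelf. A minimal sketch, assuming local spatial photo files:

import QuickLook

// Sketch: open several spatial photos in one Quick Look session.
// `photoURLs` is assumed to point at local spatial photo files (.heic).
func openSpatialPhotos(_ photoURLs: [URL]) {
    // Returns a PreviewSession; keep a reference if you need to update or close it.
    let session = PreviewApplication.open(urls: photoURLs, selectedURL: photoURLs.first)
    _ = session
}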
I have been concentrating on developing a visionOS application. While I am quite familiar with RealityKit, CompositorServices has also captured my attention, though I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? Additionally, I would appreciate insights into the respective advantages of RealityKit and CompositorServices.
I'm starting my journey developing an immersive app for visionOS. I've been making steady progress, but I've encountered a specific challenge that I haven't been able to resolve.
I created two ModelEntity objects — a sphere and a cube — and added a DragGesture to the cube. When I drag the cube over the sphere, the two collide correctly, and the collision is logged in the console. So far, everything works as expected.
However, when I try to anchor the cube to my hand, the collision stops working. It's as if the cube loses its ability to detect collisions once it's anchored.
Any guidance or clarification on this behavior would be greatly appreciated.
//  ImmersiveView.swift
//  estudos_vision
//
//  Created by Lailan Rogerio Rodrigues Matos on 15/05/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @State private var session: SpatialTrackingSession?
    @State private var box = ModelEntity()
    @State private var subs: [EventSubscription] = []
    @State private var ballEntity: Entity?

    var body: some View {
        RealityView { content in
            // Load initial content from the RealityKit scene.
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Create and run a spatial tracking session.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            _ = await session.run(configuration)
            self.session = session

            // Create a red box.
            let boxMesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .red, isMetallic: false)
            box = ModelEntity(mesh: boxMesh, materials: [material])
            box.position.y += 0.15 // Position the box slightly above the origin.

            // Configure the box for user interaction and physics.
            box.components.set(InputTargetComponent(allowedInputTypes: .indirect)) // Make it interactive.
            box.generateCollisionShapes(recursive: false) // Generate collision shapes for physics.
            box.components.set(PhysicsBodyComponent( // Add physics behavior.
                massProperties: .default,
                material: .default,
                mode: .kinematic // Use kinematic mode so it can be moved by user interaction.
            ))
            box.components.set(GroundingShadowComponent(castsShadow: true)) // Add a shadow.
            //content.add(box) // Commented out; the box is added to the hand anchor instead.

            // Create a left hand anchor and add the box as a child.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
            handAnchor.addChild(box)
            content.add(handAnchor) // Add the hand anchor to the scene.

            // Create a sphere.
            let ball = ModelEntity(mesh: .generateSphere(radius: 0.15))
            ball.position = [0.0, 1.5, -1.0] // Initial position of the ball.
            ball.generateCollisionShapes(recursive: false) // Add collision.
            ball.name = "Sphere"
            content.add(ball)
            ballEntity = ball

            // Subscribe to collision events between the box and other entities.
            let event = content.subscribe(to: CollisionEvents.Began.self, on: box) { ce in
                print("Collision between \(ce.entityA.name) and \(ce.entityB.name) occurred")
                //ce.entityA.removeFromParent() // removes the colliding object
                //ce.entityB.removeFromParent()
            }
            Task {
                subs.append(event)
            }
        }
        // Add a drag gesture to the box, allowing the user to move it.
        .gesture(
            DragGesture()
                .targetedToEntity(box) // Target the drag gesture to the box.
                .onChanged({ value in
                    // Update the position of the box based on the drag gesture.
                    box.position = value.convert(value.location3D, from: .local, to: box.parent!)
                })
        )
    }
}

#Preview(immersionStyle: .full) {
    ImmersiveView()
        .environment(AppModel())
}
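One thing that might be worth trying (an assumption on my part, not a confirmed fix): entities under an AnchorEntity may be simulated in their own isolated physics world, in which case collisions against non-anchored entities won't fire. If I remember correctly, AnchoringComponent's physicsSimulation setting (visionOS 2 and later) can opt the anchor back into the shared simulation; a sketch that would slot in right after handAnchor is created:

// Untested sketch: opt the hand anchor out of its isolated physics simulation
// so its children share collision handling with the rest of the scene.
// (AnchoringComponent.physicsSimulation is assumed to require visionOS 2+.)
var anchoring = AnchoringComponent(.hand(.left, location: .palm), trackingMode: .continuous)
anchoring.physicsSimulation = .none
handAnchor.components.set(anchoring)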
I am experimenting with RealityKit to set up a portal. Everything works, but I was wondering where the scene's origin is with respect to the front of the portal window.
From experiments, the origin's X and Y appear to be at the center of the portal window, while the origin's Z appears to be about a meter behind the portal window.
Is this (at least roughly) correct? Is it documented anywhere?
PS. I began with the standard visionOS app and edited the Reality Composer Pro file to create the scene.
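For reference, here is a minimal sketch of the kind of portal setup being described, built from the standard WorldComponent / PortalComponent pattern (the placement values are my own, not from the original post). Content inside the portal is positioned relative to the world entity's own origin, which is what the question is probing:

import RealityKit

// Sketch: a plane that acts as a window into a separate "world" entity.
func makePortal() -> Entity {
    // The hidden world that is only visible through the portal.
    let world = Entity()
    world.components.set(WorldComponent())

    // Example content placed relative to the world's origin.
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.2),
                             materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    sphere.position = [0, 0, -1.0] // one meter "into" the world
    world.addChild(sphere)

    // The portal window itself.
    let portal = ModelEntity(mesh: .generatePlane(width: 1.0, height: 1.0),
                             materials: [PortalMaterial()])
    portal.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(portal)
    return root
}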
I'm facing an issue while using CustomHoverEffect. In my view there is a long title that ends up truncated. When the user hovers over it, the title should scroll. Although I have already implemented the scrolling effect, I am unsure how to trigger the scroll on hover. How should I approach this?
Hi !
I'm new on this forum, so if I need to update this post to have more info, or anything else, please let me know.
I'm using the Apple Vision Pro to develop an app (with Unity). To demonstrate what the user sees in the headset, I would like to mirror the view on another device (an iPad in this case). I managed to do this without any issue.
My problem is that, in the Vision Pro, I have an interface that the user can interact with, but I would like to manage that interface myself from the iPad. What I mean is that the user may (or may not, it doesn't matter) see the interface in the headset, while the interface is controlled by me on the iPad.
Is there any way to do this? Is this a question I should ask on Unity's forum? (I don't think so, because it should be related to the mirroring function, right?)
I have a gRPC server running inside a Task. When the user takes the headset off, the gRPC server no longer works when they put the headset back on.
I would like to detect this event so that I can cancel the task (which will effectively close the gRPC server).
I am also using a visual indicator to let the user know whether the server is running, but it does not accurately reflect the state of the server after the headset is removed and put back on.
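One signal that may map to the headset being removed is the scene phase dropping out of .active, since visionOS deactivates scenes when the device comes off. A minimal sketch of how that could be wired up (my assumption, with a hypothetical serverTask and runGRPCServer):

import SwiftUI

struct ServerHostView: View {
    @Environment(\.scenePhase) private var scenePhase
    // Hypothetical task that runs the gRPC server.
    @State private var serverTask: Task<Void, Never>?
    @State private var serverRunning = false

    var body: some View {
        Label(serverRunning ? "Server running" : "Server stopped",
              systemImage: serverRunning ? "checkmark.circle" : "xmark.circle")
        .onChange(of: scenePhase) { _, newPhase in
            if newPhase != .active {
                // Headset likely removed or app backgrounded: tear the server down.
                serverTask?.cancel()
                serverTask = nil
                serverRunning = false
            } else if serverTask == nil {
                // Back to active: restart the server.
                serverTask = Task {
                    serverRunning = true
                    // await runGRPCServer()   // hypothetical
                }
            }
        }
    }
}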
The Section struct only exposes the center property publicly, but this is a SIMD3 that doesn't seem to line up with the rest of the model. All other objects have a 4x4 transform matrix that accurately gives each position and rotation.
When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why are these not public? The transform in particular seems necessary to make any real use of the Sections.
I want to implement the functionality shown in this video. How should I set up the window?
Hi, I'm working on a visionOS app and would like to integrate Background Assets to download large files after the app is installed.
I'm wondering what would happen if the user takes off the headset while a background asset is being downloaded. Would it continue downloading, or would the download be stopped or paused?
I was looking for a way to download large assets while the user is not wearing the Vision Pro; is there any other alternative?
Thanks in advance.