I have my immersive space set up like:
ImmersiveSpace(id: "Theater") {
    ImmersiveTeleopView()
        .environment(appModel)
        .onAppear {
            appModel.immersiveSpaceState = .open
        }
        .onDisappear {
            appModel.immersiveSpaceState = .closed
        }
}
.immersionStyle(selection: .constant(appModel.immersionStyle.style), in: .mixed, .full)
Which allows me to set the immersion style while in the space (from a Picker in a SwiftUI window). The scene responds correctly, but a lot of the functionality of my immersive space is gone after the change in style: I am no longer able to enable/disable entities (which I also have toggles for in the SwiftUI window). I have to exit and re-enter the immersive space to regain the ability to change the enabled state of my entities.
My appModel.immersionStyle is inspired by the Compositor Services demo (although I am using a RealityView) described at https://developer.apple.com/documentation/CompositorServices/interacting-with-virtual-content-blended-with-passthrough and looks like this:
public enum IStyle: String, CaseIterable, Identifiable {
    case mixedStyle, fullStyle
    public var id: Self { self }

    var style: ImmersionStyle {
        switch self {
        case .mixedStyle:
            return .mixed
        case .fullStyle:
            return .full
        }
    }
}
/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    // Immersion Style
    public var immersionStyle: IStyle = .mixedStyle
    // ...
}
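For completeness, the Picker that drives the style change is nothing special; here is a minimal sketch of it (the view name is just illustrative):
import SwiftUI

struct ImmersionStylePicker: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        @Bindable var appModel = appModel
        // Bind the Picker selection directly to the observable app model.
        Picker("Immersion Style", selection: $appModel.immersionStyle) {
            ForEach(IStyle.allCases) { style in
                Text(style.rawValue).tag(style)
            }
        }
        .pickerStyle(.segmented)
    }
}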
Hello,
In my project, I have attached a ManipulationComponent to Entity A and, as expected, I'm able to interact with it using the built-in gestures. I have another Entity B, a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B; I can still interact with A, but B cannot be interacted with despite both entities having ManipulationComponents.
So I'm wondering if I'm just doing something wrong, if this is an issue with the ManipulationComponent, or if this is a limitation of the API.
Attached is the code used to add the ManipulationComponent to an entity; it was run on both A and B:
// Create and attach the manipulation component.
let mc = ManipulationComponent()
model.components.set(mc)

// Provide a collision shape for the manipulation system to use.
var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

// Tweak the manipulation behavior.
if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}
I am using visionOS 26.0; let me know if there's any additional information needed.
I want an AR character to be able to look at a position while still playing the character's animation.
So far, I managed to manually adjust a single bone rotation using
skeletalComponent.poses.default = Transform(
    scale: baseTransform.scale,
    rotation: lookAtRotation,
    translation: baseTransform.translation
)
which I run at every rendering update while a full-body animation is playing.
But of course, hard-coding a single joint (in my case the head) to point in a direction does not look as nice as running some inverse kinematics that includes the hips + neck + head joints.
I found some good IKRig code in Composing interactive 3D content with RealityKit and Reality Composer Pro.
But when I try to adjust the rig while animations are playing, the animations usually win over the IK rig's changes to the mesh.
Hello, esteemed tech developer.
I am using the Apple Vision Pro to create an AR assist system for the da Vinci Surgical Robot in a medical surgical suite, and I would like to capture eye-movement data with tester uniformity. Although the Apple Vision Pro has a superb infrared sensor to monitor eye movement, Apple does not seem to provide official access to it. (I'm aware of the many existing discussions about this, but I was still wondering if there might be an option, particularly for research labs.) Here's my FB number: FB16603687
Hi there,
I was looking to add a particle emitter to the augmented reality app I'm developing using RealityKit. I'm targeting iOS. I noticed in the documentation for ParticleEmitterComponent that iOS 18.0+ appears to be supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?
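For reference, this is roughly what I'm attempting (a minimal sketch; the component and its mainEmitter/birthRate properties are taken from the RealityKit documentation, and the helper function is just for illustration):
import RealityKit

@available(iOS 18.0, *)
func addParticles(to entity: Entity) {
    // Build a default emitter and tweak one value for illustration.
    var emitter = ParticleEmitterComponent()
    emitter.mainEmitter.birthRate = 100
    entity.components.set(emitter)
}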
Hi, I am a new developer. I want to add articulated and deformable objects to my AR game, and I hope to interact with these objects. I haven't found any tutorials on this; please let me know if this is available in visionOS.
I saw a magical sand table demo whose unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I want to know whether this effect can be achieved with the existing APIs, and I'd appreciate some ideas.
Entity.animate() makes entity animation much easier, but in many cases I want to interrupt the animation in progress because of a gesture. I couldn't find any way to do this; I even tried entity.stopAllAnimations(), but I have to wait until Entity.animate() completes.
iOS 26 / visionOS 26
I have a 3D model with mesh-based animation, meaning every frame uses a new mesh. I import it into a RealityView, but I can't play its animation; RealityKit tells me the model has no animations when I call print(entity.availableAnimations).
Hi,
I'm developing an app for the Apple Vision Pro.
Inside the app the user should be able to load objects from the web into the scene and then be able to move them around (dragging and rotating) via gestures.
My question:
I'm working with RealityKit and use a RealityView.
I have no issues loading in one object and making it interactive by adding gestures to the entire RealityView via the .gestures() function.
Also I succeeded in loading multiple objects into the scene.
My problem is that I can't figure out how to add my gestures to multiple objects independently.
I can't use Reality composer since I'm loading the objects dynamically into the scene.
Using .gestures() doesn't work for multiple objects since every gesture needs to be targeted to a specific entity but I have multiple entities.
I also tried defining a GestureComponent and adding it to every newly loaded entity, but that doesn't seem to work: nothing happens, even though my gesture is targeted to every entity that has my GestureComponent.
I only found solutions that aren't usable on visionOS / with RealityView, like installGestures.
Also I tried following this guide: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
But I feel like there are things missing and contradictory: an extension of RealityView is mentioned but never shown, and the guide states that we don't need to store the translation values for every entity, yet it creates an EntityGestureState.swift file to store those values, which is then never used; instead a GestureStateComponent is used that was never mentioned and that contradicts what was just said about not storing values per entity, because a new instance of it is created for every entity. Either way, following the guide didn't lead me to a working solution.
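For reference, the component-based approach I tried looks roughly like this (a minimal sketch using a hypothetical marker component, not my exact code):
import SwiftUI
import RealityKit

// Hypothetical marker component used to opt entities into dragging.
struct DraggableComponent: Component {}

struct MultiObjectView: View {
    var body: some View {
        RealityView { content in
            DraggableComponent.registerComponent()
            // Entities are loaded dynamically; each one gets the marker plus
            // the pieces RealityKit needs for hit-testing, e.g.:
            // entity.components.set(DraggableComponent())
            // entity.components.set(InputTargetComponent())
            // entity.generateCollisionShapes(recursive: true)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity() // one gesture shared by all entities
                .onChanged { value in
                    guard value.entity.components.has(DraggableComponent.self) else { return }
                    // Move the specific entity the gesture landed on.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}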
Can I combine FromToByAction and BindTarget.MaterialPath to animate my ShaderGraphMaterial? I don't know how to use BindTarget.MaterialPath.
When you tap the three-horizontal-lines button in the background, an analytics event is logged as the view is about to appear; but tapping the button again to close it triggers the view-will-appear method a second time, so the method effectively runs twice and the analytics counts end up incorrect.
iOS currently restricts background Bluetooth advertising and scanning in order to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences.
The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground. Under the current system, background BLE advertising is heavily throttled and can only transmit a limited payload, background scanning intervals are sparse and unpredictable, peer-to-peer proximity detection cannot be maintained reliably when apps are in the background, and Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.
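For concreteness, this is roughly all an app can do today under those constraints (a minimal CoreBluetooth sketch; the service UUID is a placeholder). In the background, iOS drops the local name and moves the service UUID into the overflow area, which is exactly the limitation described above:
import CoreBluetooth

final class ProximityAdvertiser: NSObject, CBPeripheralManagerDelegate {
    // Placeholder service UUID for the proximity service.
    private let serviceUUID = CBUUID(string: "0000FEED-0000-1000-8000-00805F9B34FB")
    private var peripheralManager: CBPeripheralManager?

    func start() {
        peripheralManager = CBPeripheralManager(delegate: self, queue: nil)
    }

    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        guard peripheral.state == .poweredOn else { return }
        // In the background this advertisement is throttled and stripped down.
        peripheral.startAdvertising([
            CBAdvertisementDataServiceUUIDsKey: [serviceUUID]
        ])
    }
}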
A proposed enhancement would be to introduce an “Enhanced Proximity Permission.” This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other’s proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.
Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot traffic analysis and dwell-time insights. Social apps could allow users to find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.
Privacy safeguards would remain central. Permissions would be time-boxed and expire after an event or session. A mandatory visual indicator would be displayed whenever proximity tracking is active. A user-facing dashboard would show all apps granted enhanced proximity access. Permissions would automatically be revoked after a period of non-use, and only ephemeral tokens, not permanent identifiers, would be broadcast.
The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple’s leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.
Can anyone from Apple consider this change? Having to buy iBeacons is brutal and means slower adoption. Please reconsider this for users who opt in.
hi guys,
I work in the VFX industry, and I have a question: is it possible to create immersive video directly from a virtual scene built in DCC software like Maya, render it into footage, encode it into immersive video, and finally play it on Vision Pro?
thanks.
Dear all,
I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality of visionOS from Unity. The framework is written in Swift. I create an Objective-C wrapper with the following declarations:
...
void _initTTS(int);
...
I create the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:
public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);

    ...

    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}
I have managed to compile and run the program in the Apple Vision Pro, but I keep on getting the following error:
DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)
If I run the generated code from Xcode, I can see the app in the AVP, but I keep getting a loading error:
DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0
I can see in the generated code that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the framework search paths, with no success...
Any hints or suggestions are much appreciated.
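One variant I still want to try, based on how Unity handles statically linked iOS plugins (I'm not sure whether the same convention applies to visionOS, so this is only an assumption):
public static class TTSPluginManager
{
    // On platforms where the plugin is linked into the app binary (as on iOS),
    // Unity expects "__Internal" instead of the framework name. Assumption only.
#if UNITY_VISIONOS
    [DllImport("__Internal")]
#else
    [DllImport("TTS_Vision")]
#endif
    private static extern void _initTTS(int val);
}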
Do you retain a reference to your content (RealityViewContent) events? For example, the Manipulation Events docs from Apple use _ to discard the result. In theory the event should keep working while the content is alive.
_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}

_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}
We could store these events in state. I've seen this in a few samples and apps.
@State var beginSubscription: EventSubscription?

...

beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
The main advantage I see is that we can be more explicit about when we remove the event. Are there other reasons to keep a reference to these events?
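By "more explicit" I mean something like cancelling the subscription when the view goes away; a minimal sketch of what I have in mind:
import SwiftUI
import RealityKit

struct ManipulationColorView: View {
    @State private var beginSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
                event.entity.components[ModelComponent.self]?.materials[0] =
                    SimpleMaterial(color: .blue, isMetallic: false)
            }
        }
        .onDisappear {
            // Explicitly stop receiving events once the view disappears.
            beginSubscription?.cancel()
            beginSubscription = nil
        }
    }
}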
While developing a widget for the visionOS 26 beta, I found that it does not work normally when running on a real visionOS device; an error appears:
Please adopt container background api
It is worth mentioning that this problem does not occur in the visionOS simulator.
Does anyone know the reason and a solution, or whether this is a visionOS bug that needs a Feedback report? Thank you!
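For context, my understanding is that the error refers to WidgetKit's container background API; this is a minimal sketch of what adopting it usually looks like (the view here is a placeholder, not my actual widget):
import SwiftUI
import WidgetKit

struct PlaceholderWidgetEntryView: View {
    var body: some View {
        Text("Hello")
            // Declare the widget's background via the container background API.
            .containerBackground(for: .widget) {
                Color.clear
            }
    }
}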
I'm starting my journey in developing an immersive app for visionOS. I've been making steady progress, but I've encountered a specific challenge that I haven't been able to resolve.
I created two ModelEntity objects — a sphere and a cube — and added a DragGesture to the cube. When I drag the cube over the sphere, the two collide correctly, and the collision is logged in the console. So far, everything works as expected.
However, when I try to anchor the cube to my hand, the collision stops working. It's as if the cube loses its ability to detect collisions once it's anchored.
Any guidance or clarification on this behavior would be greatly appreciated.
// ImmersiveView.swift
// estudos_vision
//
// Created by Lailan Rogerio Rodrigues Matos on 15/05/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @State private var session: SpatialTrackingSession?
    @State private var box = ModelEntity()
    @State private var subs: [EventSubscription] = []
    @State private var ballEntity: Entity?

    var body: some View {
        RealityView { content in
            // Load initial content from the RealityKit scene.
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Create and run a spatial tracking session.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            _ = await session.run(configuration)
            self.session = session

            // Create a red box.
            let boxMesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .red, isMetallic: false)
            box = ModelEntity(mesh: boxMesh, materials: [material])
            box.position.y += 0.15 // Position the box slightly above the origin.

            // Configure the box for user interaction and physics.
            box.components.set(InputTargetComponent(allowedInputTypes: .indirect)) // Make it interactive.
            box.generateCollisionShapes(recursive: false) // Generate collision shapes for physics.
            box.components.set(PhysicsBodyComponent( // Add physics behavior.
                massProperties: .default,
                material: .default,
                mode: .kinematic // Use kinematic mode so it can be moved by user interaction.
            ))
            box.components.set(GroundingShadowComponent(castsShadow: true)) // Add a shadow.
            //content.add(box) // Commented out to add the box to the hand anchor instead.

            // Create a left hand anchor and add the box as a child.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
            handAnchor.addChild(box)
            content.add(handAnchor) // Add the hand anchor to the scene.

            // Create a sphere.
            let ball = ModelEntity(mesh: .generateSphere(radius: 0.15))
            ball.position = [0.0, 1.5, -1.0] // Initial position of the ball.
            ball.generateCollisionShapes(recursive: false) // Add collision.
            ball.name = "Sphere"
            content.add(ball)
            ballEntity = ball

            // Subscribe to collision events between the box and other entities.
            let event = content.subscribe(to: CollisionEvents.Began.self, on: box) { ce in
                print("Collision between \(ce.entityA.name) and \(ce.entityB.name) occurred")
                //ce.entityA.removeFromParent() // Removes the colliding object.
                //ce.entityB.removeFromParent()
            }
            Task {
                subs.append(event)
            }
        }
        // Add a drag gesture to the box, allowing the user to move it.
        .gesture(
            DragGesture()
                .targetedToEntity(box) // Target the drag gesture to the box.
                .onChanged { value in
                    // Update the position of the box based on the drag gesture.
                    box.position = value.convert(value.location3D, from: .local, to: box.parent!)
                }
        )
    }
}

#Preview(immersionStyle: .full) {
    ImmersiveView()
        .environment(AppModel())
}
In the DestinationVideo demo, the onAppear in UpNextView is triggered again when the view is closed, but I only want it to be triggered once. How can I achieve this?
Alternatively, I would like to capture the button click events in the player menu, as shown in the screenshot below.
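As a workaround for the first issue, what I have in mind is a one-shot guard, roughly like this (a minimal sketch; the view body and the analytics call are placeholders, not the actual DestinationVideo code):
import SwiftUI

struct UpNextSection: View {
    @State private var didLogAppear = false

    var body: some View {
        List { /* up-next items */ }
            .onAppear {
                // Only log the event the first time the view appears.
                guard !didLogAppear else { return }
                didLogAppear = true
                logAppearEvent()
            }
    }

    private func logAppearEvent() { /* hypothetical analytics call */ }
}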
I have a gRPC server running inside a Task. When the user takes the headset off, the gRPC server no longer works once they put the headset back on.
I would like to detect this so that I can cancel the Task (which will effectively shut down the gRPC server).
I am also using a visual indicator to let the user know whether the server is running, but it does not accurately reflect the server's state when the headset is removed and put back on.
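What I'm considering is cancelling the server task when the scene stops being active, roughly like this (a minimal sketch; I'm assuming scenePhase changes when the headset is taken off, and serverTask/startServer() are placeholders):
import SwiftUI

struct ServerHostView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?

    var body: some View {
        // Visual indicator driven by whether the task is alive.
        Text(serverTask == nil ? "Server stopped" : "Server running")
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase != .active {
                    serverTask?.cancel()
                    serverTask = nil
                } else if serverTask == nil {
                    serverTask = Task { await startServer() }
                }
            }
    }

    private func startServer() async { /* run the gRPC server here */ }
}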