My coworkers and I are guessing at what data defines an anchor. I tried searching but struggled to find anything helpful.
Our best guess was a combination of triangulated irregular networks (TIN), GPS, magnetic compass direction, and maybe elevation sensors.
Is this documented anywhere? If not, can a definition or description be provided?
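For framing, the only anchor data I can find exposed to apps (assuming this is about ARKit world anchors on visionOS, which may not be what we mean) is a pose plus an identifier, roughly:

import ARKit

// Sketch: the fields an app can read from a visionOS WorldAnchor.
// Whatever the system stores internally to recognize the place again isn't exposed.
func describe(_ anchor: WorldAnchor) {
    print(anchor.id)                         // UUID identifying the anchor
    print(anchor.originFromAnchorTransform)  // 4x4 pose relative to the app's origin
    print(anchor.isTracked)                  // whether the anchor is currently re-localized
}

Everything beyond that appears to be internal, which is why we're asking.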
While using Screen Mirroring in Developer Mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor stays horizontal instead of aligning with the surface of the attachment view. It shows correctly on a 2D window; it is only wrong on the attachment view.
Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view?
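For reference, the attachment entity is currently set up roughly like this (a sketch; the attachment ID and collision-box size are placeholders, and I'm not sure whether any of these components even affect cursor alignment):

// Inside the RealityView update/attachments closure.
if let panel = attachments.entity(for: "infoPanel") {   // "infoPanel" is a placeholder attachment ID
    panel.components.set(InputTargetComponent())
    panel.components.set(CollisionComponent(shapes: [.generateBox(size: [0.4, 0.3, 0.01])]))
    panel.components.set(HoverEffectComponent())
    content.add(panel)
}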
Any help would be appreciated, thanks.
I have this problem on visionOS. When I dismiss and reopen a window from an ImagePresentationComponent, the window is missing the resize UI elements when I look at the window corners. The rest of the window UI elements (drag bar, close button, ...) are there. Resizing was possible before the window was dismissed.
The code is something like this:
WindowGroup(id: "image-display-window", ...) {
    ...
}
.windowResizability(.automatic)
.windowStyle(.plain)
I call dismissWindow() from the window's view and it is dismissed correctly.
Then I call openWindow(id: "image-display-window", value: data) from another view to reopen it. It reopens, but it is missing the ability to resize.
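For clarity, the full setup is essentially the following sketch (MyImageData and ImageDisplayView are placeholders for my real types):

// Scene declaration (sketch).
WindowGroup(id: "image-display-window", for: MyImageData.self) { $data in
    ImageDisplayView(data: $data)
}
.windowResizability(.automatic)
.windowStyle(.plain)

// Inside the window's view:
//   @Environment(\.dismissWindow) private var dismissWindow
//   dismissWindow(id: "image-display-window")

// From another view, to reopen:
//   @Environment(\.openWindow) private var openWindow
//   openWindow(id: "image-display-window", value: data)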
Does anyone know how to fix this?
Thanks.
Hi there,
I'm building a workplace experience that requires using virtual desktop. Is there a way to launch it from my code so the user doesn't have to do it manually?
Thanks in advance!
I have a visionOS 2 project created in Xcode 16. After updating to Xcode 26 beta 5, I can't build it any more; every time, it gets stuck in the process shown in the picture below:
I have already tried many ways to fix this issue, such as clearing the build folder, but none of them work.
MacBook Air M2 / MacOS 26 beta5 / Xcode 26 beta5
With Xcode 26 and visionOS 26, Apple provides observable properties for Entity, so we can easily interact with an Entity between a RealityView scene and SwiftUI, but there is an issue:
It's fine to observe an Entity's position and scale properties from a Slider, but I can't observe the orientation property from a Slider.
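To illustrate, the bindings I'm using look roughly like this (a sketch; the entity reference and ranges are placeholders). Position works both ways, but the orientation Slider doesn't seem to refresh when the entity is rotated elsewhere:

import SwiftUI
import RealityKit

// Sketch: driving an Entity's transform from Sliders.
struct EntityTransformControls: View {
    let targetEntity: Entity   // placeholder for however the entity is provided

    var body: some View {
        VStack {
            // Position: observation and updates behave as expected.
            Slider(value: Binding(
                get: { targetEntity.position.x },
                set: { targetEntity.position.x = $0 }
            ), in: -1...1)

            // Orientation: bind a single rotation angle around Y instead of the raw quaternion.
            Slider(value: Binding(
                get: { targetEntity.orientation.angle },
                set: { targetEntity.orientation = simd_quatf(angle: $0, axis: [0, 1, 0]) }
            ), in: 0...(2 * .pi))
        }
    }
}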
MacBook Air M2 / Xcode 26 beta6
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV?
APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical.
If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency?
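For context, the player-side tuning I've experimented with so far is roughly the following sketch (whether any of these settings matter for APMP / Wide FoV content is exactly what I'm unsure about):

import AVFoundation

// Sketch: player-side settings sometimes used to reduce live-playback latency.
// Actual latency is mostly determined by the stream itself (e.g. LL-HLS part durations).
func makeLowLatencyPlayer(url: URL) -> AVPlayer {
    let item = AVPlayerItem(url: url)
    item.preferredForwardBufferDuration = 1          // keep the forward buffer small
    let player = AVPlayer(playerItem: item)
    player.automaticallyWaitsToMinimizeStalling = false
    return player
}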
Thank you for any insights.
I have my immersive space set up like:
ImmersiveSpace(id: "Theater") {
    ImmersiveTeleopView()
        .environment(appModel)
        .onAppear {
            appModel.immersiveSpaceState = .open
        }
        .onDisappear {
            appModel.immersiveSpaceState = .closed
        }
}
.immersionStyle(selection: .constant(appModel.immersionStyle.style), in: .mixed, .full)
This allows me to set the immersion style while in the space (from a Picker in a SwiftUI window). The scene responds correctly, but a lot of the functionality of my immersive space is gone after the change in style: I am no longer able to enable/disable entities (which I also have toggles for in the SwiftUI window). I have to exit and re-enter the immersive space to regain the ability to change the enabled state of my entities.
My appModel.immersionStyle is inspired by the Compositor-Services demo (although I am using a RealityView) listed in https://developer.apple.com/documentation/CompositorServices/interacting-with-virtual-content-blended-with-passthrough and looks like this:
public enum IStyle: String, CaseIterable, Identifiable {
    case mixedStyle, fullStyle

    public var id: Self { self }

    var style: ImmersionStyle {
        switch self {
        case .mixedStyle:
            return .mixed
        case .fullStyle:
            return .full
        }
    }
}
/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    // Immersion Style
    public var immersionStyle: IStyle = .mixedStyle
    // ...
}
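For completeness, the Picker that drives the style change is essentially this (a sketch, assuming a bindable reference to the shared AppModel):

import SwiftUI

// Sketch of the control in the 2D window that switches the immersion style.
struct ImmersionStylePicker: View {
    @Bindable var appModel: AppModel

    var body: some View {
        Picker("Immersion Style", selection: $appModel.immersionStyle) {
            ForEach(IStyle.allCases) { style in
                Text(style.rawValue).tag(style)
            }
        }
        .pickerStyle(.segmented)
    }
}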
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations.
If I don't run testAnimation, nothing happens.
If I run testAnimation, I see the same animation as if I had called
entity.playAnimation(entity.availableAnimations[0],..)
Here's the full code I use to animate a single bone:
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature, let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }

    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]

    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))

    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
PS: I can confirm that I can set this bone to a specific position if I use
guard let index = newPose.jointNames.firstIndex(of: boneName) ...

let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
This works for manually setting the bone position, so the jawBoneName and the joint-transformation can't be that wrong.
I want an AR character to be able to look at a position while still playing the characters animation.
So far, I managed to manually adjust a single bone rotation using
var pose = skeletalComponent.poses.default
pose.jointTransforms[headJointIndex] = Transform(   // headJointIndex: index of the head joint in jointNames
    scale: baseTransform.scale,
    rotation: lookAtRotation,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = pose
which I run at every rendering update, while a full body animation is running.
But of course, hardcoding a single joint to point in a direction (in my case the head) does not look as nice as running some inverse kinematics that includes the hips + neck + head joints.
I found some good IKRig code in "Composing interactive 3D content with RealityKit and Reality Composer Pro".
But when I try to adjust the rig while animations are playing, the animations usually win over the IKRig changes to the mesh.
How can I renew a visionOS Enterprise API license?
I've spent so much time contacting Apple Developer Support. They said they don't know the renewal process either and are "checking with the internal operations team" - but it's been 2 months with no updates.
The official documentation (https://developer.apple.com/documentation/visionOS/building-spatial-experiences-for-business-apps-with-enterprise-apis#Request-the-entitlements) says:
"The license file comes with an expiration date, so you need to renew it before then to ensure your entitlements continue to function."
But it doesn't explain HOW to renew.
When I asked Apple Support about this, they told me:
"After Apple approves your app for one or more entitlements, you receive a license file, along with additional instructions."
But I never received any instructions when I was first approved, and I still don't know how to renew. There's also no direct way to contact the Enterprise API team.
Now my visionOS Enterprise API license has been expired for 2 months. I submitted a renewal request, but I still haven't heard anything back. Is it normal to take more than 2 months for approval? Any advice or shared experiences would be really helpful.
Thanks!
Hi there,
I was looking to add a particle emitter to the augmented reality app I'm developing with RealityKit, targeting iOS. I noticed in the documentation for ParticleEmitterComponent that iOS 18.0+ appears to be supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?
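For reference, the minimal usage I'm attempting looks roughly like this (a sketch; the emitter settings are arbitrary, and the availability gate simply mirrors the documented iOS 18.0+ requirement):

import RealityKit

// Sketch: attach a default particle emitter to an entity, gated on iOS 18.
@available(iOS 18.0, *)
func addParticles(to entity: Entity) {
    var emitter = ParticleEmitterComponent()
    emitter.emitterShape = .sphere   // one of the built-in emitter shapes
    entity.components.set(emitter)
}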
I have a mesh-based animation on a 3D model, meaning every frame is a new mesh. I import it into a RealityView, but I can't play its animation; RealityKit tells me the model has no animations when I call print(entity.availableAnimations).
Hi,
I wanted to ask if you are familiar with a way of making the Logitech Muse sterile for operating room use?
The purpose is to create a simple web-based gallery of spatial photos and videos using static HTML files. I have successfully displayed spatial photos using the img tag and .heic files. I can tap and hold the image to bring up the contextual menu and from there select View Spatial Photo. Is there any way to add a control to the image, like a link or an overlay on the image itself, that a user can simply tap to show the image in 3D? And how can I host a video file on a web page without going through a CDN/streaming service? Sample HTML would be much appreciated.
Hello, esteemed tech developers.
I am using the Apple Vision Pro to create an AR assist system for the da Vinci Surgical Robot in a medical surgical suite, and I would like to capture eye-movement data with tester uniformity. Although the Apple Vision Pro has a superb infrared sensor to monitor eye movement, Apple does not seem to offer official open access to it. (I'm aware of many existing discussions about this, but I was still wondering if there might be an option, particularly for research labs.) Here's my feedback number: FB16603687
My iPhone 12 doesn't have enough memory. What should I do?
Hello,
I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.
Intended Features
A Mixed Full Space app in which dozens of 3D models are placed in the space.
These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space.
Some media elements play automatically, while others are triggered by user interaction.
However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as:
audio only
spatial audio
video without audio
video with audio
I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.
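To make the question concrete, the kind of wiring I have in mind is roughly the following sketch (MyGalleryActivity is a placeholder activity; whether one playback coordinator per AVPlayer covers all of the media types above is the open question):

import AVFoundation
import GroupActivities

// Placeholder GroupActivity describing the shared session.
struct MyGalleryActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared 3D Gallery"
        meta.type = .generic
        return meta
    }
}

// Attach each AVPlayer's playback coordinator to the group session so that
// play/pause/seek stay in sync across participants.
func configureSharedPlayback(players: [AVPlayer], session: GroupSession<MyGalleryActivity>) {
    for player in players {
        player.playbackCoordinator.coordinateWithSession(session)
    }
}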
Questions
Is it technically possible to implement the experience described above using visionOS?
Are there any important implementation considerations or limitations that should be taken into account?
For example, when two participants experience the app simultaneously, how is the content positioned for each participant?
Is the spatial placement of content shared across participants, or is it positioned relative to each participant’s viewpoint?
For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible for nearby users during the experience; is this correct?
When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant’s contact information?
I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details:
https://developer.apple.com/videos/play/wwdc2025/318/
Thank you.
I’m currently developing an app for visionOS and working with an ImmersiveSpace. I’ve noticed that the system automatically enforces a safety boundary at approximately 1.5 meters. If the user moves beyond this limit, the content fades out or the system reverts to Passthrough.
Is there any way to disable this boundary or extend its radius?
This app is currently in the experimental/verification phase, and it is intended to be run on a Vision Pro in Developer Mode. Since the primary goal is to test large-scale spatial interactions during development, I am looking for any way—including developer-specific settings or configurations—to bypass or expand this limit.
If there isn't a direct API to change the boundary size, are there any recommended workarounds for testing movement within large environments?
Any insights would be greatly appreciated!
Hi, I am a new developer. I want to add articulated objects and deformable objects to my AR game, and I hope to interact with these objects. I haven't found any tutorials on this. Please let me know if this is available in visionOS.