Hi Apple Team and Developers,
First of all, I’d like to express my appreciation for the incredible results achieved using PhotogrammetrySession. I’ve been developing a portrait scanning app using Object Capture, and in many tests—especially with human models—I’ve found the reconstructed body surfaces are remarkably smooth and clean, often outperforming tools like Metashape and RealityCapture in terms of aesthetic results.
However, I’ve encountered some challenges when working with complex areas like long hair overlapping the face. For instance, with female models where strands of hair partially occlude the face, the resulting mesh tends to merge the hair and facial geometry. This leads to distorted or “melted” facial features, likely due to ambiguity in the geometry estimation phase.
Feature Suggestion:
Would it be possible to allow developers to supply two versions of the input images:
• One version (original) for texture generation
• A pre-processed version (e.g., contrast-enhanced or CLAHE filtered) to guide mesh reconstruction only
This would give us the flexibility to enhance edge features or shadow detail without affecting the final texture appearance. In other photogrammetry pipelines, applying image enhancement selectively before dense reconstruction improves geometry quality in low-contrast areas.
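To illustrate the kind of pre-processing meant here, a contrast-enhanced copy of each capture image could be produced with Core Image before handing a geometry-only image set to the session. This is purely a sketch: the helper name and filter values are hypothetical, and PhotogrammetrySession does not currently accept a separate geometry-only image set.

import CoreImage
import CoreImage.CIFilterBuiltins

// Hypothetical helper: produce a contrast-enhanced copy of a capture image,
// intended only to guide geometry reconstruction; the untouched original
// would still be used for texturing under the proposed two-path workflow.
func enhancedForGeometry(_ url: URL) -> CIImage? {
    guard let input = CIImage(contentsOf: url) else { return nil }

    // Lift shadow detail so dark, low-contrast regions (e.g. hair over the face)
    // expose more gradients to the reconstruction stage.
    let shadows = CIFilter.highlightShadowAdjust()
    shadows.inputImage = input
    shadows.shadowAmount = 0.7

    // Add global contrast on top of the shadow lift.
    let contrast = CIFilter.colorControls()
    contrast.inputImage = shadows.outputImage
    contrast.contrast = 1.2

    return contrast.outputImage
}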
Question:
Is there any plan to support this kind of two-path workflow in future versions of PhotogrammetrySession? Or perhaps expose more intermediate stages or tunable parameters to developers?
Also, any hints on what we can expect from WWDC 2025 regarding improvements to Object Capture or related vision/3D technologies?
Thanks again for this powerful API. Looking forward to hearing insights from the team and other developers.
Warm regards,
KitCheng
Is there any way to convert TextureResource to Image
I have two RealityViews: ParentView and ChildView. When I click the button in ParentView, ChildView is shown as a full-screen cover, but the camera feed in ChildView is not shown, only a black screen.
If I present ChildView directly, the camera feed works.
Please help me with this issue. Thanks.
import RealityKit
import SwiftUI

struct ParentView: View {
    @State private var showIt = false

    var body: some View {
        ZStack {
            RealityView { content in
                content.camera = .virtual
                // SimpleMaterial is used here so the snippet is self-contained
                // (the original called a custom createSimpleMaterial(color:) helper).
                let box = ModelEntity(mesh: MeshResource.generateSphere(radius: 0.2),
                                      materials: [SimpleMaterial(color: .red, isMetallic: false)])
                content.add(box)
            }
            Button("Click here") {
                showIt = true
            }
        }
        .fullScreenCover(isPresented: $showIt) {
            ChildView()
                .overlay(
                    Button("Close") {
                        showIt = false
                    }.padding(20),
                    alignment: .bottomLeading
                )
        }
        .ignoresSafeArea(.all)
    }
}
import ARKit
import RealityKit
import SwiftUI

struct ChildView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
        }
    }
}
Hello,
I have downloaded and run the sample object tracking app for visionOS.
Now I'm working on my own objects for tracking. I have made a model in Create ML using images of my object.
However, I cannot see how to convert the Create ML output file (xxx.mlmodel) into a reference object like the files in the sample project.
Is there a tool for converting them?
TIA
Topic:
Spatial Computing
SubTopic:
ARKit
In my visionOS app, I have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting from immersive space. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
I tried managing opacity and visibility, but neither worked as expected.
I couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.
I'm looking for a solution to keep the main window's state intact while transitioning between immersive and normal modes.
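One possible workaround, sketched below under assumptions (the AppModel, MainView, and PlayerView names are hypothetical): keep the window's state in a model object owned by the App rather than by the view, so that closing the window while the immersive space is open and reopening it afterwards resumes from the same state instead of starting fresh.

import SwiftUI

// Hypothetical shared model owned by the App, so it survives the main window
// being closed while the immersive space is open.
@Observable
final class AppModel {
    var selectedVideo: String?
}

@main
struct VideoApp: App {
    @State private var appModel = AppModel()

    var body: some Scene {
        WindowGroup {
            MainView()
                .environment(appModel)
        }

        ImmersiveSpace(id: "Player") {
            PlayerView()
                .environment(appModel)
        }
    }
}

struct MainView: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        // The listing UI reads and writes appModel, so reopening this window
        // after exiting the immersive space restores the previous selection.
        Button("Play \(appModel.selectedVideo ?? "nothing selected")") {
            Task {
                _ = await openImmersiveSpace(id: "Player")
                dismissWindow()
            }
        }
    }
}

struct PlayerView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        Text("Streaming \(appModel.selectedVideo ?? "nothing")")
    }
}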
I saw mentions at WWDC25 of visionOS 26 now providing hand-tracking poses at 90 Hz, but I also recall that being a feature in visionOS 2.
Is there something new happening in visionOS 26 that makes its implementation of hand tracking "better"?
Topic:
Spatial Computing
SubTopic:
General
I'm using Unity 2022.3.56f, with Apple VisionOS App Mode set to 'Virtual Reality - Fully Immersive Space'.
It seems that the render resolution of my game, when built to the Apple Vision Pro, is well below the native resolution of the AVP displays.
I can't see a setting in the XR Plug-in Management Apple visionOS options, or in the Quality settings, to increase the render resolution. Is this possible?
For example, I tried setting:
UnityEngine.XR.XRSettings.eyeTextureResolutionScale = 2.0f
but this doesn't seem to do anything to the render resolution in the build.
Topic:
Spatial Computing
SubTopic:
General
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
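For reference, this error indicates that the Reality Composer Pro content package still declares visionOS 1.0 in its platforms array while the scene uses a component that requires a newer OS (here EnvironmentLightingConfiguration, which needs visionOS 2.0). A minimal sketch of that package's Package.swift with the platforms entry raised; the package name and tools version are the Xcode defaults, so adjust them to match your project:

// swift-tools-version:6.0
// Package.swift of the Reality Composer Pro content package.
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)   // raised from .v1 so visionOS 2 components compile
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)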
I was watching the Developer videos, and there was mention that RealityView handles persistent world data differently and also automatically for us.
I'm having trouble finding the material I need to get up to speed on that.
In ARKit, I was able to place a model with the world data and recall that .map data. It even stored a reference image for the scene to help match the world data.
I'm looking for information on how to implement and work with those same features in RealityView, since it seems to be better or automatically integrated.
I need help being pointed in the right direction. Sample code would be amazing.
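For what it's worth, a minimal sketch of the visionOS approach, assuming an immersive space with world-sensing authorization: as I understand it, WorldAnchors added through ARKitSession and WorldTrackingProvider are persisted by the system and re-delivered through anchorUpdates on later launches, which replaces the manual ARWorldMap save/load flow from iOS ARKit. The class and method names below, other than the ARKit APIs, are hypothetical.

import ARKit
import RealityKit

final class WorldAnchorStore {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    // Start tracking and listen for anchors, including ones the system
    // restored from previous runs of the app.
    func start() async throws {
        try await session.run([worldTracking])

        Task {
            for await update in worldTracking.anchorUpdates where update.event == .added {
                // Re-attach content using update.anchor.originFromAnchorTransform here.
                print("World anchor available:", update.anchor.id)
            }
        }
    }

    // Persist a new anchor at the given world transform.
    func place(at transform: simd_float4x4) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(anchor)
    }
}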
Topic:
Spatial Computing
SubTopic:
ARKit
The visionOS 26 Enterprise API has a new feature, Shared Coordinate Space: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network, and once a shared coordinate space is established with nearby participants, the connectedParticipantIdentifiers(participants: [UUID]) event is received.
However, Event.participantIdentifier is still an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF) at that point. I wonder when or how I can get a valid event.participantIdentifier, or is there some other way to get the local participantIdentifier?
Or, if it's a bug, please fix it in a later beta release. Thank you.
I have tested the MagnifyGesture code below on multiple devices:
Vision Pro - working
iPhone - working
iPad - working
macOS - not working
In Reality Composer Pro, I have also added the below components to the test model entity:
Input Target
Collision
For macOS, I tried the trackpad pinch gesture and the mouse scroll wheel, but neither approach works. How can I resolve this issue? Thank you.
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
        }
        .gesture(MagnifyGesture()
            .targetedToAnyEntity()
            .onChanged(onMagnifyChanged)
            .onEnded(onMagnifyEnded))
    }

    func onMagnifyChanged(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyChanged")
    }

    func onMagnifyEnded(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyEnded")
    }
}
Topic:
Spatial Computing
SubTopic:
General
My ARViewContainer code is not working. I don't know how to debug the issue, and I don't see where my results are going. I need help resolving this issue; please help me debug. See the code below:
How do I convert a blend shape/morphed 3D lip-synced model into a usdz that will play in AR on an iPhone?
It seems there is a contradiction between how ARKit defines UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight in the UIKit documentation.
In the ARKit documentation for ARCamera.transform, it says the following:
This transform creates a local coordinate space for the camera that is constant with respect to device orientation. In camera space, the x-axis points to the right when the device is in UIDeviceOrientation.landscapeRight orientation—that is, the x-axis always points along the long axis of the device, from the front-facing camera toward the Home button. The y-axis points upward (with respect to UIDeviceOrientation.landscapeRight orientation), and the z-axis points away from the device on the screen side.
Going through the same link, we see the definition of UIDeviceOrientation.landscapeRight given as:
The device is in landscape mode, with the device held upright and the front-facing camera on the right side.
There seems to be a conflict between the two definitions, which has already been asked about and visualized in this StackOverflow thread.
The accepted answer says that ARKit's landscapeRight, unlike what is given in UIDeviceOrientation.landscapeRight, has the Home button on the right, as stated in the ARCamera.transform documentation.
It says that more details are given in this StackOverflow thread, but that thread discusses the discrepancy between the definitions of landscapeRight in UIDeviceOrientation and UIInterfaceOrientation, not anything related to ARKit.
So I am wondering: why does ARKit's definition of landscapeRight contradict that of UIDeviceOrientation despite explicitly referencing it? Is it just a mistake by Apple's developers that hasn't been resolved even after so long?
What is the recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.
If I place the .usdz file in the project directory alongside other .swift files, ModelEntity loads it perfectly. However, if I try to load the same file from Reality Composer Pro under RealityKitContent.rkassets, I get the error: resourceNotFound("heart").
Could someone help me with this? Thank you so much
Code:
//
// TestttttttApp.swift
// Testtttttt
//
// Created by Zhendong Chen on 2/17/25.
//
import SwiftUI
@main
struct TestttttttApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}
//
// ContentView.swift
// Testtttttt
//
// Created by Zhendong Chen on 2/17/25.
//
import SwiftUI
import RealityKit
import RealityKitContent
struct ContentView: View {
    @State private var enlarge = false

    var body: some View {
        RealityView { content in
            do {
                // MARK: Works
                let scene = try await ModelEntity(named: "heart")
                content.add(scene)

                // MARK: Doesn't work
                // let scene = try await ModelEntity(named: "heart", in: realityKitContentBundle)
                // content.add(scene)
            } catch {
                print(error)
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer
RealityKit
Reality Composer Pro
visionOS
I would like to know if there have been any updates that improve image-tracking speed. I submitted feedback a while ago mentioning that the image-tracking rate was only one frame per second on Vision Pro.
Thank you very much. FB15688516 and FB16745373 are the feedback reports we previously submitted, but we have not received any response yet.
I'm wondering if this is even possible without using CVImageBuffer and passing each frame as an image, which I imagine would be very expensive.
I have a PoC of a shader graph that applies a radial zoom effect to an image. In RealityKit, I'm passing the image as a resource:
if let textureResource = try? await TextureResource(named: "fuji") {
    let value = MaterialParameters.Value.textureResource(textureResource)
    try? material.setParameter(name: "MyImage", value: value)
    model.model?.materials = [material]
}
Thanks in advance
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
visionOS
Hello,
I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual objects behind them. Something similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people
Unfortunately, I could not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools?
Thanks!
Mehdi
I’m using ARKit + SceneKit (Swift) with ARWorldTrackingConfiguration and detectionImages to place a 3D object (USDZ via SCNScene(named:)) when a reference image is detected. While the image is tracked, the object stays correctly aligned.
Goal: When the tracked image is no longer visible, I want the placed node to remain visible and fixed at its last known pose (no drifting) as I move the camera.
What works so far: detect the image → add the node → track updates. When the image disappears → keep showing the node at its last pose.
Problem: After the image is no longer tracked, the node drifts as I move the device/camera. It looks like it’s still influenced by the (now unreliable) image anchor or accumulating small world-tracking errors.
Question: What’s the correct way in ARKit to “freeze” the node at its last known world transform once ARImageAnchor stops tracking, so it doesn’t drift?
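One approach worth trying, sketched below under assumptions: when ARImageAnchor.isTracked becomes false, capture the node's world transform and reparent it from the image anchor's node to the scene's root node, so it is no longer driven by the (now unreliable) anchor. The ImageFreezeDelegate class and the sceneView wiring are hypothetical; the original code presumably does this in its existing ARSCNViewDelegate.

import ARKit
import SceneKit

final class ImageFreezeDelegate: NSObject, ARSCNViewDelegate {
    // The ARSCNView driving the session (hypothetical wiring; set this from your view controller).
    weak var sceneView: ARSCNView?

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              !imageAnchor.isTracked,
              let placedNode = node.childNodes.first,
              let rootNode = sceneView?.scene.rootNode else { return }

        // Capture the current world transform, detach the node from the image
        // anchor's node, and pin it to the root node at the same pose so it is
        // no longer moved by the untracked image anchor.
        let worldTransform = placedNode.worldTransform
        placedNode.removeFromParentNode()
        rootNode.addChildNode(placedNode)
        placedNode.setWorldTransform(worldTransform)
    }
}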