Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Keyboard Tracking
Hi! I'm currently experimenting on Apple Vision Pro with hand and head anchors. Is there a way to get an anchor linked to the Apple Magic Keyboard (since detection is already done to display inputs above it)? Thanks in advance, have a good day!
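For context, a minimal sketch of the hand and head (device) anchor setup described above; it intentionally shows no keyboard-specific anchor, since that is exactly what the post is asking for:

    import ARKit
    import QuartzCore

    func startHeadAndHandTracking() async throws {
        let session = ARKitSession()
        let worldTracking = WorldTrackingProvider()
        let handTracking = HandTrackingProvider()
        try await session.run([worldTracking, handTracking])

        // Head (device) pose at roughly the current time.
        if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
            print("Head transform: \(device.originFromAnchorTransform)")
        }

        // Hand anchors stream in as they are tracked.
        for await update in handTracking.anchorUpdates {
            print("\(update.anchor.chirality) hand updated")
        }
    }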
Replies: 1 · Boosts: 0 · Views: 649 · Activity: Jul ’25
visionOS plane anchor rotation and wall direction are inconsistent
I have a problem with wall plane detection using visionOS/ARKit: I am using ARKitSession's PlaneDetectionProvider with wall detection in a visionOS space. I recorded the position and rotation of the first detected plane, but found that the rotation value depends on the direction the user is facing when they start the space, so it deviates between runs. That is, even if the plane lies on the same wall, the rotation quaternion will differ. I would like to obtain the true orientation of the wall no matter which direction the user is facing when the scan starts, so that virtual content can be accurately aligned with the wall. I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation. In addition, I would like to know whether the user's initial orientation also affects the position information. If so, please provide a solution. Thank you!
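One way to sidestep the session-dependent rotation is to compare walls by their normal direction in world space rather than by raw quaternions. A minimal sketch, under the assumption (as with plane anchors on iOS) that a PlaneAnchor's local +Y axis is the plane normal:

    import ARKit
    import simd

    // World-space normal of a detected plane (direction only, no translation).
    func worldNormal(of anchor: PlaneAnchor) -> SIMD3<Float> {
        let n = anchor.originFromAnchorTransform * SIMD4<Float>(0, 1, 0, 0)
        return simd_normalize(SIMD3<Float>(n.x, n.y, n.z))
    }

    // Two anchors on the same wall should report nearly parallel normals,
    // even though their rotation quaternions differ between sessions.
    func sameWall(_ a: PlaneAnchor, _ b: PlaneAnchor, tolerance: Float = 0.98) -> Bool {
        simd_dot(worldNormal(of: a), worldNormal(of: b)) > tolerance
    }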
Replies: 1 · Boosts: 0 · Views: 521 · Activity: Sep ’25
Current Apple Forum about ARKit and visionOS
Recently, questions about ARKit/visionOS in the Apple forum seem to be handled by internal Apple engineers, and untested, makeshift features are being offered, putting average but experienced developers in a difficult position: they are unable to react to the posts or get anything useful from them. Apple needs to review the situation.
Replies: 1 · Boosts: 0 · Views: 359 · Activity: Sep ’25
How to best manage ARKitSession in concurrent code
I have a visionOS app where I instantiate an ARKitSession and various providers (HandTrackingProvider and WorldTrackingProvider) in my appModel. That way, I can pass these providers to a Task that runs a gRPC server for sending their data to a client. When the user enters the app's immersive space, the ARKitSession runs the providers if they are not running already.

I am now trying to implement AccessoryTrackingProvider with the PS VR2 Sense controllers, but it does not fit my current framework because the controllers may not be connected when ARKitSession.run is called, so I need to find a new place to start the session. My question is: if I already have a session running the hand and world tracking providers, can I start another session to run the accessory tracking? Should they all run on the same session? Is there a way to stop the session and restart it when the controllers are connected? When I tried this, I got an error that says "It is not possible to re-run a stopped data provider (<ar_hand_tracking_provider_t: ", but if I instantiate a new HandTrackingProvider, then the one that was passed to the gRPC task would no longer be the one running in the new session. Any advice on how best to manage the various providers and ARKit sessions would be greatly appreciated.
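One pattern worth trying (sketched here as an assumption, not a confirmed answer): leave the long-lived session with the hand and world providers untouched, and run the accessory provider on a second ARKitSession once the controllers connect, so the provider instances already handed to the gRPC task keep running. Constructing the AccessoryTrackingProvider itself is out of scope here:

    import ARKit

    final class TrackingModel {
        let mainSession = ARKitSession()
        let handTracking = HandTrackingProvider()
        let worldTracking = WorldTrackingProvider()

        // Created lazily once the controllers are connected.
        private var accessorySession: ARKitSession?

        func startMainTracking() async throws {
            try await mainSession.run([handTracking, worldTracking])
        }

        func startAccessoryTracking(_ provider: AccessoryTrackingProvider) async throws {
            // A data provider instance can only be run once, so rather than
            // stopping and re-running the main session, give the accessory
            // provider its own session.
            let session = ARKitSession()
            try await session.run([provider])
            accessorySession = session
        }
    }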
Replies: 1 · Boosts: 0 · Views: 236 · Activity: Nov ’25
Best approach for high-quality textured room reconstruction using ARKit / RoomPlan / Object Capture?
I am developing an iOS app that allows users to scan rooms, view the scans on device, and add notes. I need to preserve the actual geometry (odd angles, chamfers, fixtures), not RoomPlan's simplified boxes. Are there any straightforward ways to incorporate high-quality texture mapping or PBR? Where is the documentation for scene reconstruction?
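For the geometry side, a minimal sketch of iOS ARKit scene reconstruction (LiDAR devices only), which exposes the raw mesh as ARMeshAnchor instances rather than RoomPlan's simplified boxes; texturing/PBR is not covered here:

    import Foundation
    import ARKit

    func makeMeshConfiguration() -> ARWorldTrackingConfiguration {
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
            configuration.sceneReconstruction = .meshWithClassification
        }
        configuration.environmentTexturing = .automatic
        return configuration
    }

    final class MeshObserver: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            for case let meshAnchor as ARMeshAnchor in anchors {
                // geometry.vertices / .faces / .classification carry the
                // reconstructed surface, which can be exported or textured.
                print("Added mesh anchor with \(meshAnchor.geometry.faces.count) faces")
            }
        }
    }

The relevant reference material lives under ARWorldTrackingConfiguration.sceneReconstruction and ARMeshAnchor.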
Replies: 1 · Boosts: 0 · Views: 861 · Activity: Nov ’25
visionOS 3D interactions like the native keyboard when no longer observed in passthrough
While using Apple Vision Pro, we noticed that we can continue to use the visionOS keyboard even when we no longer see it in passthrough. When we focus on a field to type, visionOS displays the keyboard where we can see it. If we then look away a little, up, down, left, or right, so that the keyboard is no longer visible to us in passthrough, it still remains responsive to taps from our fingers at its location. The keyboard stays functional even though we can no longer observe it.

We are trying to implement similar functionality in our app, whereby the user can continue to manipulate a 3D entity when they can no longer actually observe it in passthrough (as the visionOS keyboard appears to allow). We assume the keyboard gets this behavior from the headset's downward-facing sensors, which keep tracking the hands even when the user cannot see them; that is likely how we can rest our hands on our lap and still interact with visionOS. How can we implement something similar for 3D entities? Is there a way to tap into hand tracking from those downward-facing cameras? Is it possible to manipulate a 3D entity when the user no longer observes it, for example when they shift their attention elsewhere in their field of view? How does the visionOS keyboard achieve this?
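A minimal sketch of the general approach: HandTrackingProvider keeps delivering hand anchors regardless of where the user is looking, so an app can test its own entity against a fingertip position even when the entity is outside the user's view. The entity reference and the 2 cm threshold are placeholders:

    import ARKit
    import RealityKit
    import simd

    func monitorFingertipTaps(handTracking: HandTrackingProvider, target entity: Entity) async {
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked,
                  let tip = anchor.handSkeleton?.joint(.indexFingerTip),
                  tip.isTracked else { continue }

            // Fingertip position in world space.
            let world = anchor.originFromAnchorTransform * tip.anchorFromJointTransform
            let fingertip = SIMD3<Float>(world.columns.3.x, world.columns.3.y, world.columns.3.z)

            // Treat proximity to the entity as a "tap", even if the entity is
            // not currently in the user's field of view.
            if distance(fingertip, entity.position(relativeTo: nil)) < 0.02 {
                print("Fingertip touched the entity")
            }
        }
    }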
Replies: 1 · Boosts: 0 · Views: 315 · Activity: Nov ’25
ARKit / visionOS - hand tracking with 3D objects attached to the hand
I use ARKit's hand tracking to attach a 3D model of a remote control to the left hand; the user is supposed to press buttons on the remote control. In the Vision Pro settings, I have removed the left hand from Hands & Eye Tracking, so only the right hand is used. The problem is that when the left hand appears, the 3D model of the remote control fades out. I want the remote control to remain completely visible, so the user feels like they really have it in their hand. Can I prevent the fading?
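One setting worth checking, sketched here: upperLimbVisibility controls whether the system renders the user's hands and arms over virtual content in an immersive space, and hiding it is one way to keep a hand-attached model fully visible (whether that suits the app's interaction design is a separate question). RemoteControlView is a placeholder:

    import SwiftUI
    import RealityKit

    struct RemoteControlView: View {
        var body: some View {
            RealityView { content in
                // Attach the remote-control entity to the left hand here.
            }
        }
    }

    struct RemoteControlSpace: Scene {
        var body: some Scene {
            ImmersiveSpace(id: "remoteControl") {
                RemoteControlView()
            }
            .upperLimbVisibility(.hidden)
        }
    }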
Replies: 1 · Boosts: 0 · Views: 227 · Activity: Nov ’25
Object tracking capability not available
Hi there, I received an enterprise license file that enables the enhanced object-tracking configuration for Vision Pro. My account is part of the team that was granted this capability by Apple. Unfortunately, although I followed the guide, I cannot find the Object Tracking capability when I try to add it to my project. Other capabilities, such as Main Camera on the Vision Pro, are listed, but Object Tracking is not. I am using Xcode 26.1 and visionOS 26.1. What am I missing here? Thanks in advance, Matthias
Replies: 1 · Boosts: 0 · Views: 290 · Activity: Dec ’25
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. If ARSession used the ultra-wide camera at the same time, it could provide that stream as well; otherwise, an AVCaptureSession using the ultra-wide camera should be allowed to run alongside it. It would be very useful to be able to access different cameras while an ARSession is running. We'd like to cooperate with you if possible. Steps to reproduce: run an AVCaptureSession, then run an ARSession; the AVCaptureSession stops.
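A minimal sketch of the reproduction steps described above (assumes a device with a back ultra-wide camera and camera permission already granted):

    import ARKit
    import AVFoundation

    func reproduceCameraConflict() {
        let captureSession = AVCaptureSession()
        if let ultraWide = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back),
           let input = try? AVCaptureDeviceInput(device: ultraWide),
           captureSession.canAddInput(input) {
            captureSession.addInput(input)
            // In production code, call startRunning() off the main thread.
            captureSession.startRunning()
        }

        // Running an ARSession afterwards interrupts the capture session.
        let arSession = ARSession()
        arSession.run(ARWorldTrackingConfiguration())
    }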
Replies: 1 · Boosts: 0 · Views: 458 · Activity: 2w
How to cast shadow on OcclusionMaterial in visionOS
I have a ModelEntity with a GroundingShadowComponent applied throughout its hierarchy:

    entity.enumerateHierarchy { child, stop in
        child.components.set(GroundingShadowComponent(castsShadow: true))
    }

When I set it on the table, I can see the shadow on the table, even if I disable plane detection. However, when I enable plane detection and the detected plane's material is OcclusionMaterial, I cannot see the shadow on the table. As far as I know, receivesDynamicLighting is not usable in visionOS. So how can I cast a shadow on OcclusionMaterial in visionOS? Or rather, is it possible to have the shadow properly displayed on the tabletop while ensuring that objects beneath the table cannot be seen through it?
Replies: 1 · Boosts: 0 · Views: 455 · Activity: 1w
visionOS Main Camera Enterprise API: Development license into distribution for Business Store
Hello, we've been working for months now on an app for the Vision Pro (it's been great, btw!). We already have an app in the App Store for iOS and have been migrating our platform from the Microsoft HoloLens 2 to the AVP: https://apps.microsoft.com/detail/9NPPP031VHD1 We require Main Camera access and have already obtained the Enterprise license for development purposes. Unfortunately, we cannot publish our business app (which uses an Enterprise API) under the same name/bundle ID as our iOS app because it would conflict with our current distribution method. We arrived at the conclusion that we need a new Enterprise license under a different bundle ID to create a new app for the Business Store. Has anyone been in the same boat and tried to publish to the Business Store while already having an app in the public App Store under the same name? We applied for another license for distribution under another name (with "Pro" at the end), but it's been stuck in limbo for over a month now (probably because the new bundle ID doesn't have any track record). Anyhow, thanks for any help; we're open to suggestions as to how to proceed!
Replies: 0 · Boosts: 0 · Views: 474 · Activity: Feb ’25
Adding reference images fails on Vision Pro
I am using ARKit to detect images in visionOS on Vision Pro. However, I have run into a problem when adding the reference images: some of my images sometimes fail to be added correctly. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) At other times, they are all added without any problem. I do not know why this happens, and I want them all to be added reliably.
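For context, a minimal sketch of loading reference images for ImageTrackingProvider from an asset catalog AR Resource Group (the group name "Markers" is an assumption); comparing the loaded count against the expected count can reveal which images ARKit rejected, for example for low texture detail or a missing physical size:

    import ARKit

    func makeImageTracking() -> ImageTrackingProvider? {
        let images = ReferenceImage.loadReferenceImages(inGroupNamed: "Markers")
        print("Loaded \(images.count) reference images")
        guard !images.isEmpty else { return nil }
        return ImageTrackingProvider(referenceImages: images)
    }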
Replies: 0 · Boosts: 0 · Views: 249 · Activity: Mar ’25
A question about running two ARViews together
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the first scan with captureSession.stop(pauseARSession: false), so I believe the ARSession keeps running at that point. Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan does not detect). But at this point, the second ARView (RoomPlan itself contains an ARView, I think) always shows a black screen and does not work normally. This is the problem I want to resolve. Please help me get the second ARView working properly.
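One direction to investigate, sketched here on the assumption that only one session can own the camera at a time: keep working with the ARSession that RoomPlan kept alive via stop(pauseARSession: false) (exposed as captureSession.arSession) instead of letting a second view start its own, competing session:

    import ARKit
    import RoomPlan

    func continueOnRoomPlanSession(_ captureSession: RoomCaptureSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        // Reuse the session that is still running rather than creating a new
        // one for the second view.
        captureSession.arSession.run(configuration)
    }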
Replies: 0 · Boosts: 0 · Views: 128 · Activity: Mar ’25
How to obtain video streams from the digital space on Vision Pro after applying for the Enterprise API?
After implementing the method for obtaining video streams discussed at WWDC, I found that the obtained video stream does not include the digital models in the digital space or related content such as the app's UI. I would like to ask how to obtain a video stream or frame that contains only the physical world.

    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    let cameraFrameProvider = CameraFrameProvider()
    var arKitSession = ARKitSession()
    var pixelBuffer: CVPixelBuffer?
    var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined
    let worldTracking = WorldTrackingProvider()

    func requestWorldSensingCameraAccess() async {
        let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
        cameraAccessStatus = authorizationResult[.cameraAccess]!
    }

    func queryAuthorizationCameraAccess() async {
        let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess])
        cameraAccessStatus = authorizationResult[.cameraAccess]!
    }

    func monitorSessionEvents() async {
        for await event in arKitSession.events {
            switch event {
            case .dataProviderStateChanged(_, let newState, let error):
                switch newState {
                case .initialized, .running, .paused:
                    break
                case .stopped:
                    if let error {
                        print("An error occurred: \(error)")
                    }
                @unknown default:
                    break
                }
            case .authorizationChanged(let type, let status):
                print("Authorization type \(type) changed to \(status)")
            default:
                print("An unknown event occurred \(event)")
            }
        }
    }

    @MainActor
    func processWorldAnchorUpdates() async {
        for await anchorUpdate in worldTracking.anchorUpdates {
            switch anchorUpdate.event {
            case .added:
                // Check whether a persisted object is attached to this added anchor --
                // it may be a world anchor from a previous run of this app.
                // ARKit surfaces all world anchors associated with this app
                // when the world tracking provider starts.
                fallthrough
            case .updated:
                // Keep the placed object's position in sync with its world anchor,
                // and hide the object if the anchor is not tracked.
                break
            case .removed:
                // Remove the placed object if its world anchor was removed.
                break
            }
        }
    }

    func arkitRun() async {
        do {
            try await arKitSession.run([cameraFrameProvider, worldTracking])
        } catch {
            return
        }
    }

    @MainActor
    func processDeviceAnchorUpdates() async {
        await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90)
    }

    @MainActor
    func cameraFrameUpdatesBuffer() async {
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]),
              let cameraFrameUpdates1 = cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else {
            return
        }
        for await cameraFrame in cameraFrameUpdates {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            self.pixelBuffer = mainCameraSample.pixelBuffer
        }
        for await cameraFrame in cameraFrameUpdates1 {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            if self.pixelBuffer != nil {
                self.pixelBuffer = mergeTwoFrames(frame1: self.pixelBuffer!,
                                                  frame2: mainCameraSample.pixelBuffer,
                                                  outputSize: CGSize(width: 1920, height: 1080))
            }
        }
    }
Replies: 0 · Boosts: 0 · Views: 145 · Activity: Apr ’25
RoomPlan - The delegate of ARSession is retaining x ARFrames
Hi, I'm encountering an issue in our app that uses RoomPlan and ARSession for scanning. After prolonged use, especially under heavy load from both the scanning process and other, unrelated app operations, the iPhone becomes very hot and the following warning appears more and more frequently:

"ARSession <0x107559680>: The delegate of ARSession is retaining 11 ARFrames. The camera will stop delivering camera images if the delegate keeps holding on to too many ARFrames. This could be a threading or memory management issue in the delegate and should be fixed."

I was able to reproduce this behavior using Apple's RoomPlanExampleApp, with only one change: I introduced a CPU-intensive workload at the end of the startSession() function:

    DispatchQueue.global().asyncAfter(deadline: .now() + 5) {
        for _ in 0..<4 {
            var value = 10_000
            DispatchQueue.global().async {
                while true {
                    value *= 10_000
                    value /= 10_000
                    value ^= 10_000
                    value = 10_000
                }
            }
        }
    }

I suspect this is a problem in the RoomPlan API, which is why I filed feedback: 17441091
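For cases where the app's own ARSession delegate is in the loop, the usual mitigation looks like the sketch below: copy only lightweight values out of each ARFrame and never retain the frame past the callback (whether this helps when RoomPlan's internal delegate is the one retaining frames is a separate question):

    import Foundation
    import ARKit
    import simd

    final class FrameConsumer: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // Extract lightweight values synchronously...
            let timestamp = frame.timestamp
            let cameraTransform = frame.camera.transform
            // ...and hand only those (not the ARFrame itself) to background work,
            // so ARKit can recycle its frame buffers.
            DispatchQueue.global(qos: .utility).async {
                process(timestamp: timestamp, cameraTransform: cameraTransform)
            }
        }
    }

    func process(timestamp: TimeInterval, cameraTransform: simd_float4x4) {
        // Placeholder for the app's own per-frame processing.
    }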
Replies: 0 · Boosts: 0 · Views: 254 · Activity: May ’25