Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post · Replies · Boosts · Views · Activity

Possible to detect multiple images at the same time on VisionPro?
I'm working on a project that uses ImageTrackingProvider through ARKit on Vision Pro, and I want to detect multiple images (about 5) and show info for each of them at the same time. However, it seems that only one image can be detected by the device at a time, and the maximumNumberOfTrackedImages API that would control this appears to be available only on iOS, not visionOS. Does anyone know a way to detect multiple images at the same time on Vision Pro?
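For reference, here is a minimal visionOS sketch (not from the post) of handing several reference images to a single ImageTrackingProvider and observing its anchor updates; "AR Resources" is a hypothetical asset-catalog group name, and whether the device actually tracks all of the images simultaneously is exactly the open question above.

import ARKit

func trackMultipleImages() async throws {
    // Load every reference image in the (hypothetical) "AR Resources" group.
    let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
    let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)

    let session = ARKitSession()
    try await session.run([imageTracking])

    // Each update carries one ImageAnchor; multiple anchors may arrive if
    // more than one image is detected.
    for await update in imageTracking.anchorUpdates {
        let anchor = update.anchor
        print("Image \(anchor.referenceImage.name ?? "unnamed"), tracked: \(anchor.isTracked)")
    }
}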
Replies: 0 · Boosts: 0 · Views: 470 · Oct ’25
RoomPlan CaptureError.exceedSceneSizeLimit on iOS devices
When scanning multiple rooms (10+) in a single structure using ARWorldMap for coordinate-space consistency, RoomCaptureSession throws CaptureError.exceedSceneSizeLimit. I am following the instructions here (https://developer.apple.com/documentation/roomplan/scanning-the-rooms-of-a-single-structure) exactly: I keep the underlying ARSession alive (by calling captureSession.stop(pauseARSession: false)) and save the results before the user moves to the next room. Scanning 11 or so rooms causes the user to hit the exceedSceneSizeLimit error. The ARWorldMap is about 58 MB and is always around this size when the issue occurs. No anchors are present, and all the data appears to come from tracking data. On iPad devices (where I do not see this issue) the ARWorldMap grows in size at a significantly slower rate. I save the ARWorldMap after each room is scanned and confirmed by the user. If I use that map to initialize the ARSession (as described in the docs), the session errors with exceedSceneSizeLimit immediately once captureSession.run() is executed. Occasionally it will allow me/the user to scan again, but it either breaks mid-scan or hits the same error. This has been working fine for the past two years, and users have been able to scan dozens of rooms without issue; it seems to have become a problem only recently. I would expect the ARWorldMap to be allowed to grow much larger. At this point I can scan more area of my house in a single scan than I can across separate capture sessions.

A few observations:

This happens on my iPhone 15 Pro Max and my iPhone 17 Pro, but not my iPad M4 (maybe memory related?). It is possible that scanning many more rooms would trigger it on the iPad too.
I have tried resetting the ARConfiguration on the underlying ARSession, but this does not help.
I have tried creating a new ARWorldMap and moving its origin to match the older map to clear out tracking data. This almost works, but causes a mess of issues as soon as the user moves, because the coordinate space is no longer shared.

I believe there are three active issues regarding this: FB14454922, FB15035788, FB20642944. Could we get an update on this issue? It is a production issue and severely limits the user experience in my production application.
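For context, a sketch (under the same assumptions as the linked article) of the per-room hand-off being described: stop the capture without pausing the underlying ARSession, then snapshot the accumulated ARWorldMap. saveWorldMap(_:) is a hypothetical persistence helper, and the growth of this map is what appears to trip exceedSceneSizeLimit.

import ARKit
import RoomPlan

func finishRoom(using captureSession: RoomCaptureSession) {
    // Stop this room's capture but keep the ARSession, and therefore the
    // shared coordinate space, alive for the next room.
    captureSession.stop(pauseARSession: false)

    // Snapshot the accumulated world map after the user confirms the room.
    captureSession.arSession.getCurrentWorldMap { worldMap, error in
        guard let worldMap else {
            print("Could not get world map:", error?.localizedDescription ?? "unknown error")
            return
        }
        saveWorldMap(worldMap)
    }
}

// Hypothetical helper: persist the map so a later session can be restarted
// via ARWorldTrackingConfiguration.initialWorldMap.
func saveWorldMap(_ map: ARWorldMap) {
    // e.g. NSKeyedArchiver.archivedData(withRootObject:requiringSecureCoding:)
}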
Replies: 0 · Boosts: 0 · Views: 133 · Oct ’25
ARFrame.sceneDepth not correctly registered with ARFrame.capturedImage for iPad Pro (6th Gen) for high resolution capture.
Hi team, I believe I’ve found a registration issue between ARFrame.sceneDepth and ARFrame.capturedImage when using high-resolution frame capture on a 2022 iPad Pro (6th gen). When enabling high-resolution capture:

if let highResFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    config.videoFormat = highResFormat
}
…
arView.session.captureHighResolutionFrame { ... }

the depth map provided by ARFrame.sceneDepth no longer aligns correctly with the corresponding high-resolution capturedImage. This misalignment results in consistently over-estimated distance measurements in my app (which relies on mapping depth to 2D pixel coordinates).

iPad Pro (6th gen): misalignment occurs only when capturing high-resolution frames.
iPhone 16 Pro: depth is correctly registered for both standard and high-resolution captures.

It appears the camera intrinsics, specifically the FOV, change between the “regular” resolution stream and the high-resolution capture on the iPad. My suspicion is that the depth data continues using the intrinsics of the lower-resolution stream, resulting in an unregistered depth-to-RGB mapping. Once I have the iPad in hand again, I will confirm whether camera.intrinsics or the FOV differ between the low-res and high-res frames. Is this a known issue with high-resolution frame capture on the 2022 iPad Pro? If not, I’m happy to provide more thorough sample code. Thanks for your time!
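A small diagnostic sketch along the lines of what the post proposes: compare the intrinsics (and image resolution) of the streaming frame with those of a high-resolution capture. If they differ, depth from sceneDepth, which is produced at the streaming resolution, would need to be re-projected with the high-resolution intrinsics before mapping depth to 2D pixel coordinates.

import ARKit

func compareIntrinsics(in session: ARSession) {
    // Intrinsics of the regular streaming frame.
    guard let streamingCamera = session.currentFrame?.camera else { return }

    session.captureHighResolutionFrame { highResFrame, error in
        guard let highResFrame else {
            print("High-resolution capture failed:", error?.localizedDescription ?? "unknown error")
            return
        }
        print("Streaming resolution:", streamingCamera.imageResolution)
        print("Streaming intrinsics:\n\(streamingCamera.intrinsics)")
        print("High-res resolution:", highResFrame.camera.imageResolution)
        print("High-res intrinsics:\n\(highResFrame.camera.intrinsics)")
    }
}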
Replies: 0 · Boosts: 0 · Views: 212 · Nov ’25
ARView environment.lighting IBL from HDR file
I have an iOS app that can display a USDZ model downloaded from the Internet (and cached locally) via an ARView. I would like to light that model with an image-based light (IBL) also downloaded from the Internet. However, as far as I can tell, ARView can only create an IBL from a resource that has been compiled into the Xcode project and loaded with EnvironmentResource(named:in:) or EnvironmentResource.load(named:in:). Is there a way to create an EnvironmentResource from an HDRI via a file URL to use in an ARView on iOS?
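For contrast, a sketch of the bundled-resource path the post already describes: load an EnvironmentResource compiled into the app and apply it as image-based lighting on the ARView. "StudioHDRI" is a hypothetical resource name; whether an equivalent initializer exists for an HDR file downloaded to a local URL is the open question.

import RealityKit

func applyBundledIBL(to arView: ARView) throws {
    // Works only for resources compiled into the app bundle.
    let ibl = try EnvironmentResource.load(named: "StudioHDRI")
    arView.environment.lighting.resource = ibl
}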
Replies: 0 · Boosts: 0 · Views: 680 · Nov ’25
Access a depth map during a RoomCaptureSession
I'm capturing a room via the RoomPlan API and would like to access the depth map (sceneDepth) or smoothed depth map (smoothedSceneDepth) from my own ARSession that I provide to RoomCaptureSession. But both depth maps are empty when handling the delegates, and I have not found a solution yet. So is this even possible? I have not found any documentation of what RoomCaptureSession overwrites in the ARSession if I provide my own ARSession instance. Here is an example code snippet of what I'm trying to do:

private let arSession = ARSession()
private lazy var roomPlanCaptureSession = RoomCaptureSession(arSession: arSession)

let arConfig = ARWorldTrackingConfiguration()

// Create semantics for the ARConfiguration used by the ARSession
var semantics: ARWorldTrackingConfiguration.FrameSemantics = []
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    semantics.insert(.sceneDepth)
}
if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
    semantics.insert(.smoothedSceneDepth)
}
arConfig.frameSemantics = semantics

// Set delegates
roomPlanCaptureSession.delegate = self
arSession.delegate = self

// Check whether the device supports depth maps
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    arSession.run(arConfig)
} else {
    print(".sceneDepth is unsupported.")
}

// Run the room-capture scan configuration
let captureConfig = RoomCaptureSession.Configuration()
roomPlanCaptureSession.run(configuration: captureConfig)

// Trying to read sceneDepth
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    print("session delegate capture: sceneDepth: \(String(describing: frame.sceneDepth))")
    // Prints: session delegate capture: sceneDepth: nil
}

Also, the 2023 session "Explore enhancements to RoomPlan" says a custom ARSession can be passed to RoomPlan. Quote (around 3:00): "Here is the init and stop function in previous RoomPlan. And here is how you pass over a custom ARSession to init function. Any custom ARSession with ARWorldTrackingConfiguration will be honored inside RoomCaptureSession." Anyway, I welcome any input; maybe I'm doing something wrong. :)
Replies: 0 · Boosts: 0 · Views: 387 · Dec ’25
When running Unity applications in the shared space of visionOS, the Mask/RectMask2D of ScrollView does not function properly
Problem description: I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView component, I found that Mask / RectMask2D does not function in the Shared Space: scrolling content is not masked or cropped; it extends beyond the view boundary and is displayed directly. The same UI works correctly on platforms such as the Unity Editor, iOS, and macOS; the issue occurs only in the shared space on Vision Pro.

Reproduction steps:
Create a ScrollView in Unity.
Add a Mask or RectMask2D to the viewport.
Deploy the application to Apple Vision Pro and run it in Shared Space mode.
Scrolling content is not clipped by the mask; the masked area is entirely ineffective.

Expected behavior: The content of the ScrollView should be properly clipped by the Mask / RectMask2D and should not render outside the mask boundary.

Actual result: In the shared space on Vision Pro, the mask is ineffective, so scrolling content extends beyond the designated area, resulting in severe UI distortion.

Environment:
Device: Apple Vision Pro
Mode: Shared Space
Unity version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial version: 2.0.4

Impact: This issue causes Unity UI to fail to display correctly on Vision Pro, preventing ScrollView from properly clipping content, which affects the UI experience and interaction in real applications.

Expected result: When running a Unity app in the shared space of visionOS, the Mask / RectMask2D of a ScrollView clips content correctly.
Replies: 0 · Boosts: 0 · Views: 149 · Dec ’25