Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post · Replies · Boosts · Views · Created

When running Unity applications in the shared space of visionOS, the Mask/RectMask2D of ScrollView does not function properly
Problem Description: I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView component, I found that the Mask / RectMask2D does not function in the Shared Space: scrolling content is not masked or cropped; it extends beyond the view boundary and is displayed directly. The same UI works correctly on other platforms such as the Unity Editor, iOS, and macOS; the issue only occurs in the shared space on Vision Pro.

Reproduction steps:
1. Create a ScrollView in Unity.
2. Add a Mask or RectMask2D to the viewport.
3. Deploy the application to Apple Vision Pro and run it in Shared Space mode.
4. Scrolled content is not clipped by the mask; the masked area is entirely ineffective.

Expected behavior: The content of the ScrollView should be properly clipped by the Mask / RectMask2D and should not render outside the mask boundary.

Actual results: In the shared space on Vision Pro, the mask is ineffective, so scrolling content extends beyond the designated area, resulting in severe UI distortion.

Environment:
Device: Apple Vision Pro
Mode: Shared Space
Unity version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial version: 2.0.4

Impact: This issue causes Unity UI to display incorrectly on Vision Pro, preventing ScrollView from clipping its content, which degrades the UI experience and interactions in real applications.

Expected result: When running a Unity app in the shared space of visionOS, the Mask / RectMask2D of a ScrollView functions correctly.
Replies: 0 · Boosts: 0 · Views: 149 · Created: Dec ’25
The AccessoryAnchor transform does not match any of the Accessory.LocationName options.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller with RealityKit. However, none of the three Accessory.LocationName options, which are used to define the AnchorEntity target, seem to match the position on the controller reported by ARKit.

The attached picture shows two transforms:
RealityKit – using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit – using originFromAnchorTransform from AccessoryTrackingProvider.

They are not aligned at the same point. As for the other Accessory.LocationName options, .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation. Why is there no Accessory.LocationName option that actually matches the transform captured by ARKit?
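For reference, a minimal sketch of the ARKit side of this comparison, assuming AccessoryTrackingProvider follows the same ARKitSession/anchorUpdates pattern as the other visionOS tracking providers; the trackAccessory function name and the way the Accessory instance is obtained are illustrative, not taken from the post:

import ARKit

// Sketch: run accessory tracking and read the transform the post refers to.
// Assumes `accessory` was obtained elsewhere (e.g. from a connected controller).
func trackAccessory(_ accessory: Accessory) async throws {
    let session = ARKitSession()
    let provider = AccessoryTrackingProvider(accessories: [accessory])
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // originFromAnchorTransform is the ARKit transform that the post
        // compares against the RealityKit AnchorEntity at .gripSurface.
        print("ARKit accessory transform:\n\(update.anchor.originFromAnchorTransform)")
    }
}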
Replies: 3 · Boosts: 0 · Views: 1.2k · Created: Dec ’25
Technical Inquiry regarding iPhone LiDAR Specifications and ARKit Data Integrity
Hardware Specifications
Regarding the LiDAR scanner in the iPhone 13/14/15/16/17 Pro series, could you please provide the following technical details for academic verification:
Point Cloud Density / Resolution: the effective resolution of the depth map.
Sampling Frequency: the sensor's refresh rate.
Accuracy Metrics: official tolerance levels for depth accuracy relative to distance (specifically within the 0.5 m – 2 m range).

Data Acquisition Methodology
For a scientific thesis requiring high data integrity: does Apple recommend a custom ARKit implementation over third-party applications (e.g., Polycam) to access raw depth data? I need to confirm if third-party apps typically apply smoothing or post-processing that would obscure the sensor's native performance, which must be avoided for my error analysis.
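On the raw-depth question, a minimal ARKit sketch like the following (a sketch, not an official recommendation) reads the per-frame LiDAR depth map directly, so any extra smoothing a third-party app might add is bypassed; ARKit itself offers .smoothedSceneDepth as a separate, temporally smoothed option:

import ARKit

final class DepthCaptureDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // .sceneDepth is the per-frame LiDAR depth map; .smoothedSceneDepth
        // would apply temporal smoothing instead.
        guard let depth = frame.sceneDepth else { return }
        let depthMap = depth.depthMap          // CVPixelBuffer, Float32 metres
        let confidence = depth.confidenceMap   // per-pixel ARConfidenceLevel
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        print("Depth resolution: \(width)x\(height), timestamp: \(frame.timestamp)")
        _ = confidence
    }
}

let session = ARSession()
let delegate = DepthCaptureDelegate()
session.delegate = delegate

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    configuration.frameSemantics.insert(.sceneDepth)
}
session.run(configuration)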
Replies: 2 · Boosts: 0 · Views: 372 · Created: 3w
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. If ARSession also used the ultra-wide camera, it could provide a video stream from that camera as well; otherwise, an AVCaptureSession using the ultra-wide camera should be allowed to run alongside it. It would be very useful if we could access different cameras while an ARSession is running. We'd like to cooperate with you on this if possible.

Steps to reproduce: run an AVCaptureSession, then run an ARSession. The AVCaptureSession stops.
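A minimal sketch of those reproduction steps (the device selection and configuration details here are assumed, not taken from the post):

import ARKit
import AVFoundation

// 1. Start an AVCaptureSession on the back ultra-wide camera.
let captureSession = AVCaptureSession()
if let ultraWide = AVCaptureDevice.default(.builtInUltraWideCamera,
                                           for: .video,
                                           position: .back),
   let input = try? AVCaptureDeviceInput(device: ultraWide),
   captureSession.canAddInput(input) {
    captureSession.addInput(input)
}
captureSession.startRunning()

// 2. Start an ARSession (which uses the wide-angle camera).
//    At this point the AVCaptureSession is interrupted and stops
//    delivering frames, as described in the post.
let arSession = ARSession()
arSession.run(ARWorldTrackingConfiguration())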
Replies: 1 · Boosts: 0 · Views: 458 · Created: 2w
LiDAR Projector Pattern iPhone 15 Pro vs. 12 Pro – Research Project Question
Dear Apple Team,

I’m a high school student (vocational upper secondary school) working on my final research project about LiDAR sensors in smartphones, specifically Apple’s iPhone implementation.

My current understanding (for context): I understand Apple’s LiDAR uses dToF with SPAD detectors: a VCSEL laser emits pulses, a DOE splits the beam into a dot pattern, and each spot’s return time is measured separately → point cloud generation.

My specific questions:
1. How many active projection dots does the LiDAR projector have in the iPhone 15 Pro vs. iPhone 12 Pro?
2. Are the dots static, or do they shift/move over time?
3. How many depth measurement points does the system deliver internally (after processing)?
4. What is the ranging accuracy (cm-level precision) of each measurement point?

Experimental background: Using an IR night-vision camera, I counted approximately 111 dots on the 15 Pro vs. 576 dots on the 12 Pro. Do these match the internal specifications? Photos of my measurements are available if helpful.

Contact request: I would be very grateful if you could connect me with an Apple engineer or ARKit specialist who works with LiDAR technology. I would love to ask follow-up questions directly and would be happy to provide my contact details for this purpose. These specifications would be essential for my research paper.

Thank you very much in advance!

Best regards,
Max
Vocational Upper Secondary School Hans-Leipelt-Schule Donauwörth
Research Project: “LiDAR Sensor Technology in Smartphones”
Replies: 6 · Boosts: 0 · Views: 635 · Created: 2w
How to cast shadow on OcclusionMaterial in visionOS
I have a ModelEntity with a GroundingShadowComponent set on its hierarchy:

entity.enumerateHierarchy { child, stop in
    child.components.set(GroundingShadowComponent(castsShadow: true))
}

When I set it on the table, I can see the shadow on the table, even if I disable plane detection. However, when I enable plane detection and the plane's material is OcclusionMaterial, I cannot see the shadow on the table. As far as I know, receivesDynamicLighting is not usable in visionOS. So how can I cast a shadow on OcclusionMaterial in visionOS? Or rather, is it possible to have the shadow properly displayed on the tabletop while ensuring that I cannot see objects beneath the table through it?
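For context, a minimal sketch of the setup described above (the plane and box entities are illustrative stand-ins, not the poster's actual code): a detected table plane represented with OcclusionMaterial, and a model above it that opts in to grounding shadows.

import RealityKit

// Stand-in for the detected table plane: its mesh uses OcclusionMaterial,
// which hides content behind it but, as described in the post, does not
// show the grounding shadow cast onto it.
let tablePlane = ModelEntity(
    mesh: .generatePlane(width: 1.0, depth: 1.0),
    materials: [OcclusionMaterial()]
)

// Object placed above the plane, opted in to grounding shadows.
let box = ModelEntity(
    mesh: .generateBox(size: 0.2),
    materials: [SimpleMaterial(color: .white, isMetallic: false)]
)
box.position = [0, 0.3, 0]
box.components.set(GroundingShadowComponent(castsShadow: true))

tablePlane.addChild(box)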
Replies: 1 · Boosts: 0 · Views: 455 · Created: 2w