Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.


Posts under ARKit subtopic


Spatial Computing, ARPointCloud (rawFeaturePoints)
https://developer.apple.com/documentation/arkit/arpointcloud https://developer.apple.com/documentation/arkit/arframe/rawfeaturepoints The point cloud (the collection of points/features) is mainly intended as a debug visualization of what the underlying tracking algorithm processes and is not designed to support additional algorithms on top of it. Even so, we are utilizing the information contained in the points/features collected by ARKit. Currently, the range of rawFeaturePoints is limited to about 10 meters from the device. We see a great opportunity if that range were unlocked: global localization would become more robust and accurate. ARPointCloud - Apple ARKit - FindSurface (YouTube: SIdQRiLj2jY)
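As context for readers, a minimal sketch of how rawFeaturePoints is typically read per frame; the delegate class name is illustrative, while ARFrame.rawFeaturePoints, ARPointCloud.points, and ARPointCloud.identifiers are the documented APIs the post refers to:

import ARKit
import simd

// Reads the sparse debug feature-point cloud on every frame.
final class FeaturePointReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // rawFeaturePoints is nil until tracking has accumulated features.
        guard let cloud = frame.rawFeaturePoints else { return }
        let cameraPosition = SIMD3<Float>(frame.camera.transform.columns.3.x,
                                          frame.camera.transform.columns.3.y,
                                          frame.camera.transform.columns.3.z)
        for (point, identifier) in zip(cloud.points, cloud.identifiers) {
            // Distance from the device, to observe the ~10 m practical range noted above.
            let distance = simd_length(point - cameraPosition)
            _ = (identifier, distance)
        }
    }
}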
Replies: 7 · Boosts: 0 · Views: 1.3k · Activity: 3w
Technical Inquiry regarding iPhone LiDAR Specifications and ARKit Data Integrity
Hardware Specifications: Regarding the LiDAR scanner in the iPhone 13/14/15/16/17 Pro series, could you please provide the following technical details for academic verification: Point Cloud Density / Resolution (the effective resolution of the depth map); Sampling Frequency (the sensor's refresh rate); Accuracy Metrics (official tolerance levels regarding depth accuracy relative to distance, specifically within the 0.5 m – 2 m range). Data Acquisition Methodology: For a scientific thesis requiring high data integrity, does Apple recommend a custom ARKit implementation over third-party applications (e.g., Polycam) to access raw depth data? I need to confirm if third-party apps typically apply smoothing or post-processing that would obscure the sensor's native performance, which must be avoided for my error analysis.
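On the custom-implementation question, ARKit lets an app read the depth buffer directly and choose between the per-frame depth and the explicitly smoothed variant, which is the usual reason to prefer a custom capture app over a third-party scanner for error analysis. A minimal sketch follows; the class name is illustrative, the ARKit calls (supportsFrameSemantics, frameSemantics, ARFrame.sceneDepth) are real, and note that even sceneDepth is ARKit's processed output rather than raw sensor returns:

import ARKit

final class DepthCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // .sceneDepth delivers the per-frame LiDAR depth map; .smoothedSceneDepth is the
        // temporally filtered variant, so request only .sceneDepth for an error analysis.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        // depthMap is a CVPixelBuffer of Float32 meters; confidenceMap marks low-confidence pixels.
        // Persist both here for offline analysis instead of relying on a third-party app.
        _ = (depth.depthMap, depth.confidenceMap)
    }
}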
Replies: 2 · Boosts: 0 · Views: 369 · Activity: 3w
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. If ARSession used the ultra-wide camera at the same time, it could provide a video stream from that camera as well; otherwise, an AVCaptureSession using the ultra-wide camera should be allowed to run alongside it. It would be very useful if we could access different cameras while an ARSession is running. We'd like to cooperate with you on this if possible. Steps to reproduce: run an AVCaptureSession, then run an ARSession; the AVCaptureSession stops.
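A rough sketch of the reproduction steps described above, assuming a device with an ultra-wide back camera; the class name is illustrative, and the interruption notification is simply where the stopped AVCaptureSession becomes observable:

import ARKit
import AVFoundation

final class UltraWideRepro {
    let captureSession = AVCaptureSession()
    let arSession = ARSession()

    func reproduce() {
        // 1. Start an AVCaptureSession on the ultra-wide back camera.
        guard
            let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back),
            let input = try? AVCaptureDeviceInput(device: device),
            captureSession.canAddInput(input)
        else { return }
        captureSession.addInput(input)

        // Log when the capture session gets interrupted.
        NotificationCenter.default.addObserver(
            forName: .AVCaptureSessionWasInterrupted,
            object: captureSession,
            queue: .main
        ) { note in
            print("AVCaptureSession interrupted:", note.userInfo ?? [:])
        }
        // startRunning() blocks; in production call it off the main thread.
        captureSession.startRunning()

        // 2. Start an ARSession (wide-angle camera); the capture session above stops.
        arSession.run(ARWorldTrackingConfiguration())
    }
}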
Replies: 1 · Boosts: 0 · Views: 455 · Activity: 2w
LiDAR Projector Pattern iPhone 15 Pro vs. 12 Pro – Research Project Question
Dear Apple Team, I’m a high school student (vocational upper secondary school) working on my final research project about LiDAR sensors in smartphones, specifically Apple’s iPhone implementation. My current understanding (for context): I understand Apple’s LiDAR uses dToF with SPAD detectors: A VCSEL laser emits pulses, a DOE splits the beam into a dot pattern, and each spot’s return time is measured separately → point cloud generation. My specific questions: How many active projection dots does the LiDAR projector have in the iPhone 15 Pro vs. iPhone 12 Pro? Are the dots static or do they shift/move over time? How many depth measurement points does the system deliver internally (after processing)? What is the ranging accuracy (cm-level precision) of each measurement point? Experimental background: Using an IR night vision camera, I counted approximately 111 dots on the 15 Pro vs. 576 dots on the 12 Pro. Do these match the internal specifications? Photos of my measurements are available if helpful. Contact request: I would be very grateful if you could connect me with an Apple engineer or ARKit specialist who works with LiDAR technology. I would love to ask follow-up questions directly and would be happy to provide my contact details for this purpose. These specifications would be essential for my research paper. Thank you very much in advance! Best regards, Max! Vocational Upper Secondary School Hans-Leipelt-Schule Donauwörth Research Project: “LiDAR Sensor Technology in Smartphones”
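One of the questions above (how many depth measurement points the system delivers after processing) can be probed empirically on the device itself: the dimensions of ARKit's depth buffer give the processed resolution, though that is the framework's output, not the projector's raw dot count. A small sketch, assuming a session already configured with the .sceneDepth frame semantic as in the thesis post above:

import ARKit

// Prints the processed depth-map resolution, i.e. the number of depth
// measurement points ARKit delivers per frame (not the raw projector dots).
func logDepthResolution(of frame: ARFrame) {
    guard let depth = frame.sceneDepth else { return }
    let width = CVPixelBufferGetWidth(depth.depthMap)
    let height = CVPixelBufferGetHeight(depth.depthMap)
    print("Depth map: \(width) x \(height) = \(width * height) points")
}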
Replies: 6 · Boosts: 0 · Views: 635 · Activity: 1w
How to cast shadow on OcclusionMaterial in visionOS
I have a ModelEntity with GroundingShadowComponent:

entity.enumerateHierarchy { child, stop in
    child.components.set(GroundingShadowComponent(castsShadow: true))
}

When I set it on the table, I can see the shadow on the table, even if I disable plane detection. However, when I enable plane detection and the plane's material is OcclusionMaterial, I cannot see the shadow on the table. As far as I know, receivesDynamicLighting is not usable in visionOS. So how can I cast a shadow on OcclusionMaterial in visionOS? Or rather, is it possible to have the shadow properly displayed on the tabletop while ensuring that objects beneath the table cannot be seen through it?
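For context, a trimmed sketch of the setup being described; the plane and model are stand-ins built from generated meshes, and whether a GroundingShadowComponent shadow can render onto OcclusionMaterial is exactly the open question here:

import RealityKit

// Illustrative reconstruction of the setup in the post: a detected table plane
// covered with OcclusionMaterial, plus a model that should ground a shadow on it.
let tableAnchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0.5, 0.5]))

let occluderPlane = ModelEntity(
    mesh: .generatePlane(width: 1.0, depth: 1.0),
    materials: [OcclusionMaterial()]   // hides whatever is below the tabletop
)
tableAnchor.addChild(occluderPlane)

let model = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])
model.position = [0, 0.05, 0]
model.components.set(GroundingShadowComponent(castsShadow: true))
tableAnchor.addChild(model)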
Replies: 1 · Boosts: 0 · Views: 455 · Activity: 1w
The AccessoryAnchor transform does not match any of the Accessory.LocationName options.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller in RealityKit. However, none of the three options for Accessory.LocationName, which should be used to define the AnchorEntity target, seems to match the position on the controller that is being sent from ARKit. The attached picture shows two transforms: RealityKit, using .gripSurface to define the AnchoringComponent.Target.accessory location; and ARKit, using originFromAnchorTransform from AccessoryTrackingProvider. They are not aligned at the same point. As for the other options of Accessory.LocationName, .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation. I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit.
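For readers comparing the two code paths in question, a rough sketch of each; the ARKit and RealityKit symbols are taken from the post, while the argument labels and the anchorUpdates loop follow the general shape of Apple's spatial-accessory sample code and should be treated as approximate:

import ARKit
import RealityKit

// RealityKit path: anchor an entity at a named location on the accessory.
// (.gripSurface is one of the three Accessory.LocationName options discussed above.)
func makeControllerAnchor(for accessory: Accessory) -> AnchorEntity {
    AnchorEntity(.accessory(from: accessory, location: .gripSurface))
}

// ARKit path: accessory tracking, which reports originFromAnchorTransform.
func trackAccessory(_ accessory: Accessory, in session: ARKitSession) async throws {
    let provider = AccessoryTrackingProvider(accessories: [accessory])
    try await session.run([provider])
    for await update in provider.anchorUpdates {
        // This is the transform the post compares against the RealityKit anchor above.
        let originFromAnchor = update.anchor.originFromAnchorTransform
        _ = originFromAnchor
    }
}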
Replies: 3 · Boosts: 0 · Views: 1.2k · Activity: 6d