I'm facing an issue while using CustomHoverEffect. My view has a long title that gets truncated, and when the user hovers over it, the title should scroll. I have already implemented the scrolling effect, but I am unsure how to trigger the scroll on hover. How should I approach this?
I saw this magical sand table, and its unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I want to see whether there are any ways to achieve this effect, and I would appreciate some ideas. Can this effect be achieved with the existing APIs?
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register in the visionOS simulator.
The solution we found was to:
Launch your app in the simulator
Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
While keeping the Option key pressed, release the pointer and then re-engage it. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-engage it" simply means lifting the three fingers and placing them back on the trackpad.
If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context if you are interested:
Our issue was also occurring in Apple's own gesture sample project, "Transforming RealityKit entities using gestures", linked below.
Apple's article "Interacting with your app in the visionOS simulator" (also linked below) states, for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points."
This simply did not work anymore for rotation and scaling gestures.
These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.0
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
macOS Sequoia 15.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
macOS Sequoia 15.1 Beta, Xcode 16.0 with visionOS 2.0
Completely wiped and reset the entire development machine, and re-installed the latest releases of Sequoia (15.1) and Xcode (16.1)
Throughout this troubleshooting I often:
restarted both Xcode and the simulator
erased all derived data
erased all contents and settings from the simulators
performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can probably deduce, it was very time consuming to find this workaround, and we also wasted some development effort thinking our gesture code was at fault.
Hopefully this will help other devs.
Article Link:
https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link:
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
I was trying to use a local Wi-Fi router to connect the Vision Pro and my Mac for development. From Unity, the host on the Vision Pro wouldn't open; from Xcode I can see the Vision Pro paired, but no device is listed next to the Run button. Any ideas? Thanks. /Ruiying
Hi, I am trying to implement something simple: letting people share their spatial photos with others (just like this post). I encountered the same issue as that poster, but the answer there doesn't help me.
Briefly speaking, I am using CGImageSource to extract the paired leftImage and rightImage from one fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// Enumerating photos ...
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// Other code shown below.
I can fetch the left and right images from a native spatial photo (taken by an Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D-to-3D feature in Photos).
// imageCount is 1 when it comes to generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched the web, and some say the generated version has a depth image instead of a left/right pair, but I still cannot extract any depth image from the image source.
The full code is below; the image pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
        imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))
    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard
        let stereoGroup = groups.first(where: {
            let groupType = $0[kCGImagePropertyGroupType] as! CFString
            return groupType == kCGImagePropertyGroupTypeStereoPair
        })
    else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
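Since the generated photos reportedly carry depth rather than a left/right pair, here is a minimal sketch of how one might probe for that, using ImageIO's auxiliary-data API (kCGImageAuxiliaryDataTypeDisparity and kCGImageAuxiliaryDataTypeDepth); whether the 2D-to-3D photos actually store their depth this way is an assumption on my part:
import AVFoundation
import ImageIO

// Sketch: check whether the image source carries depth or disparity as auxiliary data
// instead of a stereo-pair image group.
func auxiliaryDepthData(from source: CGImageSource) -> AVDepthData? {
    let auxTypes = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
    for auxType in auxTypes {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType) as? [AnyHashable: Any] {
            // Convert the raw dictionary into AVDepthData for further processing.
            return try? AVDepthData(fromDictionaryRepresentation: info)
        }
    }
    return nil
}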
Any suggestions? Thanks.
visionOS 2.4
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or a vertical) scrolling functionality that shows spatial photos and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use the PreviewApplication to show a single preview, but do not see anything for a collection of files as the Spatial Gallery app presents in the scrolling view. Any insights or direction on how this may be done is greatly appreciated.
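For reference, one direction I've been considering is handing several file URLs to Quick Look at once — a minimal sketch, assuming PreviewApplication.open(urls:selectedURL:) presents the items as a browsable set; spatialPhotoURLs is a placeholder array:
import QuickLook

// Sketch: open several spatial photos in the system Quick Look preview at once.
let spatialPhotoURLs: [URL] = []   // placeholder; fill with local spatial photo (.heic) URLs
if let first = spatialPhotoURLs.first {
    _ = PreviewApplication.open(urls: spatialPhotoURLs, selectedURL: first)
}
Even if that works, it still isn't the scrolling gallery of spatial previews that the Spatial Gallery app shows, which is the part I can't find an API for.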
I'm starting my journey developing an immersive app for visionOS. I've been making steady progress, but I've encountered a specific challenge that I haven't been able to resolve.
I created two ModelEntity objects — a sphere and a cube — and added a DragGesture to the cube. When I drag the cube over the sphere, the two collide correctly, and the collision is logged in the console. So far, everything works as expected.
However, when I try to anchor the cube to my hand, the collision stops working. It's as if the cube loses its ability to detect collisions once it's anchored.
Any guidance or clarification on this behavior would be greatly appreciated.
// ImmersiveView.swift
// estudos_vision
//
// Created by Lailan Rogerio Rodrigues Matos on 15/05/25.
//
import SwiftUI
import RealityKit
import RealityKitContent
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @State private var session: SpatialTrackingSession?
    @State private var box = ModelEntity()
    @State private var subs: [EventSubscription] = []
    @State private var ballEntity: Entity?

    var body: some View {
        RealityView { content in
            // Load initial content from the RealityKit scene.
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Create and run a spatial tracking session.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            _ = await session.run(configuration)
            self.session = session

            // Create a red box.
            let boxMesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .red, isMetallic: false)
            box = ModelEntity(mesh: boxMesh, materials: [material])
            box.position.y += 0.15 // Position the box slightly above the origin.

            // Configure the box for user interaction and physics.
            box.components.set(InputTargetComponent(allowedInputTypes: .indirect)) // Make it interactive.
            box.generateCollisionShapes(recursive: false) // Generate collision shapes for physics.
            box.components.set(PhysicsBodyComponent( // Add physics behavior.
                massProperties: .default,
                material: .default,
                mode: .kinematic // Use kinematic mode so it can be moved by user interaction.
            ))
            box.components.set(GroundingShadowComponent(castsShadow: true)) // Add a shadow.
            //content.add(box) // commented out to add to hand anchor

            // Create a left hand anchor and add the box as a child.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
            handAnchor.addChild(box)
            content.add(handAnchor) // Add the hand anchor to the scene.

            // Create a sphere.
            let ball = ModelEntity(mesh: .generateSphere(radius: 0.15))
            ball.position = [0.0, 1.5, -1.0] // Initial position of the ball.
            ball.generateCollisionShapes(recursive: false) // Add collision.
            ball.name = "Sphere"
            content.add(ball)
            ballEntity = ball

            // Subscribe to collision events between the box and other entities.
            let event = content.subscribe(to: CollisionEvents.Began.self, on: box) { ce in
                print("Collision between \(ce.entityA.name) and \(ce.entityB.name) occurred")
                //ce.entityA.removeFromParent() // removes the colliding object
                //ce.entityB.removeFromParent()
            }
            Task {
                subs.append(event)
            }
        }
        // Add a drag gesture to the box, allowing the user to move it.
        .gesture(
            DragGesture()
                .targetedToEntity(box) // Target the drag gesture to the box.
                .onChanged({ value in
                    // Update the position of the box based on the drag gesture.
                    box.position = value.convert(value.location3D, from: .local, to: box.parent!)
                })
        )
    }
}

#Preview(immersionStyle: .full) {
    ImmersiveView()
        .environment(AppModel())
}
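One thing I've been meaning to try is keeping the hand-anchored box in the main physics simulation instead of the isolated one an AnchorEntity gets by default. A minimal sketch, assuming visionOS 2's AnchoringComponent exposes a physicsSimulation setting (handAnchor is the hand anchor created above):
// Sketch (assumption: AnchoringComponent.physicsSimulation is available on visionOS 2+).
var anchoring = AnchoringComponent(.hand(.left, location: .palm), trackingMode: .continuous)
anchoring.physicsSimulation = .none   // don't isolate the anchor's descendants from the scene's simulation
handAnchor.components.set(anchoring)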
I found some snapshot APIs in the developer documentation, such as:
RealityKit / Views and attachments / ARView / snapshot(saveToHDR:completion:)
SceneKit / SCNView / snapshot()
Is there a similar API in visionOS? And if not, how can I implement a snapshot for a RealityView and USDZ content?
I want to implement the functionality shown in this video. How should I set up the window?
Hi All,
We're a studio building an app, and as part of a scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. I notice that when the video alone plays, or the particle effect alone runs, the scene works fine, but the frame rate drops drastically when both are turned on.
How do I solve this? It's an important storytelling feature.
SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description
I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden.
However, the SpatialEventGesture implementation is not working as expected. The menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.
Current Implementation
Here's the relevant gesture detection code in my ImmersiveView:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long press gesture as backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}
What I've Tried
Multiple SpatialTapGesture approaches: Originally tried using multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
SpatialEventGesture implementation: Switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
Added debugging: Console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
Backup LongPressGesture: Added a simultaneous long press gesture as backup, which also doesn't work consistently.
Expected Behavior
When the panorama menu is hidden (after 5-second auto-hide), users should be able to:
Perform a pinch gesture (indirect pinch) to show the menu
Tap in space to show the menu
Use other spatial gestures to show the menu
Questions
Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures? (See the sketch after this list.)
Should I be using a different gesture approach for visionOS immersive spaces?
Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?
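For context on question 2, this is the component setup I've been assuming the sphere (or a dedicated input entity) needs in order to receive spatial events — a sketch only; panoramaSphere and its radius are placeholders for the real panorama entity:
// Sketch: gestures attached to a RealityView are only delivered for entities that have
// both collision shapes and an InputTargetComponent (assumption based on the usual
// RealityKit gesture requirements).
let panoramaSphere = ModelEntity(mesh: .generateSphere(radius: 10))   // placeholder for the real sphere
panoramaSphere.components.set(InputTargetComponent())
panoramaSphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 10)]))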
Environment
Xcode 16.1
visionOS 2.5
Testing on Vision Pro device
App uses SwiftUI + RealityKit
Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!
Additional Context
The app manages multiple windows and the gesture detection should work specifically when in the immersive panorama mode with the menu hidden.
Thank you for any help or suggestions!
I'm developing an app in which I need to render pictures and display some models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
I tried to show a spatial photo in my application with SwiftUI's Image, but it just shows a flat version of it, even on Vision Pro. How can I show a spatial photo to users? Are there any options for this?
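One route I've been considering is the system Quick Look viewer — a minimal sketch, assuming QuickLook's PreviewApplication (visionOS 2) renders spatial photos with their stereo depth; photoURL is a placeholder local file URL:
import QuickLook

// Sketch: hand the spatial photo to the system preview instead of rendering it with Image.
let photoURL = URL(fileURLWithPath: "/path/to/spatialPhoto.heic")   // placeholder
_ = PreviewApplication.open(urls: [photoURL], selectedURL: photoURL)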
Hi, are we allowed to push the default support in Package.swift up to iOS 18 to allow for the latest APIs?
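For reference, this is the kind of platforms declaration I have in mind — a sketch assuming a swift-tools-version of 6.0; the package and target names are hypothetical:
// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "ChallengeEntry",          // hypothetical package name
    platforms: [
        .iOS(.v18),                  // raise the minimum so the latest APIs are available
        .visionOS(.v2)
    ],
    targets: [
        .target(name: "ChallengeEntry")
    ]
)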
And with the terms of the competition, can we use stock 3D USDZ assets?
Thank you!
Hi, I've encountered a thread where an Apple engineer points out that there are two possible ways to read scenePhase, either at the App or at the View level: https://developer.apple.com/forums/thread/757429
This thread also links to documentation which states
If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene:
This doesn't seem to be the case on visionOS 2. I tried the following code, starting from an empty app template:
import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()

        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}
The result was that I didn't get the onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this expected behavior?
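For comparison, here is a per-window sketch of the View-level approach mentioned in the linked thread, which I would expect to fire for each window independently (ContentView is the view from the template):
struct ContentView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Main window")
            .onChange(of: scenePhase) { _, newValue in
                // Fires when this particular window's phase changes.
                print("window scenePhase changed to \(newValue)")
            }
    }
}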
With Xcode 26, loading resources with RealityKit is extremely slow.
Here, my project takes almost 50 seconds to load.
I also get multiple Hang detected messages in the console:
When I uncheck "Debug executable" in the schema, the same project loads in 2 seconds.
I'm using RealityKit asynchronous loading:
private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}
Anyone having the same problem?
The Section struct only makes the center property publicly available, but this is a SIMD3 that doesn't seem to line up with the rest of the model. All other objects have a 4x4 transform matrix that accurately gives their position and rotation.
When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why are these not exposed? The transform in particular seems necessary to make any real use of the Sections.
Currently I want to recreate a window similar to the system window inside an ImmersiveSpace, but RealityKit only uses meters as its unit. I created a plane entity, and I don't know what size in meters to give it so that it exactly matches the size of the system window.
I also want to know the z and y position of the system window in the immersive space.
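The direction I've been exploring is converting a point size to meters with SwiftUI's physicalMetrics environment value — a minimal sketch, assuming PhysicalMetricsConverter accepts a CGSize; the 1280 x 720 point size is a placeholder, not the actual system window size:
import SwiftUI
import RealityKit

struct WindowSizedPlaneView: View {
    @Environment(\.physicalMetrics) private var physicalMetrics

    var body: some View {
        RealityView { content in
            // Convert a window size in points to physical meters (placeholder size).
            let size = physicalMetrics.convert(CGSize(width: 1280, height: 720), to: .meters)
            let plane = ModelEntity(
                mesh: .generatePlane(width: Float(size.width), height: Float(size.height)),
                materials: [SimpleMaterial(color: .white, isMetallic: false)]
            )
            content.add(plane)
        }
    }
}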
This modifier in visionOS 2.5 works perfectly with a LazyVGrid inside a Stack in a ScrollView:
.hoverEffect { effect, isActive, _ in
    effect.scaleEffect(isActive ? 1.1 : 1.0)
}
But the grid does not scroll in visionOS 26 beta 1 unless the scaleEffect is commented out.
FB17941468
I started a new project using RealityKit and RealityView, intended as an AR app on iPhone and iPad, but eventually visionOS as well. I'm challenged because I find that much of the recent documentation, WWDC videos, etc., covers features that are visionOS only.
Right now, I would simply like to create some gesture functionality that is similar to AR Quick Look defaults, meaning drag to reposition, two fingers to rotate or zoom. In the past, this would be implemented with something like:
arView.installGestures([.all], for: entity)
However, with RealityView I don't know how (or whether it's possible) to access an ARView.
In RealityKit, I have found this doc: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
However, many of the features in that posting are visionOS only, and I've found no good documentation on the topic that is specific to, or at least compatible with, iOS.
I know reverting to an ARView is an option, but I want to use RealityView if at all possible as I see it as more forward-looking.
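In the meantime, this is the kind of iOS-compatible setup I've been experimenting with — a minimal sketch, assuming RealityView's targeted gestures (and InputTargetComponent) are available on iOS 18; the box entity is a placeholder for a real USDZ model:
import SwiftUI
import RealityKit

struct ModelView: View {
    var body: some View {
        RealityView { content in
            // Placeholder entity; a real app would load a USDZ model instead.
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            box.components.set(InputTargetComponent())      // make the entity a gesture target
            box.generateCollisionShapes(recursive: false)   // gestures require collision shapes
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Reposition the dragged entity, roughly mirroring AR Quick Look's drag behavior.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}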