I downloaded the file through Scoot, and when I take the Vision Pro off, the app's StreamDelegate method is called with .endEncountered. How can I solve this problem?
Thank you!
Reality Composer Pro
Prototype and produce content for AR experiences using Reality Composer Pro.
Goal: To render, in an Apple Vision Pro app, the solid-mechanics 3D simulation results coming from an FEA code.
Starting point: I have surface VTKs with deformations on each node. Each time step has a mesh with the nodal coordinates. This translates straightforwardly to a USD MeshSequence. Unfortunately, the results cannot be simplified to a scaling or linear transformation as you would do with other game-oriented animations.
Tools: Right now, I am using Xcode and Reality Composer Pro (RCP) to build the scenes.
Technical limitations: I am aware that RCP can do animations with BlendMesh and skeletons, and that MeshSequence is not a problem.
Progress:
Converting the sequence of VTK meshes to a USD MeshSequence is straightforward. This animates correctly in Preview and Blender (see screenshot).
I managed to convert from MeshSequence to multiple keys and BlendMesh. This also animates correctly in Blender and Preview. Unfortunately, the BlendMesh of multiple blended meshes shows a zero animation time in RCP (see screenshot below).
Also, see the usda file outline for the animation below. Of course, I am not showing the full arrays such as faceVertexCounts, faceVertexIndices, and normals.
Question: What is the right setup to create a BlendMesh animation, from a set of meshes or multiple key shapes, that RCP will correctly import and animate?
Blender animation
Time zero RCP "animations"
#usda 1.0
(
    defaultPrim = "BlendMeshRoot"
    doc = "Blender v4.5.3 LTS"
    endTimeCode = 48
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Z"
)

def Xform "BlendMeshRoot" (
    customData = {
        dictionary Blender = {
            bool generated = 1
        }
    }
)
{
    def SkelRoot "Mesh"
    {
        custom string userProperties:blender:object_name = "Mesh"
        float3 xformOp:rotateXYZ = (89.99999, -0, 0)
        float3 xformOp:scale = (0.009999999, 0.01, 0.01)
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]

        def Mesh "Mesh" (
            active = true
            prepend apiSchemas = ["MaterialBindingAPI", "SkelBindingAPI"]
        )
        {
            uniform bool doubleSided = 1
            float3[] extent = [(25.091871, -34.121277, -13.298501), (299.94482, 245.10088, 202.35126)]
            int[] faceVertexCounts = [3, 3, ...
            int[] faceVertexIndices = [0, 10293, ...
            rel material:binding = </BlendMeshRoot/_materials/MeshSequence_Default>
            normal3f[] normals = [(-0.3632836, -0.9102419, -0.19870725), ....
            point3f[] points = [(244.41148, 155.42062, 70.454926),.....
            float3[] primvars:node_displacement = [(93.54703, 110.9341, 48.37992)....
            float3[] primvars:Normals = [(-0.0050530406, -0.9910114, -0.13368203),...
            int[] primvars:skel:jointIndices = [0, 0, 0, 0, 0 ...
            float[] primvars:skel:jointWeights = [1, 1, 1, 1, 1...
            uniform token[] skel:blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
            rel skel:blendShapeTargets = [
                </BlendMeshRoot/Mesh/Mesh/frame_0000>,
                .......
                </BlendMeshRoot/Mesh/Mesh/frame_0005>,
            ]
            prepend rel skel:skeleton = </BlendMeshRoot/Mesh/Skel>
            uniform token subdivisionScheme = "none"
            custom string userProperties:blender:data_name = "Mesh"
            custom float userProperties:originalTime
            float userProperties:originalTime.timeSamples = {
                0: 0,
            }

            def BlendShape "frame_0000"
            {
                uniform vector3f[] offsets = [(0, 0, 0), (0, 0, 0),.....
                uniform int[] pointIndices = [0, 1, 2, .....
            }
            .....
            .....
            #### BlendShape frame to 0005
            .....
        }

        def Skeleton "Skel" (
            prepend apiSchemas = ["SkelBindingAPI"]
        )
        {
            uniform matrix4d[] bindTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            uniform token[] joints = ["joint1"]
            uniform matrix4d[] restTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            prepend rel skel:animationSource = </BlendMeshRoot/Mesh/Skel/Anim>

            def SkelAnimation "Anim"
            {
                uniform token[] blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
                float[] blendShapeWeights.timeSamples = {
                    0: [1, 0, 0, 0, 0, 0],
                    1: [0.9697085, 0.03029152, 0, 0, 0, 0],
                    2: [0.88787615, 0.11212383, 0, 0, 0, 0],
                    .....
                    46: [0, 0, 0, 0, 0.11212379, 0.8878762],
                    47: [0, 0, 0, 0, 0.030291557, 0.96970844],
                    48: [0, 0, 0, 0, 0, 1],
                }
            }
        }
    }

    def Scope "_materials"
    {
        def Material "MeshSequence_Default"
        {
            token outputs:surface.connect = </BlendMeshRoot/_materials/MeshSequence_Default/Principled_BSDF.outputs:surface>
            custom string userProperties:blender:data_name = "MeshSequence_Default"

            def Shader "Principled_BSDF"
            {
                uniform token info:id = "UsdPreviewSurface"
                float inputs:clearcoat = 0
                float inputs:clearcoatRoughness = 0.03
                color3f inputs:diffuseColor = (0.8, 0.4, 0.3)
                float inputs:ior = 1.5
                float inputs:metallic = 0
                float inputs:opacity = 1
                float inputs:roughness = 0.5
                float inputs:specular = 0.2
                token outputs:surface
            }
        }
    }

    def Scope "AnimationClips"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
    }

    def RealityKitComponent "AnimationLibrary"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
        custom token info:id = "RealityKit.AnimationLibrary"
        custom double realitykit:approximateDuration = 2
        custom double[] realitykit:clipDurations = [2]
        custom string[] realitykit:clipNames = ["Anim"]
        custom rel realitykit:clipTargets = </BlendMeshRoot/Mesh/Skel/Anim>
        custom double realitykit:frameRate = 24
        custom bool realitykit:isAnimationLibrary = 1
    }
}
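For reference, a minimal RealityKit sketch of how the imported clip could be played at runtime, assuming the default RealityKitContent package and the BlendMeshRoot prim above (the names may differ in your project):

import RealityKit
import RealityKitContent  // assumed default RCP package name

// Load the exported scene from the RCP bundle and play the first clip
// found on it (expected to be the "Anim" clip from the AnimationLibrary).
let root = try await Entity(named: "BlendMeshRoot", in: realityKitContentBundle)
if let clip = root.availableAnimations.first {
    root.playAnimation(clip.repeat(), transitionDuration: 0, startsPaused: false)
}

If availableAnimations comes back empty here, that would match the zero-length animation RCP shows.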
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Swift Packages
Developer Tools
Reality Converter
Reality Composer
We are building an AR experience for deployment on iPhones. We are using Unity, but it looks as if Reality Composer Pro has better features for spatial audio. I am not sure whether Reality Composer Pro can only be used for Vision Pro, or whether it can also be used for deployment on iPhone or iPad.
It's all about notifications to trigger actions from RCP's new Timeline system. From "Compose interactive 3D content in Reality Composer Pro" I am actually starting to get confused about why there was a need to call Entity.applyTapForBehaviors in code to trigger content in the Behaviors component, simply because in the Behaviors component we chose OnTap to allow a "Tap Notification" to trigger our action (on a selected target object).
Then I guess that by selecting the OnCollision trigger, I should write something like CollisionEvent.entityA.applyCollisionForBehaviors, which we don't have. And of course the collision on my object won't trigger this action (because I only set things up in RCP, not in code).
I'm ignoring, for now, that this post has pointed out we could use the Behaviors component's OnNotification trigger.
I found that I could still use the OnTap trigger but actually call Entity.applyTapForBehaviors from my subscribed collision's begin event. That actually works better than OnCollision.
So what are the design principles here? And how could I trigger a collision notification so that my Behaviors component's OnCollision trigger actually works?
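For context, a minimal sketch of the workaround described above, assuming the collision events are subscribed inside a RealityView's make/update closure and that "tappableEntity" carries the RCP Behaviors component with an OnTap trigger (both names are placeholders):

import RealityKit

// Forward collision begins to the entity's OnTap behavior. Store the returned
// subscription (e.g. in app state) so it can be cancelled later.
let subscription = content.subscribe(to: CollisionEvents.Began.self, on: tappableEntity) { event in
    // Fires the Behaviors component's OnTap trigger from code.
    _ = event.entityA.applyTapForBehaviors()
}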
I want to render a 3D/stereoscopic video in an Apple Vision Pro window using RealityKit/RealityView. The video is left-right stereo. The straightforward approach would be to spawn a quad and give it a custom Shader Graph material, which has a CameraIndexSwitch. The CameraIndexSwitch chooses between the left and right textures.
https://i.sstatic.net/XawqjNcg.png
The issue I have here is that I have to extract the video frames from my AVSampleBufferVideoRenderer. This should work ok, but not if I'm playing FairPlay content.
So, my question is, how to render stereo FairPlay videos in a SwiftUI RealityView?
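For context, a minimal sketch of the non-FairPlay path described above; the material path, parameter names, and texture names are assumptions, and the RCP material is assumed to route "LeftTexture"/"RightTexture" through a CameraIndexSwitch node:

import RealityKit
import RealityKitContent

// Load the stereo material authored in RCP and feed it the two eye textures.
var material = try await ShaderGraphMaterial(named: "/Root/StereoMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
let left = try TextureResource.load(named: "left_frame")    // hypothetical assets;
let right = try TextureResource.load(named: "right_frame")  // real frames would come from the video pipeline
try material.setParameter(name: "LeftTexture", value: .textureResource(left))
try material.setParameter(name: "RightTexture", value: .textureResource(right))

let quad = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9), materials: [material])

The open question is how to do the same when the decoded frames come from FairPlay-protected content, where reading pixel data back is restricted.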
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Metal
MetalKit
RealityKit
AVFoundation
I am submitting an entry to the Swift Student Challenge. I have created a RealityContent folder using Reality Composer Pro. How can I import this folder into the Swift Package Manager (.swiftpm) project hosted in Swift Playgrounds so that it becomes a usable package?
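A minimal sketch of declaring the RCP-generated package as a local dependency in a Package.swift manifest. The names and path are assumptions, and a Swift Playgrounds app project's manifest has extra fields (such as its application product definition) that are not shown here:

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        // Folder created by Reality Composer Pro, placed next to this manifest.
        .package(path: "./RealityKitContent")
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                .product(name: "RealityKitContent", package: "RealityKitContent")
            ]
        )
    ]
)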
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Swift Student Challenge
Swift Playground
RealityKit
Reality Composer Pro
My friend cannot build my visionOS project in the simulator. He gets the following error.
Error:
[xrsimulator] Exception thrown during compile: cannotGetRkassetsContents(path: "/Users/path/to/Packages/RealityKitContent/Sources/RealityKitContent/RealityKitContent.rkassets")
In Xcode, he is able to open the RealityKitContent package in Reality Composer Pro by clicking on the Package.realitycomposerpro file. No warnings related to this error show up in RCP either, and all scenes appear to be usable/navigable in RCP. The error only comes up when he tries to build the project in Xcode (Command+B). There is no other information about it in the Report navigator's build logs, and it is always followed by this next error.
Error:
Tool exited with code 1
Yikes, please help!
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
How do I obtain the device's camera permissions when developing camera apps?
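A minimal sketch of the standard AVFoundation authorization flow, assuming an NSCameraUsageDescription entry has been added to Info.plist; note that access to the device cameras on visionOS may have additional entitlement requirements beyond this:

import AVFoundation

// Returns true if the app is (or becomes) authorized to capture video.
func requestCameraAccess() async -> Bool {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        return true
    case .notDetermined:
        // Prompts the user the first time it is called.
        return await AVCaptureDevice.requestAccess(for: .video)
    default:
        return false  // .denied or .restricted; direct the user to Settings.
    }
}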
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Hello,
we have a RealityKit app that also runs on macOS via Catalyst.
For specific USD assets containing particle systems we have observed a reproducible crash.
Steps to reproduce:
Open Reality Composer Pro
Create new file
Create simple particle system (default one is fine)
export as USDZ
Create project in Xcode
Call Entity.load(… and pass in your USD
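For reference, a minimal sketch of that last step (the asset name is hypothetical):

import RealityKit

// Loading the RCP-exported USDZ containing the particle system; on an Intel Mac
// running via Catalyst, this is where the Metal assertion below fires.
let entity = try Entity.load(named: "ParticleScene")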
Running this on an Intel iMac with macOS Sequoia 15.3 will lead to a crash with the following console log:
validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation
depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.
'
(The same assertion is printed several more times, interleaved across threads.)
Xcode version: 16.2.0
iMac 2020 3,8GHz Intel Core i7
macOS Sequoia 15.3
FB16477373
It would be great if this could be fixed quickly or a workaround provided, since it affects our production app. Thank you!
Hello,
I'm unable to activate a timeline in my application through an OnTap, OnAddedToScene or OnNotification.
In RCP I can test and play the timelines easily.
When running in the simulator or on device the timelines simply do not run, regardless of the method through which I try to call the API.
I have two questions:
How can I check that my timelines are in my RCP project that's loaded into the scene? I don't see timelines in the entity hierarchy when I debug in RealityKit Debugger
Is Behaviors a component I can manually set at runtime? I can very clearly see the behaviors component attached to my entity in RCP, but when running this code:
.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            if value.entity.applyTapForBehaviors() {
                print("Success!")
            } else {
                print("Failure.")
            }
        }
)
It prints "Failure." every time, indicating to me that the entity does not have a Behavior attached to it (whether that's a component or however else the Behavior is associated with the entity).
I also have not had success using the Notification system or even the OnAddedToScene behavior trigger which should theoretically work if a behavior is attached to the entity which the tap experiment indicates it's not.
For context this is my notification trigger code:
private let notificationTrigger = NotificationCenter.default
    .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))

@Environment(\.realityKitScene) var scene

Attachment(id: "home") {
    Button {
        NotificationCenter.default.post(
            name: NSNotification.Name("RealityKit.NotificationTrigger"),
            object: nil,
            userInfo: [
                "RealityKit.NotificationTrigger.Scene": scene,
                "RealityKit.NotificationTrigger.Identifier": "test"
            ]
        )
    } label: {
        Text("Test")
    }
    .padding(20)
    .glassBackgroundEffect()
}
.onReceive(notificationTrigger) { _ in
    print("test notification received")
}
I am receiving "test notification received" print statements as well.
I'm using Xcode 16.0 with visionOS 2.0 on macOS 15.3.1.
If I place the .usdz file in the project directory alongside other .swift files, ModelEntity loads it perfectly. However, if I try to load the same file from Reality Composer Pro under RealityKitContent.rkassets, I get the error: resourceNotFound("heart").
Could someone help me with this? Thank you so much
Code:
//
//  TestttttttApp.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI

@main
struct TestttttttApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}

//
//  ContentView.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var enlarge = false

    var body: some View {
        RealityView { content in
            do {
                // MARK: Work
                let scene = try await ModelEntity(named: "heart")
                content.add(scene)

                // MARK: Doesn't work
                // let scene = try await ModelEntity(named: "heart", in: realityKitContentBundle)
                // content.add(scene)
            } catch {
                print(error)
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer
RealityKit
Reality Composer Pro
visionOS
We are using Unity to develop a visionOS program. Pressing the right knob (Digital Crown) of the Vision Pro during use exits the Unity space and destroys the model in the space; the model in the space is then disconnected from the SwiftUI interface. After pressing the knob you return to the system home view, and pressing it again returns you to the inside of the program. However, the Unity space cannot be restored, and calling the dismissWindow method from the SwiftUI interface has no effect, so the interface cannot be dismissed. Is there any solution?
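For reference, a minimal sketch of how dismissWindow is usually called from SwiftUI (the window id is hypothetical); whether this can tear down a Unity-managed space is exactly what is in question here:

import SwiftUI

struct ControlsView: View {
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Close") {
            // "MainWindow" is a hypothetical WindowGroup id.
            dismissWindow(id: "MainWindow")
        }
    }
}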
Hi, I downloaded a few files from the Apple developer website, and they are in the .reality file format. I wanted to see how they are constructed, but there is no way to open them and look at the content inside (3D model, shader, animation, etc.). Is there a way to look at the content of a .reality file?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
Is it possible to dynamically update the ModelPositionOffset of a GeometryModifier with a depth map image?
In my code I set up the parameter for the "DepthMapTexture" universal input node
and tried setting the depth map on depthTextureResource. I have two DrawableQueues: one for setting InputTexture, and one for setting DepthMapTexture. The code only shows the part that concerns setting DepthMapTexture.
This is where I define the plane entity.
And this is the shader graph.
What I noticed with GeometryModifier is that the depth map image has to have the same dimensions as the input image.
And when I applied this material to a USDZ file, with a pre-assigned image and depth map from RCP, and loaded that entity from code, the depth map was applied correctly.
What I am unsure about is whether it is possible to define a model entity from code, apply a ShaderGraphMaterial from RCP, and dynamically update the image used in the GeometryModifier.
Maybe I'm missing something when defining the entity, something that allows geometry modifications?
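For context, a minimal sketch of the code-defined path being asked about: a plane created in code, the ShaderGraphMaterial loaded from the RCP bundle, and the depth map parameter swapped at runtime. The material path, parameter name, and asset name are assumptions based on the setup described above:

import RealityKit
import RealityKitContent

// Material authored in RCP whose GeometryModifier reads "DepthMapTexture".
var material = try await ShaderGraphMaterial(named: "/Root/DepthMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)

let plane = ModelEntity(mesh: .generatePlane(width: 1.0, height: 1.0),
                        materials: [material])

// Later, swap in a new depth map (same dimensions as the input image).
let depthMap = try TextureResource.load(named: "depth_0001")  // hypothetical asset
var updated = material
try updated.setParameter(name: "DepthMapTexture", value: .textureResource(depthMap))
plane.model?.materials = [updated]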
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
Hi, I am trying to create a simple effect to create feather edges on the image using Reality Composer Pro. Something like this:
As you can see it has softer edges on all sides that dissolves into transparency with the background.
This is what I have been able to achieve on my own.
I want to use the "feather" input node value (float) from 0.0 to 1.0 to increase or decrease the strength of the feather edges.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer
Shader Graph Editor
visionOS
I have two cubes in my Blender project, but one gets lost after importing the USDZ file exported from that project.
It seems that Apple frameworks don't support non-English USDZ.
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just covers over the objects. I used the Apple-provided attenuation map (I did not specify the attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code.
However, I have several instances of the same model in my scene.
Is it possible to make only one specific model respond to a notification?
For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
USDZ
Reality Composer
RealityKit
visionOS
Help me please, I can't remove the little messed-up pixels and they are getting more and more. Please help!!
2025 MacBook Pro, no screen protector, the actual screen.
I see no way to scale an entity with a hover effect.
The closest I can find is using a HoverEffectComponent with a shader hover effect. Maybe I can change the scale with a Shader Graph, but I cannot figure out how.
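For reference, a minimal sketch of attaching the shader-style hover effect (visionOS 2 API). As far as I can tell this only lets the material react to hover; it does not scale the entity by itself, which is the part still in question:

import RealityKit

// Hover effects require an input target and collision shape on the entity.
let entity = ModelEntity(mesh: .generateSphere(radius: 0.1),
                         materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
entity.components.set(InputTargetComponent())
entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
// Shader-based hover effect; drives the material-side response only.
entity.components.set(HoverEffectComponent(.shader(.default)))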