Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

All subtopics
Posts under the Graphics & Games topic. Each entry below shows the post, followed by its replies · boosts · views · latest activity.

How to configure RealityKit entities for animations on a modular character?
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app. The character has customization options such as clothing items and hair, and all objects are properly weighted to the rig. The way the model is set up in Blender is like so: groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting entity hierarchy, viewed in Reality Composer Pro: My problem is that when I export with the Armature Modifier applied to the objects, so that animations get exported, the ModelComponent gets flattened to the armature and swapping entities is no longer as simple as removing the entity with the corresponding name. What's the best practice here? Should the animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
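A hedged sketch of the "export the animation separately" route, assuming the clips live in their own USD file (the asset name "CharacterAnimations" is a placeholder) and that both files share the same armature and joint names; without matching joint hierarchies the retargeting will not line up:

    import RealityKit

    // Minimal sketch: load an animation-only USD and play its clips on the
    // assembled character. "CharacterAnimations" is a placeholder asset name.
    @MainActor
    func applyExternalAnimations(to character: Entity) async {
        guard let animationSource = try? await Entity(named: "CharacterAnimations") else { return }
        // Clips exported in the USD are exposed on the loaded entity.
        for clip in animationSource.availableAnimations {
            // Playing the clip on the character assumes both files use the same
            // skeleton (same joint names); otherwise the bindings will not resolve.
            character.playAnimation(clip.repeat(), transitionDuration: 0.3, startsPaused: false)
        }
    }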
1 reply · 0 boosts · 89 views · May ’25
Diagnose data access latency
The code is pretty simple: kernel void naive( constant RunParams *param [[ buffer(0) ]], const device float *A [[ buffer(1) ]], // [N, K] device float *output [[ buffer(2) ]], uint2 gid [[ thread_position_in_grid ]]) { float val = 0; uint a_ptr = gid.x * param->K; for (uint i = 0; i < param->K; i++, a_ptr++) { val += A[a_ptr]; } output[gid.x] = val; } With uint a_ptr = gid.x * param->K, the code gets 150 GFLOPS; with uint a_ptr = gid.y * param->K, the code gets 860 GFLOPS. param->K = 256; threads per group: [16, 16]. I'd like to understand why the performance is so different, and how I can profile/diagnose this to help with further optimization.
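A likely factor (an educated guess, not something confirmed from a trace): with a [16, 16] threadgroup, neighboring threads vary fastest in gid.x, so indexing by gid.y * K gives every thread in a row the same base address and their loads hit the same cache lines, while gid.x * K makes adjacent threads stride K floats apart and the loads stop coalescing. One way to check is to time both variants with the command buffer's GPU timestamps (sketch below; the pipeline, bindings, and grid sizes are placeholders), then confirm with the GPU trace and performance counters in Xcode's Metal debugger:

    import Metal

    // Hedged sketch: time one dispatch with the command buffer's GPU timestamps.
    func gpuTime(queue: MTLCommandQueue,
                 pipeline: MTLComputePipelineState,
                 grid: MTLSize,
                 threadsPerThreadgroup: MTLSize,
                 bind: (MTLComputeCommandEncoder) -> Void) -> Double {
        let commandBuffer = queue.makeCommandBuffer()!
        let encoder = commandBuffer.makeeComputeCommandEncoder() ?? commandBuffer.makeComputeCommandEncoder()!
        encoder.setComputePipelineState(pipeline)
        bind(encoder)                                   // set buffers 0...2 here
        encoder.dispatchThreadgroups(grid, threadsPerThreadgroup: threadsPerThreadgroup)
        encoder.endEncoding()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
        // GPU-side duration in seconds; compare the gid.x and gid.y variants.
        return commandBuffer.gpuEndTime - commandBuffer.gpuStartTime
    }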
0 replies · 0 boosts · 91 views · Apr ’25
macOS Catalina 10.15.7: symbols not found in CoreGraphics.framework
I recently needed to develop an application to obtain the window list, which requires Screen Recording permissions. Apple's official documentation mentions using the two functions CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess to request permissions. These functions are stated to be available since version 10.15. However, when I used these two functions on a device running macOS 10.15.7, I encountered the errors shown in the attached screenshot. I used the nm tool to inspect the symbols in the CoreGraphics.framework and found that these two functions were not present. Could you help me understand why this is happening?
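No explanation for the missing export on that particular 10.15.7 build, but as a defensive sketch you can resolve the symbol at runtime and fall back when it isn't there. The C function type is written out by hand here, so treat it as an assumption rather than a header-verified signature:

    import Darwin
    import Foundation

    // Hedged sketch: look the function up with dlsym so the app degrades
    // gracefully on OS builds where CoreGraphics does not export it.
    private typealias PreflightScreenCaptureFn = @convention(c) () -> Bool

    func preflightScreenCaptureAccessIfAvailable() -> Bool? {
        guard let handle = dlopen(nil, RTLD_NOW),
              let symbol = dlsym(handle, "CGPreflightScreenCaptureAccess") else {
            return nil // symbol not exported on this OS build
        }
        let preflight = unsafeBitCast(symbol, to: PreflightScreenCaptureFn.self)
        return preflight()
    }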
0 replies · 0 boosts · 94 views · May ’25
SKScene editor canvas gone
I've recently run into an issue in Xcode where the sks editor's preview canvas just vanishes for every project on my computer. I don't think it is an issue with my sks files because this works as expected on another computer with the same files, and when it happens it happens for ALL sks files in all projects. There used to be menu items to toggle the canvas and its settings, but those are now gone for me in sks files (they show up for swift files that have previews, however). Any idea what is going on here? How do I get the canvas back? I literally cannot get any work done on my primary computer because of this...
1 reply · 0 boosts · 525 views · Dec ’25
VisionOS VideoMaterial on 3D Mesh
I'm trying to get a video material to work on an imported 3D asset; the asset is a USDC file. There's actually an example of this in a WWDC video from Apple (you can see it running on the flag of an airplane at 10:34), but there is no sample code for it, and there are no other examples on the internet. Does anybody know how to do this? https://developer.apple.com/documentation/realitykit/videomaterial
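A minimal sketch of the usual approach, assuming the USDC is already loaded and you know (or can look up in Reality Composer Pro) the name of the entity that carries the mesh; "Flag" below is a placeholder:

    import RealityKit
    import AVFoundation

    // Hedged sketch: swap the materials on one ModelComponent inside the USDC
    // for a VideoMaterial backed by an AVPlayer.
    func applyVideoMaterial(to root: Entity, videoURL: URL) {
        let player = AVPlayer(url: videoURL)
        let videoMaterial = VideoMaterial(avPlayer: player)
        if let target = root.findEntity(named: "Flag"),          // placeholder entity name
           var model = target.components[ModelComponent.self] {
            model.materials = [videoMaterial]                    // replaces all materials on that mesh
            target.components.set(model)
        }
        player.play()
    }

The video maps onto the mesh's existing UVs, so the part you retexture needs sensible UV coordinates in the source asset.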
2 replies · 0 boosts · 982 views · Dec ’25
Walking an entity around an immersive space in visionOS like the window drag bar
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they're great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance. It seems they quickly transition from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed. When you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious in how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement, and if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try and hack my own solution?
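Nothing out of the box that I know of, but here is one hedged way to approximate it: while a drag is active, remember the entity's transform in head space and re-apply it every frame from the device anchor, so walking carries the entity along; you would blend this with the regular 3D drag gesture (for example, switch over once the head has moved past some threshold). All names below are placeholders and this is a sketch, not Apple's actual implementation:

    import ARKit
    import RealityKit
    import QuartzCore

    // Hedged sketch: keep a dragged entity at a fixed head-relative offset so it
    // follows the user while they walk.
    @MainActor
    final class WalkFollower {
        private let session = ARKitSession()
        private let worldTracking = WorldTrackingProvider()
        private var headFromEntity: simd_float4x4?

        func start() async throws {
            try await session.run([worldTracking])
        }

        func beginDrag(of entity: Entity) {
            guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
            // Cache the entity's transform expressed in the head's coordinate space.
            headFromEntity = device.originFromAnchorTransform.inverse * entity.transformMatrix(relativeTo: nil)
        }

        // Call once per frame while the drag is active (e.g. from a SceneEvents.Update subscription).
        func update(_ entity: Entity) {
            guard let headFromEntity,
                  let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
            entity.setTransformMatrix(device.originFromAnchorTransform * headFromEntity, relativeTo: nil)
        }
    }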
2 replies · 0 boosts · 1.1k views · Dec ’25
How to use CharacterControllerComponent.
I am trying to implement a ChacterControllerComponent using the following URL. https://developer.apple.com/documentation/realitykit/charactercontrollercomponent I have written sample code, but PhysicsSimulationEvents.WillSimulate is not executed and nothing happens. import SwiftUI import RealityKit import RealityKitContent struct ImmersiveView: View { let gravity: SIMD3<Float> = [0, -50, 0] let jumpSpeed: Float = 10 enum PlayerInput { case none, jump } @State private var testCharacter: Entity = Entity() @State private var myPlayerInput = PlayerInput.none var body: some View { RealityView { content in // Add the initial RealityKit content if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) { content.add(immersiveContentEntity) testCharacter = immersiveContentEntity.findEntity(named: "Capsule")! testCharacter.components.set(CharacterControllerComponent()) let _ = content.subscribe(to: PhysicsSimulationEvents.WillSimulate.self, on: testCharacter) { event in print("subscribe run") let deltaTime: Float = Float(event.deltaTime) var velocity: SIMD3<Float> = .zero var isOnGround: Bool = false // RealityKit automatically adds `CharacterControllerStateComponent` after moving the character for the first time. if let ccState = testCharacter.components[CharacterControllerStateComponent.self] { velocity = ccState.velocity isOnGround = ccState.isOnGround } if !isOnGround { // Gravity is a force, so you need to accumulate it for each frame. velocity += gravity * deltaTime } else if myPlayerInput == .jump { // Set the character's velocity directly to launch it in the air when the player jumps. velocity.y = jumpSpeed } testCharacter.moveCharacter(by: velocity * deltaTime, deltaTime: deltaTime, relativeTo: nil) { event in print("playerEntity collided with \(event.hitEntity.name)") } } } } } } The scene is loaded from RCP. It is simple, just a capsule on a pedestal. Do I need a separate code to run testCharacter from this state?
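Not a confirmed fix for why PhysicsSimulationEvents.WillSimulate never fires here, but as a workaround sketch the same movement code can be driven from SceneEvents.Update, which RealityKit publishes every frame. This goes inside the RealityView closure in place of the physics subscription and reuses the post's testCharacter, gravity, jumpSpeed, and myPlayerInput:

    // Workaround sketch (not a confirmed fix): drive the controller from the
    // per-frame scene update instead of PhysicsSimulationEvents.WillSimulate.
    let _ = content.subscribe(to: SceneEvents.Update.self) { event in
        let deltaTime = Float(event.deltaTime)
        var velocity: SIMD3<Float> = .zero
        var isOnGround = false
        if let state = testCharacter.components[CharacterControllerStateComponent.self] {
            velocity = state.velocity
            isOnGround = state.isOnGround
        }
        if !isOnGround {
            velocity += gravity * deltaTime      // accumulate gravity while airborne
        } else if myPlayerInput == .jump {
            velocity.y = jumpSpeed               // launch upward on jump input
        }
        testCharacter.moveCharacter(by: velocity * deltaTime,
                                    deltaTime: deltaTime,
                                    relativeTo: nil) { hit in
            print("collided with \(hit.hitEntity.name)")
        }
    }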
0 replies · 0 boosts · 161 views · May ’25
How does Game Kit offline leaderboard submission work?
I do not understand how offline leaderboard submission is supposed to work in Game Kit: While the documentation briefly states that offline submission is supported, how is that even possible when you first have to fetch a leaderboard object in order to then call its submitScore function? How can I get the leaderboard object in the first place when offline? Can anyone enlighten me how this works? Or maybe point me to some relevant documentation?
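One detail that may resolve part of the confusion (hedged, since I can't point at the exact documentation page): since iOS 14, GKLeaderboard also exposes a class-level submit that takes leaderboard IDs directly, so no leaderboard object has to be fetched first. A sketch, with a placeholder leaderboard ID:

    import GameKit

    // Hedged sketch: submit by leaderboard ID without fetching a GKLeaderboard.
    func submitHighScore(_ score: Int) {
        GKLeaderboard.submitScore(score,
                                  context: 0,
                                  player: GKLocalPlayer.local,
                                  leaderboardIDs: ["com.example.highscores"]) { error in  // placeholder ID
            if let error = error {
                print("Leaderboard submission failed: \(error.localizedDescription)")
            }
        }
    }

How GameKit queues such a submission while the device is offline is exactly the part the documentation leaves vague, so this only removes the "fetch a leaderboard first" obstacle.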
0 replies · 0 boosts · 348 views · Dec ’25
Sparse Texture Writes
Hey, I've been struggling with this for some days now. I am trying to write to a sparse texture in a compute shader. I'm performing the following steps: Set up a sparse heap and create a texture from it Map the whole area of the sparse texture using updateTextureMapping(..) Overwrite every value with the value "4" in a compute shader Blit the texture to a shared buffer Assert that the values in the buffer are "4". I have a minimal example (which is still pretty long unfortunately). It works perfectly when removing the line heapDesc.type = .sparse. What am I missing? I could not find any information that writes to sparse textures are unsupported. Any help would be greatly appreciated. import Metal func sparseTexture64x64Demo() throws { // ── Metal objects guard let device = MTLCreateSystemDefaultDevice() else { throw NSError(domain: "SparseNotSupported", code: -1) } let queue = device.makeCommandQueue()! let lib = device.makeDefaultLibrary()! let pipeline = try device.makeComputePipelineState(function: lib.makeFunction(name: "addOne")!) // ── Texture descriptor let width = 64, height = 64 let format: MTLPixelFormat = .r32Uint // 4 B per texel let desc = MTLTextureDescriptor() desc.textureType = .type2D desc.pixelFormat = format desc.width = width desc.height = height desc.storageMode = .private desc.usage = [.shaderWrite, .shaderRead] // ── Sparse heap let bytesPerTile = device.sparseTileSizeInBytes let meta = device.heapTextureSizeAndAlign(descriptor: desc) let heapBytes = ((bytesPerTile + meta.size + bytesPerTile - 1) / bytesPerTile) * bytesPerTile let heapDesc = MTLHeapDescriptor() heapDesc.type = .sparse heapDesc.storageMode = .private heapDesc.size = heapBytes let heap = device.makeHeap(descriptor: heapDesc)! let tex = heap.makeTexture(descriptor: desc)! // ── CPU buffers let bytesPerPixel = MemoryLayout<UInt32>.stride let rowStride = width * bytesPerPixel let totalBytes = rowStride * height let dstBuf = device.makeBuffer(length: totalBytes, options: .storageModeShared)! let cb = queue.makeCommandBuffer()! let fence = device.makeFence()! // 2. Map the sparse tile, then signal the fence let rse = cb.makeResourceStateCommandEncoder()! rse.updateTextureMapping( tex, mode: .map, region: MTLRegionMake2D(0, 0, width, height), mipLevel: 0, slice: 0) rse.update(fence) // ← capture all work so far rse.endEncoding() let ce = cb.makeComputeCommandEncoder()! ce.waitForFence(fence) ce.setComputePipelineState(pipeline) ce.setTexture(tex, index: 0) let threadsPerTG = MTLSize(width: 8, height: 8, depth: 1) let tgCount = MTLSize(width: (width + 7) / 8, height: (height + 7) / 8, depth: 1) ce.dispatchThreadgroups(tgCount, threadsPerThreadgroup: threadsPerTG) ce.updateFence(fence) ce.endEncoding() // Blit texture into shared buffer let blit = cb.makeBlitCommandEncoder()! 
blit.waitForFence(fence) blit.copy( from: tex, sourceSlice: 0, sourceLevel: 0, sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0), sourceSize: MTLSize(width: width, height: height, depth: 1), to: dstBuf, destinationOffset: 0, destinationBytesPerRow: rowStride, destinationBytesPerImage: totalBytes) blit.endEncoding() cb.commit() cb.waitUntilCompleted() assert(cb.error == nil, "GPU error: \(String(describing: cb.error))") // ── Verify a few texels let out = dstBuf.contents().bindMemory(to: UInt32.self, capacity: width * height) print("first three texels:", out[0], out[1], out[width]) // 0 1 64 assert(out[0] == 4 && out[1] == 4 && out[width] == 4) } Metal shader: #include <metal_stdlib> using namespace metal; kernel void addOne(texture2d<uint, access::write> tex [[texture(0)]], uint2 gid [[thread_position_in_grid]]) { tex.write(4, gid); }
1 reply · 0 boosts · 130 views · May ’25
Turn-Based Game and Invitations
I have two devices (iPod, iPhone), each using a different Apple ID. I have an existing game to which I'm adding TBM (turn-based matches). When the iPod invites the iPhone, it sends an iMessage invite to the iPhone; when I click on that message, I get "Retrieving", then Game Center in Settings is opened, not my app (the same version is installed on both devices). I start my app on the iPhone and that match is not shown in the matchmaker view controller. When I send an invite from the iPhone to the iPod and I click on the iMessage invite, the app starts, but the match isn't listed in the matchmaker view controller on the iPod (it is on the iPhone). In addition, when I click on the info circle on the iPhone, it shows the two players and "App Store" under the Game Center name. However, when I do the same on the iPod, it has a "Play your turn" there. Any ideas?
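Not necessarily the cause, but one thing worth double-checking, shown here as a hedged sketch: invites and turn events are only delivered into the app if a GKLocalPlayerListener is registered right after authentication succeeds; the handling body below is placeholder code.

    import GameKit

    // Hedged sketch: register a listener so accepted invites and turn events
    // are routed to the app instead of only to Game Center.
    final class TurnEventHandler: NSObject, GKLocalPlayerListener {
        func install() {
            GKLocalPlayer.local.register(self)
        }

        // Fires when the local player accepts an invite or it becomes their turn.
        func player(_ player: GKPlayer,
                    receivedTurnEventFor match: GKTurnBasedMatch,
                    didBecomeActive: Bool) {
            if didBecomeActive {
                print("Open match \(match.matchID)") // placeholder: present the match UI here
            }
        }
    }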
0 replies · 0 boosts · 561 views · Nov ’25
Animations for streaming
We have a macOS app (not yet released, but in use by ourselves) that provides scoreboards for streaming sport events. Today it is expected that there are nice animations for goals, etc. We are streaming using NDI, which requires a CVPixelBuffer for each frame. We currently create these animations using CABasicAnimation, CAAnimation and CAKeyframeAnimation. In addition we use ScreenCaptureKit to generate the frames. This works fine at 25/30 fps, as long as the window where our animations are performed is visible. But this is not how it should be. Our main app window is smaller and acts as a control display, performing the animations at a reduced size, while the streamed animations need to be in HD format and later maybe in 4K. When using an offscreen window, the animations are not calculated; we get 1 frame per second or so. So we actually have to connect an external display to the MacBook and open the large windows there, which is an ugly solution. Are we using a completely wrong approach? Or is there a way to tell macOS to perform the animations even though the window is offscreen? If it cannot work that way, what is an alternative?
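One alternative worth evaluating (a sketch under the assumption that the animations live in a CALayer tree you control): CARenderer can render a layer tree, with its Core Animation timing, straight into a Metal texture without any visible window, and that texture can then be converted into the CVPixelBuffer NDI needs. The pixel-buffer conversion and color management are left out here:

    import QuartzCore
    import Metal

    // Hedged sketch: render a CALayer tree offscreen into a Metal texture.
    func makeOffscreenLayerRenderer(layer: CALayer, size: CGSize) -> (CARenderer, MTLTexture)? {
        guard let device = MTLCreateSystemDefaultDevice() else { return nil }
        let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                            width: Int(size.width),
                                                            height: Int(size.height),
                                                            mipmapped: false)
        desc.usage = [.renderTarget, .shaderRead]
        guard let texture = device.makeTexture(descriptor: desc) else { return nil }
        let renderer = CARenderer(mtlTexture: texture, options: nil)
        renderer.layer = layer
        renderer.bounds = CGRect(origin: .zero, size: size)
        return (renderer, texture)
    }

    // Call once per output frame (e.g. driven by a CVDisplayLink or a 25/30 Hz timer).
    func renderFrame(_ renderer: CARenderer) {
        renderer.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
        renderer.addUpdate(renderer.bounds)
        renderer.render()
        renderer.endFrame()
    }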
0 replies · 0 boosts · 141 views · May ’25
PhotogrammetrySession crashes after update from iOS 18 to iOS 26
After updating iPad/iPhone devices from iOS 18 to iOS 26, PhotogrammetrySession intermittently crashes during photogrammetry processing. The same workflow was stable on iOS 18 with no code changes to the app. Environment: OS versions: Works on OS 18, crashes on OS 26 Device: iPad/iPhone (reproducible across devices) Source images: ~170-200 JPG files at 2160 x 3840 resolution Reproduction: The crash occurs consistently on the second or third sequential run of the photogrammetry session with the same image set. First run typically succeeds. Crash details: Xcode shows an uncaught exception during image processing: terminating due to uncaught exception of type std::bad_alloc: std::bad_alloc VTPixelTransferSession 420f sid 269 (2160.00 x 3840.00) [0.00 0.00 2160 3840] rowbytes( 2160, 2160 ) Color( (null), 0x0, (null), (null), ITU_R_601_4 ) => 24 sid 19 (2160.00 x 3840.00) [0.00 0.00 2160 3840] rowbytes( 6528 ) Color( 0x0, (null), (null), (null) ) This appears to be a memory allocation failure in VTPixelTransferSession during color space conversion. Has anyone else experienced similar crashes with CorePhotogrammetry on iOS 26, or found workarounds?
0 replies · 0 boosts · 306 views · Dec ’25
iOS Simulator can only render 1 RealityView
I'm using RealityView in my iOS game mixed with SwiftUI. For the following two example usages, the simulator will only render the first RealityView, and the second one is either super laggy or shows a black model. Running on a real device is all good; only the simulator has this issue. 1) Have a TabView where each tab has a RealityView. 2) Have a root view and a detail view connected via a push navigation, where both root and detail have a RealityView. In the simulator, the second RealityView is going to be very choppy and basically unusable, but on a real iPhone everything looks great. Is this a known simulator issue, or did I do something wrong?
0 replies · 0 boosts · 136 views · Jun ’25
Core Image recipe for QR code icon image
Create the QRCode CIFilter<CIQRCodeGenerator> *f = CIFilter.QRCodeGenerator; f.message = [@"Message" dataUsingEncoding:NSASCIIStringEncoding]; f.correctionLevel = @"Q"; // increase level CIImage *qrcode = f.outputImage; Overlay the icon CIImage *icon = [CIImage imageWithURL:url]; CGAffineTransform t = CGAffineTransformMakeTranslation( (qrcode.extent.width-icon.extent.width)/2.0, (qrcode.extent.height-icon.extent.height)/2.0); icon = [icon imageByApplyingTransform:t]; qrcode = [icon imageByCompositingOver:qrcode]; Round off the corners static dispatch_once_t onceToken; static CIWarpKernel *k; dispatch_once(&onceToken, ^ { k = [CIWarpKernel kernelWithFunctionName:@"bend_corners" fromMetalLibraryData:metalLibData() error:nil]; }); CGFloat radius = 8.0; // corner radius in pixels qrcode = [k applyWithExtent:qrcode.extent roiCallback:^CGRect(int i, CGRect r) { return CGRectInset(r, -radius, -radius); } inputImage:qrcode arguments:@[[CIVector vectorWithCGRect:qrcode.extent], @(radius)]]; …and this code for the kernel should go in a separate .ci.metal source file: float2 bend_corners (float4 extent, float s, destination dest) { float2 p, dc = dest.coord(); float ratio = 1.0; // Round lower left corner p = float2(extent.x+s,extent.y+s); if (dc.x < p.x && dc.y < p.y) { float2 d = abs(dc - p); ratio = min(d.x,d.y)/max(d.x,d.y); ratio = sqrt(1.0 + ratio*ratio); return (dc - p)*ratio + p; } // Round lower right corner p = float2(extent.x+extent.z-s, extent.y+s); if (dc.x > p.x && dc.y < p.y) { float2 d = abs(dc - p); ratio = min(d.x,d.y)/max(d.x,d.y); ratio = sqrt(1.0 + ratio*ratio); return (dc - p)*ratio + p; } // Round upper left corner p = float2(extent.x+s,extent.y+extent.w-s); if (dc.x < p.x && dc.y > p.y) { float2 d = abs(dc - p); ratio = min(d.x,d.y)/max(d.x,d.y); ratio = sqrt(1.0 + ratio*ratio); return (dc - p)*ratio + p; } // Round upper right corner p = float2(extent.x+extent.z-s, extent.y+extent.w-s); if (dc.x > p.x && dc.y > p.y) { float2 d = abs(dc - p); ratio = min(d.x,d.y)/max(d.x,d.y); ratio = sqrt(1.0 + ratio*ratio); return (dc - p)*ratio + p; } return dc; }
0 replies · 0 boosts · 110 views · Mar ’25
SCNTechnique clearColor Always Shows sceneBackground When Passes Share Depth Buffer
Problem Description I'm encountering an issue with SCNTechnique where the clearColor setting is being ignored when multiple passes share the same depth buffer. The clear color always appears as the scene background, regardless of what value I set. The minimal project for reproducing the issue: https://www.dropbox.com/scl/fi/30mx06xunh75wgl3t4sbd/SCNTechniqueCustomSymbols.zip?rlkey=yuehjtk7xh2pmdbetv2r8t2lx&st=b9uobpkp&dl=0 Problem Details In my SCNTechnique configuration, I have two passes that need to share the same depth buffer for proper occlusion handling: "passes": [ "box1_pass": [ "draw": "DRAW_SCENE", "includeCategoryMask": 1, "colorStates": [ "clear": true, "clearColor": "0 0 0 0" // Expecting transparent black ], "depthStates": [ "clear": true, "enableWrite": true ], "outputs": [ "depth": "box1_depth", "color": "box1_color" ], ], "box2_pass": [ "draw": "DRAW_SCENE", "includeCategoryMask": 2, "colorStates": [ "clear": true, "clearColor": "0 0 0 0" // Also expecting transparent black ], "depthStates": [ "clear": false, "enableWrite": false ], "outputs": [ "depth": "box1_depth", // Sharing the same depth buffer "color": "box2_color", ], ], "final_quad": [ "draw": "DRAW_QUAD", "metalVertexShader": "myVertexShader", "metalFragmentShader": "myFragmentShader", "inputs": [ "box1_color": "box1_color", "box2_color": "box2_color", ], "outputs": [ "color": "COLOR" ] ] ] And the metal shader used to display box1_color and box2_color with splitting: fragment half4 myFragmentShader(VertexOut in [[stage_in]], texture2d<half, access::sample> box1_color [[texture(0)]], texture2d<half, access::sample> box2_color [[texture(1)]]) { half4 color1 = box1_color.sample(s, in.texcoord); half4 color2 = box2_color.sample(s, in.texcoord); if (in.texcoord.x < 0.5) { return color1; } return color2; }; Expected Behavior Both passes should clear their color targets to transparent black (0, 0, 0, 0) The depth buffer should be shared between passes for proper occlusion Actual Behavior Both box1_color and box2_color targets contain the scene background instead of being cleared to transparent (see attached image) This happens even when I explicitly set clearColor: "0 0 0 0" for both passes Setting scene.background.contents = UIColor.clear makes the clearColor work as expected, but I need to keep the scene background for other purposes What I've Tried Setting different clearColor values - all are ignored when sharing depth buffer Using DRAW_NODE instead of DRAW_SCENE - didn't solve the issue Creating a separate pass to capture the background - the background still appears in the other passes Various combinations of clear flags and render orders Environment iOS/macOS, running with "My Mac (Designed for iPad)" Xcode 16.2 Question Is this a known limitation of SceneKit when passes share a depth buffer? Is there a workaround to achieve truly transparent clear colors while maintaining a shared depth buffer for occlusion testing? The core issue seems to be that SceneKit automatically renders the scene background in every DRAW_SCENE pass when a shared depth buffer is detected, overriding any clearColor settings. Any insights or workarounds would be greatly appreciated. Thank you!
0 replies · 0 boosts · 248 views · Jun ’25
Particles rendered in the wrong order: back rendered last instead of first
I've tried out a ParticleEmitter in Reality Composer Pro to produce a burst of particles that don't move (i.e. speed close to zero). When viewing from different angles, it clearly looks like the particles are rendered exactly in the wrong order, that is, front first and back last. In other words, back particles obscure front particles. I would prefer it the correct way around. I've only tried this interactively in Reality Composer Pro, not programmatically, but I assume I would get the same result. My Reality Composer Pro "File" (zipped): https://gert-rieger-edv.de/Posts/Post-1/RealityParticles.zip Screenshot: Click on the ParticleEmitter object, then on its Play button, then select the Particles tab and click on "Burst" a few times to get a few random particles. Mac Studio 2025 Apple M4 Max macOS 15.7.2 (24G325) Reality Composer Pro Version 2.0 (494.60.2)
0 replies · 0 boosts · 513 views · Dec ’25
MTLCaptureManager.sharedCaptureManager generates corrupted .gputrace files (0KB, invalid internal structure)
Hello, I am experiencing an issue with programmatically capturing a GPU trace using MTLCaptureManager. The .gputrace file that is generated appears to be corrupted, and I'm looking for guidance or a solution. Description of the Problem: I am using MTLCaptureManager.sharedCaptureManager to capture a Metal frame and save it to disk. The generated .gputrace file is consistently reported as 0 bytes in size by the file system. Crucially, when I compress this 0-byte .gputrace file into a .zip archive, the resulting archive contains the full, expected data. After unzipping, the file can be opened and viewed correctly in Xcode. However, when inspecting the file's contents using NSFileManager in Objective-C (treating it as a directory), the internal structure is different from a .gputrace file captured directly from Xcode's Metal Debugger. (Screenshots: capture in Xcode vs. capture to file.) Finally, when capturing multiple frames programmatically, the first captured frame contains valid buffer data. However, for subsequent frames (starting from the second frame), the corresponding buffer contents are all zero-filled. Frame 1: all MTLBuffer data is correctly captured and populated. Frame 2 and onward: the same MTLBuffer objects are present in the trace, but their contents are entirely 0 (i.e., the data is not captured or is corrupted). In this case, the on-screen display is normal, but the captured frame is incorrect. The frame captured directly in Xcode is also correct. Only the frame captured to a file is abnormal.
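For reference, the programmatic flow described here looks roughly like the sketch below; the output path is a placeholder, and capturing outside of Xcode may additionally require Metal capture to be enabled (via the Info.plist or environment), which I am stating as an assumption rather than a verified requirement. One thing that may explain the "0 bytes" and the zip behavior: a .gputrace is written as a file package, i.e. a directory, so a naive file-size query on the top-level item reports the directory entry rather than its contents.

    import Metal

    // Hedged sketch of a programmatic .gputrace capture.
    func captureOneFrame(device: MTLDevice, outputURL: URL, encodeFrame: () -> Void) throws {
        let manager = MTLCaptureManager.shared()
        guard manager.supportsDestination(.gpuTraceDocument) else {
            throw NSError(domain: "GPUTraceUnsupported", code: -1)
        }
        let descriptor = MTLCaptureDescriptor()
        descriptor.captureObject = device            // or a specific MTLCommandQueue
        descriptor.destination = .gpuTraceDocument
        descriptor.outputURL = outputURL             // path ending in .gputrace (placeholder)
        try manager.startCapture(with: descriptor)
        encodeFrame()                                // encode, commit, and wait for the frame here
        manager.stopCapture()
    }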
1 reply · 0 boosts · 520 views · Aug ’25
iPad - Can I prevent Multitasking on my app?
I have a game built in Unreal Engine 5.6 which uses tilt motion controls to rotate an object. I've restricted the app to only run in portrait for iPhone, and everything works fine; however, for iPad I've had a few issues relating to multitasking and I can't seem to solve them. Forcing the app to portrait only still allows the app to run in landscape mode, but shows black bars on either side of the game, and the axes for the motion controls are incorrect: X becomes Y and Y becomes X, and there's no way for my app to know which orientation it is in because the container is still technically portrait. Allowing my game to run in all orientations makes the whole app more presentable; it doesn't add black bars, the game is still functional, and I'm able to map the controls correctly because the game knows it's in landscape rather than portrait. The problem with allowing my app to run in landscape mode is that if multitasking is enabled on the iPad, you can resize the app to be portrait, and then I run into the same problem again where the game thinks it's in portrait mode and all of the axes are wrong again. I tried getting the true orientation of the device rather than the scene, but the game is intended to be played flat, so instead of returning the orientation of the OS the orientation is FaceUp, which doesn't help. I need to either disable multitasking or find a way of getting the orientation of the OS (not the scene or the device). I haven't found how to get the OS orientation, so I've been trying to disable multitasking. I've got Requires Fullscreen true and UIApplicationSupportsMultipleScreens false in my Info.plist, but my iPad still seems to allow the window to be resized in landscape view. Opening the iOS workspace of my project, Requires Fullscreen is ticked, but under that it says "Supports Multiple Windows", and the arrow button next to it takes me to my Info.plist values, with no indication of how I can change it. I'm using Unreal Engine 5.6 and Xcode 16.0. Xcode is old, I know, but this version of Unreal Engine doesn't seem to support any newer.
0 replies · 0 boosts · 296 views · Nov ’25