Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under Graphics & Games topic

BlendShapes don’t animate while playing animation in RealityKit
Hi everyone, I'm running into an issue with RealityKit when trying to animate BlendShapes (shape keys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender). The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that the weights are being updated, but no visual change appears. The exact same blend shape animation works perfectly when no animation is playing. In SceneKit the same model works as expected: shape keys animate during animation playback. In RealityKit, however, as soon as an animation starts, the shape keys no longer animate. Here's a test project on GitHub that demonstrates the issue clearly: https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing. Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates? Thanks in advance for any insights.
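A workaround worth trying, sketched under the assumption that setBlendShapes(...) is the sample project's own weight setter (passed in below as the applyWeights closure); re-applying the weights every frame via SceneEvents.Update is only a hypothesis about the animation system overwriting them, not a confirmed fix:

import RealityKit
import Combine

final class FaceAnimator {
    private var updateSubscription: Cancellable?

    /// Plays a skeletal animation and re-applies blend-shape weights on every frame,
    /// to test whether the animation system is simply overwriting them during playback.
    func start(on entity: ModelEntity,
               in scene: RealityKit.Scene,
               animation: AnimationResource,
               applyWeights: @escaping (ModelEntity) -> Void) {
        entity.playAnimation(animation.repeat(), transitionDuration: 0.25)
        updateSubscription = scene.subscribe(to: SceneEvents.Update.self) { [weak entity] _ in
            guard let entity else { return }
            applyWeights(entity)   // e.g. setBlendShapes(...) from the sample project
        }
    }

    func stop() {
        updateSubscription?.cancel()
        updateSubscription = nil
    }
}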
3 replies · 3 boosts · 294 views · Jul ’25
Showing a MTLTexture on an Entity in RealityKit
Is there any standard way of efficiently showing a MTLTexture on a RealityKit Entity? I can't find anything proper on how to, for example, generate a LowLevelTexture out of a MTLTexture. The closest match was this two-year-old thread. In the old SceneKit app, we would just do:

guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture

Our flow is as follows (for visualizing the currently detected object): camera stream -> CoreML segmentation -> send the relevant part of the MLShapedArray tensor to a MTLComputeShader that returns a MTLTexture -> show the resulting texture on a 3D object to the user.
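For reference, a rough sketch of the LowLevelTexture route, assuming an iOS 18 / visionOS 2 era SDK; the descriptor fields and the TextureResource initializer should be checked against the current documentation, segmentationOutput stands in for the compute shader's output texture, and matching pixel formats/sizes are assumed for the blit:

import Metal
import RealityKit

func makeStreamingTexture(queue: MTLCommandQueue,
                          segmentationOutput: MTLTexture) throws -> TextureResource {
    // A RealityKit-owned texture that can keep being updated from Metal.
    let descriptor = LowLevelTexture.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: segmentationOutput.width,
        height: segmentationOutput.height,
        textureUsage: [.shaderRead, .shaderWrite]
    )
    let lowLevelTexture = try LowLevelTexture(descriptor: descriptor)

    // Copy the compute result into the RealityKit texture.
    if let commandBuffer = queue.makeCommandBuffer() {
        let target: MTLTexture = lowLevelTexture.replace(using: commandBuffer)
        if let blit = commandBuffer.makeBlitCommandEncoder() {
            blit.copy(from: segmentationOutput, to: target)
            blit.endEncoding()
        }
        commandBuffer.commit()
    }

    // Wrap it for use as a texture parameter on an UnlitMaterial / ShaderGraphMaterial.
    return try TextureResource(from: lowLevelTexture)
}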
5 replies · 0 boosts · 1k views · Sep ’25
ReplayKit Issue on iOS 26
When previewing a gameplay recording, the buttons to exit or save are unclickable behind the clock and Wi-Fi/5G status bar at the top, which means you have to quit the game in order to continue. Tested on multiple devices. Does anyone have a solution to this? At the moment we have disabled the feature altogether for iOS 26 users.
1 reply · 3 boosts · 333 views · Nov ’25
VRAM not freeing in Elite Dangerous
I've been trying out GPTK with Elite Dangerous: Horizons, and from what I can tell, the VRAM usage keeps climbing until it goes over the limit, at which point the FPS drops to 1-3 and the game crashes. From the Performance HUD I can see that when using GPTK, the VRAM usage just keeps climbing; I never saw it drop at all. I did some limited testing, and from that I think I can conclude that it is probably not a VRAM leak, but that it might be caching: I noticed that if I went back to an area I had already visited, the VRAM usage didn't increase. So either there is something wrong with freeing VRAM, or GPTK might not be reporting the right amount of VRAM available to use, which would explain why it keeps allocating VRAM until it runs out of memory and the game crashes. Just to test, I ran the game with the DXVK+MoltenVK combo, and it works just fine: VRAM is freed when it's no longer used. Is this a known issue in some games?
12 replies · 3 boosts · 869 views · Apr ’25
Pink Screen with VideoMaterial in ARKit
Hi everyone, I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content. Here's a simplified version of my setup:

func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    let width = (fitsWidth) ? canvasWidth : canvasHeight * aspectRatio
    let height = (fitsWidth) ? canvasWidth * (1/aspectRatio) : canvasHeight
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}

Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it? Thanks in advance!
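A pink plane often just means the player item never reaches a playable state. As a hedged diagnostic (the status observation is standard AVFoundation; deferring play() is only a guess at the cause), something like this can confirm whether the item actually loads:

import AVFoundation
import RealityKit

final class VideoMaterialFactory {
    private var statusObservation: NSKeyValueObservation?

    func makeVideoMaterial(for videoItem: AVPlayerItem) -> VideoMaterial {
        let player = AVPlayer(playerItem: videoItem)
        let material = VideoMaterial(avPlayer: player)
        // Defer playback until the item is ready, and surface any loading error.
        statusObservation = videoItem.observe(\.status, options: [.initial, .new]) { item, _ in
            switch item.status {
            case .readyToPlay:
                player.play()
            case .failed:
                print("Video item failed to load: \(String(describing: item.error))")
            default:
                break
            }
        }
        return material
    }
}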
16 replies · 3 boosts · 1.2k views · May ’25
Request low-latency streaming for iOS/iPadOS
Just found out this entitlement is available for visionOS: https://developer.apple.com/documentation/bundleresources/entitlements/com.apple.developer.low-latency-streaming It seems to keep video streaming from being interrupted by AWDL, and our community needs it badly for self-hosted game streaming (PC to iPhone / iPad). Related apps: Moonlight / VoidLink / SteamLink. Can we expect this on iOS/iPadOS 26, or even iOS/iPadOS 18?
1 reply · 3 boosts · 274 views · Nov ’25
SceneKit Transparent Material Self-Overlapping Issue (Front Face Overlapping)
Description: I'm developing an AR effect using SceneKit and applying a transparent material to a face mesh. However, I'm facing an issue where the front faces of the mesh overlap each other, causing incorrect rendering.

Problem: The front faces of the mesh overlap with each other when transparency is applied. This causes areas like the cheeks to be visible through the nose, even though they should be occluded.

Expected Behavior: The material should behave as if it were opaque to itself, that is, overlapping front faces should be occluded properly, while still allowing transparency for background elements.

Actual Behavior: The mesh renders its own front faces incorrectly, making parts of the face visible through others when they should be blocked.

What I Have Tried:

testMaterial.writesToDepthBuffer = true
testMaterial.readsFromDepthBuffer = true

Questions:
👉 How can I prevent SceneKit's transparent material from rendering overlapping front faces?
👉 Is there a way to force SceneKit to treat its own mesh as opaque for itself while still being transparent to the background?
👉 Does SceneKit support a proper depth pre-pass or an equivalent to Unity's ZWrite shaders to solve this issue?

Attached screenshots demonstrate the problem visually. Any help would be greatly appreciated! 🚀
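One setting worth trying before hand-rolling a depth pre-pass, sketched from memory (faceMeshNode is a placeholder for the node holding the face geometry): SCNTransparencyMode.singleLayer is meant to make SceneKit draw only the frontmost surfaces of a transparent geometry, which sounds like the self-occlusion described above.

import SceneKit
import UIKit

// singleLayer asks SceneKit to draw only the frontmost surfaces of this geometry
// when blending, so the transparent face mesh occludes itself.
func applySelfOccludingTransparency(to faceMeshNode: SCNNode) {
    let testMaterial = SCNMaterial()
    testMaterial.diffuse.contents = UIColor.white
    testMaterial.transparency = 0.4
    testMaterial.transparencyMode = .singleLayer
    testMaterial.writesToDepthBuffer = true
    testMaterial.readsFromDepthBuffer = true
    faceMeshNode.geometry?.firstMaterial = testMaterial
}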
0 replies · 2 boosts · 467 views · Feb ’25
GameKit not working as expected in iOS 26.
I just upgraded my macOS, Xcode, and Simulator all to the newest beta, version 26. I then found two issues when building my app with Xcode 26 and running it on the 26 simulator.

1. The Game Center access point no longer shows up in the app. This is how it has been configured in the past, and it still works on the 18.4 simulator:

func authenticatePlayer() {
    GKAccessPoint.shared.location = .topTrailing
    self.localPlayer.authenticateHandler = { viewController, error in
        if let viewController = viewController {
            // can present Game Center login screen
        } else if self.localPlayer.isAuthenticated {
            // game can be started
        } else {
            // user didn't log in, continue the game without game center
        }
    }
}

2. After the game ends, the leaderboard won't load. This is how it has been implemented in the past, and it's still working in the 18.4 simulator:

struct GameCenterView: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    ...

    func makeUIViewController(context: Context) -> GKGameCenterViewController {
        let viewController = GKGameCenterViewController(
            leaderboardID: getLeaderBoardID(with: leaderBoardGameMode),
            playerScope: .global,
            timeScope: .allTime
        )
        viewController.gameCenterDelegate = context.coordinator
        return viewController
    }

    func updateUIViewController(_ uiViewController: GKGameCenterViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, GKGameCenterControllerDelegate {
        let parent: GameCenterView

        init(_ parent: GameCenterView) {
            self.parent = parent
        }

        func gameCenterViewControllerDidFinish(_ gameCenterViewController: GKGameCenterViewController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
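One detail worth ruling out, offered as a guess rather than a confirmed fix: the snippet above sets the access point's location but never activates it, and iOS 26 may be stricter about that than 18.4.

import GameKit

// Inside the authenticateHandler, once the player is signed in:
if GKLocalPlayer.local.isAuthenticated {
    GKAccessPoint.shared.isActive = true   // explicitly show the access point
}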
5 replies · 2 boosts · 379 views · Sep ’25
PhotogrammetrySession fails with internal errors 4011 / 4012 when using iOS Object Capture (Area Mode) images
Hi all, I'm running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode. When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:

ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError

I use the "Images" directory exported directly from Object Capture with my iPhone 12 Pro Max (which has LiDAR) set to "area mode" in the Object Capture app. Here is example metadata from one HEIC image in the sequence:

heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
  tiles: 6x8, tile size: 512x512
  colorspace: YCbCr, 4:2:0
  bit depth: 8
  thumbnail: 240x320
  color profile: nclx
  alpha channel: no
  depth channel: yes
    size: 192x256
    bits per pixel: 8
    z-near: 1.173828
    z-far: 2.552734
    d-min: undefined
    d-max: undefined
    representation: uniform Z
metadata:
  Exif: 960 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
  uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
  uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
  uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
  uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
  uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
  angle (ccw): 270
region annotations: none
properties:
  camera intrinsic matrix:
    focal length: 2813.695557; 2813.695557
    principal point: 1522.338502; 2002.843018
    skew: 0.000000
  camera extrinsic matrix:
    rotation matrix:
      -0.695  0.344 -0.632
       0.007 -0.875 -0.483
      -0.719 -0.340  0.606

Questions:
• What do internal error codes 4011 and 4012 refer to?
• Is there something specific about Area mode captures that requires preprocessing before they're compatible with PhotogrammetrySession?
• Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?

NOTE: I can provide the folder of images for debugging if that would help!
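For anyone trying to reproduce this, a minimal macOS reconstruction loop, assuming the standard RealityKit PhotogrammetrySession API (the paths are placeholders and the configuration shown is not tuned for Area mode):

import Foundation
import RealityKit

func reconstruct() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/Images")       // placeholder
    let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")     // placeholder

    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .sequential
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: inputFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")   // where the 4011/4012-backed error surfaces
        default:
            break
        }
    }
}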
1 reply · 2 boosts · 931 views · Jul ’25
Per-vertex color in a custom RealityKit mesh? (macOS)
I'm working on an application for viewing AMF models on macOS, using RealityKit. AMF supports several different ways to color models, including per-vertex color (where the color of a triangle is interpolated from vertex to vertex) as well as per-face color (where the color of the triangle is the same across the entire face). I'm trying to figure out how to support those color models using a RealityKit mesh. Apple's documentation (https://developer.apple.com/documentation/realitykit/modifying-realitykit-rendering-using-custom-materials) talks about per-vertex colors, but I haven't found a way to create a mesh that includes per-vertex colors, other than using a texture map (which might be the correct solution). Can someone give me some pointers?
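In case the texture-map route turns out to be the pragmatic answer, here is a heavily hedged sketch of baking one texel per vertex and pointing each vertex's UV at it. This only approximates true barycentric per-vertex interpolation (colors blend along the UV line between texels), so it suits per-face or smoothly varying colors better than arbitrary gradients; all names are illustrative:

import CoreGraphics
import RealityKit

func makePerVertexColorModel(positions: [SIMD3<Float>],
                             indices: [UInt32],
                             colors: [SIMD4<UInt8>]) throws -> ModelEntity {
    // 1. Build an N x 1 RGBA image holding one texel per vertex.
    let width = colors.count
    var pixels = colors.flatMap { [$0.x, $0.y, $0.z, $0.w] }
    let image: CGImage = pixels.withUnsafeMutableBytes { buffer in
        let context = CGContext(data: buffer.baseAddress, width: width, height: 1,
                                bitsPerComponent: 8, bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
        return context.makeImage()!
    }
    let texture = try TextureResource.generate(from: image, options: .init(semantic: .color))

    // 2. Give every vertex a UV pointing at the center of "its" texel.
    let uvs: [SIMD2<Float>] = (0..<width).map {
        SIMD2((Float($0) + 0.5) / Float(width), 0.5)
    }

    var descriptor = MeshDescriptor(name: "perVertexColor")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [material])
}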
6 replies · 2 boosts · 1.8k views · Nov ’25
Roblox freezing on macOS 26.1 Beta
After updating to macOS 26.1 I encountered an issue where Roblox tends to freeze quite often, for 10-60 seconds at a time, which is really annoying since I play the game a lot. My theory is that it's something like a driver issue with Metal. I have reinstalled macOS, reinstalled the game, and lowered the performance settings manually, but nothing is working. Wondering if you could help, when it will be fixed, and if others are having the same issue. Many thanks, William.
4 replies · 2 boosts · 730 views · Oct ’25
Are there complete code examples available for “Combine Metal 4 machine learning and graphics”?
Hello, I recently watched the WWDC2025 session titled "Combine Metal 4 machine learning and graphics" (https://developer.apple.com/videos/play/wwdc2025/262/), and I'm very excited about the new Metal 4 features that integrate machine learning with graphics, such as neural ambient occlusion, shader-based ML inference, and the use of MTLTensor and MTL4MachineLearningCommandEncoder. While the session includes helpful code snippets and a compelling debug demo (e.g., the neural ambient occlusion example), the implementation details are not fully shown, and I haven't been able to find a complete, runnable sample project that demonstrates end-to-end integration of ML and rendering in Metal 4. Would Apple be able to provide a full, working example, such as an Xcode project, that shows how to:
• Export a model to an .mlpackage,
• Convert it to an .mtlpackage,
• Use MTL4MachineLearningCommandEncoder alongside render passes,
• Or embed small neural networks directly in shaders using Shader ML?
Having such a sample would greatly help developers like me adopt these powerful new capabilities correctly and efficiently. Thank you very much for your time and support! Best regards,
4 replies · 2 boosts · 860 views · Nov ’25
RealityKit captureHighResolutionFrame from session is broken on iOS 26?
A bit of background on what our app is doing: we have a RealityKit ARView session running, during which we place objects in RealityKit. At some point the user can "take a photo", and we use session.captureHighResolutionFrame to capture a frame. We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D.

The issue we found is that on devices running iOS 26, for the first photo the user takes, the first frame received from session.captureHighResolutionFrame gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo with the same camera position, the second frame received from session.captureHighResolutionFrame gives a correct CGPoint from frame.camera.projectPoint.

I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (it is inconsistent with any subsequent frame).

I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames.

Note: the yaw value seems to differ more if we start the session in portrait but take the photo in landscape.

Example result for 3 captured frames:

Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839

Example code:

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch {
            }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
3 replies · 1 boost · 548 views · Sep ’25
Struggles with attaching a ModelEntity to the skeleton joints of another ModelEntity
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly to and from joints, and, like any other SCNNode, you could access world position and world orientation for each joint. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this, as there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm. The translation information from the given transforms is insufficient to determine the location of a joint at any given time, and other approaches, like creating a GeometricPin for the given joint name and attaching it to another entity, do not seem to work. Something as convenient as attaching an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit. I am not the first person to notice this, and I am feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked: https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit Will this be addressed in some way?
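For what it's worth, a hedged sketch of one way to approximate a joint's world transform with the current API, assuming jointNames contains slash-separated paths from the skeleton root (as they typically appear) and that jointTransforms holds each joint's local transform relative to its parent; verify both assumptions against your model:

import RealityKit
import simd

// Accumulate local joint transforms along the path "root/hips/spine/...", then lift
// the result into world space via the model entity's own transform. A child entity
// can be re-posed from this every frame to follow e.g. a hand joint.
func worldTransform(ofJointNamed jointPath: String, in model: ModelEntity) -> float4x4? {
    let components = jointPath.split(separator: "/").map(String.init)
    guard !components.isEmpty else { return nil }

    var modelSpace = matrix_identity_float4x4
    for depth in 1...components.count {
        let ancestorPath = components.prefix(depth).joined(separator: "/")
        guard let index = model.jointNames.firstIndex(of: ancestorPath) else { return nil }
        modelSpace = modelSpace * model.jointTransforms[index].matrix
    }
    return model.transformMatrix(relativeTo: nil) * modelSpace
}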
5 replies · 2 boosts · 765 views · Jul ’25
Is migrating from ARView to RealityView recommended?
We're using RealityKit to create a science education AR app for iOS, iPadOS, and visionOS. In the WWDC25 session video "Bring your SceneKit project to RealityKit" https://developer.apple.com/videos/play/wwdc2025/288 at 8:15, it's explained that when using RealityKit, RealityView should be used in all cases, whereas in the past, SceneKit required SCNView, SceneView, or ARSCNView, depending on an app's requirements. Because the initial development of our app on iOS predates iOS 18's RealityView, our app currently uses ARView to render RealityKit AR content on iOS and iPadOS. Is it recommended that we migrate to RealityView, or can we safely continue using our existing ARView implementation? We'd prefer to avoid unnecessary development cost. If migrating from ARView to RealityView is recommended, what specific benefits should we expect from this transition? Thank you.
1 reply · 2 boosts · 184 views · Jun ’25
PingFang.ttc font file is missing in iOS 18.0
I'm an iOS developer, and I've been testing our app on the iOS 18.0 beta. I noticed that there's a problem with font rendering, and after troubleshooting, I've found that it's caused by the removal of the PingFang.ttc font in 18.0. I would like to ask the reason for removing this font file and which font should be used to display Chinese in the future. My test device is an iPhone 11 Pro and the system version is iOS 18.0 (22A5297). I have also tested Beta 1 and it has the same issue. In previous versions of the system, the PingFang font is located in the directory /System/Library/Fonts/LanguageSupport/PingFang.ttc. But in iOS 18.0, the font file in this directory has become Kohinoor.ttc, and I've tested that this font can't display Chinese either. I traversed the following system font directories and could not find the PingFang.ttc font file:
/System/Library/Fonts/AppFonts
/System/Library/Fonts/Core
/System/Library/Fonts/CoreAddition
/System/Library/Fonts/CoreUI
/System/Library/Fonts/LanguageSupport
/System/Library/Fonts/UnicodeSupport
/System/Library/Fonts/Watch
Looking for answers, thanks for the help!
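One quick check that may be worth doing before treating the file's removal as the whole story, since the on-disk layout isn't API: see whether the PingFang faces still resolve by name at runtime.

import UIKit

// Check whether the PingFang faces are still resolvable by PostScript name,
// independent of where (or whether) PingFang.ttc exists on disk.
let candidates = ["PingFangSC-Regular", "PingFangSC-Semibold", "PingFangTC-Regular"]
for name in candidates {
    print(name, UIFont(name: name, size: 17) != nil ? "is available" : "is NOT available")
}

// Also list any registered font families mentioning PingFang.
let pingFangFamilies = UIFont.familyNames.filter { $0.localizedCaseInsensitiveContains("PingFang") }
print("PingFang families:", pingFangFamilies)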
9 replies · 2 boosts · 6k views · Apr ’25
Spatial Scene API for iOS Apps
As part of the WWDC25 Keynote, a technology was announced that can present 2D images as 3D spatial scenes. This announcement is supported by a Press Release. ...developers can use the Spatial Scene API to make their app experience even more immersive. Zillow is taking advantage of the API for their Zillow Immersive app, allowing users to see images of homes and apartments with the rich depth and dimension that spatial scenes offer. The feature also appears in the Photos App on iOS 26 Developer Beta 1. Tapping "Spatial Scene" on any photo opens a view of that photo with a parallax effect. I've searched the WWDC sessions and new documentation and have come up short. Reaching out here for help. Is there any documentation for Spatial Scene API? Or any guidance on how to implement the spatial scene in iOS?
1 reply · 2 boosts · 300 views · Jun ’25
SpriteKit framerate drop on iOS 26.0
Hello, I have noticed a performance drop on SpriteKit-based projects running on iOS 26.0 (23A341). Below is a SpriteKit scene used to test framerate on different devices:

import SpriteKit
import SwiftUI

class BareboneScene: SKScene {
    override func didMove(to view: SKView) {
        size = view.bounds.size
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = .darkGray

        let roundedSquare = SKShapeNode(rectOf: CGSize(width: 150, height: 75), cornerRadius: 12)
        roundedSquare.fillColor = .systemRed
        roundedSquare.strokeColor = .black
        roundedSquare.lineWidth = 3
        addChild(roundedSquare)

        let action = SKAction.rotate(byAngle: .pi, duration: 1)
        roundedSquare.run(.repeatForever(action))
    }
}

struct BareboneSceneView: View {
    var body: some View {
        SpriteView(
            scene: BareboneScene(),
            debugOptions: [.showsFPS]
        )
        .ignoresSafeArea()
    }
}

#Preview {
    BareboneSceneView()
}

The scene is very simple, yet framerate drops to ~40 fps as shown by the Metal HUD. Tested on:
• iPhone 13, iOS 26.0: framerate drops to 40 fps. Sometimes it runs at near 60 fps, but if the screen is touched repeatedly, the framerate drops to 40-50 fps again.
• iPhone 11 Pro, iOS 26.0: ~40 fps.
• iPad 9th Gen, iOS 18.6.2: 60 fps, no issues.
See screenshots attached. These numbers were observed by me and members of our beloved SpriteKit Discord server. Thank you for your attention.
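One more data point that may be worth gathering (not a fix): pin the SpriteView to an explicit frame rate and see whether the Metal HUD still reports ~40 fps, to separate render-loop pacing from actual rendering cost. The preferredFramesPerSecond parameter is part of SpriteView's initializer; BareboneScene is the scene from the snippet above.

import SpriteKit
import SwiftUI

struct PinnedFrameRateSceneView: View {
    var body: some View {
        SpriteView(
            scene: BareboneScene(),
            preferredFramesPerSecond: 60,   // request 60 explicitly instead of default pacing
            debugOptions: [.showsFPS]
        )
        .ignoresSafeArea()
    }
}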
11 replies · 5 boosts · 1.9k views · 1w
How to attach SwiftUI Views to entities on non-visionOS platforms?
What is the recommended way to attach SwiftUI views to RealityKit entities on macOS, iOS, etc.? All the APIs seem to be visionOS-only:
https://developer.apple.com/documentation/realitykit/realityviewattachments
https://developer.apple.com/documentation/realitykit/viewattachmentcomponent
https://developer.apple.com/documentation/realitykit/presentationcomponent
https://developer.apple.com/documentation/realitykit/imagepresentationcomponent
My only idea is to do it "manually" with a ZStack and RealityView somehow? I submitted this as feedback since it seemed like an oversight: FB18034856.
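A minimal sketch of the "manual" route on iOS, assuming an ARView-based setup: project the entity's world position into view space and pin a SwiftUI view there with an overlay. ARViewContainer is a hypothetical UIViewRepresentable wrapper around the ARView, and driving the update from a Timer is a simplification (a SceneEvents.Update subscription would be tighter):

import Combine
import RealityKit
import SwiftUI

struct AttachedLabelView: View {
    let arView: ARView                 // the view already rendering your scene
    let target: Entity                 // the entity to "attach" the label to
    @State private var screenPoint: CGPoint?

    var body: some View {
        ARViewContainer(arView: arView)    // hypothetical UIViewRepresentable wrapper
            .overlay {
                if let screenPoint {
                    Text("Attached label")
                        .padding(6)
                        .background(.thinMaterial, in: Capsule())
                        .position(screenPoint)
                }
            }
            .onReceive(Timer.publish(every: 1.0 / 60.0, on: .main, in: .common).autoconnect()) { _ in
                // Project the entity's world position into the ARView's coordinate space.
                screenPoint = arView.project(target.position(relativeTo: nil))
            }
    }
}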
0 replies · 2 boosts · 116 views · Jun ’25