Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under Graphics & Games topic

Post · Replies · Boosts · Views · Activity

MetalFX
Recently, I adopted MetalFX for the upscaling feature. However, I have encountered a persistent build failure for the iOS Simulator with the error message 'MetalFX is not available when building for iOS Simulator.' To address this, I set MetalFX.framework to 'Optional' under Build Phases > Link Binary With Libraries, which adds the -weak_framework linker option. Despite this adjustment, the build still fails. Furthermore, the MetalFX sample application provided by Apple, found at https://developer.apple.com/documentation/metalfx/applying-temporal-antialiasing-and-upscaling-using-metalfx, also fails to build for the iOS Simulator target. Has anyone encountered this issue?
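For reference, a minimal sketch of the usual workaround, assuming the MTLFX types are only referenced from code that is compiled out of Simulator builds; weak-linking alone does not help if MetalFX symbols are still referenced unconditionally. The helper name and formats below are illustrative, not from the post.

```swift
import Metal
#if canImport(MetalFX) && !targetEnvironment(simulator)
import MetalFX
#endif

// Returns an MTLFXSpatialScaler on device builds, nil on the Simulator.
func makeSpatialUpscaler(device: MTLDevice,
                         inputSize: (width: Int, height: Int),
                         outputSize: (width: Int, height: Int)) -> Any? {
    #if canImport(MetalFX) && !targetEnvironment(simulator)
    let descriptor = MTLFXSpatialScalerDescriptor()
    descriptor.inputWidth = inputSize.width
    descriptor.inputHeight = inputSize.height
    descriptor.outputWidth = outputSize.width
    descriptor.outputHeight = outputSize.height
    descriptor.colorTextureFormat = .rgba16Float
    descriptor.outputTextureFormat = .rgba16Float
    return descriptor.makeSpatialScaler(device: device)
    #else
    return nil   // Simulator builds render at native resolution instead.
    #endif
}
```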
Replies: 3 · Boosts: 0 · Views: 750 · Activity: Mar ’25
Unable to play .aivu with VideoPlayerComponent
I’m trying to play an Apple Immersive video in the .aivu format using VideoPlayerComponent, following the official documentation here: https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent

Here is a simplified version of the code I'm running in another application:

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()
            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo
            player.play()
            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}

Full code is here: https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample

But the video does not play in my project even though the file is correct (it can be played in Apple Immersive Video Utility), and I’m getting these errors when the app crashes:

App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.

The video I’m using is the official sample, which can be found here, but I have tried several different files shot by my clients and the same errors are displayed, so the issue is definitely not the files but something on the RealityKit side: https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video

Steps to reproduce the issue:
- Open the AIVUPlayerSample project and run. Look at the logs.
- All code can be found in ImmersiveView.swift.
- The sample file is included in the project.

Expected results: if I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.

Am I doing something wrong in the code? I'm basically following the documentation here. Feedback ticket: FB19971306
Replies: 3 · Boosts: 0 · Views: 810 · Activity: Aug ’25
Anchor a Reality scene on an image anchor
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. But I have had no luck so far making it work and need some guidance to move on.

I have the image file stored in the assets like below:

And below is the source code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    imageEntity.addChild(scene)
                    content.add(imageEntity)
                }
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}
Replies: 3 · Boosts: 0 · Views: 1.2k · Activity: Sep ’25
BlendShapes don’t animate while playing animation in RealityKit
Hi everyone, I’m running into an issue with RealityKit when trying to animate BlendShapes (shape keys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender).

The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but the visual changes are not visible. The exact same blend shape animation works perfectly when no animation is playing. In SceneKit the same model works as expected: shape keys get animated during animation playback. But not in RealityKit: as soon as an animation starts, the shape keys don’t animate anymore.

Here’s a test project on GitHub that demonstrates the issue clearly: https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample

The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing. Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates? Thanks in advance for any insights.
Replies: 3 · Boosts: 3 · Views: 294 · Activity: Jul ’25
MDLAsset loads texture in usdz file with wrong colorspace
I have a very basic usdz file from this repo. I call loadTextures() after loading the usdz via MDLAsset. Inspecting the MDLTexture object, I can tell it is assigning a colorspace of linear RGB instead of sRGB, although the image file in the usdz is sRGB. This causes the textures to ultimately render as oversaturated. In the code I later convert the MDLTexture to an MTLTexture via MTKTextureLoader, but if I set the SRGB option it seems to be ignored. This significantly impacts the usefulness of Model I/O if it can't load a simple usdz texture correctly. Am I missing something? Thanks!
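A hedged workaround sketch, not from the post: request sRGB decoding up front via MTKTextureLoader, and fall back to an sRGB texture view if the option appears to be ignored. The helper name is illustrative.

```swift
import MetalKit
import ModelIO

// Loads an MDLTexture into an MTLTexture, forcing an sRGB interpretation.
func loadBaseColorTexture(from mdlTexture: MDLTexture, device: MTLDevice) throws -> MTLTexture? {
    let loader = MTKTextureLoader(device: device)
    let texture = try loader.newTexture(texture: mdlTexture, options: [
        .SRGB: true as NSNumber          // request sRGB decoding up front
    ])
    // Fallback: reinterpret the same pixels through an sRGB view (no copy),
    // in case the loader option had no effect.
    if texture.pixelFormat == .rgba8Unorm {
        return texture.makeTextureView(pixelFormat: .rgba8Unorm_srgb)
    }
    return texture
}
```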
Replies: 3 · Boosts: 3 · Views: 850 · Activity: Dec ’24
Xcode Playground - The LLDB RPC server has crashed.
I am trying to learn Metal development on my MacBook Pro M1 Pro (Sequoia 15.3.1) in an Xcode Playground, but when I write these two lines of code:

import Metal
let device = MTLCreateSystemDefaultDevice()!

I get the error "The LLDB RPC server has crashed." Any ideas as to what I can do to solve this? I have rebooted the machine and reinstalled Xcode...
Replies: 3 · Boosts: 0 · Views: 495 · Activity: Mar ’25
JPEG2000 (JP2) Decoding Works on iOS 16 but Fails on iOS 18
I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it. On iOS 16, the following code works fine:

import ImageIO
import UIKit

func getUIImage(from imageData: [UInt8]) -> UIImage? {
    let data = Data(imageData)
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
        print("Failed to decode JP2 image!")
        return nil
    }
    return UIImage(cgImage: cgImage)
}

However, on iOS 18, this fails with errors like:

initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]

Questions:
- Did Apple remove or modify JPEG2000 support in iOS 18?
- Is there an official workaround for decoding JPEG2000 on iOS 18?
- Should I use Vision/Metal/Core Image instead?
- Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
- Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?

Additional info:
- The UInt8 array has a valid JPEG2000 header (0x00 0x00 0x00 0x0C 6A 50 ...).
- The image works on iOS 16 but fails on iOS 18.
- Tested on an iPhone running the iOS 18.0 beta.

Any insights on how to handle JPEG2000 decoding in iOS 18 would be greatly appreciated! 🚀
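One hedged variation on the snippet above that may be worth trying: pass an explicit JPEG2000 type-identifier hint when creating the image source. This is an assumption-level experiment, not a documented fix for the iOS 18 behaviour.

```swift
import ImageIO
import UIKit

// Same decode path as above, but with an explicit UTI hint for the image source.
func decodeJP2(_ bytes: [UInt8]) -> UIImage? {
    let data = Data(bytes) as CFData
    let sourceOptions = [kCGImageSourceTypeIdentifierHint: "public.jpeg-2000" as CFString] as CFDictionary
    guard let source = CGImageSourceCreateWithData(data, sourceOptions),
          let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```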
Replies: 3 · Boosts: 0 · Views: 341 · Activity: Mar ’25
Updated Object Capture -- needs LiDAR?
I have two apps released -- ReefScan and ReefBuild -- that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app with the current Xcode results in a non-working app. Has the "old" version of PhotogrammetrySession been broken by this update? It worked very well previously, so I would hate to see this regression to needing LiDAR. Most of my users do not have it.
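For context, a minimal sketch of the WWDC21-style session creation from a plain folder of photos, which does not itself require depth or LiDAR data; the configuration values below are illustrative, not from the post.

```swift
import RealityKit

// Creates a photogrammetry session from a folder of still images.
func makeSession(from imagesFolder: URL) throws -> PhotogrammetrySession {
    var config = PhotogrammetrySession.Configuration()
    config.featureSensitivity = .high       // helps with low-texture scenes such as coral
    config.isObjectMaskingEnabled = false   // reconstruct the full frame, not an isolated object
    // Input is a plain folder of photos; no depth or LiDAR data is supplied here.
    return try PhotogrammetrySession(input: imagesFolder, configuration: config)
}
```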
Replies: 3 · Boosts: 0 · Views: 552 · Activity: Mar ’25
Metal useResource vs. MTLFence
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by a render pass and then consumed by a compute pass, but when I use an MTLFence to signal and wait between the render and compute encoders, the artifact goes away. The resource is created with MTLHazardTrackingModeTracked, and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings or errors. Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation, and it looks like useResource should handle synchronization given that the resource has MTLHazardTrackingModeTracked; on the other hand, MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use them?
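A minimal sketch of the explicit fence-based ordering described above, assuming a single command buffer whose render pass writes `texture` and whose compute pass reads it. Here useResource(_:usage:) is shown only as the residency declaration for indirect (argument-buffer-style) access; the fence is what orders the two encoders. The function and its parameters are illustrative.

```swift
import Metal

func encodeFrame(commandBuffer: MTLCommandBuffer,
                 renderPassDescriptor: MTLRenderPassDescriptor,
                 renderPipeline: MTLRenderPipelineState,
                 computePipeline: MTLComputePipelineState,
                 texture: MTLTexture,
                 fence: MTLFence) {
    // 1. Render pass produces the texture.
    if let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
        renderEncoder.setRenderPipelineState(renderPipeline)
        // ... draw calls that write into `texture` ...
        renderEncoder.updateFence(fence, after: .fragment)   // signal once fragment work is done
        renderEncoder.endEncoding()
    }

    // 2. Compute pass consumes it.
    if let computeEncoder = commandBuffer.makeComputeCommandEncoder() {
        computeEncoder.waitForFence(fence)                   // wait before any compute work starts
        computeEncoder.setComputePipelineState(computePipeline)
        computeEncoder.useResource(texture, usage: .read)    // residency for indirect access
        // ... dispatch calls that read `texture` ...
        computeEncoder.endEncoding()
    }
}
```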
Replies: 3 · Boosts: 0 · Views: 149 · Activity: Jul ’25
CGSetDisplayTransferByTable no longer working on macOS Tahoe
For an app of mine, I use CGSetDisplayTransferByTable to adjust the gamma table of the display. Since macOS Tahoe, these modifications are silently ignored: the display's actual gamma curve remains unchanged despite the API reporting successful completion. I filed a Feedback for it a few weeks ago and would love to figure out what could be causing this. FB18559786
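For reference, a minimal sketch of the call in question, assuming a simple power-law ramp applied to the main display; the helper is illustrative, and the report above is that it returns success on Tahoe without any visible effect.

```swift
import Foundation
import CoreGraphics

// Builds a 256-entry gamma ramp and applies it to one display.
func applyGamma(_ gamma: Float, to display: CGDirectDisplayID = CGMainDisplayID()) -> CGError {
    let tableSize = 256
    var red = [CGGammaValue](repeating: 0, count: tableSize)
    var green = [CGGammaValue](repeating: 0, count: tableSize)
    var blue = [CGGammaValue](repeating: 0, count: tableSize)

    for i in 0..<tableSize {
        let value = powf(Float(i) / Float(tableSize - 1), 1.0 / gamma)
        red[i] = value
        green[i] = value
        blue[i] = value
    }
    return CGSetDisplayTransferByTable(display, UInt32(tableSize), &red, &green, &blue)
}
```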
Replies: 3 · Boosts: 1 · Views: 305 · Activity: Aug ’25
CIBumpDistortion filter not working on my view
I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying glass effect over the text. However, despite my attempts, the filter doesn't seem to render at all. The view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.

//
//  TVView.swift
//  Mistery
//
//  Created by Joje on 31/07/25.
//

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation

final class TVView: UIView {

    // text animation properties
    private var textAnimationTimer: Timer?
    private var fullTextToAnimate: String = ""
    private var currentCharIndex: Int = 0

    // static video properties
    private var player: AVQueuePlayer?
    private var playerLayer: AVPlayerLayer?
    private var playerLooper: AVPlayerLooper?

    var onNextButtonTap: () -> Void = {}

    // MARK: - Subviews

    // TV image
    private(set) lazy var tvImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.image = UIImage(named: "tvFinal")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()

    // text that scrolls inside the TV
    private(set) lazy var storyLabel: UILabel = {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        //label.backgroundColor = .gray
        label.textColor = .red
        label.font = UIFont(name: "MeltedMonster", size: 30)
        label.textAlignment = .left
        label.numberOfLines = 0
        label.text = ""
        return label
    }()

    private(set) lazy var nextButton: UIButton = {
        let button = UIButton(type: .system)
        button.translatesAutoresizingMaskIntoConstraints = false
        //button.backgroundColor = .darkGray
        button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
        return button
    }()

    // MARK: - Lifecycle

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .black
        setupVideoPlayer()
        addSubviews()
        setupConstraints()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05, dy: tvImageView.frame.height * 0.18)
        setupFisheyeEffect()
    }

    private func setupFisheyeEffect() {
        // create the filter
        guard let filter = CIFilter(name: "CIBumpDistortion") else { return print("error") }
        storyLabel.layer.shouldRasterize = true
        storyLabel.layer.rasterizationScale = UIScreen.main.scale

        // set the parameters
        filter.setDefaults()
        // effect center
        let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
        filter.setValue(center, forKey: kCIInputCenterKey)
        // distortion radius
        filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
        // distortion intensity
        filter.setValue(7, forKey: kCIInputScaleKey)

        storyLabel.layer.filters = [filter]
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - Button actions

    @objc private func didPressNextButton() {
        onNextButtonTap()
    }

    @objc private func animateNextCharacter() {
        guard currentCharIndex < fullTextToAnimate.count else {
            textAnimationTimer?.invalidate()
            return
        }
        let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
        let partialText = String(fullTextToAnimate[...currentTextIndex])
        storyLabel.text = partialText
        currentCharIndex += 1
    }

    public func updateStoryText(with text: String) {
        textAnimationTimer?.invalidate()
        storyLabel.text = ""
        fullTextToAnimate = text
        currentCharIndex = 0
        textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12, target: self, selector: #selector(animateNextCharacter), userInfo: nil, repeats: true)
    }

    // MARK: - Setup methods

    private func setupVideoPlayer() {
        guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
            print("Error: could not find the video file static-video.mov")
            return
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player = AVQueuePlayer(playerItem: playerItem)
        // LINE WITH POSSIBLE ERROR
        playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.videoGravity = .resizeAspectFill
        if let layer = playerLayer {
            self.layer.addSublayer(layer)
        }
        player?.play()
    }

    private func addSubviews() {
        self.addSubview(storyLabel)
        self.addSubview(tvImageView)
        self.addSubview(nextButton)
    }

    private func setupConstraints() {
        NSLayoutConstraint.activate([
            // TV Image
            tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
            tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            tvImageView.widthAnchor.constraint(equalTo: widthAnchor),

            // TV Text
            storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
            storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
            storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),

            // TV Button
            nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
            nextButton.widthAnchor.constraint(equalToConstant: 100),
            nextButton.heightAnchor.constraint(equalToConstant: 160)
        ])
    }
}

#Preview {
    ViewController()
}
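Since CALayer.filters is generally ignored on iOS, a minimal sketch of applying the same filter with Core Image directly, assuming a snapshot-based approach is acceptable; the helper is illustrative and the result would still need to be displayed, for example in a UIImageView placed over the label.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit

// Renders a view into an image, applies CIBumpDistortion, and returns the result.
func bumpDistorted(view: UIView, scale: Float = 0.7) -> UIImage? {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    let snapshot = renderer.image { _ in
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }
    guard let input = CIImage(image: snapshot) else { return nil }

    let filter = CIFilter.bumpDistortion()
    filter.inputImage = input
    filter.center = CGPoint(x: input.extent.midX, y: input.extent.midY)
    filter.radius = Float(input.extent.width) / 2
    filter.scale = scale

    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```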
Replies: 3 · Boosts: 0 · Views: 142 · Activity: Sep ’25
CAMetalLayer nextDrawable crash
Hi, my application hits the crash backtrace below at a very low repro rate from public users; I do not see it relating to a specific iOS version or iPhone model. The last code line from my application is a call to the CAMetalLayer nextDrawable API. I did some basic studying, and I suppose it may relate to a wrong CAMetalLayer configuration, such as:
- frame property w or h <= 0.0
- bounds property w or h <= 0.0
- drawableSize w or h <= 0.0, or w or h > the max value (like 16384)

Not sure if my thinking above is right or not. Could the UIView that my CAMetalLayer is attached to cause such a nextDrawable crash? Thanks a lot.

Main Thread - Crashed
libsystem_kernel.dylib __pthread_kill
libsystem_c.dylib abort
libsystem_c.dylib __assert_rtn
Metal MTLReportFailure.cold.1
Metal MTLReportFailure
Metal _MTLMessageContextEnd
Metal -[MTLTextureDescriptorInternal validateWithDevice:]
AGXMetalA13 0x245b1a000 + 4522096
QuartzCore allocate_drawable_texture(id<MTLDevice>, __IOSurface*, unsigned int, unsigned int, MTLPixelFormat, unsigned long long, CAMetalLayerRotation, bool, NSString*, unsigned long)
QuartzCore get_unused_drawable(_CAMetalLayerPrivate*, CAMetalLayerRotation, bool, bool)
QuartzCore CAMetalLayerPrivateNextDrawableLocked(CAMetalLayer*, CAMetalDrawable**, unsigned long*)
QuartzCore -[CAMetalLayer nextDrawable]
SpaceApp -[MetalRender renderFrame:] MetalRenderer.mm:167
SpaceApp -[FrameBuffer acceptFrame:] VideoRender.mm:173
QuartzCore CA::Display::DisplayLinkItem::dispatch_(CA::SignPost::Interval<(CA::SignPost::CAEventCode)835322056>&)
QuartzCore CA::Display::DisplayLink::dispatch_items(unsigned long long, unsigned long long, unsigned long long)
QuartzCore CA::Display::DisplayLink::dispatch_deferred_display_links(unsigned int)
UIKitCore _UIUpdateSequenceRun
UIKitCore schedulerStepScheduledMainSection
UIKitCore runloopSourceCallback
CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__
CoreFoundation __CFRunLoopDoSource0
CoreFoundation __CFRunLoopDoSources0
CoreFoundation __CFRunLoopRun
CoreFoundation CFRunLoopRunSpecific
GraphicsServices GSEventRunModal
UIKitCore -[UIApplication _run]
UIKitCore UIApplicationMain
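A minimal defensive check before requesting a drawable, assuming the crash is the texture-descriptor validation assert triggered by a zero-sized or oversized drawableSize; the limit constant and helper name are illustrative.

```swift
import QuartzCore
import Metal

// Returns a drawable only when the layer's geometry looks valid for texture creation.
func nextDrawableIfValid(from layer: CAMetalLayer) -> CAMetalDrawable? {
    let size = layer.drawableSize
    let maxDimension: CGFloat = 16384   // typical MTLTexture dimension limit on A-series GPUs
    guard size.width >= 1, size.height >= 1,
          size.width <= maxDimension, size.height <= maxDimension,
          layer.bounds.width > 0, layer.bounds.height > 0 else {
        return nil   // skip this frame instead of hitting the texture-descriptor assert
    }
    return layer.nextDrawable()
}
```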
Replies: 3 · Boosts: 0 · Views: 357 · Activity: Jul ’25
Is there any future for screensavers on macOS?
I haven't been looking at screensavers for a long time because of Apple's lack of will (or resources?) to provide a public version of the private modern SDK that Apple itself has used for a very long time now. I'm now looking at the Screen Saver pane in System Settings (the What-If version of System Preferences in an alternate universe where all screens are in portrait mode). In macOS Sequoia, it seems like 3rd party screensavers are not welcome, considering that they are relegated to the "Other" section at the bottom of the list and you have to click Show All to start seeing them. I also had a quick look at macOS Tahoe Beta 3, and it looks like all the real screensavers are gone (3rd party and the ones from Apple: Hello, Message, Flurry, etc.), or at least it takes a Nobel Prize to find them (and the Search field is not useful). I tried to install a 3rd party screen saver on macOS Tahoe Beta 3; it doesn't show up in the list.

To summarize:
- No public access to modern APIs, AFAIK.
- A UI that is hostile to 3rd party screen savers on macOS Sequoia.
- Apparently only screensavers that are slideshows or movies curated by Apple in macOS Tahoe b3.

Hence the question: is there any future for screen savers on macOS? Because if there's none, I won't waste my time trying to update some old screen savers.
Replies: 3 · Boosts: 0 · Views: 523 · Activity: Aug ’25
GestureComponent does not support DragGesture
The following code using the new GestureComponent demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code; I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors, it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
Replies: 3 · Boosts: 1 · Views: 388 · Activity: Jul ’25
VisionOS VideoMaterial on 3D Mesh
I'm trying to get a video material to work on an imported 3D asset, a USDC file. There's actually an example in this WWDC video from Apple: you can see it running on the flag of the airplane at 10:34. But there is no sample code for it, and there are no other examples on the internet. Does anybody know how to do this? https://developer.apple.com/documentation/realitykit/videomaterial
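A minimal sketch of one way to do this, assuming the target mesh is a named ModelEntity inside the imported USDC scene; "FlagMesh" and the video URL are placeholder assumptions, not names from the post or the WWDC session.

```swift
import RealityKit
import AVFoundation

// Replaces the materials of one sub-mesh of an imported scene with a VideoMaterial.
func applyVideoMaterial(to root: Entity, videoURL: URL) {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)

    if let flag = root.findEntity(named: "FlagMesh"),
       var modelComponent = flag.components[ModelComponent.self] {
        // Swap every material slot on the mesh for the video material.
        modelComponent.materials = modelComponent.materials.map { _ in material as any Material }
        flag.components.set(modelComponent)
    }
    player.play()
}
```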
Replies: 2 · Boosts: 0 · Views: 913 · Activity: 1w
How to apply a geometry modifier to a VideoMaterial in RealityKit?
I have a model that uses a video material as the surface shader, and I need to also use a geometry modifier on the material. This seemed like it would be promising (adapted from https://developer.apple.com/wwdc21/10075, ~5m 50s):

// Did the setup for the video and AVPlayer, eventually leading me to
let videoMaterial = VideoMaterial(avPlayer: avPlayer)

// Assign the material to the entity
entity.model!.materials = [videoMaterial]

// The part shown in WWDC: the library and geometry modifier were set up before,
// so now try to map the new custom material over the video material
entity.model!.materials = entity.model!.materials.map { baseMaterial in
    try! CustomMaterial(from: baseMaterial, geometryModifier: geometryModifier)
}

But I get the following error:

Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: RealityFoundation.CustomMaterialError.defaultSurfaceShaderForMaterialNotFound

How can I apply a geometry modifier to a VideoMaterial? Or, if I can't do that, is there an easy way to route the AVPlayer video data into the baseColor of a CustomMaterial?
Replies: 2 · Boosts: 0 · Views: 1.1k · Activity: 1w
Unknown CHHapticError.Code (1852797029 == 'nope') in iOS 18+ on iPhone 11 Pro
Hello, I'm getting this error when launching a SpriteKit Swift game in iOS 18+ on an iPhone 11 Pro whose shell is partly damaged in the back:

CHHapticEngine.mm:1206 -[CHHapticEngine doStartWithCompletionHandler:]_block_invoke: ERROR: Player start failed: The operation couldn’t be completed. (com.apple.CoreHaptics error 1852797029.)

Haptics do not work on this device, due to the damaged shell, so some error — which obviously occurs when calling start(completionHandler:) — is definitely expected; what is not expected is the main thread sometimes blocking for up to 5 seconds, although the method is not called from the main thread... the error itself is always displayed from some other secondary (system) thread. During this time, the main thread does not access the haptics engine at all; on average, it blocks once every four or five launches. In each launch (blocking or not), the 'nope' error is displayed ~5 seconds after trying to start the engine. After going nuts with all kinds of breakpoints and instrumentation, I'm at a loss as to why the main thread would sometimes block... Ideas, anyone? Thank you, D.
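A hedged sketch of one mitigation, assuming it is acceptable to check hardware capabilities first and to call the synchronous start() off the main thread instead of start(completionHandler:); this does not explain the blocking, it only keeps the failing start away from UI work. The helper is illustrative.

```swift
import CoreHaptics

// Starts the haptic engine on a background queue, or reports nil if unavailable.
func startHapticsIfSupported(completion: @escaping (CHHapticEngine?) -> Void) {
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else {
        completion(nil)                  // device reports no haptics support
        return
    }
    DispatchQueue.global(qos: .utility).async {
        do {
            let engine = try CHHapticEngine()
            try engine.start()           // synchronous start, never on the main thread here
            completion(engine)
        } catch {
            completion(nil)              // e.g. the 'nope' error on damaged hardware
        }
    }
}
```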
Replies: 2 · Boosts: 0 · Views: 186 · Activity: Jul ’25
RealityKit, Attachments - not working
The simplest RealityView { content, attachments in ... } causes "Contextual closure expects 1 argument but 2 were used in closure body." I have checked every example and I cannot understand why I get this error regardless of the content. Note: I have added Attachment(id: "test") to the attachments closure and get "Attachment not in scope." I imported both RealityKit and SwiftUI.
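For comparison, a minimal sketch of the two-parameter form, which only compiles when the attachments: trailing closure is also supplied; the id "test" matches the note above, and the text content is a placeholder.

```swift
import SwiftUI
import RealityKit

struct AttachmentTestView: View {
    var body: some View {
        RealityView { content, attachments in
            // The (content, attachments) closure signature is only valid when the
            // `attachments:` closure below is provided as well.
            if let panel = attachments.entity(for: "test") {
                panel.position = [0, 0, -1]
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "test") {
                Text("Hello")
            }
        }
    }
}
```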
Replies: 2 · Boosts: 0 · Views: 189 · Activity: 1w