Hi,
Since iOS 26 introduced the new Games app, I’ve noticed a problem when using a Nintendo Switch Pro Controller in wired USB-C mode, and also with third-party controllers that emulate it (like the GameSir X5 Lite).
In the Games app interface, only the L/R buttons respond, but the D-Pad and analog sticks don’t work at all. Once inside actual games, the controller works fine — the issue only affects the Games app UI.
What I’ve tested so far:
Xbox / PlayStation controllers → work fine in both wired and Bluetooth, including inside the Games app.
Switch Pro Controller (Bluetooth) → works fine, including in the Games app.
Switch Pro Controller (wired) → same issue as the X5 Lite, D-Pad and sticks don’t work in the Games app.
This makes it hard to use the new Games app launcher with these controllers, even though they work perfectly once a game is launched.
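For anyone who wants to check whether the input actually reaches the app outside the Games app UI, here is a minimal sketch using the GameController framework. The helper function is hypothetical and not tied to the Games app; it just logs D-Pad, left-stick, and button events from the first connected extended gamepad.

```swift
import GameController

// Hypothetical helper: log D-Pad and left-stick input from the first connected
// extended gamepad, to confirm whether events are delivered to the app at all.
func startLoggingControllerInput() {
    guard let gamepad = GCController.controllers().first?.extendedGamepad else {
        print("No extended gamepad connected")
        return
    }
    gamepad.dpad.valueChangedHandler = { _, x, y in
        print("D-Pad:", x, y)
    }
    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        print("Left stick:", x, y)
    }
    gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
        print("A button pressed:", pressed)
    }
}
```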
My question: is this an iOS bug (Apple needs to add proper support for wired Switch Pro controllers in the Games app), or something that Nintendo / GameSir would need to address?
Thanks in advance to anyone who can confirm this or provide more info.
Hello!
I'm developing a GPU (shader) language that targets multiple backends from a common frontend. I'd like to avoid round-tripping through Metal source and instead emit IR directly, just as I do with SPIR-V, to keep compilation fast and efficient.
I've been looking for a reference page describing Metal's IR. As far as I'm aware it exists, but I can't seem to find it anywhere.
Furthermore, if such a reference is available, is there also a toolkit for validating the output IR, and perhaps even running optimizations on it, much like spirv-tools does for SPIR-V?
Any help would be appreciated!
Thanks,
Gustav
The current metal-cpp examples target the desktop and use AppKit, which is not available on iOS. Are there any tips for developing with metal-cpp on a mobile device?
Hello there,
Currently, I'm attempting to create an interactive learning application with a 3D view. I've discovered the SceneKit framework, but I lack the knowledge needed to load, animate, and move objects. Could someone kindly suggest some good articles or tutorials on this topic?
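Not a tutorial, but a minimal sketch of the basic pattern (load a scene, find a node, animate it with an SCNAction); "scene.scn" and "box" are placeholder names, not real assets:

```swift
import SceneKit

// Load a scene from the app bundle, grab a node by name,
// and move it back and forth with an SCNAction.
let sceneView = SCNView(frame: .zero)
sceneView.scene = SCNScene(named: "scene.scn")   // placeholder file name
sceneView.allowsCameraControl = true

if let node = sceneView.scene?.rootNode.childNode(withName: "box", recursively: true) {
    let move = SCNAction.moveBy(x: 0, y: 1, z: 0, duration: 2)
    node.runAction(.repeatForever(.sequence([move, move.reversed()])))
}
```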
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in Reality Composer Pro:
But not when loading the asset and displaying it in the Simulator:
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
Hi, is there any way to use the front camera to do motion capture?
I want to recognize whether the user has raised their hands, using the front camera on iPhone.
I was able to do it with the back camera, but not the front.
If there is any sample code or documentation, I would be super happy.
Waiting for your reply!
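One alternative approach is the Vision framework's body-pose request, which is not tied to a specific camera, so it can run on front-camera frames (this is a different technique from ARKit body tracking). A rough sketch; the 0.3 confidence threshold and the wrist-above-shoulder rule are my own assumptions:

```swift
import Vision

// Returns true when both wrists are detected above the shoulders in image space.
func handsRaised(in pixelBuffer: CVPixelBuffer) throws -> Bool {
    let request = VNDetectHumanBodyPoseRequest()
    try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])
        .perform([request])
    guard let body = request.results?.first else { return false }

    let points = try body.recognizedPoints(.all)
    guard let leftWrist = points[.leftWrist], let rightWrist = points[.rightWrist],
          let leftShoulder = points[.leftShoulder], let rightShoulder = points[.rightShoulder],
          [leftWrist, rightWrist, leftShoulder, rightShoulder].allSatisfy({ $0.confidence > 0.3 })
    else { return false }

    // Vision's normalized coordinates have their origin at the bottom-left,
    // so a larger y means higher up in the image.
    return leftWrist.location.y > leftShoulder.location.y
        && rightWrist.location.y > rightShoulder.location.y
}
```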
Topic: Graphics & Games · SubTopic: RealityKit · Tags: Swift Student Challenge, Swift, ARKit, Swift Playground
I have an SPFx React application where I print the HTML page content using the default JavaScript window.print() functionality. Once I save the page as a PDF from the print preview window and open it using Adobe Acrobat, the links within the content (e.g., Google) are not clickable and appear as plain text.
I have tried printing random pages after searching for keywords in Google and saving them as PDFs, but unfortunately the links are not clickable there either.
To check whether it is an Adobe Acrobat issue, I performed the same print from Android devices and shared the PDF files to iOS devices; in that case, when opened using Adobe Acrobat, the links are clickable.
I am wondering whether this is related to how the default print functionality works on iPadOS and iOS devices. Any insights would be really helpful. Thanks!
Note: the links are clickable on macOS as well as on Windows.
#ios #ipados #javascript #spfx #react
Just wondering if anyone knows what it will take to hit greater than 60hz when targeting iPhone. If I set the preferredFramesPerSecond of an MTKView to 120, it works on the iPad, but on iPhone it never goes over 60hz, even with a simple hello triangle sample app... is this a limitation of targeting iPhone?
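A minimal sketch of the configuration in question, with a note about the Info.plist key that Apple documents for frame rates above 60 Hz on ProMotion iPhones; whether that key is the missing piece in this particular setup is an assumption:

```swift
import MetalKit
import UIKit

// Sketch: request 120 Hz on an MTKView. On iPhone, Apple's ProMotion guidance
// also requires the CADisableMinimumFrameDurationOnPhone Info.plist key to be
// set to YES before the system allows frame rates above 60 Hz, and only
// Pro-model iPhones have 120 Hz displays in the first place.
func configureHighRefreshRate(for view: MTKView) {
    view.preferredFramesPerSecond = 120
    print("Display reports a maximum of", UIScreen.main.maximumFramesPerSecond, "fps")
}
```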
Topic: Graphics & Games · SubTopic: Metal
Dear Apple, there is still no support for WebGPU via WebView. Do you have a roadmap on when you plan to support it? That would be very helpful to release our new game.
Topic: Graphics & Games · SubTopic: General
When using the simd_inverse functions, the same input yields different results on different devices.
On the Mac mini with M4 Pro and on the M3 Ultra, the result is:
But on the Mac mini with M2 Ultra, the result is:
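The actual result values aren't shown above. For anyone who wants to reproduce the comparison across machines, a sketch with placeholder values (the matrix below is not the one from the report):

```swift
import simd

// Hypothetical reproduction snippet: print the inverse so results can be
// compared across machines.
let m = simd_float4x4(rows: [
    SIMD4<Float>(4, 7, 2, 0),
    SIMD4<Float>(3, 6, 1, 0),
    SIMD4<Float>(2, 5, 3, 0),
    SIMD4<Float>(0, 0, 0, 1)
])
print(simd_inverse(m))
// Small differences in the low-order bits are normal floating-point rounding;
// large differences usually point to a nearly singular matrix, where the
// inverse is extremely sensitive to rounding.
```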
Hey there,
I tried to install GPTK again, since I had to reinstall the OS for unrelated reasons. But every time I try to install the toolkit, it gives me the Error: apple/apple/game-porting-toolkit 1.1 did not build error. Before that error occurred, I had an OpenSSL error, which I fixed with the rbenv version of OpenSSL. Is there any way to fix this? Down below you'll find the full error message it gave me. The specs for my Mac, if they are helpful in any way: M1 Pro MBP 14" with 16 GB RAM and a 512 GB SSD.
Thanks!
```
Error: apple/apple/game-porting-toolkit 1.1 did not build
Logs:
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/01.configure
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/wine64-build
If reporting this issue please do so at (not Homebrew/brew or Homebrew/homebrew-core):
https://github.com/apple/homebrew-apple/issues
```
As in the visionOS environments, where we get real-time reflections of the movie on the screen, or reflections of the surrounding Mount Hood scenery in the background...
Could I get a metallic surface to show accurate reflections of a box placed on top of it?
I don't mean using a probe or an HDR cubemap; I mean the same kind of accurate reflections that the Mount Hood water shows of the movie I'm watching in another app.
I am currently working on a game that involves earning achievements, which I am using the Apple Unity Plug-Ins to display. I have found that, occasionally, when opening the Game Center dashboard, the last achievement earned is not displayed until the game is closed and reopened. I am using GKAccessPoint.Shared.Trigger to display the Achievements screen, which occasionally seems to open a cached version of the dashboard. It seems to happen consistently when earning multiple achievements within one minute, but not always. Does anybody have any experience with something like this?
I'm trying to build an MDLMesh then add normals
let mdlMesh = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                             segments: SIMD3<UInt32>(2, 2, 2),
                             geometryType: MDLGeometryType.triangles,
                             inwardNormals: false,
                             allocator: allocator)
mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0)
When I render the mesh, some normals are (0,0,0). I don't know if the problem is in the mesh, or in the conversion to MTKMesh. Is there a way to examine an MDLMesh with the geometry viewer?
When I look at the variable values for my mdlMesh I get this:
Not too useful. I don't know how to track down the normals.
What's the best way to find out where the normals are getting broken?
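One way to narrow it down is to read the normal attribute back from the MDLMesh on the CPU before converting to MTKMesh; if the zeros are already present there, the problem is on the Model I/O side rather than in the conversion. A sketch, assuming the normals are stored as float3:

```swift
import ModelIO

// Dump each vertex normal so the values can be inspected directly.
func dumpNormals(of mesh: MDLMesh) {
    guard let attr = mesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributeNormal,
                                              as: .float3) else {
        print("No normal attribute found")
        return
    }
    for i in 0..<mesh.vertexCount {
        let ptr = attr.dataStart.advanced(by: i * attr.stride)
            .assumingMemoryBound(to: Float.self)
        print("normal[\(i)] = (\(ptr[0]), \(ptr[1]), \(ptr[2]))")
    }
}
```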
In this video, tile fragment shading is recommended for image processing. In this example, the unpack function takes two arguments, one of which is RasterizerData. As I understand it, this is the data passed to us from the previous stage (Vertex) of the graphics pipeline.
However, the properties of MTLTileRenderPipelineDescriptor do not include an option for specifying a Vertex function. Therefore, in this render pass, a mix of commands is used: first, a draw command is executed to obtain UV coordinates, and then threads are dispatched.
My question is: without using a draw command, only dispatch, how can I get pixel coordinates in the fragment tile function? For the kernel tile function, everything is clear.
typedef struct
{
    float4 OPTexture       [[ color(0) ]];
    float4 IntermediateTex [[ color(1) ]];
} FragmentIO;

fragment FragmentIO Unpack(RasterizerData in [[ stage_in ]],
                           texture2d<float, access::sample> srcImageTexture [[ texture(0) ]])
{
    FragmentIO out;
    // ...
    // Run necessary per-pixel operations
    out.OPTexture = float4(0.0);       // assign computed value
    out.IntermediateTex = float4(0.0); // assign computed value
    return out;
}
I’m building an app that uses RealityKit and specifically ARConfiguration.FrameSemantics.personSegmentationWithDepth.
The goal is to insert an AR object into the scene behind a person, and an additional AR object in front of the person, while being as photo realistic as possible.
Through testing, I’ve noticed that many times, the edges of the person segmentation mask are not well matched to the actual person, and parts of the person are transparent, with the AR object bleeding through. It’s sort of like a “bad green screen” effect, which I’d expect to see a little bit, but not to this extent. I’ve been testing on iPhone 16, iPhone 14 Pro, iPad Pro 12.9 inch 6th Generation, and iPhone 12 Pro, with similar results across all devices.
I’m wondering what else I can do to improve this… either code changes, platform (like different iPhone models), or environment (like lighting, distance, etc).
Attaching some example screen grabs and a minimum reproducible code sample. Appreciate any insights!
import ARKit
import SwiftUI
import RealityKit
struct RealityViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        arView.renderOptions.insert(.disableMotionBlur)
        arView.renderOptions.insert(.disableDepthOfField)

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(configuration)

        arView.session.delegate = context.coordinator
        context.coordinator.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, ARSessionDelegate {
        var parent: RealityViewContainer
        weak var arView: ARView?
        var floorAnchor: ARPlaneAnchor?

        init(_ parent: RealityViewContainer) {
            self.parent = parent
        }

        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            if let arView, floorAnchor == nil {
                for anchor in anchors {
                    if let horizontalPlaneAnchor = anchor as? ARPlaneAnchor,
                       horizontalPlaneAnchor.alignment == .horizontal,
                       horizontalPlaneAnchor.transform.columns.3.y < arView.cameraTransform.translation.y { // filter out ceiling
                        floorAnchor = horizontalPlaneAnchor
                        // BackgroundEntity and ForegroundEntity are custom Entity subclasses (not shown here).
                        let backgroundEntity = BackgroundEntity()
                        let anchorEntity = AnchorEntity(anchor: horizontalPlaneAnchor)
                        anchorEntity.addChild(backgroundEntity)
                        let foregroundEntity = ForegroundEntity()
                        backgroundEntity.addChild(foregroundEntity)
                        arView.scene.addAnchor(anchorEntity)
                        arView.installGestures([.rotation, .translation], for: backgroundEntity)
                        break // Stop after adding the first horizontal plane (floor)
                    }
                }
            }
        }
    }
}
Topic: Graphics & Games · SubTopic: RealityKit
Hey, I've been struggling with this for some days now.
I am trying to write to a sparse texture in a compute shader. I'm performing the following steps:
1. Set up a sparse heap and create a texture from it
2. Map the whole area of the sparse texture using updateTextureMapping(..)
3. Overwrite every value with the value "4" in a compute shader
4. Blit the texture to a shared buffer
5. Assert that the values in the buffer are "4".
I have a minimal example (which is still pretty long unfortunately).
It works perfectly when removing the line heapDesc.type = .sparse.
What am I missing? I could not find any information saying that writes to sparse textures are unsupported. Any help would be greatly appreciated.
import Metal

func sparseTexture64x64Demo() throws {

    // ── Metal objects
    guard let device = MTLCreateSystemDefaultDevice()
    else { throw NSError(domain: "SparseNotSupported", code: -1) }

    let queue = device.makeCommandQueue()!
    let lib = device.makeDefaultLibrary()!
    let pipeline = try device.makeComputePipelineState(function: lib.makeFunction(name: "addOne")!)

    // ── Texture descriptor
    let width = 64, height = 64
    let format: MTLPixelFormat = .r32Uint // 4 B per texel
    let desc = MTLTextureDescriptor()
    desc.textureType = .type2D
    desc.pixelFormat = format
    desc.width = width
    desc.height = height
    desc.storageMode = .private
    desc.usage = [.shaderWrite, .shaderRead]

    // ── Sparse heap
    let bytesPerTile = device.sparseTileSizeInBytes
    let meta = device.heapTextureSizeAndAlign(descriptor: desc)
    let heapBytes = ((bytesPerTile + meta.size + bytesPerTile - 1) / bytesPerTile) * bytesPerTile
    let heapDesc = MTLHeapDescriptor()
    heapDesc.type = .sparse
    heapDesc.storageMode = .private
    heapDesc.size = heapBytes
    let heap = device.makeHeap(descriptor: heapDesc)!
    let tex = heap.makeTexture(descriptor: desc)!

    // ── CPU buffers
    let bytesPerPixel = MemoryLayout<UInt32>.stride
    let rowStride = width * bytesPerPixel
    let totalBytes = rowStride * height
    let dstBuf = device.makeBuffer(length: totalBytes, options: .storageModeShared)!

    let cb = queue.makeCommandBuffer()!
    let fence = device.makeFence()!

    // 2. Map the sparse tile, then signal the fence
    let rse = cb.makeResourceStateCommandEncoder()!
    rse.updateTextureMapping(
        tex,
        mode: .map,
        region: MTLRegionMake2D(0, 0, width, height),
        mipLevel: 0,
        slice: 0)
    rse.update(fence) // ← capture all work so far
    rse.endEncoding()

    let ce = cb.makeComputeCommandEncoder()!
    ce.waitForFence(fence)
    ce.setComputePipelineState(pipeline)
    ce.setTexture(tex, index: 0)
    let threadsPerTG = MTLSize(width: 8, height: 8, depth: 1)
    let tgCount = MTLSize(width: (width + 7) / 8,
                          height: (height + 7) / 8,
                          depth: 1)
    ce.dispatchThreadgroups(tgCount, threadsPerThreadgroup: threadsPerTG)
    ce.updateFence(fence)
    ce.endEncoding()

    // Blit texture into shared buffer
    let blit = cb.makeBlitCommandEncoder()!
    blit.waitForFence(fence)
    blit.copy(
        from: tex,
        sourceSlice: 0,
        sourceLevel: 0,
        sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
        sourceSize: MTLSize(width: width, height: height, depth: 1),
        to: dstBuf,
        destinationOffset: 0,
        destinationBytesPerRow: rowStride,
        destinationBytesPerImage: totalBytes)
    blit.endEncoding()

    cb.commit()
    cb.waitUntilCompleted()
    assert(cb.error == nil, "GPU error: \(String(describing: cb.error))")

    // ── Verify a few texels
    let out = dstBuf.contents().bindMemory(to: UInt32.self, capacity: width * height)
    print("first three texels:", out[0], out[1], out[width]) // 0 1 64
    assert(out[0] == 4 && out[1] == 4 && out[width] == 4)
}
Metal shader:
#include <metal_stdlib>
using namespace metal;

kernel void addOne(texture2d<uint, access::write> tex [[texture(0)]],
                   uint2 gid [[thread_position_in_grid]])
{
    tex.write(4, gid);
}
Hello, when I try to customize the icons on my phone, the apps that sit inside category group folders just show as all-black images instead of taking on the new look. Changing the color doesn't seem to apply to the apps in those groups; they stay black no matter which color I pick.
Topic: Graphics & Games · SubTopic: General
Hello,
I'm currently working on my first SceneKit game and have encountered an issue related to moving an SCNNode using a UIPanGestureRecognizer.
When I deploy the game to my iPhone via Xcode in debug mode, all interactions are smooth. However, when I stop the debugging session and run the game directly from the device (outside of Xcode), the SCNNode movement behaves inconsistently: sometimes it is smooth, and sometimes the interaction becomes choppy. The SCNNode movement is controlled using a UIPanGestureRecognizer.
Do you have any ideas what might be causing the issue?
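For reference, the drag is typically wired up roughly like the following sketch; this is a hypothetical reconstruction of the pattern described above, not the actual project code, and the hit-test-on-begin approach and DragCoordinator type are assumptions.

```swift
import SceneKit
import UIKit

// Hit-test on .began, then keep the node at its current depth while
// following the finger in screen space.
final class DragCoordinator: NSObject {
    private weak var scnView: SCNView?
    private var draggedNode: SCNNode?

    init(view: SCNView) {
        self.scnView = view
        super.init()
        view.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan)))
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let scnView else { return }
        let location = gesture.location(in: scnView)
        switch gesture.state {
        case .began:
            draggedNode = scnView.hitTest(location, options: nil).first?.node
        case .changed:
            guard let node = draggedNode else { return }
            // Preserve the node's current screen-space depth.
            let depth = scnView.projectPoint(node.worldPosition).z
            node.worldPosition = scnView.unprojectPoint(
                SCNVector3(Float(location.x), Float(location.y), depth))
        default:
            draggedNode = nil
        }
    }
}
```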
I have a drawing app that I have been working on in my free time for the past few years. I recently rebuilt the app in Metal to build out other brushes and improve performance; I need to render tens of thousands of lines in real time.
I'm running into an issue trying to create a uniform opacity per path. I have a solution, but I don't love it, since this is a real-time app and the solution could have some bottlenecks. If I just generate a triangle strip from touch points and do my best to smooth, resample, and handle miters, I will always get some overlaps. See:
To create a uniform opacity I render to an offscreen texture with blending disabled. I then pre-multiply the color and draw that texture to a composite texture with blending on (I do this per path). This works, but it gets tricky when you introduce a textured brush: the edges of the texture in the frag shader cut out the line.
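For context, the composite-pass blend state for that premultiplied-alpha approach looks roughly like this; the pixel format and the shader function names are placeholders, and the offscreen pass itself simply leaves isBlendingEnabled off.

```swift
import Metal

// Sketch of the per-path composite pipeline: the path is first rendered to an
// offscreen texture with blending disabled, then composited over the canvas
// with premultiplied-alpha "source over" blending.
func makeCompositePipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let desc = MTLRenderPipelineDescriptor()
    desc.vertexFunction = library.makeFunction(name: "fullscreen_vertex")    // placeholder name
    desc.fragmentFunction = library.makeFunction(name: "composite_fragment") // placeholder name

    let attachment = desc.colorAttachments[0]!
    attachment.pixelFormat = .bgra8Unorm
    attachment.isBlendingEnabled = true
    attachment.sourceRGBBlendFactor = .one
    attachment.sourceAlphaBlendFactor = .one
    attachment.destinationRGBBlendFactor = .oneMinusSourceAlpha
    attachment.destinationAlphaBlendFactor = .oneMinusSourceAlpha

    return try device.makeRenderPipelineState(descriptor: desc)
}
```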
Solution: I discard below a threshold
fragment float4 fragment_line(VertexOut in [[stage_in]],
                              texture2d<float> texture [[ texture(0) ]]) {
    constexpr sampler s(coord::normalized, address::mirrored_repeat, filter::linear);
    float2 texCoord = in.texCoord;
    float4 texColor = texture.sample(s, texCoord);
    if (texColor.a < 0.01) discard_fragment(); // may be slow (from what I read)
    return in.color * texColor;
}
Better but still not perfect.
Question: I'm looking for better ways to create a uniform opacity per path. I tried .max blending, but that prevents blending with other paths. Any tips or ideas would be much appreciated. If this is too detailed a question, just let me know.