Create the QRCode
CIFilter<CIQRCodeGenerator> *f = CIFilter.QRCodeGeneratorFilter;
f.message = [@"Message" dataUsingEncoding:NSASCIIStringEncoding];
f.correctionLevel = @"Q"; // increase the error-correction level so the overlaid icon doesn't break decoding
CIImage *qrcode = f.outputImage;
Overlay the icon
CIImage *icon = [CIImage imageWithURL:url];
CGAffineTransform t = CGAffineTransformMakeTranslation(
    (qrcode.extent.size.width - icon.extent.size.width) / 2.0,
    (qrcode.extent.size.height - icon.extent.size.height) / 2.0);
icon = [icon imageByApplyingTransform:t];
qrcode = [icon imageByCompositingOver:qrcode];
Round off the corners
static dispatch_once_t onceToken;
static CIWarpKernel *k;
dispatch_once(&onceToken, ^ {
    // metalLibData() returns the contents of the compiled .ci.metallib containing the kernel below
    k = [CIWarpKernel kernelWithFunctionName:@"bend_corners"
                        fromMetalLibraryData:metalLibData()
                                       error:nil];
});
qrcode = [k applyWithExtent:qrcode.extent
                roiCallback:^CGRect(int i, CGRect r) {
                    return CGRectInset(r, -radius, -radius); }
                 inputImage:qrcode
                  arguments:@[[CIVector vectorWithCGRect:qrcode.extent], @(radius)]];
…and this code for the kernel should go in a separate .ci.metal source file:
// bend_corners.ci.metal
#include <CoreImage/CoreImage.h>
using namespace metal;

extern "C" { namespace coreimage {

float2 bend_corners (float4 extent, float s, destination dest)
{
    float2 p, dc = dest.coord();
    float ratio = 1.0;

    // Round lower left corner
    p = float2(extent.x + s, extent.y + s);
    if (dc.x < p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }
    // Round lower right corner
    p = float2(extent.x + extent.z - s, extent.y + s);
    if (dc.x > p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }
    // Round upper left corner
    p = float2(extent.x + s, extent.y + extent.w - s);
    if (dc.x < p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }
    // Round upper right corner
    p = float2(extent.x + extent.z - s, extent.y + extent.w - s);
    if (dc.x > p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }
    return dc;
}

}} // extern "C", namespace coreimage
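For completeness, a short sketch of rendering the resulting CIImage on the app side (Swift here for brevity; assumes qrcode is the image produced by the code above and a default CIContext is acceptable):

import CoreImage
import UIKit

// Render the rounded QR code CIImage into a UIImage for display.
func renderQRCode(_ qrcode: CIImage) -> UIImage? {
    let context = CIContext()
    guard let cgImage = context.createCGImage(qrcode, from: qrcode.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}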
Hi
I've noticed an issue in the Metal HUD, but I'm not sure if it is a bug or intended behavior.
The Metal HUD has an option to send its data to the system log in a raw format, where the entries look like
metal-HUD: ,,,,,...,
https://developer.apple.com/documentation/xcode/monitoring-your-metal-apps-graphics-performance/
If the HUD is displayed, this works just fine, but when the HUD is hidden (with shift-F9) it still sends data to the system log; the numbers are simply the same every time and never update, even though the underlying data is still being updated.
I would expect it to log the data whether or not the HUD is displayed. As it is, this of course leads to incorrect FPS calculations.
Here is an example of the system log entries when the HUD is not visible:
Topic:
Graphics & Games
SubTopic:
General
Hello
If you add a ModelEntity to a world inside a portal, the drawing of the model is clipped properly to the portal bounds.
However, the invisible shapes of the InputTargetComponent and CollisionComponent are not occluded. They can cross the portal, and if you have gestures on your ModelEntity you can trigger them in areas outside the portal bounds. This happens even if the ModelEntity has no PortalCrossingComponent.
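A minimal sketch of the setup being described, assuming a visionOS RealityView (entity names, sizes, and materials are placeholders):

import RealityKit

// World shown through a portal, containing a tappable model.
let world = Entity()
world.components.set(WorldComponent())

let model = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
model.components.set(InputTargetComponent())
model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
// Note: no PortalCrossingComponent is added to the model.
world.addChild(model)

// Portal plane that clips the world's rendering to its bounds.
let portal = Entity()
portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 1),
                                     materials: [PortalMaterial()]))
portal.components.set(PortalComponent(target: world))

With this arrangement the model's rendering is clipped to the portal, but the collision/input shape set above is what still responds to gestures outside the portal bounds.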
Topic:
Graphics & Games
SubTopic:
RealityKit
I'm looking to create an effect on iOS that tracks the user's face position with ARKit and shifts the nearer, more prominent geometry in the scene around, while more "distant" geometry stays fixed to the XY plane, making it look like the geometry on screen "sticks out".
I've managed to implement most of this successfully, but it's not perfect when using PerspectiveCameraComponent in RealityKit, because as I shift the camera (and change its field of view based on the user's distance) the backplane changes its orientation (it's always orthogonal to the camera's direction).
I've tried adopting ProjectiveTransformCameraComponent instead. The idea is that the camera shifts around the scene, mirroring the user's head's position, looking at (0,0,0) and the back plane is adjusted to be parallel with the X,Y plane (animation replicated in Blender below).
However, I can't manage to set up ProjectiveTransformCameraComponent with an appropriate matrix or update its transform property in a RealityKit System correctly.
I also tried setting many simpler projection matrices as described in a number of guides on camera projection matrices on the internet and all I get is a blank view.
Does anyone have some guidance on what the projection matrix that ProjectiveTransformCameraComponent expects is meant to look like or how I would go about accomplishing my goal?
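For context, here is a rough sketch of the kind of off-axis (asymmetric frustum) projection matrix such a setup typically needs; the matrix math is standard (right-handed, Metal-style 0...1 depth), but the ProjectiveTransformCameraComponent initializer shown in the comment is an assumption, not confirmed API:

import RealityKit
import simd

// Off-axis perspective projection. l/r/b/t are frustum bounds on the near plane in view space.
func offAxisProjection(l: Float, r: Float, b: Float, t: Float,
                       near n: Float, far f: Float) -> simd_float4x4 {
    simd_float4x4(columns: (
        SIMD4<Float>(2 * n / (r - l), 0, 0, 0),
        SIMD4<Float>(0, 2 * n / (t - b), 0, 0),
        SIMD4<Float>((r + l) / (r - l), (t + b) / (t - b), f / (n - f), -1),
        SIMD4<Float>(0, 0, n * f / (n - f), 0)
    ))
}

// Assumed usage (parameter label is a guess):
// cameraEntity.components.set(ProjectiveTransformCameraComponent(
//     projectionMatrix: offAxisProjection(l: -0.1, r: 0.1, b: -0.05, t: 0.05, near: 0.1, far: 100)))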
Hi all,
I've developed some code that enables an arcball camera interaction with my scene. I've done this using components and systems. The implementation feels a bit messy as I've got gesture code on my realityView, and then a bunch of other code that uses those gesture inputs in my component and system.
Is there a demo app, or some example code, that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not, can anyone recommend an approach to take?
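For reference, a rough sketch of one way this could be packaged as a component plus system (names are placeholders; the RealityView gestures would still write azimuth/elevation/radius into the component):

import Foundation
import RealityKit

// Orbit-camera state, updated by gesture handlers elsewhere.
struct OrbitCameraComponent: Component {
    var target: SIMD3<Float> = .zero
    var radius: Float = 2
    var azimuth: Float = 0      // radians around Y
    var elevation: Float = 0.3  // radians above the XZ plane
}

struct OrbitCameraSystem: System {
    static let query = EntityQuery(where: .has(OrbitCameraComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let orbit = entity.components[OrbitCameraComponent.self] else { continue }
            // Spherical -> Cartesian position around the target, then aim at it.
            let position = orbit.target + SIMD3<Float>(
                orbit.radius * cos(orbit.elevation) * sin(orbit.azimuth),
                orbit.radius * sin(orbit.elevation),
                orbit.radius * cos(orbit.elevation) * cos(orbit.azimuth))
            entity.look(at: orbit.target, from: position, relativeTo: nil)
        }
    }
}

// Registered once at launch:
// OrbitCameraComponent.registerComponent()
// OrbitCameraSystem.registerSystem()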
I have been trying to run an open source Windows executable that I would like to help porting to macOS using the Game Porting Toolkit but I stumbled on an issue quite early in the application lifecycle.
It looks like the function GetThreadDpiHostingBehavior is missing from USER32.dll.
Has anyone any idea how to solve that?
During the startup, it fails with the following error:
TiXL crashed. We're really sorry.
The last backup was saved Unknown time to...
C:\users\crossover\AppData\Roaming\TiXL\Backup
Please refer to Help > Using Backups on what to do next.
System.EntryPointNotFoundException: Unable to find an entry point named 'GetThreadDpiHostingBehavior' in DLL 'USER32.dll'.
at System.Windows.Forms.ScaleHelper.DpiAwarenessScope..ctor(DPI_AWARENESS_CONTEXT context, DPI_HOSTING_BEHAVIOR behavior)
at System.Windows.Forms.ScaleHelper.EnterDpiAwarenessScope(DPI_AWARENESS_CONTEXT awareness, DPI_HOSTING_BEHAVIOR dpiHosting)
at System.Windows.Forms.NativeWindow.CreateHandle(CreateParams cp)
at System.Windows.Forms.Control.CreateHandle()
at System.Windows.Forms.Application.ThreadContext.get_MarshallingControl()
at System.Windows.Forms.WindowsFormsSynchronizationContext..ctor()
at System.Windows.Forms.WindowsFormsSynchronizationContext.InstallIfNeeded()
at System.Windows.Forms.Control..ctor(Boolean autoInstallSyncContext)
at System.Windows.Forms.ScrollableControl..ctor()
at System.Windows.Forms.ContainerControl..ctor()
at System.Windows.Forms.Form..ctor()
at T3.Editor.SplashScreen.SplashScreen.SplashForm..ctor()
at T3.Editor.SplashScreen.SplashScreen.Show(String imagePath) in C:\Users\pixtur\dev\tooll\tixl\Editor\SplashScreen\SplashScreen.cs:line 25
at T3.Editor.Program.Main(String[] args) in C:\Users\pixtur\dev\tooll\tixl\Editor\Program.cs:line 111
In a Unity game, if the assetBundle.Unload interface is called very frequently during normal gameplay, it can cause the game's screen to freeze, although the background music continues to play normally. This issue only occurs on iPhone 16 and iPhone 17 models; there are no problems on older phones. How can this problem be resolved?
Hi,
I’m testing Unity’s Spaceship HDRP demo on iPhone 17 Pro Max and iPad Pro M4 (iOS 26.1).
Everything renders correctly, and my custom MetalFX Spatial plugin initializes successfully — it briefly reports active scaling (e.g. 1434×660 → 2868×1320 at 50% scaling), then reverts to native rendering a few frames later.
Setup:
Xcode 16.1 (targeting iOS 18)
Unity 2022.3.62f3 (HDRP)
Metal backend
Dynamic Resolution enabled in HDRP assets and cameras
Relevant Xcode console excerpt:
[MetalFXPlugin] MetalFX_Enable(True) called.
[SpaceshipOptions] MetalFX enabled with HDRP dynamic resolution integration.
[SpaceshipOptions] Disabled TAA for MetalFX Spatial.
[SpaceshipOptions] Created runtime RenderTexture: 1434x660
[MetalFX] Spatial scaler created (1434x660 → 2868x1320).
[MetalFX] Processed frame with scaler.
[MetalFXPlugin] Sent RenderTexture (1434x660) to MetalFX. Output target 2868x1320.
[SpaceshipOptions] MetalFX target set: 1434x660
[SpaceshipOptions] Camera targetTexture cleared after MetalFX handoff.
It looks like HDRP clears the camera’s target texture right after MetalFX submits the frame, which causes it to revert to native rendering.
Is there a recommended way to persist or rebind the MetalFX output texture when using HDRP on iOS?
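For reference, a Swift sketch of the native-side pattern being described, where the scaler and its output texture are created once and reused every frame (sizes taken from the log above; this is not the actual plugin code):

import Metal
import MetalFX

// Created once and kept alive for the lifetime of the camera/output size.
func makeSpatialScaler(device: MTLDevice) -> MTLFXSpatialScaler? {
    let desc = MTLFXSpatialScalerDescriptor()
    desc.inputWidth = 1434
    desc.inputHeight = 660
    desc.outputWidth = 2868
    desc.outputHeight = 1320
    desc.colorTextureFormat = .bgra8Unorm
    desc.outputTextureFormat = .bgra8Unorm
    desc.colorProcessingMode = .perceptual
    return desc.makeSpatialScaler(device: device)
}

// Per frame: rebind the low-resolution color input and the persistent output,
// then encode the upscale into the frame's command buffer.
func encodeUpscale(scaler: MTLFXSpatialScaler, input: MTLTexture,
                   output: MTLTexture, commandBuffer: MTLCommandBuffer) {
    scaler.colorTexture = input
    scaler.outputTexture = output
    scaler.encode(commandBuffer: commandBuffer)
}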
Unity doesn’t appear to support MetalFX in the Editor either:
Thanks!
Hello! I'm currently porting a video game console emulator to iOS and I'm trying to make the renderer (tested on macOS) work on iOS as well.
The emulator core is written in C++ and uses metal-cpp for rendering, whereas the iOS frontend is written in Swift with SwiftUI. I have an Objective-C++ bridging header for bridging the Swift and C++ sides.
On the Swift side, I create an MTKView. Inside the MTKView delegate, I run the emulator for 1 video frame and pass it the view's backing layer for it to render the final output image with. The emulator runs and returns, but when it returns I get a crash in Swift land (callstack attached below), inside objc_release, which indicates I'm doing something wrong with memory management.
My bridging interface (ios_driver.h):
#pragma once
#include <Foundation/Foundation.h>
#include <QuartzCore/QuartzCore.h>
void iosCreateEmulator();
void iosRunFrame(CAMetalLayer* layer);
Bridge implementation (ios_driver.mm):
#import <Foundation/Foundation.h>
extern "C" {
#include "ios_driver.h"
}
<...>
#define IOS_EXPORT extern "C" __attribute__((visibility("default")))
std::unique_ptr<Emulator> emulator = nullptr;
IOS_EXPORT void iosCreateEmulator() { ... }
// Runs 1 video frame of the emulator and
IOS_EXPORT void iosRunFrame(CAMetalLayer* layer) {
    void* layerBridged = (__bridge void*)layer;
    // Pass the CAMetalLayer to the emulator
    emulator->getRenderer()->setMTKLayer(layerBridged);
    // Runs the emulator for 1 frame and renders the output image using our layer
    emulator->runFrame();
}
My MTKView delegate:
class Renderer: NSObject, MTKViewDelegate {
    var parent: ContentView
    var device: MTLDevice!

    init(_ parent: ContentView) {
        self.parent = parent
        if let device = MTLCreateSystemDefaultDevice() {
            self.device = device
        }
        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        let metalLayer = view.layer as! CAMetalLayer
        // Run the emulator for 1 frame & display the output image
        iosRunFrame(metalLayer)
    }
}
Finally, the emulator's render function that interacts with the layer:
void RendererMTL::setMTKLayer(void* layer) {
    metalLayer = (CA::MetalLayer*)layer;
}

void RendererMTL::display() {
    CA::MetalDrawable* drawable = metalLayer->nextDrawable();
    if (!drawable) {
        return;
    }

    MTL::Texture* texture = drawable->texture();
    <rest of rendering follows here using the drawable & its texture>
}
This is the Swift callstack at the time of the crash:
To my understanding, I shouldn't be violating ARC rules as my bridging header uses CAMetalLayer* instead of void* and Swift will automatically account for ARC when passing CoreFoundation objects to Objective-C. However I don't have any other idea as to what might be causing this. I've been trying to debug this code for a couple of days without much success.
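One concrete thing a sketch can show (purely an assumption about the failure mode, since nextDrawable() hands back an autoreleased drawable): wrapping the per-frame call in an explicit autorelease pool on the Swift side, so any autoreleased objects created during the frame are drained before control returns.

func draw(in view: MTKView) {
    guard let metalLayer = view.layer as? CAMetalLayer else { return }
    // Drain autoreleased objects (e.g. the drawable) created while the emulator runs.
    // This is a guess at the failure mode, not a confirmed fix.
    autoreleasepool {
        iosRunFrame(metalLayer)
    }
}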
If you need more info, the emulator code is also on Github
Metal renderer: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/core/renderer_mtl/renderer_mtl.cpp#L58-L68
Bridge implementation: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/ios_driver.mm
Bridging header: https://github.com/wheremyfoodat/Panda3DS/blob/ios/include/ios_driver.h
Any help is more than appreciated. Thank you for your time in advance.
I'd like to display the device's refresh rate on screen on an iPhone (14 Pro Max). Does anyone know how to do this?
Topic:
Graphics & Games
SubTopic:
General
We want to integrate Game Center into a game app. Users can create multiple characters in the game. Suppose a user has created two characters: Character 1 and Character 2.
After the user binds Character 1 to Game Center, its data is reported to Game Center. If the player then unbinds Character 1 from Game Center and binds Character 2 instead, does Character 1's data remain in Game Center, or is it removed?
I have a question I guess more for the Apple team.
But why are there no totally 3D experiences for the Vision Pro lineup?
I know they have given us tools to implement Unity 3D games on iPhone, and I guess you can also build them in RealityKit. But why, at this moment, are 3D games limited to just iPad and iPhone, and why can't you bring that to Vision Pro?
Just to explain. When I say a totally 3D game, I mean games like Gorn. I mean the Vision Pro is definitely powerful enough, but it just feels limited to tabletop games and AR games.
Is this something Apple is thinking about implementing?
Topic:
Graphics & Games
SubTopic:
RealityKit
Tags:
ARKit
Reality Composer
RealityKit
Reality Composer Pro
I have an odd bug. If I use initWithFrame as the init routine for an NSView subclass that uses layers, I don't see this bug.
But if I embed this view in a storyboard with a .nib file so it is initialized with initWithCoder, I need to return true from
(BOOL) contentsAreFlipped
in the NSView subclass.
If I don't, the CALayer actually renders from (0,0) of the view upwards and off the window.
The frame sizes for the NSView and the CALayer are good when I see them in updateLayer.
Obviously I have a fix, but I would like to understand why.
Topic:
Graphics & Games
SubTopic:
General
Hi everyone,
I’ve been developing a custom, end-to-end 3D rendering engine called Crescent from scratch using C++20 and Metal-cpp (targeting macOS and visionOS). My primary goal is to build a zero-bottleneck, GPU-driven pipeline that maximizes the potential of Apple Silicon’s Unified Memory and TBDR architecture.
While the fundamental systems are stable, I am looking for architectural feedback from Metal framework engineers regarding specific synchronization and latency challenges.
Current Core Implementations:
GPU-Driven Instance Culling: High-performance occlusion culling using a Hierarchical Z-Buffer (HZB) approach via Compute Shaders.
Clustered Forward Shading: Support for high-count dynamic lights through view-space clustering.
Temporal Stability: Custom TAA with history rejection and Motion Blur resolve.
Asset Infrastructure: Robust GUID-based scene serialization and a JSON-driven ECS hierarchy.
The Architectural Challenge:
I am currently seeing slight synchronization overhead when generating the HZB mip-chain. On Apple Silicon, I am evaluating the cost of encoder transitions versus cache-friendly barriers.
        && m_hzbInitPipeline && m_hzbDownsamplePipeline && !m_hzbMipViews.empty();
if (canBuildHzb) {
    MTL::ComputeCommandEncoder* hzbInit = commandBuffer->computeCommandEncoder();
    hzbInit->setComputePipelineState(m_hzbInitPipeline);
    hzbInit->setTexture(m_depthTexture, 0);
    hzbInit->setTexture(m_hzbMipViews[0], 1);
    if (m_pointClampSampler) {
        hzbInit->setSamplerState(m_pointClampSampler, 0);
    } else if (m_linearClampSampler) {
        hzbInit->setSamplerState(m_linearClampSampler, 0);
    }

    const uint32_t hzbWidth = m_hzbMipViews[0]->width();
    const uint32_t hzbHeight = m_hzbMipViews[0]->height();
    const uint32_t threads = 8;
    MTL::Size tgSize = MTL::Size(threads, threads, 1);
    MTL::Size gridSize = MTL::Size((hzbWidth + threads - 1) / threads * threads,
                                   (hzbHeight + threads - 1) / threads * threads,
                                   1);
    hzbInit->dispatchThreads(gridSize, tgSize);
    hzbInit->endEncoding();

    for (size_t mip = 1; mip < m_hzbMipViews.size(); ++mip) {
        MTL::Texture* src = m_hzbMipViews[mip - 1];
        MTL::Texture* dst = m_hzbMipViews[mip];
        if (!src || !dst) {
            continue;
        }
        MTL::ComputeCommandEncoder* downEncoder = commandBuffer->computeCommandEncoder();
        downEncoder->setComputePipelineState(m_hzbDownsamplePipeline);
        downEncoder->setTexture(src, 0);
        downEncoder->setTexture(dst, 1);
        const uint32_t mipWidth = dst->width();
        const uint32_t mipHeight = dst->height();
        MTL::Size downGrid = MTL::Size((mipWidth + threads - 1) / threads * threads,
                                       (mipHeight + threads - 1) / threads * threads,
                                       1);
        downEncoder->dispatchThreads(downGrid, tgSize);
        downEncoder->endEncoding();
    }

    if (m_instanceCullHzbPipeline) {
        dispatchInstanceCulling(m_instanceCullHzbPipeline, true);
    }
}
My Questions:
Encoder Synchronization: Would you recommend moving this loop into a single ComputeCommandEncoder using MTLBarrier between dispatches to maintain L2 cache residency, or is the overhead of separate encoders negligible for depth-downsampling on TBDR? (A sketch of this single-encoder variant follows after these questions.)
visionOS Bindless Latency: For stereo rendering on visionOS, what are the best practices for managing MTL4ArgumentTable updates at 90Hz+? I want to ensure that updating bindless resources for each eye doesn't introduce unnecessary CPU-to-GPU latency.
Memory Management: Are there specific hints for Memoryless textures that could be applied to intermediate HZB levels to save bandwidth during this process?
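For concreteness, the single-encoder variant weighed in the first question looks roughly like this (Swift rather than metal-cpp; pipeline and texture names are placeholders for the engine's own objects):

import Metal

// One compute encoder for the whole HZB mip chain, with a texture-scope
// memory barrier between dispatches instead of a new encoder per mip.
func encodeHZBDownsample(commandBuffer: MTLCommandBuffer,
                         downsamplePipeline: MTLComputePipelineState,
                         mipViews: [MTLTexture]) {
    guard mipViews.count > 1,
          let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(downsamplePipeline)
    let tg = MTLSize(width: 8, height: 8, depth: 1)

    for mip in 1..<mipViews.count {
        encoder.setTexture(mipViews[mip - 1], index: 0)
        encoder.setTexture(mipViews[mip], index: 1)
        let grid = MTLSize(width: mipViews[mip].width, height: mipViews[mip].height, depth: 1)
        encoder.dispatchThreads(grid, threadsPerThreadgroup: tg)
        // Make this mip's writes visible to the next dispatch's reads.
        encoder.memoryBarrier(scope: .textures)
    }
    encoder.endEncoding()
}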
I’ve attached a screenshot of a scene rendered with the engine (PBR, SSR, and IBL).
Hello everyone, I am new here.
I am testing Game Center integration using a development build of my iOS game. I have set up a couple of achievements in App Store Connect, but when I trigger them in the game, they do not unlock or show up.
I am signed into Game Center with a sandbox account on a test device. Is there anything else I need to configure, or do achievements usually only work after the game is released?
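For reference, a minimal sketch of the standard GKAchievement reporting flow (placeholder identifier; assumes the local player is already authenticated):

import GameKit

// Report 100% progress for a placeholder achievement ID.
func reportAchievement() {
    let achievement = GKAchievement(identifier: "com.example.mygame.first_win") // placeholder
    achievement.percentComplete = 100
    achievement.showsCompletionBanner = true
    GKAchievement.report([achievement]) { error in
        if let error = error {
            print("Achievement report failed: \(error)")
        }
    }
}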
I would appreciate any guidance…
Thanks in Advance!!!
I am using Unity's GameKit to implement a turn-based game.
I want to make a UI in Unity to show all the games I can join.
I tried using
var matches = await GKTurnBasedMatch.LoadMatches();
to get all the open matches.
But it seems that I can only get the matches related to the current Apple account.
Can you help me get all the matches?
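For reference, the underlying GameKit call looks like this in Swift; it returns the turn-based matches that involve the local (authenticated) player:

import GameKit

// Loads the matches that the local player participates in.
func loadMyTurnBasedMatches() async throws -> [GKTurnBasedMatch] {
    try await GKTurnBasedMatch.loadMatches()
}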
ALSO
I used
var match = await GKTurnBasedMatchmakerViewController.Request(request);
to exit the Game Center interface and start a game (automatic matching, no one was invited).
Another device used
var match = await GKTurnBasedMatch.Find(request);
to find the game, but it did not find the game; instead it started a new game (automatic matching).
Can you help me solve these problems?
I'm using RealityView in my iOS game mixed with SwiftUI. For the following two example usages, the simulator will only render the first RealityView, and the second one is either super laggy or shows a black model. Running on a real device is all good; just the simulator has this issue.
Have a TabView and each tab has a RealityView.
Have a root view and detail view connected via a push navigation, both root and detail have a RealityView.
In the Simulator, the second RealityView is going to be very choppy and basically unusable, but on a real iPhone everything looks great.
Is this a known simulator issue, or did I do something wrong?
I was wondering if anyone from Apple could provide some clarification. The gaming studio Epic Games is wondering if they could distribute the award-winning game Fortnite back on macOS without any retaliation.
I know Fortnite being back on macOS would benefit thousands of macOS devs. Hoping to get a clarification so Epic could start on bringing Fortnite back.
I have code that captures a window and displays a cropped image. The problem is twofold: ScreenCaptureKit doesn't seem to let me modify the configuration, stop, and recapture in window mode so that I only capture a portion of the screen.
So this forces me to crop and display the cropped image via a published variable. This all works fine, but it seems to stop after some time.
I'm using an M1 with 16 GB of RAM; the program is taking less than 100 MB of memory with 40-70% CPU, as the crow flies.
I'm printing a capture-success message in debug mode, and sometimes the frame isn't valid, so I guard against that.
Any ideas on how to improve my strategy?
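A sketch of one possible direction, assuming sourceRect applies to the captured window's coordinate space (not verified for window-mode capture): reconfigure the running SCStream in place instead of stopping and recreating it.

import ScreenCaptureKit

// Change the captured region of an already-running stream.
// Assumes `stream` is an active SCStream and that sourceRect is honored
// for the current capture mode; this has not been verified for window capture.
func recrop(stream: SCStream, to rect: CGRect) async throws {
    let config = SCStreamConfiguration()
    config.sourceRect = rect
    config.width = Int(rect.width)
    config.height = Int(rect.height)
    try await stream.updateConfiguration(config)
}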
What is the recommended way to attach SwiftUI views to RealityKit entities on macOS, iOS, etc?
All the APIs seem to be visionOS only:
https://developer.apple.com/documentation/realitykit/realityviewattachments
https://developer.apple.com/documentation/realitykit/viewattachmentcomponent
https://developer.apple.com/documentation/realitykit/presentationcomponent
https://developer.apple.com/documentation/realitykit/imagepresentationcomponent
My only idea is to do it "manually" with a ZStack and RealityView somehow?
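To make the "manual" idea concrete, a sketch of the ZStack approach (the overlay is fixed in screen space here; keeping it glued to a moving entity would still need a projection step that this omits):

import SwiftUI
import RealityKit

struct ManualAttachmentView: View {
    var body: some View {
        ZStack {
            RealityView { content in
                let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                      materials: [SimpleMaterial()])
                content.add(box)
            }
            VStack {
                Spacer()
                Text("Label that should follow the box")
                    .padding(8)
                    .background(.thinMaterial)
            }
        }
    }
}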
I submitted this as a feedback since it seemed like an oversight: FB18034856.