I have been trying to run an open-source Windows executable that I would like to help port to macOS using the Game Porting Toolkit, but I stumbled on an issue quite early in the application lifecycle.
It looks like the function GetThreadDpiHostingBehavior is missing from USER32.dll.
Does anyone have any idea how to solve that?
During startup, it fails with the following error:
TiXL crashed. We're really sorry.
The last backup was saved Unknown time to...
C:\users\crossover\AppData\Roaming\TiXL\Backup
Please refer to Help > Using Backups on what to do next.
System.EntryPointNotFoundException: Unable to find an entry point named 'GetThreadDpiHostingBehavior' in DLL 'USER32.dll'.
at System.Windows.Forms.ScaleHelper.DpiAwarenessScope..ctor(DPI_AWARENESS_CONTEXT context, DPI_HOSTING_BEHAVIOR behavior)
at System.Windows.Forms.ScaleHelper.EnterDpiAwarenessScope(DPI_AWARENESS_CONTEXT awareness, DPI_HOSTING_BEHAVIOR dpiHosting)
at System.Windows.Forms.NativeWindow.CreateHandle(CreateParams cp)
at System.Windows.Forms.Control.CreateHandle()
at System.Windows.Forms.Application.ThreadContext.get_MarshallingControl()
at System.Windows.Forms.WindowsFormsSynchronizationContext..ctor()
at System.Windows.Forms.WindowsFormsSynchronizationContext.InstallIfNeeded()
at System.Windows.Forms.Control..ctor(Boolean autoInstallSyncContext)
at System.Windows.Forms.ScrollableControl..ctor()
at System.Windows.Forms.ContainerControl..ctor()
at System.Windows.Forms.Form..ctor()
at T3.Editor.SplashScreen.SplashScreen.SplashForm..ctor()
at T3.Editor.SplashScreen.SplashScreen.Show(String imagePath) in C:\Users\pixtur\dev\tooll\tixl\Editor\SplashScreen\SplashScreen.cs:line 25
at T3.Editor.Program.Main(String[] args) in C:\Users\pixtur\dev\tooll\tixl\Editor\Program.cs:line 111
Hi,
I’m testing Unity’s Spaceship HDRP demo on iPhone 17 Pro Max and iPad Pro M4 (iOS 26.1).
Everything renders correctly, and my custom MetalFX Spatial plugin initializes successfully — it briefly reports active scaling (e.g. 1434×660 → 2868×1320 at 50% scaling), then reverts to native rendering a few frames later.
Setup:
Xcode 16.1 (targeting iOS 18)
Unity 2022.3.62f3 (HDRP)
Metal backend
Dynamic Resolution enabled in HDRP assets and cameras
Relevant Xcode console excerpt:
[MetalFXPlugin] MetalFX_Enable(True) called.
[SpaceshipOptions] MetalFX enabled with HDRP dynamic resolution integration.
[SpaceshipOptions] Disabled TAA for MetalFX Spatial.
[SpaceshipOptions] Created runtime RenderTexture: 1434x660
[MetalFX] Spatial scaler created (1434x660 → 2868x1320).
[MetalFX] Processed frame with scaler.
[MetalFXPlugin] Sent RenderTexture (1434x660) to MetalFX. Output target 2868x1320.
[SpaceshipOptions] MetalFX target set: 1434x660
[SpaceshipOptions] Camera targetTexture cleared after MetalFX handoff.
It looks like HDRP clears the camera’s target texture right after MetalFX submits the frame, which causes it to revert to native rendering.
Is there a recommended way to persist or rebind the MetalFX output texture when using HDRP on iOS?
Unity doesn’t appear to support MetalFX in the Editor either.
Thanks!
While using ARKit's image tracking, we found that different images vary significantly in how reliably they are recognized. How can we judge the quality of an image for ARKit image tracking in this situation?
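For reference, one programmatic check I'm aware of is ARReferenceImage's validate API, which asks ARKit itself whether an image is usable for detection and tracking. A minimal sketch (the function name, image, and physical width below are placeholders):

import ARKit

// Rough pre-flight check of a candidate tracking image.
func checkTrackingImageQuality(cgImage: CGImage, physicalWidthMeters: CGFloat) {
    let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidthMeters)
    reference.validate { error in
        if let error = error {
            // ARKit considers the image hard to detect or track,
            // typically because of low contrast, repetitive patterns, or too few features.
            print("Image validation failed: \(error.localizedDescription)")
        } else {
            print("Image passed ARKit's validation check.")
        }
    }
}

This only gives a pass/fail answer plus an error description, so it's a coarse filter rather than a quality score.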
I'd like to display the refresh rate on the device's screen on an iPhone (14 Pro Max). Does anyone know of a way to do this?
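One approach, sketched below, is to drive a label from a CADisplayLink and compare the measured frame interval with UIScreen's maximumFramesPerSecond; the class and label are placeholders, not an Apple sample:

import UIKit

// Minimal sketch: shows the measured display refresh rate in a label you supply.
final class RefreshRateMonitor {
    private var displayLink: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0
    private let rateLabel: UILabel

    init(label: UILabel) {
        rateLabel = label
    }

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        defer { lastTimestamp = link.timestamp }
        guard lastTimestamp > 0 else { return }
        let measuredFPS = 1.0 / (link.timestamp - lastTimestamp)
        let maxFPS = UIScreen.main.maximumFramesPerSecond   // 120 on ProMotion devices
        rateLabel.text = "\(Int(measuredFPS.rounded())) / \(maxFPS) Hz"
    }
}

Note that on ProMotion iPhones a display link stays at 60 Hz unless the app opts in with the CADisableMinimumFrameDurationOnPhone Info.plist key, so the measured value may read lower than the panel's maximum.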
Hi
Looking at the documentation for screenSpaceAmbientOcclusionIntensity, I noticed that it says this is supported on visionOS 1.0+: https://developer.apple.com/documentation/scenekit/scncamera/screenspaceambientocclusionintensity
Could someone enlighten me as to how that would work? As far as I know, we don't use an SCNCamera on visionOS. So, what's the idea here? Can we activate SSAO on visionOS?
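For reference, this is how the effect is normally switched on with an SCNCamera in a plain SceneKit scene; whether it has any effect on visionOS is exactly what I'm unsure about (the node and values below are just placeholders):

import SceneKit

// SSAO is configured per camera in SceneKit; an intensity of 0 (the default) disables it.
let camera = SCNCamera()
camera.screenSpaceAmbientOcclusionIntensity = 1.0
camera.screenSpaceAmbientOcclusionRadius = 0.3      // sampling radius in scene units
camera.screenSpaceAmbientOcclusionBias = 0.03

let cameraNode = SCNNode()
cameraNode.camera = camera
// cameraNode would then be added to the scene and used as the point of view.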
I want to use RealityKit to create a custom material that can use my own shader and support mesh instancing (for rendering 3D Gaussian splatting), but I found that CustomMaterial is not supported on visionOS. Is there any other interface that can meet my needs? Where can I find examples?
The title is self-explanatory. I wasn't able to find CAMetalDisplayLink in the most recent metal-cpp release (metal-cpp_macOS15_iOS18-beta). Are there any plans to include it in the next release?
Hello,
Could someone post code that shows how to implement GCVirtualController to move a box around the screen?
I've been poking around with GCVirtualController and gotten as far as having the D-pad and the A and B buttons appear on the display. But how do I make it do anything?
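For anyone who lands here, below is a minimal sketch of one way to wire the virtual D-pad to a SpriteKit node; the scene, property names, and movement speed are all made up for illustration:

import GameController
import SpriteKit

// Minimal sketch: the virtual D-pad moves a box, button A recenters it.
final class JoystickScene: SKScene {
    private let box = SKSpriteNode(color: .red, size: CGSize(width: 60, height: 60))
    private var virtualController: GCVirtualController?
    private var velocity = CGVector.zero

    override func didMove(to view: SKView) {
        box.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(box)

        // Request an on-screen controller with a D-pad and A/B buttons.
        let config = GCVirtualController.Configuration()
        config.elements = [GCInputDirectionPad, GCInputButtonA, GCInputButtonB]
        let controller = GCVirtualController(configuration: config)
        virtualController = controller

        controller.connect { [weak self] error in
            if let error = error {
                print("Virtual controller failed to connect: \(error)")
                return
            }
            self?.bindInput(of: controller)
        }
    }

    private func bindInput(of controller: GCVirtualController) {
        // The on-screen controls are exposed through a regular GCController profile.
        guard let gamepad = controller.controller?.extendedGamepad else { return }

        gamepad.dpad.valueChangedHandler = { [weak self] _, x, y in
            // x and y are in -1...1; scale them to points per frame.
            self?.velocity = CGVector(dx: CGFloat(x) * 4, dy: CGFloat(y) * 4)
        }
        gamepad.buttonA.pressedChangedHandler = { [weak self] _, _, pressed in
            guard pressed, let self = self else { return }
            self.box.position = CGPoint(x: self.frame.midX, y: self.frame.midY)
        }
    }

    override func update(_ currentTime: TimeInterval) {
        // Apply the latest D-pad vector every frame.
        box.position.x += velocity.dx
        box.position.y += velocity.dy
    }
}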
I'm running into a persistent visual issue while deploying a floral corridor scene to Apple Vision Pro using Unity 6.0 with URP and Metal. The issue only appears on the Vision Pro device — everything looks fine in the Unity Editor.
Issue Description
When the frame rate drops to around 60–70 FPS, noticeable distortion artifacts appear around the edges of foliage models. It seems like the background meshes (behind the plants) get warped and leak through the edges of the foliage. Although this is most visible around the leaves, even solid objects like standard URP wall or box models show distorted edges when the issue occurs.
All the foliage uses Opaque or Alpha Clipping materials.
Things I've Tried
Changing the foliage materials to Transparent mode: the distortion around the edges disappears, but using Transparent for a large number of foliage assets is not ideal for performance or sorting complexity.
Reducing the number of foliage objects — with only a few plants in the scene and the frame rate staying around 100 FPS, the distortion disappears. However, this isn’t a practical solution for a full environment.
Possible Cause?
I came across this note in the Unity documentation:
"Ensure depth-buffer for each pixel is non-zero - on visionOS, the depth buffer is used for reprojection. To ensure visual effects like skyboxes and shaders are displayed beautifully, ensure that some value is written to the depth for each pixel."
Could this be related to the issue? Is it possible that Alpha Clipping with low pixel coverage leads to some pixels not writing to the depth buffer, which then causes problems during Vision Pro’s reprojection or foveated rendering? However, even when I disable Alpha Clipping entirely, the distortion issue still persists, so it may not be solely caused by clipping itself.
Project Setup
Unity 6.0 (URP)
Depth Texture: Enabled
Using Metal as the graphics backend
Running on real Vision Pro hardware (not simulator)
Any advice on how to avoid these distortion issues on Vision Pro would be greatly appreciated.
Thanks!
Hi all, I'm having a variety of issues with GameKit matchmaking. In the simulator, the matchmaking UI pops up and I can click Quick Match, but it immediately reports "Failed to find Players". This happens with both a real Apple ID and a sandbox account.
If I use real devices, the app at least discovers a match, but then none of the delegate methods for the match ever get called and the logs are filled with "socket not connected" and various other errors.
My questions are:
Should matchmaking via Quick Match work in the simulator? I have seen tutorial videos of this working, but I can't seem to get it to work.
How do people debug issues with Game Center / GameKit to find out why it's not able to connect?
Many thanks in advance
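In case it helps others compare notes, this is the sort of minimal delegate logging I'd attach to the match to at least see connection-state changes and errors; MatchDebugger is just a placeholder name, and it assumes you keep a strong reference to it and assign it as the delegate as soon as the match is returned:

import GameKit

// Minimal sketch of GKMatchDelegate logging for debugging connectivity.
final class MatchDebugger: NSObject, GKMatchDelegate {
    func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
        print("player \(player.alias) changed state to \(state.rawValue); still expecting \(match.expectedPlayerCount) player(s)")
    }

    func match(_ match: GKMatch, didFailWithError error: Error?) {
        print("match failed: \(String(describing: error))")
    }

    func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
        print("received \(data.count) bytes from \(player.alias)")
    }
}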
Has anyone experienced convexCast causing a crash, and what might be behind it?
Here's the call stack:
Matchmaking rules
https://developer.apple.com/documentation/gamekit/matchmaking-rules?language=objc
AppStoreConnectApi rules
https://developer.apple.com/documentation/appstoreconnectapi/rules?language=objc
・Environment
Unity 6000.2.2f1
Xcode 16.1
iOS 26
3 iPhones
・AppStoreConnectApi rules
"type": "gameCenterMatchmakingRuleSets",
"id": "f6a88caf-85db-42bf-xxxxxxxxxxxxxxxxxxxx",
"attributes": {
"referenceName": "co.mygame.RuleSets.GvERandom34",
"ruleLanguageVersion": 1,
"minPlayers": 3,
"maxPlayers": 4
},
"type": "gameCenterMatchmakingRules",
"id": "6afa68ce-4d2c-496f-xxxxxxxxxxxxxxxxxxxx",
"attributes": {
"referenceName": "GameVersion",
"description": "Check Game Version. GvERandom34",
"type": "COMPATIBLE",
"expression": "requests[0].properties.gameVersion == requests[1].properties.gameVersion",
"weight": null
},
"type": "gameCenterMatchmakingQueues",
"id": "7fb645ef-4eca-4510-xxxxxxxxxxxxxxxxxxxx",
"attributes": {
"referenceName": "co.mygame.que.GvERandom34",
"classicMatchmakingBundleIds": []
},
・Objective-C Execution code
queueName = "co.mygame.que.GvERandom34"
keyStr = "gameVersion "
valueStr = "1.0"
- (void)MatchQueueParamStr1Start:(NSString*)queueName keyStr:(NSString*)keyStr valueStr:(NSString*)valueStr
{
    if (@available(iOS 17.2, tvOS 17.2, macOS 14.2, visionOS 1.1, *) == NO)
    {
        DBGLOG(@"MatchQueueParamStr1Start Not support.");
        return;
    }
    self->_matchMakingFlag = YES;
    self->_matchFinishFlag = NO;
    self->_myMatch = nil;
    GKMatchRequest *req = [[GKMatchRequest alloc] init];
    if (@available(iOS 17.2, tvOS 17.2, macOS 14.2, visionOS 1.1, *))
    {
        req.queueName = queueName;
        req.properties = @{keyStr: valueStr};
    }
    [[GKMatchmaker sharedMatchmaker] findMatchForRequest:req withCompletionHandler:^(GKMatch *match, NSError *error)
    {
        if (error)
        {
            [self SetupErrorInfo:error descriptionText:@"findMatchForRequest"];
        }
        else if (match)
        {
            self->_myMatch = match;
            self->_myMatch.delegate = self;
        }
        self->_matchMakingFlag = NO;
        self->_matchFinishFlag = YES;
    }];
}
・
I'm trying to match with three devices, but matching never succeeds and the request times out after 5 minutes. What's the problem?
Hello
I am trying to get threadgroup memory access in a fragment shader. In essence, I would like all the fragments in a tile to bitwise-OR some value. My idea was to use simd_or across the SIMD group, then have thread 0 of each SIMD group atomically OR the value into threadgroup memory. Finally, the very first thread of the tile would be tasked with writing the value down to a texture with write access.
Now, I can declare the threadgroup memory argument on the fragment function all right. MTLRenderCommandEncoder has a setThreadgroupMemoryLength:offset:atIndex: call, which I am using the following way:
[renderEncoder setThreadgroupMemoryLength:16 offset:0 atIndex:0];
Unfortunately, all I am getting is the following error (runtime assertion)
-[MTLDebugRenderCommandEncoder setThreadgroupMemoryLength:offset:atIndex:]:3487: failed assertion `Set Threadgroup Memory Length Validation
offset + length(16) must be <= threadgroupMemoryLength(0).`
What am I doing wrong? How can I get threadgroup memory in the fragment shader? I know I could use tile shading and a compute function, but I'd really like to do this with fragment shaders. I'd be grateful for any help.
Hello,
I created a new project with the provided template for Immersive Environments.
Straight out of the box, I built to both the Simulator and to Vision Pro, and the provided Environment looks like this.
What's interesting is that in Reality Composer Pro it looks correct, so how do I achieve the same look?
Thank you in advance!
I play a game called Sonic Forces: Speed Battle that's available in the App Store. I completed a quest outside of the app on a site called TapResearch for some rewards, as I've done before without problems, but after this one time I can no longer get back into the game without it crashing immediately. I tried deleting and reinstalling, but nothing changed. I even tried signing into a different account, but that didn't work either. I then made a new Game Center account to see whether that would work, and it did, though all my progress has been reset. Does anyone know how to fix this?
Hi all,
I’m running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode.
When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:
ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError
I use the "Images" directory directly exported from Object Capture with my iphone 12 pro max (has lidar) set to "area mode" in the object capture app
here is an example heic image metadata from the sequence.
heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
tiles: 6x8, tile size: 512x512
colorspace: YCbCr, 4:2:0
bit depth: 8
thumbnail: 240x320
color profile: nclx
alpha channel: no
depth channel: yes
size: 192x256
bits per pixel: 8
z-near: 1.173828
z-far: 2.552734
d-min: undefined
d-max: undefined
representation: uniform Z
metadata:
Exif: 960 bytes
uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
angle (ccw): 270
region annotations:
none
properties:
camera intrinsic matrix:
focal length: 2813.695557; 2813.695557
principal point: 1522.338502; 2002.843018
skew: 0.000000
camera extrinsic matrix:
rotation matrix:
-0.695 0.344 -0.632
0.007 -0.875 -0.483
-0.719 -0.340 0.606
Questions:
• What do internal error codes 4011 and 4012 refer to?
• Is there something specific about Area mode captures that requires preprocessing before they’re compatible with PhotogrammetrySession?
• Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?
NOTE: I can provide the folder of images for debugging if that would help!
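For context, a minimal sketch of the kind of PhotogrammetrySession setup this goes through; the paths and detail level below are placeholders, and the sketch just shows where the errors above surface:

import RealityKit

func reconstruct() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
    let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .sequential   // the capture is one continuous sequence

    let session = try PhotogrammetrySession(input: inputFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputFile, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .requestError(_, let error):
            print("requestError: \(error)")          // this is where processError shows up
        case .invalidSample(let id, let reason):
            print("invalid sample \(id): \(reason)")
        case .processingComplete:
            print("processing complete")
        default:
            break
        }
    }
}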
I am trying to convert a JPG image to a JP2 (JPEG 2000) format using the ImageMagick library on iOS. However, although the file extension is changing to .jp2, the format of the image does not seem to be changing. The output image is still being treated as a JPG file, and not as a true JP2 format.
Here is the code:
- (IBAction)convertButtonClicked:(id)sender {
    NSString *jpgPath = [[NSBundle mainBundle] pathForResource:@"Example" ofType:@"jpg"];
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Converted.jp2"];
    MagickWand *wand = NewMagickWand();
    if (MagickReadImage(wand, [jpgPath UTF8String]) == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error reading image: %s", description);
        MagickRelinquishMemory(description);
        return;
    }
    if (MagickSetFormat(wand, "JP2") == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error setting image format to JP2: %s", description);
        MagickRelinquishMemory(description);
    }
    if (MagickWriteImage(wand, [tempFilePath UTF8String]) == MagickFalse) {
        NSLog(@"Error writing JP2 image");
        return;
    }
    NSLog(@"Image successfully converted.");
}
@end
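One way to double-check what actually ended up on disk, independent of the file extension, is to look at the first bytes of the output: a real JP2 file starts with the 12-byte JPEG 2000 signature box (00 00 00 0C 6A 50 20 20 0D 0A 87 0A), while a plain JPEG starts with FF D8 FF. A small Swift sketch, reusing the temporary path from the code above:

import Foundation

// Returns true if the file begins with the JPEG 2000 signature box.
func isJP2(at url: URL) -> Bool {
    let jp2Signature: [UInt8] = [0x00, 0x00, 0x00, 0x0C, 0x6A, 0x50, 0x20, 0x20, 0x0D, 0x0A, 0x87, 0x0A]
    guard let data = try? Data(contentsOf: url), data.count >= jp2Signature.count else { return false }
    return Array(data.prefix(jp2Signature.count)) == jp2Signature
}

let converted = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("Converted.jp2")
print(isJP2(at: converted))

If the signature check fails even though MagickWriteImage reported success, one possibility is that this particular ImageMagick build was compiled without the OpenJPEG (JP2) delegate and silently fell back to another coder.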
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView.
I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085
I've included a crash log with the feedback.
If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior.
Please let me know if there's anything I can do to help.
Thank you.
I am trying to install the Game Porting Toolkit 2.1 according to the Readme file provided with the toolkit.
When I run the following command:
WINEPREFIX=~/my-game-prefix `brew --prefix game-porting-toolkit`/bin/wine64 winecfg
I get an error message:
zsh: no such file or directory: /usr/local/opt/game-porting-toolkit/bin/wine64
I don't know how to resolve this.
When I type in the command which brew, I get the path
/usr/local/bin/brew
What am I doing wrong?
If I have one portal on the ceiling and one on the floor, can a tall Entity cross multiple portals at once? Will the opposing portal directions cause it to fail?
No matter what I try for the crossingMode and clippingMode of the PortalComponent I can only get it to fully work for one portal at a time.
I have tried flipping the normals for the crossingMode and clippingMode planes.
I have also tried creating a ceiling portal plane with inverted normals.
It seems like whatever Entity is passing through a portal has one portal it wants to deal with at a time and that's it.
My other option is to create portals using occlusion but I prefer the simplest way.