I am running Appium tests on an iOS 18 simulator, and I am encountering an intermittent issue where the device screen gets locked unexpectedly during the tests. The Appium logs show no errors or unusual activity, and all commands appear to be executed successfully.
However, upon reviewing the device logs, I see entries related to the lock event, but the exact cause remains unclear.
SpringBoard: (SpringBoard) [com.apple.SpringBoard:Common] lockUIFromSource:Boot options:{
SBUILockOptionsLockAutomaticallyKey: 1,
SBUILockOptionsForceLockKey: 1,
SBUILockOptionsUseScreenOffModeKey: 0
}
SpringBoard: (SpringBoard) [com.apple.SpringBoard:Common] -[SBTelephonyManager inCall] 0
SpringBoard: (SpringBoard) [com.apple.SpringBoard:Common] LockUI from source: Now locking
Has anyone experienced similar behavior with Appium on iOS 18, or could there be a setting or configuration in the simulator that is causing this issue?
Hi everyone, I've been trying to integrate with Microsoft Entra ID for more than a week. I've followed every tutorial, ChatGPT, Claude.ai, etc., but nothing works, and I realized that the problem is setting up the information inside the Info.plist correctly. In the old days we were able to edit it directly, but now it's a mess. I'm working with Xcode 15.2; unfortunately, my computer can't be upgraded any further. Yes, I know I have to buy a new one, but I'm not sure a newer version would help me solve this. Does anyone have an example project or some experience with Microsoft Entra ID authentication using SwiftUI?
All the example projects I've found are really old and usually don't use SwiftUI.
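In case it helps narrow things down, here is roughly what I believe the MSAL-related Info.plist entries should look like, based on the MSAL docs (a sketch of my current attempt, assuming the standard MSAL library is in use; $(PRODUCT_BUNDLE_IDENTIFIER) expands to the app's own bundle ID):
<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>msauth.$(PRODUCT_BUNDLE_IDENTIFIER)</string>
        </array>
    </dict>
</array>
<key>LSApplicationQueriesSchemes</key>
<array>
    <string>msauthv2</string>
    <string>msauthv3</string>
</array>
If those entries are not what MSAL actually expects, that may well be where my setup is going wrong.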
Hello Apple community!
Not here to report an issue but I just wanted to make a suggestion ^^
I feel like a common frustration amongst developers is the lack of transparency over bugs filed on developer tools, SDKs, iOS versions, the whole Apple ecosystem really.
This leads to the creation of parallel bug tracking tools (https://github.com/feedback-assistant/reports?tab=readme-ov-file /
https://openradar.appspot.com/page/1) or filing of duplicates for reports that may already exist and are being worked on.
I feel like a shared, public database of issues would save time both for external developers who encounter bugs and for Apple engineers who have to look for possible duplicates.
Other companies have this kind of system in place (Google, for example: https://issuetracker.google.com/), so why not Apple?
Thank you
I developed an app for myself using Xcode (with AI help). It logged certain places I visited in a list, with photos and notes. It worked, but after a few days my iPhone could no longer launch it. What do I have to do to make it last?
I didn't think I needed to pay as this is purely for me.
PacketLogger used to work for me (a few months, maybe a year ago). Right now, when I run it, there is no output (even though I granted it Bluetooth Capture permissions and I have working RFCOMM communication in my program).
Versions:
PacketLogger 2024.03.18 (2024.03.18d1)
Sequoia 15.4.1
M2 chip
How do I add speech recognition via the "+ Capability" button in Xcode? There is no "Speech Recognition" entry in the list.
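For context, my understanding from the documentation is that speech recognition may not need a capability at all, only an NSSpeechRecognitionUsageDescription string in Info.plist plus a runtime authorization request. A minimal, untested sketch of that route:
import Speech

// Info.plist must contain NSSpeechRecognitionUsageDescription (and
// NSMicrophoneUsageDescription if live audio is captured).
func requestSpeechAuthorization() {
    SFSpeechRecognizer.requestAuthorization { status in
        switch status {
        case .authorized:
            print("Speech recognition authorized")
        case .denied, .restricted, .notDetermined:
            print("Speech recognition unavailable: \(status)")
        @unknown default:
            print("Unknown authorization status: \(status)")
        }
    }
}
Is that all that's actually required, or is there a capability I'm missing somewhere?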
I have created an iOS app using Xojo and run it using the Xcode simulator. I want to test it on my phone, and I don't need to put it on the App Store as it is for personal use. I've seen pages that indicate a free Apple developer option is available for a limited time, but I can't for the life of me get my app to actually run on my phone. I've tried both Apple Configurator and Xcode to add my app, with no luck.
My application uses a text file with an extension of .dssfilelist. On Linux I would register the MIME type and associate it with the application in the .desktop file.
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
    <mime-type type="text/dssfilelist">
        <comment>DeepSkyStacker file-list file</comment>
        <glob pattern="*.dssfilelist" />
    </mime-type>
</mime-info>
I believe that I need to add stuff to the Info.plist for my application, but I also understand that CFBundleTypeExtensions is deprecated.
So could you please show me what I now need to add to the Info.plist file so that these files are registered as the "text/dssfilelist" type, associated with my application, and given a .icns icon?
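From what I can tell, the modern replacement is a CFBundleDocumentTypes entry plus a UTExportedTypeDeclarations entry. This is roughly what I think is needed (an untested sketch; the UTI com.deepskystacker.dssfilelist and the icon name dssfilelist.icns are just my own placeholders):
<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeName</key>
        <string>DeepSkyStacker file-list file</string>
        <key>CFBundleTypeRole</key>
        <string>Editor</string>
        <key>CFBundleTypeIconFile</key>
        <string>dssfilelist.icns</string>
        <key>LSItemContentTypes</key>
        <array>
            <string>com.deepskystacker.dssfilelist</string>
        </array>
    </dict>
</array>
<key>UTExportedTypeDeclarations</key>
<array>
    <dict>
        <key>UTTypeIdentifier</key>
        <string>com.deepskystacker.dssfilelist</string>
        <key>UTTypeDescription</key>
        <string>DeepSkyStacker file-list file</string>
        <key>UTTypeConformsTo</key>
        <array>
            <string>public.plain-text</string>
        </array>
        <key>UTTypeIconFile</key>
        <string>dssfilelist.icns</string>
        <key>UTTypeTagSpecification</key>
        <dict>
            <key>public.filename-extension</key>
            <array>
                <string>dssfilelist</string>
            </array>
            <key>public.mime-type</key>
            <array>
                <string>text/dssfilelist</string>
            </array>
        </dict>
    </dict>
</array>
Is that the right shape, and are CFBundleTypeIconFile / UTTypeIconFile the correct way to attach the .icns file?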
Hi,
I really appreciate the C++ binding provided.
I got the metal-cpp source code from the website in the Getting Started section. However, I could not find the same for metal-cpp-extensions. Is it not available, or do we always have to extract it from the sample code?
Thanks.
I’ve developed a virtual machine manager application using the macOS Virtualization framework. The application currently supports both NAT and bridged networking configurations.
I’m now looking to implement host-only networking, where the guest VM can communicate with the host but not with external networks. Is this networking mode supported by the Virtualization framework, and if so, what is the recommended approach to set it up?
Additionally, I would like to implement port forwarding from the host to the guest (e.g., redirecting traffic from a specific port on the host to a port on the guest). Is there a way to configure port forwarding using the built-in APIs of the Virtualization framework, or would this require a custom networking solution?
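For reference, the direction I've been experimenting with is the file-handle attachment, where the guest NIC is connected to one end of a datagram socket pair and the host keeps the other end, so the switching and any port forwarding would happen in user space. A rough sketch of what I mean (the socketpair approach and function name are mine, not something from the framework docs):
import Foundation
import Virtualization

// Sketch: give the guest NIC one end of a connected datagram socket pair and
// keep the other end on the host, where a user-space "switch" could implement
// host-only traffic rules and port forwarding.
func makeHostOnlyNetworkDevice() -> (device: VZVirtioNetworkDeviceConfiguration, hostSide: FileHandle)? {
    var fds: [Int32] = [-1, -1]
    guard socketpair(AF_UNIX, SOCK_DGRAM, 0, &fds) == 0 else { return nil }

    let guestSide = FileHandle(fileDescriptor: fds[0], closeOnDealloc: true)
    let hostSide = FileHandle(fileDescriptor: fds[1], closeOnDealloc: true)

    let device = VZVirtioNetworkDeviceConfiguration()
    device.macAddress = VZMACAddress.randomLocallyAdministered()
    device.attachment = VZFileHandleNetworkDeviceAttachment(fileHandle: guestSide)
    return (device, hostSide)
}
Whether that is the intended way to do this within the framework, or whether there is a more direct built-in option, is exactly what I'd like to confirm.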
Any guidance or best practices for implementing these features within the constraints of the framework would be greatly appreciated.
Hello,
According to the official documentation, the macOS Virtualization Framework currently supports only macOS and Linux guest operating systems. I would like to know if there is any way—officially or through a supported workaround—to run Windows 11 as a guest using this framework.
Additionally, is there any indication or roadmap suggesting that support for Windows guests might be introduced in a future release, such as in macOS 16?
Any insights or official clarification would be greatly appreciated.
Thank you.
I observed the following problem:
A scientific code for Reverse Monte Carlo simulations (‘rmcxas’), compiled with the most up-to-date version of gcc/gfortran and Xcode (including the command line tools) on macOS Sequoia 15.4 on a MacBook Air with an M4 processor, produces the following error when started in a terminal:
dyld[10154]: dyld cache '(null)' not loaded: syscall to map cache into shared region failed
dyld[10154]: Library not loaded: /usr/lib/libSystem.B.dylib
Referenced from: <0144F82E-003C-37A9-A544-9AE6336E549B> /Users/markuswinterer/bin/rmcxas
Reason: tried: '/usr/lib/libSystem.B.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/lib/libSystem.B.dylib' (no such file), '/usr/lib/libSystem.B.dylib' (no such file, no dyld cache), '/usr/local/lib/libSystem.B.dylib' (no such file)
zsh: abort rmcxas
This occurs only about every 5th time the code is started.
Help would be highly appreciated.
Hello,
I recently enrolled in the Apple Developer Program and created an App ID with the bundle ID com.echo.eyes.voice.
I am trying to enable Speech Recognition in the App ID capabilities list, but the option does not appear — even after waiting over a week since my membership was activated.
I’ve already:
Confirmed my Apple Developer account is active
Checked the Identifiers section in the Developer portal
Tried editing the App ID, but Speech Recognition is not listed
Contacted both Developer Support and Developer Technical Support (Case #102594089120), but was told to post here for help
My app uses Capacitor + the @capacitor-community/speech-recognition plugin. I need the com.apple.developer.speech-recognition entitlement to appear so I can use native voice input in iOS.
I would really appreciate help from an Apple engineer or anyone who has faced this issue.
Thank you,
— Daniel Colyer
Hello,
I'm developing a macOS application that uses the Virtualization framework to run Linux virtual machines (specifically Ubuntu and Fedora) on Apple Silicon Macs.
I've noticed that while the macOS host properly supports all trackpad gestures, the two-finger tap gesture for right-click does not work within the Linux guest. Only the primary click is recognized. This behavior is consistent across different Linux distributions and desktop environments (GNOME, KDE, etc.).
I would like to confirm:
Is the macOS Virtualization framework expected to support trackpad gestures such as two-finger tap for right-click within Linux guest VMs?
If not currently supported, is there a known workaround to enable right-click functionality for the trackpad in Linux guests (e.g., configuration changes in the VM, Linux kernel input modules, or framework-level adjustments)?
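For reference, the pointing-device part of my configuration is essentially the default setup shown below (a minimal sketch; as far as I can tell, VZMacTrackpadConfiguration is documented for macOS guests only, which is why I haven't used it for the Linux VMs):
import Virtualization

// Sketch of the pointing-device configuration used for the Linux guests.
// The USB screen-coordinate device reports absolute positions and button
// presses; multi-finger gestures such as two-finger tap do not appear to be
// translated into a secondary click.
func configurePointingDevices(for configuration: VZVirtualMachineConfiguration) {
    configuration.pointingDevices = [VZUSBScreenCoordinatePointingDeviceConfiguration()]
}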
Any insights or suggestions would be greatly appreciated.
Thank you!
Hello Team!
Recently we cleaned up profiles and renewed certificates under our developer account, and we noticed that the profile expiration date shows as invalid; it is supposed to show the certificate expiration date. Because of this I am not able to update or download profiles. Has anyone experienced this? What would the solution be?
Thanks,
Kumar.
Hello, I'm not a developer. I have an app in the App Store and an Apple Developer account. I have two questions: 1. I'm trying to find out exactly where the source code of my app is located, and I can't. It should be in Xcode; there is only Xcode Cloud in my account, which is inactive. 2. I want to find out what programming language my app is written in. How do I do this?
How can I enable Developer Mode without a computer?
Please reply to me in Chinese.
I'm a newbie to on-demand resources and I feel like I'm missing something very obvious. I've successfully tagged and set up ODR in my Xcode project, but now I want to upload the assets to my own server so I can retrieve them from within the app, and I can't figure out how to export the files I need.
I'm following the ODR Guide and I'm stuck at Step #4: after I've selected my archive in the Archives window, it says to "Click the Export button", but this is what I see:
As shown in the screenshot, there is no Export button visible. I have tried different approaches, including distributing to App Store Connect and doing a local development release. The best I've been able to do is find a .assetpack folder inside the archive package through the Finder, but uploading that, or the asset.car inside it, just gives me a "cannot parse response" error from the ODR loading code. I've verified I uploaded those to the correct URL.
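For context, the loading code in question is essentially the standard NSBundleResourceRequest flow (sketched from memory below; the "Level1" tag is just a placeholder for my real tags):
import Foundation

// Sketch of the on-demand-resource loading path referred to above.
func loadAssets() {
    let request = NSBundleResourceRequest(tags: ["Level1"])   // placeholder tag
    request.beginAccessingResources { error in
        if let error = error {
            // The "cannot parse response" error surfaces in this callback.
            print("ODR load failed: \(error)")
            return
        }
        // The tagged resources are now available through Bundle.main.
        request.endAccessingResources()
    }
}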
Can anyone walk me through how to save out the file(s) I need, in a form I can just upload to my server?
Thanks,
Pete
Hi, I’m currently developing a watchOS app and ran into an issue where I can’t enable Developer Mode on my Apple Watch.
Device info:
Apple Watch Series 9 (watchOS 10.4)
Paired with iPhone 14 Pro (iOS 17.4.1)
Xcode 15.3 (macOS 15.5, Apple Silicon)
Issue:
When I try to run the app on my physical watch device, Xcode prompts that Developer Mode needs to be enabled. However, there is no approval request on the Apple Watch, and no Developer Mode option appears under Settings → Privacy & Security.
I’ve already tried the following:
Rebooting both devices
Unpairing and re-pairing the watch
Erasing and setting up the watch again
Signing out and back into my Apple ID
Using the latest Xcode version (15.3 and 16.3 both tested)
Running clean builds and checking provisioning profiles
Attempting install via both simulator and physical device
Still no luck — the app will not launch on the Apple Watch due to Developer Mode being disabled, and the option is missing entirely from Settings.
I visited an Apple Store Genius Bar, but they couldn’t help and told me to contact Developer Support. I’ve already submitted a support request, but in the meantime I wanted to ask here in case anyone else has experienced this and found a workaround.
Thanks in advance.
"UITests recording reports 'The capability "Create Service Socket" is not supported by this device.' on M1 chip, but works normally on Intel chip."