Hello everyone,
I am migrating a legacy KEXT to a DriverKit (DEXT) architecture. While the DEXT itself is working correctly, I am completely blocked by a code signing issue when trying to establish the UserClient connection from our SwiftUI management app.
Project Goal & Status:
Our DEXT (com.accusys.Acxxx.driver) activates successfully (systemextensionsctl list confirms [activated enabled]).
The core functionality is working (diskutil list shows the corresponding disk device node).
The Core Problem: The userclient-access Signing Error
To allow the app to connect to the DEXT, the com.apple.developer.driverkit.userclient-access entitlement is required in the app's .entitlements file.
However, as soon as this entitlement is added, the build fails.
Both automatic and manual signing fail with the same error:
`Provisioning profile ... doesn't match the entitlements file's value for the ... userclient-access entitlement.`
This build failure prevents the generation of an .app bundle, making it impossible to inspect the final entitlements with codesign.
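For reference, our understanding of the entry the app's .entitlements file needs (a sketch; the array lists the bundle IDs of the dexts the app may open):

<key>com.apple.developer.driverkit.userclient-access</key>
<array>
    <string>com.accusys.Acxxx.driver</string>
</array>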
What We've Confirmed:
The necessary capabilities (like DriverKit Communicates with Drivers) are visible and enabled for our App ID on the developer portal.
The issue persists on a clean system state and on the latest macOS Sequoia 15.7.1.
Our Research and Hypothesis:
We have reviewed the official documentation "Diagnosing issues with entitlements" (TN3125).
According to the documentation, a "doesn't match" error implies a discrepancy between the entitlements file and the provisioning profile.
Given that we have tried both automatic and manual profiles (after enabling the capability online), our hypothesis is that the provisioning profile generation on Apple's backend is not actually embedding the approved userclient-access entitlement in the profile file itself, and that Xcode is correctly detecting this discrepancy and failing the build.
Our Questions:
Did we misunderstand a step in the process, or is the issue not with the entitlement request at all? Alternatively, are there any other modifications we can make to successfully connect our App to the DEXT and trigger NewUserClient?
Thank you for any guidance.
Hello,
We have developed a hardware product that embeds an FTDI USB-serial converter, and a macOS application that communicates with this device. We would like to port our application to iPadOS. When I plug the device into the iPad, its console logs show that it is recognized as a serial port. When I attempt to enumerate serial ports on iPadOS using IOKit, I can see matching IOSerialBSDClient services, but their properties are sandboxed, including the IOCalloutDevice property, for example:
0 error 16:36:10.922450-0700 kernel Sandbox: ***(662) deny(1) iokit-get-properties iokit-class:IOUserSerial property:IOTTYSuffix
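For context, the enumeration is roughly the following (a minimal sketch using standard IOKit matching, the same calls we use on macOS; error handling omitted):
#include <IOKit/IOKitLib.h>
#include <IOKit/serial/IOSerialKeys.h>

static void listSerialServices(void)
{
    // Match every IOSerialBSDClient in the registry.
    CFMutableDictionaryRef matching = IOServiceMatching(kIOSerialBSDServiceValue);
    io_iterator_t iterator = IO_OBJECT_NULL;
    if (IOServiceGetMatchingServices(kIOMainPortDefault, matching, &iterator) != KERN_SUCCESS) {
        return;
    }
    io_object_t service;
    while ((service = IOIteratorNext(iterator)) != IO_OBJECT_NULL) {
        // This is the property read that the sandbox denies on iPadOS.
        CFTypeRef callout = IORegistryEntryCreateCFProperty(service, CFSTR(kIOCalloutDeviceKey),
                                                            kCFAllocatorDefault, 0);
        if (callout) {
            CFRelease(callout);
        }
        IOObjectRelease(service);
    }
    IOObjectRelease(iterator);
}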
Is there an entitlement that can be applied that allows access to the serial port properties of an attached USB device? Or do I need to implement my own USBDriverKit driver for this device, as seems to be implied in these forum threads:
https://developer.apple.com/forums/thread/795202
https://developer.apple.com/forums/thread/655527
Dear Apple engineers,
We have developed a DriverKit (DEXT) driver for an HBA RAID controller.
The RAID controller is connected to hosts through Thunderbolt (PCIe port of the Thunderbolt controller).
We do plug/unplug tests to verify the developed driver. The test always fails within about 100 cycles with a macOS crash (kernel panic).
The panic contains
“LLC Bus error (Unavailable) from cpu0: FAR=0xa40100008 LLC_ERR_STS/ADR/INF=0x80/0x300480a40100008/0x1400000005 addr=0xa40100008 cmd=0x18(ACC_CIFL2C_CMD_RD_LD: request for load miss in E or S state)”
At first we assumed the issue was with our hardware, but we ran the test on different hosts (Mac mini M3 and M4) with different units of our device.
The error points to the same physical address, FAR=0xa40100008, even though the hosts are different.
The two full panic logs are attached (one for the M4 host, another for the M3 host).
Could you share your understanding of the crash and give any hints on how we can fix it?
Please let us know if you need any additional data.
Thank you
M3 panic:
https://drive.google.com/file/d/1GJXd3tTW6ajdrHpFsJxO_tWWYKYIgcMc/view?usp=share_link
M4 panic:
https://drive.google.com/file/d/1SU-3aBSdhLsyhhxsLknzw9wGvBQ9TbJC/view?usp=share_link
We have an IOUserSCSIPeripheralDeviceType00-class DEXT supporting USB-attached devices. With some high-capacity drives, the default setPowerState can take more than 20 seconds to complete. This triggers a kernel panic, although this drive behavior is not unexpected.
With a kernel extension implementing similar functionality we see no such problem; from reading Apple's open source, the corresponding timeout there appears to have been 100 seconds.
What changes will allow setPowerState to complete without the kernel panic?
kernel panic report excerpt attached.
panic-full-2025-09-04-063127.0003.txt
We want to allocate a block of physically contiguous memory (≤1 MB) for an audio ring buffer used for DMA, but we haven't found any explicit method in the DriverKit documentation for allocating contiguous memory.
I'm aware that IOBufferMemoryDescriptor::Create can be used in DriverKit to allocate memory and share it with user space. However, is the allocated memory physically contiguous? Can it guarantee that when I subsequently call PrepareForDMA on an IODMACommand, there will be only one segment?
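For context, a minimal sketch of how we would check at runtime whether the prepared buffer ends up as a single segment (device here is the provider we would pass to IODMACommand::Create; names and sizes are ours):
kern_return_t CheckRingIsSingleSegment(IOService *device)
{
    // Allocate the ring buffer (<= 1 MB) that we intend to use for DMA.
    IOBufferMemoryDescriptor *buffer = nullptr;
    kern_return_t ret = IOBufferMemoryDescriptor::Create(kIOMemoryDirectionInOut,
                                                         1024 * 1024, 4096, &buffer);
    if (ret != kIOReturnSuccess) {
        return ret;
    }

    IODMACommandSpecification spec = {};
    spec.options = kIODMACommandSpecificationNoOptions;
    spec.maxAddressBits = 64;

    IODMACommand *dma = nullptr;
    ret = IODMACommand::Create(device, kIODMACommandCreateNoOptions, &spec, &dma);
    if (ret != kIOReturnSuccess) {
        OSSafeReleaseNULL(buffer);
        return ret;
    }

    uint64_t flags = 0;
    uint32_t segmentCount = 32;
    IOAddressSegment segments[32] = {};
    ret = dma->PrepareForDMA(kIODMACommandPrepareForDMANoOptions,
                             buffer, 0, 0, &flags, &segmentCount, segments);
    if (ret == kIOReturnSuccess) {
        // segmentCount == 1 would mean the whole ring maps to one contiguous device range.
        os_log(OS_LOG_DEFAULT, "PrepareForDMA produced %u segment(s)", segmentCount);
        dma->CompleteDMA(kIODMACommandCompleteDMANoOptions);
    }
    OSSafeReleaseNULL(dma);
    OSSafeReleaseNULL(buffer);
    return ret;
}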
Could you please help review this? Thank you!
Hello everyone,
I've encountered a very strange and persistent logging issue with my DriverKit DEXT and would appreciate any insights from the community.
[Problem Summary]
My DriverKit DEXT, along with its companion Swift app, is functionally working perfectly.
I can repeatedly call methods in the DEXT from the app (e.g., a Ping-Pong test and a StaticProcessInbandTask call) and receive the correct response every time.
However, the os_log messages within my IOUserClient subclass are only successfully recorded for the very first set of interactions. After that, all subsequent logs are completely missing.
What's even stranger is that all successfully recorded logs are attributed to the kernel: process, even for purely user-space methods like ExternalMethod.
[Development Environment]
macOS: 15.7.1
Xcode: 16.4
Hardware: MacBook Pro M1
DEXT Logging Macro (Log.h):
#include <os/log.h>
#define Log(fmt, ...) \
do { \
os_log(OS_LOG_DEFAULT, "[%{public}s] " fmt, __FUNCTION__, ##__VA_ARGS__); \
} while (0)
[Steps to Reproduce & Observed Behavior]
The DEXT is successfully loaded via the companion app.
I click the "Ping-Pong" button, then the "Process InBand" button in the app. The app's UI log correctly shows that the request was sent and a successful response was received from the DEXT.
I repeat step 2 multiple times. Each interaction works flawlessly from the app's perspective.
I then use the log show command to export the logs from this period, for example:
log show --last 5m | grep "com.accusys.Acxxx.driver" > dext_logs.txt
Observed Result (Log Content):
In the dext_logs.txt file, I can only see the logs from the very first Ping-Pong and the very first Process InBand call. All subsequent, successful operations leave no trace in the logs.
kernel: (com.accusys.Acxxx.driver.dext) [ExternalMethod] // { ---
kernel: (com.accusys.Acxxx.driver.dext) [ExternalMethod] // --- }
kernel: (com.accusys.Acxxx.driver.dext) [StaticPingPong] // { ---
kernel: (com.accusys.Acxxx.driver.dext) [StaticPingPong] // --- }
kernel: (com.accusys.Acxxx.driver.dext) [ExternalMethod] // { ---
kernel: (com.accusys.Acxxx.driver.dext) [ExternalMethod] // --- }
kernel: (com.accusys.Acxxx.driver.dext) [StaticProcessInbandTask] // { ---
kernel: (com.accusys.Acxxx.driver.dext) [StaticProcessInbandTask] // --- }
<--- END OF FILE (No new logs appear after this point) --->
[Core Questions]
Why are logs in the IOUserClient subclass recorded only once? Given that the DEXT is clearly still running and processing requests, why would os_log calls only succeed in writing to the system log database on the first interaction?
Why are all logs attributed to the kernel? Why would logs from 100% user-space code like ExternalMethod and StaticPingPong be attributed to the kernel process?
[Solutions Attempted That Did Not Work]
I have verified with ps aux that the DEXT process (com.accusys.Acxxx.driver) is running continuously in the background and has not crashed.
Attempted to force-restart the logging service with sudo killall logd, but the issue persists.
Performed the most thorough reset possible using systemextensionsctl reset followed by a full reboot, then reinstalled the DEXT. The issue remains exactly the same.
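One more capture variant we plan to try, in case the grep-based filter or the message level is hiding entries (using log show's documented --info/--debug switches and --predicate option):
log show --last 5m --info --debug --predicate 'sender CONTAINS "com.accusys.Acxxx.driver"' > dext_logs.txt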
Thank you for any possible help or suggestions
Best, Charles
Hello everyone,
I am migrating a KEXT for a SCSI PCI RAID controller (LSI 3108 RoC) to DriverKit (DEXT). While the DEXT loads successfully, I'm facing a DMA issue: an INQUIRY command results in a 0-byte disk because the data buffer received by the DEXT is all zeros, despite our firmware logs confirming that the correct data was prepared and sent.
We have gathered detailed forensic evidence and would appreciate any insights from the community.
Detailed Trace of a Failing INQUIRY Command:
1. DEXT Dispatches the Command:
Our UserProcessParallelTask implementation correctly receives the INQUIRY task. Logs show the requested transfer size is 6 bytes, and the DEXT obtains the IOVA (0x801c0000) to pass to the hardware.
DEXT Log:
[UserProcessParallelTask_Impl] --- FORENSIC ANALYSIS ---
[UserProcessParallelTask_Impl] fBufferIOVMAddr = 0x801c0000
[UserProcessParallelTask_Impl] fRequestedTransferCount = 6
2. Firmware Receives IOVA and Prepares Correct Data:
A probe in our firmware confirms that the hardware successfully received the correct IOVA and the 6-byte length requirement. The firmware then prepares the correct 6-byte INQUIRY response in its internal staging buffer.
Firmware Logs:
-- [FIRMWARE PROBE: INCOMING DMA DUMP] --
Host IOVA (High:Low) = 0x00000000801c0000
DataLength in Header = 6 (0x6)
--- [Firmware Outgoing Data Dump from go_inquiry] ---
Source Address: 0x228BB800, Length: 6 bytes
0x0000: 00 00 05 12 1F 00
3. Hardware Reports a Successful Transfer, but Data Is Lost:
After the firmware initiates the DMA write to the Host IOVA, the hardware reports a successful transfer of 6 bytes back to our DEXT.
DEXT Completion Log:
[AME_Host_Normal_Handler_SCSI_Request] [TaskID: 200] COMPLETING...
[AME_Host_Normal_Handler_SCSI_Request] Hardware Transferred = 6 bytes
[AME_Host_Normal_Handler_SCSI_Request] - ReplyStatus = SUCCESS (0x0)
[AME_Host_Normal_Handler_SCSI_Request] - SCSIStatus = SUCCESS (0x0)
The Core Contradiction:
Despite the firmware preparing the correct data and the hardware reporting a successful DMA transfer, the fDataBuffer in our DEXT remains filled with zeros. The 6 bytes of data are lost somewhere between the PCIe bus and host memory.
This "data-in-firmware, zeros-in-DEXT" phenomenon leads us to believe the issue lies in memory address translation or a system security policy, as our legacy KEXT works perfectly on the same hardware.
Compared to a KEXT, are there any known, stricter IOMMU/security policies for a DEXT that could cause this kind of "silent write failure" (even with a correct IOVA)?
Alternatively, what is the correct and complete expected workflow in DriverKit for preparing an IOMemoryDescriptor* fDataBuffer (received in UserProcessParallelTask) for a PCI hardware device to use as a DMA write target?
Any official documentation, examples, or advice on the IOMemoryDescriptor to PCI Bus Address workflow would be immensely helpful.
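For reference, the generic DriverKit pattern we understand for turning an IOMemoryDescriptor into a device-visible address is roughly the following; whether IOUserSCSIParallelInterfaceController expects the driver to do this itself, or already hands us fBufferIOVMAddr pre-mapped, is exactly what we would like confirmed (a sketch; ivars->pciDevice, ivars->dmaSpecification, and ProgramHardware are our own names):
kern_return_t ret;
IODMACommand *dma = nullptr;

// Create a DMA command bound to the PCI provider so mappings go through its IOMMU context.
ret = IODMACommand::Create(ivars->pciDevice, kIODMACommandCreateNoOptions,
                           &ivars->dmaSpecification, &dma);
if (ret != kIOReturnSuccess) {
    return ret;
}

// Map the task's data buffer and obtain the device-visible (IOVA) segment.
uint64_t flags = 0;
uint32_t segmentCount = 1;
IOAddressSegment segment = {};
ret = dma->PrepareForDMA(kIODMACommandPrepareForDMANoOptions,
                         fDataBuffer,      // the IOMemoryDescriptor received in UserProcessParallelTask
                         0, 0,             // offset 0, length 0 = entire descriptor
                         &flags, &segmentCount, &segment);
if (ret == kIOReturnSuccess) {
    // segment.address is the IOVA the hardware should write to; after the I/O
    // completes, CompleteDMA() releases the mapping.
    ProgramHardware(segment.address, segment.length);   // placeholder for our command setup
}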
Thank you.
Charles
Hello everyone,
We are migrating our KEXT for a Thunderbolt storage device to a DEXT based on IOUserSCSIParallelInterfaceController.
We've run into a fundamental issue where the driver's behavior splits based on the I/O source: high-level I/O from the file system (e.g., Finder, cp) is mostly functional (with a minor ls -al sorting issue for Traditional Chinese filenames), while low-level I/O directly to the block device (e.g., diskutil) fails or acts unreliably. Basic read/write with dd appears to be mostly functional.
We suspect that our DEXT is failing to correctly register its full device "personality" with the I/O Kit framework, unlike its KEXT counterpart. As a result, low-level I/O requests with special attributes (like cache synchronization) sent by diskutil are not being handled correctly by the IOUserSCSIParallelInterfaceController framework of our DEXT.
Actions Performed & Relevant Logs
1. Discrepancy: diskutil info Shows Different Device Identities for DEXT vs. KEXT
For the exact same hardware, the KEXT and DEXT are identified by the system as two different protocols.
KEXT Environment:
Device Identifier: disk5
Protocol: Fibre Channel Interface
...
Disk Size: 66.0 TB
Device Block Size: 512 Bytes
DEXT Environment:
Device Identifier: disk5
Protocol: SCSI
SCSI Domain ID: 2
SCSI Target ID: 0
...
Disk Size: 66.0 TB
Device Block Size: 512 Bytes
2. Divergent I/O Behavior: Partial Success with Finder/cp vs. Failure with diskutil
High-Level I/O (Partially Successful):
In the DEXT environment, if we operate on an existing volume (e.g., /Volumes/MyVolume), file copy operations using Finder or cp succeed. Furthermore, the logs we've placed in our single I/O entry point, UserProcessParallelTask_Impl, are triggered.
Side Effect: However, running ls -al on such a volume shows an incorrect sorting order for files with Traditional Chinese names (they appear before . and ..).
Low-Level I/O (Contradictory Behavior):
In the DEXT environment, when we operate directly on the raw block device (/dev/disk5):
diskutil partitionDisk ... -> Fails 100% of the time with the error: Error: -69825: Wiping volume data to prevent future accidental probing failed.
dd command -> Basic read/write operations appear to work correctly (a write can be immediately followed by a read within the same DEXT session, and the data is correct).
3. Evidence of Cache Synchronization Failure (Non-deterministic Behavior)
The success of the dd command is not deterministic. Cross-environment tests prove that its write operations are unreliable:
First Test:
In the DEXT environment, write a file with random data to /dev/disk5 using dd.
Reboot into the KEXT environment.
Read the data back from /dev/disk5 using dd. The result is a file filled with all zeros.
Conclusion: The write operation only went to the hardware cache, and the data was lost upon reboot.
Second Test:
In the DEXT environment, write the same random file to /dev/disk5 using dd.
Key Variable: Immediately after, still within the DEXT environment, read the data back once for verification. The content is correct!
Reboot into the KEXT environment.
Read the data back from /dev/disk5. This time, the content is correct!
Conclusion: The additional read operation in the second test unintentionally triggered a hardware cache flush. This proves that the dd (in our DEXT) write operation by itself does not guarantee synchronization, making its behavior unreliable.
Our Problem
Based on the observations above, we have reached the following conclusions:
High-Level Path (triggered by Finder/cp):
When an I/O request originates from the high-level file system, the framework seems to enter a fully-featured mode. In this mode, all SCSI commands, including READ/WRITE, INQUIRY, and SYNCHRONIZE CACHE, are correctly packaged and dispatched to our UserProcessParallelTask_Impl entry point. Therefore, Finder operations are mostly functional.
Low-Level Path (triggered by dd/diskutil):
When an I/O request originates from the low-level raw block device layer:
The most basic READ/WRITE commands can be dispatched (which is why dd appears to work).
However, critical management commands, such as INQUIRY and SYNCHRONIZE CACHE, are not being correctly dispatched or handled. This leads to the incorrect device identification in diskutil info and the failure of diskutil partitionDisk due to its inability to confirm cache synchronization.
We would greatly appreciate any guidance, suggestions, or insights on how to resolve this discrepancy. Specifically, what is the recommended approach within DriverKit to ensure that a DEXT based on IOUserSCSIParallelInterfaceController can properly declare its capabilities and handle both high-level and low-level I/O requests uniformly?
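To narrow this down, we plan to instrument UserProcessParallelTask_Impl to record exactly which SCSI opcodes reach the DEXT on the diskutil/dd path, roughly as follows (a sketch; we are assuming the task structure exposes the CDB as fCommandDescriptorBlock, so please correct us if the SCSIControllerDriverKit field names differ):
uint8_t opcode = parallelTask.fCommandDescriptorBlock[0];
switch (opcode) {
    case 0x12: Log("INQUIRY");                 break;
    case 0x25: Log("READ CAPACITY (10)");      break;
    case 0x28: Log("READ (10)");               break;
    case 0x2A: Log("WRITE (10)");              break;
    case 0x35: Log("SYNCHRONIZE CACHE (10)");  break;
    default:   Log("opcode 0x%02x", opcode);   break;
}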
Thank you.
Charles
First time working with DEXT, so I'm trying to make "Communicating between a DriverKit extension and a client app" sample project to work and I'm having trouble.
I'm following the steps for "Automatically manage signing". The "Sign to Run Locally" step seems to be outdated now, so I used Apple Development signing instead.
Now I get a provisioning profile error saying the profile doesn't include the com.apple.developer.driverkit.allow-any-userclient-access entitlement.
If I go to Certificates, Identifiers & Profiles for this App ID and open Capability Requests, selecting DriverKit Allow Any UserClient Access takes me to a page where I can choose DriverKit Entitlement, but Any UserClient Access is not an option. I requested UserClient Access instead, but the request was denied; the engineering team said these entitlements are not necessary for testing locally. So I removed com.apple.developer.driverkit.allow-any-userclient-access from NullDriver.entitlements. I can now install the DEXT, but when I click the "Communicate with Dext" button, I receive the error: Failed opening connection to dext with error: 0xe00002e2.
Looking in the console, I see this error message:
DK: NullDriverUserClient-0x10000ad6a:UC failed userclient-access check, needed bundle ID com.example.apple-samplecode.dext-to-user-client-2-TEST.driver
I tried many things, but I can't seem to be able to pass through the check to allow the application to communicate with the dext.
What am I missing?
Note that I'm focusing only on macOS, not iOS.
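For reference, my understanding is that once the entitlement is granted, the app's .entitlements would list the dext's bundle ID from the console message above, roughly like this (a sketch):

<key>com.apple.developer.driverkit.userclient-access</key>
<array>
    <string>com.example.apple-samplecode.dext-to-user-client-2-TEST.driver</string>
</array>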
Thanks.
Hello everyone,
We are in the process of migrating a high-performance storage KEXT to DriverKit. During our initial validation phase, we noticed a performance gap between the DEXT and the KEXT, which prompted us to try and optimize our I/O handling process.
Background and Motivation:
Our test hardware is a RAID 0 array of two HDDs. According to AJA System Test, our legacy KEXT achieves a write speed of about 645 MB/s on this hardware, whereas the new DEXT reaches about 565 MB/s. We suspect the primary reason for this performance gap might be that the DEXT, by default, uses a serial work-loop to submit I/O commands, which fails to fully leverage the parallelism of the hardware array.
Therefore, to eliminate this bottleneck and improve performance, we configured a dedicated parallel dispatch queue (MyParallelIOQueue) for the UserProcessParallelTask method.
However, during our implementation attempt, we encountered a critical issue that caused a system-wide crash.
The Operation Causing the Panic:
We configured MyParallelIOQueue using the following combination of methods:
In the .iig file: We appended the QUEUENAME(MyParallelIOQueue) macro after the override keyword of the UserProcessParallelTask method declaration.
In the .cpp file: We manually created a queue with the same name by calling the IODispatchQueue::Create() function within our UserInitializeController method.
The Result:
This results in a macOS kernel panic during the DEXT loading process, forcing the user to perform a hard reboot.
After the reboot, checking with the systemextensionsctl list command reveals the DEXT's status as [activated waiting for user], which indicates that it encountered an unrecoverable, fatal error during its initialization.
Key Code Snippets to Reproduce the Panic:
In .iig file - this was our exact implementation:
class DRV_MAIN_CLASS_NAME: public IOUserSCSIParallelInterfaceController
{
public:
virtual kern_return_t UserProcessParallelTask(...) override
QUEUENAME(MyParallelIOQueue);
};
In .h file:
struct DRV_MAIN_CLASS_NAME_IVars {
// ...
IODispatchQueue* MyParallelIOQueue;
};
In UserInitializeController implementation:
kern_return_t
IMPL(DRV_MAIN_CLASS_NAME, UserInitializeController)
{
// ...
// We also included code to manually create the queue.
kern_return_t ret = IODispatchQueue::Create("MyParallelIOQueue",
kIODispatchQueueReentrant,
0,
&ivars->MyParallelIOQueue);
if (ret != kIOReturnSuccess) {
// ... error handling ...
}
// ...
return kIOReturnSuccess;
}
Our Question:
What is the officially recommended and most stable method for configuring UserProcessParallelTask_Impl() to use a parallel I/O queue?
Clarifying this is crucial for all developers pursuing high-performance storage solutions with DriverKit. Any explanation or guidance would be greatly appreciated.
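For completeness, the alternative pattern we are now considering (unconfirmed; this is exactly what we would like validated): create the queue and register it under the QUEUENAME string with SetDispatchQueue() during initialization, rather than only storing the pointer in ivars.
kern_return_t ret;
IODispatchQueue *queue = nullptr;
ret = IODispatchQueue::Create("MyParallelIOQueue", kIODispatchQueueReentrant, 0, &queue);
if (ret != kIOReturnSuccess) {
    return ret;
}
// Register the queue under the same name used in the QUEUENAME() annotation so the
// runtime can dispatch UserProcessParallelTask onto it.
ret = SetDispatchQueue("MyParallelIOQueue", queue);
if (ret != kIOReturnSuccess) {
    OSSafeReleaseNULL(queue);
    return ret;
}
ivars->MyParallelIOQueue = queue;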
Best Regards,
Charles
I am developing a DriverKit driver with the goal of sending vendor-specific commands to a USB storage device.
I have successfully created the DriverKit driver, and when I connect the USB storage device, it appears correctly in IORegistryExplorer.
My driver class inherits from IOUserSCSIPeripheralDeviceType00 in the SCSIPeripheralsDriverKit framework.
I also created a UserClient class that inherits from IOUserClient, and from its ExternalMethod I tried sending an INQUIRY command as a basic test to confirm that command transmission works.
However, the device returns an ILLEGAL REQUEST (Sense Key 0x5 / ASC 0x20).
Could someone advise what I might be doing wrong?
Below are the logs output from the driver:
2025-11-14 21:00:43.573730+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] Driver - NewUserClient() - Finished.
2025-11-14 21:00:43.573733+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - Start()
2025-11-14 21:00:43.573807+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - Start() - Finished.
2025-11-14 21:00:43.574249+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - ExternalMethod() called
2025-11-14 21:00:43.574258+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - ----- SCSICmdINQUIRY -----
2025-11-14 21:00:43.574268+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - command.fRequestedByteCountOfTransfer = 512
2025-11-14 21:00:43.575980+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - SCSICmdINQUIRY() UserSendCDB fCompletionStatus = 0x0
2025-11-14 21:00:43.575988+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - SCSICmdINQUIRY() UserSendCDB fServiceResponse = 0x2
2025-11-14 21:00:43.575990+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - SCSICmdINQUIRY() UserSendCDB fSenseDataValid = 0x1
2025-11-14 21:00:43.575992+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - SCSICmdINQUIRY() UserSendCDB VALID_RESPONSE_CODE = 0x70
2025-11-14 21:00:43.575994+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - SCSICmdINQUIRY() UserSendCDB SENSE_KEY = 0x5
2025-11-14 21:00:43.575996+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - SCSICmdINQUIRY() UserSendCDB ADDITIONAL_SENSE_CODE = 0x20
2025-11-14 21:00:43.575998+0900 0x26e9 Default 0x0 0 0 kernel: (SampleDriverKitApp.SampleDriverKitDriver.dext) [DEBUG] UserClient - SCSICmdINQUIRY() UserSendCDB ADDITIONAL_SENSE_CODE_QUALIFIER = 0x0
Here is the UserClient class:
class SampleDriverKitUserClient: public IOUserClient
{
public:
virtual bool init(void) override;
virtual kern_return_t Start(IOService* provider) override;
virtual kern_return_t Stop(IOService* provider) override;
virtual void free(void) override;
virtual kern_return_t ExternalMethod(
uint64_t selector,
IOUserClientMethodArguments* arguments,
const IOUserClientMethodDispatch* dispatch,
OSObject* target,
void* reference) override;
void SCSICmdINQUIRY(SampleDriverKitDriver *driver) LOCALONLY;
};
Here is the part that sends the INQUIRY command:
void SampleDriverKitUserClient::SCSICmdINQUIRY(SampleDriverKitDriver *driver)
{
kern_return_t kr = KERN_SUCCESS;
SCSIType00OutParameters command = {};
UInt8 dataBuffer[512] = {0};
SCSI_Sense_Data senseData = {0};
Log("----- SCSICmdINQUIRY -----");
SetCommandCDB(&command.fCommandDescriptorBlock, 0x12, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);
command.fLogicalUnitNumber = 0;
command.fTimeoutDuration = 10000; // milliseconds
command.fRequestedByteCountOfTransfer = sizeof(dataBuffer);
Log("command.fRequestedByteCountOfTransfer = %lld", command.fRequestedByteCountOfTransfer);
command.fBufferDirection = kIOMemoryDirectionIn;
command.fDataTransferDirection = kSCSIDataTransfer_FromTargetToInitiator;
command.fDataBufferAddr = reinterpret_cast<uint64_t>(dataBuffer);
command.fSenseBufferAddr = reinterpret_cast<uint64_t>(&senseData);
command.fSenseLengthRequested = sizeof(senseData);
if( driver ) {
SCSIType00InParameters response = {};
kr = driver->UserSendCDB(command, &response);
if( kr != KERN_SUCCESS ) {
Log("SCSICmdINQUIRY() UserSendCDB failed (0x%x)", kr);
return;
}
Log("SCSICmdINQUIRY() UserSendCDB fCompletionStatus = 0x%x", response.fCompletionStatus);
Log("SCSICmdINQUIRY() UserSendCDB fServiceResponse = 0x%x", response.fServiceResponse);
Log("SCSICmdINQUIRY() UserSendCDB fSenseDataValid = 0x%x", response.fSenseDataValid);
Log("SCSICmdINQUIRY() UserSendCDB VALID_RESPONSE_CODE = 0x%x", senseData.VALID_RESPONSE_CODE);
Log("SCSICmdINQUIRY() UserSendCDB SENSE_KEY = 0x%x", senseData.SENSE_KEY);
Log("SCSICmdINQUIRY() UserSendCDB ADDITIONAL_SENSE_CODE = 0x%x", senseData.ADDITIONAL_SENSE_CODE);
Log("SCSICmdINQUIRY() UserSendCDB ADDITIONAL_SENSE_CODE_QUALIFIER = 0x%x", senseData.ADDITIONAL_SENSE_CODE_QUALIFIER);
if( response.fServiceResponse == kSCSIServiceResponse_TASK_COMPLETE ) {
Log("SCSICmdINQUIRY() UserSendCDB complete success!!");
}
for( int i=0; i < 5; i++ ) {
Log("data [%04d]=0x%x [%04d]=0x%x [%04d]=0x%x [%04d]=0x%x [%04d]=0x%x [%04d]=0x%x [%04d]=0x%x [%04d]=0x%x",
i*8+0, dataBuffer[i*8+0],
i*8+1, dataBuffer[i*8+1],
i*8+2, dataBuffer[i*8+2],
i*8+3, dataBuffer[i*8+3],
i*8+4, dataBuffer[i*8+4],
i*8+5, dataBuffer[i*8+5],
i*8+6, dataBuffer[i*8+6],
i*8+7, dataBuffer[i*8+7] );
}
char vendorID[9] = {0};
memcpy(vendorID, &dataBuffer[8], 8);
Log("vendorID = %s",vendorID);
char productID[17] = {0};
memcpy(productID, &dataBuffer[16], 16);
Log("productID = %s",productID);
}
}
My environment is:
MacBook Pro (M2), macOS 15.6
If anyone has insight into what causes the ILLEGAL REQUEST, or what I am missing when using IOUserSCSIPeripheralDeviceType00 and UserSendCDB, I would greatly appreciate your help.
Thank you.
We are developing a DriverKit driver on Apple M1. We use the following code to prepare DMA buffer:
IODMACommandSpecification dmaSpecification;
bzero(&dmaSpecification, sizeof(dmaSpecification));
dmaSpecification.options = kIODMACommandSpecificationNoOptions;
dmaSpecification.maxAddressBits = p_dma_mgr->maxAddressBits;
kret = IODMACommand::Create(p_dma_mgr->device,
kIODMACommandCreateNoOptions,
&dmaSpecification,
&impl->dma_cmd
);
if (kret != kIOReturnSuccess) {
os_log(OS_LOG_DEFAULT, "Error: IODMACommand::Create failed! ret=0x%x\n", kret);
impl->user_mem.reset();
IOFree(impl, sizeof(*impl));
return ret;
}
uint64_t flags = 0;
uint32_t segmentsCount = 32;
IOAddressSegment segments[32];
kret = impl->dma_cmd->PrepareForDMA(kIODMACommandPrepareForDMANoOptions,
impl->user_mem.get(),
0,
0, // 0 for entire memory
&flags,
&segmentsCount,
segments
);
if (kret != kIOReturnSuccess) {
OSSafeReleaseNULL(impl->dma_cmd);
impl->user_mem.reset();
IOFree(impl, sizeof(*impl));
os_log(OS_LOG_DEFAULT, "Error: PrepareForDMA failed! ret=0x%x\n", kret);
return kret;
}
I allocated several 8K BGRA video frames, each with a size of 141557760 bytes, and prepared the DMA according to the method mentioned above. The process was successful when the number of frames was 15 or fewer. However, issues arose when allocating 16 frames:
Error: PrepareForDMA failed! ret=0xe00002bd
Doing the math: 16 such frames total 2,264,924,160 bytes, which is just over 2 GiB (2^31 bytes), while 15 frames (2,123,366,400 bytes) stay just under it. Is there a limitation in DriverKit that the total prepared DMA size cannot exceed 2 GB? Are there any methods that would allow me to bypass this restriction so I can use more video frame buffers?
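One workaround we are considering, sketched below (unverified; frameContext and its members are our own bookkeeping): keep only the frames that are actively in flight prepared for DMA, and complete each frame's IODMACommand as soon as the hardware is done with it, so the total concurrently mapped size stays under the apparent limit.
// Called when the hardware has finished with a frame: release its DMA mapping so the
// address space it occupied can be reused by the next frame.
void RetireFrame(frameContext *frame)
{
    frame->dma_cmd->CompleteDMA(kIODMACommandCompleteDMANoOptions);
    OSSafeReleaseNULL(frame->dma_cmd);
}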
I have been working on a multi-platform multi-touch HID-standard digitizer clickpad device.
The device uses Bluetooth Low Energy (BLE) as its connectivity transport and advertises HID over GATT. To date, I have the device working successfully on Windows 11 as a multi-touch, gesture-capable click pad with no custom driver or app on Windows.
However, I have been having difficulty getting macOS to recognize and react to it as a HID-standard multi-touch click pad digitizer with either the standard Apple HID driver (AppleUserHIDEventDriver) or with a custom-coded driver extension (DEXT) modeled, based on the DTS stylus example and looking at the IOHIDFamily open source driver(s).
The trackpad works with full-gesture support on Windows 11 and the descriptors seem to be compliant with the R23 Accessory Guidelines document, §15.
With the standard, matching Apple AppleUserHIDEventDriver HID driver, when enumerating using stock-standard HID mouse descriptors, the device works fine on macOS 14.7 "Sonoma" as a relative pointer device with scroll wheel capability (two finger swipe generates a HID scroll report) and a single button.
With the standard, matching Apple AppleUserHIDEventDriver HID driver, when enumerating using stock-standard HID digitizer click/touch pad descriptors (those same descriptors used successfully on Windows 11), the device does nothing. No button, no cursor, no gestures, nothing. Looking at ioreg -filtb, all of the key/value pairs for the driver match look correct.
Because, even with the Apple open source IOHIDFamily drivers noted above, we could get little visibility into what might be going wrong, I wrote a custom DriverKit/HIDDriverKit driver extension (DEXT), as noted above based on the DTS HID stylus example and the open source IOHIDEventDriver.
With that custom driver, I can get a single button click from the click pad to work by dispatching button events to dispatchRelativePointerEvent; however, when parsing, processing, and dispatching HID digitizer touch finger (that is, transducer) events via IOUserHIDEventService::dispatchDigitizerTouchEvent, nothing happens.
If I log with:
% sudo log stream --info --debug --predicate '(subsystem == "com.apple.iohid")'
either using the standard AppleUserHIDEventDriver driver or our custom driver, we can see that our input events reach the IOHIDNXEventTranslatorSessionFilter HID event filter, so we know HID events are getting from the device into the macOS HID stack. This was further confirmed with the DTS Bluetooth PacketLogger app. Consistent with these events flowing in and hitting IOHIDNXEventTranslatorSessionFilter, with either the standard AppleUserHIDEventDriver driver or our custom driver, clicks or click-pad activity will wake the display or system from sleep, and ongoing activity will keep the display or system from going to sleep.
In short, whether with the stock driver or our custom driver, HID input reports come in over Bluetooth and get processed successfully; however, nothing happens—no pointer movement or gesture recognition.
STEPS TO REPRODUCE
For the standard AppleUserHIDEventDriver:
Pair the device with macOS 14.7 "Sonoma" using the Bluetooth menu.
Confirm that it is paired / bonded / connected in the Bluetooth menu.
Attempt to click or move one or more fingers on the touchpad surface.
Nothing happens.
For our custom driver:
Pair the device with macOS 14.7 "Sonoma" using the Bluetooth menu.
Confirm that it is paired / bonded / connected in the Bluetooth menu.
Attempt to click or move one or more fingers on the touchpad surface.
Clicks are correctly registered. With transducer movement, regardless of the number of fingers, nothing happens.
I read that iPadOS supports DriverKit and, presumably, the same FTDI serial UARTs as macOS.
Has this been migrated to USB-C iPhones on iOS 18?
After some searching, the developer doc is not clear, and web responses are contradictory.
We are currently using it for a wired-sensor option of our Bluetooth HR sensor. When it is used in the wired configuration, the radios are turned off, which is important to some of our customers. Since Lightning MFi sensors are being discontinued with Apple phasing out Lightning, we would love to have an alternative for iOS.
-- Harald
We have a macOS application which interacts with our USB and PCI devices to perform SCSI and NVMe commands on them.
We use IOUSBHost, IOUSBLib and IOKitLib for USB interface and have created a custom driver to interact with PCI devices.
Is there a way we can implement a similar functionality for iOS as well if we connect the cards and readers using OTG?
Hello Everyone,
I encountered an issue with PCI memory access in DriverKit. In my case, BAR0 is not available, but BAR1 is ready for use. Here’s the log output:
!!! ERROR : Failed to get BAR0 info (error: 0xe00002f0). !!!
BAR1 - MemoryIndex: 0x00000000, Size: 0x00040000, Type: 0
Issue Description
When I initially wrote to BAR0 using memoryIndex = 0, it worked successfully:
AME_Address_Write_32(pAMEData, pAMEData->memoryIndex, AME_HOST_INT_MASK_REGISTER, 0x0F);
However, I mistakenly forgot to update memoryIndex to 1 for BAR1. Surprisingly, the write operation still succeeded.
When I fixed memoryIndex = 1 for BAR1, the write operation no longer had any effect. There was no error, but the expected behavior did not occur.
Relevant API (From IOPCIDevice.iig)
/*!
* @brief Writes a 32-bit value to the PCI device's aperture at a given memory index.
* @discussion This method writes a 32-bit register on the device and returns its value.
* @param memoryIndex An index into the array of ranges assigned to the device.
* @param offset An offset into the device's memory specified by the index.
* @param data A 32-bit value to be written in host byte order.
*/
void
MemoryWrite32(uint8_t memoryIndex,
uint64_t offset,
uint32_t data) LOCALONLY;
Log Output:
Writes to BAR0 (memoryIndex = 0)
AME_Address_Write_32() called
memoryIndex: 0, offset: 0x34, data: 0xf
Wrote data 0xF to offset 52
AME_Address_Write_32() called
memoryIndex: 0, offset: 0xa0, data: 0x1
Wrote data 0x1 to offset 160
AME_Address_Write_32() called
memoryIndex: 0, offset: 0x20, data: 0xffffffff
Wrote data 0xFFFFFFFF to offset 32
Writes to BAR1 (memoryIndex = 1) – No Response
AME_Address_Write_32() called
memoryIndex: 1, offset: 0x34, data: 0xf
No confirmation log, no visible effect.
Questions
What should memoryIndex be set to for BAR1?
The log shows "BAR1 - MemoryIndex: 0x00000000", but should I be using 1 instead?
How can I verify if a write operation to BAR1 is successful?
Is there a way to check if the memory region is actually writable?
Should I use MemoryRead32() to confirm the written value?
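The check we plan to add next, based on our reading of IOPCIDevice.iig (a sketch; ivars->pciDevice and the register offset come from our own code): ask GetBARInfo() for BAR1's memory index, use that returned index (rather than the BAR number) for the access, and read the register back to confirm the write landed.
uint8_t  memoryIndex = 0;
uint64_t barSize     = 0;
uint8_t  barType     = 0;
kern_return_t ret = ivars->pciDevice->GetBARInfo(1 /* BAR1 */, &memoryIndex, &barSize, &barType);
if (ret == kIOReturnSuccess) {
    // Use the memory index reported for BAR1 (which may legitimately be 0 when BAR0
    // has no aperture), then read back to verify the write took effect.
    ivars->pciDevice->MemoryWrite32(memoryIndex, AME_HOST_INT_MASK_REGISTER, 0x0F);
    uint32_t readBack = 0;
    ivars->pciDevice->MemoryRead32(memoryIndex, AME_HOST_INT_MASK_REGISTER, &readBack);
    os_log(OS_LOG_DEFAULT, "BAR1 memoryIndex=%u readBack=0x%x", memoryIndex, readBack);
}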
Any guidance would be greatly appreciated!
Best Regards,
Charles
Hi,
Trying to upgrade from macOS Sequoia 15.3.2 to 15.4 gives this error. Now I cannot boot from the internal drive of my M1 MacBook Air (2020), because the update restores 15.4, which does not boot. I can only boot from an external USB SSD that has macOS 15.3.2. How can I install 15.3.2 back onto the internal drive, or alternatively a newer macOS? I only have one Mac, this M1 Air (2020). I have AppleCare+ until 9/18/25.
The rest of the error is attached here:
Can't ignore lock validation @t8020dart.c:535
panic(cpu 2 caller 0x0): t8020dart 0xfffffdf053908000 (dart-dispext0): Can't ignore lock validation @t8020dart.c:535
Debugger message: panic
Memory ID: 0x6
OS release type: Not set yet
OS version: Not set yet
Kernel version: Darwin Kernel Version 24.4.0: Wed Mar 19 21:12:54 PDT 2025; root:xnu-11417.101.15~1/RELEASE_ARM64_T8103
Fileset Kernelcache UUID: 2BBA525B95E0E6B962ECC44FC093AB57
Kernel UUID: 4E6CBD31-CD1E-3939-8A63-211A206AFA66
Boot session UUID: 3A34167D-2534-402F-9679-2CAD45F67CDA
iBoot version: iBoot-11881.101.1
iBoot Stage 2 version: iBoot-11881.101.1
secure boot?: YES
I've made a dext and a user client that override IOUserSCSIPeripheralDeviceType00, with the goal of writing device firmware through the driver. I can gain and relinquish exclusive access to the device, and I can call UserReportMediumBlockSize and get back a sensible answer (512).
I can build command parameters with the INQUIRY macro from IOUserSCSIPeripheralDeviceHelper.h and send that command successfully using UserSendCDB, and I receive sensible-looking INQUIRY data from the device.
However, what I really want to do is send a WRITE BUFFER command (opcode 0x3B), and that doesn't work. I have yet to put a bus analyzer on it, but I don't think the command goes out on the bus: there's no valid sense data, and the error returned is 0xe00002bc, or kIOReturnError, which isn't helpful.
This is the code I have which doesn't work.
kern_return_t driver::writeChunk(const char * buf, size_t atOffset, size_t length, bool lastOne)
{
DebugMsg("writeChunk %p at %ld for %ld", buf, atOffset, length);
SCSIType00OutParameters outParameters;
SCSIType00InParameters response;
memset(&outParameters, 0, sizeof(outParameters));
memset(&response, 0, sizeof(response));
SetCommandCDB(&outParameters.fCommandDescriptorBlock,
0x3B, // byte 0, opcode WriteBuffer command
lastOne ? 0x0E : 0x0F, // byte 1 mode: E=save deferred, F = download and defer save
0, // byte 2 bufferID
(atOffset >> 16), // byte 3
(atOffset >> 8), // byte 4
atOffset, // byte 5
(length >> 16), // byte 6
(length >> 8), // byte 7
length, // byte 8
0, // control, byte 9
0, 0, 0, 0, 0, 0); // bytes 10..15
outParameters.fLogicalUnitNumber = 0;
outParameters.fBufferDirection = kIOMemoryDirectionOut;
outParameters.fDataTransferDirection = kSCSIDataTransfer_FromInitiatorToTarget;
outParameters.fTimeoutDuration = 1000; // milliseconds
outParameters.fRequestedByteCountOfTransfer = length;
outParameters.fDataBufferAddr = reinterpret_cast<uint64_t>(buf);
uint8_t senseBuffer[255] = {0};
outParameters.fSenseBufferAddr = reinterpret_cast<uint64_t>(senseBuffer);
outParameters.fSenseLengthRequested = sizeof(senseBuffer);
kern_return_t retVal = UserSendCDB(outParameters, &response);
return retVal;
}
We've developed a PCIDriverKit driver for a capture card on macOS and have identified an issue: with CreateMemoryDescriptorFromClient we can read data from user space into the driver, but we cannot write data back to the user through it.
typedef struct _MWCAP_EDID_DATA {
uint64_t size;
uint64_t uaddr;
} MWCAP_EDID_DATA;
// App
size_t xxx::GetEdid(void *buff, size_t size)
{
MWCAP_EDID_DATA edid;
edid.size = size;
edid.uaddr = (uint64_t)buff;
kr = IOConnectCallStructMethod(
connect,
kUserGetEdid,
&edid,
sizeof(MWCAP_EDID_DATA),
NULL,
NULL
);
// kr is 0, but the data in the buffer remains unchanged;
// it does not reflect the EDID copied from the DEXT.
return size;
}
// Driver
MWCAP_EDID_DATA *edid = (MWCAP_EDID_DATA *)input;
IOMemoryDescriptor *user_buf_mem = NULL;
IOAddressSegment segment;
segment.address = edid->uaddr;
segment.length = edid->size;
// We have verified that the values in edid->uaddr and edid->size are consistent with what was set by the application.
ret = CreateMemoryDescriptorFromClient(kIOMemoryDirectionOutIn, 1, &segment, &user_buf_mem);
if (ret != kIOReturnSuccess) {
os_log(OS_LOG_DEFAULT, "Failed to create memdesc with error: 0x%08x", ret);
break;
}
IOMemoryMap* user_buf_map = nullptr;
ret = user_buf_mem->CreateMapping(0, 0, 0, 0, 0, &user_buf_map);
if (ret != kIOReturnSuccess) {
os_log(OS_LOG_DEFAULT, "Failed to create mapping with error: 0x%08x", ret);
OSSafeReleaseNULL(user_buf_mem);
break;
}
// ... fill the user_buf_map with edid data ...
// For example:
// memcpy(user_buf_map->GetAddress(), source_edid_data, edid->size);
// At this point we have also verified the data in user_buf_map->GetAddress(), which matches our expectations.
OSSafeReleaseNULL(user_buf_map);
OSSafeReleaseNULL(user_buf_mem);
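For comparison, the alternative path we are weighing (a sketch, not what the code above does): return the EDID bytes through the method's structure output instead of writing into client-owned memory.
// App side: receive the EDID directly in the call's output struct.
uint8_t edidBuffer[256] = {0};
size_t  edidSize = sizeof(edidBuffer);
kr = IOConnectCallStructMethod(connect, kUserGetEdid,
                               NULL, 0,                 // no input struct in this variant
                               edidBuffer, &edidSize);

// Driver side (inside ExternalMethod): hand the bytes back via structureOutput.
// source_edid_data / edid_size stand in for the dext's own copy of the EDID.
arguments->structureOutput = OSData::withBytes(source_edid_data, edid_size);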
Please help take a look, thank you!
I want to create a DriverKit driver and send vendor-specific commands to the storage device.
Since UserClient is required to access the DriverKit driver from the app, I am attempting to implement it by referring to the following sample code.
[Communicating between a DriverKit extension and a client app]
I want to add com.apple.developer.driverkit.allow-any-userclient-access to the driver's entitlements, submit a Capability Request on the developer site, and create a provisioning profile, but I don't know how to get com.apple.developer.driverkit.allow-any-userclient-access enabled.
I am entering the following information in the “Request a System Extension or DriverKit Entitlement” form.
Which entitlement are you applying for? : DriverKit Entitlement
Which DriverKit entitlements do you need? : UserClient Access
UserClient Bundle IDs : [Bundle ID of MyDriver]
Describe your apps and how they’ll use these entitlements. : testing sample code
However, even if this request is accepted, I believe only MyDriver will be permitted. How can I grant access to all UserClients?