I’m requesting access to the Family Controls API for an iOS app currently in development. I’ve submitted the request through the official form here:
https://developer.apple.com/contact/request/family-controls-distribution
However, after submitting, I received no confirmation email or support ticket ID. The page only shows a "Thank you for requesting the API" message, and I'm left without a way to track or confirm the request.
This entitlement is essential for my app’s functionality, and I need to move forward with development and testing. Can someone from the Apple team please confirm receipt of the request and provide guidance on the next steps or estimated timelines?
Hello,
I previously submitted a question asking which components have accessibility APIs that trigger haptics for VoiceOver users: https://developer.apple.com/forums/thread/773182.
That question stems from a more direct one about specific components: are tablists and disclosures natively intended to include haptics, screen reader hints, or other states or properties that indicate to screen reader users where the component begins or ends?
In some web experiences, screen reader hint text such as "end of..." or "entering" is used to define the boundaries of these inline dialogs.
I asked about haptics in the prior thread because I do not recall a natively implemented version of this beyond some haptic cues, and I have not experienced those consistently, so I am not sure whether they are an intended native Swift implementation or something custom.
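For concreteness, this is the kind of custom approximation I have in mind, modeled on the web pattern (a minimal SwiftUI sketch; the hint strings are illustrative and not a native trait):

import SwiftUI

// Custom boundary hints for a disclosure, approximating the web pattern of
// "entering" / "end of" announcements. These are plain accessibilityHint
// strings, not a native begin/end property.
struct HintedDisclosure: View {
    @State private var expanded = false

    var body: some View {
        DisclosureGroup("Details", isExpanded: $expanded) {
            Text("First row")
                .accessibilityHint("Start of Details") // custom, illustrative
            Text("Middle row")
            Text("Last row")
                .accessibilityHint("End of Details") // custom, illustrative
        }
    }
}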
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
iOS
Accessibility
Sound and Haptics
Core Haptics
I've just received an email from Apple regarding Global Accessibility Awareness Day and some forthcoming sessions promoting their accessibility features.
What a joke.
For many years, Apple has refused to provide the most basic accessibility requirement on macOS:
LET USERS DISABLE ALL NON-CONSENSUAL UNSOLICITED ANIMATIONS AND OTHER UI CONVULSIONS.
The scourge of animations started with macOS Lion.
Fortunately, many of them can be disabled through obscure Terminal commands (that is, if the user is lucky enough to discover them on some obscure internet resource).
The "Reduce motion" control in System Settings is a fake option that doesn't do anything.
And there are two especially glaring accessibility violations that cannot be disabled at all:
Scroll bar rollover highlight effect, introduced in macOS 10.7.3. Every time you move the cursor over a scroll bar, the bar gets highlighted. This drags the user's attention to random scroll bars for no reason whatsoever, just because the cursor happened to pass over a bar at some point. HUNDREDS of unnecessary, annoying distractions daily!
Expand/collapse animation of NSOutlineView (such as when opening or closing a folder in the Finder's list view, or in any other app that uses outline views). It's extremely annoying, distracting, and time-wasting.
All feedback submitted about this over the years has been mostly ignored (except for a few cases where I received ridiculous replies from employees who are, apparently, barely familiar with Macs in general).
Apple does NOT care about accessibility. Not only that, but it's obvious that Apple is, in fact, intentionally abusing those users who can't tolerate distracting, time-wasting animations and UI convulsions.
I have a UITextField in my application for entering a state. If I tap on it, a UIPickerView pops up and lets the user select a state (though they can still type too).
The issue relates to Full Keyboard Access. If we select the UITextField using an external keyboard, the UIPickerView appears, but the user has to tab through the whole view controller before reaching the UIPickerView at the end.
What would be nice is to either a) move focus directly to the UIPickerView (have it highlighted in blue and scrollable with the keyboard right away) or b) make the UIPickerView the next view reachable when tabbing or using the arrow keys.
I've tried using:
UIAccessibility notifications (both .screenChanged and .layoutChanged, with and without a delay). This ended up only announcing the view, but didn't help with full keyboard access.
Making the UIPickerView a first responder when it appears.
Attempting to change the accessibilityElements order (but with so many views and views within views, this isn't really a viable option either).
Pressing Tab + → (Tab and the right arrow key) will quickly take the user to the end of the chain of accessibility elements, in other words, to the UIPickerView. But there has to be a cleaner way of automatically setting focus to the UIPickerView or making it the next element reached by the arrow key.
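One avenue I haven't fully explored (a sketch only, and I'm not sure Full Keyboard Access honors it): since every UIView participates in the UIKit focus system, explicitly requesting a focus update toward the picker might move focus directly:

import UIKit

// Sketch: after the picker appears, ask the focus system to move focus to it.
// Untested with Full Keyboard Access specifically; iOS 15+ API.
func moveKeyboardFocus(to pickerView: UIPickerView) {
    guard let focusSystem = UIFocusSystem.focusSystem(for: pickerView) else { return }
    focusSystem.requestFocusUpdate(to: pickerView)
    focusSystem.updateFocusIfNeeded()
}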
We have an Electron app developed for Mac. We would like to restore user data previously saved in Downloads when the user installs the app from the store and launches it for the first time. However, the Mac App Store has restrictions around the "com.apple.security.files.downloads.read-write" entitlement. We have enabled user access in the entitlements file and request the user's permission before access. What options can be used to automatically restore the data from Downloads?
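In case it helps frame the question, this is the native pattern we are considering bridging into Electron: a one-time user grant persisted as a security-scoped bookmark (a minimal Swift sketch; the bookmark key is illustrative, and the com.apple.security.files.bookmarks.app-scope entitlement is assumed):

import AppKit

// Sketch: ask the user to grant access to Downloads once, then persist
// that grant as a security-scoped bookmark for later launches.
func requestAndStoreDownloadsAccess() {
    let panel = NSOpenPanel()
    panel.canChooseDirectories = true
    panel.canChooseFiles = false
    panel.directoryURL = FileManager.default.urls(for: .downloadsDirectory,
                                                  in: .userDomainMask).first
    guard panel.runModal() == .OK, let url = panel.url else { return }
    if let bookmark = try? url.bookmarkData(options: .withSecurityScope,
                                            includingResourceValuesForKeys: nil,
                                            relativeTo: nil) {
        UserDefaults.standard.set(bookmark, forKey: "downloadsBookmark") // key illustrative
    }
}

// On later launches, resolve the bookmark with
// URL(resolvingBookmarkData:options:relativeTo:bookmarkDataIsStale:)
// and wrap file access in start/stopAccessingSecurityScopedResource().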
Topic:
Accessibility & Inclusion
SubTopic:
General
Hi everyone,
My team and I are developing an accessibility-focused visionOS app (MindTap) as part of a university project, aiming to support individuals with Locked-In Syndrome using Brain-Computer Interface (BCI) signals to trigger interactions (e.g., tapping) within the Apple Vision Pro environment.
Problem 1: Simulating Eye Tracking in Simulator
We are testing onHover with Send pointer to the device under I/O > Input in the simulator, and while it mostly works (a bit laggy), we found that onHover won't function on the actual Vision Pro hardware. From what I understand, we should be using FocusState for proper gaze interaction, but testing this requires the physical device. Is there any workaround or official Apple-recommended way to simulate Focus-based gaze detection without a real Vision Pro?
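For reference, this is roughly the FocusState pattern we plan to verify on hardware (a sketch based on our current understanding; we haven't confirmed that gaze drives focus this way on device):

import SwiftUI

// Sketch: a focusable target that reacts when the focus system reaches it,
// as an alternative to onHover.
struct GazeTargetButton: View {
    @FocusState private var isFocused: Bool

    var body: some View {
        Button("Tap target") {
            // In our app, this action is fired by the WebSocket "1" signal.
        }
        .focusable(true)
        .focused($isFocused)
        .scaleEffect(isFocused ? 1.1 : 1.0) // visual feedback while focused
    }
}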
Problem 2: WebSocket-triggered "Click" doesn't work outside the app
We successfully use WebSocket to send a custom signal (a "1" from the brain signal device) to trigger an action inside our app. However, when the user opens a third-party app like Apple News, the WebSocket-triggered "click" no longer works.
We suspect this is due to sandbox restrictions or lack of system-level permissions.
Is it possible in any way to:
Trigger interaction events outside the app using custom input (like BCI via WebSocket)?
Access system-wide click/tap simulation APIs from within visionOS apps?
Integrate this with accessibility services (like Voice Control or AssistiveTouch)?
We'd appreciate any official guidance or tips from others building similar accessibility apps with alternative input methods on visionOS.
Thanks in advance for any insight you can provide!
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
Xcode
Accessibility
iPad and iOS apps on visionOS
Issue:
When using the shortcut Command + Delete to clear a line of text, the next character I type in Thai unexpectedly appears as an English character, even though the input source is still set to Thai. After that, subsequent characters return to Thai as expected.
Details:
Affected apps: Notes, Messages, and some other native apps
Not affected: Browser text fields (Safari, Chrome, etc.)
Does not occur when using Option + Delete or just Delete
macOS [insert beta version + build number]
Mac model: [insert model]
Input sources: Thai – Kedmanee, English – U.S.
Steps to reproduce:
Open Notes (or Messages).
Switch to Thai input.
Type a few Thai words.
Press Command + Delete.
Type again — the first character shows up in English.
Expected:
First character should remain in Thai, consistent with the active input source.
Actual:
First character shows as English, then input switches back to Thai.
Topic:
Accessibility & Inclusion
SubTopic:
General
I would like to ask what the new trackpad shortcuts are in the latest iPadOS when using a trackpad. The three-finger swipes to the right and left seem to have been removed. Thank you.
I’m working on a macOS Accessibility setup for a French-speaking user and I’ve hit a wall. (I'm not a developer; I'm trying to help my kid with dyslexia.)
I successfully built a custom word prediction panel using the Panel Editor (Keyboard) in macOS Accessibility > Keyboard > Accessibility Keyboard.
Here’s what I have so far:
• The prediction panel works system-wide: I can use it to type in Finder, Safari, Notes, TextEdit, and even browser search bars.
• The panel appears above all applications and suggestions show up correctly.
• However, it does not work inside Google Docs (tested in Chrome, Safari, and Firefox). Selecting a word from the panel does nothing in the Docs editor.
I suspect this is because:
• Google Docs does not use a standard macOS text input field.
• Docs is a web app that relies on custom JavaScript editors, contentEditable elements, and canvas rendering, so macOS Accessibility APIs (AXTextField, AXInsertText, etc.) don’t register or inject text events.
• Accessibility tools like the Accessibility Keyboard rely on native macOS text input methods, which don’t hook into Google Docs’ custom editor.
Important:
I’m not a programmer. I’d like to know if there is an easy fix or option in macOS, Google Chrome, or Google Docs that would make my custom prediction panel work, before going into custom development.
Technical setup:
• MacBook Air (M2, 2022)
• RAM: 8 GB
• macOS: Sequoia 15.3.1
• Language: French (system and keyboard)
• Accessibility Keyboard: Enabled via Settings > Accessibility > Keyboard
• Custom panel: Built using Panel Editor (Keyboard), named “Philemon Prédiction”
• Browsers tested: Chrome, Safari, Firefox (same issue)
• Behavior: Panel is visible, suggestions appear, but inserting text does nothing in Google Docs
Has anyone worked around this limitation? Is there a simple setting, workaround, or accessibility option to bridge macOS Accessibility input with Google Docs’ editor?
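For any developer reading this: my understanding is that one possible bridge is synthesizing key events at the HID level, which web apps like Docs do receive, unlike AX text insertion. A hypothetical, untested Swift sketch (the posting app would need to be trusted under Privacy & Security > Accessibility):

import Foundation
import CoreGraphics

// Hypothetical sketch: type text into whatever app currently has keyboard
// focus by posting synthesized HID key events. BMP characters only here,
// which covers French.
func typeText(_ text: String) {
    for scalar in text.unicodeScalars where scalar.value <= 0xFFFF {
        var code = UniChar(scalar.value)
        let keyDown = CGEvent(keyboardEventSource: nil, virtualKey: 0, keyDown: true)
        keyDown?.keyboardSetUnicodeString(stringLength: 1, unicodeString: &code)
        keyDown?.post(tap: .cghidEventTap)
        let keyUp = CGEvent(keyboardEventSource: nil, virtualKey: 0, keyDown: false)
        keyUp?.keyboardSetUnicodeString(stringLength: 1, unicodeString: &code)
        keyUp?.post(tap: .cghidEventTap)
    }
}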
Thanks a lot!
Topic:
Accessibility & Inclusion
SubTopic:
General
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see my Stack Overflow post).
Working Implementation with Static Audio File:
let audioSource = SCNAudioSource(fileNamed: "sound.wav")! // file name illustrative
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    // Audio generation code
    return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating some procedural audio with an AVAudioNode, as documented here:
Apple docs
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
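For reference, this is the kind of direct AVAudioEngine setup that does produce sound for me (a sine-wave sketch), which is why I suspect the SceneKit bridging rather than the source node itself:

import AVFoundation

// A 440 Hz sine wave through AVAudioSourceNode, connected straight to the
// engine's mixer. This path produces audio; routing the same node through
// SCNAudioPlayer does not.
let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
var phase = 0.0

let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2.0 * .pi * 440.0 / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += increment
        if phase >= 2.0 * .pi { phase -= 2.0 * .pi }
        for buffer in buffers {
            UnsafeMutableBufferPointer<Float>(buffer)[frame] = sample
        }
    }
    return noErr
}

engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode, format: nil)
try? engine.start() // error handling elided in this sketch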
I need to understand the different layers that are there in the iPhone X and later OLED screens as I am designing a hardware attachment. They seem to be projecting letters and images from a different layer than the subpixel layer. Is this proprietary information, or is there a resource that explores them?
Topic:
Accessibility & Inclusion
SubTopic:
General
Hi everyone,
I’m developing a React Native iOS app that includes a custom keyboard extension for sending stickers across apps. The project builds successfully, and the main app installs fine on my test device.
However, I’m not seeing the keyboard extension appear under Settings → General → Keyboard → Keyboards → Add New Keyboard, which means I can’t activate it or grant access. At this point, I’m not even sure if the extension is actually being installed on the device along with the main app.
Here’s what I’ve done so far. I created a Keyboard Extension target in Xcode, set the correct bundle identifiers and provisioning profiles, and enabled “Requests Open Access” in the extension’s Info.plist. I built and installed the app on a physical device rather than the simulator to ensure proper testing.
My main questions are: how can I confirm that the extension is being installed on the device, and if it isn’t, what might prevent it from installing even though the build completes successfully?
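For comparison, here is roughly what I understand the extension target's Info.plist needs for iOS to recognize it as a keyboard (values illustrative; this mirrors my current setup):

<key>NSExtension</key>
<dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.keyboard-service</string>
    <key>NSExtensionPrincipalClass</key>
    <string>$(PRODUCT_MODULE_NAME).KeyboardViewController</string>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>IsASCIICapable</key>
        <false/>
        <key>PrimaryLanguage</key>
        <string>en-US</string>
        <key>RequestsOpenAccess</key>
        <true/>
    </dict>
</dict>

Separately, inspecting the built .app bundle for a PlugIns folder containing the .appex is one way I know of to confirm whether the extension is embedded at all.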
Any insights, troubleshooting steps, or guidance would be greatly appreciated.
I am encountering the following issue while working with app group preferences in my Safari web extension:
Couldn't read values in CFPrefsPlistSource<0x3034e7f80> (Domain: [MyAppGroup], User: kCFPreferencesAnyUser, ByHost: Yes, Container: (null), Contents Need Refresh: Yes): Using kCFPreferencesAnyUser with a container is only allowed for System Containers, detaching from cfprefsd.
I am trying to read/write shared preferences using UserDefaults with an App Group but keep running into this error. Any guidance on how to resolve this would be greatly appreciated!
Has anyone encountered this before? How can I properly configure my app group preferences to avoid this issue?
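For context, the access pattern I'm attempting looks like this (a minimal sketch; the group ID is illustrative):

import Foundation

// Shared defaults through the App Group container. Both the host app and the
// Safari web extension must list this group in their App Groups entitlement.
let shared = UserDefaults(suiteName: "group.com.example.shared") // ID illustrative
shared?.set(true, forKey: "featureEnabled")
let enabled = shared?.bool(forKey: "featureEnabled") ?? false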
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
Swift Packages
Messages
Xcode
Group Activities
Updated to the iOS 26 beta and now the TV Remote app in Control Center won't open. I've tried the following:
Restart phone
Remove shortcut and re-add
Can't find any other troubleshooting methods for this issue online, so I'm guessing it's a new problem.
Topic:
Accessibility & Inclusion
SubTopic:
General
Hi everybody,
I'm trying to build a QR code scanner and generator app for iOS.
Whenever I try to implement the camera the app crashes with this comment:
This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSCameraUsageDescription key with a string value explaining to the user how the app uses this data.
I tried reducing the app to nothing but the camera, with the same result.
Any ideas?
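The crash message itself points at the fix: adding a camera usage string to the app target's Info.plist, for example:

<key>NSCameraUsageDescription</key>
<string>The camera is used to scan QR codes.</string>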
Thank you and
best regards,
Horst Schippers
Topic:
Accessibility & Inclusion
SubTopic:
General
We have an app under development which allows musicians to unlock the contact details of people who posted about an upcoming event. The musician pays a fee to unlock these contact details.
Both the musician & the post owner are registered users. We will reveal the same contact info that the post owner used for account signup verification.
Questions:
Is this allowed? (We obtain consent to share contact info with other users and clearly state this in our privacy policy.)
If yes, will we have to use App Store in-app purchase to facilitate this transaction, or are we free to use a payment processor such as Stripe?
Topic:
Accessibility & Inclusion
SubTopic:
General
Hi!
I'm working on an application where I'd like VoiceOver to give each element of a tab bar the "Tab" trait. I'm testing this using the Accessibility Inspector. Essentially, I'd like to replicate the behavior of how Safari identifies each of its tabs as a "Tab" (I've attached a photo below).
How exactly is this accomplished? I've tried using the .isTabBar trait to designate the child objects as "Tabs", but this doesn't seem to be working and I've struggled to find documentation about this. For additional context, these child items are Buttons, and I would like to have the .isButton trait essentially replaced by something like an .isTab trait. Not sure if this is actually possible or not, but curious how the Accessibility Inspector recognizes this in Safari.
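For reference, the closest approximation I've found is applying the trait to the container element rather than each child (a sketch; I'm not certain this matches what Safari does):

import SwiftUI

// Sketch: mark the container as a tab bar and convey selection per child.
// Whether this yields the exact "Tab" trait Safari shows is unconfirmed.
struct CustomTabBar: View {
    @State private var selection = 0
    private let titles = ["Home", "Search", "Profile"]

    var body: some View {
        HStack {
            ForEach(titles.indices, id: \.self) { index in
                Button(titles[index]) { selection = index }
                    .accessibilityAddTraits(selection == index ? .isSelected : [])
            }
        }
        .accessibilityElement(children: .contain)
        .accessibilityAddTraits(.isTabBar) // container-level trait
    }
}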
As part of our Apple Pay implementation, we are trying to create a merchant session by connecting to the Apple endpoint https://apple-pay-gateway-cert.apple.com/paymentservices/startSession.
While trying to do so, we are facing the error "An error occurred while sending the request. The request was aborted: Could not create SSL/TLS secure channel."
I call the validation URL by passing it to a C# .NET Framework 4.8 Web API. The API sets up an HttpClient with the Merchant Identity Validation Certificate found in my Apple account and calls the validation URL, passing in the required JSON validation object. When I call PostAsync() I get an exception with the above error message.
The code works successfully on my local machine, but we face this issue when deployed to the Dev/Model environments for testing.
We use Azure App Service for deployment, and TLS 1.2 is already enabled there.
We have used the Merchant Identity certificate that was issued and have also checked with the networking and infrastructure team to make sure it's not an issue on our side.
Does anyone have any other ideas about what could be causing this error?
Thank you,
Supriya
Topic:
Accessibility & Inclusion
SubTopic:
General
I have subscribed to the developer program, but it’s already been a day and it still shows “is not enrolled in the Apple Developer Program.”
Topic:
Accessibility & Inclusion
SubTopic:
General
When using an app via an external keyboard, FocusState and .focused used to work just fine until iOS 17. Vertical-axis text fields were also accessible without any issues. But after the iOS 18 update, adding the focused modifier removes elements from the external keyboard's focus order.
One such example: when a button using the focused modifier and @FocusState is inside a ScrollView, and that view is opened via a NavigationLink, the button is not accessible via a Bluetooth (external) keyboard.
TextEditor / vertical-axis TextFields also seem to be affected in the external keyboard focus order when placed inside a ScrollView.
Is this a known iOS 18 issue with ScrollView? Any tip to get this fixed?
Sample code that can reproduce this issue:
struct ContentView: View {
    @State private var showBottomSheet: Bool = false
    @State private var goToNextView: Bool = false
    @FocusState private var focused: Bool
    @AccessibilityFocusState private var voFocused: Bool

    var body: some View {
        NavigationView {
            VStack {
                Text("Hello, world!")

                // This button works fine with a Bluetooth keyboard in all versions
                Button("Trigger a bottomsheet") {
                    showBottomSheet = true
                }
                .focused($focused)
                .accessibilityFocused($voFocused)

                Button("Goto another view") {
                    goToNextView = true
                }

                NavigationLink(
                    destination: View2(),
                    isActive: $goToNextView
                ) { EmptyView() }
                    .accessibility(hidden: true)
            }
            .sheet(isPresented: $showBottomSheet,
                   onDismiss: {
                       focused = true
                       voFocused = true
                   }, content: {
                       VStack() {
                           Text("Hello World ! I'm in a bottomsheet")
                           Button("Close me") {
                               showBottomSheet = false
                           }
                       }
                   })
            .padding()
        }
    }
}

#Preview {
    ContentView()
}

struct View2: View {
    @FocusState private var focused: Bool
    @AccessibilityFocusState private var voFocused: Bool
    @State private var showBottomSheet: Bool = false

    var body: some View {
        ScrollView {
            VStack {
                Text("check")

                // In iOS 18, this button doesn't get focused with a Bluetooth / external keyboard.
                // The issue occurs when these 3 combine in iOS 18: a button using FocusState,
                // inside a view that has a ScrollView, opened via NavigationLink.
                Button("Trigger a bottomsheet") {
                    showBottomSheet = true
                }
                .focused($focused)
                .accessibilityFocused($voFocused)

                Button("Test button") { }
            }
            .sheet(isPresented: $showBottomSheet,
                   onDismiss: {
                       focused = true
                       voFocused = true
                   }, content: {
                       VStack() {
                           Text("Hello World ! I'm in a bottomsheet")
                           Button("Close me") {
                               showBottomSheet = false
                           }
                       }
                   })
            .padding()
        }
    }
}