Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post | Replies | Boosts | Views | Activity

Problems recording audio on Tahoe 26.0 (Intel only)
I have some tried-and-tested code that records and plays back audio via AUHAL which breaks on Tahoe on Intel. The same code works fine on Sequoia and also works on Tahoe on Apple Silicon. To start with something simple, the following code to request access to the microphone doesn't work as it should:

bool RequestMicrophoneAccess ()
{
    __block AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType: AVMediaTypeAudio];
    if (status == AVAuthorizationStatusAuthorized)
        return true;

    __block bool done = false;
    [AVCaptureDevice requestAccessForMediaType: AVMediaTypeAudio completionHandler: ^ (BOOL granted)
    {
        status = (granted) ? AVAuthorizationStatusAuthorized : AVAuthorizationStatusDenied;
        done = true;
    }];

    while (!done)
        CFRunLoopRunInMode (kCFRunLoopDefaultMode, 2.0, true);

    return status == AVAuthorizationStatusAuthorized;
}

On Tahoe on Intel, the code runs to completion, but granted always comes back as NO. Tellingly, the popup asking the user to grant microphone access is never displayed, even though the app is not listed in the Privacy pane and never appears there. On Apple Silicon, everything works fine. There are some other problems, but I'm hoping they have a common underlying cause and that the Apple guys can figure out what's wrong from the information in this post. I'd be happy to test any potential fix. Thanks.
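For comparison, here is a minimal Swift sketch (not from the original post) that makes the same request but blocks on a dispatch semaphore instead of manually spinning the run loop; behaviour should be equivalent, and it assumes NSMicrophoneUsageDescription is present in Info.plist.

```swift
import AVFoundation

// Sketch: same authorization flow, waiting on a semaphore instead of the run loop.
// Assumes NSMicrophoneUsageDescription is set in Info.plist.
func requestMicrophoneAccess() -> Bool {
    let status = AVCaptureDevice.authorizationStatus(for: .audio)
    if status == .authorized { return true }
    if status != .notDetermined { return false }   // .denied / .restricted: no prompt will appear

    let semaphore = DispatchSemaphore(value: 0)
    var granted = false
    AVCaptureDevice.requestAccess(for: .audio) { ok in
        granted = ok
        semaphore.signal()
    }
    semaphore.wait()                               // don't block the main thread with this
    return granted
}
```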
2
0
436
Oct ’25
How to synchronize the clock sources of two audio devices
I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
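As a starting point, a Core Audio device exposes its clock domain through kAudioDevicePropertyClockDomain; a minimal sketch for reading it from the hardware device is below. Having the virtual device report the same domain would be done in its AudioServerPlugIn driver, which is an assumption about your setup; when the devices genuinely run on different clocks, resampling in the render path (e.g. with AVAudioConverter) is the usual way to absorb drift.

```swift
import CoreAudio

// Reads the clock domain of an audio device. Devices that report the same,
// non-zero clock domain are driven by the same hardware clock; 0 means unknown.
func clockDomain(of deviceID: AudioObjectID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyClockDomain,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var domain: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &domain)
    return status == noErr ? domain : nil
}
```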
2
0
183
May ’25
MusicKit: Best way to check if all tracks of albums are added to library.
I prefer to use the album fetched from the library instead of the catalog since this is faster. If doing so, how can I check whether all tracks of an album have been added to the library? In that case I'd like to fetch the catalog version or throw an error (for example when offline). Using .with(.tracks) on the library album fetches only the tracks added to the library. The trackCount property refers to the tracks that can be fetched from the library. The isComplete property is always nil when fetching from the library. One possible way is checking the trackNumber and discCount properties. However, this only detects that not all tracks of an album are in the library if a missing song comes before one that is present. I'd like to be able to handle this edge case as well. Is there currently a way to do this? I'd prefer not to rely on the Apple Music catalog for this since it is supposed to work offline as well. Fetching and storing all trackIDs when connected and later comparing against these would work, but this would potentially mean storing tens of thousands of track IDs. Thank you
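For reference, a minimal sketch of the trackNumber heuristic described above, with its stated limitation (missing trailing tracks go undetected) and a single-disc simplification; the property names used here (tracks, trackNumber) are the standard MusicKit ones.

```swift
import MusicKit

// Heuristic sketch: treat a library album as "complete" if its library tracks form a
// contiguous 1...n run of track numbers. Single-disc simplification; missing tracks
// at the end of the album cannot be detected this way.
func libraryAlbumLooksComplete(_ album: Album) async throws -> Bool {
    let detailed = try await album.with(.tracks)            // library tracks only
    guard let tracks = detailed.tracks, !tracks.isEmpty else { return false }
    let numbers = tracks.compactMap { $0.trackNumber }.sorted()
    guard numbers.count == tracks.count else { return false }
    return numbers == Array(1...numbers.count)
}
```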
0
0
76
Mar ’25
App Randomly Crashes During Continuous Sound Playback Using AVAudioPlayer
Environment: Device: iPad 10th generation, OS: iOS 18.3.2. We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently, roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:

var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else { return }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}

The issue is that under high-frequency tapping (especially around 0.1–0.15 s intervals), the app occasionally crashes. The crash does not occur every time; it happens randomly, sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping. Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g. 0.15 s, 0.18 s) still result in occasional crashes. My questions are: Is this expected behavior from AVAudioPlayer or AVAudioSession? Could this be a known issue or a limitation in AVFoundation? Is there any documentation or guidance on handling frequent sound playback safely? Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
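Not an official answer, but a common pattern for rapid-fire UI sounds is to create each player (and configure the audio session) once up front and only call play() per tap, rather than allocating a new AVAudioPlayer on every press; a minimal sketch, with SoundBank and its resource list being illustrative names:

```swift
import AVFoundation

// Sketch: build each player once and reuse it per tap, instead of creating a new
// AVAudioPlayer and reconfiguring the session on every button press.
final class SoundBank {
    private var players: [String: AVAudioPlayer] = [:]

    init(resources: [(name: String, ext: String)]) throws {
        try AVAudioSession.sharedInstance().setCategory(.playback)
        for item in resources {
            guard let url = Bundle.main.url(forResource: item.name, withExtension: item.ext) else { continue }
            let player = try AVAudioPlayer(contentsOf: url)
            player.prepareToPlay()
            players[item.name] = player
        }
    }

    func play(_ name: String) {
        guard let player = players[name] else { return }
        player.currentTime = 0      // restart the sound from the beginning on each tap
        player.play()
    }
}
```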
0
0
168
May ’25
AVAudioUnitSampler Bug with Consolidated Audio Files
Hello, I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions in the same audio file (monolith/concatenated samples approach). Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with actual audio data.

The Problem
Setup:
- Single audio file (monolith) containing multiple concatenated samples
- Multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file
- All zones load successfully without errors

Expected Behavior: All zones should play their respective audio regions immediately from the first sample.

Actual Behavior:
- Last zone in the zone list: works perfectly, plays audio immediately
- All other zones: output [0, 0, 0, 0, ..., audio_data] instead of [real_audio_data]

The number of zeros varies from event to event for each zone. It can be a couple of samples (<30) up to several buffers. After the initial zeros, the correct audio plays normally, so there is no shift in audio playback, just missing samples at the beginning.

Minimal Reproduction
1. Create a test monolith audio file: a single WAV file with 3 concatenated 1-second samples (44.1 kHz):
- Sample 1: frames 0–44099 (constant amplitude 0.3)
- Sample 2: frames 44100–88199 (constant amplitude 0.6)
- Sample 3: frames 88200–132299 (constant amplitude 0.9)

2. Create a test preset: an .aupreset with 3 zones all referencing the same file (pseudo code):
<Zone array>
  <zone 1> start sample: 0, end sample: 44099, note: 60, waveform: ref_to_monolith.wav;
  <zone 2> start sample: 44100, end sample: 88199, note: 62, waveform: ref_to_monolith.wav;
  <zone 3> start sample: 88200, end sample: 132299, note: 64, waveform: ref_to_monolith.wav;
</Zone array>

3. Load and test:
// Load preset into AVAudioUnitSampler
let sampler = AVAudioUnitSampler()
try sampler.loadAudioFiles(from: presetURL)
// Play each zone (MIDI notes C4=60, D4=62, E4=64)
sampler.startNote(60, withVelocity: 64, onChannel: 0) // Zone 1
sampler.startNote(62, withVelocity: 64, onChannel: 0) // Zone 2
sampler.startNote(64, withVelocity: 64, onChannel: 0) // Zone 3

4. Observed result:
- Zone 1 (C4): [0, 0, 0, ..., 0.3, 0.3, 0.3] ❌ zeros at beginning
- Zone 2 (D4): [0, 0, 0, ..., 0.6, 0.6, 0.6] ❌ zeros at beginning
- Zone 3 (E4): [0.9, 0.9, 0.9, ...] ✅ works correctly (last zone)

What I've Extensively Tested

What DOES work:
- Separate files per zone: each zone references its own individual audio file, and all zones play correctly without zeros. Problem: not viable for iOS apps with 500+ sample libraries due to file handle limitations.

What DOESN'T work (all tested):
1. Different audio formats: CAF (Float32 PCM, Int16 PCM, both interleaved and non-interleaved), M4A (AAC compressed), WAV (uncompressed), SF2 (SoundFont 2). Bug persists across all formats.
2. CAF region chunks: created CAF files with embedded region chunks defining zone boundaries and set zones with no sampleStart/sampleEnd in the preset (nil values). AVAudioUnitSampler completely ignores the CAF region metadata. Bug persists.
3. Unique waveform IDs: gave each zone a unique waveform ID (268435456, 268435457, 268435458), each with its own file reference entry (all pointing to the same physical file). Hypothesized this might trigger separate buffer initialization. Bug persists, no improvement.
4. Different sample rates: tested 44.1 kHz, 48 kHz, 96 kHz. Bug occurs at all sample rates.
5. Mono vs stereo: bug occurs with both mono and stereo files.

Environment
- macOS: Sonoma 14.x (tested across multiple minor versions)
- iOS: tested on iOS 17.x with the same results
- Xcode: 16.x
- Frameworks: AVFoundation, AudioToolbox
- Reproducibility: 100% reproducible with the setup described above

Impact & Use Case
This bug severely impacts professional music applications that need:
- Small file sizes: monolith files allow sharing compressed audio data (AAC/M4A)
- iOS file handle limits: opening 400+ individual sample files is not viable on iOS
- Performance: single-file loading is much faster than hundreds of individual files
- Standard industry practice: monolith/concatenated samples are used by EXS24, Kontakt, and most professional samplers

Current impact: cannot use monolith files with AVAudioUnitSampler on iOS; forced to choose between unusable audio (zeros at the start) or hitting iOS file limits; no viable workaround exists.

Root Cause Hypothesis
The bug appears to be in AVAudioUnitSampler's internal buffer initialization when multiple zones share the same source audio file and each zone specifies different sampleStart/sampleEnd offsets. Key observation: the last zone in the zone array always works correctly. This is NOT related to:
- File permissions or security-scoped resources (separate files work fine)
- Audio codec issues (happens with uncompressed PCM too)
- Preset parsing (the preset loads correctly, all zones are valid)

Questions
- Is this a known issue? I couldn't find any documentation, bug reports, or discussions about this.
- Is there ANY workaround that allows monolith files to work with AVAudioUnitSampler?
- Alternative APIs? Is there a different API or approach for iOS that properly supports monolith sample files?
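For anyone trying to reproduce this, one way to inspect the very first rendered samples deterministically is AVAudioEngine's offline manual rendering mode; a minimal sketch follows, where presetURL is a placeholder for the .aupreset described above.

```swift
import AVFoundation

// Sketch: render the first buffer offline so the leading samples of a note can be
// inspected deterministically. `presetURL` is a placeholder for the test preset.
func dumpFirstSamples(presetURL: URL) throws {
    let engine = AVAudioEngine()
    let sampler = AVAudioUnitSampler()
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)

    let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try sampler.loadInstrument(at: presetURL)
    try engine.start()

    sampler.startNote(60, withVelocity: 64, onChannel: 0)   // Zone 1

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    _ = try engine.renderOffline(4096, to: buffer)
    let firstSamples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count: 64))
    print(firstSamples)   // expect ~0.3 immediately; leading zeros indicate the bug
}
```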
0
0
299
2w
tvOS AVQueuePlayer Now Playing Info in Control Center?
I have a music app I'm developing and I'm hitting a weird issue where I can see Now Playing info on every platform except tvOS. As far as I can tell I have correctly configured the MPNowPlayingInfoCenter:

MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing

Are there any extra requirements to get my app's Now Playing info showing in Control Center on tvOS? Another strange issue that might be related: I can use the Apple TV remote to pause audio but not resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
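Not a confirmed answer, but the "pause works, resume doesn't" symptom often points at missing remote command handlers; a minimal sketch of registering them alongside the now-playing info, where `player` is a placeholder for whatever AVQueuePlayer the app already owns:

```swift
import AVFoundation
import MediaPlayer

// Sketch: register play/pause/toggle handlers with MPRemoteCommandCenter so the
// system has something to call when the remote asks the app to resume playback.
func configureRemoteCommands(for player: AVQueuePlayer) {
    let center = MPRemoteCommandCenter.shared()
    center.playCommand.addTarget { _ in player.play(); return .success }
    center.pauseCommand.addTarget { _ in player.pause(); return .success }
    center.togglePlayPauseCommand.addTarget { _ in
        player.rate == 0 ? player.play() : player.pause()
        return .success
    }
}
```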
0
0
98
Jun ’25
Sound not working on testflight / Appstore
I have a Flutter iOS app that has some simple sound FX for button clicks, swipes, etc. In the simulator and on a real device the sound works fine, but when I upload the app to TestFlight (and the App Store) the sound FX don't play. When I install the app on my phone via Xcode I am using the release profile, so I don't see what the difference could be. I have also gone through the archive that I uploaded and verified that the sound files are indeed there. I have other Flutter apps that use sound, but none since the iOS 26 update. I've tried 3 different Flutter sound libraries and all face the same issue. Wondering if anyone else is seeing this issue, or if I'm missing a simple permission or something that has changed recently? Thanks in advance
2
0
175
4d
Essentials of macOS to read and write mp3 and mp4 audio files
Hi, On macOS I used to open MP3 and MP4 files with ExtAudioFile. For a few years now this hasn't worked anymore, so I decided to try a different macOS API using the AudioFileID from the AudioToolbox framework. I wrote a test: https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588 It fails right at AudioFileOpenWithCallbacks(), returning OSStatus error 1954115647, which means kAudioFileUnsupportedFileTypeError. The filename was set to an MP4 file: ~/Music/test.mp4 How can I fix this? regards, Joël
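One thing worth checking: when AudioFile has no file extension to go on (typical with the callback API), passing a nonzero AudioFileTypeID hint sometimes resolves kAudioFileUnsupportedFileTypeError. A minimal sketch, shown with AudioFileOpenURL for brevity and assuming the file really is an MPEG-4 container; AudioFileOpenWithCallbacks takes the same hint as its second-to-last parameter:

```swift
import AudioToolbox

// Sketch: open an MPEG-4 audio file with an explicit file type hint.
// Use kAudioFileM4AType for .m4a, kAudioFileMP3Type for .mp3.
let path = NSString(string: "~/Music/test.mp4").expandingTildeInPath
let url = URL(fileURLWithPath: path) as CFURL
var fileID: AudioFileID?
let status = AudioFileOpenURL(url, .readPermission, kAudioFileMPEG4Type, &fileID)
print(status == noErr ? "opened" : "error \(status)")
```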
1
0
127
Jun ’25
Improving Speech Analyzer Transcription for technical terms
I am developing an app with transcription and I am exploring ways to improve the transcription from the SpeechAnalyzer/Transcriber for technical terms. SFSpeech... recognition had the capability of being augmented by contextualStrings. Does something similar exist for SpeechAnalyzer/Transcriber? If so please point me towards the documentation and any sample code that may exist for this. If there are other options, please let me know.
1
0
233
Sep ’25
Switching default input/output channels using Core Audio
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio MIDI Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing?

func setDefaultChannelsOutput() {
    guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return }
    let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem
    if selectedIndex < 0 || selectedIndex >= 24 { return }
    let channel1 = UInt32(selectedIndex * 2 + 1)
    let channel2 = UInt32(selectedIndex * 2 + 2)
    var channels: [UInt32] = [channel1, channel2]
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementWildcard
    )
    let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels)
    if status != noErr {
        print("Error setting default output channels: \(status)")
    }
}
0
0
248
2w
Potential Documentation Error in kAudioAggregateDevicePropertyTapList Sample Code
Hi, I believe I've found a potential error in the sample code on the documentation page for creating and using a process tap with an aggregate device. The issue is in the section explaining how to add a tap to the aggregate device. I have already filed a Feedback Assistant ticket on this (ID: FB17411663) but haven't heard back for months.

Capturing system audio with Core Audio taps

The sample code for modifying kAudioAggregateDevicePropertyTapList incorrectly uses the tapID as the target AudioObjectID when calling AudioObjectSetPropertyData.

// (Code to get the list and potentially modify listAsArray)
if var listAsArray = list as? [CFString] {
    // ... (modification logic) ...
    // Set the list back on the aggregate device. <--- The comment is correct
    list = listAsArray as CFArray
    _ = withUnsafeMutablePointer(to: &list) { list in
        // INCORRECT: This call uses tapID as the target object.
        AudioObjectSetPropertyData(tapID, &propertyAddress, 0, nil, propertySize, list)
    }
}

The kAudioAggregateDevicePropertyTapList is a property that belongs to the aggregate device, not the individual tap. Therefore, to set this property, the AudioObjectSetPropertyData function must target the AudioObjectID of the aggregate device itself. Using tapID as the first argument is logically incorrect for this operation and will not update the aggregate device as intended. Furthermore, the preceding AudioObjectGetPropertyData call to fetch the list also appears to use the incorrect tapID as its target in the sample. The AudioObjectID for both getting and setting this property should be the ID of the aggregate device:

_ = AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, &list)
_ = AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, newList)

Thank you!
0
0
317
Sep ’25
coreaudiod display sleep
Hi all, as soon as audio is played in any app, coreaudiod inserts a sleep-prevention assertion for both the system AND the display. Can I somehow stop the insertion of the display sleep assertion?

pid 223(coreaudiod): [0x00004e9e00058dc2] 00:03:18 PreventUserIdleDisplaySleep named: "com.apple.audio.AppleGFXHDAEngineOutputDP:10001:0:{B31A-08C6-00000000}.context.preventuseridledisplaysleep" Created for PID: 4145.

Here PID 4145 is Spotify, but it doesn't matter which app is playing the audio. Any help would be appreciated. Thanks
0
0
58
3w
WebM audio playback
Is it possible to play WebM audio on iOS, either with AVPlayer, AVAudioEngine, or some other API? Safari has supported this for a few releases now, and I'm wondering if I missed something about how to do this. By default these APIs don't seem to work (nor does ExtAudioFileOpen). Our use case is making it possible for iOS users to play back audio recorded in our web app (desktop versions of Chrome & Firefox only support WebM as a destination format for MediaRecorder).
1
0
446
Mar ’25
Help Needed: How to Make iOS Timer More Stable?
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin. The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds. When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating. I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance! P.S. I’ve already enabled background audio permissions. The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
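Not affiliated with that app, but metronomes that stay solid in the background generally schedule clicks on the audio clock rather than with CPU timers; a minimal sketch using AVAudioPlayerNode, where clickBuffer is a placeholder for a short pre-rendered click sound already loaded into an AVAudioPCMBuffer:

```swift
import AVFoundation

// Sketch: schedule each click at an exact sample time on the audio clock, so beat
// timing no longer depends on a DispatchSourceTimer firing precisely in the background.
func scheduleClicks(on player: AVAudioPlayerNode, clickBuffer: AVAudioPCMBuffer,
                    bpm: Double, count: Int) {
    let sampleRate = clickBuffer.format.sampleRate
    let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)
    for beat in 0..<count {
        let when = AVAudioTime(sampleTime: AVAudioFramePosition(beat) * framesPerBeat,
                               atRate: sampleRate)
        player.scheduleBuffer(clickBuffer, at: when, options: [], completionHandler: nil)
    }
    player.play()
}
```

With this approach a timer is only needed for UI updates, and playback continues as long as the audio session stays active in the background.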
2
0
555
Jan ’25
Issue using Siphon Tap on input AudioQueue
Hi all, I've developed an audio DSP application in C++ using AudioToolbox and CoreAudio on macOS 14.4.1 with Xcode 15. I use an AudioQueue for input and another for output. This works great. I'm now adding real-time audio analysis, e.g. spectral analysis. I want this to run independently of my audio processing so it cannot interfere with audio playback. Taps on AudioQueues seem to be a good way of doing this. Since the analytics won't modify the audio data, I am using a siphon tap by setting the AudioQueueProcessingTapFlags to kAudioQueueProcessingTap_PreEffects | kAudioQueueProcessingTap_Siphon. This works fine on my output queue. However, on my input queue the tap callback is called once and then an EXC_BAD_ACCESS occurs (screenshot below). NB: I believe that a callback should only call AudioQueueProcessingTapGetSourceAudio when not using a siphon, so I don't call it. Relevant code:

AudioQueueProcessingTapCallback tap_callback) {
    // Makes an audio tap for a queue
    void * tap_data_ptr = NULL;
    AudioQueueProcessingTapFlags tap_flags = kAudioQueueProcessingTap_PostEffects | kAudioQueueProcessingTap_Siphon;
    uint32_t max_frames = 0;
    AudioStreamBasicDescription asbd;
    AudioQueueProcessingTapRef tap_ref;
    OSStatus status = AudioQueueProcessingTapNew(queue_ref, tap_callback, tap_data_ptr, tap_flags, &max_frames, &asbd, &tap_ref);
    if (status != noErr)
        printf("Error while making Tap\n");
    else
        printf("Successfully made tap\n");
}

void tapper(void * tap_data,
            AudioQueueProcessingTapRef tap_ref,
            uint32_t number_of_frames_in,
            AudioTimeStamp * ts_ptr,
            AudioQueueProcessingTapFlags * tap_flags_ptr,
            uint32_t * number_of_frames_out_ptr,
            AudioBufferList * buf_list) {
    // Callback function for audio queue tap
    printf("Tap callback");
}

Exception stack provided by Xcode: https://developer.apple.com/forums/content/attachment/27479e8d-a118-459b-aa2d-7e30528910e3 (Screenshot 2025-06-14 at 1.29.14 PM.png)

What have I missed? Appreciate any help you learned folks may be able to provide. Best, Geoff.
1
0
92
Jun ’25