Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post

Replies

Boosts

Views

Activity

Mac Catalyst: AUv3 Extension no longer works on macOS, still works on iOS
I have a Catalyst app ('container') which hosts an embedded AUv3 Audio Unit extension ('plugin'). This has worked for years, and worked with this project until a few days ago. It still works on iOS as expected. On macOS the extension is never registered/installed and won't load, it won't show up in auval, and it seems to have stopped working with the Xcode 26.1 update. I'm fairly certain the problem is not code related (it's more likely build settings, project settings, entitlements, signing, etc.). I have compared all settings with another still-working project and can't find any meaningful difference. (I can't request code-level support because even a minimal version vastly exceeds the 250-line code limit.) How can I debug the issue? I literally don't know where to start fixing this problem, short of rebuilding the entire thing and hoping it magically starts working again.
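One low-risk first step (not from the original post, sketched here in Swift) is to ask the system which Audio Unit components it actually knows about, from the container app or a small test tool; the component type code below is a placeholder for the values in the extension's Info.plist. The auval and pluginkit command-line tools can give a similar picture of what the system has registered.

import AudioToolbox
import AVFAudio

// List the Audio Unit components the system has registered, to confirm whether
// the embedded AUv3 extension is visible at all on macOS.
var description = AudioComponentDescription()
description.componentType = kAudioUnitType_MusicDevice   // placeholder: use your AU's type
description.componentSubType = 0                         // 0 is treated as a wildcard here
description.componentManufacturer = 0                    // 0 is treated as a wildcard here

let components = AVAudioUnitComponentManager.shared().components(matching: description)
for component in components {
    print(component.manufacturerName, component.name, component.typeName, component.versionString)
}

If the component never appears in this list (or in auval) on macOS but does on iOS, the problem is registration/installation of the extension rather than anything inside the plug-in's code, which would point back at signing, entitlements, or how the appex is embedded.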
0
0
132
3w
Creating RTP-MIDI Sessions via MIDINetworkSession C API (dlopen/dlsym) on macOS 15?
I’m an amateur developer working on a free utility for composers/producers, for which the macOS release needs to create and name RTP-MIDI sessions in Audio MIDI Setup from the command line (so I can ship a small C helper instead of telling users to click through the UI). Here’s what I’ve tried so far, without luck:

• Plist hacks: Injecting entries into ~/Library/Audio/MIDI Configurations/*.mcfg works when AMS is closed, but AMS immediately locks and reverts my changes when it’s open.
• CoreMIDI C API: I can create virtual ports with MIDISourceCreate, but attempting MIDIObjectGetDataProperty on the apple.midirtp.session plugin always returns err –10836.
• Obj-C & Swift: Loading MIDINetworkSession and calling defaultSession, init, setNetworkName: and setting enabled = YES doesn’t produce a new session object in the Network panel.
• dlopen/dlsym: I extracted the real CoreMIDI binary out of the dyld shared cache and tried binding _MIDINetworkSessionCreate, _SetName, _SetEnabled, etc., but all the symbols come back null or my tool segfaults.
• Plugin registration: I’ve pulled the factory UUID (70C9C5EA-7C65-11D8-B317-000393A34B5A) from /System/Library/Extensions/AppleMIDIRTPDriver.plugin/Contents/Info.plist and called CFPlugInRegisterFactories, but it still never exposes the session-creation calls.

At this point I’m convinced I’m either loading the wrong binary or missing one critical step in registering the RTP-MIDI plugin’s private API. Can anyone point me to:
The exact path of the dylib or bundle that actually exports the MIDINetworkSessionCreate/MIDINetworkSessionSetName/MIDINetworkSessionSetEnabled symbols?
A minimal working snippet (C or Obj-C) that reliably creates and names a Network-MIDI session?
Any pointers, sample code, or even ideas about where Apple hides this functionality on macOS 15 would be hugely appreciated. Thanks!
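For comparison, here is the documented public route, sketched in Swift rather than the C/Obj-C the question asks for, and assuming MIDINetworkSession is available on the deployment target (it is documented for macOS 11 and later). Note that networkName is read-only in the public API, so this can enable the default session and set its connection policy, but probably cannot rename it.

import CoreMIDI

// Enable the default network (RTP-MIDI) session and open it to incoming hosts.
// networkName/localName are read-only through the public API, so renaming the
// session programmatically may simply not be possible this way.
let session = MIDINetworkSession.default()
session.isEnabled = true
session.connectionPolicy = .anyone   // or .hostsInContactList / .noOne
print("session:", session.networkName, "port:", session.networkPort)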
0
1
104
Jun ’25
MusicKit: Best way to check if all tracks of an album are added to the library.
I prefer to use the album fetched from the library instead of the catalog, since this is faster. If I do so, how can I check whether all tracks of an album have been added to the library? In that case I'd like to fetch the catalog version instead, or throw an error (for example when offline). Using .with(.tracks) on the library album fetches only the tracks added to the library. The trackCount property refers to the tracks that can be fetched from the library, and the isComplete property is always nil when fetching from the library. One possible approach is checking the trackNumber and discCount properties. However, this only detects that not all tracks are in the library when a missing track comes before one that is; it can't detect tracks missing from the end of an album, and I'd like to handle that edge case as well. Is there currently a way to do this? I'd prefer not to rely on the Apple Music catalog for this, since it is supposed to work offline as well. Fetching and storing all track IDs while connected and later comparing against them would work, but that could mean storing tens of thousands of track IDs. Thank you
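A sketch of the trackNumber heuristic mentioned above, written against MusicKit's Song type (an assumption about how the tracks are modelled); as the question notes, it can only detect a gap that appears before a track that is in the library, not tracks missing from the end of a disc.

import MusicKit

// Offline heuristic: flag a library album whose per-disc track numbers are not
// a contiguous 1...n run. Tracks missing from the end of a disc are not detectable.
func libraryAlbumHasObviousGaps(_ album: Album) async throws -> Bool {
    let detailed = try await album.with(.tracks)
    guard let tracks = detailed.tracks else { return false }
    let songs = tracks.compactMap { track -> Song? in
        if case .song(let song) = track { return song }
        return nil
    }
    let songsByDisc = Dictionary(grouping: songs, by: { $0.discNumber ?? 1 })
    for (_, discSongs) in songsByDisc {
        let numbers = discSongs.compactMap(\.trackNumber).sorted()
        guard !numbers.isEmpty else { continue }
        if numbers != Array(1...numbers.count) {
            return true   // a hole before the last library track
        }
    }
    return false
}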
0
0
75
Mar ’25
The files generated using AVAudioRecorder have a constant size of only 4 KB
Hello. My app uses AVAudioRecorder to generate recording files, and for some users these files are consistently only 4 KB in size. Most users generate audio files normally; only a few users experience this, and only occasionally. After uninstalling and reinstalling the app it works normally, but the problem reappears after a period of time. The problematic audio files are always the same fixed size and cannot be played. I added the audioRecorderDidFinishRecording delegate method, which reports that the recording completed normally. The user also reports that recording appears to work, but there is a problem with the generated file. How should I handle this issue? Looking forward to your reply.

- (void)startRecordWithOrderID:(NSString *)orderID {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [audioSession setActive:YES error:nil];

    NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
    [settings setObject:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [settings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    [settings setObject:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    [settings setObject:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

    NSString *path = [WDUtility createDirInDocument:@"audios" withOrderID:orderID withPathExtension:@"wav"];
    NSURL *tmpFile = [NSURL fileURLWithPath:path];

    recorder = [[AVAudioRecorder alloc] initWithURL:tmpFile settings:settings error:nil];
    [recorder setDelegate:self];
    [recorder prepareToRecord];
    [recorder record];
}
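As a first diagnostic step, here is a hedged sketch (in Swift, not the original Objective-C) of the same setup with every error surfaced instead of passed nil; a failure in session activation or a false return from record() would match a file that is written but essentially empty.

import AVFoundation

final class Recorder: NSObject, AVAudioRecorderDelegate {
    private var recorder: AVAudioRecorder?

    func start(to url: URL) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.record)
            try session.setActive(true)
        } catch {
            print("session setup failed:", error)     // the nil-error calls above would hide this
            return
        }
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 8000.0,
            AVNumberOfChannelsKey: 1,
            AVLinearPCMBitDepthKey: 16,
        ]
        do {
            let recorder = try AVAudioRecorder(url: url, settings: settings)
            recorder.delegate = self
            if !recorder.prepareToRecord() || !recorder.record() {
                print("recorder failed to start")     // would match the tiny, unplayable file symptom
            }
            self.recorder = recorder
        } catch {
            print("recorder init failed:", error)
        }
    }

    func audioRecorderEncodeErrorDidOccur(_ recorder: AVAudioRecorder, error: Error?) {
        print("encode error:", error ?? "unknown")
    }
}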
0
0
111
Jul ’25
Graceful shutdown during background audio playback.
Hello. My team and I believe we have an issue where our app is asked to gracefully shut down, followed by a SIGTERM. As we’ve learned, this is normally not an issue. However, it also seems to be happening while our app (an audio streamer) is actively playing in the background. From our perspective, starting playback indicates strong user intent. We understand that there can be extreme circumstances where background audio needs to be killed, but should it be considered part of normal operation? We hope that’s not the case. All we see in the logs is the graceful shutdown request. We can say with high certainty that it’s happening, though, as we know that playback was running within 0.5 seconds of the crash, without any other tracked user interaction. Can you verify whether this is intended behavior, and whether there’s something we can do about it on our end? From our logs it doesn’t look to be related to memory usage, either within the app or in the system as a whole. Best, John
0
1
101
Jun ’25
Core Audio Tap: per-device attenuation vs. number of stereo output pairs — how to get unattenuated “raw” app streams?
Hi all, I’ve implemented the new Core Audio Tap API (AudioHardwareCreateProcessTap with CATapDescription) and I’m seeing consistent level attenuation that scales with the number of stereo output pairs exposed by the target device.

What I observe
Device with 4 stereo pairs (8 outs) → tap shows −12.04 dB relative to source.
True 2-ch devices (built-in speakers, AirPods) → ~0 dB attenuation.
The attenuation appears regardless of whether I:
Create a global (default-output) tap via initStereoGlobalTapButExcludeProcesses:
Or create a per-process/per-device tap via initWithProcesses:andDeviceUID:withStream:
Additionally, the routing choice inside the sending app matters:
App output to “System/Default Output” → I often see no attenuation.
App output directly to a multi-out interface (e.g., RME Fireface) → I see the pair-count-scaled attenuation.
I can query Core Audio for the number of output channels/pairs and gain-compensate (+20·log10(N_pairs) dB), and that matches my measurements in many cases. However, this compensation is not universally correct because it seems to depend on where each process routes its audio (Default Output vs. direct device), even when those processes are included in the same tap aggregate.

Question
Is there a supported way to obtain the raw, unattenuated streams for all processes through the Tap API—i.e., to bypass this automatic headroom/attenuation behavior entirely?
If this attenuation is expected by design:
Is there a documented rule for when it applies (global vs. device taps, per-process taps, stream selection, etc.)?
Is there a property/flag to disable it, or a reliable, official method to compute the exact compensation (beyond counting stereo pairs)?
Any guidance on ensuring consistent levels when multiple processes route differently (Default Output vs. direct device) but are captured by the same tap?

Environment
API: AudioHardwareCreateProcessTap + CATapDescription
Devices: built-in output (2-ch), RME Fireface (8+ outs / 4+ stereo pairs)
Behavior reproducible with both global and per-process/per-device tap descriptions.
Attenuation example: 4 stereo pairs → −12.04 dB observed.
Happy to provide a minimal sample, measurements, and device logs. Thanks! — David
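A small sketch of the pair-count make-up gain described above; it assumes the 20·log10(N_pairs) relationship the measurements suggest, which, as noted, does not hold when processes route to the default output, so it is at best a partial workaround rather than an answer to the underlying question.

import Foundation

// Make-up gain for the observed tap attenuation: 4 stereo pairs -> 20·log10(4) ≈ 12.04 dB.
func makeUpGain(forStereoPairs pairCount: Int) -> Float {
    guard pairCount > 1 else { return 1.0 }
    let gainDB = 20.0 * log10(Double(pairCount))
    return Float(pow(10.0, gainDB / 20.0))   // linear factor, numerically equal to pairCount
}

func compensate(samples: inout [Float], stereoPairCount: Int) {
    let gain = makeUpGain(forStereoPairs: stereoPairCount)
    guard gain != 1.0 else { return }
    for i in samples.indices {
        samples[i] *= gain
    }
}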
0
0
81
Nov ’25
Audio session activation occasionally fails from CarPlay
I'm working on adding CarPlay support to an audio app and am running into an issue. Occasionally, when a user opens the app from CarPlay while the main app scene is either not connected or is currently in the background, I will receive an error when attempting to activate the audio session. The code below mimics my setup:

do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error) // NSOSStatusErrorDomain - 560557684: Session activation failed
}

That error code maps to AVAudioSession.ErrorCode.cannotInterruptOthers. Once in this state, all subsequent attempts to play different pieces of content will fail. However, things will start working normally if the user opens the app on their phone and tries again from CarPlay (while the app is in the foreground on their phone). I'm not sure why it would behave this way and want to note that I do have the audio background mode capability enabled. Has anyone else encountered this? Are there any workarounds or changes I could make to prevent this from happening?
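One mitigation worth trying (a sketch, not a known fix): detect the cannotInterruptOthers failure explicitly and defer activation until the next play command or until the phone-side scene becomes active, rather than letting every subsequent playback attempt fail.

import AVFoundation

// Returns true if the session is active; on cannotInterruptOthers the caller can
// defer and retry on the next play command or scene activation instead of failing.
func activatePlaybackSession() -> Bool {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .spokenAudio)
        try session.setActive(true)
        return true
    } catch {
        if (error as NSError).code == AVAudioSession.ErrorCode.cannotInterruptOthers.rawValue {
            // Another session still owns the route (e.g. call or Siri audio); retry later.
            return false
        }
        print("activation failed:", error)
        return false
    }
}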
0
1
161
Apr ’25
Unstable Playlist.Entry.id causes crashes when removing duplicates
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes others to shift, changing their .id values. This leads to diffing errors and collection view crashes in SwiftUI or UIKit when entries are updated. Steps to Reproduce: Add the same song to a playlist multiple times. Observe .id.rawValue of entries (e.g. i.SONGID_0, i.SONGID_1). Remove one entry. Fetch playlist again — note the other IDs have shifted. FB18879062
0
0
538
Jul ’25
How can I find the user's "Favorite Songs" playlist?
It sounds simple, but searching for the name "Favorite Songs" is a non-starter because the playlist is called different names in different countries, even if I specify "&l=en_us" on the query. So is there another property, relationship, or combination thereof which I can use to tell when I've found the right playlist? Properties I've looked at so far:
canEdit: will always be false, so it narrows things down a little
inFavorites: not helpful, as it depends on whether the user has favourited the Favorites playlist, so not relevant
hasCatalog: seems to always be true, so again it may narrow things down a bit
isPublic: doesn't help
Adding the catalog relationship doesn't seem to show anything immediately useful either. Can anyone help? Ideally I'd like to see this exposed as a "kind" or "type", since it has different properties to other playlists, but frankly I'll take anything at this point.
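Purely as an illustration of the property-based narrowing described above (a MusicKit library request, with the filter values taken from the observations in the question rather than from any documented rule):

import MusicKit

// Illustrative narrowing only: there is no documented "kind" for the Favorite Songs
// playlist, so this filter is a heuristic built from the properties noted above and
// can misfire on other non-editable playlists.
func candidateFavoriteSongsPlaylists() async throws -> [Playlist] {
    let request = MusicLibraryRequest<Playlist>()
    let response = try await request.response()
    return response.items.filter { playlist in
        playlist.canEdit == false && playlist.hasCatalog == true
    }
}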
0
0
270
Jul ’25
occasional glitches and empty buffers when using AudioFileStream + AVAudioConverter
I'm streaming mp3 audio data using URLSession/AudioFileStream/AVAudioConverter and getting occasional silent buffers and glitches (little bleeps and whoops as opposed to clicks). The issues are present in an offline test, so this isn't an issue of underruns. Doing some buffering on the input coming from the URLSession (URLSessionDataTask) reduces the glitches/silent buffers to rather infrequent, but they do still happen occasionally.

var bufferedData = Data()

func parseBytes(data: Data) {
    bufferedData.append(data)
    // XXX: this buffering reduces glitching
    // to rather infrequent. But why?
    if bufferedData.count > 32768 {
        bufferedData.withUnsafeBytes { (bytes: UnsafeRawBufferPointer) in
            guard let baseAddress = bytes.baseAddress else { return }
            let result = AudioFileStreamParseBytes(audioStream!, UInt32(bufferedData.count), baseAddress, [])
            if result != noErr {
                print("❌ error parsing stream: \(result)")
            }
        }
        bufferedData = Data()
    }
}

No errors are returned by AudioFileStream or AVAudioConverter.

func handlePackets(data: Data, packetDescriptions: [AudioStreamPacketDescription]) {
    guard let audioConverter else { return }

    var maxPacketSize: UInt32 = 0
    for packetDescription in packetDescriptions {
        maxPacketSize = max(maxPacketSize, packetDescription.mDataByteSize)
        if packetDescription.mDataByteSize == 0 {
            print("EMPTY PACKET")
        }
        if Int(packetDescription.mStartOffset) + Int(packetDescription.mDataByteSize) > data.count {
            print("❌ Invalid packet: offset \(packetDescription.mStartOffset) + size \(packetDescription.mDataByteSize) > data.count \(data.count)")
        }
    }

    let bufferIn = AVAudioCompressedBuffer(format: inFormat!, packetCapacity: AVAudioPacketCount(packetDescriptions.count), maximumPacketSize: Int(maxPacketSize))
    bufferIn.byteLength = UInt32(data.count)
    for i in 0 ..< Int(packetDescriptions.count) {
        bufferIn.packetDescriptions![i] = packetDescriptions[i]
    }
    bufferIn.packetCount = AVAudioPacketCount(packetDescriptions.count)
    _ = data.withUnsafeBytes { ptr in
        memcpy(bufferIn.data, ptr.baseAddress, data.count)
    }

    if verbose {
        print("handlePackets: \(data.count) bytes")
    }

    // Setup input provider closure
    var inputProvided = false
    let inputBlock: AVAudioConverterInputBlock = { packetCount, statusPtr in
        if !inputProvided {
            inputProvided = true
            statusPtr.pointee = .haveData
            return bufferIn
        } else {
            statusPtr.pointee = .noDataNow
            return nil
        }
    }

    // Loop until converter runs dry or is done
    while true {
        let bufferOut = AVAudioPCMBuffer(pcmFormat: outFormat, frameCapacity: 4096)!
        bufferOut.frameLength = 0
        var error: NSError?
        let status = audioConverter.convert(to: bufferOut, error: &error, withInputFrom: inputBlock)
        switch status {
        case .haveData:
            if verbose { print("✅ convert returned haveData: \(bufferOut.frameLength) frames") }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(haveData) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
        case .inputRanDry:
            if verbose { print("🔁 convert returned inputRanDry: \(bufferOut.frameLength) frames") }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(inputRanDry) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
            return // wait for next handlePackets
        case .endOfStream:
            if verbose { print("✅ convert returned endOfStream") }
            return
        case .error:
            if verbose { print("❌ convert returned error") }
            if let error = error {
                print("error converting: \(error.localizedDescription)")
            }
            return
        @unknown default:
            fatalError()
        }
    }
}
0
0
551
Jul ’25
AVAudioEngine installTap stops working after phone call interruption on iPhone 16e
Environment
Device: iPhone 16e
iOS Version: 18.4.1 - 18.7.1
Framework: AVFoundation (AVAudioEngine)

Problem Summary
On iPhone 16e (iOS 18.4.1-18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.

Expected Behavior
After a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.

Actual Behavior
After resuming from a phone call interruption:
Tap callback is no longer invoked
No audio data is captured
No errors are thrown
Engine appears to be running normally
Note: Normal pause/resume (without a phone call interruption) works correctly.

Steps to Reproduce
Start audio recording on iPhone 16e
Receive or make a phone call (triggers an AVAudioSession interruption)
End the phone call
Resume recording with audioEngine.start()
Result: Tap callback is not invoked

Tested devices:
iPhone 16e (iOS 18.4.1-18.7.1): Issue reproduces ✗
iPhone 14 (iOS 18.x): Works correctly ✓
iPhone SE 3 (iOS 18.x): Works correctly ✓

Code

Initial Setup (Works)
let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()

Interruption Handling
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: nil
) { notification in
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
    if type == .began {
        self.audioEngine.pause()
    } else if type == .ended {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start() // Tap callback doesn't work after this on iPhone 16e
    }
}

Workaround
Full engine restart is required on iPhone 16e:
func resumeAfterInterruption() {
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
        self.processAudioBuffer(buffer, at: time)
    }
    audioEngine.prepare()
    try audioSession.setActive(true)
    try audioEngine.start()
}
This works but adds latency and complexity compared to a simple resume.

Questions
Is this expected behavior on iPhone 16e?
What is the recommended way to handle phone call interruptions?
Why does this only affect iPhone 16e and not iPhone 14 or SE 3?
Any guidance would be appreciated!
0
0
120
Oct ’25
Why Does WebView Audio Get Quiet During RTC Calls? (AVAudioSession Analysis)
I developed an educational app that implements audio-video communication through RTC, while using WebView to display course materials during classes. However, some users are experiencing an issue where the audio playback from WebView is very quiet. I've checked that the AVAudioSessionCategory is set by RTC to AVAudioSessionCategoryPlayAndRecord, and the AVAudioSessionCategoryOption also includes AVAudioSessionCategoryOptionMixWithOthers. What could be causing the WebView audio to be suppressed, and how can this be resolved?
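One direction to investigate (hedged, since it only applies if the RTC SDK exposes its AVAudioEngine or lets you configure this): with PlayAndRecord and voice processing active, the voice-processing I/O unit ducks "other" audio such as the WebView, and mixWithOthers does not prevent that. iOS 17 added a configuration for how aggressively other audio is ducked; the property names below should be verified against the SDK you build with.

import AVFAudio

// Applies only when voice processing is enabled on the engine's input node,
// and only on iOS 17 or later; treat this as a pointer rather than a fix.
if #available(iOS 17.0, *) {
    let engine = AVAudioEngine()
    try? engine.inputNode.setVoiceProcessingEnabled(true)
    var ducking = engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration
    ducking.enableAdvancedDucking = false
    ducking.duckingLevel = .min   // reduce how much "other" audio (e.g. the WebView) is ducked
    engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration = ducking
}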
0
0
550
Jul ’25
AVPlayerItem.externalMetadata not available
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However, it does not appear to be visible to my app. When I try, I get:

Value of type 'AVPlayerItem' has no member 'externalMetadata'

The documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18. Code snippet:

import Foundation
import AVFoundation

/// ... in function ...
// create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
var title = AVMutableMetadataItem()
title.identifier = .commonIdentifierAlbumName
title.value = "My Title" as NSString?
title.extendedLanguageTag = "und"

var playerItem = await AVPlayerItem(asset: composition)
playerItem.externalMetadata = [ title ]
0
0
72
Apr ’25
Intermittent Memory Leak Indicated in Simulator When Using AVAudioEngine with mainMixerNode Only
Hello, I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak. Here is a simplified version of the code I'm using:

// This function is called when the user taps a button in the view controller:
#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
static AVAudioEngine *audioEngine = nil;

void soundCreate(void) {
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}

In the memory leak report, the following call stack is repeated, seemingly in a loop:

ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
0
0
109
Apr ’25
CoreMIDI driver - flow control
Hi, when a CoreMIDI driver controls physical HW, it is probably quite common to have to limit the amount of MIDI data received from the system. What comes to mind is to simply delay returning from the MIDIDriverInterface::Send() callback to the calling process. While the application trying to send MIDI really does stall until the callback returns, that seems to be only a side effect of a generally stalled CoreMIDI server. Between callbacks the application can send as much MIDI data as it wants to CoreMIDI; its buffering seems to be endless. However, the HW might not be able to play out all the data. There seems to be no way to indicate an overflow/full-buffer situation back to the application/CoreMIDI. How is this supposed to work? Thanks, any hints or pointers are highly appreciated! Hagen.
0
0
147
Oct ’25
TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.

Setup:
Main App (Flutter) and TTS Audio Unit Extension share the same App Group
App Group is properly configured in developer portal and entitlements
Main app successfully creates and uses files in the container
Container structure shows existing directories (config/, dictionary/) with populated files
Both targets have App Group capability enabled and entitlements set

Current behavior:
Extension can access/read the App Group container
Extension can see existing directories and files
All write attempts are blocked with "sandbox deny(1) file-write-create" errors

Code example:
const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];

    NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                    withIntermediateDirectories:YES
                                                     attributes:nil
                                                          error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}

Error output:
Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace

Key questions:
Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
Is this a known limitation of TTS Audio Unit Extensions?
What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
If writing to App Group containers is not supported, what alternatives are available?

Current entitlements:
<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
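On the logging question specifically, one alternative that needs no file-system access at all is the unified logging system, which is available from app extensions; a minimal sketch in Swift (the subsystem and category strings are placeholders):

import os

// Unified logging works from app extensions without writing files.
// Filter by the (placeholder) subsystem name in Console.app to read the traces.
let ttsLog = Logger(subsystem: "com.example.tts-extension", category: "trace")

func trace(_ message: String) {
    ttsLog.debug("\(message, privacy: .public)")
}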
0
0
99
Apr ’25
MusicKit - Skipping Forwards or Backwards does not update
Hello everyone, I am working on an app that allows you to review your own music using Apple Music. Currently I am running into an issue with skipping forwards and backwards outside of the app.

How it should work: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song on an album should play, and the information should change to reflect that in the app. If you play a song in Apple Music, you can see a Now Playing view on the lock screen; when you skip forward or backwards, it performs the action and reflects it with a little frequency icon on the artwork image of the song.

What it's doing: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song is reflected outside of the app, but not in the app. Skipping a song outside of the app correctly moves to the next song, but when I return to the app, the change is not reflected.

NOTE: I am not using MusicKit types such as Track or Album to display the songs. Since I want to grab the songs and review them, I need a rating, so I created my own model that stores the MusicItemID, name, artist(s), etc.
NOTE: I am using ApplicationMusicPlayer.shared

Is there a way to get the song change to reflect in my app? (If it's easier, a simple example would be nice. No need to create an entire Xcode project.)
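A sketch of the direction that usually helps here, assuming a SwiftUI view: observe the shared player's queue/state objects directly instead of only your own copied metadata, so lock-screen skips update the UI; the exact property names (queue.currentEntry, state.playbackStatus) should be checked against the MusicKit SDK you build with.

import MusicKit
import SwiftUI

struct NowPlayingView: View {
    // Observing the shared player's queue/state republishes changes made outside
    // the app (lock-screen skips), so the view can also refresh your own review model.
    @ObservedObject private var queue = ApplicationMusicPlayer.shared.queue
    @ObservedObject private var state = ApplicationMusicPlayer.shared.state

    var body: some View {
        VStack {
            Text(queue.currentEntry?.title ?? "Nothing playing")
            Text(String(describing: state.playbackStatus))
        }
    }
}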
0
0
86
Apr ’25