Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic


About the built-in instrument sound of Apple devices
Does anyone know how to play the sound of a specific instrument when the user taps a button on the screen of an iPhone or iPad? I'm in the middle of creating a music-learning app, and I'm thinking of assigning single notes or chords to the button-like frames on the on-screen keyboard and fingerboard. Can this be achieved with SwiftUI alone? I remember that MIDI Level 1 had a function for sounding an instrument, but I don't know how to implement the same thing on the current OS. Please lend me your wisdom.
0 replies · 0 boosts · 56 views · May ’25
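SwiftUI itself has no instrument synthesis, so one common approach to the question above is to drive an AVAudioUnitSampler through AVAudioEngine, which plays MIDI notes (and, with a SoundFont loaded, different instruments). A minimal sketch; the `NotePlayer` wrapper name and the note numbers are illustrative, not an Apple API:

```swift
import AVFoundation

/// Illustrative wrapper (not an Apple API) that plays notes on tap.
final class NotePlayer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() throws {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        // Optionally load a SoundFont for a specific instrument, e.g.:
        // try sampler.loadSoundBankInstrument(at: soundFontURL, program: 0,
        //                                     bankMSB: 0x79, bankLSB: 0)
        try engine.start()
    }

    /// Play a single MIDI note (middle C = 60).
    func play(note: UInt8, velocity: UInt8 = 100) {
        sampler.startNote(note, withVelocity: velocity, onChannel: 0)
    }

    /// Play several notes at once as a chord, e.g. [60, 64, 67] for C major.
    func play(chord notes: [UInt8]) {
        notes.forEach { play(note: $0) }
    }

    func stop(note: UInt8) {
        sampler.stopNote(note, onChannel: 0)
    }
}
```

A SwiftUI Button's action can then call `play(chord:)` directly, keeping the UI layer in SwiftUI while AVFoundation does the sound generation.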
System Audio Capture API Fails with OSStatus error 1852797029 (kAudioCodecIllegalOperationError)
Issue Description: I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 ("The operation couldn't be completed."). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.

Questions:
- Has anyone else encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
- Are there specific conditions or system states that might trigger this error sporadically?
- Are there any known workarounds for handling this intermittent failure case?

Any insights or guidance would be greatly appreciated.
0 replies · 0 boosts · 132 views · May ’25
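For what it's worth, Core Audio OSStatus values are often four-character codes (as the poster already worked out for 'nope'), and decoding them mechanically can speed up triage. A small diagnostic sketch; the helper is mine, not an Apple API:

```swift
import Foundation

/// Decodes an OSStatus into its four-character code when it has one.
func fourCC(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    guard bytes.allSatisfy({ (0x20...0x7E).contains($0) }),
          let code = String(bytes: bytes, encoding: .ascii) else {
        return String(status)
    }
    return "'\(code)' (\(status))"
}

print(fourCC(from: 1852797029))  // 'nope' (1852797029)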
Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown:

Issue Description: I've installed a tap on an input device to convert audio to an optimal sample rate, with a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms:
- The converter node is being deallocated during video calls.
- The program crashes entirely when this happens.
- Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The Big Challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call.

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
0 replies · 0 boosts · 65 views · Apr ’25
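For the monitoring part of the question above, the usual HAL approach is a property listener on the device's nominal sample rate (kAudioDevicePropertyActualSampleRate is the other candidate the poster mentions). Whether it fires during a Zoom/FaceTime call is exactly what's in question, but for reference, a minimal sketch, assuming `deviceID` was obtained elsewhere (e.g. the default input device):

```swift
import CoreAudio

/// Logs nominal sample-rate changes for a device.
func watchNominalSampleRate(of deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        var rate = Float64(0)
        var size = UInt32(MemoryLayout<Float64>.size)
        var readAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        if AudioObjectGetPropertyData(deviceID, &readAddress, 0, nil, &size, &rate) == noErr {
            print("Nominal sample rate is now \(rate) Hz")
        }
    }
}
```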
Frequent crashes related to com.apple.coreaudio.AQClient thread
I'm encountering numerous crashes involving the com.apple.coreaudio.AQClient thread in our application. The crash details are as follows:

```
#10 com.apple.coreaudio.AQClient SIGSEGV SEGV_ACCERR
0   libobjc.A.dylib              _objc_msgSend + 44
1   AudioToolbox                 ClientMessageHandler::PropertyChanged(unsigned int) + 872
2   AudioToolbox                 ClientAudioQueue::FetchAndDeliverPendingCallbacks(unsigned int) + 924
3   AudioToolbox                 __XCallbackNotificationsAvailable + 212
4   libAudioToolboxUtility.dylib _mshMIGPerform + 260
5   CoreFoundation               ___CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
6   CoreFoundation               ___CFRunLoopDoSource1 + 596
7   CoreFoundation               ___CFRunLoopRun + 2392
8   CoreFoundation               _CFRunLoopRunSpecific + 572
9   AudioToolbox                 CADeprecated::GenericRunLoopThread::Entry(void*) + 156
10  libAudioToolboxUtility.dylib CADeprecated::CAPThread::Entry(CADeprecated::CAPThread*) + 88
11  libsystem_pthread.dylib      __pthread_start + 116
```

All these crashes occur on system versions below iOS/iPadOS 17, primarily when the device's available RAM is low. What steps can I take to resolve this issue? Any insights would be greatly appreciated!
0 replies · 0 boosts · 171 views · Nov ’25
Changing playback rate of AVQueuePlayer not working for some AirPlay devices
Hi. I am working on an audio app for iOS. I have implemented UI and handling that allow the user to change the playback rate of audio. When the user selects a different rate, I update the rate property on my AVQueuePlayer. This works well on device. When I use AirPlay, it works for some devices and not for others: some devices won't change playback rate and always play at 1x speed. Is this possibly a limitation of those third-party devices, or is there something I'm missing or should check? I would love to get playback-rate changes working across all AirPlay devices with our app. Kind regards.
0 replies · 0 boosts · 428 views · Jan ’25
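Some third-party AirPlay receivers do only render at 1x regardless of the requested rate. As a defensive check for the situation above, AVPlayerItem at least reports what it believes the current item supports; a sketch (the `apply(rate:to:)` helper is illustrative, and these flags are not guaranteed to reflect every route's real capabilities):

```swift
import AVFoundation

/// Only apply a non-1x rate the current item claims to support.
func apply(rate: Float, to player: AVQueuePlayer) {
    guard let item = player.currentItem else { return }

    let supported = rate == 1
        || (rate > 1 && item.canPlayFastForward)
        || (rate > 0 && rate < 1 && item.canPlaySlowForward)

    if supported {
        player.rate = rate
    } else {
        print("Current item does not report support for rate \(rate)")
    }
}
```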
tvOS AVQueuePlayer Now Playing Info in Control Center?
I have a music app I'm developing, and I'm seeing a weird issue where now-playing info appears on every platform except tvOS. As far as I can tell, I have correctly configured the MPNowPlayingInfoCenter:

```swift
MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing
```

Are there any extra requirements to get my app's now-playing info showing in Control Center on tvOS? Another strange issue that might be related: I can use the Apple TV remote to pause audio but not resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
0 replies · 0 boosts · 98 views · Jun ’25
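A commonly missed piece for cases like the one above is registering handlers with MPRemoteCommandCenter; without play/pause handlers, tvOS may not treat the app as the active now-playing app, which would also explain the remote's failure to resume. A minimal sketch, assuming an AVPlayer-based setup:

```swift
import AVFoundation
import MediaPlayer

/// Wire the system transport controls to playback.
func registerRemoteCommands(for player: AVPlayer) {
    let center = MPRemoteCommandCenter.shared()

    center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}
```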
Error -50 writing to AVAudioFile
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad parameter error from the API. Does this API not support the specified format, or is there some other issue? Here is the debugger output:

```
(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _parseInfo : nil
  - _baseParseInfo : nil
(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po buffer.frameLength
882
(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
  - pointerValue : 12894608992
```

This code handles the details of converting the LiveSwitch frame into an AVAudioPCMBuffer:

```swift
extension FMLiveSwitchAudioFrame {
    func convertedToPCMBuffer() -> AVAudioPCMBuffer {
        Self.convertToAVAudioPCMBuffer(from: self)!
    }

    static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
        // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
        guard let buffer = frame.buffer(),
              let format = buffer.format() as? FMLiveSwitchAudioFormat else {
            return nil
        }

        // Extract PCM format details from FMLiveSwitchAudioFormat
        let sampleRate = Double(format.clockRate())
        let channelCount = AVAudioChannelCount(format.channelCount())

        // Determine bytes per sample based on bit depth
        let bitsPerSample = 16
        let bytesPerSample = bitsPerSample / 8
        let bytesPerFrame = bytesPerSample * Int(channelCount)
        let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)

        // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
        guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
            return nil
        }

        // Create an AudioBufferList to wrap the existing buffer
        let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
        audioBufferList.pointee.mNumberBuffers = 1
        audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
        audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
        audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer

        // Transfer ownership of the buffer to AVAudioPCMBuffer
        let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
            // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
            buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
        } */
        pcmBuffer?.frameLength = frameLength
        return pcmBuffer
    }
}
```

This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen:

```swift
private func onRaisedFrame(obj: Any!) -> Void {
    // Bail out early if no one is interested in the data.
    guard isMonitoring else { return }

    // Convert LS frame to AVAudioPCMBuffer (no-copy)
    let frame = obj as! FMLiveSwitchAudioFrame
    let buffer = frame.convertedToPCMBuffer()

    // Hand subscribers a reference to the buffer for rendering to display.
    bufferPublisher?.send(buffer)

    // If we have an output file, store the data there, as well.
    guard let audioFile = self.audioFile else { return }
    do {
        try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
    } catch {
        FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
        self.audioFile = nil
    }
}
```

This is how the audio file is being set up:

```swift
static var recordingFormat: AVAudioFormat = {
    AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()

let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
```
0 replies · 0 boosts · 389 views · Mar ’25
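One possible culprit worth checking against the post above: AVAudioFile(forWriting:settings:) opens the file with a default processing format of deinterleaved Float32, and write(from:) validates buffers against the processing format rather than the file format, which would yield -50 for an Int16 interleaved buffer even though fileFormat looks right. A hedged sketch using the initializer that sets the processing format explicitly; `outputURL`, `recordingFormat`, and `buffer` are the poster's:

```swift
import AVFoundation

// Open the file so its *processing* format matches the Int16 interleaved
// buffers being written.
let audioFile = try AVAudioFile(
    forWriting: outputURL,
    settings: recordingFormat.settings,
    commonFormat: .pcmFormatInt16,
    interleaved: true)

// write(from:) checks buffers against processingFormat, which is now
// Int16/interleaved instead of the default deinterleaved Float32.
try audioFile.write(from: buffer)
```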
AVAssetResourceLoaderDelegate for radio stream
Hi everyone, I'm trying to use AVAssetResourceLoaderDelegate to handle a live radio stream (e.g. an Icecast/HTTP stream). My goal is to have access to the last 30 seconds of audio data during playback, so I can analyze it for specific audio patterns in near-real-time. I've implemented a custom resource loader that works fine for podcasts and static files, where the file size and content length are known. However, for infinite live streams, my current implementation stops receiving new loading requests after the first one is served. As a result, playback either stalls or fails to continue. Has anyone successfully used AVAssetResourceLoaderDelegate with a continuous radio stream? Or can you suggest a better approach for buffering and analyzing live audio? Any tips, examples, or advice would be appreciated. Thanks!
0 replies · 0 boosts · 123 views · Jun ’25
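On the "better approach" question above: if the stream can be routed through AVAudioEngine (e.g. decoded packets fed to an AVAudioPlayerNode, which is a different pipeline from AVPlayer and a real architectural assumption here), a tap on the mixer can maintain a rolling 30-second analysis window without touching the resource loader at all. A simplified sketch; real code would need to synchronize access, since the tap block is not called on the main thread:

```swift
import AVFoundation

/// Keeps roughly the last 30 seconds of rendered audio for analysis.
final class RollingAudioWindow {
    private var buffers: [AVAudioPCMBuffer] = []
    private var bufferedSeconds: Double = 0
    private let windowSeconds: Double = 30

    func install(on engine: AVAudioEngine) {
        let node = engine.mainMixerNode
        let format = node.outputFormat(forBus: 0)
        node.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            guard let self = self else { return }
            self.buffers.append(buffer)
            self.bufferedSeconds += Double(buffer.frameLength) / format.sampleRate
            // Evict the oldest buffers once the window exceeds 30 seconds.
            while self.bufferedSeconds > self.windowSeconds, let oldest = self.buffers.first {
                self.bufferedSeconds -= Double(oldest.frameLength) / format.sampleRate
                self.buffers.removeFirst()
            }
        }
    }
}
```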
Ducking MusicKit output when playing another sound
I am developing an app that uses MusicKit to play music, and I then need spoken words played to the user while ducking the audio coming from MusicKit (ApplicationMusicPlayer). The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then playing it back using AVAudioSession. Sample code is below. The problem I am having is that .duckOthers is not ducking the Application Music Player output. Is this a bug, or am I doing this wrong?

```swift
// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)

// Set the ducking level to maximum
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()

// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
```
0 replies · 0 boosts · 48 views · Apr ’25
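Independent of whether ApplicationMusicPlayer honors .duckOthers, the duck/unduck cycle also depends on session lifecycle: ducking is normally released by deactivating the session once the speech finishes. A sketch of that half, assuming (as in the post above) the owning class is the AVAudioPlayer's delegate:

```swift
import AVFoundation

// Called by AVAudioPlayer when the spoken audio finishes.
func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
    // Deactivating with this option signals other audio that it may
    // return to full volume.
    try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
}
```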
access transport in Logic Pro
hi, I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU. Is there a Logic Pro resource available for developers anywhere?

```cpp
if (auto* playHead = processor->getPlayHead())
{
    juce::AudioPlayHead::CurrentPositionInfo posInfo;
    if (playHead->getCurrentPosition (posInfo))
    {
        bool isCurrentlyPlaying = posInfo.isPlaying;
        if (isCurrentlyPlaying != wasTransportPlaying)
        {
            if (isCurrentlyPlaying)
            {
                wasTransportPlaying = isCurrentlyPlaying;
                startAllTimers();
            }
            else
            {
                wasTransportPlaying = isCurrentlyPlaying;
                stopAllTimers();
            }
        }
    }
}
```

thanks :)
0 replies · 0 boosts · 259 views · Mar ’25
AudioUnit may experience silent capture issues on iPadOS 18.4.1 or 18.5.
Among the millions of users of our online product, we have identified through data metrics that the silent-audio capture rate on iPadOS 18.4.1 or 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered a similar issue? The parameters we used are as follows:

AudioSession:

```
category: AVAudioSessionCategoryPlayAndRecord
mode: AVAudioSessionModeDefault
options: 77
preferredSampleRate: 48000.000000
preferredIOBufferDuration: 0.010000
```

AudioUnit:

```c
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;

component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
```
0 replies · 0 boosts · 128 views · Jun ’25
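One way to quantify a problem like the one above client-side is to flag effectively silent capture buffers and report the rate through existing metrics. A diagnostic sketch of mine that assumes deinterleaved Float32 buffers (e.g. from an AVAudioEngine input tap); an Int16 RemoteIO callback like the poster's would need the equivalent check on the raw samples:

```swift
import AVFoundation

/// Returns true when a captured buffer is effectively silent.
func isEffectivelySilent(_ buffer: AVAudioPCMBuffer, threshold: Float = 1e-4) -> Bool {
    guard let channels = buffer.floatChannelData else { return false }
    let frames = Int(buffer.frameLength)
    guard frames > 0 else { return true }

    var sumOfSquares: Float = 0
    let channelCount = Int(buffer.format.channelCount)
    for ch in 0..<channelCount {
        for i in 0..<frames {
            sumOfSquares += channels[ch][i] * channels[ch][i]
        }
    }
    // Compare the RMS level against a near-silence threshold.
    let rms = (sumOfSquares / Float(frames * channelCount)).squareRoot()
    return rms < threshold
}
```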
AVSpeechSynthesisVoices available on device
Hello there! Is there a list of voices that are always available on iOS/iPadOS devices? It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices. I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true? I also noticed that on the same iPad where I was using those two voices (Nicky and Aaron), updating to the iPadOS 26 beta made those voices unavailable. Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
0 replies · 0 boosts · 141 views · Jun ’25
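Since the installed voice set evidently varies by device and OS release, a defensive pattern for the situation above is to enumerate what is actually present and fall back rather than hard-coding one identifier. A sketch; the preferred-identifier list is illustrative:

```swift
import AVFoundation

// Try preferred voices first, then fall back to any installed en-US voice.
let preferredIdentifiers = [
    "com.apple.ttsbundle.siri_Nicky_en-US_compact",
    "com.apple.voice.compact.en-US.Samantha",
]

let voice = preferredIdentifiers
    .compactMap { AVSpeechSynthesisVoice(identifier: $0) }
    .first
    ?? AVSpeechSynthesisVoice.speechVoices().first { $0.language == "en-US" }

print("Using voice: \(voice?.identifier ?? "none")")
```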
Unstable Playlist.Entry.id causes crashes when removing duplicates
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes the others to shift, changing their .id values. This leads to diffing errors and collection-view crashes in SwiftUI or UIKit when entries are updated.

Steps to Reproduce:
1. Add the same song to a playlist multiple times.
2. Observe .id.rawValue of the entries (e.g. i.SONGID_0, i.SONGID_1).
3. Remove one entry.
4. Fetch the playlist again and note that the other IDs have shifted.

FB18879062
0 replies · 0 boosts · 538 views · Jul ’25
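A minimal observation sketch for the report above, assuming `playlist` is a Playlist already fetched from the user's library via MusicKit; printing the identifiers before and after the removal makes the shift visible:

```swift
import MusicKit

/// Prints each entry's identifier; run once before and once after an edit.
func dumpEntryIDs(of playlist: Playlist) async throws {
    let detailed = try await playlist.with([.entries])
    if let entries = detailed.entries {
        for entry in entries {
            print(entry.id.rawValue)  // e.g. "i.SONGID_0", "i.SONGID_1"
        }
    }
}
```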