Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic


Live Translations on VoIP on iOS 26
Hi team, with regard to Call (Live) Translations on VoIP:

- Is it possible to invoke live translations within the app (without going into the Call System UI)?
- Is it possible to navigate users from the app to the Call System UI via an API (and also invoke the new live translations directly)?
- Will Apple support more languages apart from the current ones? (Currently I see 4 supported languages.)
1 reply · 0 boosts · 164 views · Aug ’25

SpeechTranscriber extremely slow (14+ seconds) despite proper locale allocation and optimization
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to first result, making it unusable for real-time applications.

Environment:
- iOS: 26.0 Beta
- Xcode: Beta 5
- Device: iPhone 16 Pro
- Sample app: official Apple SwiftTranscriptionSampleApp from WWDC 2025

Configuration tested:
- Locale: en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES
- Setup: all optimizations applied (preheating, high priority, model retention)

I started testing in my own app to replace the SFSpeech API and include speech detection, but after long fights with the documentation (this part is quite terrible, TBH) I tested the example (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results. I added some logs to check the specific timing:

    🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio!
    🎙️ [20:30:41.532] Listening for transcription results...
    🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false)

Questions:
- Is this expected performance for the iOS 26 beta, given that the old SFSpeech API is far faster?
- Are there additional optimization steps for SpeechTranscriber?
- Should we expect significant performance improvements in later betas?
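To rule out logging overhead when reporting numbers like these, the delay can be measured directly around the result stream. A minimal sketch, assuming the sample app's SpeechTranscriber exposes its output as the `results` async sequence (as the WWDC 2025 sample does); treat the exact property and result type as assumptions:

    import Speech

    // Sketch: time from "listening" to the first transcription result.
    // Assumes `transcriber` is the configured SpeechTranscriber from the sample app.
    func measureFirstResultLatency(transcriber: SpeechTranscriber) async throws {
        let clock = ContinuousClock()
        let start = clock.now
        for try await result in transcriber.results {
            print("First result after \(clock.now - start): \(result.text)")
            break // only the first result matters for latency
        }
    }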
1 reply · 0 boosts · 201 views · Aug ’25

SpeechAnalyzer > AnalysisContext lack of documentation
I'm using the new SpeechAnalyzer framework to detect certain commands and want to improve accuracy by giving context. It seems like AnalysisContext is the solution for this, but I couldn't find any usage example, so I want to make sure whether I'm doing it right:

    let context = AnalysisContext()
    context.contextualStrings = [
        AnalysisContext.ContextualStringsTag("commands"): [
            "set speed level", "set jump level", "increase speed", "decrease speed", ...
        ],
        AnalysisContext.ContextualStringsTag("vocabulary"): [
            "speed", "jump", ...
        ]
    ]
    try await analyzer.setContext(context)

With this implementation, it still gives outputs like "Set some speed level", "It's speed level", etc. Also, is it possible to make it expect a number after those commands, in order to eliminate results like "set some speed level to" (instead of "two")?
1 reply · 0 boosts · 397 views · 6d

Unexpected AVAudioSession behavior after iOS 18.5 causing audio loss in VoIP calls
After updating to iOS 18.5, we’ve observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We’d like to confirm whether there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.

    func configureForVoIPCall() throws {
        try setCategory(.playAndRecord,
                        mode: .voiceChat,
                        options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
        try setActive(true)
    }
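One way to narrow down where capture stops mid-call is to log the session events that can end input without an error being thrown. A diagnostic sketch using standard AVAudioSession notifications:

    import AVFoundation

    // Diagnostic sketch: log the session events that can silently end audio input.
    final class SessionDiagnostics {
        private var observers: [NSObjectProtocol] = []

        func start() {
            let center = NotificationCenter.default
            let session = AVAudioSession.sharedInstance()
            observers.append(center.addObserver(
                forName: AVAudioSession.interruptionNotification,
                object: session, queue: .main) { note in
                    print("Interruption: \(note.userInfo ?? [:])")
            })
            observers.append(center.addObserver(
                forName: AVAudioSession.routeChangeNotification,
                object: session, queue: .main) { _ in
                    print("Route changed, now: \(session.currentRoute)")
            })
            observers.append(center.addObserver(
                forName: AVAudioSession.mediaServicesWereResetNotification,
                object: session, queue: .main) { _ in
                    print("Media services reset - engine and session need reconfiguring")
            })
        }
    }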
1 reply · 0 boosts · 248 views · Aug ’25

Audio clipping - macOS Tahoe 26 - Beta 5
I was testing audio playback from YouTube in Safari, and the sound was clipping heavily. At first, I thought it might be due to the poor quality of my small sound system. However, when I took a screenshot and the screenshot sound effect itself produced a loud clipping noise, it became clear that this is not a mechanical problem with my speakers, nor an issue specific to YouTube or Safari. This appears to be a system-wide audio issue in macOS Tahoe 26 - Beta 5.
1 reply · 2 boosts · 298 views · Aug ’25

iPhone 14 Pro: External USB mic not available in AVAudioSession for call apps, but works in Voice Memos & Instagram Live
I’m facing a strange audio routing issue that seems specific to the iPhone 14 Pro / Pro Max. I’m using LiveKit (WebRTC) in a React Native app, which uses AVAudioSession internally for audio capture (VoIP / call-style usage).

What’s happening: I’m using an external USB microphone. On these devices:

- iPhone 11 → ✅ USB mic works
- iPhone 13 → ✅ USB mic works
- iPhone 17 Pro → ✅ USB mic works
- iPhone 14 Pro Max → ❌ USB mic does NOT work

On the iPhone 14 Pro Max, the same USB mic works in Voice Memos and in Instagram Live, but it does not appear as an input option in my app and does not work in WhatsApp / Instagram calls. Also, in my app on the iPhone 14 Pro Max, iOS does not show the audio input selector UI, while on the iPhone 17 Pro the same app and same build does show the selector and the USB mic works.

My audio session config (LiveKit):

    await AudioSession.setAppleAudioConfiguration({
      audioCategory: 'playAndRecord',
      audioMode: 'default',
      audioCategoryOptions: ['allowBluetooth', 'defaultToSpeaker'],
    });
    await AudioSession.startAudioSession();

My questions:

- Is this a known limitation or behavior specific to the iPhone 14 Pro / Pro Max?
- Does the iPhone 14 Pro have different audio routing rules for call / VoIP mode compared to other devices?
- Why does the same USB mic work in recording apps (Voice Memos, Instagram Live) but not in call-style apps (LiveKit, WhatsApp, Instagram calls)?
- Is there any documented difference in AVAudioSession behavior on the iPhone 14 Pro regarding external USB audio inputs?
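A first diagnostic step, sketched here in plain AVAudioSession terms outside LiveKit, is to check whether the USB device shows up in availableInputs at all on the affected model and to request it explicitly; if it never appears there, the restriction sits below the app layer:

    import AVFoundation

    // Sketch: list the inputs the session exposes and prefer a USB input if present.
    func logAndPreferUSBInput() {
        let session = AVAudioSession.sharedInstance()
        for input in session.availableInputs ?? [] {
            print("input: \(input.portType.rawValue) - \(input.portName)")
        }
        if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
            do { try session.setPreferredInput(usb) }
            catch { print("setPreferredInput failed: \(error)") }
        }
    }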
1 reply · 0 boosts · 92 views · 4w

Application tones start when I get an incoming call or message
I've got a problem with my app while testing it on my own phone. I'm using AudioKit to generate tones as part of the app. Everything seems to work fine: sounds start, stop, etc. They play when the app is closed and when the phone is locked, so background audio is working.

However, I'm seeing an issue where, even after Stop is pressed and the application is exited, if I get a notification such as a text message, the base tone for the app starts to play. If I then open the app and check the Start/Stop button, it says Start, so it hasn't been activated. If I tap Start, a second tone starts; that one stops with the Stop button, but the original tone that was set off by the incoming message carries on playing until I go to the app switcher and swipe the application upwards. For the life of me, I can't figure out what's happening here.
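One common culprit for this pattern is an audio session that is left active after Stop, so a later interruption (the message sound) restarts a half-torn-down engine. A minimal sketch of a full teardown on Stop; the engine call is a placeholder for whatever AudioKit object drives the tones:

    import AVFoundation

    // Sketch: fully tear down audio on Stop so notifications can't revive it.
    func stopEverything() {
        // engine.stop() // placeholder: stop the AudioKit engine generating the tones
        do {
            try AVAudioSession.sharedInstance().setActive(
                false, options: .notifyOthersOnDeactivation)
        } catch {
            print("Failed to deactivate audio session: \(error)")
        }
    }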
1 reply · 0 boosts · 91 views · May ’25

Can't set AVAudioSession sample rate, and installTap needs a buffer size of 4800 at minimum
Two issues:

No matter what I set in try audioSession.setPreferredSampleRate(x), the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.

I'm checking the current output loudness to animate a 3D character, using:

    mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
        Task { @MainActor in
            // calculate rms and animate character accordingly
        }
    }

but any buffer size under 4800 is just ignored, and the buffers I get are 4800 frames. This is OK when the sample rate is 48000, as 10 buffers per second lead to decent visual results. But when AirPods connect, the sample rate is 24000, which means only 5 buffers per second, so the character animation looks lame.

My AVAudioEngine setup is the following:

    audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
    audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
    audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)

I'd be fine if the outputNode runs at whatever rate it needs, as long as my tap gets at least 10 updates per second.

PS: Specifying my preferred format in the tap doesn't change anything either:

    let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
    mixerNode.installTap(onBus: 0, bufferSize: y, format: format)
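One workaround that sidesteps the buffer-size floor is to slice each delivered buffer into fixed-duration chunks and compute RMS per chunk, so a 4800-frame buffer at 24000 Hz still yields ~10 loudness values per second (they arrive in bursts per tap callback, so the animation should queue them). A sketch on real AVFoundation types:

    import AVFoundation

    // Sketch: derive ~10 RMS values per second regardless of tap buffer size.
    func rmsChunks(buffer: AVAudioPCMBuffer, updatesPerSecond: Double = 10) -> [Float] {
        guard let samples = buffer.floatChannelData?[0] else { return [] }
        let frames = Int(buffer.frameLength)
        let chunkSize = max(1, Int(buffer.format.sampleRate / updatesPerSecond))
        var values: [Float] = []
        var start = 0
        while start < frames {
            let end = min(start + chunkSize, frames)
            var sum: Float = 0
            for i in start..<end { sum += samples[i] * samples[i] }
            values.append((sum / Float(end - start)).squareRoot())
            start = end
        }
        return values
    }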
1 reply · 0 boosts · 385 views · Aug ’25

MusicKit: Multichannel Dolby Atmos Limited to Stereo Output - Is This Intended Behavior?
I'm experiencing a significant limitation with MusicKit's Dolby Atmos implementation on macOS and would appreciate clarification on whether this is intended behavior or whether there are solutions available. When streaming Dolby Atmos content through MusicKit's ApplicationMusicPlayer, the output is limited to 2-channel stereo, even when:

- Audio MIDI Setup is configured for 7.1.4 (12-channel) output
- The same tracks play in full multichannel through the native Apple Music app
- Dolby Atmos is set to "Automatic" in Apple Music preferences

Please let me know if there is any way to enable multichannel output. If not, is this limitation documented anywhere? Thanks!
1 reply · 2 boosts · 287 views · Aug ’25

ShazamKit Background Operation Broken on iOS 18 - SHManagedSession Stops Working After ~20 Seconds
Hi Apple engineers and fellow developers,

I'm experiencing a critical regression with ShazamKit's background operation on iOS 18. ShazamKit's SHManagedSession stops identifying songs in the background after approximately 20 seconds on iOS 18, while the exact same code works perfectly on iOS 17.

The behavior is consistent: the app works perfectly in the foreground, but when backgrounded or when the device is locked, it initially works for about 20 seconds and then stops identifying new songs. The microphone indicator remains active, suggesting audio access is maintained, but ShazamKit doesn't deliver identified songs in the background until you open the app again. Detection immediately resumes when bringing the app to the foreground.

My technical setup uses SHManagedSession for continuous matching, with background modes properly configured in Info.plist (including the audio mode) and Background App Refresh enabled. I've tested this on physical devices running iOS 18.0 through 18.5 with the same results across all versions. The exact same code running on iOS 17 devices works flawlessly in the background.

To reproduce: initialize SHManagedSession and start matching, begin song identification in the foreground, background the app or lock the device, and play different songs. They are initially detected for about 20 seconds; after the timeout period, new songs are no longer identified until you bring the app to the foreground.

This regression has impacted my production app, as users who rely on continuous background music identification are experiencing a broken feature. I submitted this as Feedback ID FB15255903 last September with no solution so far. I've created a minimal demo project that reproduces the issue: https://github.com/tfmart/ShazamKitBackground

Has anyone else experienced this ShazamKit background regression on iOS 18? Are there any known workarounds or alternative approaches? Given how long this issue has persisted, could we please get acknowledgment of the regression, an expected timeline for a fix, or any recommended workarounds? My testing environment is Xcode 16.0+ on iOS 18.0-18.5 across multiple physical device models. Any guidance would be greatly appreciated.
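For anyone reproducing this without the linked project, a continuous-matching loop with SHManagedSession looks roughly like the sketch below; the `results` sequence and the result cases follow the iOS 17+ ShazamKit API as I understand it, but treat the details as a best-effort reconstruction and prefer the demo project above:

    import ShazamKit

    // Sketch: continuous matching with SHManagedSession (iOS 17+).
    func runContinuousMatching() async {
        let session = SHManagedSession()
        await session.prepare() // prewarm microphone capture and recognition
        for await result in session.results {
            switch result {
            case .match(let match):
                print("Matched: \(match.mediaItems.first?.title ?? "unknown")")
            case .noMatch:
                print("No match for this audio window")
            case .error(let error, _):
                print("Session error: \(error)")
            }
        }
    }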
1 reply · 0 boosts · 365 views · 2w

Detecting if a phone call is being recorded by another app on iOS
Hello, I’m new here. I'm developing an iOS app and I’d like to know whether it is possible to detect if a phone call is being recorded by another app running in the background. I’ve already reviewed the documentation for CallKit and AVAudioSession, but I couldn’t find anything related. My expectation was that iOS might provide some callback or API to indicate whether a call is being recorded (by third-party apps), but so far I haven’t found a way.

My questions are:

- Does iOS expose any API to detect if a call is being recorded?
- If not, is there any indirect method compliant with Apple's policy (e.g., microphone usage events) that can be relied upon?
- Or is this something that iOS explicitly prevents for privacy reasons?

I'm expecting solutions that align with Apple’s policies and would be accepted under the App Store Review Guidelines. Thanks in advance for any guidance.
1 reply · 0 boosts · 254 views · Aug ’25

Feature Request: Long-Lived Access to Personal Apple Music Data
Use Case Summary

I'm developing a personal portfolio website (using Nuxt) and want to display information from my own Apple Music library: personal playlists, recently played tracks, or a read-only "now playing" widget. This is purely for personal use on my website and doesn't require other users to log in. With Spotify's API, implementing this was straightforward thanks to automatic token refresh. I want a similarly seamless integration with Apple Music.

Challenge with MusicKit and Music User Tokens

Apple Music API requirements: Apple's Music API requires a valid Music User Token (MUT) for requests involving personal library data. Beyond the Apple Developer Token, you must obtain a user-specific token via MusicKit authentication to access your own library playlists, play history, or current playback status.

Token expiration and manual renewal: Music User Tokens expire after approximately 6 months without any mechanism to automatically refresh or renew them, unlike typical OAuth flows that provide refresh tokens. Apple's guidance suggests the device (e.g., an iPhone) is responsible for obtaining new user tokens when old ones expire. This works for interactive apps on Apple devices but fails in server-side or long-lived web contexts like a personal website widget.

Impact on personal projects: Displaying Apple Music data on a public-facing site becomes difficult. I would need to periodically re-authenticate through the MusicKit JS flow every few months just to keep a widget alive. Embedding credentials in a public site is insecure, and manual token refreshing is cumbersome and easy to forget.

Comparison to Spotify's Token Model

Spotify's API offers a developer-friendly authentication model. Their OAuth flow provides a refresh token that applications can use to obtain new access tokens automatically without requiring user re-authorization. This means a personal app can maintain continuous access to a user's Spotify data for extended periods until access is revoked. When building a similar feature with Spotify, this automatic token renewal was crucial: I could safely store the refresh token on my server and have my app periodically update the access token. Many developers have created public-facing widgets showing currently playing tracks on blogs or GitHub profiles using this model. Unfortunately, Apple Music's API lacks an equivalent capability, putting it at a disadvantage for personal projects.

Proposed Solutions

I request Apple's consideration for one of these enhancements:

1. Provide a mechanism to refresh or extend a Music User Token programmatically for server-side applications. This could be an OAuth-style refresh token issued alongside the MUT, or a dedicated endpoint to exchange an expired MUT for a new one. This would enable renewal without a full user re-auth/login each time.

2. Allow developers to access their own Apple Music library data with just the long-lived Developer Token. Apple could permit GET requests to personal library endpoints using the Developer Token alone, or a special token tied to the developer's Apple ID. This access would be read-only: no ability to modify the library, purely for retrieving data. It could be an opt-in feature in the Apple Developer account settings.

Either solution would significantly improve the developer experience for the Apple Music API in personal projects.

Security and Privacy Considerations

This request is not about accessing others' data or creating privacy loopholes; it's about empowering an Apple Music subscriber to access their own information more conveniently. The proposed options respect privacy principles:

- The data accessed is only what the user already has access to: their own playlists, library items, or playback status.
- An automatic token refresh can be designed securely (revocable tokens bound to a single account with no increase in permissions).
- Read-only developer token access could be restricted to non-sensitive data and require explicit opt-in.

Conclusion

I request an improvement to Apple Music's developer experience through either (1) an automatic Music User Token refresh mechanism, or (2) a provision for read-only personal library access using a Developer Token. This would bring Apple Music integration capabilities closer to parity with services like Spotify for personal projects. I ask Apple's Developer Relations and the Apple Music API team to consider this feature request. If there are existing best practices or workarounds with current APIs, I would appreciate guidance.

I invite feedback from Apple or other developers: are there known patterns for maintaining an Apple Music user token for server-side applications, or any plans to support non-interactive use cases? Any advice is welcome. Thank you for your consideration. I look forward to integrating Apple Music into my personal site as smoothly as with other services, and believe many developers would benefit from this added flexibility.

Sources:
- User Authentication for MusicKit - Requirements for Music User Tokens
- StackOverflow: Do Apple Music User Tokens expire? - Confirmation of 6-month expiration
- MetaBrainz GSoC Blog - Documentation of MusicKit authentication limitations
- Apple Developer Forums - Information on token renewal behavior
- Spotify for Developers - Documentation on refresh token mechanism
1 reply · 0 boosts · 257 views · Mar ’25

Play Audio for a Metronome
Hi, I am looking for a good way to play sounds at a high frequency. At the moment I am using AVAudioEngine: I create a couple of AVAudioPlayerNodes, and for each sound I need to play I create an AVAudioPCMBuffer. When the app needs to play a sound, I get the correct AVAudioPCMBuffer for the sound, take the first available AVAudioPlayerNode, and feed the buffer to it. The timing for a metronome app has to be very precise, because if it's off by about 16 ms the user can hear that it is not playing at the right interval. For low tempos this works without any problems, but at high tempos it gets worse. Maybe anyone has an idea of how I can improve my method. It's a plugin for Flutter.

    import AVFoundation

    class FastSoundPlayer {
        private var audioPlayers: [SoundPlayer?] = []
        private var sounds: [String: Sound] = [:]
        private var engine = AVAudioEngine()
        let session = AVAudioSession.sharedInstance()

        init() {
            do {
                try session.setCategory(AVAudioSession.Category.playback,
                                        mode: AVAudioSession.Mode.default,
                                        options: [AVAudioSession.CategoryOptions.mixWithOthers])
                try session.setActive(true)
                createSoundPlayers(count: 20)
                try engine.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        // Selector method to handle applicationDidBecomeActiveNotification
        func applicationDidBecomeActive() {
            // Reinitialize AVAudioEngine and reattach all nodes
            do {
                engine.reset()
                objc_sync_enter(audioPlayers)
                audioPlayers.removeAll()
                createSoundPlayers(count: 20)
                objc_sync_exit(audioPlayers)
                try engine.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        func createSoundPlayers(count: Int) {
            for _ in 0..<count {
                let player = SoundPlayer()
                engine.attach(player.player)
                engine.connect(player.player, to: engine.mainMixerNode, format: nil)
                audioPlayers.append(player)
            }
        }

        func load(sound: Data, name: String) {
            let sound = Sound(soundData: sound)
            sounds[name] = sound
        }

        func play(name: String) {
            if !engine.isRunning {
                applicationDidBecomeActive()
            }
            guard let sound = sounds[name] else {
                print("Sound not found")
                return
            }
            if let player = getAvailablePlayer() {
                player.play(sound: sound)
            }
        }

        func getAvailablePlayer() -> SoundPlayer? {
            for player in audioPlayers {
                if !player!.isPlaying {
                    return player
                }
            }
            return nil
        }
    }

    class SoundPlayer {
        let player = AVAudioPlayerNode()
        var isPlaying = false

        init() {
            player.volume = 1.0
        }

        func play(sound: Sound) {
            player.scheduleBuffer(sound.sound!, at: nil, options: .interrupts,
                                  completionCallbackType: .dataPlayedBack) { _ in
                self.complete()
            }
            if (player.engine != nil && player.engine!.isRunning) {
                player.play()
                isPlaying = true
            }
        }

        func complete() {
            isPlaying = false
        }
    }

    class Sound {
        var sound: AVAudioPCMBuffer?

        init(soundData: Data) {
            do {
                let temporaryURL = FileManager.default.temporaryDirectory
                    .appendingPathComponent("tempSound.wav")
                try soundData.write(to: temporaryURL)
                // Create AVAudioFile from the temporary file URL
                let audioFile = try AVAudioFile(forReading: temporaryURL)
                // Define the format for the PCM buffer (44100 Hz, stereo)
                let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                           sampleRate: 44100,
                                           channels: 2,
                                           interleaved: false)
                // Create AVAudioPCMBuffer
                guard let pcmBuffer = AVAudioPCMBuffer(
                    pcmFormat: format!,
                    frameCapacity: AVAudioFrameCount(audioFile.length)) else {
                    // Failed to create PCM buffer
                    self.sound = nil
                    return
                }
                // Read audio file into PCM buffer
                try audioFile.read(into: pcmBuffer)
                // Assign the created AVAudioPCMBuffer to the sound property
                self.sound = pcmBuffer
            } catch {
                print("Error loading sound file: \(error.localizedDescription)")
                self.sound = nil
            }
        }
    }

Thanks!
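A common fix for tempo drift at high rates is to schedule every click ahead of time at an exact sample position via scheduleBuffer(_:at:), instead of scheduling each one "now" as it becomes due. A sketch of the idea, assuming `player` is an attached, connected AVAudioPlayerNode and `click` a loaded tick buffer:

    import AVFoundation

    // Sketch: schedule clicks at exact sample times instead of from a timer.
    func scheduleClicks(player: AVAudioPlayerNode, click: AVAudioPCMBuffer,
                        bpm: Double, count: Int) {
        let sampleRate = click.format.sampleRate
        let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)
        for beat in 0..<count {
            // Sample time is relative to the player node's own timeline,
            // which starts at 0 when play() is called.
            let when = AVAudioTime(sampleTime: AVAudioFramePosition(beat) * framesPerBeat,
                                   atRate: sampleRate)
            player.scheduleBuffer(click, at: when, options: [], completionHandler: nil)
        }
        player.play()
    }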
1 reply · 0 boosts · 183 views · Mar ’25

Indicate Packet Loss With AVAudioConverter for OPUS Decoding
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability), this can be heard as crackling or an abrupt stop in the decoded audio. OPUS can mitigate this with packet-loss concealment: in the C library you indicate packet loss by passing a null pointer to

    int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len,
                          float *pcm, int frame_size, int decode_fec)

(see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2). However, with AVAudioConverter in Swift I'm constructing an AVAudioCompressedBuffer like so:

    let compressedBuffer = AVAudioCompressedBuffer(
        format: VoiceEncoder.Constants.networkFormat,
        packetCapacity: 1,
        maximumPacketSize: data.count
    )
    compressedBuffer.byteLength = UInt32(data.count)
    compressedBuffer.packetCount = 1
    compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
    data.copyBytes(
        to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
        count: data.count
    )

where data: Data contains the raw OPUS frame to be decoded. How can I indicate data loss in this context and make the AVAudioConverter output PCM data whenever no more input data is available?

More context: I'm specifying the audio format like this:

    static let frameSize: UInt32 = 960
    static let sampleRate: Float64 = 48000.0
    static var networkFormatStreamDescription = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatOpus,
        mFormatFlags: 0,
        mBytesPerPacket: 0,
        mFramesPerPacket: frameSize,
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0
    )
    static let networkFormat = AVAudioFormat(streamDescription: &networkFormatStreamDescription)!

I've tried 1) setting byteLength and packetCount to zero, and 2) returning nil while setting .haveData in the AVAudioConverterInputBlock I'm using, with no success.
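For reference, the two attempts map onto the AVAudioConverterInputBlock like the sketch below; the block shape and status values are standard AVAudioConverter API, while the packet-dequeue function is hypothetical. Whether an empty buffer can trigger Opus packet-loss concealment remains the open question:

    import AVFoundation

    // Hypothetical dequeue for the jitter buffer; returns nil when the stream stalls.
    func nextOpusPacket() -> AVAudioCompressedBuffer? { nil }

    // Sketch: input block shape used with convert(to:error:withInputFrom:).
    // Attempt 1 returned an empty compressed buffer with .haveData;
    // attempt 2 returned nil, which signals "no data" rather than "packet lost".
    let inputBlock: AVAudioConverterInputBlock = { packetCount, statusPtr in
        if let packet = nextOpusPacket() {
            statusPtr.pointee = .haveData
            return packet
        } else {
            statusPtr.pointee = .noDataNow // stream stalled; no PLC is triggered
            return nil
        }
    }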
1 reply · 1 boost · 899 views · May ’25

MusicKit playbackTime Accuracy
Hello, has anyone else experienced variations in the accuracy of the playbackTime value? After a few seconds of playback, the reported time adjusts by a fraction of a second, making it difficult to calculate the actual playback position of the audio. This can be recreated by playing a song in MusicKit, recording the start time, playing for at least 10-20 seconds, and then comparing the playbackTime value to one calculated from that start time. In my experience this jump occurs after about 10 seconds of playback. Any help would be appreciated. Thanks!
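To make the jump measurable, one can capture a monotonic timestamp at playback start and periodically compare elapsed time with the reported position. A sketch, assuming the playbackTime property on ApplicationMusicPlayer.shared; the function name and logging are illustrative:

    import Foundation
    import MusicKit

    // Sketch: log drift between wall-clock elapsed time and reported playbackTime.
    @MainActor
    func logPlaybackDrift(since start: ContinuousClock.Instant) {
        let elapsed = ContinuousClock().now - start
        let elapsedSeconds = Double(elapsed.components.seconds)
            + Double(elapsed.components.attoseconds) / 1e18
        let reported = ApplicationMusicPlayer.shared.playbackTime
        print(String(format: "elapsed %.2fs reported %.2fs drift %+.2fs",
                     elapsedSeconds, reported, elapsedSeconds - reported))
    }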
1 reply · 0 boosts · 120 views · May ’25

sysEx struct in CoreMIDI/MIDIMessages.h
The sysEx struct in the MIDIUniversalMessage struct has a channel member, but the System Exclusive (7-Bit) message doesn't have a channel field. Conversely, the System Exclusive (7-Bit) message has a "# of bytes" field, but the sysEx struct doesn't have a nrOfBytes, byteCount, or bytesUsed member. It looks like the channel member of the sysEx struct actually contains the number of used bytes. Is this a mistake in the header, or did I misunderstand something?
1 reply · 0 boosts · 567 views · Dec ’25

Start and stop recording Voice Memos with Siri
Using iOS 26.2 with AirPods 4:

- Long press the stem to launch Siri, say "Record Voice Memo" → recording starts.
- While the recording is in progress, long press the stem to launch Siri again → nothing happens. To stop the recording I need to use the phone.

Is this intended behaviour? I would like to be able to stop the recording with Siri. I am able to launch Siri from the phone while recording, but the point is to keep the phone in my pocket and start/stop recordings only via AirPods.
1 reply · 0 boosts · 164 views · Dec ’25

AudioUnit (AUv2) Session Compatibility After Adding MIDI Support
Hi there! We have a suite of AudioUnit v2 plugins that have shipped for some time as aufx plugins, and we are looking into MIDI-related platform upgrades. We need a way to update these plugins to request MIDI from Logic (and other AU hosts) without changing our AU type and subtype, so we don't break existing sessions. Any ideas on how we can do this?
1 reply · 0 boosts · 110 views · Mar ’25