Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.


Posts under Audio subtopic


Inquiry about Potential Core Audio Improvements
Hi everyone, I wanted to bring up a question about Core Audio and its potential for future updates or improvements, specifically regarding latency optimization. As someone who relies on Core Audio for real-time audio processing, any enhancements in this area would be incredibly beneficial for professionals in the industry. Does anyone know if Apple has shared any plans or updates regarding Core Audio’s performance, particularly for low-latency applications? I’d appreciate any insights or advice from the community! Thanks so much! Best, Michael
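In the meantime, the main user-space latency lever Core Audio exposes on iOS is the IO buffer duration on the audio session. A minimal sketch, with illustrative values (the system may grant a larger buffer than requested, so read back the actual setting):

```swift
import AVFoundation

// Request a small IO buffer for lower round-trip latency.
// 5 ms at 48 kHz is roughly a 256-frame buffer.
func configureLowLatencySession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement)
    try session.setPreferredIOBufferDuration(0.005) // a request, not a guarantee
    try session.setPreferredSampleRate(48_000)
    try session.setActive(true)
    print("Granted IO buffer: \(session.ioBufferDuration) s")
    print("Granted sample rate: \(session.sampleRate) Hz")
}
```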
Replies: 1 · Boosts: 0 · Views: 502 · Activity: Jan ’25
[MusicKit] Check for availability of songs
Songs can be unavailable (greyed out) in Apple Music. How can I check whether a song is unavailable via the MusicKit framework? Obviously the playback will fail with MPMusicPlayerControllerErrorDomain Code=6 "Failed to prepare to play", but how can I know that in advance? I need to check the availability of hundreds of albums, so initiating a playback for each of them is not an option.

Things I have tried:
- Checking if the release date property is set to a future date. This filters out all future releases but doesn't solve the problem for already released songs.
- Checking if the duration is 0. This does not work, since the duration of unavailable songs is not necessarily 0.
- Initiating a playback and checking for the "Failed to prepare to play" error. This is not suitable for a huge number of albums.

I couldn't find a solution yet, but somehow other third-party apps are able to ignore or hide these albums. I believe the Apple Music app only displays albums where at least one song is available. I am using this function to fetch all albums of an artist:

```swift
private func fetchAlbumsFor(_ artist: Artist) async throws -> [Album] {
    let artistWithAlbums = try await artist.with(.albums)
    var allAlbums = [Album]()
    guard var currentBatch = artistWithAlbums.albums else { return [] }
    allAlbums.append(contentsOf: currentBatch)
    while currentBatch.hasNextBatch {
        if let nextBatch = try await currentBatch.nextBatch() {
            currentBatch = nextBatch
            allAlbums.append(contentsOf: nextBatch)
        } else {
            break
        }
    }
    return allAlbums
}
```

Here is an example album where I am unable to detect its unavailability (at least in Germany): https://music.apple.com/de/album/die-haferhorde-immer-den-n%C3%BCstern-nach-h%C3%B6rspiel-zu-band-3/1755774804

Furthermore, I was unable to navigate to this album via the Apple Music app directly. Thanks for any help.

Edit: Apparently this album is not included in an Apple Music subscription but can be bought separately. The question remains: how can I check that?
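One heuristic worth trying (an observed behavior, not a documented guarantee): items that cannot be streamed with a subscription, such as purchase-only albums, tend to come back from MusicKit without play parameters. A sketch that flags albums with no streamable track:

```swift
import MusicKit

// Heuristic sketch: treat a song as streamable only if MusicKit
// returned playParameters for it; purchase-only or region-locked
// tracks often lack them (assumption based on observed behavior).
func hasStreamableTrack(_ album: Album) async throws -> Bool {
    let detailed = try await album.with(.tracks)
    guard let tracks = detailed.tracks else { return false }
    return tracks.contains { track in
        if case .song(let song) = track {
            return song.playParameters != nil
        }
        return false
    }
}
```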
Replies: 1 · Boosts: 0 · Views: 571 · Activity: Feb ’25
Ducking MusicKit output when playing another sound
I am developing an app that uses MusicKit to play music, and I then need to have spoken words played to the user while ducking the audio coming from MusicKit (the application music player). The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then playing it back using AVAudioSession/AVAudioPlayer. Sample code below. The problem I am having is that .duckOthers is not ducking the Application Music Player output. Is this a bug, or am I doing this wrong?

```swift
// Configure the audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)

// Request a small IO buffer (note: this affects latency, not the ducking level)
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

// Create and configure the audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()

// Keep the speech at its natural rate and centered
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
```
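Not a confirmed fix, but one pattern worth checking: ducking engages when the session is activated and only releases when it is deactivated, so the speech playback should explicitly bracket the session. A minimal sketch:

```swift
import AVFoundation

// Sketch: activate the session (other audio ducks), play the speech,
// then deactivate with .notifyOthersOnDeactivation (other audio returns).
final class SpeechDucker: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer? // keep a strong reference while speaking

    func playSpeech(_ audioData: Data) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
        try session.setActive(true) // ducking engages here
        let player = try AVAudioPlayer(data: audioData)
        player.delegate = self
        self.player = player
        player.play()
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        // Deactivating the session is what releases the duck.
        try? AVAudioSession.sharedInstance()
            .setActive(false, options: .notifyOthersOnDeactivation)
        self.player = nil
    }
}
```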
Replies: 0 · Boosts: 0 · Views: 48 · Activity: Apr ’25
Issue in recording using AVFoundation
Hi, in my project I am using AVFoundation to record audio. We use the following AVAudioMixerNode method to receive the recorded audio buffers:

```swift
func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)
```

It works perfectly fine, but in the production environment a small percentage of users hit an issue where, after recording a few buffers, the tap stops delivering audio even though the audio engine was never stopped. Can anyone help with why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving that notification, but when this issue happens I don't see that log. Also, is there any callback for when the engine stops?
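On the last question: there is no delegate callback when the engine stops, but AVFoundation posts notifications you can observe, and an engine that silently stalls usually coincides with a configuration change or a session interruption. A sketch, assuming `engine` is the running AVAudioEngine:

```swift
import AVFoundation

// Sketch: detect the engine silently stopping by observing
// configuration changes and session interruptions.
func observeEngineStalls(_ engine: AVAudioEngine) {
    NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine, queue: .main
    ) { _ in
        if !engine.isRunning {
            // Re-install taps if needed, then restart.
            try? engine.start()
        }
    }
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(), queue: .main
    ) { note in
        // Inspect the interruption type/options to decide whether to resume.
        print("Audio session interruption: \(note.userInfo ?? [:])")
    }
}
```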
Replies: 0 · Boosts: 0 · Views: 113 · Activity: Apr ’25
About the built-in instrument sound of Apple devices
Does anyone know how to play the sound of a specific instrument when the user taps a button on the screen of an iPhone or iPad? I am in the middle of creating a music-learning app, and I'm thinking of assigning single notes or chords to the button-like frames on the keyboard and fingerboard on the screen. Can this be achieved with SwiftUI alone? I remember that General MIDI Level 1 defined a set of instrument sounds, but I don't know how to implement the same functionality on the current OS. Please lend me your wisdom.
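SwiftUI itself has no instrument synthesis, but AVFoundation's AVAudioUnitSampler can play General MIDI notes and chords from a button action. A minimal sketch, assuming you bundle a SoundFont file (the "GeneralMIDI.sf2" name here is a placeholder):

```swift
import AVFoundation
import AudioToolbox

// Sketch: play General MIDI notes with AVAudioUnitSampler.
final class InstrumentPlayer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() throws {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        // Placeholder resource name; bundle your own General MIDI SoundFont.
        if let bank = Bundle.main.url(forResource: "GeneralMIDI", withExtension: "sf2") {
            try sampler.loadSoundBankInstrument(
                at: bank,
                program: 0, // 0 = Acoustic Grand Piano in General MIDI
                bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        }
        try engine.start()
    }

    // Call from a SwiftUI Button action; 60 = middle C.
    // For a chord, call this once per note.
    func play(note: UInt8, velocity: UInt8 = 80) {
        sampler.startNote(note, withVelocity: velocity, onChannel: 0)
    }

    func stop(note: UInt8) {
        sampler.stopNote(note, onChannel: 0)
    }
}
```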
Replies: 0 · Boosts: 0 · Views: 56 · Activity: May ’25
Unable to match music with ShazamKit for Android
Hello, I can successfully match music using ShazamKit on Apple platforms with SwiftUI (a simple app that lets the user load an audio file and extracts the corresponding match), but I am unable to match music using ShazamKit on Android. I am trying to build the same simple app, but I cannot match music: I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the Shazam part of the Android Kotlin code is in this method:

```kotlin
suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        // Note: a new generator, catalog, and session are created for every
        // chunk, so each match attempt only ever sees a single buffer.
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}
```

I noticed that changing the Locale in the catalog creation gives a different result: I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
Replies: 0 · Boosts: 0 · Views: 109 · Activity: May ’25
Music Kit initialisation, Uncaught TypeError: Cannot read properties of undefined (reading 'node')
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation has been sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time?). My question is: what is wrong with my implementation? Removing the async keyword ensures it loads every time, but playing and queuing music no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly specified that this isn't possible. I have the following boilerplate:

```tsx
export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script src="https://js-cdn.music.apple.com/musickit/v3/musickit.js" async />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))
```

When I first load my app, I'll encounter:

```
musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
    at musickit.js:13:10194
    at musickit.js:13:140
    at musickit.js:13:209
```

The intermittence signals an issue relating to the async keyword. An expansion on this issue can be found here.
Replies: 0 · Boosts: 0 · Views: 553 · Activity: Dec ’24
iOS Audio Routing - Bluetooth Output + Built-in Microphone Input
Hello! I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.

Desired behavior:
- Play audio through the Bluetooth headset (AirPods)
- Record unprocessed environmental audio from the iPhone's built-in microphone

Actual behavior:
- When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs)
- However, the actual audio data received is clearly still coming from the AirPods microphone
- The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds

Environment details:
- Device: iPhone 12 Pro Max
- iOS version: 18.4.1
- Hardware: AirPods
- Audio framework: AVAudioEngine (also tried AudioQueue)

Code attempted — I've tried multiple approaches to force the correct routing:

```swift
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()

    // Configure to allow Bluetooth output but use the built-in mic
    try? session.setCategory(.playAndRecord, options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try? session.setActive(true)

    // Explicitly select the built-in microphone
    if let inputs = session.availableInputs,
       let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtInMic)
        print("Selected input: \(builtInMic.portName)")
    }

    // Log the current route
    let route = session.currentRoute
    print("Current input: \(route.inputs.first?.portName ?? "None")")

    // Configure the audio engine with the native format
    let inputNode = audioEngine.inputNode
    let nativeFormat = inputNode.inputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
        // Process audio buffer. Despite the route showing "Built-in Microphone",
        // the audio appears to come from the AirPods with voice isolation applied — welp!
    }
    try? audioEngine.start()
}
```

I've also tried various combinations of:
- Different audio session modes (.default, .measurement, .voiceChat)
- Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP)
- Calling session.setPreferredInput() both before and after activation

Diagnostic observations when AirPods are connected:
- AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput()
- The actual audio data received shows clear signs of AirPods' voice isolation processing
- Background/environmental sounds are actively filtered out
- When recording test audio played near the phone (not through the app), the recording is nearly silent; only headset voice comes through

Questions:
- Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output?
- Are there any lower-level configurations that might resolve this issue?

Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
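One diagnostic worth adding (a sketch, not a confirmed fix): observe route changes so you can see, with a reason code, whether the system silently switches the input back to the AirPods' hands-free microphone after your setPreferredInput call:

```swift
import AVFoundation

// Sketch: log every route change, including the reason, so an input
// flip back to the Bluetooth microphone becomes visible.
func observeRouteChanges() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil, queue: .main
    ) { note in
        let session = AVAudioSession.sharedInstance()
        let rawReason = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
        let reason = rawReason.flatMap(AVAudioSession.RouteChangeReason.init(rawValue:))
        let input = session.currentRoute.inputs.first
        print("Route changed (\(String(describing: reason))): " +
              "input = \(input?.portName ?? "none") [\(input?.portType.rawValue ?? "-")]")
    }
}
```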
Replies: 0 · Boosts: 0 · Views: 166 · Activity: May ’25
AVSpeechSynthesizer & Bluetooth Issues
Hello, I have a CarPlay navigation app and use AVSpeechSynthesizer to speak directions to the user. Everything works great in the CarPlay simulator as well as when plugged into my GMC truck. However, I found out yesterday that for one of my users with a Ford truck, the audio would cut in and out. After much troubleshooting, I was able to replicate this on my own truck when using Bluetooth to connect to CarPlay; my user was also using Bluetooth. Has anyone else experienced this? Is there a fix for the problem?

```swift
import SwiftUI
import AVFoundation

class TextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
    private var speechSynthesizer = AVSpeechSynthesizer()
    static let shared = TextToSpeechService()

    override init() {
        super.init()
        speechSynthesizer.delegate = self
    }

    func configureAudioSession() {
        speechSynthesizer.delegate = self
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .allowBluetooth])
        } catch {
            print("Failed to set audio session category: \(error.localizedDescription)")
        }
    }

    func speak(_ text: String) {
        Task(priority: .high) {
            let speechUtterance = AVSpeechUtterance(string: text)
            speechUtterance.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
            try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
            speechSynthesizer.speak(speechUtterance)
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        Task {
            stopSpeech()
            try AVAudioSession.sharedInstance().setActive(false)
        }
    }

    func stopSpeech() {
        speechSynthesizer.stopSpeaking(at: .immediate)
    }
}
```
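A guess worth testing, not a confirmed fix: .allowBluetooth opts the session into the hands-free profile (HFP), and some head units audibly renegotiate that link around each utterance, while CarPlay voice prompts are normally routed without it. A sketch of the alternative configuration:

```swift
import AVFoundation

// Sketch: drop .allowBluetooth so the session does not request the
// hands-free profile; duck other audio around each prompt instead
// of mixing at full volume.
func configurePromptSession() throws {
    try AVAudioSession.sharedInstance().setCategory(
        .playback,
        mode: .voicePrompt,
        options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
}
```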
Replies: 0 · Boosts: 0 · Views: 427 · Activity: Feb ’25
SystemAudio Capture API Fails with OSStatus error 1852797029 (kAudioCodecIllegalOperationError)
Issue description: I'm implementing a system-audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and the aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 ("The operation couldn’t be completed. (OSStatus error 1852797029.)"). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.

Questions:
- Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
- Are there specific conditions or system states that might trigger this error sporadically?
- Are there any known workarounds for handling this intermittent failure case?

Any insights or guidance would be greatly appreciated.
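For reference, 1852797029 is 0x6E6F7065, the four-character code 'nope'. A small sketch for decoding any OSStatus this way:

```swift
import Foundation

// Decode an OSStatus as a four-character code when all four bytes
// are printable ASCII (e.g. 1852797029 -> "nope").
func fourCC(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((n >> $0) & 0xFF) }
    guard bytes.allSatisfy({ (0x20...0x7E).contains($0) }) else { return "\(status)" }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}

// fourCC(1852797029) == "nope"
```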
Replies: 0 · Boosts: 0 · Views: 133 · Activity: May ’25
AVAudioPlayer/SKAudioNode audio no longer plays after interruption
Hi 👋! We have a SpriteKit-based app where we play AVAudio sounds in three different ways:

1. Effects (incl. UI sounds) with AVAudioPlayer.
2. Long looping tracks with AVAudioPlayer.
3. Short animation effects on the timeline of SpriteKit's SKScene files (effectively SKAudioNode nodes).

We've found that when you exit the app or otherwise interrupt audio playback, future audio plays often fail. For example, there's a WebKit-based video trailer inside the app; if you play it, our looping background music track (2) will stop playing and won't resume as you close the trailer (return from WebKit). This is probably due to us not manually restarting the track (so it may well be easily fixed). Periodically played AVAudioPlayer audio (1) is not affected. However, the more concerning thing is that the audio tracks on SKScene file timelines (3) will no longer play.

My hypothesis is that AVAudioEngine gets interrupted and needs to be restarted for those AVAudioNode elements to regain functionality. Thing is, we don't deal with AVAudioEngine at all currently in the app, meaning we never initiate it to begin with. Obviously things return to normal when you remove the app from short-term memory and restart it. However, it seems many of our users aren't doing this, and they often report audio failing, presumably due to some interruption in the past without the app ever being cleared from memory.

Any idea why timeline-run SKAudioNodes would fail like this? Should the app react to app backgrounding/foregrounding regarding audio? Any help would be very much appreciated ✌️!
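A sketch of one hypothesis to test (not a confirmed fix): SKScene exposes the engine behind its SKAudioNodes as scene.audioEngine, so you can observe session interruptions and restart that engine when they end:

```swift
import AVFoundation
import SpriteKit

// Sketch: restart the scene's audio engine after an interruption ends;
// SKAudioNode playback depends on this engine running.
func observeInterruptions(for scene: SKScene) {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(), queue: .main
    ) { note in
        guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: raw),
              type == .ended else { return }
        if !scene.audioEngine.isRunning {
            try? scene.audioEngine.start()
        }
    }
}
```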
Replies: 0 · Boosts: 1 · Views: 107 · Activity: May ’25
Best Approach for Reliable Background Audio Playback with Audio Ducking on Command from Server
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.

Current attempt and observations: I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer. This works consistently when the app is active, but in the background it only works about 60% of the time. In the cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio. Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time in my limited testing, even in the background. The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.

Best approach to achieve this? I'd like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:

- Ensuring the app remains active in the background: are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
- Alternative triggering mechanisms: would something like VoIP, Push to Talk, or another background service be better suited for this use case?
- Built-in iOS speech synthesis (AVSpeechSynthesizer): if playing external audio is unreliable, would generating speech dynamically from text be a more robust approach (see the sketch after this list)?
- Streaming audio instead of sending a URL: could continuous streaming from the server keep the app active and allow playback at the right moment?

I want to ensure the solution is reliable and works every time it is needed. Any recommendations on the best approach would be greatly appreciated. Thank you for your time and guidance.
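For the speech-synthesis path, a minimal sketch of speaking server-sent text with ducking (the session handling is illustrative, and it does not by itself solve the background-delivery problem):

```swift
import AVFoundation

// Sketch: speak server-provided text, ducking other audio while the
// utterance plays and releasing the duck when it finishes.
final class SpokenMessagePlayer: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
        try session.setActive(true)
        synthesizer.speak(AVSpeechUtterance(string: text))
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        try? AVAudioSession.sharedInstance()
            .setActive(false, options: .notifyOthersOnDeactivation)
    }
}
```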
Replies: 0 · Boosts: 1 · Views: 364 · Activity: Feb ’25
App Randomly Crashes During Continuous Sound Playback Using AVAudioPlayer
Environment:
- Device: iPad (10th generation)
- OS: iOS 18.3.2

We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently, roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:

```swift
var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else { return }
    do {
        // A new player is created (and the previous one discarded) on every tap
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}
```

The issue is that under high-frequency tapping (especially around 0.1-0.15 s intervals), the app occasionally crashes. The crash does not occur every time; it happens randomly, sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping. Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely, while shorter delays (e.g., 0.15 s, 0.18 s) still result in occasional crashes.

My questions are:
- Is this expected behavior from AVAudioPlayer or AVAudioSession?
- Could this be a known issue or a limitation in AVFoundation?
- Is there any documentation or guidance on handling frequent sound playback safely?

Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
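One pattern worth testing (a sketch, not a confirmed diagnosis of the crash): create and prepare one player per sound up front, then rewind and replay it on each tap, so no AVAudioPlayer is deallocated while it may still be rendering:

```swift
import AVFoundation

// Sketch: reuse a single prepared player instead of creating a new
// AVAudioPlayer on every tap.
final class TapSound {
    private let player: AVAudioPlayer

    init?(resource: String, type: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type),
              let player = try? AVAudioPlayer(contentsOf: url) else { return nil }
        self.player = player
        player.prepareToPlay()
    }

    func play() {
        player.currentTime = 0 // rewind; restarts even if still playing
        player.play()
    }
}
```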
Replies: 0 · Boosts: 0 · Views: 168 · Activity: May ’25
tvOS AVQueuePlayer Now Playing Info in Control Center?
I have a music app I'm developing and I'm having a weird issue where I can see Now Playing info on every platform other than tvOS. As far as I can tell, I have correctly configured the MPNowPlayingInfoCenter:

```swift
MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing
```

Are there any extra requirements to get my app's Now Playing info showing in Control Center on tvOS? Another strange issue that might be related: I can use the Apple TV remote to pause audio but not to resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
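The pause-but-not-resume symptom usually points at remote commands: the system generally needs play/pause handlers registered with MPRemoteCommandCenter before it treats the app as the now-playing app. A sketch (the `player` parameter stands in for your own AVQueuePlayer):

```swift
import AVFoundation
import MediaPlayer

// Sketch: register remote command handlers so the system knows the
// app responds to play/pause from the Siri Remote and Control Center.
func registerRemoteCommands(for player: AVQueuePlayer) {
    let center = MPRemoteCommandCenter.shared()

    center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
    center.togglePlayPauseCommand.addTarget { _ in
        if player.rate == 0 { player.play() } else { player.pause() }
        return .success
    }
}
```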
Replies: 0 · Boosts: 0 · Views: 98 · Activity: Jun ’25
ShazamKit Background Operation Broken on iOS 18 - SHManagedSession Stops Working After ~20 Seconds
Hi Apple engineers and fellow developers, I'm experiencing a critical regression with ShazamKit's background operation on iOS 18. ShazamKit's SHManagedSession stops identifying songs in the background after approximately 20 seconds on iOS 18, while the exact same code works perfectly on iOS 17.

The behavior is consistent: the app works perfectly in the foreground, but when the app is backgrounded or the device is locked, it initially works for about 20 seconds and then stops identifying new songs. The microphone indicator remains active, suggesting audio access is maintained, but ShazamKit doesn't deliver identified songs in the background until you open the app again. Detection immediately resumes when bringing the app to the foreground.

My technical setup uses SHManagedSession for continuous matching, with background modes properly configured in Info.plist (including the audio mode) and Background App Refresh enabled. I've tested this on physical devices running iOS 18.0 through 18.5 with the same results across all versions. The exact same code running on iOS 17 devices works flawlessly in the background.

To reproduce: initialize SHManagedSession and start matching, begin song identification in the foreground, background the app or lock the device, and play different songs. They are initially detected for about 20 seconds; after the timeout period, new songs are no longer identified until you bring the app to the foreground.

This regression has impacted my production app, as users who rely on continuous background music identification are experiencing a broken feature. I submitted this as Feedback ID FB15255903 last September with no solution so far. I've created a minimal demo project that reproduces the issue: https://github.com/tfmart/ShazamKitBackground

Has anyone else experienced this ShazamKit background regression on iOS 18? Are there any known workarounds or alternative approaches? Given how long this issue has persisted, could we please get acknowledgment of the regression, an expected timeline for a fix, or any recommended workarounds? Testing environment: Xcode 16.0+ on iOS 18.0-18.5 across multiple physical device models. Any guidance would be greatly appreciated.
Replies: 0 · Boosts: 0 · Views: 136 · Activity: Jun ’25