Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.


Posts under Audio subtopic

Each post below is followed by its reply count, boosts, views, and last activity.

SpeechTranscriber extremely slow (14+ seconds) despite proper locale allocation and optimization
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to the first result, making it unusable for real-time applications.

Environment: iOS 26.0 Beta, Xcode Beta 5, iPhone 16 Pro, Apple's official SwiftTranscriptionSampleApp from WWDC 2025.
Configuration tested: locales en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES, with all optimizations applied (preheating, high priority, model retention).

I started testing in my own app to replace the SFSpeech API and add speech detection, but after long fights with the documentation (this part is quite terrible, TBH) I tested the sample (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results. I added some logs to measure the exact time:

🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio!
🎙️ [20:30:41.532] Listening for transcription results...
🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false)

Questions:
Is this expected performance for the iOS 26 beta? The old SFSpeech API is far faster.
Are there additional optimization steps for SpeechTranscriber?
Should we expect significant performance improvements in later betas?
1 reply · 0 boosts · 186 views · Aug ’25
AVSpeechSynthesizer pulls words out of thin air.
Hi, I'm working on a project that uses AVSpeechSynthesizer and AVSpeechUtterance. I discovered by chance that AVSpeechSynthesizer automatically expands some words instead of just speaking the specified text. These are abbreviations for days of the week or months, but not all of them. I don't want any of them expanded automatically, only the specified text spoken. The expansion happens across languages. I have written a short example program for demonstration purposes.

import SwiftUI
import AVFoundation
import Foundation

let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()

struct ContentView: View {
    var body: some View {
        VStack {
            Button { utter("mon") } label: { Text("mon") }
                .buttonStyle(.borderedProminent)
            Button { utter("tue") } label: { Text("tue") }
                .buttonStyle(.borderedProminent)
            Button { utter("thu") } label: { Text("thu") }
                .buttonStyle(.borderedProminent)
            Button { utter("feb") } label: { Text("feb") }
                .buttonStyle(.borderedProminent)
            Button { utter("feb", lang: "de-DE") } label: { Text("feb DE") }
                .buttonStyle(.borderedProminent)
            Button { utter("wed") } label: { Text("wed") }
                .buttonStyle(.borderedProminent)
        }
        .padding()
    }

    private func utter(_ text: String, lang: String = "en-US") {
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice(language: lang)
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}

#Preview {
    ContentView()
}

Thank you
Christian
1 reply · 0 boosts · 198 views · Nov ’25
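A possible direction for the AVSpeechSynthesizer question above: there is no public switch for the synthesizer's text normalization, but one workaround worth trying is to attach an explicit IPA pronunciation to the token via an attributed string, so the synthesizer speaks the supplied phonemes rather than expanding the abbreviation. Whether this suppresses the expansion for every voice is an assumption, and the IPA value below is only illustrative; treat it as a sketch to experiment with, not a confirmed fix.

import AVFoundation

let synthesizer = AVSpeechSynthesizer()

// Sketch: give the token an explicit IPA pronunciation so the synthesizer
// speaks the supplied phonemes instead of expanding "mon" to "Monday".
// The IPA string passed in is an illustrative guess, not a verified value.
func utterLiteral(_ text: String, ipa: String, lang: String = "en-US") {
    let attributed = NSMutableAttributedString(string: text)
    attributed.addAttribute(
        NSAttributedString.Key(rawValue: AVSpeechSynthesisIPANotationAttribute),
        value: ipa,
        range: NSRange(location: 0, length: attributed.length)
    )
    let utterance = AVSpeechUtterance(attributedString: attributed)
    utterance.voice = AVSpeechSynthesisVoice(language: lang)
    synthesizer.speak(utterance)
}

// Usage: ask for "mon" to be spoken as a single syllable.
utterLiteral("mon", ipa: "mɑn")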
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data, ideally a waveform / spectrogram or beat grid, while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames, or at least high-resolution analysis data, from Apple Music under a special entitlement.
My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for, similar to the “DJ with Apple Music” initiative, to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see the standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
1 reply · 2 boosts · 161 views · Oct ’25
How to find `AudioHardwareControl` direction?
I'm working with the modern Core Audio API introduced in macOS Sequoia. I have an AudioHardwareDevice which has several controls of type AudioHardwareControl. I figured out that to filter only the volume controls I can use the condition classID == kAudioVolumeControlClassID. Some devices have volume controls for both input and output. How can I determine the direction of a control? Streams (AudioHardwareStream objects) have a direction, but I didn't find a way to map controls to streams. There are the kAudioObjectPropertyScopeInput and kAudioObjectPropertyScopeOutput property scopes, but no matter what I tried, controls always return false for any control.hasProperty(address: whatever). Any other ideas?
1 reply · 0 boosts · 504 views · Jan ’25
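One avenue for the AudioHardwareControl question above: in the classic HAL API every control object answers kAudioControlPropertyScope (and kAudioControlPropertyElement), and the returned scope is kAudioObjectPropertyScopeInput or kAudioObjectPropertyScopeOutput. Assuming you can obtain the control's underlying AudioObjectID from the new Swift wrapper (I haven't confirmed the accessor name), a fallback to the C call looks roughly like this:

import CoreAudio

// Sketch: read a control's scope through the classic HAL property API.
// `controlID` is assumed to be the AudioObjectID behind the AudioHardwareControl.
func controlScope(of controlID: AudioObjectID) -> AudioObjectPropertyScope? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioControlPropertyScope,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var scope = AudioObjectPropertyScope(0)
    var size = UInt32(MemoryLayout<AudioObjectPropertyScope>.size)
    let status = AudioObjectGetPropertyData(controlID, &address, 0, nil, &size, &scope)
    return status == noErr ? scope : nil
}

// Usage: compare the result against the input/output scopes, e.g.
// controlScope(of: someControlID) == kAudioObjectPropertyScopeInput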
Application tones start when I get incoming call or message
I've got a problem with my app while testing it on my own phone. I'm using AudioKit to generate tones as part of the app. Everything seems to work fine: sounds start, stop, etc. They play when the app is closed and when the phone is locked, so background audio is working. However, I'm seeing an issue where, even when Stop has been pressed and the application exited, if I get a notification such as a text message, the base tone for the app starts to play. If I then open the app and check the Start/Stop button, it says Start, so that hasn't been activated. If I tap Start, a second tone starts; this one stops with the Stop button. The original tone that was set off by the incoming message, however, carries on playing until I go to the app switcher and swipe the application away. For the life of me, I can't figure out what's happening here.
1 reply · 0 boosts · 75 views · May ’25
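For the AudioKit question above, one thing worth ruling out is interruption handling: when a call or certain alerts interrupt the session, audio can be resumed on the interruption-ended event unless the app decides otherwise. A minimal sketch of observing AVAudioSession.interruptionNotification, where startTone(), stopTone() and userWantsAudio stand in for the app's own logic:

import AVFoundation

// Sketch: observe audio session interruptions and decide explicitly
// whether the tone should resume afterwards.
final class InterruptionObserver {
    var userWantsAudio = false
    private var token: NSObjectProtocol?

    func startTone() { /* start the AudioKit engine / oscillator here */ }
    func stopTone() { /* stop it here */ }

    init() {
        token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] note in
            guard let self = self,
                  let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
            switch type {
            case .began:
                self.stopTone()
            case .ended:
                // Resume only if the user actually pressed Start.
                if self.userWantsAudio { self.startTone() }
            @unknown default:
                break
            }
        }
    }

    deinit {
        if let token = token { NotificationCenter.default.removeObserver(token) }
    }
}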
Indicate Packet Loss With AVAudioConverter for OPUS Decoding
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability) this can be heard as crackling or an abrupt stop in the decoded audio. OPUS can mitigate this if packet loss is indicated by passing a null data pointer to the C library's int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len, float *pcm, int frame_size, int decode_fec), see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2. However, with AVAudioConverter in Swift I'm constructing an AVAudioCompressedBuffer like so:

let compressedBuffer = AVAudioCompressedBuffer(
    format: VoiceEncoder.Constants.networkFormat,
    packetCapacity: 1,
    maximumPacketSize: data.count
)
compressedBuffer.byteLength = UInt32(data.count)
compressedBuffer.packetCount = 1
compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
data.copyBytes(
    to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
    count: data.count
)

where data: Data contains the raw OPUS frame to be decoded. How can I indicate packet loss in this context and make the AVAudioConverter output PCM data whenever no more input data is available?

More context: I'm specifying the audio format like this:

static let frameSize: UInt32 = 960
static let sampleRate: Float64 = 48000.0
static var networkFormatStreamDescription = AudioStreamBasicDescription(
    mSampleRate: sampleRate,
    mFormatID: kAudioFormatOpus,
    mFormatFlags: 0,
    mBytesPerPacket: 0,
    mFramesPerPacket: frameSize,
    mBytesPerFrame: 0,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 0,
    mReserved: 0
)
static let networkFormat = AVAudioFormat(
    streamDescription: &networkFormatStreamDescription
)!

I've tried 1) setting byteLength and packetCount to zero and 2) returning nil while signaling .haveData in the AVAudioConverterInputBlock I'm using, with no success.
1 reply · 1 boost · 873 views · May ’25
AVAudioEngine. Select input device on macOS
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to monitor the audio in real time, but I can't work out how to select the input device: I only ever hear my built-in microphone. So far I've been using AVCaptureAudioPreviewOutput to hear the audio in real time, but I think it has a noticeable delay. On iOS this is easy with AVAudioEngine, but on macOS I'm stuck.
1 reply · 0 boosts · 434 views · Mar ’25
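On the macOS input-selection question above: AVAudioEngine has no Swift-level "use this input device" call, but the engine's inputNode wraps an AUHAL unit, and that unit's current device can be changed with kAudioOutputUnitProperty_CurrentDevice before the engine starts. A minimal sketch, assuming you already have the AudioDeviceID of the capture device you want:

import AVFoundation
import AudioToolbox
import CoreAudio

// Sketch: point AVAudioEngine's input node at a specific capture device.
// `deviceID` must be the AudioDeviceID of an input-capable device, obtained
// elsewhere (for example from the HAL device list).
func setInputDevice(_ deviceID: AudioDeviceID, on engine: AVAudioEngine) throws {
    guard let audioUnit = engine.inputNode.audioUnit else {
        throw NSError(domain: NSOSStatusErrorDomain, code: -1)
    }
    var device = deviceID
    let status = AudioUnitSetProperty(
        audioUnit,
        kAudioOutputUnitProperty_CurrentDevice,
        kAudioUnitScope_Global,
        0,
        &device,
        UInt32(MemoryLayout<AudioDeviceID>.size)
    )
    guard status == noErr else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
}

// Usage sketch: call this before installing a tap on engine.inputNode and
// starting the engine; the tap then receives audio from the chosen device.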
MusicKit playbackTime Accuracy
Hello, Has anyone else experienced variations in the accuracy of the playbackTime value? After a few seconds of playback, the reported time adjusts by a fraction of a second, making it difficult to calculate the actual playbackTime of the audio. This can be recreated by playing a song in MusicKit, recording the start time of the audio, playing for at least 10-20 seconds, and then comparing the playbackTime value to one calculated using the start time of the audio. In my experience this jump occurs after about 10 seconds of playback. Any help would be appreciated. Thanks!
1 reply · 0 boosts · 117 views · May ’25
iOS - record audio fails to record
Hi, I'm trying to record audio on the iPhone with AVAudioRecorder and Xcode 26.0.1. Maybe the problem is that I can't record audio in the Simulator, although there is an audio input menu there. In the plist I added 'Privacy - Microphone Usage Description' and I ask for permission before recording:

if await AVAudioApplication.requestRecordPermission() {
    print("permission granted")
    recordPermission = true
} else {
    print("permission denied")
}

Permission is granted.

let settings: [String : Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
recorder = try AVAudioRecorder(url: filename, settings: settings)
let prepared = recorder.prepareToRecord()
print("prepared started: \(prepared)")
let started = recorder.record()
print("recording started: \(started)")

started is always false, and I've tried many settings. Error messages:

AddInstanceForFactory: No factory registered for id <CFUUID 0x600000211480> F8BB1C28-BAE8-11D6-9C31-00039315CD46
AudioConverter.cpp:1052 Failed to create a new in process converter -> from 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame, with status -50
AudioQueueObject.cpp:1892 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
prepared started: true
AudioQueueObject.cpp:7581 ConvertInput: aq@0x10381be00: AudioConverterFillComplexBuffer returned -50, packetCount 5
recording started: false

All the examples I find look the same, but apparently there must be something different.
1 reply · 0 boosts · 229 views · Oct ’25
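For the AVAudioRecorder question above, a hedged reading of the -50 logs: the converter is being built from an empty source format (0 channels), which usually means there is no live input feeding the recorder, either because the session was never put into a record-capable category or because the Simulator has no audio input configured. A sketch that configures the session first; the 44,100 Hz rate is just a conservative choice, not a claimed requirement:

import AVFoundation

// Sketch: activate a record-capable session before creating the recorder,
// and use conventional AAC settings.
func makeRecorder(to url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.prepareToRecord()
    return recorder
}

// Usage: let recorder = try makeRecorder(to: filename); recorder.record()
// Testing on a physical device also rules out Simulator input quirks.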
Audio driver based on AudioDriverKit sometimes hangs after sleep
Dear Sirs, I’ve written a virtual audio driver based on AudioDriverKit, running as a dext in my macOS app. Sometimes, when waking up from a sleep state, the recording side of my driver extension seems to hang and I don’t see any calls to my io_operation callback. A recording app such as a DAW then appears to hang when trying to start a recording. This doesn’t happen after short sleep states or after a completely fresh start of my MacBook. I already opened a case in Feedback Assistant on 5th of May (FB17503622), which also includes a sysdiagnose and a ktrace, but I haven't received any feedback so far. Meanwhile some of our customers are getting angry, and I'd like to know if there's anything I could do on my side to fix this problem. We’re not sure whether this worked in previous macOS versions; we think we didn’t observe it before 15.3.1, but at least since 15.3.1 we’ve seen this problem. Best regards, Johannes
1 reply · 0 boosts · 113 views · Aug ’25
ShazamKit with AirPods
Hi guys, I'm using ShazamKit in my iOS app and successfully capturing the currently playing track details when using the device's (iPhone) built-in mic. When I test with AirPods, though, my app cannot both send its output through the AirPods and capture that same output with the AirPods mic for ShazamKit recognition. I believe this must be possible, because the ShazamKit widget on iOS can do it. Is it restricted in some way for third-party apps? If not, I'd appreciate some guidance on how to achieve this in Swift code. Thanks in advance.
1 reply · 1 boost · 571 views · Feb ’25
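One note on the ShazamKit question above, offered as a sketch rather than a confirmed answer: the system Shazam feature has system-level access to the device's output, while third-party apps only get SHSession matches from audio they feed it themselves. If the audio you want to recognize is produced inside your own app, you can bypass the microphone entirely and match from a tap on the engine's output mix, which makes the AirPods routing irrelevant; whether that fits your playback setup is an assumption here.

import AVFoundation
import ShazamKit

// Sketch: feed ShazamKit from the app's own output instead of a microphone.
// Assumes playback goes through `engine` and that the matcher object is retained.
final class OutputMatcher: NSObject, SHSessionDelegate {
    private let session = SHSession()

    func start(on engine: AVAudioEngine) {
        session.delegate = self
        let mixer = engine.mainMixerNode
        let format = mixer.outputFormat(forBus: 0)
        mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, time in
            self?.session.matchStreamingBuffer(buffer, at: time)
        }
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        print("Matched:", match.mediaItems.first?.title ?? "unknown")
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        // No match for this window of audio; keep streaming.
    }
}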
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have the audio from my AirPods sent to my speech-to-text service while, at the same time, the built-in mic audio input is used for recording a video. I ask because I want my users to be able to say "CAPTURE" so that I start recording a video (with audio from the built-in mic), and then, when the user says "STOP", I stop the recording.
1 reply · 0 boosts · 295 views · 1w
Mic audio before and after a call is answered
I have an app that records a health provider’s conversation with a patient. I am using Audio Queue Services for this. If a phone call comes in while recording, the doctor wants to be able to ignore the call and continue the conversation without touching the phone. If the doctor answers the call, that’s fine – I will stop the recording. I can detect when the call comes in and ends using CXCallObserver and AVAudioSession.interruptionNotification. Unfortunately, when a call comes in and before it is answered or dismissed, the audio is suppressed. After the call is dismissed, the audio continues to be suppressed. How can I continue to get audio from the mic as long as the user does not answer the phone call?
1 reply · 0 boosts · 60 views · May ’25
APNs
{ "aps": { "content-available": 1 }, "audio_file_name": "ding.caf", "audio_url": "https://example.com/audio.mp3" } When the app is in the background or killed, it receives a remote APNs push. The data format is roughly as shown above. How can I play the MP3 audio file at the specified "audio_url"? The user does not need to interact with the device when receiving the APNs. How can I play the audio file immediately after receiving it?
1 reply · 0 boosts · 214 views · Oct ’25
Is AVAudioPCMFormatFloat32 required for playing a buffer with AVAudioEngine / AVAudioPlayerNode
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown:

"[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868"

(related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022)

If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are:
1. Does AVAudioEngine / AVAudioPlayerNode require AVAudioPCMBuffer to be in the Float32 format?
2. Is there a way I can configure it to accept another format instead for my application?
3. If 1 is YES, is this documented anywhere?
4. If 1 is YES, is this required format subject to change at any point?
Thanks! I was looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014 but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
1 reply · 0 boosts · 1k views · Oct ’25
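On the Int16 question above: error -10868 corresponds to kAudioUnitErr_FormatNotSupported, and in practice the player/mixer path wants the canonical deinterleaved Float32 format. I haven't seen a hard documented guarantee either way, so the pragmatic route is a one-off AVAudioConverter hop into Float32 before scheduling; a minimal sketch:

import AVFoundation

// Sketch: convert an Int16 PCM buffer to deinterleaved Float32 so that
// AVAudioPlayerNode will accept it. Assumes the rate and channel count carry over.
func makeFloat32Buffer(from int16Buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let floatFormat = AVAudioFormat(
            commonFormat: .pcmFormatFloat32,
            sampleRate: int16Buffer.format.sampleRate,
            channels: int16Buffer.format.channelCount,
            interleaved: false
          ),
          let converter = AVAudioConverter(from: int16Buffer.format, to: floatFormat),
          let output = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                        frameCapacity: int16Buffer.frameLength)
    else { return nil }

    var error: NSError?
    var delivered = false
    let status = converter.convert(to: output, error: &error) { _, outStatus in
        if delivered {
            outStatus.pointee = .endOfStream
            return nil
        }
        delivered = true
        outStatus.pointee = .haveData
        return int16Buffer
    }
    return (status != .error && error == nil) ? output : nil
}

// Usage sketch:
// if let floatBuffer = makeFloat32Buffer(from: int16Buffer) {
//     playerNode.scheduleBuffer(floatBuffer, completionHandler: nil)
// }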
When to set AVAudioSession's preferredInput?
I want the audio session to always use the built-in microphone. However, when using the setPreferredInput() method as in this example

private func enableBuiltInMic() {
    // Get the shared audio session.
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let availableInputs = session.availableInputs,
          let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
        print("The device must have a built-in microphone.")
        return
    }

    // Make the built-in microphone input the preferred input.
    do {
        try session.setPreferredInput(builtInMicInput)
    } catch {
        print("Unable to set the built-in mic as the preferred input.")
    }
}

and calling that function once in the initializer, the audio session still switches to the external microphone once one is plugged in. The session's preferredInput is nil again at that point, even though the built-in microphone is still listed in availableInputs. So why is the preferredInput suddenly reset, and when would be the appropriate time to set the preferredInput again? Observing the session’s availableInputs did not work, and setting the preferredInput again in the routeChangeNotification handler seems a bad choice, as it’s already a bit too late then.
1 reply · 0 boosts · 821 views · Oct ’25
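For the preferredInput question above: the preference appears to be tied to the current route, so a route change such as plugging in an external mic clears it, and re-asserting it from the route-change notification is, as far as I can tell, the common pattern even though it feels late. A sketch of that approach; which change reasons matter is an assumption:

import AVFoundation

// Sketch: re-assert the built-in mic whenever a device is added or removed.
// Returns the observer token; keep it alive for as long as the policy applies.
@discardableResult
func keepBuiltInMicPreferred() -> NSObjectProtocol {
    let session = AVAudioSession.sharedInstance()

    func preferBuiltInMic() {
        guard let builtIn = session.availableInputs?
                .first(where: { $0.portType == .builtInMic }) else { return }
        do { try session.setPreferredInput(builtIn) } catch {
            print("Unable to set the built-in mic as preferred input:", error)
        }
    }

    preferBuiltInMic()

    return NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: session,
        queue: .main
    ) { note in
        guard let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
              let reason = AVAudioSession.RouteChangeReason(rawValue: raw) else { return }
        // Assumption: these are the change reasons that matter for mic selection.
        if reason == .newDeviceAvailable || reason == .oldDeviceUnavailable {
            preferBuiltInMic()
        }
    }
}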