I am working on an application that detects when an input audio device is in use. Basically, I want to know which application is using the microphone (built-in or external).
This app runs on macOS. For Mac versions starting from Sonoma I can use this code:
int getAudioProcessPID(AudioObjectID process)
{
    pid_t pid = -1; // default to -1 so the pre-Sonoma path doesn't return an uninitialized value
    if (@available(macOS 14.0, *)) {
        constexpr AudioObjectPropertyAddress prop {
            kAudioProcessPropertyPID,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMain
        };
        UInt32 dataSize = sizeof(pid);
        OSStatus error = AudioObjectGetPropertyData(process, &prop, 0, nullptr, &dataSize, &pid);
        if (error != noErr) {
            return -1;
        }
    } else {
        // Pre-Sonoma code goes here
    }
    return pid;
}
which works.
However, kAudioProcessPropertyPID was added in macOS SDK 14.0.
Does anyone know how to achieve the same functionality on previous versions?
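For older systems, the closest partial fallback I've found is checking whether an input device is in use at all. A Swift sketch is below; this is an assumption on my part, and it does not identify which app is using the device, which is exactly the missing piece:

import CoreAudio

// Partial fallback sketch for pre-Sonoma systems: reports whether *any* process
// is using the given input device, but cannot identify which application it is.
func isDeviceInUseSomewhere(_ device: AudioObjectID) -> Bool {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceIsRunningSomewhere,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var isRunning: UInt32 = 0
    var dataSize = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(device, &address, 0, nil, &dataSize, &isRunning)
    return status == noErr && isRunning != 0
}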
I have spent a long time refactoring lots of older Swift code to compile without error in Swift 6.
The app is a v3 audio unit host and audio unit.
Having installed Sonoma and Xcode 16, I compile the code using Swift 6 and it builds and runs without any warnings or errors.
My host will load my AU no problem.
LOGIC PRO is still the ONLY audio unit host that will load native Mac V3 audio units and so I like to test my code using Logic.
In Sonoma with Xcode 16...
My AU passes the most stringent AUVAL tests both in terminal and Logic pro.
If I compile the AU source in Swift 5 Logic will see the AU, load it and run it without problems.
But when I compile the AU with Swift 6, Logic sees the AU, scans it, and verifies that it passes the tests, but will not load it. In Xcode I see a log message that a "helper application failed to run", but the debugger never connects to the AU and I don't think Logic even gets as far as instantiating it.
So... what is causing this? I'm stumped.
Developing AUv3 is a brain-aching maze of undocumented hurdles and I'm hoping someone might have found a solution for this one. Meanwhile I guess my only option is to continue using the Swift 5 compiler.
(Appending a little note just to mention that all the DSP code is written in C/C++; Swift is used mainly for the user interface and also does some offline threaded work.)
Hi,
I am looking for a good way to play sounds at a high frequency.
At the moment I am using AVAudioEngine: I create a couple of AVAudioPlayerNodes, and for each sound I need to play I create an AVAudioPCMBuffer.
When the app needs to play a sound, I get the correct AVAudioPCMBuffer for that sound, take the first available AVAudioPlayerNode, and feed the buffer to it.
The timing for a metronome app has to be very precise, because if it's off by about 16 ms the user can hear that it is not playing at the right interval. For low speeds this works without any problems, but at high speeds it gets worse.
Maybe someone has an idea of how I can improve my method.
It's a plugin for Flutter.
import AVFoundation

class FastSoundPlayer {
    private var audioPlayers: [SoundPlayer] = []
    private var sounds: [String: Sound] = [:]
    private var engine = AVAudioEngine()
    let session = AVAudioSession.sharedInstance()

    init() {
        do {
            try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
            try session.setActive(true)
            createSoundPlayers(count: 20)
            try engine.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    // Selector method to handle applicationDidBecomeActiveNotification
    func applicationDidBecomeActive() {
        // Reinitialize AVAudioEngine and reattach all nodes
        do {
            engine.reset()
            objc_sync_enter(self) // lock on self; locking the Swift array only locks a bridged copy
            audioPlayers.removeAll()
            createSoundPlayers(count: 20)
            objc_sync_exit(self)
            try engine.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    func createSoundPlayers(count: Int) {
        for _ in 0..<count {
            let player = SoundPlayer()
            engine.attach(player.player)
            engine.connect(player.player, to: engine.mainMixerNode, format: nil)
            audioPlayers.append(player)
        }
    }

    func load(sound: Data, name: String) {
        let sound = Sound(soundData: sound)
        sounds[name] = sound
    }

    func play(name: String) {
        if !engine.isRunning {
            applicationDidBecomeActive()
        }
        guard let sound = sounds[name] else {
            print("Sound not found")
            return
        }
        if let player = getAvailablePlayer() {
            player.play(sound: sound)
        }
    }

    func getAvailablePlayer() -> SoundPlayer? {
        for player in audioPlayers where !player.isPlaying {
            return player
        }
        return nil
    }
}

class SoundPlayer {
    let player = AVAudioPlayerNode()
    var isPlaying = false

    init() {
        player.volume = 1.0
    }

    func play(sound: Sound) {
        guard let buffer = sound.sound else { return }
        player.scheduleBuffer(buffer, at: nil, options: .interrupts, completionCallbackType: .dataPlayedBack) { _ in
            self.complete()
        }
        if let engine = player.engine, engine.isRunning {
            player.play()
            isPlaying = true
        }
    }

    func complete() {
        isPlaying = false
    }
}

class Sound {
    var sound: AVAudioPCMBuffer?

    init(soundData: Data) {
        do {
            let temporaryURL = FileManager.default.temporaryDirectory.appendingPathComponent("tempSound.wav")
            try soundData.write(to: temporaryURL)
            // Create AVAudioFile from the temporary file URL
            let audioFile = try AVAudioFile(forReading: temporaryURL)
            // Use the file's own processing format; read(into:) requires the buffer
            // format to match it, so a hard-coded Int16/44.1 kHz format would throw.
            let format = audioFile.processingFormat
            // Create AVAudioPCMBuffer
            guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(audioFile.length)) else {
                // Failed to create PCM buffer
                self.sound = nil
                return
            }
            // Read audio file into PCM buffer
            try audioFile.read(into: pcmBuffer)
            // Assign the created AVAudioPCMBuffer to the sound property
            self.sound = pcmBuffer
        } catch {
            print("Error loading sound file: \(error.localizedDescription)")
            self.sound = nil
        }
    }
}
Thanks!
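One direction that seems relevant here is scheduling every click at an explicit sample time instead of playing buffers "now". A sketch, assuming a single click buffer and a running engine (the names are placeholders, not part of the code above):

import AVFoundation

// Sketch: schedule clicks at sample-accurate times on the player's timeline
// instead of triggering each one from a timer.
func scheduleClicks(player: AVAudioPlayerNode,
                    clickBuffer: AVAudioPCMBuffer,
                    bpm: Double,
                    count: Int) {
    let sampleRate = clickBuffer.format.sampleRate
    let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)
    player.play() // start the player's timeline
    guard let nodeTime = player.lastRenderTime,
          let playerStart = player.playerTime(forNodeTime: nodeTime) else { return }
    for beat in 0..<count {
        let when = AVAudioTime(
            sampleTime: playerStart.sampleTime + AVAudioFramePosition(beat) * framesPerBeat,
            atRate: sampleRate)
        player.scheduleBuffer(clickBuffer, at: when, options: [], completionHandler: nil)
    }
}

Scheduling ahead like this moves the timing into the audio render thread, so it isn't at the mercy of main-thread scheduling jitter; the trade-off is that tempo changes require rescheduling.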
A recent WWDC session, "Learn about Apple Immersive Video technologies", showed an Apple Spatial Audio Format Panner plugin for Pro Tools. The presenter stated that it's available on a per-user license.
Where can users access this?
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability), this is audible as crackling or an abrupt stop in the decoded audio. Opus can mitigate this if the caller indicates packet loss by passing a null data pointer in the C library to
int opus_decode_float (OpusDecoder * st, const unsigned char * data, opus_int32 len, float * pcm, int frame_size, int decode_fec), see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2.
However, with AVAudioConverter in Swift, I'm constructing an AVAudioCompressedBuffer like so:
let compressedBuffer = AVAudioCompressedBuffer(
    format: VoiceEncoder.Constants.networkFormat,
    packetCapacity: 1,
    maximumPacketSize: data.count
)
compressedBuffer.byteLength = UInt32(data.count)
compressedBuffer.packetCount = 1
compressedBuffer.packetDescriptions!
    .pointee.mDataByteSize = UInt32(data.count)
data.copyBytes(
    to: compressedBuffer.data
        .assumingMemoryBound(to: UInt8.self),
    count: data.count
)
where data: Data contains the raw OPUS frame to be decoded.
How can I specify data loss in this context and cause the AVAudioConverter to output PCM data whenever no more input data is available?
More context:
I'm specifying the audio format like this:
static let frameSize: UInt32 = 960
static let sampleRate: Float64 = 48000.0
static var networkFormatStreamDescription = AudioStreamBasicDescription(
    mSampleRate: sampleRate,
    mFormatID: kAudioFormatOpus,
    mFormatFlags: 0,
    mBytesPerPacket: 0,
    mFramesPerPacket: frameSize,
    mBytesPerFrame: 0,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 0,
    mReserved: 0
)
static let networkFormat = AVAudioFormat(
    streamDescription: &networkFormatStreamDescription
)!
I've tried (1) setting byteLength and packetCount to zero, and (2) returning nil while setting .haveData in the AVAudioConverterInputBlock I'm using, with no success.
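For reference, the input block mentioned above is shaped roughly like this (a simplified sketch; pendingPacket() is a placeholder for my network queue, not a real API):

import AVFoundation

// Placeholder for whatever jitter/network buffer feeds the decoder.
func pendingPacket() -> AVAudioCompressedBuffer? {
    // Dequeue the next received Opus packet, or nil if the stream has stalled.
    return nil
}

// Simplified sketch of the AVAudioConverterInputBlock referred to above.
let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
    guard let packet = pendingPacket() else {
        // No packet available right now: report "no data now" instead of ending
        // the stream. (On its own this does not make the converter run Opus PLC.)
        outStatus.pointee = .noDataNow
        return nil
    }
    outStatus.pointee = .haveData
    return packet // an AVAudioCompressedBuffer built as shown earlier
}

// Usage: converter.convert(to: pcmBuffer, error: &error, withInputFrom: inputBlock)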
After updating, WeChat voice chat has no sound. Please help.
Hi there!
We have a suite of AudioUnit v2 plugins that have been shipped for some time as aufx plugins, and we are looking into MIDI-related platform upgrades, so we need a way to update these plugins to request MIDI from Logic (and other AU hosts) but avoid changing our AU type and subtype so we don't break existing sessions. Any ideas on how we can do this?
I'm writing a simple app for iOS and I'd like to be able to do some text to speech in it. I have a basic audio manager class with a "speak" function:
import Foundation
import AVFoundation
class AudioManager {
    static let shared = AudioManager()

    var audioPlayer: AVAudioPlayer?
    // Keep the synthesizer alive as a property: a synthesizer created as a local
    // variable inside speak(text:) can be deallocated before it starts speaking.
    let synthesizer = AVSpeechSynthesizer()

    var isPlaying: Bool {
        return audioPlayer?.isPlaying ?? false
    }

    var playbackPosition: TimeInterval = 0

    func playSound(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
            print("Sound file not found")
            return
        }
        do {
            if audioPlayer == nil || !isPlaying {
                audioPlayer = try AVAudioPlayer(contentsOf: url)
                audioPlayer?.currentTime = playbackPosition
                audioPlayer?.prepareToPlay()
                audioPlayer?.play()
            } else {
                print("Sound is already playing")
            }
        } catch {
            print("Error playing sound: \(error.localizedDescription)")
        }
    }

    func stopSound() {
        if let player = audioPlayer {
            playbackPosition = player.currentTime
            player.stop()
        }
    }

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
And my app shows text in a ScrollView:
ScrollView {
    Text(self.description)
        .padding()
        .foregroundColor(.black)
        .font(.headline)
        .background(Color.gray.opacity(0))
}
.onAppear {
    AudioManager.shared.speak(text: self.description)
}
However, the text doesn't get read out (in the simulator). I see some output in the console:
Error fetching voices: Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "Invalid container metadata for _UnkeyedDecodingContainer, found keyedGraphEncodingNodeID", underlyingError: nil)). Using fallback voices.
I'm probably doing something wrong here, but not sure what.
Hi,
I'm trying to record audio on the iPhone with AVAudioRecorder and Xcode 26.0.1.
Maybe the problem is that I can't record audio in the Simulator, but the Simulator does have an audio menu.
In the plist I added 'Privacy - Microphone Usage Description' and I ask for permission before recording.
if await AVAudioApplication.requestRecordPermission() {
    print("permission granted")
    recordPermission = true
} else {
    print("permission denied")
}
Permission is granted.
let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
recorder = try AVAudioRecorder(url: filename, settings: settings)
let prepared = recorder.prepareToRecord()
print("prepared started: \(prepared)")
let started = recorder.record()
print("recording started: \(started)")
started is always false and I tried many settings.
Error messages
AddInstanceForFactory: No factory registered for id <CFUUID 0x600000211480> F8BB1C28-BAE8-11D6-9C31-00039315CD46
AudioConverter.cpp:1052 Failed to create a new in process converter -> from 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame, with status -50
AudioQueueObject.cpp:1892 BuildConverter: AudioConverterNew returned -50
from: 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
prepared started: true
AudioQueueObject.cpp:7581 ConvertInput: aq@0x10381be00: AudioConverterFillComplexBuffer returned -50, packetCount 5
recording started: false
All examples I find are the same, but apparently there must be something different.
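For comparison, the full setup I'm testing against looks roughly like this; the session-category lines and the 44,100 Hz rate are assumptions I'm still experimenting with, not a known fix:

import AVFoundation

// Minimal recording sketch (assumes a real device and that `filename` is a
// writable file URL, as in the code above).
func startRecording(to filename: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record, mode: .default) // assumption: category set before recording
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44100, // assumption: a more common rate than 12,000 Hz
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    let recorder = try AVAudioRecorder(url: filename, settings: settings)
    let prepared = recorder.prepareToRecord()
    let started = recorder.record()
    print("prepared: \(prepared), started: \(started)")
    return recorder
}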
I have an iPadOS M-processor application with two different running configurations.
In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone. The input/output nodes of the AVAudioEngine are configured with voice processing enabled. The built-in mic is formatted for 1 channel at 48 kHz.
In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone. The input/output nodes of the AVAudioEngine are configured with voice processing disabled. The external mic is formatted for 2 channels at 44.1 kHz.
I've written a configuration manager designed to safely switch between these two configurations. It works by stopping the AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample rate, and setting voice processing to either true or false as required by the configuration. Finally, the new audio graph is constructed by attaching the appropriate nodes, connecting them, and restarting the AVAudioEngine.
I'm experiencing what I believe is a race condition between switching voice processing on or off and then trying to rebuild and start the new audio graph. Even though notifications, which are dumped to the console, indicate that my requested input and sample-rate settings are in place, I crash when trying to start the audio engine because the sample rate is wrong. Investigating further, it looks like the switch from remote I/O to voice-processing I/O, or vice versa, has not yet actually completed. I introduced a 100 ms delay and that seems to help, but it is obviously not a reliable way to build software that must work consistently.
How can I make sure that what are apparently asynchronous configuration changes to the shared audio session and the input/output nodes have completed before I go on?
I tried using route change notifications from the shared AVAudioSession but these lie. They say my preferred mic input and sample-rate setting is in place but when I dump the AVAudioEngine graph to the debugger console, I still see the wrong sample rate assigned to the input/output nodes. Also these are the wrong AU nodes. That is, VPIO is still in place when RIO should be, or vice-versa.
How can I make the switch reliable without arbitrary time delays?
Is my configuration manager approach appropriate (question for Apple engineers)?
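For concreteness, the switch sequence described above looks roughly like this (a sketch; detachAllButIO and rebuildGraph are placeholder helpers, not the actual manager):

import AVFoundation

// Placeholder helpers standing in for the configuration manager's graph code.
func detachAllButIO(_ engine: AVAudioEngine) { /* detach everything except input/output */ }
func rebuildGraph(_ engine: AVAudioEngine) { /* attach and connect nodes for the new config */ }

func switchConfiguration(engine: AVAudioEngine,
                         useVoiceProcessing: Bool,
                         sampleRate: Double) throws {
    let session = AVAudioSession.sharedInstance()

    engine.stop()                                   // 1. stop the running graph
    detachAllButIO(engine)                          // 2. tear down the old graph

    try session.setCategory(.playAndRecord,
                            mode: useVoiceProcessing ? .videoChat : .measurement)
    try session.setPreferredSampleRate(sampleRate)
    try session.setActive(true)

    // 3. Swap between remote I/O and voice-processing I/O. This appears to take
    //    effect asynchronously, which is where the race shows up.
    try engine.inputNode.setVoiceProcessingEnabled(useVoiceProcessing)

    rebuildGraph(engine)                            // 4. build the new graph
    try engine.start()                              // 5. fails here if step 3 hasn't settled
}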
I have a SwiftUI app - (https://youtu.be/VbAfUk_eYl0?si=JxUBh0Bpb-vc1E1U) - which I thought was almost ready for release - a manager for airdropped audio files from Logic Pro or other music creation applications. It uses AVAudioEngine and AVAudioPlayerNode to play audio, and the MediaPlayer API to integrate with car audio and similar, all of which works well.
It does not currently have an explicit CarPlay integration (and I'm slightly horrified at the amount of work that is going to require).
Yesterday I had the good or bad luck of getting a loaner car with CarPlay while mine is being repaired, and lo and behold, when connected to the vehicle via CarPlay there is no audio output in the vehicle at all. The Now Playing panel correctly shows the information my app provides about the currently playing song; the player node believes it is playing, and the AVAudioSession is configured as it should be. But there is no sound.
Obviously I cannot ship it in this state.
I've tried fiddling with the parameters the AVAudioSession is configured with, in case there was some parameter that was preventing audio output, to no avail - currently:
var options = AVAudioSession.CategoryOptions()
options.insert(.allowAirPlay)
options.insert(.allowBluetooth)
options.insert(.allowBluetoothA2DP)
try session.setCategory(.playback, mode: .default, options: options)
try? session.setPreferredIOBufferDuration(0.002) // ~96 samples at 44.1kHz
try? session.setPrefersNoInterruptionsFromSystemAlerts(true)
try? session.setPrefersInterruptionOnRouteDisconnect(false)
try session.setActive(true, options: [.notifyOthersOnDeactivation])
All diagnostics within the app show the player operating correctly - files are played and flushed; AVAudioPlayerNodeCompletionCallbacks are called when they should be. But the output is not audible in the vehicle.
I would much prefer to ship this app without full-blown CarPlay integration, but with working audio when connected via CarPlay, and work on full CarPlay integration for the next release.
Is there some secret handshake I am just missing to make this work?
Using an iPhone 12 Pro running iOS 26.0.1, with AirPods Pro 3. The Camera app does capture video with what seems to be "Studio Quality Recording".
I'm trying to replicate that SQR in my own camera-like app, and while I can pull audio in from the APP3 mic, and my video-capture app is recording a 48,000 Hz high-bitrate video, the audio still sounds non-SQR.
I'm seeing bluetoothA2DP , bluetoothLE , bluetoothHFP as portType, and not sure if SQR depends on one of those?
Is there sample code demonstrating an SQR capture? Never mind video and camera; just audio, even?
Also, I don't understand what SQR is doing between the APP3 and the iPhone. What codec is that? What bitrate is that? If I capture video using Capture and inspect the audio stream I see mono 74.14 kbit/s MPEG-4 AAC, 48000 Hz. But I assume that's been recompressed and not really giving me any insight into the APP3 H2 transmission?
Dear Sirs,
I've written a virtual audio driver based on AudioDriverKit, running as a dext in my macOS app. Sometimes when waking up from a sleep state, the recording side of my driver extension seems to hang and I don't see any calls to my io_operation callback. Then a recording app such as a DAW seems to hang when trying to start a recording. This doesn't happen after short sleep states or after a completely fresh start of my MacBook.
I already opened a case in Feedback-Assistant on 5th of May (FB17503622) which also includes a sysdiagnose and a ktrace but I didn't get any feedback so far. Meanwhile some of our customers are getting angry and I'd like to know if there's anything I could do to fix this problem on my side.
We're not sure whether this worked in previous macOS versions; we think we didn't observe it before 15.3.1, but at least since 15.3.1 we've seen this problem.
Best regards,
Johannes
{
    "aps": { "content-available": 1 },
    "audio_file_name": "ding.caf",
    "audio_url": "https://example.com/audio.mp3"
}
When the app is in the background or killed, it receives a remote APNs push. The data format is roughly as shown above. How can I play the MP3 audio file at the specified "audio_url"? The user does not need to interact with the device when receiving the APNs. How can I play the audio file immediately after receiving it?
Hello,
I’m new here. I'm developing an iOS app and I’d like to know whether it is possible to detect if a phone call is being recorded by another app running in the background.
I’ve already reviewed the documentation for CallKit and AVAudioSession, but I couldn’t find anything related. My expectation was that iOS might provide some callback or API to indicate if a call is being recorded (third-party apps), but so far I haven’t found a way.
My questions are:
Does iOS expose any API to detect if a call is being recorded?
If not, is there any indirect, Apple's policy compliant method (e.g., microphone usage events) that can be relied upon?
Or is this something that iOS explicitly prevents for privacy reasons?
I'm looking for solutions that align with Apple's policies and would be accepted under the App Store Review Guidelines.
Thanks in advance for any guidance.
We are developing an Apple Music app for the phone. The web app works fine in Chrome, but when I load it in a WebView on my phone, I can't play the first song.
We suspect that the DRM init, key exchange, and session creation happen in the music.play() function; when we trigger play, the DRM or session is not yet ready to play a real song, so it throws an error.
So we'd like to know:
What is the relative order of the DRM, key, and session setup inside the play() function?
Is there a state-detection function that shows whether the DRM is ready?
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to first result, making it unusable for real-time applications.
Environment
iOS: 26.0 Beta
Xcode: Beta 5
Device: iPhone 16 Pro
Sample App: Official Apple SwiftTranscriptionSampleApp from WWDC 2025
Configuration Tested
Locale: en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES
Setup: All optimizations applied (preheating, high priority, model retention)
I started testing in my own app, to replace the SFSpeech API and include speech detection, but after long fights with the documentation (this part is quite terrible, TBH) I tested the example (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results.
I added some logs to check the specific time:
🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio!
🎙️ [20:30:41.532] Listening for transcription results...
🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false)
Questions
Is this expected performance for the iOS 26 beta? The old SFSpeech API is far faster.
Are there additional optimization steps for SpeechTranscriber?
Should we expect significant performance improvements in later betas?
I am trying to use AVAudioEngine for recording and playback in a voice-chat kind of app, but when the speaker plays any audio while recording, the recording picks up the speaker audio as input. I want to filter that out. Are there any suggestions for the Swift code?
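One thing that seems relevant here (a sketch, not a guaranteed fix): AVAudioEngine's input node supports Apple's voice processing, which includes acoustic echo cancellation of the device's own playback. It has to be enabled before the graph is wired up and started:

import AVFoundation

let engine = AVAudioEngine()
do {
    // Enabling voice processing on the input node turns on echo cancellation,
    // so the recording should no longer pick up what the speaker is playing.
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Could not enable voice processing: \(error)")
}
// ...attach/connect the rest of the graph and start the engine as usual.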
Many users like me use Apple Music on Android, and the app is almost as feature-rich as the iOS version. It would be fantastic if the developers could add the new iOS 26 features to the Android app, along with a minor UI refresh. I know it's challenging to implement Liquid Glass on Android hardware or design, but features like AutoMix, pronunciation, and translation could be added.
Kindly consider this request!
I recently got some plugins from Universal Audio, and have licensed them properly through both UA and iLok manager. Whenever I try to load up the plugins (specifically from UA) in GarageBand, it first says that
"NSCreateObjectFileImageFromMemory-p47UEwps" cannot be opened because the developer cannot be verified.
After clicking either 'Show in Finder' or 'OK', it opens the plugin in a form without its GUI, showing that it is not licensed (even though it is). It also displays error code 100001. I have tried only some basic troubleshooting, like restarting the DAW and my computer and reinstalling/relicensing the software. I don't know if the macOS version has anything to do with it, but for some reason I just can't get it to work.