Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post

Replies

Boosts

Views

Activity

CMFormatDescription.audioStreamBasicDescription has wrong or unexpected sample rate for audio channels with different sample rates
In my app I use AVAssetReaderTrackOutput to extract PCM audio from a user-provided video or audio file and display it as a waveform. Recently a user reported that the waveform is not in sync with his video, and after receiving the video I noticed that the waveform is in fact twice as long as the video duration, i.e. it shows the audio in slow motion, so to speak.

Until now I was using CMFormatDescription.audioStreamBasicDescription.mSampleRate, which for this particular user video returns 22'050. But in this case that value seems to be wrong, because the audio track's format list contains two entries with different sample rates, as returned by CMFormatDescription.audioFormatList.map({ $0.mASBD.mSampleRate }). The first entry has a sample rate of 44'100, the second one 22'050. If I use the first sample rate, the waveform is perfectly in sync with the video.

The problem is that the ratio between the audio data length and the sample rate multiplied by the audio duration is 8, double the ratio for the first audio file (4). In the code below this ratio is given by Double(length) / (sampleRate * asset.duration.seconds). When commenting out the line with the sampleRate variable definition in the code below and uncommenting the following line, the ratios for both audio files are 4, which is the expected result.

I would expect audioStreamBasicDescription to return the correct sample rate, i.e. the one used by AVAssetReaderTrackOutput, which (I think) somehow merges the stereo tracks. The documentation is sparse, and in particular it's not documented whether the lower or higher sample rate is used; in this case it seems like the higher one is used, but audioStreamBasicDescription for some reason returns the lower one.

Does anybody know why this is the case, or how I should extract the sample rate of the produced PCM audio data? Should I always take the higher one? I created FB19620455. (A sketch of reading the rate from the buffers the reader actually delivers follows this entry.)

let openPanel = NSOpenPanel()
openPanel.allowedContentTypes = [.audiovisualContent]
openPanel.runModal()
let url = openPanel.urls[0]
let asset = AVURLAsset(url: url)
let assetTrack = asset.tracks(withMediaType: .audio)[0]
let assetReader = try! AVAssetReader(asset: asset)
let readerOutput = AVAssetReaderTrackOutput(track: assetTrack, outputSettings: [AVFormatIDKey: Int(kAudioFormatLinearPCM), AVLinearPCMBitDepthKey: 16, AVLinearPCMIsBigEndianKey: false, AVLinearPCMIsFloatKey: false, AVLinearPCMIsNonInterleaved: false])
readerOutput.alwaysCopiesSampleData = false
assetReader.add(readerOutput)
let formatDescriptions = assetTrack.formatDescriptions as! [CMFormatDescription]
let sampleRate = formatDescriptions[0].audioStreamBasicDescription!.mSampleRate
//let sampleRate = formatDescriptions[0].audioFormatList.map({ $0.mASBD.mSampleRate }).max()!
print(formatDescriptions[0].audioStreamBasicDescription!.mSampleRate)
print(formatDescriptions[0].audioFormatList.map({ $0.mASBD.mSampleRate }))
if !assetReader.startReading() {
    preconditionFailure()
}
var length = 0
while assetReader.status == .reading {
    guard let sampleBuffer = readerOutput.copyNextSampleBuffer(), let blockBuffer = sampleBuffer.dataBuffer else {
        break
    }
    length += blockBuffer.dataLength
}
print(Double(length) / (sampleRate * asset.duration.seconds))
0
1
117
Aug ’25
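One way to sidestep the ambiguity described above is to read the sample rate from the format description attached to the buffers that AVAssetReaderTrackOutput actually delivers, rather than from the track-level description. This is a minimal sketch, not a confirmed answer; the helper name is made up, and it assumes the reader returns at least one buffer.

import CoreMedia

// Illustrative helper: the format description attached to a delivered sample
// buffer describes the PCM data the reader actually produced after any
// internal conversion, so its sample rate matches the decoded audio.
func deliveredSampleRate(of sampleBuffer: CMSampleBuffer) -> Float64? {
    guard let description = sampleBuffer.formatDescription,
          let asbd = description.audioStreamBasicDescription else { return nil }
    return asbd.mSampleRate
}

In the reading loop above, this could be called on the first buffer returned by copyNextSampleBuffer() and that rate used for the length/duration ratio.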
MIDI output from Standalone MIDI Processor Demo App to DAW
I am trying to get MIDI output from the AU Host demo app using the recent MIDI processor example. The processor works correctly in Logic Pro, but I cannot send MIDI from the AUv3 extension in standalone mode using the default host app to another program (e.g., Ableton). The MIDI manager, which is part of the standalone host app, works fine, and I can send MIDI using it directly—Ableton receives it without issues. I have already set the midiOutputNames in the extension, and the midiOutBlock is mapped. However, the MIDI data from the AUv3 extension does not reach Ableton in standalone mode. I suspect the issue is that midiOutBlock might never be called in the plugin, or perhaps an input to the plugin is missing, which prevents it from sending MIDI. I am currently using the default routing. I have modified the MIDI manager such that it works well as described above. (A sketch of forwarding the extension's MIDI output to the MIDI manager follows this entry.)

Here is a part of my code for SimplePlayEngine.swift and my MIDIManager.swift for reference:

@MainActor @Observable public class SimplePlayEngine {

    private let midiOutBlock: AUMIDIOutputEventBlock = { sampleTime, cable, length, data in
        return noErr
    }

    var scheduleMIDIEventListBlock: AUMIDIEventListBlock? = nil

    public init() {
        engine.attach(player)
        engine.prepare()
        setupMIDI()
    }

    private func setupMIDI() {
        if !MIDIManager.shared.setupPort(midiProtocol: MIDIProtocolID._2_0, receiveBlock: { [weak self] eventList, _ in
            if let scheduleMIDIEventListBlock = self?.scheduleMIDIEventListBlock {
                _ = scheduleMIDIEventListBlock(AUEventSampleTimeImmediate, 0, eventList)
            }
        }) {
            fatalError("Failed to setup Core MIDI")
        }
    }

    func initComponent(type: String, subType: String, manufacturer: String) async -> ViewController? {
        reset()
        guard let component = AVAudioUnit.findComponent(type: type, subType: subType, manufacturer: manufacturer) else {
            fatalError("Failed to find component with type: \(type), subtype: \(subType), manufacturer: \(manufacturer))")
        }
        do {
            let audioUnit = try await AVAudioUnit.instantiate(
                with: component.audioComponentDescription,
                options: AudioComponentInstantiationOptions.loadOutOfProcess)
            self.avAudioUnit = audioUnit
            self.connect(avAudioUnit: audioUnit)
            return await audioUnit.loadAudioUnitViewController()
        } catch {
            return nil
        }
    }

    private func startPlayingInternal() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        setSessionActive(true)
        if avAudioUnit.wantsAudioInput {
            scheduleEffectLoop()
        }
        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        do {
            try engine.start()
        } catch {
            isPlaying = false
            fatalError("Could not start engine. error: \(error).")
        }
        if avAudioUnit.wantsAudioInput {
            player.play()
        }
        isPlaying = true
    }

    private func resetAudioLoop() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        if avAudioUnit.wantsAudioInput {
            guard let format = file?.processingFormat else {
                fatalError("No AVAudioFile defined.")
            }
            engine.connect(player, to: engine.mainMixerNode, format: format)
        }
    }

    public func connect(avAudioUnit: AVAudioUnit?, completion: @escaping (() -> Void) = {}) {
        guard let avAudioUnit = self.avAudioUnit else { return }

        engine.disconnectNodeInput(engine.mainMixerNode)
        resetAudioLoop()
        engine.detach(avAudioUnit)

        func rewiringComplete() {
            scheduleMIDIEventListBlock = auAudioUnit.scheduleMIDIEventListBlock
            if isPlaying {
                player.play()
            }
            completion()
        }

        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)

        if isPlaying {
            player.pause()
        }

        let auAudioUnit = avAudioUnit.auAudioUnit
        if !auAudioUnit.midiOutputNames.isEmpty {
            auAudioUnit.midiOutputEventBlock = midiOutBlock
        }

        engine.attach(avAudioUnit)
        if avAudioUnit.wantsAudioInput {
            engine.disconnectNodeInput(engine.mainMixerNode)
            if let format = file?.processingFormat {
                engine.connect(player, to: avAudioUnit, format: format)
                engine.connect(avAudioUnit, to: engine.mainMixerNode, format: format)
            }
        } else {
            let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 2)
            engine.connect(avAudioUnit, to: engine.mainMixerNode, format: stereoFormat)
        }
        rewiringComplete()
    }
}

and my MIDI Manager

@MainActor
class MIDIManager: Identifiable, ObservableObject {

    func setupPort(midiProtocol: MIDIProtocolID, receiveBlock: @escaping @Sendable MIDIReceiveBlock) -> Bool {
        guard setupClient() else { return false }
        if MIDIInputPortCreateWithProtocol(client, portName, midiProtocol, &port, receiveBlock) != noErr {
            return false
        }
        for source in self.sources {
            if MIDIPortConnectSource(port, source, nil) != noErr {
                print("Failed to connect to source \(source)")
                return false
            }
        }
        setupVirtualMIDIOutput()
        return true
    }

    private func setupVirtualMIDIOutput() {
        let virtualStatus = MIDISourceCreate(client, virtualSourceName, &virtualSource)
        if virtualStatus != noErr {
            print("❌ Failed to create virtual MIDI source: \(virtualStatus)")
        } else {
            print("✅ Created virtual MIDI source: \(virtualSourceName)")
        }
    }

    func sendMIDIData(_ data: [UInt8]) {
        print("hey")
        var packetList = MIDIPacketList()
        withUnsafeMutablePointer(to: &packetList) { ptr in
            let pkt = MIDIPacketListInit(ptr)
            _ = MIDIPacketListAdd(ptr, 1024, pkt, 0, data.count, data)
            if virtualSource != 0 {
                let status = MIDIReceived(virtualSource, ptr)
                if status != noErr {
                    print("❌ Failed to send MIDI data: \(status)")
                } else {
                    print("✅ Sent MIDI data: \(data)")
                }
            }
        }
    }
}
0
0
290
Aug ’25
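Since midiOutBlock in the engine above just returns noErr, anything the extension emits through its midiOutputEventBlock is discarded before it can reach Core MIDI. A hedged sketch of one way to wire it through the post's MIDIManager, inside connect(avAudioUnit:completion:); this is an assumption about the intent, not a verified fix, and hopping to the main actor like this is fine for a quick test but not ideal for real-time use:

auAudioUnit.midiOutputEventBlock = { _, _, length, data in
    // Copy the bytes synchronously; the pointer is not guaranteed to outlive the block.
    let bytes = Array(UnsafeBufferPointer(start: data, count: length))
    Task { @MainActor in
        MIDIManager.shared.sendMIDIData(bytes)
    }
    return noErr
}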
Number of songs in the Apple Music Feed
Hello, I'm evaluating the Apple Music Feed dataset and I noticed that the total number of songs available in the feed seems too small. As of today, the number of objects returned in each feed is: 51,198,712 albums, 23,093,698 artists, and 173,235,315 songs. This gives an average of 3.38 songs per album, which is quite low. Also, while iterating over the data I see that there are albums referencing songs that don't exist in the songs feed. I would like to know: Is the feed data incomplete? If so, in what situations may an object be missing from the feed? Thank you in advance!
0
0
251
Aug ’25
Creating an initial Now Playing state of paused - impossible?
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of its features is that, on restart, if you had paused playback of a file when the app was previously shut down (or were playing one at the time of shutdown), the paused state and position in the file are restored exactly as they were. The functionality works. However, it seems impossible to get the "now playing" information in iOS into the right state to reflect that via the MediaPlayer API.

On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo. What happens is that iOS's media view shows the audio as playing and offers a pause button, even though the play action is enabled and the pause action is disabled. It only behaves correctly once playback has been initiated (my workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from controls in a car without initiating it once from a device first).

I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, then start the app, lock your device and look at the playback controls on the lock screen. It will show a pause button - the same behavior I've described. https://github.com/timboudreau/ios-play-pause-demo

I've tried a few things to narrow down the source of the issue - for example, thinking that MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration might be the culprit (since the system interpolates elapsed time and it's recommended to update those properties infrequently), not setting them on startup, but the result is the same, just without a duration or progress shown.

What governs this behavior, and is there some way to explicitly tell the media player API your current state is paused? (A hedged sketch follows this entry.)
0
2
116
Apr ’25
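Not a confirmed answer to the question above, but the Now Playing UI largely keys its play/pause presentation off the reported playback rate and off whether the app's audio session is actually producing audio. A minimal sketch of publishing a paused state via the rate; the keys are real MediaPlayer constants, while the title/duration/position values are placeholders:

import MediaPlayer

// Publish metadata for a paused item: a playback rate of 0 signals "paused",
// 1.0 signals "playing". The system may still show a playing state until the
// app has produced audio at least once, which matches the behavior described
// in the post.
func publishPausedNowPlayingInfo(title: String, duration: TimeInterval, position: TimeInterval) {
    let info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: position,
        MPNowPlayingInfoPropertyPlaybackRate: 0.0
    ]
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}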
Frequent crashes related to com.apple.coreaudio.AQClient thread
I'm encountering numerous crashes involving the com.apple.coreaudio.AQClient thread in our application. The crash details are as follows:

#10 com.apple.coreaudio.AQClient
SIGSEGV SEGV_ACCERR
0   libobjc.A.dylib               _objc_msgSend + 44
1   AudioToolbox                  ClientMessageHandler::PropertyChanged(unsigned int) + 872
2   AudioToolbox                  ClientAudioQueue::FetchAndDeliverPendingCallbacks(unsigned int) + 924
3   AudioToolbox                  __XCallbackNotificationsAvailable + 212
4   libAudioToolboxUtility.dylib  _mshMIGPerform + 260
5   CoreFoundation                ___CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
6   CoreFoundation                ___CFRunLoopDoSource1 + 596
7   CoreFoundation                ___CFRunLoopRun + 2392
8   CoreFoundation                _CFRunLoopRunSpecific + 572
9   AudioToolbox                  CADeprecated::GenericRunLoopThread::Entry(void*) + 156
10  libAudioToolboxUtility.dylib  CADeprecated::CAPThread::Entry(CADeprecated::CAPThread*) + 88
11  libsystem_pthread.dylib       __pthread_start + 116

All these crashes occur on system versions below iOS/iPadOS 17, primarily when the device's available RAM is low. What steps can I take to resolve this issue? Any insights would be greatly appreciated!
0
0
171
Nov ’25
App Randomly Crashes During Continuous Sound Playback Using AVAudioPlayer
Environment:
・Device: iPad 10th generation
・OS: iOS 18.3.2

We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently — roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:

var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}

The issue is that under high-frequency tapping (especially around 0.1–0.15 s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly — sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping. Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g., 0.15 s, 0.18 s) still result in occasional crashes.

My questions are: Is this expected behavior from AVAudioPlayer or AVAudioSession? Could this be a known issue or a limitation in AVFoundation? Is there any documentation or guidance on handling frequent sound playback safely? Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated. (A hedged sketch of one way to reuse players follows this entry.)
0
0
167
May ’25
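A hedged sketch of a reuse approach, on the assumption (not confirmed by the post) that re-creating the AVAudioPlayer and re-configuring the session on every tap contributes to the instability. The SoundCache type and its names are illustrative:

import AVFoundation

// Configure the session once and keep one prepared player per sound; each tap
// then only rewinds and replays an existing player instead of allocating a new
// one and touching the audio session again.
final class SoundCache {
    private var players: [String: AVAudioPlayer] = [:]

    init() {
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        try? AVAudioSession.sharedInstance().setActive(true)
    }

    func play(resource: String, type: String) {
        let key = resource + "." + type
        if players[key] == nil,
           let url = Bundle.main.url(forResource: resource, withExtension: type),
           let player = try? AVAudioPlayer(contentsOf: url) {
            player.prepareToPlay()
            players[key] = player
        }
        guard let player = players[key] else { return }
        player.currentTime = 0   // restart from the beginning on every tap
        player.play()
    }
}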
Destroy MIDIUMPMutableEndpoint again?
Is there a way to destroy MIDIUMPMutableEndpoint again? In my app, the user has a setting to enable and disable MIDI 2.0. If MIDI 2.0 should not be supported (or if iOS version < 18), it creates a virtual destination and a virtual source. And if MIDI 2.0 should be enabled, it instead creates a MIDIUMPMutableEndpoint, which itself creates the virtual destination and source automatically. So here is my problem: I didn't find any way to destroy the MIDIUMPMutableEndpoint again. There is a method to disable it (setEnabled:NO), but that doesn't destroy or hide the virtual destination and source. So when the user turns MIDI 2.0 support off, I will have two virtual destinations and sources, and cannot get rid of the 2.0 ones. What is the correct way to get rid of the MIDIUMPMutableEndpoint once it is created?
0
0
64
Sep ’25
How to disable the built-in speakers and microphone on a Mac
I need to implement a solution, through an API or a custom driver, to completely block the built-in speakers and microphone of the Mac, because I need other apps to use specified external devices as audio input and output. Is there a way to achieve this requirement? What I mean is that even in System Preferences it should not be possible to choose the built-in microphone and speakers; only my external device can be used. (A hedged, partial sketch follows this entry.)
0
0
83
Apr ’25
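For reference only, and explicitly not a full answer to the post above: the HAL lets an app set the system default input/output device, but it does not hide or disable the built-in devices in System Preferences (that would require a driver-level solution such as an audio server plug-in). A minimal sketch of the default-output-device property, with the device ID assumed to come from your own enumeration:

import CoreAudio

// Partial measure only: makes a chosen device the system default output.
// It does NOT remove the built-in speakers from the list of selectable devices.
func setDefaultOutputDevice(_ deviceID: AudioDeviceID) -> OSStatus {
    var device = deviceID
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    return AudioObjectSetPropertyData(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        0, nil,
        UInt32(MemoryLayout<AudioDeviceID>.size),
        &device)
}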
Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown:

Issue Description: I've installed a tap on an input device to convert audio to an optimal sample rate, with a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms: The converter node is being deallocated during video calls. The program crashes entirely when this happens. Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The Big Challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call. (A sketch of the canonical listener setup follows this entry.)

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
0
0
65
Apr ’25
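A hedged sketch of the usual way to watch a device's nominal sample rate with a Core Audio property listener. The post says listeners like this don't fire in its setup, so treat it as a starting point rather than a fix; one assumption worth checking is whether FaceTime/Zoom switches to a different AudioObjectID than the one being observed.

import CoreAudio

// Install a block listener on a device's nominal sample rate and re-read the
// value whenever the HAL reports a change.
func watchNominalSampleRate(of deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    let status = AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        var readAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var rate: Float64 = 0
        var size = UInt32(MemoryLayout<Float64>.size)
        if AudioObjectGetPropertyData(deviceID, &readAddress, 0, nil, &size, &rate) == noErr {
            print("Nominal sample rate is now \(rate)")
        }
    }
    if status != noErr { print("Failed to install listener: \(status)") }
}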
AVAudioUnitSampler Bug with Consolidated Audio Files
Hello, I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions in the same audio file (monolith/concatenated samples approach). Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with actual audio data.

The Problem

Setup:
Single audio file (monolith) containing multiple concatenated samples
Multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file
All zones load successfully without errors

Expected Behavior: All zones should play their respective audio regions immediately from the first sample.

Actual Behavior:
Last zone in the zone list: works perfectly, plays audio immediately
All other zones: output [0, 0, 0, 0, ..., _audio_data] instead of [real_audio_data]
The number of zeros varies from event to event for each zone. It can be a couple of samples (<30) up to several buffers. After the initial zeros, the correct audio plays normally, so there is no shift in audio playback, just missing samples at the beginning.

Minimal Reproduction

1. Create Test Monolith Audio File
Create a single WAV file with 3 concatenated 1-second samples (44.1 kHz):
Sample 1: frames 0-44099 (constant amplitude 0.3)
Sample 2: frames 44100-88199 (constant amplitude 0.6)
Sample 3: frames 88200-132299 (constant amplitude 0.9)

2. Create Test Preset
Create an .aupreset with 3 zones all referencing the same file (pseudo code):
<Zone array>
<zone 1> start sample: 0, end sample: 44099, note: 60, waveform: ref_to_monolith.wav;
<zone 2> start sample: 44100, end sample: 88199, note: 62, waveform: ref_to_monolith.wav;
<zone 3> start sample: 88200, end sample: 132299, note: 64, waveform: ref_to_monolith.wav;
</Zone array>

3. Load and Test
// Load preset into AVAudioUnitSampler
let sampler = AVAudioUnitSampler()
try sampler.loadInstrument(at: presetURL)

// Play each zone (MIDI notes C4=60, D4=62, E4=64)
sampler.startNote(60, withVelocity: 64, onChannel: 0) // Zone 1
sampler.startNote(62, withVelocity: 64, onChannel: 0) // Zone 2
sampler.startNote(64, withVelocity: 64, onChannel: 0) // Zone 3

4. Observed Result
Zone 1 (C4): [0, 0, 0, ..., 0.3, 0.3, 0.3] ❌ Zeros at beginning
Zone 2 (D4): [0, 0, 0, ..., 0.6, 0.6, 0.6] ❌ Zeros at beginning
Zone 3 (E4): [0.9, 0.9, 0.9, ...] ✅ Works correctly (last zone)

What I've Extensively Tested

What DOES work:
Separate files per zone: each zone references its own individual audio file, and all zones play correctly without zeros. Problem: not viable for iOS apps with 500+ sample libraries due to file handle limitations.

What DOESN'T work (all tested):
1. Different audio formats: CAF (Float32 PCM, Int16 PCM, both interleaved and non-interleaved), M4A (AAC compressed), WAV (uncompressed), SF2 (SoundFont2). The bug persists across all formats.
2. CAF region chunks: created CAF files with embedded region chunks defining zone boundaries, and set zones with no sampleStart/sampleEnd in the preset (nil values). AVAudioUnitSampler completely ignores CAF region metadata. The bug persists.
3. Unique waveform IDs: gave each zone a unique waveform ID (268435456, 268435457, 268435458), each with its own file reference entry (all pointing to the same physical file), hypothesizing this might trigger separate buffer initialization. The bug persists, with no improvement.
4. Different sample rates: tested 44.1 kHz, 48 kHz, 96 kHz. The bug occurs at all sample rates.
5. Mono vs stereo: the bug occurs with both mono and stereo files.

Environment
macOS: Sonoma 14.x (tested across multiple minor versions)
iOS: tested on iOS 17.x with the same results
Xcode: 16.x
Frameworks: AVFoundation, AudioToolbox
Reproducibility: 100% reproducible with the setup described above

Impact & Use Case
This bug severely impacts professional music applications that need:
Small file sizes: monolith files allow sharing compressed audio data (AAC/M4A)
iOS file handle limits: opening 400+ individual sample files is not viable on iOS
Performance: single-file loading is much faster than hundreds of individual files
Standard industry practice: monolith/concatenated samples are used by EXS24, Kontakt, and most professional samplers

Current impact: monolith files cannot be used with AVAudioUnitSampler on iOS. We are forced to choose between unusable audio (zeros at start) or hitting iOS file limits. No viable workaround exists.

Root Cause Hypothesis
The bug appears to be in AVAudioUnitSampler's internal buffer initialization when multiple zones share the same source audio file and each zone specifies different sampleStart/sampleEnd offsets. Key observation: the last zone in the zone array always works correctly. This is NOT related to: file permissions or security-scoped resources (separate files work fine), audio codec issues (it happens with uncompressed PCM too), or preset parsing (the preset loads correctly and all zones are valid).

Questions
Is this a known issue? I couldn't find any documentation, bug reports, or discussions about this.
Is there ANY workaround that allows monolith files to work with AVAudioUnitSampler?
Alternative APIs? Is there a different API or approach for iOS that properly supports monolith sample files?
0
0
288
1w
Ducking MusicKit output when playing another sound
I am developing an app that uses MusicKit to play music, and then I need to have spoken words played to the user while ducking the audio coming from MusicKit (the application music player). The built-in Siri voices are not of sufficient quality, so I am using an external service to create an mp3 file and then playing this back using AVAudioSession. Sample code below. The problem I am having is that .duckOthers is not ducking the Application Music Player output. Is this a bug or am I doing this wrong?

// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)

// Set the ducking level to maximum
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()

// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
0
0
48
Apr ’25
issue in recording using AVAudio
Hi, in my project I am using AVFoundation for recording audio. We use the AVAudioMixerNode method below to record audio packets:

func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)

It works perfectly fine. But in production, for a small percentage of users, recording stops automatically after a few packets without the audio engine being stopped. Can anyone help with why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when this issue happens I don't see any occurrence of that log. Also, is there any callback when the engine stops? (A sketch of the configuration-change notification follows this entry.)
0
0
113
Apr ’25
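Regarding the "is there any callback when the engine stops" question above: one notification that does fire when a running AVAudioEngine is stopped by the system (for example after a route or configuration change) is AVAudioEngineConfigurationChange, alongside the audio session's interruption notification. A hedged sketch; the restart logic is illustrative only and not a confirmed fix for the reported issue:

import AVFoundation

// Keep the observer tokens alive for as long as the engine should be watched.
final class EngineWatcher {
    private var tokens: [NSObjectProtocol] = []

    func observeEngineStops(engine: AVAudioEngine) {
        tokens.append(NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main) { _ in
            if !engine.isRunning {
                // Re-install taps / reconnect nodes as needed, then try to restart.
                try? engine.start()
            }
        })
        tokens.append(NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main) { note in
            print("Audio session interruption: \(note.userInfo ?? [:])")
        })
    }
}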
Is there a way to get lossless music playback on macOS?
I noticed that while playing back the same tracks via MusicKit on different OSes I get different results regarding the audio files being streamed. Playing back a lossless file with 24-bit 48 kHz and watching the Console for RemotePlayerService I get:

on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo;
on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100

While the iPad looks perfect, the Mac does not. Is there a way to fix this issue on macOS? BTW: I switched the Audio MIDI Setup settings before, after and while the macOS app was launched. I also switched to different output devices. I wasn't able to change the bad audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, Xcode 16.4 and 26 beta 1. The AudioVariants of the album/tracks are .dolbyAtmos, .lossless, .lossyStereo. Apple Music displays Lossless 24-bit/48 kHz ALAC when clicking on the player control icon on macOS. I hope there are only some missing or misconfigured properties to get macOS up to par. Thanks :-)
0
1
125
Jun ’25
Process to request the restricted entitlement behind “DJ with Apple Music” (tempo control / time-stretch on Apple Music streams)?
Hi, I'm an iOS developer building an app with a use case that needs advanced playback on Apple Music subscription streams, specifically:
• Real-time tempo change (BPM) during playback, i.e., time-stretch with key-lock, not just crossfade.
• Beat-matched transitions between tracks.

From what I can tell, this capability seems to exist only for approved partners and isn't available through public MusicKit.

Question: What's the official request path to be evaluated for that restricted partner entitlement (application form, questionnaire, NDA, or internal team/BD contact)? If the entitlement identifier is internal, how can I get my account routed to the right Apple Music team?

For reference, publicly announced partners include Algoriddim djay, Serato DJ Pro, rekordbox (AlphaTheta), and Engine DJ, all of which appear to implement mixing features that imply advanced playback (tempo/beat-matching) on Apple Music content. I'd prefer not to share product details publicly for the moment and can provide specifics privately if needed. Thanks in advance!
0
1
224
Oct ’25
The files generated using AVAudioRecorder have a constant size of only 4kb
Hello. My app uses AVAudioRecorder to generate recording files; for some users the resulting files are consistently only 4 KB in size. Most users generate audio files normally, with only a few users experiencing this phenomenon occasionally. After uninstalling and reinstalling the app, it works normally, but the problem reappears after a period of time. I have compared the problematic audio files: the ones generated each time are identical and cannot be played. I added the audioRecorderDidFinishRecording delegate method, which shows that the recording was completed normally. The user also reported that the recording is normal, but there is a problem with the generated file. How should I handle this issue? Looking forward to your reply. (A hedged Swift sketch of the same setup with error checks follows this entry.)

- (void)startRecordWithOrderID:(NSString *)orderID {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [audioSession setActive:YES error:nil];

    NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
    [settings setObject:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [settings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    [settings setObject:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    [settings setObject:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

    NSString *path = [WDUtility createDirInDocument:@"audios" withOrderID:orderID withPathExtension:@"wav"];
    NSURL *tmpFile = [NSURL fileURLWithPath:path];

    recorder = [[AVAudioRecorder alloc] initWithURL:tmpFile settings:settings error:nil];
    [recorder setDelegate:self];
    [recorder prepareToRecord];
    [recorder record];
}
0
0
111
Jul ’25
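A fixed-size, unplayable file suggests recording fails right after the file is created, and the Objective-C code above ignores every NSError and boolean result, so the failure point is invisible. A hedged Swift sketch of the same setup with each step checked; this is a diagnostic aid under that assumption, not a confirmed fix, and it is also worth verifying microphone permission at the moment recording starts:

import AVFoundation

// Same settings as the post, with session activation, recorder creation, and
// the prepare/record results all surfaced instead of being ignored.
func startRecording(to url: URL) -> AVAudioRecorder? {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.record)
        try session.setActive(true)
    } catch {
        print("Session setup failed: \(error)")
        return nil
    }
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 8000.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false
    ]
    do {
        let recorder = try AVAudioRecorder(url: url, settings: settings)
        guard recorder.prepareToRecord(), recorder.record() else {
            print("prepareToRecord or record returned false")
            return nil
        }
        return recorder
    } catch {
        print("Recorder init failed: \(error)")
        return nil
    }
}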
AVAudioEngine failing with -10877 on macOS 26 beta, no devices detected via AVFoundation but HAL works
I'm developing a macOS audio monitoring app using AVAudioEngine, and I've run into a critical issue on the macOS 26 beta where AVFoundation fails to detect any input devices and AVAudioEngine.start() throws the familiar error -10877. FB: FB19024508

Strange behavior:
AVAudioEngine.inputNode shows no channels or input format on bus 0.
AVAudioEngine.start() fails with -10877 (AudioUnit connection error).
AVCaptureDevice.DiscoverySession returns zero audio devices.
Microphone permission is granted (authorized), and the app is properly signed and sandboxed with com.apple.security.device.audio-input.

However, the CoreAudio HAL does detect all input/output devices: using AudioObjectGetPropertyDataSize and AudioObjectGetPropertyData with kAudioHardwarePropertyDevices, I can enumerate 14+ devices, including AirPods, USB DACs, and BlackHole. This suggests the lower-level audio stack is functional. (A sketch of that HAL enumeration follows this entry.)

I have tried:
Resetting CoreAudio with sudo killall coreaudiod
Rebuilding and re-signing the app
Clearing TCC with tccutil reset Microphone
Running on Apple Silicon and testing Rosetta/native detection via sysctl.proc_translated
Using a fallback mechanism that logs device info from HAL and rotates logs for submission via Feedback Assistant

I have submitted logs and a reproducible test case via Feedback Assistant: FB19024508
0
0
273
Jul ’25
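For reference, a minimal sketch of the kind of HAL enumeration the post says still works when AVFoundation sees no devices; this is the standard kAudioHardwarePropertyDevices query, with error handling trimmed for brevity:

import CoreAudio

// Return the IDs of every audio device the HAL currently knows about.
func allAudioDeviceIDs() -> [AudioObjectID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &dataSize) == noErr else {
        return []
    }
    let count = Int(dataSize) / MemoryLayout<AudioObjectID>.size
    var deviceIDs = [AudioObjectID](repeating: 0, count: count)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &dataSize, &deviceIDs) == noErr else {
        return []
    }
    return deviceIDs
}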