Device: iPhone 17 Pro
iOS Version: iOS 26.1
Camera: Ultra-wide (0.5x) using AVCaptureSession
Our camera app freezes on iPhone 17 when switching frame rates (30fps ↔ 60fps). This works fine on iPhone 16 Pro and earlier.
What We've Observed:
Freeze happens on frame rate change - particularly when stabilization is enabled
Thread.sleep is used - to allow camera hardware to settle before re-enabling stabilization
Works on older iPhones - only iPhone 17 exhibits this behavior
Console shows these errors before freeze:
<<<< FigXPCUtilities >>>> signalled err=18446744073709534335
<<<< FigCaptureSourceRemote >>>> err=-17281
Is Thread.sleep on the main thread causing the freeze? Should all camera configuration be on a background queue?
Is there something specific about iPhone 17 ultra-wide camera that requires different handling?
Should we use session.beginConfiguration() / session.commitConfiguration() instead of direct device configuration?
Is calling setFrameRate from a property's didSet (which runs synchronously) problematic?
Are the FigCaptureSourceRemote errors (-17281) indicative of the problem, and what do they mean?
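For reference, a minimal sketch of the frame-rate switch done entirely on a dedicated session queue inside a begin/commit configuration pair, with no Thread.sleep. sessionQueue, device, and the fps parameter are placeholders, and the active format is assumed to support the requested rate.
import AVFoundation

// Sketch only: switch the frame rate on a session queue, bracketed by
// begin/commitConfiguration, instead of sleeping on the main thread.
func setFrameRate(_ fps: Int32, session: AVCaptureSession,
                  device: AVCaptureDevice, sessionQueue: DispatchQueue) {
    sessionQueue.async {
        session.beginConfiguration()
        defer { session.commitConfiguration() }
        do {
            try device.lockForConfiguration()
            let duration = CMTime(value: 1, timescale: fps) // assumes the active format supports fps
            device.activeVideoMinFrameDuration = duration
            device.activeVideoMaxFrameDuration = duration
            device.unlockForConfiguration()
        } catch {
            print("Failed to lock device for configuration: \(error)")
        }
    }
}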
My workout watch app supports audio playback during exercise sessions.
When users carry an Apple Watch, an iPhone, and AirPods, with the AirPods connected to the iPhone, I want to route audio from the Apple Watch to the AirPods for playback. I've implemented this functionality using the following code.
try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
try await session.activate()
When users are playing music on the iPhone and trigger my code in the watch app, Apple Watch correctly guides users to select AirPods, pauses the iPhone's music, and plays my audio.
However, when playback finishes and I end the session using the code below:
try session.setActive(false, options:[.notifyOthersOnDeactivation])
the iPhone doesn't automatically resume the previously interrupted music playback; it requires manual intervention.
Is this expected behavior, or am I missing other important steps in my code?
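For reference, a minimal sketch of the teardown order in question (player is a stand-in for whatever object is actually doing the playback). Deactivating the session while audio I/O is still running can throw, so the sketch deactivates only after playback has fully stopped:
// Sketch of the teardown order; `player` is an assumed placeholder.
func finishPlayback(session: AVAudioSession) {
    player.stop()
    do {
        // Deactivate only after playback has fully stopped; deactivating while
        // audio I/O is still running can throw ("session is busy").
        try session.setActive(false, options: [.notifyOthersOnDeactivation])
    } catch {
        print("Deactivation failed: \(error)")
    }
}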
Topic:
Media Technologies
SubTopic:
Audio
Is there any feasible way to get a Core Audio device's system effect status (Voice Isolation, Wide Spectrum)?
AVCaptureDevice provides convenience properties for system effects for video devices. I need to get this status for Core Audio input devices.
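For context, the closest thing on the capture side appears to be the class-level microphone mode API on AVCaptureDevice, but it only reports the system-wide mode rather than a per-Core-Audio-device status, which is exactly the gap (sketch below, macOS 12 and later assumed):
import AVFoundation

// The system-wide microphone mode (Standard / Voice Isolation / Wide Spectrum)
// is exposed as class properties on AVCaptureDevice, not per Core Audio device.
switch AVCaptureDevice.activeMicrophoneMode {
case .voiceIsolation: print("Voice Isolation is active")
case .wideSpectrum:   print("Wide Spectrum is active")
case .standard:       print("Standard mode")
@unknown default:     print("Unknown microphone mode")
}
print("Preferred mode: \(AVCaptureDevice.preferredMicrophoneMode)")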
I'm trying to benchmark a Core Image filter chain's memory footprint and noticed a weird quirk in Instruments.
On a real device, even with a simple Core Image chain, the memory balloons each time I run the filter. See the attached screenshots.
Running on iPhone 17 Pro:
Running on simulator (M2 MacBook Pro):
As you can see, there's a huge build-up of 4 MB "VM: IOSurface" memory on the real device, but the simulator seems to clean it up correctly.
Here's my basic code:
func processImage() {
    guard let inputImage = ContentViewModel.loadImageFromBundle(name: "kitty.HEIC") else {
        print("Failed to load sample_image from bundle")
        return
    }

    var outputImage = inputImage
    outputImage = outputImage.applyingFilter("CIBloom", parameters: [
        kCIInputRadiusKey: 20,
        kCIInputIntensityKey: 0.8
    ])

    DispatchQueue.global(qos: .userInitiated).async {
        let data = self.context.jpegRepresentation(of: outputImage, colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
        if let data = data, let uiImage = UIImage(data: data) {
            DispatchQueue.main.async {
                self.displayImage = Image(uiImage: uiImage)
            }
        }
    }
}
Why is this happening? It seems like a bug to me, or perhaps I need to release an object. At the very least it makes it challenging to measure memory usage.
Any help is greatly appreciated.
Alex
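One mitigation worth trying (an untested suggestion, not a confirmed fix): wrap the render in an explicit autoreleasepool so intermediate autoreleased buffers can be reclaimed after each run. This reuses the names from the snippet above (self.context, outputImage, displayImage) and would replace the dispatch block:
DispatchQueue.global(qos: .userInitiated).async {
    autoreleasepool {
        // Drain intermediate buffers created during rendering as soon as this block ends.
        let data = self.context.jpegRepresentation(of: outputImage,
                                                   colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
        if let data = data, let uiImage = UIImage(data: data) {
            DispatchQueue.main.async {
                self.displayImage = Image(uiImage: uiImage)
            }
        }
    }
}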
Hi,
We identified massive amounts of leaked memory with the tvOS 26 standard player user interface as soon as chapters (navigation markers) are involved.
Artwork images associated with chapters are not correctly released anymore, leaking memory in chunks of several MiBs.
Over time apps will be terminated by the system due to excessive memory consumption.
The issue was reported to Apple as tvOS 26 regression: Huge memory leaks associated with navigation marker artworks displayed in the tvOS standard user interface, filed under FB21160665.
Hi all,
As soon as audio is played in any app, coreaudiod inserts a sleep-prevention assertion for both the system and the display.
Can I somehow stop the insertion of the display sleep assertion?
pid 223(coreaudiod): [0x00004e9e00058dc2] 00:03:18 PreventUserIdleDisplaySleep named: "com.apple.audio.AppleGFXHDAEngineOutputDP:10001:0:{B31A-08C6-00000000}.context.preventuseridledisplaysleep"
Created for PID: 4145.
where PID 4145 is Spotify.
But it doesn't matter which app is playing the audio.
Any help would be appreciated.
Thanks
Topic:
Media Technologies
SubTopic:
Audio
I want to confirm whether this is a bug or a programming error. It is very easy to reproduce by modifying the AVCam sample code. Steps to reproduce:
Add an AVCaptureVideoDataOutput to the AVCaptureSession (no need to set a delegate) in the AVCam sample code's CaptureService actor:
private let videoDataOutput = AVCaptureVideoDataOutput()
and then, in the configureSession method, add the following lines:
try addOutput(videoDataOutput)
if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
}
Next, modify the setHDRVideoEnabled method:
/// Sets whether the app captures HDR video.
func setHDRVideoEnabled(_ isEnabled: Bool) {
    // Bracket the following configuration in a begin/commit configuration pair.
    captureSession.beginConfiguration()
    defer { captureSession.commitConfiguration() }
    do {
        // If the current device provides a 10-bit HDR format, enable it for use.
        if isEnabled, let format = currentDevice.activeFormat10BitVariant {
            try currentDevice.lockForConfiguration()
            currentDevice.activeFormat = format
            currentDevice.unlockForConfiguration()
            isHDRVideoEnabled = true
            if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange) {
                videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]
            }
        } else {
            captureSession.sessionPreset = .high
            isHDRVideoEnabled = false
            if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_32BGRA) {
                print("Setting SDR pixel format \(kCVPixelFormatType_32BGRA)")
                videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            }
            try currentDevice.lockForConfiguration()
            currentDevice.activeColorSpace = .sRGB
            currentDevice.unlockForConfiguration()
        }
    } catch {
        logger.error("Unable to obtain lock on device and can't enable HDR video capture.")
    }
}
The problem is that toggling HDR on and off no longer works in video mode. If you turn HDR on and then off, the device's active format does not change (setting sessionPreset has no effect). This does not happen if the video data output is not added to the session.
Is there any workaround available?
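One workaround worth trying (untested sketch; it would replace the sessionPreset line in the else branch above): stop relying on sessionPreset and explicitly pick a non-10-bit format with the current dimensions. The chosen pixel format constant is an assumption and may need adjusting for your devices.
// Untested workaround sketch: explicitly select an SDR (8-bit) format
// with the same dimensions instead of relying on sessionPreset.
let currentDims = CMVideoFormatDescriptionGetDimensions(currentDevice.activeFormat.formatDescription)
if let sdrFormat = currentDevice.formats.first(where: { format in
    let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
    let pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription)
    return dims.width == currentDims.width
        && dims.height == currentDims.height
        && pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
}) {
    try currentDevice.lockForConfiguration()
    currentDevice.activeFormat = sdrFormat
    currentDevice.unlockForConfiguration()
}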
I am working on a screen recording function on Apple Vision Pro. When I use a broadcast upload extension and tap the record button, the Xcode console shows this error:
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
We create and configure the project as follows:
Create an Apple Vision Pro project.
Create a Broadcast Upload Extension Target.
Add App Group for Project Target and Extension Target, both use the same identifier.
Add "Main Camera Access", "Passthrough in Screen Capture" Capabilities for all targets.
Add "NSScreenCaptureUsageDescription", "NSMicrophoneUsageDescription" in Plist.
Add record button in view
Run a debug build on the Apple Vision Pro device; after tapping the record button, the error above is thrown.
Hi all,
Apple dropping ongoing development for FireWire devices that were supported by the Core Audio driver standard is a catastrophe for a lot of struggling musicians, who need to keep up with the security updates that come with new OS releases while continuing to use their hard-earned investments in very expensive, still-pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf indifference to the cries for ongoing support.
I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features.
Probably not the first time you gurus have had someone make the logical leap leading to a request like this, but I was wondering if it might be possible to shoehorn the code from previous versions of macOS that allowed the Mac to talk to the audio features of such devices into the Ventura version of the OS.
Would it be possible? Would it involve a lot of work? I don't think I'd be the only person willing to pay for a third-party application or utility that restored this functionality.
There have to be hundreds of thousands of people who would be happy to spare some cash to stop their multi-thousand-dollar investment in gear from being so thoughtlessly consigned to the scrap heap.
Any comments or layman-friendly explanations as to why this couldn’t happen would be gratefully received!
Thanks,
em
I'm writing a simple app for iOS and I'd like to be able to do some text to speech in it. I have a basic audio manager class with a "speak" function:
import Foundation
import AVFoundation
class AudioManager {
    static let shared = AudioManager()
    var audioPlayer: AVAudioPlayer?

    var isPlaying: Bool {
        return audioPlayer?.isPlaying ?? false
    }

    var playbackPosition: TimeInterval = 0

    func playSound(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
            print("Sound file not found")
            return
        }
        do {
            if audioPlayer == nil || !isPlaying {
                audioPlayer = try AVAudioPlayer(contentsOf: url)
                audioPlayer?.currentTime = playbackPosition
                audioPlayer?.prepareToPlay()
                audioPlayer?.play()
            } else {
                print("Sound is already playing")
            }
        } catch {
            print("Error playing sound: \(error.localizedDescription)")
        }
    }

    func stopSound() {
        if let player = audioPlayer {
            playbackPosition = player.currentTime
            player.stop()
        }
    }

    func speak(text: String) {
        let synthesizer = AVSpeechSynthesizer()
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
And my app shows text in a ScrollView:
ScrollView {
    Text(self.description)
        .padding()
        .foregroundColor(.black)
        .font(.headline)
        .background(Color.gray.opacity(0))
}.onAppear {
    AudioManager.shared.speak(text: self.description)
}
However, the text doesn't get read out (in the simulator). I see some output in the console:
Error fetching voices: Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "Invalid container metadata for _UnkeyedDecodingContainer, found keyedGraphEncodingNodeID", underlyingError: nil)). Using fallback voices.
I'm probably doing something wrong here, but not sure what.
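One thing worth checking (a hedged guess, not a confirmed diagnosis): speak(text:) creates the AVSpeechSynthesizer as a local variable, so it can be deallocated as soon as the method returns, before any speech is produced. Keeping a strong reference to the synthesizer is the usual fix, roughly:
class AudioManager {
    static let shared = AudioManager()
    // Keep the synthesizer alive for as long as the manager exists,
    // so it isn't deallocated before it finishes speaking.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}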
Topic:
Media Technologies
SubTopic:
Audio
Hello,
We are occasionally experiencing incorrect behavior with the PDFDocument method:
func page(at index: Int) -> PDFPage?
With certain PDF files, this method returns the wrong PDFPage.
This occurs on iOS 18.3, 18.5 and 18.6.2 (and maybe on other versions).
Try this PDF for instance (page 81 is returned when index = 2):
https://drive.google.com/open?id=1MHm2wjfsbWB8OiRmARUMmvODYxp4DIqP&usp=drive_fs
Note that this doesn't occur systematically with this PDF: when we make a copy of the file, we don't observe the issue.
Could this be linked to some kind of internal cache issue?
Is it possible to find an IDR frame (CMSampleBuffer) in an AVAsset H.264 video file?
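A sketch of one way this could be approached (assuming passthrough reading is acceptable; for H.264, sync samples correspond to IDR frames):
import AVFoundation
import CoreMedia

// Sketch: read compressed samples (outputSettings: nil = passthrough, no decoding)
// and report sync samples, which for H.264 are the IDR frames.
func printSyncFrames(in asset: AVAsset) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    guard reader.startReading() else { return }
    while let sample = output.copyNextSampleBuffer() {
        let attachments = CMSampleBufferGetSampleAttachmentsArray(sample, createIfNecessary: false) as? [[String: Any]]
        let notSync = attachments?.first?[kCMSampleAttachmentKey_NotSync as String] as? Bool ?? false
        if !notSync {
            let pts = CMSampleBufferGetPresentationTimeStamp(sample)
            print("Sync (IDR) frame at \(pts.seconds) s")
        }
    }
}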
Hey Swift community!
I'm exploring building a macOS app that needs to monitor what's currently playing in music apps like Spotify and Apple Music (track info, playback position, play/pause state). I'm trying to figure out the most efficient architecture before diving in.
The Goal:
Monitor playback state across multiple music players to react to changes in real-time, ideally with minimal CPU overhead since this would run continuously in the background.
Approaches I'm Considering
AppleScript / ScriptingBridge
Distributed Notifications
Native Frameworks (Apple Music only)
What's the recommended way to do this on macOS?
Are distributed notifications reliable enough to avoid polling entirely?
Is there a performance difference between AppleScript and ScriptingBridge for IPC?
For Apple Music specifically, should I use MusicKit, MediaPlayer, or stick with AppleScript?
Are there other approaches I'm missing?
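For the distributed-notification route specifically, a sketch of the kind of observer this would involve; the notification names are unofficial values seen in practice, not documented API, so treat them as assumptions:
import Foundation

// Sketch: observe player state changes without polling.
// "com.spotify.client.PlaybackStateChanged" and "com.apple.Music.playerInfo"
// are unofficial names and may change; verify before relying on them.
let center = DistributedNotificationCenter.default()

let spotifyToken = center.addObserver(forName: Notification.Name("com.spotify.client.PlaybackStateChanged"),
                                      object: nil, queue: .main) { note in
    print("Spotify: \(note.userInfo ?? [:])") // typically includes track name, artist, player state
}

let musicToken = center.addObserver(forName: Notification.Name("com.apple.Music.playerInfo"),
                                    object: nil, queue: .main) { note in
    print("Music: \(note.userInfo ?? [:])")
}
// Keep the tokens and call center.removeObserver(_:) when monitoring should stop.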
Topic:
Media Technologies
SubTopic:
General
I believe this should work:
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionaryAddValue(attrs, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2);
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, attrs, &pixelBuffer);
assert(CFDictionaryGetCount(CVBufferGetAttachments(pixelBuffer, kCVAttachmentMode_ShouldPropagate)) > 0);
But that last assert fails, so it appears the color info does not get attached.
kCVImageBufferColorPrimariesKey and the others are not among the keys listed under BufferAttributeKeys, but I think they're supposed to be allowed because they're listed by CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers().
I'm hoping that putting the color matrix info in there will control how AVAssetWriter converts the RGB to YCbCr.
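As a possible fallback (untested sketch, shown in Swift though the C call is identical; pixelBuffer is assumed to be the non-NULL buffer created above), the same color information can be attached after creation with CVBufferSetAttachment rather than passed as creation attributes:
import CoreVideo

// Attach the color info directly to the buffer with the propagating attachment mode.
CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                      kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                      kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                      kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)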
Topic:
Media Technologies
SubTopic:
Video
My app wants to convert iPhone 12 HDR video to SDR for editing.
I followed the doc Apple-HDR-Convert.
My code sets the pixBuffAttributes:
[pixBuffAttributes setObject:(id)(kCVImageBufferYCbCrMatrix_ITU_R_709_2) forKey:(id)kCVImageBufferYCbCrMatrixKey];
[pixBuffAttributes setObject:(id)(kCVImageBufferColorPrimaries_ITU_R_709_2) forKey:(id)kCVImageBufferColorPrimariesKey];
[pixBuffAttributes setObject:(id)kCVImageBufferTransferFunction_ITU_R_709_2 forKey:(id)kCVImageBufferTransferFunctionKey];
playerItemOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pixBuffAttributes];
But when I inspect the playerItemOutput's output buffer:
CFTypeRef colorAttachments = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL);
CFTypeRef colorPrimaries = CVBufferGetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey, NULL);
CFTypeRef colorTransFunc = CVBufferGetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey, NULL);
NSLog(@"colorAttachments = %@", colorAttachments);
NSLog(@"colorPrimaries = %@", colorPrimaries);
NSLog(@"colorTransFunc = %@", colorTransFunc);
log output:
colorAttachments = ITU_R_2020
colorPrimaries = ITU_R_2020
colorTransFunc = ITU_R_2100_HLG
The pixBuffAttributes setting seems to have no effect on the output format. Please help!
So experimenting with the new SpeechTranscriber, if I do:
let transcriber = SpeechTranscriber(
    locale: locale,
    transcriptionOptions: [],
    reportingOptions: [.volatileResults],
    attributeOptions: [.audioTimeRange]
)
only the final result has audio time ranges, not the volatile results.
Is this a performance consideration? If there is no performance problem, it would be nice to have the option to also get speech time ranges for volatile responses.
I'm not presenting the volatile text in the UI at all; I was just trying to keep statistics about the speech and non-speech noise levels, so I can determine when the level falls under the noise floor for a while.
The goal is to finalize the recording automatically when the noise level indicates that the user has finished speaking.
I have an iPadOS M-processor application with two different running configurations.
In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone. The input/output nodes of the AVAudioEngine are configured with voice processing enabled. The built-in mic is formatted for 1 channel at 48 kHz.
In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone. The input/output nodes of the AVAudioEngine are configured with voice processing disabled. The external mic is formatted for 2 channels at 44.1 kHz.
I've written a configuration manager designed to safely switch between these two configurations. It works by stopping AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample-rates, and setting the appropriate state for voice processing to either true or false as required by the configuration. Finally the new audio graph is constructed by attaching appropriate nodes, connecting them, and re-starting AVAudioEngine
I'm experiencing what I believe is a race condition between switching voice processing on or off and then trying to rebuild and start the new audio graph. Even though the notifications I dump to the console indicate that my requested input and sample-rate settings are in place, I crash when trying to start the audio engine because the sample rate is wrong. Investigating further, it looks like the switch from remote I/O to voice-processing I/O (or vice versa) has not yet actually completed. I introduced a 100 ms delay and that seems to help, but it is obviously not a reliable way to build software that must work consistently.
How can I make sure that what are apparently asynchronous configuration changes to the shared audio session and the input/output nodes have completed before I go on?
I tried using route change notifications from the shared AVAudioSession but these lie. They say my preferred mic input and sample-rate setting is in place but when I dump the AVAudioEngine graph to the debugger console, I still see the wrong sample rate assigned to the input/output nodes. Also these are the wrong AU nodes. That is, VPIO is still in place when RIO should be, or vice-versa.
How can I make the switch reliable without arbitrary time delays?
Is my configuration manager approach appropriate (question for Apple engineers)?
I have a Catalyst app ('container') which hosts an embedded AUv3 Audio Unit extension ('plugin'). This used to work for years and has worked with this project until a few days ago.
it still works on iOS as expected
on macOS the extension is never registered/installed and won't load
extension won't show up with AUVal
seems to have stopped working with the Xcode 26.1 update
I'm fairly certain the problem is not code related (i.e. likely build settings, project settings, entitlements, signing, etc.)
I have compared all settings with another still-working project and can't find any meaningful difference
(I can't request code-level support because even the minimal thing vastly exceeds the 250 lines of code limit.)
How can I debug the issue? I literally don't know where to start to fix this problem, short of rebuilding the entire thing and hoping that it magically starts working again.
I’m building a macOS video editor that uses AVComposition and AVVideoComposition.
Initially, my renderer creates a composition with some default video/audio tracks:
@Published var composition: AVComposition?
@Published var videoComposition: AVVideoComposition?
@Published var playerItem: AVPlayerItem?
Then I call a buildComposition() function that inserts all the default video segments.
Later in the editing workflow, the user may choose to add their own custom video clip. For this I have a function like:
private func handlePickedVideo(_ url: URL) {
    guard url.startAccessingSecurityScopedResource() else {
        print("Failed to access security-scoped resource")
        return
    }
    let asset = AVURLAsset(url: url)
    let videoTracks = asset.tracks(withMediaType: .video)
    guard let firstVideoTrack = videoTracks.first else {
        print("No video track found")
        url.stopAccessingSecurityScopedResource()
        return
    }
    renderer.insertUserVideoTrack(from: asset, track: firstVideoTrack)
    url.stopAccessingSecurityScopedResource()
}
What I want to achieve is the same behavior professional video editors provide: after the composition has already been initialized and built, the user should be able to add a new video clip and the composition should update live, meaning the preview player immediately reflects the change without rebuilding everything from scratch.
How can I structure my AVComposition / AVMutableComposition and my rendering pipeline so that adding a new clip later updates the existing composition in real time (similar to Final Cut/Adobe Premiere), instead of needing to rebuild everything from zero?
You can find a runnable version of this entire setup at: https://github.com/zaidbren/SimpleEditor
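For what it's worth, a hedged sketch of the insert-and-refresh step under these assumptions: the renderer's composition is actually an AVMutableComposition, and whoever owns the AVPlayer swaps in a new item whenever the published playerItem changes.
// Sketch only: mutate the existing composition in place, then publish a fresh
// AVPlayerItem so the preview reflects the new clip without a full rebuild.
func insertUserVideoTrack(from asset: AVAsset, track sourceTrack: AVAssetTrack) {
    guard let mutableComposition = composition as? AVMutableComposition else { return }
    let newTrack = mutableComposition.addMutableTrack(withMediaType: .video,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid)
    do {
        try newTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                      of: sourceTrack,
                                      at: mutableComposition.duration) // append; adjust for your timeline model
    } catch {
        print("Failed to insert clip: \(error)")
        return
    }
    // AVPlayer doesn't observe composition mutations, so publish a new item.
    let newItem = AVPlayerItem(asset: mutableComposition)
    newItem.videoComposition = videoComposition // reuse or rebuild instructions as needed
    playerItem = newItem
}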