Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post · Replies · Boosts · Views · Created

coreaudiod display sleep
Hi all, as soon as audio is played in any app, coreaudiod inserts a sleep-prevention assertion for both the system AND the display. Can I somehow stop the insertion of the display sleep assertion?

    pid 223(coreaudiod): [0x00004e9e00058dc2] 00:03:18 PreventUserIdleDisplaySleep named: "com.apple.audio.AppleGFXHDAEngineOutputDP:10001:0:{B31A-08C6-00000000}.context.preventuseridledisplaysleep" Created for PID: 4145.

Here PID 4145 is Spotify, but it doesn't matter which app is playing the audio. Any help would be appreciated. Thanks
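For anyone hitting the same thing: the full assertion list can be dumped at any time with a standard macOS diagnostic command (a general inspection step, not a fix from this thread):

    pmset -g assertions

The output lists every active power assertion with its owning process, which makes it easy to confirm whether coreaudiod is still holding a PreventUserIdleDisplaySleep assertion after playback stops.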
0
0
75
Nov ’25
AVCaptureSession setting preset has no effect if HDR configured with AVCaptureVideoDataOutput
I want to confirm whether this is a bug or a programming error. It is very easy to reproduce by modifying the AVCam sample code.

Steps to reproduce:

1. Add an AVCaptureVideoDataOutput to the AVCaptureSession (no need to set a delegate) in the AVCam sample code's CaptureService actor:

    private let videoDataOutput = AVCaptureVideoDataOutput()

and then in the configureSession method, add the following lines:

    try addOutput(videoDataOutput)
    if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
        videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
    }

2. Next, modify the set-HDR method:

    /// Sets whether the app captures HDR video.
    func setHDRVideoEnabled(_ isEnabled: Bool) {
        // Bracket the following configuration in a begin/commit configuration pair.
        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }
        do {
            // If the current device provides a 10-bit HDR format, enable it for use.
            if isEnabled, let format = currentDevice.activeFormat10BitVariant {
                try currentDevice.lockForConfiguration()
                currentDevice.activeFormat = format
                currentDevice.unlockForConfiguration()
                isHDRVideoEnabled = true
                if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange) {
                    videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String: kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]
                }
            } else {
                captureSession.sessionPreset = .high
                isHDRVideoEnabled = false
                if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_32BGRA) {
                    print("Setting sdr pixel format \(kCVPixelFormatType_32BGRA)")
                    videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String: kCVPixelFormatType_32BGRA]
                }
                try currentDevice.lockForConfiguration()
                currentDevice.activeColorSpace = .sRGB
                currentDevice.unlockForConfiguration()
            }
        } catch {
            logger.error("Unable to obtain lock on device and can't enable HDR video capture.")
        }
    }

The problem now is that toggling HDR on and off no longer works in video mode. After setting HDR on, if you set HDR off, the device's active format does not change (setting sessionPreset has no effect). This does not happen if the video data output is not added to the session. Is there any workaround available?
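A workaround worth trying (a sketch, not a confirmed fix: it assumes the root cause is that sessionPreset changes are ignored while an AVCaptureVideoDataOutput is attached) is to restore a concrete non-HDR format by setting activeFormat directly instead of relying on the preset. This method is assumed to live in the same CaptureService actor, with AVFoundation and CoreMedia imported:

    // Hypothetical helper: restore an SDR format explicitly rather than via sessionPreset.
    // `currentDevice` is the AVCaptureDevice used in the post above.
    func restoreSDRFormat() throws {
        // Pick an 8-bit bi-planar format; the filter criterion here is illustrative.
        if let sdrFormat = currentDevice.formats.first(where: {
            CMFormatDescriptionGetMediaSubType($0.formatDescription)
                == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        }) {
            try currentDevice.lockForConfiguration()
            currentDevice.activeFormat = sdrFormat
            currentDevice.activeColorSpace = .sRGB
            currentDevice.unlockForConfiguration()
        }
    }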
1
0
250
Nov ’25
Broadcast Upload Extension does not work on Apple Vision Pro, throws "getMXSessionProperty unsupported"
I am working on a screen-record function on Apple Vision Pro. When I use a broadcast upload extension, after I tap the record button, the Xcode console shows the exception:

    <<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606

We create and configure the project as follows:

1. Create an Apple Vision project.
2. Create a Broadcast Upload Extension target.
3. Add an App Group for the project target and the extension target, both using the same identifier.
4. Add the "Main Camera Access" and "Passthrough in Screen Capture" capabilities for all targets.
5. Add "NSScreenCaptureUsageDescription" and "NSMicrophoneUsageDescription" to the Info.plist.
6. Add a record button to the view.
7. Run a debug build on an Apple Vision Pro device; after tapping the record button, the exception is thrown.
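For context, a broadcast upload extension's principal class is an RPBroadcastSampleHandler subclass along these lines (a minimal sketch of the standard ReplayKit entry point, not code from the post, and it does not by itself avoid the audio-session error above):

    import ReplayKit
    import CoreMedia

    class SampleHandler: RPBroadcastSampleHandler {
        override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
            // Called when the user starts the broadcast.
        }

        override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                          with sampleBufferType: RPSampleBufferType) {
            switch sampleBufferType {
            case .video:
                break // Forward or encode video frames here.
            case .audioApp, .audioMic:
                break // Handle audio buffers here.
            @unknown default:
                break
            }
        }
    }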
1
0
565
Nov ’25
PDFKit doesn't return the correct page
Hello, we are occasionally experiencing wrong behavior with the PDFDocument method:

    func page(at index: Int) -> PDFPage?

With certain PDF files, this method returns the wrong PDFPage. This occurs on iOS 18.3, 18.5, and 18.6.2 (and maybe on other versions). Try this PDF, for instance (page 81 is returned when index = 2): https://drive.google.com/open?id=1MHm2wjfsbWB8OiRmARUMmvODYxp4DIqP&usp=drive_fs

Also, the issue doesn't occur systematically with this PDF. When we make a copy of the file, we don't observe the issue. Could this be linked to some kind of internal cache issue?
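One way to make the mismatch observable in code (a small diagnostic sketch; documentURL is a placeholder for the affected file's location) is to round-trip each page through index(for:) and check that PDFKit agrees with itself:

    import PDFKit

    if let document = PDFDocument(url: documentURL) {
        for index in 0..<document.pageCount {
            guard let page = document.page(at: index) else { continue }
            // If page(at:) is consistent, index(for:) should return the same index.
            let roundTrip = document.index(for: page)
            if roundTrip != index {
                print("Mismatch: page(at: \(index)) maps back to index \(roundTrip)")
            }
        }
    }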
1
0
307
Nov ’25
When does AVPlayer exclude _HLS_part query param in LL-HLS requests?
Hello, I'm investigating an issue with LL-HLS playback using AVPlayer, specifically during DVR Live seeking (seeking to a past time). I noticed that in certain seeking scenarios, AVPlayer sends a Blocking Playlist Reload request that includes the _HLS_msn parameter but is missing the _HLS_part parameter. While I understand this is compliant with the HLS spec, I would like to know the specific criteria AVPlayer uses to decide when to drop the _HLS_part parameter. Does AVPlayer intentionally omit the part info when it determines that loading a specific partial segment is unnecessary during a seek operation? Clarification on this behavior would help us greatly in debugging our stream delivery. Thanks in advance.
1
0
150
Nov ’25
Best Approach for Monitoring Music Playback State Across Multiple Apps?
Hey Swift community! I'm exploring building a macOS app that needs to monitor what's currently playing in music apps like Spotify and Apple Music (track info, playback position, play/pause state). I'm trying to figure out the most efficient architecture before diving in.

The goal: monitor playback state across multiple music players to react to changes in real time, ideally with minimal CPU overhead since this would run continuously in the background.

Approaches I'm considering:

- AppleScript / ScriptingBridge
- Distributed notifications
- Native frameworks (Apple Music only)

What's the recommended way to do this on macOS? Are distributed notifications reliable enough to avoid polling entirely? Is there a performance difference between AppleScript and ScriptingBridge for IPC? For Apple Music specifically, should I use MusicKit, MediaPlayer, or stick with AppleScript? Are there other approaches I'm missing?
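On the distributed-notification route: Spotify and the Music app both post playerInfo notifications that third-party apps conventionally observe. A minimal sketch follows; note the notification names are unofficial, widely used conventions rather than documented API, so treat them as assumptions:

    import Foundation

    let center = DistributedNotificationCenter.default()

    // Conventional (undocumented) notification names posted by the two players.
    let names = [
        Notification.Name("com.apple.Music.playerInfo"),
        Notification.Name("com.spotify.client.PlaybackStateChanged"),
    ]

    var tokens: [NSObjectProtocol] = []
    for name in names {
        tokens.append(center.addObserver(forName: name, object: nil, queue: .main) { notification in
            // userInfo typically carries track name, artist, and player state.
            print("\(name.rawValue): \(notification.userInfo ?? [:])")
        })
    }

    RunLoop.main.run() // Keep a command-line sketch alive to receive notifications.

Because the players push these notifications on state changes, this approach avoids polling entirely, at the cost of relying on unofficial names that could change.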
0
0
99
Nov ’25
Attaching color properties to CVPixelBufferRef
I believe this should work:

    CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionaryAddValue(attrs, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2);
    CFDictionaryAddValue(attrs, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2);
    CFDictionaryAddValue(attrs, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2);

    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, attrs, &pixelBuffer);
    assert(CFDictionaryGetCount(CVBufferGetAttachments(pixelBuffer, kCVAttachmentMode_ShouldPropagate)) > 0);

But that last assert fails, so it appears the color info does not get attached. kCVImageBufferColorPrimariesKey and the others are not among the keys listed under BufferAttributeKeys, but I think they're supposed to be allowed because they're listed by CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers(). I'm hoping that putting the color matrix info in there will control how AVAssetWriter converts the RGB to YCbCr.
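An alternative worth trying (a sketch based on the general CoreVideo attachment API, not a confirmed answer to the question): instead of passing the color keys as creation attributes, attach them to the buffer after creation with CVBufferSetAttachment, which writes directly into the propagated-attachment dictionary the assert inspects. The dimensions here are stand-ins:

    import CoreVideo

    let width = 1920, height = 1080 // stand-in dimensions
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32ARGB, nil, &pixelBuffer)

    if status == kCVReturnSuccess, let buffer = pixelBuffer {
        // Attach color information directly; shouldPropagate attachments follow
        // the buffer into format descriptions and downstream consumers.
        CVBufferSetAttachment(buffer, kCVImageBufferColorPrimariesKey,
                              kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
        CVBufferSetAttachment(buffer, kCVImageBufferTransferFunctionKey,
                              kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
        CVBufferSetAttachment(buffer, kCVImageBufferYCbCrMatrixKey,
                              kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
    }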
2
0
337
Nov ’25
Issue with FairPlay Streaming Certificate SDK 26.x
Hi, I'm trying to create a FairPlay Streaming certificate for SDK 26.x. Worth mentioning: we already have two (1024- and 2048-bit), and we are only offered the option to use our previous 1024-bit certificate (which we do not want, because we want a 2048-bit cert).

Our main issue is that when I upload a new CSR file, the "Continue" button stays gray and we cannot move forward in the process. The CSR file was created with this command:

    openssl req -out csr_2048.csr -new -newkey rsa:2048 -keyout priv_key_2048.pem -subj /CN=SubjectName/OU=OrganizationalUnit/O=Organization/C=US

Some help would be appreciated. Thanks in advance. Best,
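One quick sanity check before re-uploading (a standard OpenSSL inspection command, offered as a suggestion rather than a known fix) is to confirm the CSR actually parses and carries the expected 2048-bit key:

    openssl req -in csr_2048.csr -noout -text

The output should show "Public-Key: (2048 bit)" in the Subject Public Key Info section; if the portal rejects the file silently, a malformed or wrongly encoded CSR is one possible cause.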
1
0
629
Nov ’25
Mac Catalyst: AUv3 extension no longer works on macOS, still works on iOS
I have a Catalyst app ('container') that hosts an embedded AUv3 Audio Unit extension ('plugin'). This worked for years, and worked with this project until a few days ago:

- It still works on iOS as expected.
- On macOS the extension is never registered/installed and won't load.
- The extension won't show up with auval.
- It seems to have stopped working with the Xcode 26.1 update.

I'm fairly certain the problem is not code related (i.e. it is likely build settings, project settings, entitlements, signing, etc.). I have compared all settings with another still-working project and can't find any meaningful difference. (I can't request code-level support because even the minimal project vastly exceeds the 250-lines-of-code limit.)

How can I debug the issue? I literally don't know where to start to fix this problem, short of rebuilding the entire thing and hoping that it magically starts working again.
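Two command-line checks that are often useful for this class of problem (standard macOS tools, suggested as a starting point rather than a diagnosis): pluginkit shows whether LaunchServices has registered the embedded extension at all, and auval lists the Audio Units the system can see:

    pluginkit -m -v | grep -i yourplugin   # is the appex registered? ("yourplugin" is a placeholder)
    auval -a                               # list all registered Audio Units

If pluginkit doesn't show the extension after launching the container app once, the problem is at the registration stage (embedding, signing, or entitlements) rather than inside the plugin's code.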
0
0
166
Nov ’25
How to dynamically update an existing AVComposition when users add a new custom video clip?
I'm building a macOS video editor that uses AVComposition and AVVideoComposition. Initially, my renderer creates a composition with some default video/audio tracks:

    @Published var composition: AVComposition?
    @Published var videoComposition: AVVideoComposition?
    @Published var playerItem: AVPlayerItem?

Then I call a buildComposition() function that inserts all the default video segments. Later in the editing workflow, the user may choose to add their own custom video clip. For this I have a function like:

    private func handlePickedVideo(_ url: URL) {
        guard url.startAccessingSecurityScopedResource() else {
            print("Failed to access security-scoped resource")
            return
        }
        let asset = AVURLAsset(url: url)
        let videoTracks = asset.tracks(withMediaType: .video)
        guard let firstVideoTrack = videoTracks.first else {
            print("No video track found")
            url.stopAccessingSecurityScopedResource()
            return
        }
        renderer.insertUserVideoTrack(from: asset, track: firstVideoTrack)
        url.stopAccessingSecurityScopedResource()
    }

What I want to achieve is the behavior professional video editors provide: after the composition has already been initialized and built, the user should be able to add a new video track and the composition should update live, meaning the preview player should immediately reflect the change without rebuilding everything from scratch.

How can I structure my AVComposition / AVMutableComposition and my rendering pipeline so that adding a new clip later updates the existing composition in real time (similar to Final Cut/Adobe Premiere), instead of needing to rebuild everything from zero? You can find a playable version of this entire setup at: https://github.com/zaidbren/SimpleEditor
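One common pattern (a sketch under the assumption that the renderer keeps a long-lived AVMutableComposition; the function and parameter names are illustrative, not from the post): mutate the mutable composition in place, then hand the player a fresh item built from an immutable copy, since an AVPlayerItem does not observe later changes to its composition:

    import AVFoundation

    func appendClip(_ asset: AVURLAsset,
                    sourceTrack: AVAssetTrack,
                    to composition: AVMutableComposition,
                    player: AVPlayer) throws {
        // Reuse an existing video track or add one lazily.
        let track = composition.tracks(withMediaType: .video).first
            ?? composition.addMutableTrack(withMediaType: .video,
                                           preferredTrackID: kCMPersistentTrackID_Invalid)!
        // Illustrative choice: append the whole clip at the current end of the timeline.
        let insertTime = composition.duration
        try track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                  of: sourceTrack,
                                  at: insertTime)
        // AVPlayerItem snapshots the composition state, so swap in a new item
        // built from an immutable copy to refresh the preview.
        let item = AVPlayerItem(asset: composition.copy() as! AVComposition)
        player.replaceCurrentItem(with: item)
    }

Replacing the current item does cause a brief preview reload; editors that need seamless updates typically also restore the previous playback time on the new item after the swap.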
0
0
322
Nov ’25
General iOS/iPadOS 26 decoding bug: MP4 unexpectedly hangs, video image frozen, audio goes on
Playback of any kind of HD H.264 MP4 file (720p, 50fps) can randomly produce a stalled image. Playback does not stop in such a case: the image is frozen/stalled while the audio and the timeline go on. By tapping play/pause or scrubbing in the timeline, playback recovers. It can also happen while scrubbing in the timeline, especially to areas not loaded yet (progressive MP4 download). The behaviour is always the same: the image is stalled/frozen, the audio goes on.

To reproduce, use the example project https://developer.apple.com/documentation/AVKit/playing-video-content-in-a-standard-user-interface with the example file https://www.keepinmind.info/test.mp4
0
0
254
Nov ’25
SpeechTranscriber supported Devices
I have the new iOS 26 SpeechTranscriber working in my application. The issue I am facing is how to determine whether the device I am running on supports SpeechTranscriber. I was able to write code that tests whether the device supports transcription, but it takes a while to run, so the result is not available when the app launches. What I am looking for is a list of the iOS 26 devices it doesn't run on. I think it's safe to assume any new device will support it, so a list of devices that can run iOS 26 but cannot do transcription would let the app decide much faster. I have determined it doesn't work on an SE 2nd gen; it works on iPhone 12, SE 3rd gen, iPhone 14 Pro, and 15 Pro. As SpeechTranscriber doesn't work in the Simulator, I can't determine it that way. I have checked the docs and they don't list the devices it doesn't work on.
1
0
433
Nov ’25
Adding AVCaptureMovieFileOutput and AVCaptureVideoDataOutput with ProRes422
Adding both AVCaptureMovieFileOutput and AVCaptureVideoDataOutput to an AVCaptureSession is supported, as stated in the documentation (snippet copied below), but when the AVCaptureDevice is configured with the ProRes422 codec, it fails unless one of the two outputs is removed from the capture session. It is very much reproducible on an iPhone 14 Pro running iOS 26.0.

"Prior to iOS 16, you can add an AVCaptureVideoDataOutput and an AVCaptureMovieFileOutput to the same session, but only one may have its connection active. If you attempt to enable both connections, the system chooses the movie file output as the active connection and disables the video data output's connection. For apps that link against iOS 16 or later, this restriction no longer exists."
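For reference, ProRes recording with a movie file output is typically requested like this (a sketch of the relevant AVFoundation calls, assuming the session and both outputs are already added and connected as in the post):

    import AVFoundation

    func requestProRes(on movieFileOutput: AVCaptureMovieFileOutput) {
        guard let connection = movieFileOutput.connection(with: .video) else { return }
        // availableVideoCodecTypes reflects what the current device/format supports.
        if movieFileOutput.availableVideoCodecTypes.contains(.proRes422) {
            movieFileOutput.setOutputSettings(
                [AVVideoCodecKey: AVVideoCodecType.proRes422],
                for: connection)
        }
    }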
0
0
217
Nov ’25
Is there an error with SpatialAudioCLI?
Hi everyone, I downloaded the source code EditingSpatialAudioWithAnAudioMix.zip from https://developer.apple.com/documentation/Cinematic/editing-spatial-audio-with-an-audio-mix. When I ran the action named "process" on the command line, the program crashed. In the source code, I found that the value of componentType is set to kAudioUnitType_FormatConverter:

    // The actual `AudioUnit`.
    public var auAudioMix = AVAudioUnitEffect()

    init() {
        // Generate a component description for the audio unit.
        let componentDescription = AudioComponentDescription(
            componentType: kAudioUnitType_FormatConverter,
            componentSubType: kAudioUnitSubType_AUAudioMix,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        auAudioMix = AVAudioUnitEffect(audioComponentDescription: componentDescription)
    }

But according to the documentation at https://developer.apple.com/documentation/avfaudio/avaudiouniteffect/init(audiocomponentdescription:), it seems that componentType cannot be set to kAudioUnitType_FormatConverter. Has anyone else encountered this problem?
1
0
161
Nov ’25
PhotoKit Background Upload Extension not working on iOS 26.2 iPhone 17 Simulator
Hi, I'm trying to implement the new PhotoKit PHBackgroundResourceUploadExtension. I created the extension, enabled full photo library access in the host app, and registered the extension point using the string com.apple.photos.background-upload. However, when I attempted to enable the extension with:

    try library.setUploadJobExtensionEnabled(true)

I received the following error:

    Error Domain=PHPhotosErrorDomain Code=-1 "(null)"

This happens when running the app on Xcode 26.1 and 26.2 Beta, using the iPhone 17 Pro Max simulator (iOS 26.1 and 26.2). My question is: is this extension supported on the Simulator? I'm asking because at the moment it's difficult for me to test this on a physical device. Also, what's the meaning of the error? Thanks.
1
1
616
Nov ’25
Background uploading extension's identifier not official yet
Recently Apple gave us the ability to upload asset resources in the background. We implemented our background upload extension, but when our CI tried to upload the app to TestFlight, we got an error that the extension point identifier (in our case com.apple.photos.backgound-upload) is not an official one. Any idea when it will become official so that we will be able to release a working background upload?
1
0
204
Nov ’25
Music in iOS 26.2
I'm running the iOS 26.2 public beta and my album artwork is missing from the Music app (I'm not using Apple Music). I use Google to get my album artwork. Do I need to wait for a new update?
1
0
157
Nov ’25
PhotosPickerItem.itemIdentifier always nil
I'm using the SwiftUI PhotosPicker to select videos from the user's Photos library and then opening the video using the PhotosPickerItem. I'm looking for a way to let the user open the same video on their other devices, since the app uses SwiftData and CloudKit to provide a recently watched list of videos. The URL from the PhotosPickerItem appears to be device specific, so I was looking at whether I could use the itemIdentifier, and the init that takes an itemIdentifier, to recreate the PhotosPickerItem on the other devices. The itemIdentifier, however, is always nil, and so can't be used this way.

Is there an alternative approach whereby the user can open a video using a PhotosPickerItem and that item would be viewable on their other devices via an item identifier or a URL that is device agnostic? The approach should not involve copying the video into other storage, as that would simply expand the user's iCloud storage use, providing a less than ideal experience. If the user has opened the video from their Photos library, there should be a way to allow the same user (e.g. the same Apple ID) to use the same app on another device to open the video again.
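One detail that may explain the nil identifier (this reflects the documented behavior of PhotosPickerItem.itemIdentifier, which is only populated when the picker is explicitly tied to a photo library): initialize the picker with photoLibrary: .shared(). A minimal sketch:

    import SwiftUI
    import PhotosUI

    struct VideoPickerView: View {
        @State private var selection: PhotosPickerItem?

        var body: some View {
            // Passing photoLibrary: .shared() is what makes itemIdentifier non-nil.
            PhotosPicker(selection: $selection,
                         matching: .videos,
                         photoLibrary: .shared()) {
                Text("Pick a video")
            }
            .onChange(of: selection) { _, item in
                // The identifier is a PHPhotoLibrary local identifier; note that
                // local identifiers are per-library and are not guaranteed to
                // match across devices, even with iCloud Photos enabled.
                print(item?.itemIdentifier ?? "nil")
            }
        }
    }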
1
0
433
Nov ’25
Correct way for an Audio Unit v3 to return fewer than requested number of samples given a buffer
I have an AUv3 plugin which uses an FFT, which requires n samples before it can produce any output. So, depending on the relation between the host's buffer size and the FFT window size, it may receive several buffers of samples while producing no output, and then dump out what it has once a sufficient number of samples have been received. This means that output is produced in fits and starts, in batches that match the FFT size (modulo oversampling). For example, if being fed buffers of 256 samples with an FFT size of 1024, the output buffer sizes will be 0 for the first 3 buffers; upon the fourth, the first 256 processed samples are returned and the remaining 768 cached; the next three buffers will return the remaining cached samples while processing and buffering subsequent ones, and so forth.

The internal mechanics of that I have solved, caching output if the current output buffer is too small, and so forth, so it all works as advertised, and the plugin reports its latency correctly. And when run as an app in demo mode, playback works as expected. In the plugin's render block, it captures the number of frames written, and if it is less than the number of frames passed in, it adjusts the mDataByteSize of the output buffers to match the actual quantity of data being returned:

    unsigned int framesWritten = (unsigned int) processHelper->processWithEvents(inAudioBufferList, outAudioBufferList, timestamp, frameCount, realtimeEventListHead);
    if (framesWritten < frameCount) {
        for (UInt32 i = 0; i < outAudioBufferList->mNumberBuffers; ++i) {
            outAudioBufferList->mBuffers[i].mDataByteSize = framesWritten * 4; // assume 4-byte floats
        }
    }

However, there are a couple of serious issues:

- auval -v fails it with:

    Render Test at 64 frames, sample rate: 22050 Hz
    ERROR: Output Buffer Size does not match requested

- When connected to Logic Pro, it appears that mDataByteSize is ignored and the entire allocated buffer is read: the audio has sections of silence snipped into it which correspond to the empty buffers being returned.

If I set Logic's buffer size to 1024 and use a 1024-sample FFT window, the plugin works correctly, but of course a plugin cannot dictate buffer size, and 1024 is too small a window size to be useful for anything but filtering very high frequencies.

This seems like it has to be a solvable problem, and most likely the issue is in how my code reports the number of usable samples in the returned buffer. So, what is the correct way for a plugin to report that it has no samples to return, but will, uh, real soon now? I know I could convert this plugin to one that does offline rendering of the entire input, but this is real-time processing, just with a fixed amount of latency, so that should not be necessary.
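For what it's worth, the behavior auval and Logic exhibit is consistent with the usual real-time Audio Unit contract: a render call is expected to produce exactly frameCount frames every time, and a plugin that needs to prime an FFT emits silence until it is primed, while reporting the window as latency so hosts can realign the audio. A minimal sketch of that pattern in an AUAudioUnit subclass (fftSize and sampleRate are illustrative names, not from the post):

    import AudioToolbox

    class FFTEffectUnit: AUAudioUnit {
        let fftSize = 1024          // illustrative FFT window size
        var sampleRate = 44100.0    // refresh from the output bus in allocateRenderResources()

        // Report the priming delay so the host can compensate for it.
        override var latency: TimeInterval {
            Double(fftSize) / sampleRate
        }
    }

    // In the render block, instead of shrinking mDataByteSize when the FFT
    // isn't primed yet, always fill the full frameCount (with zeros while
    // priming) and leave mDataByteSize untouched.

Under this contract the batches of cached samples simply go out zero-padded at first, and hosts that honor the reported latency trim the leading silence automatically.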
0
0
261
Nov ’25