Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post · Replies · Boosts · Views · Activity

Issues After Switching to v3
I've been having issues with authorization after switching from v1 to v3, and have tried the suggestions offered in a reply to an earlier post of mine. I have also tried to reach out to Apple Support a few times, but haven't received any support that helps me move forward. I have tested my token at https://jwt.io and I'm getting "Signature Verified". I've tried multiple browsers in private/incognito mode, and now when I try to sign in to my Apple Account to test my player I receive an error that states "There is a Problem Connecting. There May be an Issue with Your Network." (which is not the case). I have tried everything I can think of; I'm at a loss and would appreciate any help to get my project moving forward. This is what I am seeing in the browser developer console (using Firefox):

    Authorization failed: AUTHORIZATION_ERROR: Unauthorized
    MKError https://js-cdn.music.apple.com/musickit/v3/musickit.js:13
    authorize https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
    asyncGeneratorStep$w https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
    _next https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
    media.mydomain.com:398:19
    https://media.mydomain.com/:398
5 replies · 0 boosts · 687 views · Feb ’25
CoreMediaErrorDomain -12035 error when playing a FairPlay-protected HLS stream on iOS 18+ through the Apple Lightning AV Adapter
Our iOS/Apple TV video content playback app uses AVPlayer to play HLS video streams and supports both custom and system playback UIs. The FairPlay content key is retrieved using AVContentKeySession. AirPlay is supported too. When the iPhone is connected to a TV through the Lightning Apple Digital AV Adapter (A1438), the app is mirrored as expected.

Problem: when using an iPhone or iPad on iOS 18.1.1, FairPlay-protected HLS streams are not played and a CoreMediaErrorDomain -12035 error is received by the AVPlayerItem. Also, once the issue has occurred, the mirroring freezes (the TV indefinitely displays the app playback screen) although the app works fine on the iOS device. The content key retrieval works as expected (I can see that two content key requests are made by the system, probably one for local playback and one for the adapter, as when AirPlaying) and the error is thrown after providing the AVContentKeyResponse. Unfortunately, as far as I know, there is no documentation on CoreMediaErrorDomain errors, so I don't know what -12035 means.

The issue does not occur:
- on an iPhone on iOS 17.7 (even with FairPlay-protected HLS streams)
- when playing DRM-free video content (whatever the iOS version)
- when using the USB-C AV Adapter (whatever the iOS version)

Also worth noting: the issue does not occur with other video playback apps such as Apple TV or Netflix, although I don't have any details on the kind of streams those apps play or the way the FairPlay content key is retrieved (if any), so I don't know whether that is relevant.
5 replies · 2 boosts · 1.4k views · Feb ’25
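For anyone comparing notes on the report above: a minimal sketch of the AVContentKeySession flow it describes. fetchLicense, the certificate handling, and the identifier parsing are placeholders and assumptions, not the poster's actual code:

    import AVFoundation

    // Placeholder for the key-server round trip (SPC in, CKC out).
    func fetchLicense(spc: Data, completion: @escaping (Data) -> Void) {
        // POST the SPC to your license server and call back with the CKC.
    }

    final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVContentKeyRequest) {
            // Hypothetical: derive the content ID from the skd:// key URI.
            guard let identifier = keyRequest.identifier as? String,
                  let contentID = identifier.data(using: .utf8) else { return }
            let applicationCertificate = Data() // fetched from your server in practice

            keyRequest.makeStreamingContentKeyRequestData(forApp: applicationCertificate,
                                                          contentIdentifier: contentID,
                                                          options: nil) { spcData, error in
                guard let spc = spcData, error == nil else { return }
                fetchLicense(spc: spc) { ckc in
                    let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckc)
                    keyRequest.processContentKeyResponse(response)
                }
            }
        }
    }

Since two key requests arrive when the adapter is attached, logging keyRequest.identifier in this delegate method for each of them might reveal whether the second (external-display) request is the one that precedes the -12035.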
CoreMediaErrorDomain -15628 playback failure in iOS 26 (React Native / AVPlayer, HLS stream)
Hi, after updating to iOS 26, our app is facing playback failures with AVPlayer. The same code and streams work fine on iOS 18 and earlier.

Error: Domain[CoreMediaErrorDomain] Code[-15628] Desc[The operation couldn't be completed.] Underlying Error Domain[(null)] Code[0] Desc[(null)]

Environment:
- iOS version: iOS 26
- React Native: 0.69
- Video library: react-native-video (AVPlayer under the hood)
- Stream type: HLS (m3u8) with segment (.ts) files

Observed behaviour:
- Playback works initially on iOS 26.
- The stream fails at runtime after a few seconds/minutes (not on first load).
- Network logs show 307 redirects on some segment requests; after this, AVPlayer throws the above error.
- Playback fails intermittently on slow/unstable networks.
5 replies · 18 boosts · 1.1k views · 1d
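Not a fix, but one way to get more detail than the opaque -15628 out of AVPlayer for the report above: read the item's error log, which carries the underlying HTTP status of failing segment requests. A generic AVPlayer-side sketch (not react-native-video internals):

    import AVFoundation

    // Each new error log entry records the server response behind the failure,
    // e.g. whatever followed the 307 redirects on segment requests.
    func observeErrorLog(for item: AVPlayerItem) -> NSObjectProtocol {
        NotificationCenter.default.addObserver(forName: AVPlayerItem.newErrorLogEntryNotification,
                                               object: item,
                                               queue: .main) { _ in
            for event in item.errorLog()?.events ?? [] {
                print("\(event.errorDomain) \(event.errorStatusCode): \(event.errorComment ?? "-") uri: \(event.uri ?? "-")")
            }
        }
    }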
writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'.
I have an app that edits photos in your library. When I call

    try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)

the following is logged to the console:

    writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.

What does that mean and how can I resolve it?
Xcode Version 16.0 (16A242d), iOS 18.1 (22B82)
5 replies · 9 boosts · 2.9k views · Jan ’25
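The log above is Image I/O saying it will drop the unused alpha channel for you, so it may be harmless noise. If you want it gone, one experiment (an assumption, not a documented fix) is to flatten to an explicitly opaque CGImage yourself and write the HEIC via ImageIO instead of writeHEIFRepresentation:

    import CoreImage
    import ImageIO
    import UniformTypeIdentifiers

    enum WriteError: Error { case renderFailed }

    func writeOpaqueHEIF(_ image: CIImage, to url: URL, colorSpace: CGColorSpace) throws {
        let context = CIContext()
        guard let rgba = context.createCGImage(image, from: image.extent,
                                               format: .RGBA8, colorSpace: colorSpace),
              // noneSkipLast keeps 4 bytes per pixel but marks the image opaque.
              let cg = CGContext(data: nil, width: rgba.width, height: rgba.height,
                                 bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace,
                                 bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue),
              let dest = CGImageDestinationCreateWithURL(url as CFURL,
                                                         UTType.heic.identifier as CFString, 1, nil)
        else { throw WriteError.renderFailed }
        cg.draw(rgba, in: CGRect(x: 0, y: 0, width: rgba.width, height: rgba.height))
        guard let opaque = cg.makeImage() else { throw WriteError.renderFailed }
        CGImageDestinationAddImage(dest, opaque, nil)
        CGImageDestinationFinalize(dest)
    }

Whether the extra render pass is worth it depends on whether the warning is actually costing anything; per the message, the system already ignores the alpha when encoding.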
Terrible performance when using MusicKit's Artwork with UIKit
I'm building a UIKit app that reads the user's Apple Music library and displays it. MusicKit has the Artwork structure, which I need to use to display artwork images in the app. Since I'm not using SwiftUI I cannot use the ArtworkImage view, which is the recommended way of displaying those images, but the Artwork structure has a method that returns a URL for the image, which can be used to load it. The way I have it set up is really simple:

    extension MusicKit.Song {
        func imageURL(for cgSize: CGSize) -> URL? {
            return artwork?.url(
                width: Int(cgSize.width),
                height: Int(cgSize.height)
            )
        }

        func localImage(for cgSize: CGSize) -> UIImage? {
            guard let url = imageURL(for: cgSize),
                  url.scheme == "musicKit",
                  let data = try? Data(contentsOf: url) else {
                return nil
            }
            return .init(data: data)
        }
    }

Now, every time I access the .artwork property (so a lot of times) the main thread gets blocked and the console output gets bombarded with messages like these:

    2023-07-26 11:49:47.317195+0200 Plum[998:297199] [Artwork] Failed to create color analysis for artwork: <MPMediaLibraryArtwork: 0x289591590> with error; Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.mediaartworkd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.mediaartworkd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction.}
    2023-07-26 11:49:47.317262+0200 Plum[998:297199] [Artwork] Failed to create color analysis for artwork: file:///var/mobile/Media/iTunes_Control/iTunes/Artwork/Originals/4b/48d7b8d349d2de858413ae4561b6ba1b294dc7
    2023-07-26 11:49:47.323099+0200 Plum[998:297013] [Plum] IIOImageWriteSession:121: cannot create: '/var/mobile/Media/iTunes_Control/iTunes/Artwork/Caches/320x320/4b/48d7b8d349d2de858413ae4561b6ba1b294dc7.sb-f9c7943d-6ciLNp' error = 1 (Operation not permitted)

My guess is that the most performance-heavy task here is performing the color analysis for each artwork, but IMO backgroundColor should not be a stored property if that's the case. I am not planning to use it anywhere, and if it is the culprit it should be a computed async property so it doesn't block the caller. I know I can move the call to a background thread, and that fixes the blocked main thread, but the loading times for each artwork are still terribly slow and that impacts the UX. SwiftUI's ArtworkImage loads the artworks much quicker and without the errors, so there must be a better way to do it.
5 replies · 0 boosts · 1.5k views · Mar ’25
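A sketch of the usual UIKit workaround for the post above: resolve the artwork URL once, load it off the main thread, and cache the decoded images. The cache policy and loader shape are assumptions, not the poster's code:

    import UIKit
    import MusicKit

    final class ArtworkLoader {
        static let shared = ArtworkLoader()
        private let cache = NSCache<NSURL, UIImage>()

        func artwork(for song: MusicKit.Song, size: CGSize) async -> UIImage? {
            // Touch .artwork here, off the main actor, so whatever work the
            // framework does behind it can't stall the UI.
            guard let url = song.artwork?.url(width: Int(size.width), height: Int(size.height))
            else { return nil }
            if let cached = cache.object(forKey: url as NSURL) { return cached }
            // Data(contentsOf:) handled the musicKit:// scheme in the poster's code;
            // run it on a background task instead of the caller's thread.
            let data = try? await Task.detached(priority: .utility) {
                try Data(contentsOf: url)
            }.value
            guard let data, let image = UIImage(data: data) else { return nil }
            cache.setObject(image, forKey: url as NSURL)
            return image
        }
    }

Calling this from a Task in cellForRowAt keeps the main thread free; it doesn't make the underlying artwork service any faster, but it stops the stalls from compounding across cells.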
ApplicationMusicPlayer / MediaPlayer Refuses to Play
We use BassDSDPlayer / SFBAudioEngine to play just about any file, but playing Apple Music is failing. All subscriptions are up to date. We stop the SFBAudioEngine and the BassDSDPlayer before playing Apple Music, to no avail.

Prints:

    Supported files in /Users/dorian/Music/Music/Media.localized/Music/4: 28364
    Apple Music is authorized and can play catalog.
    Resetting default output device...
    Releasing BassDSDPlayer audio device...
    BassDSDPlayer: Audio device released.
    STOPPED sfbAudioDevice
    Default output device is ID: 76
    applicationQueuePlayer _establishConnectionIfNeeded timeout [ping did not pong]
    applicationQueuePlayer _establishConnectionIfNeeded timeout [ping did not pong]
    Player State - After resetting output:
    Playback Status: stopped
    Queue Count: 0
    No track is playing.
    Music player reset successfully.
    BassDSDPlayer: Audio device released.
    Default output device set successfully: 76
    Default output device is ID: 76
    Default output device set successfully: 76
    Default output device ID: 76
    Validated PlayParameters for track: squabble up
    PlayParameters: PlayParameters(id: 1781270321, kind: "song", isLibrary: nil, catalogID: nil, libraryID: nil, deviceLocalID: nil, rawValues: [:])
    Starting playback...
    Player State - After playback:
    Playback Status: stopped
    Queue Count: 1
    No track is playing.
    Notification BASS DSD NSConcreteNotification 0x600007ce2b00 {name = kUpdateSongInfo; object = { AlbumTitle = GNX; ArtistName = "Kendrick Lamar"; SongArtwork = "<NSImage 0x6000041b7ca0 Size={300, 300} RepProvider=<NSImageArrayRepProvider: 0x600003518770, reps:(\n "NSBitmapImageRep 0x600009ed9dc0 Size={300, 300} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=300x300 Alpha=NO Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600007ce15c0"\n)>>"; SongLength = "157.992"; SongTitle = "squabble up"; Source = AppleMusic; }}
    Apple Music track loaded: squabble up by Kendrick Lamar
    Player State - Before play:
    Playback Status: stopped
    Queue Count: 1
    No track is playing.
    prepareToPlay failed [no target descriptor]
    NSError Code: 1, Domain: MPMusicPlayerControllerErrorDomain
    Player State - After play:
    Playback Status: stopped
    Queue Count: 1
    No track is playing.

Code:

    func playAppleMusicTracks(tracks: [Track]) {
        AppleMusicManager.shared.isAuthorizedAndReadyForPlayback { isAuthorized in
            guard isAuthorized else {
                print("Apple Music authorization or capabilities insufficient for playback.")
                return
            }
            print("Resetting default output device...")
            self.stopSFBAudioDevice()
            self.resetMusicPlayer()
            self.resetAudioSystem()
            self.ensureOutputDeviceReady()
            Task {
                for track in tracks {
                    guard self.validatePlayParameters(for: track) else { continue }
                    do {
                        try await ApplicationMusicPlayer.shared.queue.insert(track, position: .afterCurrentEntry)
                        guard !ApplicationMusicPlayer.shared.queue.entries.isEmpty else {
                            print("Queue is empty after queuing. Playback cannot proceed.")
                            return
                        }
                        self.notifyAppleMusicTrackInfo(track)
                    } catch {
                        print("Error starting playback: \(error)")
                        if let nsError = error as NSError? {
                            print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                        }
                    }
                }
                MusicKitWrapper.shared.logPlayerState(message: "After playback")
            }
        }
    }

    @objc public class MusicKitWrapper: NSObject {
        @objc public static let shared = MusicKitWrapper()
        private let player = ApplicationMusicPlayer.shared

        // Play the current track
        @objc public func play() {
            guard !player.queue.entries.isEmpty else {
                print("Queue is empty. Cannot start playback.")
                return
            }
            logPlayerState(message: "Before play")
            Task {
                do {
                    try await player.prepareToPlay()
                    try await player.play()
                    print("Playback started successfully.")
                } catch {
                    if let nsError = error as NSError? {
                        print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                    }
                }
                logPlayerState(message: "After play")
            }
        }
    }

Any help would be appreciated. Thanks!
5 replies · 0 boosts · 602 views · Jan ’25
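One thing worth trying for the "prepareToPlay failed [no target descriptor]" error above: inserting into a never-started player's empty queue may leave it with no current entry to target, so building the queue wholesale could behave differently. A hedged sketch, not a confirmed fix:

    import MusicKit

    func playAppleMusicTracks(_ tracks: [Track]) async {
        guard let first = tracks.first else { return }
        let player = ApplicationMusicPlayer.shared
        // Replace the queue in one assignment instead of inserting track by track.
        player.queue = ApplicationMusicPlayer.Queue(for: tracks, startingAt: first)
        do {
            try await player.prepareToPlay()
            try await player.play()
        } catch {
            let nsError = error as NSError
            print("Playback failed: \(nsError.code), \(nsError.domain)")
        }
    }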
Playing music with Musickit.js in Chrome and Firefox
I'm unable to play music using MusicKit.js in the Chrome or Firefox browsers. Even using the Apple guide here: https://js-cdn.music.apple.com/musickit/v1/index.html with my Music Developer Token and a song/album URL added, it only works in Safari, not in Chrome or Firefox. I'm unsure whether this is a global issue or whether there is something I need to do to enable playback in other browsers, but as it stands it's not working for me. Thanks for any help in advance!
5 replies · 0 boosts · 946 views · Jan ’25
iOS AUv3 extension: no icon shown in host
Hi, I'm working on an AUv3 project. The app itself displays my icon. However, the AUv3 extension does not display any icon in any host app (AUM, Drambo, etc.). I thought the extension would inherit the host app icon, but that does not appear to be the case. I tried adding the icon as a 1024x1024 file to the extension target and updating the extension's Info.plist with a CFBundleIconFile key, but no luck either. It must surely be really easy; what am I missing? Thanks in advance for your help!
5 replies · 0 boosts · 129 views · May ’25
How to consume video from an RTSP service?
Hi, it seems like it's pretty easy to consume HTTP Live Streaming content in an iOS app. Unfortunately, I need to consume media from an RTSP server. It seems to me that this is a very similar thing, and that all of the underpinnings for doing it ought to be present in iOS, but I'm having a devil of a time figuring out how to make it work without doing a lot of programming.

For starters, I know that there are web-based services that can consume an RTSP stream and rebroadcast it as an HTTP Live Stream that can be easily consumed by the media players in iOS. This won't work for me because my application needs to function in an environment where there is no internet access (it's on a private Wi-Fi network where the only other thing on the network is the device that is serving the RTSP stream).

Having read everything I can get my hands on and explored third-party and open-source solutions, I've compiled the following list of ideas:

1. Using an iOS build of the open-source ffmpeg library, which supports RTSP, I've come up with a test app that can receive the RTSP packets, decode them, create UIImages out of the frames, and display those frames on-screen. This provides a crude player, but performance is poor, most likely because ffmpeg can't take advantage of any hardware acceleration. It also doesn't provide me with any way to integrate the video stream into AVFoundation, so I'm on my own as far as saving the stream to a file, transcoding it, etc.

2. I know that the AVURLAsset class doesn't directly support the RTSP scheme. Since I have access to the undecoded RTSP packets via ffmpeg, I've thought it should be possible to implement RTSP support myself via a custom NSURLProtocol, essentially fooling AVFoundation into reading those packets as if they originated in a file. I'm not sure if this would work, since the raw packets coming from the RTSP server might lack the headers that would otherwise be present in data being read from a file. I'm not even sure if AVFoundation would recognize my custom protocol.

3. If a protocol doesn't work, I've considered that I might be able to implement my own local HTTP Live Streaming server that converts the RTSP packets into an HTTP stream that the media players can read. This sounds like a terribly convoluted solution to the problem, at best, and very difficult at worst.

4. Going back to solution (1), if I could speed up the decoding by using some iOS CoreVideo function instead of ffmpeg, this solution might be okay. However, I can't find any documentation for CoreVideo on iOS (Apple only documents it for OS X).

5. I'm certainly willing to license a third-party solution if it works well and provides good performance. Unfortunately, everything I've found so far is pretty crummy and mostly just leverages ffmpeg and/or VLC.

What is most disappointing to me is that nobody seems to be able or willing to provide a solution that neatly integrates with AVFoundation. I really want to make my RTSP stream available as an AVAsset so I can use it with AVFoundation players and other classes; I don't want to build an app that relies on custom third-party code for everything. Any ideas, tips, or advice would be greatly appreciated.

Thanks,
Frank
9 replies · 1 boost · 16k views · Oct ’25
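On points (1) and (4) in the post above: the hardware-accelerated replacement for software-decoding the frames is VideoToolbox. A rough sketch, assuming you already parse H.264 SPS/PPS and NALUs out of the RTP stream yourself (the RTSP session handling stays your code):

    import VideoToolbox
    import CoreMedia

    // Create a decode session from a format description built out of the stream's
    // SPS/PPS (e.g. with CMVideoFormatDescriptionCreateFromH264ParameterSets).
    func makeDecompressionSession(for format: CMVideoFormatDescription) -> VTDecompressionSession? {
        var session: VTDecompressionSession?
        let attributes = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA] as CFDictionary
        let status = VTDecompressionSessionCreate(allocator: kCFAllocatorDefault,
                                                  formatDescription: format,
                                                  decoderSpecification: nil,
                                                  imageBufferAttributes: attributes,
                                                  outputCallback: nil,
                                                  decompressionSessionOut: &session)
        return status == noErr ? session : nil
    }

    // Decode one CMSampleBuffer wrapped around an incoming NALU.
    func decode(_ sampleBuffer: CMSampleBuffer, with session: VTDecompressionSession) {
        _ = VTDecompressionSessionDecodeFrame(session,
                                              sampleBuffer: sampleBuffer,
                                              flags: [._EnableAsynchronousDecompression],
                                              infoFlagsOut: nil) { _, _, imageBuffer, presentationTime, _ in
            // Hand the decoded CVImageBuffer to a renderer,
            // e.g. an AVSampleBufferDisplayLayer.
            _ = (imageBuffer, presentationTime)
        }
    }

This still doesn't give you an AVAsset, so it addresses the performance complaint rather than the AVFoundation-integration one; display via AVSampleBufferDisplayLayer is the closest system-level path.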
How to use the SpeechDetector Module
I am trying to use the SpeechDetector module in the Speech framework along with SpeechTranscriber, and it is giving me the error:

    Cannot convert value of type 'SpeechDetector' to expected element type 'Array.ArrayLiteralElement' (aka 'any SpeechModule')

Below is how I am using it:

    let speechDetector = Speech.SpeechDetector()
    let transcriber = SpeechTranscriber(locale: Locale.current,
                                        transcriptionOptions: [],
                                        reportingOptions: [.volatileResults],
                                        attributeOptions: [.audioTimeRange])
    speechAnalyzer = try SpeechAnalyzer(modules: [transcriber, speechDetector])
4 replies · 2 boosts · 392 views · Aug ’25
AUv3 recent "Failed to find component with type..." frequent issues
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to get the app into a state where it can no longer instantiate the audio unit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type..."). In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails. If I "Archive" the app+plugin, there is no audio unit extension in the bundle. If I switch to the audio unit extension scheme and build it, it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there. Any ideas? If I can coax an unmodified audio unit extension project into exhibiting this behavior I'll attach it here; right now what I have contains code I don't want to share.
4 replies · 1 boost · 703 views · Jan ’25
Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup.

- I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate.
- I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate.
- When I receive a push in full duplex mode, I set the active participant to the user who is speaking.

Issue: When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.

Details:
- The audio session is already active and configured with .playAndRecord.
- The input tap is already installed when the engine is started.
- When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio.

Assumptions / current setup: Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio. However, there seems to be a delay before valid input is delivered to the tap, occurring only when switching from a receive state to simultaneously talking.

Questions:
- Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine?
- Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio?
- Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer?
- Would using separate engines for input and output improve responsiveness?

I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using the PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately, without the delay seen in my current implementation.

Relevant code snippets. Engine setup:

    func setup() {
        let input = audioEngine.inputNode
        do {
            try input.setVoiceProcessingEnabled(true)
        } catch {
            print("Could not enable voice processing \(error)")
            return
        }
        input.isVoiceProcessingAGCEnabled = false
        let output = audioEngine.outputNode
        let mainMixer = audioEngine.mainMixerNode
        audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat)
        audioEngine.connect(beepNode, to: mainMixer, format: outputFormat)
        audioEngine.connect(mainMixer, to: output, format: outputFormat)
        // Initialize converters
        converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
        f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)!
        audioEngine.prepare()
    }

Input tap installation:

    func installTap() {
        guard AudioHandler.shared.checkMicrophonePermission() else {
            print("Microphone not granted for recording")
            return
        }
        guard !isInputTapped else {
            print("[AudioEngine] Input is already tapped!")
            return
        }
        let input = audioEngine.inputNode
        let microphoneFormat = input.inputFormat(forBus: 0)
        let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)!
        let desiredFormat = outputFormat
        let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate)
        input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in
            guard let self = self else { return }
            // Output buffer: 1920 frames at 16kHz
            guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat, frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return }
            outputBuffer.frameLength = outputBuffer.frameCapacity
            let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
                outStatus.pointee = .haveData
                return buffer
            }
            var error: NSError?
            let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
            if converterResult != .haveData {
                DebugLogger.shared.print("Downsample error \(converterResult)")
            } else {
                self.handleDownsampledBuffer(outputBuffer)
            }
        }
        isInputTapped = true
    }
4 replies · 0 boosts · 376 views · Aug ’25
macOS: AudioUnit packaged as .appex won't load when host app is sandboxed
Hi, I'm working on an audio mixing app that comes with bundled audio units that provide some of the app's core functionality. For the next release of that app, we are planning to make two changes:

- make the app sandboxed
- package the bundled audio units as .appex bundles instead of .component bundles, so we don't need to take care of installing them at the correct spot in the file system

When trying this new approach, we run into problems where [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:] crashes when trying to load our audio unit, with the exception:

    AVAEInternal.h:109 [AUInterface.mm:468:AUInterfaceBaseV3: (AudioComponentInstanceNew(comp, &_auv2)): error -10863

Our audio unit has the sandboxSafe flag enabled and loads fine when the host app is not sandboxed, so I'm guessing I got the bundle ID/code signing requirements for the .appex correct. It seems that my .appex isn't even loaded, and the system rejects it because of its metadata. Maybe there is something wrong with the Info.plist generated by Juice?

    "BuildMachineOSBuild" => "23H222"
    "CFBundleDisplayName" => "elgato_sample_recorder"
    "CFBundleExecutable" => "ElgatoSampleRecorder"
    "CFBundleIdentifier" => "com.iwascoding.EffectLoader.samplerecorderAUv3"
    "CFBundleName" => "elgato_sample_recorder"
    "CFBundlePackageType" => "XPC!"
    "CFBundleShortVersionString" => "1.0.0.0"
    "CFBundleSignature" => "????"
    "CFBundleSupportedPlatforms" => [
      0 => "MacOSX"
    ]
    "CFBundleVersion" => "1.0.0.0"
    "DTCompiler" => "com.apple.compilers.llvm.clang.1_0"
    "DTPlatformBuild" => "24C94"
    "DTPlatformName" => "macosx"
    "DTPlatformVersion" => "15.2"
    "DTSDKBuild" => "24C94"
    "DTSDKName" => "macosx15.2"
    "DTXcode" => "1620"
    "DTXcodeBuild" => "16C5032a"
    "LSMinimumSystemVersion" => "10.13"
    "NSExtension" => {
      "NSExtensionAttributes" => {
        "AudioComponents" => [
          0 => {
            "description" => "Elgato Sample Recorder"
            "factoryFunction" => "elgato_sample_recorderAUFactoryAUv3"
            "manufacturer" => "Manu"
            "name" => "Elgato: Elgato Sample Recorder"
            "sandboxSafe" => 1
            "subtype" => "Znyk"
            "tags" => [
              0 => "Effects"
            ]
            "type" => "aufx"
            "version" => 65536
          }
        ]
      }
      "NSExtensionPointIdentifier" => "com.apple.AudioUnit-UI"
      "NSExtensionPrincipalClass" => "elgato_sample_recorderAUFactoryAUv3"
    }
    "NSHighResolutionCapable" => 1

Any ideas what I am missing?
4 replies · 0 boosts · 445 views · Feb ’25
Raycasting VNFaceLandmarkRegion2D
Hello, does anyone have a recipe for raycasting VNFaceLandmarkRegion2D points obtained from a frame's capturedImage? More specifically, how do I construct the "from" parameter of the frame's raycastQuery from a VNFaceLandmarkRegion2D point? Do the points need to be flipped vertically? Is there any other transformation that needs to be performed on the points before passing them to raycastQuery?
4 replies · 0 boosts · 293 views · Sep ’25
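A sketch of the coordinate dance for the question above, under these assumptions: pointsInImage(imageSize:) yields image-space points with a lower-left origin, while ARFrame.raycastQuery(from:allowing:alignment:) expects normalized image coordinates with a top-left origin, hence the vertical flip:

    import ARKit
    import Vision

    func raycastQueries(for region: VNFaceLandmarkRegion2D, in frame: ARFrame) -> [ARRaycastQuery] {
        let width = CGFloat(CVPixelBufferGetWidth(frame.capturedImage))
        let height = CGFloat(CVPixelBufferGetHeight(frame.capturedImage))
        // Landmark points in image pixels, lower-left origin.
        let imagePoints = region.pointsInImage(imageSize: CGSize(width: width, height: height))
        return imagePoints.map { point in
            // Normalize and flip Y for ARFrame's top-left-origin space.
            let normalized = CGPoint(x: point.x / width, y: 1 - point.y / height)
            return frame.raycastQuery(from: normalized, allowing: .estimatedPlane, alignment: .any)
        }
    }

Each query then goes through ARSession.raycast(_:); if the hits come back mirrored, the Y flip is the first assumption to revisit.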
Constituent active device switching very slow on iPhone 16 Pro models on focus changes
Hi all, we are in the business of scanning documents and barcodes with the camera system of mobile devices. Since there is a wide variety of use cases, from scanning the tiniest barcodes and small business cards to scanning barcodes or large documents from far distances, we preferably rely on the triple-camera devices, if available, with automatic constituent device switching.

This approach used to work perfectly fine. Depending on the zoom level (we prefer an initial zoom value of 2.0) and the focusing distance, the iPhone Pro models switched through the different camera systems at light speed: from ultra-wide to wide, tele and back. No issues at all.

Unfortunately the new iPhone 16 Pro models behave very differently when it comes to constituent device switching based on focus distance. The switching is slow and sometimes does not happen at all when the focusing distance changes, especially when aiming at a distant object for a longer time and then aiming at a very close object that is maybe 2" away. The iPhone 15 Pro always switches immediately to the ultra-wide camera here, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds, and sometimes forever to switch to the ultra-wide camera.

Of course we assumed that our code was responsible for these issues, so we experimented with restricting the devices and so on. Then we stripped more and more configuration code, but nothing we tried improved the situation. So we ended up writing a minimal example app that demonstrates the problem; you can find the code below. Execute it on various iPhones and aim at a far distance (> 10 feet) and then quickly at a very close distance (< 5 inches). Here is a list of devices and our test results:

- iPhone 15 Pro, iOS 17.6: very fast and reliable switching
- iPhone 15 Pro, iOS 18.1: very fast and reliable switching
- iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching
- iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching
- iPhone 16 Pro, iOS 18.1: slow switching, unreliable
- iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable

Questions: Has anyone else seen this issue, and possibly found a workaround? Is this behaviour intended on iPhone 16 Pro models? Can we somehow improve the switching speed? Furthermore, the iPhone 16 Pro models also show a jumping preview in the preview layer when they switch the constituent active device. Not dramatic, but compared to the other phones it looks like a glitch.

Thank you very much! Kind regards, Sebastian

    import UIKit
    import AVFoundation

    class ViewController: UIViewController {
        var captureSession: AVCaptureSession!
        var captureDevice: AVCaptureDevice!
        var captureInput: AVCaptureInput!
        var previewLayer: AVCaptureVideoPreviewLayer!
        var activePrimaryConstituentToken: NSKeyValueObservation?
        var zoomToken: NSKeyValueObservation?

        override func viewDidLoad() {
            super.viewDidLoad()
        }

        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
            checkPermissions()
            setupAndStartCaptureSession()
        }

        func checkPermissions() {
            let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
            switch cameraAuthStatus {
            case .authorized:
                return
            case .denied:
                abort()
            case .notDetermined:
                AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (authorized) in
                    if !authorized {
                        abort()
                    }
                })
            case .restricted:
                abort()
            @unknown default:
                fatalError()
            }
        }

        func setupAndStartCaptureSession() {
            DispatchQueue.global(qos: .userInitiated).async {
                self.captureSession = AVCaptureSession()
                self.captureSession.beginConfiguration()
                if self.captureSession.canSetSessionPreset(.photo) {
                    self.captureSession.sessionPreset = .photo
                }
                self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true
                self.setupInputs()
                DispatchQueue.main.async {
                    self.setupPreviewLayer()
                }
                self.captureSession.commitConfiguration()
                self.captureSession.startRunning()
                self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new], changeHandler: { (device, change) in
                    let type = device.activePrimaryConstituent!.deviceType.rawValue
                    print("Device type: \(type)")
                })
                self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new], changeHandler: { (device, change) in
                    let zoom = device.videoZoomFactor
                    print("Zoom: \(zoom)")
                })
                let switchZoomFactor = 2.0
                DispatchQueue.main.async {
                    self.setZoom(CGFloat(switchZoomFactor), animated: false)
                }
            }
        }

        func setupInputs() {
            if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
                captureDevice = device
            } else {
                fatalError("no back camera")
            }
            guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
                fatalError("could not create input device from back camera")
            }
            if !captureSession.canAddInput(input) {
                fatalError("could not add back camera input to capture session")
            }
            captureInput = input
            captureSession.addInput(input)
        }

        func setupPreviewLayer() {
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            view.layer.addSublayer(previewLayer)
            previewLayer.frame = self.view.layer.frame
        }

        func setZoom(_ value: CGFloat, animated: Bool) {
            guard let device = captureDevice else { return }
            let maxZoom: CGFloat = captureDevice.maxAvailableVideoZoomFactor
            let minZoom: CGFloat = captureDevice.minAvailableVideoZoomFactor
            let zoomValue = max(min(value, maxZoom), minZoom)
            let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor))
            do {
                try device.lockForConfiguration()
                if animated {
                    device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0))
                } else {
                    device.videoZoomFactor = zoomValue
                }
                device.unlockForConfiguration()
            } catch {
                return
            }
        }
    }
4 replies · 2 boosts · 520 views · Dec ’24
Alternative for crashing API MPMediaItemArtwork
When setting the now-playing info for playing media in MPNowPlayingInfoCenter, we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734). On iOS 17 this gave the warning that the completion handler was not run on the main thread. I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231 but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.

    .task {
        await MainActor.run {
            let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
            var nowPlayingInfo = [String: Any]()
            let image = NSImage(named: "image")!
            // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
            nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
                // Not on main thread here!
                return image
            })
            nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
        }
    }

I'm wondering if there is an alternative method to set the now-playing artwork?
4 replies · 0 boosts · 906 views · Feb ’25
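No supported alternative that I know of, but one workaround worth trying while FB15145734 is open: capture an immutable CGImage up front so the request handler never touches shared AppKit state. Whether this satisfies the strict-concurrency checker is an assumption to verify, not a confirmed fix:

    import AppKit
    import MediaPlayer

    func makeArtwork(from image: NSImage) -> MPMediaItemArtwork? {
        guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)
        else { return nil }
        let size = image.size
        return MPMediaItemArtwork(boundsSize: size) { _ in
            // Rebuild an NSImage from the immutable CGImage; no captured
            // mutable state, regardless of which thread calls the handler.
            NSImage(cgImage: cgImage, size: size)
        }
    }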
AVSampleBufferDisplayLayerContentLayer memory leaks.
I noticed that AVSampleBufferDisplayLayerContentLayer is not released when the AVSampleBufferDisplayLayer is removed and released. It is possible to reproduce the issue with this simple code:

    import AVFoundation
    import UIKit

    class ViewController: UIViewController {
        var displayBufferLayer: AVSampleBufferDisplayLayer?

        override func viewDidLoad() {
            super.viewDidLoad()
            let displayBufferLayer = AVSampleBufferDisplayLayer()
            displayBufferLayer.videoGravity = .resizeAspectFill
            displayBufferLayer.frame = view.bounds
            view.layer.insertSublayer(displayBufferLayer, at: 0)
            self.displayBufferLayer = displayBufferLayer
            DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                self.displayBufferLayer?.flush()
                self.displayBufferLayer?.removeFromSuperlayer()
                self.displayBufferLayer = nil
            }
        }
    }

In my real project I have multiple AVSampleBufferDisplayLayers created and removed in different view controllers, which is problematic because the number of leaked AVSampleBufferDisplayLayerContentLayers keeps increasing. I wonder whether I should use a pool of AVSampleBufferDisplayLayers and reuse them; however, I'm slightly afraid that this could also lead to strange bugs.

Edit: It doesn't leak on an iOS 18 device, but leaks on an iPad Pro running iOS 17.5.1.
4 replies · 1 boost · 527 views · Mar ’25
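On the pooling idea in the post above: a minimal reuse pool is small enough to audit. This is a sketch of the approach the poster is considering, not a confirmed fix for the iOS 17 leak:

    import AVFoundation

    final class DisplayLayerPool {
        static let shared = DisplayLayerPool()
        private var reusable: [AVSampleBufferDisplayLayer] = []

        // Reuse a parked layer if one exists; otherwise create a fresh one.
        func acquire() -> AVSampleBufferDisplayLayer {
            reusable.popLast() ?? AVSampleBufferDisplayLayer()
        }

        // Park a layer for reuse instead of releasing it.
        func recycle(_ layer: AVSampleBufferDisplayLayer) {
            layer.flushAndRemoveImage() // drop enqueued frames and the current image
            layer.removeFromSuperlayer()
            reusable.append(layer)
        }
    }

Both methods should run on the main thread, since CALayer mutation isn't thread-safe; whether parking layers actually sidesteps the content-layer leak is exactly what would need measuring with Instruments on iOS 17.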