Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post | Replies | Boosts | Views | Activity

New playback error on iOS/tvOS 18.x "CoreMediaErrorDomain Code=-15486"
Hello, Our users have started to see a new fatal AVPlayer error during playback starting with iOS/tvOS 18.0. The error is defined as "CoreMediaErrorDomain Code=-15486". We have not been able to reproduce this issue locally within our development team. Is there any documentation on the cause of this error or steps to recover from this error? Thank you, Howard
0
2
672
Feb ’25
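CoreMediaErrorDomain -15486 does not appear in public documentation, so the recovery strategy below is an assumption rather than documented guidance. A minimal sketch of one way to detect the failure and retry by rebuilding the player item (names like PlaybackRecovery are placeholders):

import AVFoundation

final class PlaybackRecovery {
    private let player: AVPlayer
    private var failureObserver: NSObjectProtocol?

    init(player: AVPlayer) {
        self.player = player
        // Fires when an item stops playing because of an error.
        failureObserver = NotificationCenter.default.addObserver(
            forName: .AVPlayerItemFailedToPlayToEndTime,
            object: nil,
            queue: .main
        ) { [weak self] note in
            let error = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? NSError
            print("Playback failed: \(String(describing: error))")
            self?.retry()
        }
    }

    private func retry() {
        // Assumed recovery: rebuild the item from the original URL and resume
        // near the last known position; adapt to your own asset handling.
        guard let asset = player.currentItem?.asset as? AVURLAsset else { return }
        let position = player.currentTime()
        let freshItem = AVPlayerItem(url: asset.url)
        player.replaceCurrentItem(with: freshItem)
        player.seek(to: position)
        player.play()
    }
}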
PushToTalk
Using the PushToTalk library, I call requestBeginTransmitting(channelUUID:) from a Bluetooth device and then start recording audio inside the PTChannelManagerDelegate method channelManager:didActivateAudioSession:. The recording completes.
11
0
897
Oct ’25
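A minimal sketch of the flow described above, with the recorder configuration and file location chosen for illustration only (a real app must also implement the remaining PTChannelManagerDelegate requirements):

import PushToTalk
import AVFoundation

// In a real app this type would conform to PTChannelManagerDelegate and
// PTChannelRestorationDelegate; only the callbacks relevant here are shown.
final class PTTRecorderSketch: NSObject {
    var channelManager: PTChannelManager?
    var recorder: AVAudioRecorder?

    func beginTransmitting(channelUUID: UUID) {
        channelManager?.requestBeginTransmitting(channelUUID: channelUUID)
    }

    // Called by the system once the PTT audio session is active; start recording here.
    func channelManager(_ channelManager: PTChannelManager, didActivate audioSession: AVAudioSession) {
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("ptt.m4a")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 48_000,
            AVNumberOfChannelsKey: 1
        ]
        recorder = try? AVAudioRecorder(url: url, settings: settings)
        recorder?.record()
    }

    func channelManager(_ channelManager: PTChannelManager, didDeactivate audioSession: AVAudioSession) {
        recorder?.stop()
    }
}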
Alternative for crashing API MPMediaItemArtwork
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734). On iOS 17 this gave the warning that the completion handler was not run on the main thread. I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231 but it seems that it's not possible to override the completion handler and therefore it's up to Apple to fix this issue.

.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()
        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}

I'm wondering if there is an alternative method to set the now playing artwork?
4
0
906
Feb ’25
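One workaround pattern, sketched below as an assumption rather than an Apple-confirmed fix, is to build the artwork in a helper that is not isolated to the main actor, so the request handler only captures a pre-made NSImage and can be invoked by MediaPlayer on a background thread. Whether this silences the Swift 6 diagnostic depends on your isolation build settings.

import AppKit
import MediaPlayer

enum NowPlayingArtwork {
    // Not main-actor isolated: the request handler created here captures only
    // the plain image value, so calling it off the main thread is tolerated.
    static func make(from image: NSImage) -> MPMediaItemArtwork {
        MPMediaItemArtwork(boundsSize: image.size) { _ in image }
    }
}

@MainActor
func updateNowPlaying(title: String, image: NSImage) {
    var info: [String: Any] = [MPMediaItemPropertyTitle: title]
    info[MPMediaItemPropertyArtwork] = NowPlayingArtwork.make(from: image)
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}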
WKWebView Crashes on iOS During YouTube Playlist Playback
I’m encountering a consistent crash in WebKit when using WKWebView to play a YouTube playlist in my iOS app. Playback starts successfully, but the web process terminates during the second video in the playlist. This only occurs on physical devices, not in the simulator. Here’s a simplified Swift example of my setup:

import SwiftUI
import WebKit

struct ContentView: View {
    private let playlistID = "PLig2mjpwQBZnghraUKGhCqc9eAy0UbpDN"

    var body: some View {
        YouTubeWebView(playlistID: playlistID)
            .edgesIgnoringSafeArea(.all)
    }
}

struct YouTubeWebView: UIViewRepresentable {
    let playlistID: String

    func makeUIView(context: Context) -> WKWebView {
        let config = WKWebViewConfiguration()
        config.allowsInlineMediaPlayback = true
        let webView = WKWebView(frame: .zero, configuration: config)
        webView.scrollView.isScrollEnabled = true
        let html = """
        <!doctype html>
        <html>
        <head>
        <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0">
        <style>body,html{height:100%;margin:0;background:#000}iframe{width:100%;height:100%;border:0}</style>
        </head>
        <body>
        <iframe
            src="https://www.youtube-nocookie.com/embed/videoseries?list=\(playlistID)&controls=1&rel=0&playsinline=1&iv_load_policy=3"
            frameborder="0"
            allow="encrypted-media; picture-in-picture; fullscreen"
            webkit-playsinline
            allowfullscreen
        ></iframe>
        </body>
        </html>
        """
        webView.loadHTMLString(html, baseURL: nil)
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

#Preview {
    ContentView()
}

Observed behavior:
First video plays without issue.
Web process crashes when the second video in the playlist starts.
Console logs show WebProcessProxy::didClose and repeated memory status messages.
Using ProcessAssertion or background activity does not prevent the crash.
Only occurs on physical devices; simulators do not reproduce the issue.

Questions:
Is there something I should change or add in my WKWebView setup or HTML/iframe to prevent the crash when playing the second video in a playlist on physical iOS devices?
Is there an officially supported way to limit memory or prevent WebKit from terminating the web process during multi-video playback?
Are there recommended patterns for playing YouTube playlists in a WKWebView on iOS without risking crashes?
Any tips for debugging or configuring WKWebView to make it more stable for continuous playlist playback?

Thanks in advance for any guidance!
2
0
368
Oct ’25
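There is no supported way to raise the web content process's memory limit, but a common mitigation (a sketch, not a guaranteed fix for the crash itself) is to implement WKNavigationDelegate's process-termination callback and reload the embed, so a crash during the second video recovers instead of leaving a blank view. The reloadEmbed closure here is a hypothetical hook that re-runs the loadHTMLString call above.

import WebKit

final class PlaylistWebViewCoordinator: NSObject, WKNavigationDelegate {
    var reloadEmbed: ((WKWebView) -> Void)?

    // Called when WebKit kills the web content process, often under memory pressure.
    func webViewWebContentProcessDidTerminate(_ webView: WKWebView) {
        print("Web content process terminated; reloading the playlist embed")
        reloadEmbed?(webView)
    }
}

In the UIViewRepresentable, return this object from makeCoordinator() and assign webView.navigationDelegate = context.coordinator before loading the HTML.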
Constituent active device switching very slow on iPhone 16 Pro Models on focus changes
Hi all, we are in the business of scanning documents and barcodes with the camera system of mobile devices. Since there is a wide variety of use cases, from scanning the tiniest barcodes and small business cards to scanning barcodes or large documents from far distances, we preferably rely on the triple-camera devices, if available, with automatic constituent device switching. This approach used to work perfectly fine. Depending on the zoom level (we prefer to use an initial zoom value of 2.0) and the focusing distance, the iPhone Pro models switched through the different camera systems at light speed: from ultra-wide to wide, tele and back. No issues at all.

Unfortunately the new iPhone 16 Pro models behave very differently when it comes to constituent device switching based on focus distance. The switching is slow and sometimes it does not happen at all when the focusing distance changes, especially when aiming at a distant object for a longer time and then aiming at a very close object that is maybe 2" away. The iPhone 15 Pro here always switches immediately to the ultra-wide camera, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds, and sometimes forever to switch to the ultra-wide camera.

Of course we assumed that our code was responsible for these issues. So we experimented with restricting the devices and so on. Then we stripped more and more configuration code, but nothing we tried improved the situation. So we ended up writing a minimal example app that demonstrates the problem. You can find the code below. Execute it on various iPhones and aim at a far distance (> 10 feet) and then quickly at a very close distance (< 5 inches).

Here is a list of devices and our test results:
iPhone 15 Pro, iOS 17.6: very fast and reliable switching
iPhone 15 Pro, iOS 18.1: very fast and reliable switching
iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching
iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching
iPhone 16 Pro, iOS 18.1: slow switching, unreliable
iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable

Questions:
Has anyone else seen this issue? And possibly found a workaround?
Is this behaviour intended on iPhone 16 Pro models?
Can we somehow improve the switching speed?

Furthermore, the iPhone 16 Pro models also show a jumping preview in the preview layer when they switch the constituent active device. Not dramatic, but compared to the other phones it looks like a glitch.

Thank you very much!
Kind regards, Sebastian

import UIKit
import AVFoundation

class ViewController: UIViewController {

    var captureSession : AVCaptureSession!
    var captureDevice : AVCaptureDevice!
    var captureInput : AVCaptureInput!
    var previewLayer : AVCaptureVideoPreviewLayer!
    var activePrimaryConstituentToken: NSKeyValueObservation?
    var zoomToken: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        checkPermissions()
        setupAndStartCaptureSession()
    }

    func checkPermissions() {
        let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        switch cameraAuthStatus {
        case .authorized:
            return
        case .denied:
            abort()
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (authorized) in
                if !authorized {
                    abort()
                }
            })
        case .restricted:
            abort()
        @unknown default:
            fatalError()
        }
    }

    func setupAndStartCaptureSession() {
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession = AVCaptureSession()
            self.captureSession.beginConfiguration()
            if self.captureSession.canSetSessionPreset(.photo) {
                self.captureSession.sessionPreset = .photo
            }
            self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true
            self.setupInputs()
            DispatchQueue.main.async {
                self.setupPreviewLayer()
            }
            self.captureSession.commitConfiguration()
            self.captureSession.startRunning()
            self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new], changeHandler: { (device, change) in
                let type = device.activePrimaryConstituent!.deviceType.rawValue
                print("Device type: \(type)")
            })
            self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new], changeHandler: { (device, change) in
                let zoom = device.videoZoomFactor
                print("Zoom: \(zoom)")
            })
            let switchZoomFactor = 2.0
            DispatchQueue.main.async {
                self.setZoom(CGFloat(switchZoomFactor), animated: false)
            }
        }
    }

    func setupInputs() {
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            captureDevice = device
        } else {
            fatalError("no back camera")
        }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
            fatalError("could not create input device from back camera")
        }
        if !captureSession.canAddInput(input) {
            fatalError("could not add back camera input to capture session")
        }
        captureInput = input
        captureSession.addInput(input)
    }

    func setupPreviewLayer() {
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = self.view.layer.frame
    }

    func setZoom(_ value: CGFloat, animated: Bool) {
        guard let device = captureDevice else { return }
        let maxZoom: CGFloat = captureDevice.maxAvailableVideoZoomFactor
        let minZoom: CGFloat = captureDevice.minAvailableVideoZoomFactor
        let zoomValue = max(min(value, maxZoom), minZoom)
        let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor))
        do {
            try device.lockForConfiguration()
            if animated {
                device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0))
            } else {
                device.videoZoomFactor = zoomValue
            }
            device.unlockForConfiguration()
        } catch {
            return
        }
    }
}
4
2
520
Dec ’24
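Not a confirmed fix for the iPhone 16 Pro behavior described above, but one knob worth trying is the primary constituent switching-behavior API; a sketch, assuming the virtual triple camera is the active device, that explicitly asks for unrestricted (.auto) switching:

import AVFoundation

func preferFastConstituentSwitching(on device: AVCaptureDevice) {
    // Only virtual devices (e.g. the triple camera) expose constituent switching.
    guard !device.constituentDevices.isEmpty else { return }
    do {
        try device.lockForConfiguration()
        // .auto with no restricted conditions lets the device switch constituents
        // freely in response to focus, exposure and zoom changes.
        device.setPrimaryConstituentDeviceSwitchingBehavior(.auto, restrictedSwitchingBehaviorConditions: [])
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}

In the sample above this could be called right after setupInputs(), passing the captureDevice; whether it changes the iPhone 16 Pro timing is untested here.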
Is there a way to get lossless music playback on macOS?
I noticed that while playing back the same tracks via MusicKit on different OSes I get different results regarding the audio files being streamed. Playing back a lossless file with 24-bit 48 kHz and watching the Console for RemotePlayerService I get:

on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo;
on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100

While the iPad looks perfect, the Mac does not. Is there a way to fix this issue on macOS? BTW: I switched the Audio MIDI settings before, after, and while the macOS app was launched. I also switched to different output devices. I wasn't able to change the bad audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, Xcode 16.4 and 26 beta 1. The AudioVariants of the Album/Tracks are .dolbyAtmos, .lossless, .lossyStereo. Apple Music displays Lossless 24-bit/48 kHz ALAC when clicking on the player-control icon on macOS. I hope there are only some missing or misconfigured properties to get macOS up to par. Thanks :-)
0
1
125
Jun ’25
_MediaPlayer_AppIntents compilation error for iOS 26
Getting an interesting error attempting to compile my app in Xcode 26 beta:

error: Unable to find module dependency: '_MediaPlayer_AppIntents' (in target 'icatcher' from project 'icatcher')
note: A dependency of main module 'MainModuleCrossImportOverlays' (in target 'icatcher' from project 'icatcher')
Unable to find module dependency: '_MediaPlayer_AppIntents'

Not sure what to try and pull to fix this issue.
1
1
134
Jun ’25
Torch Freezes Ultra-Wide Camera When Switching Between Wide & Ultra-Wide Lenses (AVFoundation Bug?)
I'm developing an iOS app using AVFoundation for real-time video capture and object detection. While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.

Issue
If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes.
If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes.
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.

Code snippet

DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    guard let self = self else { return }
    let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
    let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
    let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"
    guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
        }
        return
    }
    do {
        let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
        self.videoCapture.captureSession.beginConfiguration()
        if isSwitchingToUltraWide && self.isFlashlightOn {
            self.forceEnableTorchThroughWide()
        }
        if let currentInput = currentInput {
            self.videoCapture.captureSession.removeInput(currentInput)
        }
        let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
        self.videoCapture.captureSession.addInput(videoInput)
        self.videoCapture.captureSession.commitConfiguration()
        self.videoCapture.updateVideoOrientation()
        DispatchQueue.main.async {
            if let barButton = sender as? UIBarButtonItem {
                barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
            }
            print("Switched to \(cameraName) camera.")
        }
        self.isUsingFisheyeCamera.toggle()
    } catch {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
        }
    }
}

Expected Behavior
Torch should be able to work when Ultra-Wide is active, just like the iPhone Camera app does.
The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.

Questions & Help Needed
Is this a known issue with AVFoundation?
How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
What’s the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?

Info
Device tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2
2
1
663
Oct ’25
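For reference, a sketch of the conventional torch-toggle pattern: configure the torch on the currently active device, inside lockForConfiguration, and only when that device reports the torch as available. Whether the Ultra Wide module exposes its own torch on every model is an assumption worth verifying, since an unavailable torch would explain the freeze path.

import AVFoundation

func setTorch(_ on: Bool, for device: AVCaptureDevice) {
    // Torch availability is per-device; check before configuring.
    guard device.hasTorch, device.isTorchAvailable else {
        print("Torch not available on \(device.localizedName)")
        return
    }
    do {
        try device.lockForConfiguration()
        if on {
            // 1.0 requests full brightness.
            try device.setTorchModeOn(level: 1.0)
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}

Calling this after commitConfiguration, once the new input's device is active, rather than mid-reconfiguration, may also behave differently; that ordering is a suggestion, not a documented requirement.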
Apple Music API: Adding To Collaborative playlist gives 500 error
I am using https://developer.apple.com/documentation/applemusicapi/add-tracks-to-a-library-playlist to add tracks to playlists. This endpoint works fine for all playlists except for collaborative playlists. For collaborative playlists I get the following 500 error as a response:

{
  "errors": [
    {
      "id": "<some id>",
      "title": "Upstream Service Error",
      "detail": "Unable to update tracks",
      "status": "500",
      "code": "50001"
    }
  ]
}

Steps to reproduce:
1. Create a playlist in your library.
2. Use the API to add a song. Confirm that it works.
3. Make that same playlist collaborative.
4. Update the playlist ID in your API request (as making a playlist collaborative changes its ID).
5. Confirm that you get the 500 error.
5
0
787
Oct ’25
AUv3 recent "Failed to find component with type..." frequent issues
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to cause the app to no longer be able to instantiate the audiounit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type...") In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails. If I "Archive" the app+plugin, there is no audio unit extension in the bundle. If I switch to the audiounit extension and build it it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder the extension_name.appex is there. Any ideas? If I can coax an unmodified audio unit extension project to exhibit this behavior I'll attach it here. Right now what I have has code I don't want to share.
4
1
703
Jan ’25
TypeScript definitions for MusicKit JS
I take that MusicKit JS is built with TypeScript, based on the attributions in the script: https://js-cdn.music.apple.com/musickit/v3/musickit.js In the script it points to https://js-cdn.music.apple.com/musickit/v1/acknowledgements.txt – I assume this should be the v3 URL for the v3 version? It returns the same content nonetheless. This contains attributions for TypeScript. Currently there's a third-party effort with DefinitelyTyped, which publishes the NPM package @types/musickit-js. The latest supported version available is v1. However, there is no version compatible with v3. This makes it hard to use MusicKit JS v3 in a TypeScript project. Please publish the types, ideally on the CDN along with the musickit.js file. Also consider publishing an officially Apple supported DefinitelyTyped package, or help to maintain the existing @types/musickit-js to make consuming this even easier.
3
1
156
Jun ’25
AVCaptureMetadataOutput .face detection not working on iOS 26 Beta with high sessionPreset
In iOS 26 (Developer Beta), the AVCaptureMetadataOutputObjectsDelegate no longer receives callbacks when metadataOutput.metadataObjectTypes = [.face] is set. On earlier iOS versions the issue does not occur. Interestingly, face detection works if I set the sessionPreset to .medium, but not with .high — except on the iPhone 16 Pro Max, where it works regardless.
2
1
358
Sep ’25
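A sketch of a defensive setup for the face-metadata issue above: attach the output before configuring it and only request types the output reports as available, so an unsupported preset degrades with a log message instead of silently dropping callbacks. This works around nothing by itself; it just makes the iOS 26 behavior observable.

import AVFoundation

func configureFaceMetadata(session: AVCaptureSession,
                           delegate: AVCaptureMetadataOutputObjectsDelegate) {
    session.beginConfiguration()
    // .high is where the iOS 26 beta issue is reported; .medium reportedly still works.
    if session.canSetSessionPreset(.high) {
        session.sessionPreset = .high
    }
    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else {
        session.commitConfiguration()
        return
    }
    session.addOutput(output)
    // availableMetadataObjectTypes is only populated once the output is attached.
    if output.availableMetadataObjectTypes.contains(.face) {
        output.metadataObjectTypes = [.face]
    } else {
        print("Face metadata not available with the current preset/configuration")
    }
    output.setMetadataObjectsDelegate(delegate, queue: .main)
    session.commitConfiguration()
}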
Generating Live Photo from JPG and MOV fails
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file to a Live Photo. I am utilizing the LivePhoto class from GitHub for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when trying to download the Live Photo to the gallery and generate the Live Photo. Here is the relevant code and the errors I am encountering:

Console prints:
Play button should be visible
Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...")
Video is ready to play
Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp
Failed to generate Live Photo

I have verified that the app has the necessary permissions to access the Photo Library. The JPEG and MOV files are successfully downloaded and can be displayed in the app. The issue seems to occur when generating the Live Photo from the downloaded files.

struct WallpaperDetailView: View {
    var wallpaper: Wallpaper
    @State private var isLoading = false
    @State private var isImageSaved = false
    @State private var imageURL: URL?
    @State private var livePhotoVideoURL: URL?
    @State private var player: AVPlayer?
    @State private var playerViewController: AVPlayerViewController?
    @State private var isVideoReady = false
    @State private var showBuffering = false

    var body: some View {
        ZStack {
            if let imageURL = imageURL {
                GeometryReader { geometry in
                    KFImage(imageURL)
                        .resizable()
                        ...
                }
            }
            if let playerViewController = playerViewController {
                VideoPlayerViewController(playerViewController: playerViewController)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .clipped()
                    .edgesIgnoringSafeArea(.all)
            }
        }
        .onAppear {
            PHPhotoLibrary.requestAuthorization { status in
                if status == .authorized {
                    loadImage()
                } else {
                    print("User denied access to photo library")
                }
            }
        }
    }

    private func loadImage() {
        isLoading = true
        if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) {
            self.imageURL = imageURL
            if imageURL.scheme == "file" {
                self.isLoading = false
                print("Local image URL set: \(imageURL)")
            } else {
                fetchDownloadURL(from: imageURLString) { url in
                    self.imageURL = url
                    self.isLoading = false
                    print("Image URL fetched and set: \(String(describing: url))")
                }
            }
        }
        if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL, let livePhotoVideoURL = URL(string: livePhotoVideoURLString) {
            self.livePhotoVideoURL = livePhotoVideoURL
            preloadAndPlayVideo(from: livePhotoVideoURL)
        } else {
            self.isLoading = false
            print("No valid image or video URL")
        }
    }

    private func preloadAndPlayVideo(from url: URL) {
        self.player = AVPlayer(url: url)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = self.player
        self.playerViewController = playerViewController
        let playerItem = AVPlayerItem(url: url)
        playerItem.preferredForwardBufferDuration = 1.0
        self.player?.replaceCurrentItem(with: playerItem)
        ...
        print("Live Photo Video URL set: \(url)")
    }

    private func saveWallpaperToPhotos() {
        if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL {
            saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL)
        } else if let imageURL = imageURL {
            saveImageToPhotos(url: imageURL)
        }
    }

    private func saveImageToPhotos(url: URL) {
        ...
    }

    private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) {
        isLoading = true
        downloadVideo(from: videoURL) { localVideoURL in
            guard let localVideoURL = localVideoURL else {
                print("Failed to download video for Live Photo")
                DispatchQueue.main.async {
                    self.isLoading = false
                }
                return
            }
            print("Video downloaded to: \(localVideoURL)")
            self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL)
        }
    }

    private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) {
        LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in
            print("Progress: \(percent)")
        }, completion: { livePhoto, resources in
            guard let resources = resources else {
                print("Failed to generate Live Photo")
                DispatchQueue.main.async {
                    self.isLoading = false
                }
                return
            }
            print("Live Photo generated with resources: \(resources)")
            self.saveLivePhotoToLibrary(resources: resources)
        })
    }

    private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) {
        LivePhoto.saveToLibrary(resources) { success in
            DispatchQueue.main.async {
                if success {
                    self.isImageSaved = true
                    print("Live Photo saved successfully")
                } else {
                    print("Failed to save Live Photo")
                }
                self.isLoading = false
            }
        }
    }

    private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) {
        let storageRef = Storage.storage().reference(forURL: gsURL)
        storageRef.downloadURL { url, error in
            if let error = error {
                print("Failed to fetch image URL: \(error)")
                completion(nil)
            } else {
                completion(url)
            }
        }
    }

    private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
        let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in
            guard let localURL = localURL, error == nil else {
                print("Failed to download video: \(String(describing: error))")
                completion(nil)
                return
            }
            completion(localURL)
        }
        task.resume()
    }
}
1
1
802
Mar ’25
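As an alternative to the third-party generator, Photos can pair the still and the video directly when saving; a sketch of that path, assuming both files already carry the matching content identifier metadata that Live Photos require (if they do not, the save succeeds but the asset will not behave as a Live Photo):

import Photos

func saveLivePhoto(imageURL: URL, videoURL: URL,
                   completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        // The JPEG becomes the still frame; the MOV becomes the paired video.
        request.addResource(with: .photo, fileURL: imageURL, options: options)
        request.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    }, completionHandler: completion)
}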
ScreenCaptureKit and mixed Retina/non-Retina configuration
The two ScreenCaptureKit WWDC22 sessions show how to capture with the new framework, but the Retina factor is hardcoded to 2 in SCStreamConfiguration. When used on a non-Retina display, the screen capture floats in the upper-left corner of the image buffer. There does not seem to be a simple way to retrieve the Retina factor from the SCShareableContent data (when configuring the capture). When processing the streaming output, the SCStreamFrameInfo attachment is supposed to have a scaleFactor property, but .scaleFactor does not return a value. I have found out that the attachment dictionary contains SCStreamUpdateFrameDisplayResolution. This entry gives me the Retina factor, but it is not an official SCStreamFrameInfo key; I list the keys to access it. What is the proper way with ScreenCaptureKit to handle Retina factors?
1
1
823
Jun ’25
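For the scale-factor question above, one approach (a sketch, not an officially documented recipe) is to look up the NSScreen that backs the SCDisplay via its CGDirectDisplayID and use its backingScaleFactor when sizing the stream configuration:

import AppKit
import ScreenCaptureKit

func scaleFactor(for display: SCDisplay) -> CGFloat {
    // Match the SCDisplay to an NSScreen by display ID and read its backing scale.
    let screen = NSScreen.screens.first { screen in
        let key = NSDeviceDescriptionKey("NSScreenNumber")
        return (screen.deviceDescription[key] as? CGDirectDisplayID) == display.displayID
    }
    return screen?.backingScaleFactor ?? 1.0
}

func makeConfiguration(for display: SCDisplay) -> SCStreamConfiguration {
    let config = SCStreamConfiguration()
    let scale = scaleFactor(for: display)
    // Request a pixel buffer matching the display's true pixel dimensions.
    config.width = Int(CGFloat(display.width) * scale)
    config.height = Int(CGFloat(display.height) * scale)
    return config
}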
Playlist IDs from MusicKit not working with Apple Music API
I am building an app for macOS and I am trying to implement the code to add songs to a library playlist (which is added below). The issue I am having is that if I use MusicKit to load a user's library playlists, the ID for the playlist (which is just a string of numbers) does not work with the Add Tracks to a Library Playlist endpoint of the Apple Music API. If I retrieve the playlists from the Apple Music API and use that playlist ID (which is different than the ID I get from MusicKit), my code works fine and adds the song to the playlist. The problem with getting a user's library playlists from the Apple Music API is that it does not give me all of the library playlists that I get when using MusicKit, and it also does not give me artwork for playlists that have the collage of album covers, so I would prefer to use MusicKit to get the playlists. I have also tested trying to retrieve a single playlist using the Apple Music API with the playlist ID from MusicKit, and it does not work; I get the error that the resource cannot be found. Since this is a macOS app I cannot use MusicKit to add songs to library playlists.

Does anyone know a way to resolve this? Or a possible workaround? Ideally I want to use MusicKit to get the library playlists and have some way to use the playlist ID and add songs to that playlist. Below is my code for adding a song to a playlist using the Apple Music API, which works correctly only if I originally get the library playlist's ID value from a playlist retrieved from the Apple Music API. Also, does anyone know why the playlist IDs are not universal and are different between MusicKit and the Apple Music API? For songs and tracks it does not seem to matter if I use MusicKit or the Apple Music API; the IDs are in the correct format for the Apple Music API to use and work with my code. Thanks everyone for any and all help!

func addToPlaylist(songs: [Track], playlist: Playlist, alert: Binding<AlertItem?>) async {
    let tracks = AppleMusicPlaylistPostRequestBody(data: songs.compactMap {
        AppleMusicPlaylistPostRequestItem(id: $0.id.rawValue, type: "songs") // or "library-songs"
    })
    let playlistID = playlist.id

    // Build the request URL for adding a song to a playlist
    guard let url = URL(string: "https://api.music.apple.com/v1/me/library/playlists/\(playlistID)/tracks") else {
        alert.wrappedValue = AlertItem(title: "Error", message: "Invalid URL for the playlist.")
        return
    }

    // Authorization Header
    guard let musicUserToken = try? await MusicUserTokenProvider().getUserMusicToken() else {
        alert.wrappedValue = AlertItem(title: "Error", message: "Unable to retrieve Music User Token.")
        return
    }

    do {
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("Bearer \(musicUserToken)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        let encoder = JSONEncoder()
        let data = try encoder.encode(tracks)
        request.httpBody = data
        let musicRequest = MusicDataRequest(urlRequest: request)
        let musicRequestResponse = try await musicRequest.response()

        // Check if the request was successful (status 201)
        if musicRequestResponse.urlResponse.statusCode == 201 {
            alert.wrappedValue = AlertItem(title: "Success", message: "Song successfully added to the playlist.")
        } else {
            print("Status Code: \(musicRequestResponse.urlResponse.statusCode)")
            print("Response Data: \(String(data: musicRequestResponse.data, encoding: .utf8) ?? "No Data")")
            // Attempt to decode the error response into the AppleMusicErrorResponse model
            if let appleMusicError = try? JSONDecoder().decode(AppleMusicErrorResponse.self, from: musicRequestResponse.data) {
                let errorMessage = appleMusicError.errors.first?.detail ?? "Unknown error occurred."
                alert.wrappedValue = AlertItem(title: "Error", message: errorMessage)
            } else {
                alert.wrappedValue = AlertItem(title: "Error", message: "Failed to add song to the playlist.")
            }
        }
    } catch {
        alert.wrappedValue = AlertItem(title: "Error", message: "Network error: \(error.localizedDescription)")
    }
}
1
1
790
Dec ’24
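One workaround sketch for the ID mismatch above: keep MusicKit for authentication, but fetch the library playlists through the Apple Music API itself via MusicDataRequest, so the IDs you hold are the ones the add-tracks endpoint accepts. The decoding types here are assumptions for illustration, not MusicKit types, and pagination is not handled.

import MusicKit
import Foundation

// Minimal stand-in types for the /v1/me/library/playlists response (assumed shape).
struct LibraryPlaylistsResponse: Decodable {
    struct Item: Decodable {
        struct Attributes: Decodable { let name: String }
        let id: String
        let attributes: Attributes
    }
    let data: [Item]
}

func fetchLibraryPlaylistIDs() async throws -> [(id: String, name: String)] {
    let url = URL(string: "https://api.music.apple.com/v1/me/library/playlists")!
    let request = MusicDataRequest(urlRequest: URLRequest(url: url))
    let response = try await request.response()
    let decoded = try JSONDecoder().decode(LibraryPlaylistsResponse.self, from: response.data)
    // These IDs are the ones the "Add Tracks to a Library Playlist" endpoint accepts.
    return decoded.data.map { ($0.id, $0.attributes.name) }
}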
CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?
Hi Apple Developer Forums,

I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.

What’s slow (important detail)
I’m measuring the time for:

let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter.outputImage

This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).

Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds
Second time (same app session, similar RAW): ~3 milliseconds
So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.). Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.

Warm-up attempts using bundled DNG
I tried to “warm up” early (e.g., on camera screen entry) by loading a bundled DNG and then accessing CIRAWFilter.outputImage by taking a photo:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42 s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843 ms
This helps, but it’s still far from the steady-state ~3 ms.

Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing. However:
In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX.
I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.

Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?

Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)

Thanks for any guidance or experience sharing!
2
0
207
1d
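A warm-up sketch along the lines already tried above: decode a bundled DNG once at launch and push it through a real render into a small offscreen image, which is where much of the one-time Metal library/pipeline work appears to happen. The bundled file name is a placeholder, and whether this fully reaches steady state is not guaranteed.

import CoreImage

func warmUpRAWPipeline() {
    guard let dngURL = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
          let rawData = try? Data(contentsOf: dngURL),
          let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: nil),
          let image = rawFilter.outputImage else { return }

    let context = CIContext()  // reuse this same context for real captures
    // Render a small crop to force shader compilation / pipeline setup now.
    let rect = CGRect(x: 0, y: 0, width: 64, height: 64)
    _ = context.createCGImage(image, from: rect)
}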
DockKit custom tracking does not work on iOS 18.3
Hi all, I'm using the Apple sample code below to create an application using DockKit: "Controlling a DockKit accessory using your camera app" https://developer.apple.com/documentation/dockkit/controlling-a-dockkit-accessory-using-your-camera-app?changes=_8 I used Vision hand recognition and passed the observation data to dockAccessory.track, but the Belkin and Insta360 devices never move on an iPhone 16 Pro Max with iOS 18.3. If I use other functions like face search (system tracking) in the app, those work OK. I used a Belkin mount and an Insta360 Flow 2 Pro to reproduce the problem. My friend is also saying that the custom tracking feature was working fine on the iOS 18 beta, but on the current iOS 18.3 that feature does not work. If I could get the iOS 18.0 beta then we could test that feature, but I cannot revert my iPhone from iOS 18.3 to the iOS 18.0 beta. Regards, TO
1
1
273
3d
HDR10 MVHEVC cannot play on Safari
Hi, I just generated an HDR10 MVHEVC file; its mediainfo is below:

Color range: Limited
Color primaries: BT.2020
Transfer characteristics: PQ
Matrix coefficients: BT.2020 non-constant
Codec configuration box: hvcC+lhvC

Then I generated the segment files with the command below:

mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov

and uploaded the segment files and prog_index.m3u8 to a web server. I find that I cannot play the HLS stream in Safari; the URL is http://ip/vod/prog_index.m3u8. I also checked that if I remove the tag "Transfer characteristics: PQ" when generating the MVHEVC file, then with the same mediafilesegmenter command and upload, the new version of the HLS stream does play in Safari. Is there any way to play HLS PQ video in Safari? Thanks.
2
1
822
Feb ’25
On iOS 18, Mandarin is read aloud as Cantonese
Please include the line below in follow-up emails for this request.
Case-ID: 11089799

When using AVSpeechUtterance and setting it to play in Mandarin, if Siri is set to Cantonese on iOS 18, it will be played in Cantonese. There is no such issue on iOS 17 and 16.

1. let utterance = AVSpeechUtterance(string: textView.text)
   let voice = AVSpeechSynthesisVoice(language: "zh-CN")
   utterance.voice = voice
2. In the phone settings, Siri is set to Cantonese
3
1
535
Feb ’25
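A workaround sketch for the Mandarin/Cantonese issue above: instead of relying solely on the language-code initializer, pick an explicit zh-CN voice from the installed voices and fall back to the original lookup if none is found. Whether this bypasses the iOS 18 behavior is an assumption to verify on-device.

import AVFoundation

func speakMandarin(_ text: String, synthesizer: AVSpeechSynthesizer) {
    let utterance = AVSpeechUtterance(string: text)
    // Prefer an explicitly Mandarin (zh-CN) voice over the default lookup,
    // which on iOS 18 can resolve to Cantonese when Siri is set to Cantonese.
    let mandarinVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.language == "zh-CN" }
    utterance.voice = mandarinVoice ?? AVSpeechSynthesisVoice(language: "zh-CN")
    synthesizer.speak(utterance)
}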