Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

Apple Music API no longer returns standalone singles as “single” albums
I’ve been using the Apple Music API for quite a while now, and a recent change must have happened which is quite disruptive. On many occasions, artists release singles from an album as part of promoting that album. As recent examples, Harry Styles released “Aperture” (a single) to promote his upcoming album “Kiss All The Time. Disco, Occasionally”. Similarly, Bruno Mars released “I Just Might”, a single from the upcoming album “The Romantic”. Previously, those would be returned by the endpoint “artists/{artist_id}/albums” with a “- Single” suffix. But a recent change seems to have happened where they only appear as playable tracks inside the album. This behavior is also evident in the Apple Music app itself: those singles no longer appear under “Singles & EPs”. Instead, they are only visible if the single becomes popular enough to be shown under “Top Songs”; otherwise one would have to know to tap on the (future) album to discover whether there are released singles. Meanwhile, Spotify’s API returns those as singles properly, just like the Apple Music API used to. This change must be recent, but the question is whether it’s intentional, and if so, how the API can be used from here on out to “extract” those singles and represent them.
0
1
152
1w
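For reference on the post above, here is a minimal MusicKit sketch of what such a query could look like, assuming the artist ID is known and that the isSingle attribute is what should distinguish standalone singles; whether that attribute is still populated for these promo releases is exactly the open question.

import MusicKit

// Hedged sketch: fetch an artist's albums and keep only the ones flagged as singles.
// If the catalog no longer lists promo singles as "- Single" albums, this filter will
// simply return fewer items, which matches the behavior described in the post.
func fetchSingles(for artistID: MusicItemID) async throws -> [Album] {
    var request = MusicCatalogResourceRequest<Artist>(matching: \.id, equalTo: artistID)
    request.properties = [.albums]
    let response = try await request.response()
    guard let albums = response.items.first?.albums else { return [] }
    return albums.filter { $0.isSingle == true }
}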
iPad app on macOS not asking for microphone permission
Hello, I have an iOS app that records audio and works fine on iPads/iPhones. It asks for microphone permission, and after that recording works. I installed the same app on my M3 MacBook via TestFlight, since iPad apps are supposed to run there without changes. The app starts fine, but it never asks for microphone permission, so I can't record. Do I need to do something to make this happen? (This is not Mac Catalyst; it's the arm64 iPhone binary running on macOS.) Thanks
2
1
825
Mar ’25
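One thing worth trying for the post above, sketched here as an assumption rather than a known fix, is requesting record permission explicitly before starting the recording instead of relying on the implicit prompt; NSMicrophoneUsageDescription must still be present in the Info.plist.

import AVFoundation

// Hedged sketch: explicit permission request (iOS 17+ API with a fallback).
func ensureMicrophoneAccess() async -> Bool {
    if #available(iOS 17.0, *) {
        return await AVAudioApplication.requestRecordPermission()
    } else {
        return await withCheckedContinuation { continuation in
            AVAudioSession.sharedInstance().requestRecordPermission { granted in
                continuation.resume(returning: granted)
            }
        }
    }
}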
Any way to trigger cameraLensSmudgeDetectionStatus to change?
Looking to implement UI to tell the user to clean their lens in our app. I implemented KVO for cameraLensSmudgeDetectionStatus, but I'm having issues reliably triggering it, both in our app and in the main Camera app. I tried to get inventive by putting Tupperware over the lens, but I think the model driving this (or the LiDAR sensor) might be smart enough to detect that there is something close to the lens. Is there any way to trigger this change, similar to how we can trigger thermal changes in debug? Thanks.
2
0
366
1w
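For context on the KVO mentioned in the post above, a sketch of what that observation could look like; the property name is taken from the post itself, and its exact declaration and KVO behavior are assumed here rather than verified.

import AVFoundation

// Hedged sketch: observe the smudge-detection status on the capture device.
// Assumes cameraLensSmudgeDetectionStatus is a KVO-observable AVCaptureDevice property,
// as the post implies.
final class SmudgeStatusObserver {
    private var observation: NSKeyValueObservation?

    func start(on device: AVCaptureDevice, onChange: @escaping () -> Void) {
        observation = device.observe(\.cameraLensSmudgeDetectionStatus, options: [.new]) { _, _ in
            // e.g. show a "please clean your lens" banner when the status indicates a smudge
            onChange()
        }
    }
}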
AVSpeechSynthesizer & Bluetooth Issues
Hello, I have a CarPlay navigation app that uses AVSpeechSynthesizer to speak directions to the user. Everything works great on my CarPlay simulator as well as when plugged into my GMC truck. However, I found out yesterday that for one of my users with a Ford truck the audio would cut in and out. After much troubleshooting, I was able to replicate this on my own truck when using Bluetooth to connect to CarPlay; my user was also using Bluetooth. Has anyone else experienced this? Is there a fix for the problem?

import SwiftUI
import AVFoundation

class TextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
    private var speechSynthesizer = AVSpeechSynthesizer()
    static let shared = TextToSpeechService()

    override init() {
        super.init()
        speechSynthesizer.delegate = self
    }

    func configureAudioSession() {
        speechSynthesizer.delegate = self
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .allowBluetooth])
        } catch {
            print("Failed to set audio session category: \(error.localizedDescription)")
        }
    }

    func speak(_ text: String) {
        Task(priority: .high) {
            let speechUtterance = AVSpeechUtterance(string: text)
            speechUtterance.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
            try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
            speechSynthesizer.speak(speechUtterance)
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        Task {
            stopSpeech()
            try AVAudioSession.sharedInstance().setActive(false)
        }
    }

    func stopSpeech() {
        speechSynthesizer.stopSpeaking(at: .immediate)
    }
}
1
1
661
2w
iOS 26: Underexposed image in exposure bracket appears clamped vs single capture
Hello, I have an iOS camera app that captures exposure brackets and performs custom HDR processing. On iOS 26, I’m observing a visual difference between a single photo captured at –2 EV and the –2 EV frame from an exposure bracket (–2 / 0 / +2 EV).
On iOS 26, the single –2 EV image looks natural and consistent, while the –2 EV image from the bracket appears clamped / distorted, most noticeably in high dynamic range scenes (highlight compression and loss of detail). On iOS 18, both approaches produce visually identical and correct –2 EV images. The issue only appears for bracketed captures on iOS 26.
Attachments (examples)
iOS 26
Single capture –2 EV (JPEG): /Users/danilobudimir/Downloads/ios26SingleImage/JPEG image-4006-8B77-51-0.jpeg
Single capture –2 EV — Capture report (dumped settings): /Users/danilobudimir/Downloads/ios26SingleImage/UnderExposureDebug_CaptureReport_2026-01-09T15-59-20Z.md
Bracket capture –2 EV frame (JPEG): /Users/danilobudimir/Downloads/bracket_iOS26/JPEG image-45CE-9793-A5-0.jpeg
Bracket capture — Capture report (dumped settings): /Users/danilobudimir/Downloads/bracket_iOS26/UnderExposureDebug_CaptureReport_2026-01-09T15-55-42Z.md
iOS 18
Single capture –2 EV (JPEG): /Users/danilobudimir/Downloads/ios18SingleImage/JPEG image-47FD-AF73-28-0.jpeg
Single capture –2 EV — Capture report: /Users/danilobudimir/Downloads/ios18SingleImage/UnderExposureDebug_CaptureReport_2026-01-09T16-25-27Z.md
Bracket capture — –2 EV frame (JPEG): /Users/danilobudimir/Downloads/bracket_iOS18/JPEG image-4A4C-9E93-46-0.jpeg
Bracket capture — Capture report: /Users/danilobudimir/Downloads/bracket_iOS18/UnderExposureDebug_CaptureReport_2026-01-09T16-27-23Z.md
Question
Is there any new behavior in iOS 26 AVFoundation related to AVCapturePhotoBracketSettings, tone mapping / HDR preprocessing, or internal image processing applied specifically to bracketed frames? Is there a new flag, format requirement, or opt-out mechanism required to preserve linear underexposed frames in exposure brackets?
2
0
801
3w
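For reference on the bracket described in the post above, a minimal sketch of how a –2 / 0 / +2 EV auto-exposure bracket is typically requested; the processed format and the absence of RAW are illustrative assumptions, and nothing here addresses the iOS 26 tone-mapping difference itself.

import AVFoundation

// Hedged sketch: build auto-exposure bracket settings at -2, 0 and +2 EV.
func makeBracketSettings() -> AVCapturePhotoBracketSettings {
    let biases: [Float] = [-2.0, 0.0, 2.0]
    let bracketedSettings = biases.map {
        AVCaptureAutoExposureBracketedStillImageSettings.autoExposureSettings(exposureTargetBias: $0)
    }
    // The photo output's maxBracketedCapturePhotoCount should be checked before capturing.
    return AVCapturePhotoBracketSettings(
        rawPixelFormatType: 0, // no RAW in this sketch
        processedFormat: [AVVideoCodecKey: AVVideoCodecType.jpeg],
        bracketedSettings: bracketedSettings
    )
}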
Access to favorited artists from Music API?
It's been well over a year since Apple added favoriting of artists back to Apple Music (the little star icon on an artist page), yet I still haven't seen a way to get this data for an authenticated user from the Music API. I was expecting to hear something about this during WWDC, but there have been no announcements that I've caught. Has anyone else heard anything? People assume that when they provide access to their Apple Music account we can actually get to the data in that account, and we end up looking a little dumb not being able to get this core data.
0
1
298
Jul ’25
VisionKit - crash after photo taken in VNDocumentCameraViewController in iOS 26 when Liquid Glass enabled
I'm adopting Liquid Glass in iOS 26. When I test VNDocumentCameraViewController with document scanning after Liquid Glass is enabled, there's a crash just after a photo is taken in VNDocumentCameraViewController. The exception output in the Xcode console is:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Layout requested for visible navigation bar, <UINavigationBar: 0x1240bde00; frame = (0 117; 390 54); opaque = NO; tintColor = UIExtendedSRGBColorSpace 1 1 0 1; layer = <CALayer: 0x120c21e60>> standardAppearance=0x12407b900 scrollEdgeAppearance=0x12407bb80 compactAppearance=0x12407b880 no-scroll-edge-support, when the top item belongs to a different navigation bar. topItem = <UINavigationItem: 0x1240bd800> style=navigator leftBarButtonItems=0x123d4e5f0 rightBarButtonItems=0x123d4d5a0, navigation bar = <UINavigationBar: 0x107b9ad00; frame = (0 47; 390 54); opaque = NO; autoresize = W; tintColor = UIExtendedSRGBColorSpace 1 1 0 1; layer = <CALayer: 0x120c20150>> delegate=0x10a805200 standardAppearance=0x107b2c300 scrollEdgeAppearance=0x107b2c280 compactAppearance=0x107b2c100, possibly from a client attempt to nest wrapped navigation controllers.'
*** First throw call stack:
(0x18e1db994 0x18b0f5814 0x18c092aa0 0x193b18660 0x193a7d540 0x193a7e020 0x1953ec4a0 0x1943b7d78 0x18ed83420 0x18ed82f74 0x18eb83134 0x18eb44c10 0x18eb70bc4 0x18eb7e74c 0x193ac8cd0 0x193ac8c04 0x193ad6afc 0x193ad5f8c 0x27b456560 0x18e12c4cc 0x18e15c0b0 0x18e15bfd8 0x18e133c1c 0x18e132a6c 0x22ed54498 0x193af6ba4 0x193a9fa78 0x193bcb68c 0x102cc2718 0x102cc2688 0x102cc2794 0x18b14ae28)
libc++abi: terminating due to uncaught exception of type NSException
1
1
411
4w
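For reference on the post above, a minimal sketch of the presentation path where the crash is reported to occur; the hosting view controller and its navigation setup are assumptions, since the nesting of navigation bars mentioned in the exception is presumably what matters.

import UIKit
import VisionKit

// Hedged sketch: present the document scanner and dismiss it when scanning finishes.
final class ScanHostViewController: UIViewController, VNDocumentCameraViewControllerDelegate {
    func presentScanner() {
        guard VNDocumentCameraViewController.isSupported else { return }
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        controller.dismiss(animated: true)
    }

    func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
        controller.dismiss(animated: true)
    }
}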
FairPlay: SDK4 vs SDK26 credentials/certificate for iOS/tvOS client apps
Hi We’re updating our KSM to support SPC v2/v3 and currently operate with both legacy SDK4 credentials (ASK + 1024 cert) and SDK26 credentials (certificate bundle + provisioning data + 1024/2048 keys). Our client apps run across a wide range of iOS/tvOS versions, so we want to follow Apple’s recommended client strategy for certificate selection. The docs describe SHA‑1 vs SHA‑256 in the SPC header, but do not specify which OS versions should use SDK4 vs SDK26 credentials. Could you clarify: Is there an official minimum iOS/tvOS version where you recommend SDK26 credentials for client apps? For older OS versions (e.g. iOS 15), is SDK4 still the recommended choice for client apps? Are there any official migration guidelines for client apps moving from SDK4 to SDK26 credentials? Thanks in advance.
1
0
124
17h
Feature / Workaround wanted: Seamless, Automated AirPlay Screen Streaming on visionOS for Demos
Hello Apple team and developer community, I am preparing a visionOS app for a fair environment, where we want to automatically stream the current experience to a nearby monitor via AirPlay, without requiring guests or staff to manually interact with the Control Center or AirPlay pickers all the time. The goal is to provide a smooth, frictionless setup so attendees can focus on the demo, not the configuration. Feature Request: A supported API or method to programmatically start/stop AirPlay video streaming (mirroring or external playback) from within a visionOS app, allowing the current experience to be instantly displayed on an external monitor or Apple TV for the audience. Context & Rationale: In a trade fair or exhibition setting, rapid guest turnaround and minimal staff intervention are crucial. Having to manually guide each visitor through AirPlay setup is impractical. As I understood, AVRoutePickerView can be used for this on iOS/macOS, but this is not available in visionOS. Enabling similar automated streaming on visionOS would make the device far more suitable for live demos and public showcases. Questions: Are there any supported workarounds or best practices for enabling automated screen streaming or AirPlay initiation on visionOS in public demo environments that I missed? Is Apple considering adding programmatic AirPlay control or accessibility features to support such use cases in future visionOS releases? Thank you for considering this request! If there are recommended patterns, entitlements, or accessibility solutions we could explore for trade fair scenarios, your guidance would be greatly appreciated. Best regards, Julian Zürn - IPI, HS Kempten
4
0
567
1w
AVSpeechSynthesizer system voices (SLA clarification)
Hello, I am building an iOS-only, commercial app that uses AVSpeechSynthesizer with system voices, strictly using the APIs provided by Apple. Before distributing the app, I want to ensure that my current implementation does not conflict with the iOS Software License Agreement (SLA) and is aligned with Apple’s intended usage. For a better playback experience (more accurate estimation of utterance duration and smoother skip forward/backward during playback), I currently synthesize speech using: AVSpeechSynthesizer.write(_:toBufferCallback:) Converting the received AVAudioPCMBuffer buffers into audio data Storing the audio inside the app sandbox Playing it back using AVAudioPlayer / AVAudioEngine The cached audio is: Generated fully on-device using system voices Stored only inside the app’s private container Used only for internal playback controls (timeline, seek, skip ±5 seconds) Never shared, exported, uploaded, or exposed outside the app The alternative approaches would be: Keeping the generated audio entirely in memory (RAM) for playback purposes, without writing it to the file system at any point Or using AVSpeechSynthesizer.speak(_:) and playing speech strictly in real time which has a poorer user experience compared to my approach I have reviewed the current iOS Software License Agreement: https://www.apple.com/legal/sla/docs/iOS18_iPadOS18.pdf In particular, section (f) mentions restrictions around System Characters, Live Captions, and Personal Voice, including the following excerpt: “…use … only for your personal, non-commercial use… No other creation or use of the System Characters, Live Captions, or Personal Voice is permitted by this License, including but not limited to the use, reproduction, display, performance, recording, publishing or redistribution in a … commercial context.” I do not see a specific reference in the SLA to system text-to-speech voices used via AVSpeechSynthesizer, and I want to be certain that temporarily caching synthesized speech for internal, non-exported playback is acceptable in a commercial app. My question is: Is caching AVSpeechSynthesizer system-voice output inside the app sandbox for internal playback acceptable, or is Apple’s recommended approach to rely only on real-time playback (speak(_:)) or strictly in-memory buffering without file storage? If this question falls outside DTS technical scope and is instead a policy or licensing matter, I would appreciate guidance on the authoritative Apple documentation or the correct Apple team/contact. Thank you.
0
1
289
3w
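For reference on the caching approach described in the post above, a minimal sketch of the write(_:toBufferCallback:) path; the file URL and error handling are illustrative, and nothing here speaks to the licensing question itself.

import AVFoundation

// Hedged sketch: synthesize an utterance to PCM buffers and append them to a file
// inside the app sandbox for later playback with AVAudioPlayer / AVAudioEngine.
func cacheSpeech(_ text: String, to url: URL, using synthesizer: AVSpeechSynthesizer) {
    let utterance = AVSpeechUtterance(string: text)
    var outputFile: AVAudioFile?

    synthesizer.write(utterance) { buffer in
        guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else {
            return // a zero-length buffer marks the end of the utterance
        }
        do {
            if outputFile == nil {
                outputFile = try AVAudioFile(forWriting: url, settings: pcmBuffer.format.settings)
            }
            try outputFile?.write(from: pcmBuffer)
        } catch {
            print("Failed to write synthesized audio: \(error)")
        }
    }
}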
AVSampleBufferDisplayLayer drops frames when playing uncompressed video with bframes > 3
When using AVSampleBufferDisplayLayer to play uncompressed H.264 and H.265 video with more than 7 B-frames, frame drops occur. The more B-frames there are, the more noticeable the frame drops become, for example with 15 B-frames. Use FFmpeg to transcode a video file with visible timestamps and frame numbers (x264 or x265):
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx264 -x264-params "bframes=15:b-adapt=0" -crf 30 -y x264_bf15.mp4
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx265 -x265-params "bframes=15:b-adapt=0" -crf 30 -y x265_bf15.mp4
Use the demo player from this repository to reproduce the issue: https://github.com/msfrms/CustomPlayer. Frame drops can be observed, and the following log can be found in the device's console:
mediaserverd <<<< IQ-CA >>>> piqca_gmstats_dump: FIQCA(0x1266f4000) recent frames: enqueued: 184, displayed: 138, dropped: 42, flushed: 0, evicted: 3, >16ms late: 2
PS. I was using an iPhone 11 on iOS 14.6 to reproduce this issue. May I ask why frame drops occur in this case? Is there any configuration or API usage change that could help fix the frame drop issue? Many thanks!
2
1
266
Jul ’25
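For reference on the post above, a sketch of the usual feeding pattern for AVSampleBufferDisplayLayer, with a host-clock control timebase and decode-order enqueueing; the sample source is an assumption, and this is not a fix for the B-frame drops, just the baseline setup the behavior would be measured against.

import AVFoundation
import CoreMedia

// Hedged sketch: drive the layer from a control timebase and enqueue decode-order
// sample buffers; the layer reorders output by presentation timestamp.
func configureTimebase(for layer: AVSampleBufferDisplayLayer, startingAt time: CMTime) {
    var timebase: CMTimebase?
    CMTimebaseCreateWithSourceClock(allocator: kCFAllocatorDefault,
                                    sourceClock: CMClockGetHostTimeClock(),
                                    timebaseOut: &timebase)
    guard let timebase else { return }
    CMTimebaseSetTime(timebase, time: time)
    CMTimebaseSetRate(timebase, rate: 1.0)
    layer.controlTimebase = timebase
}

func feed(_ layer: AVSampleBufferDisplayLayer, from nextSampleBuffer: @escaping () -> CMSampleBuffer?) {
    layer.requestMediaDataWhenReady(on: DispatchQueue(label: "sample.enqueue")) {
        while layer.isReadyForMoreMediaData {
            guard let sample = nextSampleBuffer() else { return }
            layer.enqueue(sample)
        }
    }
}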
Apple Music / MusicKit and Simulator
It's been an ask for a few years and I'm wondering if there are any plans, or whether the '26 SDKs/Tools allow Apple Music to work in the simulator? I develop for the Vision Pro so the usual 'fix' of running on the device is a bit of a hard ask. At the very least a small sample library that works in the simulator would be welcome (similar to how photos works) Cheers
1
1
183
Jul ’25
Making DataScannerViewController work in the Simulator
Before you post “the camera doesn't work on the Simulator”: that's no longer true. I've made a solution that makes the Simulator believe there's an actual hardware device connected, allowing users to stream the macOS camera to the iOS Simulator (see RocketSim's documentation for more info: https://docs.rocketsim.app/features/hzQMSrSga7BGWvxdNVdwYs/simulator-camera-support/58tQ5jvevLNSnyUEA7VgAv). Now, this works for VNDocumentCameraViewController, but when I try opening DataScannerViewController, I directly run into: Failed to start scanning: The operation couldn’t be completed. (VisionKit.DataScannerViewController.ScanningUnavailable error 0.) My question: how does this view controller determine whether scanning is available? Is there a certain capability the available AVCaptureDevices need to support, maybe? Any direction would be helpful for me to make this work for developers, helping them build apps faster!
0
1
340
Jul ’25
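Relevant to the question in the post above: DataScannerViewController exposes two static gates before scanning can start, sketched below; which underlying capabilities they check on a virtual camera is the unknown part.

import UIKit
import VisionKit

// Hedged sketch: check both availability gates before presenting and starting the scanner.
@MainActor
func presentDataScanner(from presenter: UIViewController) throws {
    guard DataScannerViewController.isSupported else {
        print("Scanning not supported on this device/OS") // hardware/OS capability gate
        return
    }
    guard DataScannerViewController.isAvailable else {
        print("Scanning currently unavailable") // runtime gate (camera access, restrictions)
        return
    }
    let scanner = DataScannerViewController(recognizedDataTypes: [.text()])
    presenter.present(scanner, animated: true)
    try scanner.startScanning()
}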
Terrible performance when using MusicKit's Artwork with UIKit
I'm building a UIKit app that reads the user's Apple Music library and displays it. In MusicKit there is the Artwork structure, which I need to use to display artwork images in the app. Since I'm not using SwiftUI I cannot use the ArtworkImage view, which is the recommended way of displaying those images, but the Artwork structure has a method that returns a URL for the image, which can be used to read the image. The way I have it set up is really simple:

extension MusicKit.Song {
    func imageURL(for cgSize: CGSize) -> URL? {
        return artwork?.url(
            width: Int(cgSize.width),
            height: Int(cgSize.height)
        )
    }

    func localImage(for cgSize: CGSize) -> UIImage? {
        guard let url = imageURL(for: cgSize),
              url.scheme == "musicKit",
              let data = try? Data(contentsOf: url) else {
            return nil
        }
        return .init(data: data)
    }
}

Now, every time I access the .artwork property (so a lot of times) the main thread gets blocked and the console output gets bombarded with messages like these:

2023-07-26 11:49:47.317195+0200 Plum[998:297199] [Artwork] Failed to create color analysis for artwork: <MPMediaLibraryArtwork: 0x289591590> with error; Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.mediaartworkd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.mediaartworkd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction.}
2023-07-26 11:49:47.317262+0200 Plum[998:297199] [Artwork] Failed to create color analysis for artwork: file:///var/mobile/Media/iTunes_Control/iTunes/Artwork/Originals/4b/48d7b8d349d2de858413ae4561b6ba1b294dc7
2023-07-26 11:49:47.323099+0200 Plum[998:297013] [Plum] IIOImageWriteSession:121: cannot create: '/var/mobile/Media/iTunes_Control/iTunes/Artwork/Caches/320x320/4b/48d7b8d349d2de858413ae4561b6ba1b294dc7.sb-f9c7943d-6ciLNp' error = 1 (Operation not permitted)

My guess is that the most performance-heavy task here is performing the color analysis for each artwork, but IMO the backgroundColor property should not be a stored property if that's the case. I am not planning to use it anywhere, and if it is stored, it should be a computed async property so it doesn't block the caller. I know I can move the call to a background thread, and that fixes the issue of blocking the main thread, but the loading times for each artwork are still terribly slow, and that impacts the UX. SwiftUI's ArtworkImage loads the artworks much quicker and without the errors, so there must be a better way to do it.
5
0
1.6k
Mar ’25
Start and stop recording Voice Memos with Siri
Using iOS 26.2; AirPods 4. Long press the stem to launch Siri and say "Record Voice Memo": recording starts. With the recording in progress, long press the stem to launch Siri again: nothing happens. To stop the recording I need to use the phone. Is this intended behaviour? I would like to be able to stop recording with Siri. I am able to launch Siri from the phone while recording, but the point is to keep the phone in my pocket and start/stop recordings only via the AirPods.
1
0
164
Dec ’25
Always audio from latest connected external USB mic
Hello! I have two mics connected to a USB hub. The USB hub is then connected to my iPad. Both mics are part of the audio session's list of available inputs. The problem is that regardless of which mic I select in my app (using setPreferredInput() on the audio session), the audio keeps coming from the mic that was most recently connected to the USB hub. Does anyone know if this is a limitation in iPadOS/iOS?
1
1
205
Jul ’25
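For reference on the post above, a sketch of the selection flow described there; the port-name matching and session options are assumptions, and the observed "last connected mic wins" behavior may well persist regardless.

import AVFoundation

// Hedged sketch: pick one of the available USB inputs by name and make it preferred,
// then log what the session actually routed.
func selectInput(named portName: String) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    try session.setActive(true)

    guard let input = session.availableInputs?.first(where: { $0.portName == portName }) else {
        print("No available input named \(portName)")
        return
    }
    try session.setPreferredInput(input)
    print("Preferred: \(session.preferredInput?.portName ?? "nil"); "
        + "routed: \(session.currentRoute.inputs.first?.portName ?? "nil")")
}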
VideoToolbox Encoder's Unregistered User Data SEI NAL UUID
I was advised to post here by a Code-Level Support representative. Below is a copy of my initial issue report; my minimally reproducible test project can be found at the following GitHub repository URL: https://github.com/PierceLBrooks/vtUudSeiNalCmake
DESCRIPTION OF PROBLEM
When encoding H.264 video codec data using the VTCompressionSession API available through the VideoToolbox framework on macOS, the resultant bitstream will invariably include Unregistered User Data SEI NAL units that carry the UUID "47564adc-5c4c-433f-94ef-c5113cd143a8". The proprietary decoders we are working with currently struggle with filtering out these NAL units. Can you explain what purpose this serves, what the meaning of the byte-wise unit payloads is, and which configuration settings the VideoToolbox encoder instance specifically depends upon for triggering their insertion?
STEPS TO REPRODUCE
1. Invoke the instantiation of a new VideoToolbox H.264 encoder object by calling VTCompressionSessionCreate with appropriate configuration flags.
2. Push frames through the encoder, receiving their encoded byte buffer counterparts through an asynchronous callback.
3. Write that encoded data to a buffer that will contain the totality of the encoder's output.
4. Inspect the NAL units of the initial portion of this output bitstream buffer.
5. Observe the presence of at least one Unregistered User Data SEI NAL unit carrying the "47564adc-5c4c-433f-94ef-c5113cd143a8" UUID near the beginning of the output segment.
1
2
222
Apr ’25
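For reference on steps 1 and 2 in the post above, a minimal sketch of creating an H.264 compression session whose output callback is where the SEI NAL units would be inspected; dimensions and properties here are illustrative.

import VideoToolbox
import CoreMedia

// Hedged sketch: create an H.264 encoder; the callback receives encoded CMSampleBuffers
// whose AVCC-length-prefixed NAL units (including any SEI, type 6) can be walked and logged.
func makeEncoder(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: { _, _, status, _, sampleBuffer in
            guard status == noErr, sampleBuffer != nil else { return }
            // Inspect the NAL units of the encoded output here (step 4 in the post).
        },
        refcon: nil,
        compressionSessionOut: &session
    )
    guard status == noErr, let session else { return nil }
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    return session
}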
How to detect the end of playback with the system music player?
Since iOS 12 it has become difficult to detect the end of playback using the system music player. In earlier iOS versions, the now-playing item would be set to nil and you would receive a notification that the player stopped. In iOS 12 and later, nowPlayingItem still contains the current song, and the only notification you get is MPMusicPlayerControllerPlaybackStateDidChangeNotification with the playbackState set to MPMusicPlaybackStatePaused. Pressing pause in my car (or from any remote control) generates the same conditions, making it difficult to correctly detect the difference. It would be nice if they added a notification that playback finished (similar to the other players). Any suggestions?
1
1
814
Mar ’25
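To illustrate the ambiguity in the post above, a sketch of the observation that is available today; both end-of-queue and a remote pause arrive as the same paused state with a non-nil nowPlayingItem, which is exactly the problem described.

import MediaPlayer

// Hedged sketch: observe playback-state changes on the system music player.
final class PlaybackEndObserver: NSObject {
    private let player = MPMusicPlayerController.systemMusicPlayer

    func start() {
        player.beginGeneratingPlaybackNotifications()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(playbackStateChanged),
            name: .MPMusicPlayerControllerPlaybackStateDidChange,
            object: player
        )
    }

    @objc private func playbackStateChanged() {
        // Both "stopped at end of queue" and a remote pause land here with nowPlayingItem set.
        print("State: \(player.playbackState.rawValue), item: \(player.nowPlayingItem?.title ?? "nil")")
    }
}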
Graceful shutdown during background audio playback.
Hello. My team and I think we have an issue where our app is asked to gracefully shut down, followed by a SIGTERM. As we’ve learned, this is normally not an issue. However, it seems to also be happening while our app (an audio streamer) is actively playing in the background. From our perspective, starting playback indicates strong user intent. We understand that there can be extreme circumstances where background audio needs to be killed, but should this be considered part of normal operation? We hope that’s not the case. All we see in the logs is the graceful shutdown request. We can say with high certainty that it’s happening, though, as we know that playback is running within 0.5 seconds of the crash, without any other tracked user interaction. Can you verify whether this is intended behavior, and whether there’s something we can do about it from our end? From our logs it doesn’t look to be related to memory usage within the app or the system as a whole. Best, John
0
1
117
Jun ’25