Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post | Replies | Boosts | Views | Activity

Do we need an MFi authentication chip to send video/commands to iPhone?
Hello, my company is developing a product that will send data to/from the phone over cable and Wi-Fi. I have three questions: Do we need an MFi authentication chip in our product if we plan to send video and commands to the iPhone/iPad over USB or Lightning cable? Likewise, do we need an MFi authentication chip for communication over Wi-Fi? (Informal research suggests that the answer to this one is no.) And do we still need MFi certification at all for Wi-Fi communication? (We are not using HomeKit.) Thank you!
0
0
437
Feb ’25
iPhone 16 Camera Control and AVCaptureSlider – Is there a way to detect which slider is active?
I am following the Apple sample code and trying to add a manual focus lens position slider:

@available(iOS 18.0, *)
private func addCameraControls() {
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }
    self.cameraControlFocusSlider = nil

    // Focus slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            // Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}

There are these AVCaptureSessionControlsDelegate methods:

final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}

When self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus, and also from manual focus when the user operates the app UI. Is there a way to tell whether self.cameraControlFocusSlider is active or being used? Please note that I will have more than one AVCaptureSlider in the final code.
0
0
428
Mar ’25
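On the slider question above: there does not appear to be a public per-control "active" callback, only the session-level AVCaptureSessionControlsDelegate methods quoted in the post. Below is a minimal sketch of one possible workaround, under the assumption that AVCaptureSlider's value property is settable: treat the slider as "in use" for a short window after its action closure fires, and only push autofocus-driven updates outside that window. All names here are hypothetical.

import AVFoundation

@available(iOS 18.0, *)
final class FocusSliderSync {
    private var lastUserChange = Date.distantPast
    private var lensObservation: NSKeyValueObservation?

    // Call this from the slider's action closure (set via setActionQueue)
    // to record the time of each user-driven change.
    func noteUserChange() { lastUserChange = Date() }

    // Mirror autofocus-driven lens moves into the slider, skipping
    // updates right after a user change so we don't fight the drag.
    func bind(slider: AVCaptureSlider, to device: AVCaptureDevice) {
        lensObservation = device.observe(\.lensPosition, options: [.new]) { [weak self] _, change in
            guard let self, let position = change.newValue else { return }
            if Date().timeIntervalSince(self.lastUserChange) > 0.5 {
                slider.value = position   // assumes AVCaptureSlider.value is settable
            }
        }
    }
}

With more than one slider, one FocusSliderSync-style tracker per control (or a dictionary keyed by control) would generalize the same idea.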
Track changes in the browser tab's audibility property.
Hi! I am writing a browser extension that controls the playback of media content on a music service website. Unfortunately, Safari does not report changes to the audible property through the tabs.onUpdated event. Is there an alternative to this event? I'm looking for a way to detect when the automatic inference engine interrupts playback on the music service website. Thank you.
0
0
71
Apr ’25
SystemMusicPlayer.item nil when state = .playing
I've been working with MusicKit without enrolling in a developer account, and I hadn't run into any issues until I noticed that SystemMusicPlayer's item is nil for some songs, and I don't understand why. Are some songs blocked from the framework? Or is it somehow a limitation of not having registered MusicKit to the bundle ID? I am planning on using MusicKit properly in production; this is just a test app for a package I'm working on. These are the nil songs, which I got from the Discovery Station:
https://music.apple.com/ca/album/okay/950816298?i=950816304
https://music.apple.com/ca/album/youre-so-cool/1670485433?i=1670485446
0
0
416
Jan ’25
Usage of colorCurvesFilter
How can I use my RGB curve points:

let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184), CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]

in the colorCurves filter, which I found in the Apple docs:

func colorCurves(inputImage: CIImage) -> CIImage {
    let colorCurvesEffect = CIFilter.colorCurves()
    colorCurvesEffect.inputImage = inputImage
    colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
    colorCurvesEffect.curvesData = Data(
        bytes: [Float32]([
            0.0, 0.0, 0.0,
            0.8, 0.8, 0.8,
            1.0, 1.0, 1.0
        ]),
        count: 36)
    colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
    return colorCurvesEffect.outputImage!
}
0
0
365
Jan ’25
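Regarding the colorCurves question above: CIColorCurves takes curvesData as a flat table of interleaved R, G, B Float32 samples over curvesDomain, not as control points. A minimal sketch of one way to bridge the two, assuming uniform sampling across the domain and using plain linear interpolation between control points (a spline would give smoother results):

import CoreImage
import CoreImage.CIFilterBuiltins

// Resample CIVector control points (x = input, y = output) into a
// uniform lookup table. Assumes points are sorted by x and span [0, 1].
func curveTable(from points: [CIVector], sampleCount: Int) -> [Float] {
    (0..<sampleCount).map { i in
        let x = CGFloat(i) / CGFloat(sampleCount - 1)
        guard let upper = points.firstIndex(where: { $0.x >= x }) else {
            return Float(points.last!.y)   // x beyond the last control point
        }
        if upper == 0 { return Float(points[0].y) }
        let p0 = points[upper - 1], p1 = points[upper]
        let t = (x - p0.x) / (p1.x - p0.x)
        return Float(p0.y + t * (p1.y - p0.y))
    }
}

func colorCurves(inputImage: CIImage,
                 red: [CIVector], green: [CIVector], blue: [CIVector]) -> CIImage? {
    let n = 32   // samples per channel; any reasonable resolution works
    let r = curveTable(from: red, sampleCount: n)
    let g = curveTable(from: green, sampleCount: n)
    let b = curveTable(from: blue, sampleCount: n)

    // CIColorCurves expects interleaved R,G,B triples as Float32 data.
    var interleaved = [Float]()
    for i in 0..<n { interleaved += [r[i], g[i], b[i]] }

    let filter = CIFilter.colorCurves()
    filter.inputImage = inputImage
    filter.curvesDomain = CIVector(x: 0, y: 1)
    filter.curvesData = interleaved.withUnsafeBufferPointer { Data(buffer: $0) }
    filter.colorSpace = CGColorSpaceCreateDeviceRGB()
    return filter.outputImage
}

This would be called as colorCurves(inputImage: image, red: redCurve, green: greenCurve, blue: blueCurve) with the vectors from the post.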
Converting a 2vuy CVPixelBuffer to a Metal Texture
I am working on a macOS project where I take an AVCaptureSession's CVPixelBuffer and need to convert it into an MTLTexture for rendering. On macOS the pixel format is 2vuy, and there does not seem to be a clear conversion path to a Metal texture. I have been able to create a texture, but the color space seems to be off: it renders distorted colors with a double image. I believe 2vuy is a single-plane format and I have tried to account for that, but I am unsure what is off. I have attached the CVPixelBuffer, the distorted MTLTexture, and a laundry list of errors. On iOS my conversions are fine; it is only the macOS 2vuy pixel format that has issues. My conversion code is also attached. Any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to an MTLTexture would be greatly appreciated. Many thanks. Conversion_Logs.txt ConversionCode.swift
0
0
91
Mar ’25
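On the 2vuy question above, a sketch under two assumptions worth verifying: that the double image comes from treating the 2-bytes-per-pixel '2vuy' layout as a 4-bytes-per-pixel format (which halves the effective width), and that wrapping the buffer with Metal's dedicated .gbgr422 pixel format (which matches the Cb Y'0 Cr Y'1 byte order of kCVPixelFormatType_422YpCbCr8) via a CVMetalTextureCache is acceptable for the pipeline. Even then the sampled values are still Y'CbCr, so a video-range YCbCr-to-RGB matrix in the fragment shader is still required; skipping it would account for the distorted colors.

import CoreVideo
import Metal

// Wrap a packed, single-plane '2vuy' buffer as a Metal texture without
// copying. Sampling yields Y'CbCr values, not RGB.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        cache,
        pixelBuffer,
        nil,
        .gbgr422,                              // matches '2vuy' byte order
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,                                     // plane 0: '2vuy' is single-plane
        &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}

The cache itself would come from CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache), created once and reused across frames.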
Regarding FPS SDK Upgrade
Hi Apple Team, We have had FairPlay Streaming Server SDK v3 integrated into our MDRM platform since 2017; the system has been stable and untouched since. As you know, both Widevine and PlayReady require the server SDK to be upgraded regularly. We want to know whether Apple imposes similar requirements for upgrading the FPS SDK, or whether we may continue using the old one without any updates. Thanks for your support!
0
1
363
Feb ’25
ShazamKit Background Operation Broken on iOS 18 - SHManagedSession Stops Working After ~20 Seconds
Hi Apple Engineers and fellow developers, I'm experiencing a critical regression with ShazamKit's background operation on iOS 18. ShazamKit's SHManagedSession stops identifying songs in the background after approximately 20 seconds on iOS 18, while the exact same code works perfectly on iOS 17. The behavior is consistent: the app works perfectly in the foreground, but when backgrounded or device is locked, it initially works for about 20 seconds then stops identifying new songs. The microphone indicator remains active suggesting audio access is maintained, but ShazamKit doesn't send identified songs in the background until you open the app again. Detection immediately resumes when bringing the app to foreground. My technical setup uses SHManagedSession for continuous matching with background modes properly configured in Info.plist including audio mode, and Background App Refresh enabled. I've tested this on physical devices running iOS 18.0 through 18.5 with the same results across all versions. The exact same code running on iOS 17 devices works flawlessly in the background. To reproduce: initialize SHManagedSession and start matching, begin song identification in foreground, background the app or lock device, play different songs which are initially detected for about 20 seconds, then after the timeout period new songs are no longer identified until you bring the app to foreground. This regression has impacted my production app as users who rely on continuous background music identification are experiencing a broken feature. I submitted this as Feedback ID FB15255903 last September with no solution so far. I've created a minimal demo project that reproduces this issue: https://github.com/tfmart/ShazamKitBackground Has anyone else experienced this ShazamKit background regression on iOS 18? Are there any known workarounds or alternative approaches? Given the time this issue has persisted, could we please get acknowledgment of this regression, expected timeline for a fix, or any recommended workarounds? Testing environment is Xcode 16.0+ on iOS 18.0-18.5 across multiple physical device models. Any guidance would be greatly appreciated.
0
0
136
Jun ’25
How to detect HDCP support in Safari.
I am playing FairPlay + multi-key content (fMP4) in the Safari browser. I want to distinguish between SD and HD video quality, playing HD if HDCP is supported and SD if it is not. I have already confirmed that HDCP support is the default, and that a black screen is output in non-HDCP environments. What I want is to improve the user experience by switching between SD and HD appropriately, depending on HDCP support, when playing DRM content. Question: Is there an API or function that can detect HDCP support in Safari through JavaScript or other means? Or is there a way to infer it indirectly?
0
0
178
Mar ’25
Why does AVAssetWriter add PTS drift when writing fMP4?
Hi, I'm working on a project that requires video frame PTS to be consistent between an original video and its transcoded copy. This works fairly well for regular MP4; however, if I set preferredOutputSegmentInterval to generate fMP4 output, then even though I specify initialSegmentStartTime as 0, a one-frame PTS offset is added to all frames. For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and run

ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8

to display the PTS of the output, it doesn't start from 0 but carries a one-frame offset. Opening the output with MP4Box also shows that the first frame's DTS and CTS do not start from 0. However, if I use AVAssetReader to read the same output video and get the PTS of the first frame, it returns 0, so I can't use it to calculate the PTS difference between the two videos either. Can I get some help understanding why AVAssetWriter/AVAssetReader and tools like ffprobe disagree about fMP4 PTS?
0
0
593
Dec ’24
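One possible explanation for the offset described above, offered as an assumption rather than a confirmed cause: AVAssetWriter may write an edit list to compensate for decoder reordering delay, and AVAssetReader reports timestamps with that edit applied, while ffprobe and MP4Box show the raw container timestamps. A small sketch for dumping the first few presentation timestamps on the AVFoundation side, to compare against ffprobe's packet-level view:

import AVFoundation

func dumpFirstVideoTimestamps(of url: URL, count: Int = 5) async throws {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    // nil outputSettings means passthrough: samples are read without decoding.
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    guard reader.startReading() else { return }
    for _ in 0..<count {
        guard let sample = output.copyNextSampleBuffer() else { break }
        print(CMSampleBufferGetPresentationTimeStamp(sample).seconds)
    }
}

If the AVFoundation timestamps start at 0 while ffprobe's do not, the difference between the two first-packet values would itself be the edit-list shift to subtract when comparing the videos.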
Custom Share Destination stopped working in FCP X 11
We integrate with FCP X using a custom share destination and the AppleScript interface. This worked fine until the recent version 11 update of FCP X. With this update we no longer receive the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response; there is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what, or how to troubleshoot it. This works fine in 10.8.1 and below.
0
0
378
Dec ’24
EXT-X-DISCONTINUITY misalignment
We are encountering an issue with AVPlayer when EXT-X-DISCONTINUITY is misaligned between audio and video playlists after the insertion of gaps. The objective is to introduce an EXT-X-DISCONTINUITY in the audio playlist after some missing segments (EXT-X-GAP) whose durations are aligned with the video segment durations, to handle irregular audio durations. Below is an example of corresponding video and audio playlists:

video:

#EXTM3U
#EXT-X-VERSION:7
#EXT-X-MEDIA-SEQUENCE:872524632
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-TARGETDURATION:2
#USP-X-TIMESTAMP-MAP:MPEGTS=7096045027,LOCAL=2025-05-09T12:38:32.369100Z
#EXT-X-MAP:URI="hls/StreamingBasic-video=979200.m4s"
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:32.369111Z
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524632.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524633.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524634.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524635.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524636.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524637.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524638.m4s
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:46.383111Z
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524639.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524640.m4s

audio:

#EXTM3U
#EXT-X-VERSION:7
#EXT-X-MEDIA-SEQUENCE:872524632
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-TARGETDURATION:2
#USP-X-TIMESTAMP-MAP:MPEGTS=7096045867,LOCAL=2025-05-09T12:38:32.378400Z
#EXT-X-MAP:URI="hls/StreamingBasic-audio_99500_eng=98800.m4s"
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:32.378444Z
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524632.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524633.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524634.m4s
#EXTINF:1.984, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524635.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524636.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524637.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524638.m4s
#EXT-X-DISCONTINUITY
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:46.778444Z
#EXTINF:1.6213, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524639.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524640.m4s

In this case playback is broken with AVPlayer. Does this conform to the HTTP Live Streaming specification? Is it an AVPlayer bug? What are the guidelines for handling such gaps?
0
0
171
Jul ’25
Ambisonic B-Format Playback Issues on Vision Pro
I'm trying to implement Ambisonic B-Format audio playback on Vision Pro with head tracking. So far audio plays, head tracking works, and the sound appears to be stereo. The problem is that it is not proper binaural playback when compared to playing back the audio file in a DAW. Has anyone successfully implemented B-Format playback on Vision Pro? Any suggestions on my current implementation?

func playAmbiAudioForum() async {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        // Audio file loading/preparation
        guard let testFileURL = Bundle.main.url(forResource: "audiofile", withExtension: "wav") else {
            print("Test file not found")
            return
        }
        let audioFile = try AVAudioFile(forReading: testFileURL)
        let audioFileFormat = audioFile.fileFormat

        // Create an AVAudioFormat with the Ambisonic B-Format layout
        guard let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Ambisonic_B_Format) else {
            print("layout failed")
            return
        }
        let format = AVAudioFormat(
            commonFormat: audioFile.processingFormat.commonFormat,
            sampleRate: audioFile.fileFormat.sampleRate,
            interleaved: false,
            channelLayout: layout
        )

        // Read the audio file into a buffer
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: UInt32(audioFile.length)) else {
            print("buffer failed")
            return
        }
        try audioFile.read(into: buffer)

        playerNode.renderingAlgorithm = .HRTF

        // Connect the nodes
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: format)
        audioEngine.prepare()

        playerNode.scheduleBuffer(buffer, at: nil) {
            print("File finished playing")
        }

        try audioEngine.start()
        playerNode.play()
    } catch {
        print("Setup error:", error)
    }
}
0
0
483
Jan ’25
The files generated using AVAudioRecorder have a constant size of only 4kb
Hello. My app uses AVAudioRecorder to generate recording files, and for a few users the resulting files are consistently only 4 KB in size. Most users generate audio files normally; only a few experience this occasionally. After uninstalling and reinstalling the app it works normally, but the problem reappears after a period of time. The problematic audio files are identical each time and cannot be played. I added the audioRecorderDidFinishRecording delegate method, which reports that the recording completed normally. The users also report that recording appears normal, but there is a problem with the generated file. How should I handle this issue? I look forward to your reply.

- (void)startRecordWithOrderID:(NSString *)orderID {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [audioSession setActive:YES error:nil];

    NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
    [settings setObject:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [settings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    [settings setObject:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    [settings setObject:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

    NSString *path = [WDUtility createDirInDocument:@"audios" withOrderID:orderID withPathExtension:@"wav"];
    NSURL *tmpFile = [NSURL fileURLWithPath:path];
    recorder = [[AVAudioRecorder alloc] initWithURL:tmpFile settings:settings error:nil];
    [recorder setDelegate:self];
    [recorder prepareToRecord];
    [recorder record];
}
0
0
111
Jul ’25
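A note on the 4 KB recordings above: at 8 kHz, 16-bit, mono PCM, a 4 KB WAV holds only about a quarter second of audio, so recording apparently dies almost immediately on the affected devices. The snippet discards every NSError and boolean result; below is a sketch (in Swift for brevity, same settings) that surfaces each failure point, which might at least narrow down where it breaks for those users:

import AVFoundation

func startRecording(to url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record)
    try session.setActive(true)   // throws if another client holds the session

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 8_000.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false,
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    guard recorder.prepareToRecord(), recorder.record() else {
        // record() returning false here would plausibly leave a
        // header-sized, unplayable file -- log and report this case.
        throw NSError(domain: "Recording", code: -1)
    }
    return recorder
}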
MusicKit: Best way to check if all tracks of albums are added to library.
I prefer to use the album fetched from the library instead of the catalog, since this is faster. If I do so, how can I check whether all tracks of an album have been added to the library? In that case I'd like to fetch the catalog version, or throw an error (for example, when offline). Using .with(.tracks) on the library album fetches only the tracks added to the library. The trackCount property refers to the tracks that can be fetched from the library, and the isComplete property is always nil when fetching from the library. One possible approach is checking the trackNumber and discCount properties. However, this only detects that not all tracks of an album are in the library when a missing track comes ahead of one that is present; I'd like to handle that edge case as well. Is there currently a way to do this? I'd prefer not to rely on the Apple Music catalog, since this is supposed to work offline too. Fetching and storing all track IDs while connected and later comparing against them would work, but that could mean storing tens of thousands of track IDs. Thank you.
0
0
75
Mar ’25
Videos viewed in WKWebView have Spatial Audio issues
I have an app with a WKWebView for watching YouTube videos. When the videos are windowed, the audio is fine, positionally as well. When I fullscreen the video and it goes into the native visionOS video player, the audio gets messed up: it suddenly sounds like it is in your ears, or even in just one ear channel, or the position is wrong. It might be fine for a moment, but the second I touch the controls or move the window, the sound jumps across the room, away from the window, or switches to stereo. Sometimes, after exiting the window entirely, you can still hear the video playing. If you then open the window back up, go to another screen, and open another video, you hear two videos playing at the same time, with no way to stop the first one in the background short of force-restarting the app. It is all sorts of glitchy. I haven't the slightest clue what is happening here; I strongly suspect this is a visionOS bug. I tried using AVAudioSession to change some of the sound settings, and that makes zero difference in behavior. Multiple testers have reported this behavior, and it has been seen on both the visionOS 2.3 and 2.4 betas. Thanks for the help! This is driving me mad, and it is extremely consistent behavior!
0
0
152
Mar ’25
Has Dext perfectly replaced Kext functionality?
I found some documentation about kexts, but I heard development has moved to dexts. So I was wondering whether a dext can completely replace the functionality of a previous kext. https://developer.apple.com/documentation/kernel/implementing_drivers_system_extensions_and_kexts That page says: "Important: In macOS 11 and later, the kernel doesn't load a kext if an equivalent DriverKit solution exists. You may continue to use kexts in macOS 10.15 and earlier."
0
0
256
Mar ’25
Slow Image Retrieval Using requestImageForAsset:targetSize:contentMode:options:resultHandler:
Hello, I am experiencing slow image retrieval when using the requestImageForAsset:targetSize:contentMode:options:resultHandler: method in my application. The delay is significantly impacting the performance of my app. Here are the details of my implementation:

for (PHAsset *asset in assets) {
    @autoreleasepool {
        PHImageManager *imageManager = [PHImageManager defaultManager];
        PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
        options.synchronous = YES;
        options.deliveryMode = PHImageRequestOptionsDeliveryModeFastFormat;
        options.resizeMode = PHImageRequestOptionsResizeModeNone;

        [imageManager requestImageForAsset:asset
                                targetSize:CGSizeMake(100, 100)
                               contentMode:PHImageContentModeAspectFill
                                   options:options
                             resultHandler:^(UIImage *thumbnail, NSDictionary *info) {
            CameraRollCellDto *cellDto = [[CameraRollCellDto alloc] init];
            cellDto.index = index;
            cellDto.thumbnail = thumbnail;
            cellDto.propertyDate = asset.creationDate;
            if (self.segmentedTorikomi.selectedSegmentIndex == SEG_INDEX_IKKATSU) {
                cellDto.isSelected = YES;
            } else {
                cellDto.isSelected = NO;
            }
            [list addObject:cellDto];
        }];
        index++;
    }
}

Has anyone else encountered this issue? Are there any known solutions or optimizations that can help improve the speed of image retrieval using this method? Thank you for your assistance.
0
0
300
Jan ’25
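For the slow-retrieval question above, a common optimization is to let Photos prefetch thumbnails in batches instead of issuing synchronous one-at-a-time requests. A minimal sketch (in Swift, with hypothetical names like thumbnailSize) using PHCachingImageManager:

import Photos
import UIKit

let cachingManager = PHCachingImageManager()
let thumbnailSize = CGSize(width: 100, height: 100)

// Tell Photos which assets will be needed soon so it can decode
// thumbnails ahead of time on background threads.
func prefetch(_ assets: [PHAsset]) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .fastFormat
    cachingManager.startCachingImages(for: assets,
                                      targetSize: thumbnailSize,
                                      contentMode: .aspectFill,
                                      options: options)
}

// Request individual thumbnails asynchronously; the handler may fire
// twice (a degraded image first), which keeps the UI responsive.
func loadThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .fastFormat
    cachingManager.requestImage(for: asset,
                                targetSize: thumbnailSize,
                                contentMode: .aspectFill,
                                options: options) { image, _ in
        completion(image)
    }
}

Dropping options.synchronous = YES matters most here: a synchronous request blocks the calling thread per asset, so a loop over many assets serializes all the decoding.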