Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

MusicKit initialization: Uncaught TypeError: Cannot read properties of undefined (reading 'node')
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation is sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time). Removing the async keyword makes it load every time, but playing and queuing music no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly said this isn't possible. My question is: what is wrong with my implementation? I have the following boilerplate:

```jsx
export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script src="https://js-cdn.music.apple.com/musickit/v3/musickit.js" async />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))
```

When I first load my app, I'll encounter:

```
musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
    at musickit.js:13:10194
    at musickit.js:13:140
    at musickit.js:13:209
```

The intermittence suggests an issue related to the async keyword. An expansion on this issue can be found here.
Replies: 0 · Boosts: 0 · Views: 553 · Activity: Dec ’24
Changing playback rate of AVQueuePlayer not working for some AirPlay devices
Hi. I am working on an audio app for iOS. I have implemented UI and handling that allow the user to change the playback rate of audio. When the user selects a different rate, I update the rate property on my AVQueuePlayer. This works well on device. When I use AirPlay, it works for some devices and not for others: some devices won't change playback rate and always play at 1x speed. Is this possibly a limitation of those third-party devices, or is there something I'm missing or should check? I would love to get playback rate changes working across all AirPlay devices with our app. Kind regards.
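A minimal sketch of one thing worth trying before blaming the receivers, assuming an AVQueuePlayer named `player`: set an explicit audio time-pitch algorithm on the current item before changing the rate, and log what the item reports it supports. This is a diagnostic aid, not a confirmed fix for AirPlay rate handling.

```swift
import AVFoundation

func setPlaybackRate(_ rate: Float, on player: AVQueuePlayer) {
    // Time-domain pitch correction supports a wide range of rates; without an
    // explicit algorithm, some routes may ignore rate changes.
    player.currentItem?.audioTimePitchAlgorithm = .timeDomain

    if let item = player.currentItem {
        // These flags describe what the current item supports and can help
        // narrow down device-specific behaviour.
        print("canPlayFastForward:", item.canPlayFastForward,
              "canPlaySlowForward:", item.canPlaySlowForward)
    }
    player.rate = rate
}
```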
Replies: 0 · Boosts: 0 · Views: 428 · Activity: Jan ’25
Slow Image Retrieval Using requestImageForAsset:targetSize:contentMode:options:resultHandler:
Hello, I am experiencing slow image retrieval when using the requestImageForAsset:targetSize:contentMode:options:resultHandler: method in my application. The delay is significantly impacting the performance of my app. Here are the details of my implementation:

```objc
for (PHAsset *asset in assets) {
    @autoreleasepool {
        PHImageManager *imageManager = [PHImageManager defaultManager];
        PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
        options.synchronous = YES;
        options.deliveryMode = PHImageRequestOptionsDeliveryModeFastFormat;
        options.resizeMode = PHImageRequestOptionsResizeModeNone;

        [imageManager requestImageForAsset:asset
                                targetSize:CGSizeMake(100, 100)
                               contentMode:PHImageContentModeAspectFill
                                   options:options
                             resultHandler:^(UIImage *thumbnail, NSDictionary *info) {
            CameraRollCellDto *cellDto = [[CameraRollCellDto alloc] init];
            cellDto.index = index;
            cellDto.thumbnail = thumbnail;
            cellDto.propertyDate = asset.creationDate;
            if (self.segmentedTorikomi.selectedSegmentIndex == SEG_INDEX_IKKATSU) {
                cellDto.isSelected = YES;
            } else {
                cellDto.isSelected = NO;
            }
            [list addObject:cellDto];
        }];
        index++;
    }
}
```

Has anyone else encountered this issue? Are there any known solutions or optimizations that can help improve the speed of image retrieval using this method? Thank you for your assistance.
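A hedged alternative sketch (not the poster's code): requesting thumbnails asynchronously through PHCachingImageManager with opportunistic delivery usually avoids blocking the enumeration loop; whether it is enough here also depends on whether the originals are local or iCloud-only.

```swift
import UIKit
import Photos

let cachingManager = PHCachingImageManager()

func requestThumbnails(for assets: [PHAsset],
                       targetSize: CGSize = CGSize(width: 100, height: 100)) {
    let options = PHImageRequestOptions()
    options.isSynchronous = false          // keep the calling thread free
    options.deliveryMode = .opportunistic  // fast degraded image first, better one later
    options.resizeMode = .fast
    options.isNetworkAccessAllowed = true  // iCloud-only originals are a common cause of slow requests

    // Pre-warm the cache for assets that are about to appear on screen.
    cachingManager.startCachingImages(for: assets, targetSize: targetSize,
                                      contentMode: .aspectFill, options: options)

    for asset in assets {
        cachingManager.requestImage(for: asset, targetSize: targetSize,
                                    contentMode: .aspectFill, options: options) { image, _ in
            DispatchQueue.main.async {
                // Hand the thumbnail off to the cell / data source here.
                _ = image
            }
        }
    }
}
```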
Replies: 0 · Boosts: 0 · Views: 300 · Activity: Jan ’25
MusicKit media player missing output device selection
Hi all, I am working on a DJ playout app (macOS). The app has a few AVAudioPlayerNodes combined with the ApplicationMusicPlayer from MusicKit. I can route the output of the AVAudioPlayerNodes to a hardware device so that the audio files are directed to their own dedicated output on my Mac. The ApplicationMusicPlayer, however, follows the default output, and this is pretty annoying. Has anyone found a solution to chain the ApplicationMusicPlayer and set it to a specific output device? Thanks, Pancras
Replies: 2 · Boosts: 0 · Views: 568 · Activity: Feb ’25
Why do I get .ts and .aac requests to the server when playing a downloaded HLS?
I am developing an app to stream and download DRM-protected HLS videos based on the official “FairPlay Streaming Server SDK”. When I play the downloaded video, it asks the server for .ts or .aac segments, even though I have passed the path of the downloaded video to AVURLAsset. As a result, playback fails when the device is offline, such as in airplane mode. This behavior depends on the playback time of the video and occurs when trying to download and play a video with a playback time of 19 hours or more; it did not occur for videos with a playback time of 18 hours. The environment we checked is iOS 18.3. The workaround at this time is to limit the video playback time to 18 hours, but if possible, we would like to allow download playback of videos longer than 19 hours. Does anyone have any information or know of a solution to this problem, for example if you have experienced this type of event, or if you know that content longer than 19 hours cannot be played offline?

```swift
// load
let path = ".../xxx.movpkg" // Path of the downloaded file
videoAsset = AVURLAsset(url: URL(fileURLWithPath: path))
playerItem = AVPlayerItem(asset: videoAsset!)
player.replaceCurrentItem(with: playerItem)

// isPlayableOffline
print("videoAsset.assetCache.isPlayableOffline = \(videoAsset!.assetCache!.isPlayableOffline)") // true
```
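A small sketch for narrowing this down, assuming the download really is complete: building the asset with the network-access options below makes any unexpected segment fetch fail immediately instead of silently hitting the server. These are real AVURLAsset option keys, but whether they change the 19-hour behaviour is untested.

```swift
import AVFoundation

// Hypothetical helper: build a player item for a downloaded .movpkg and
// forbid network access so unexpected .ts/.aac requests fail loudly.
func makeOfflinePlayerItem(for localURL: URL) -> AVPlayerItem {
    let asset = AVURLAsset(url: localURL, options: [
        AVURLAssetAllowsCellularAccessKey: false,
        AVURLAssetAllowsExpensiveNetworkAccessKey: false,
        AVURLAssetAllowsConstrainedNetworkAccessKey: false
    ])
    // If this prints false, the download is not considered complete/playable offline.
    print("isPlayableOffline:", asset.assetCache?.isPlayableOffline ?? false)
    return AVPlayerItem(asset: asset)
}
```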
Replies: 0 · Boosts: 0 · Views: 419 · Activity: Feb ’25
Error codes -1102 and 1852797029 when entering the background while playing HLS assets
In our logging tools (Firebase) I see a lot of errors reported when users are playing content and the app transitions to the background. An AVPlayerItemFailedToPlayToEndTime notification is fired with an error containing codes like -1102 and 1852797029, which seem to correspond to NSURLErrorNoPermissionsToReadFile and kCMIOHardwareIllegalOperationError respectively. To me, it looks like these might have something to do with caching logic. The items being played are HLS streams, and we use AVAssetDownloadTask to make any streamed content available offline. Our setup is similar to the sample provided here: https://developer.apple.com/documentation/avfoundation/using-avfoundation-to-play-and-persist-http-live-streams. Whenever an item is selected for playback, the app checks whether a cached version is available and, if so, gets the URL to the stored file like the "localAssetForStream()" method in the example, or gets the asset from a currently running AVAssetDownloadTask for the item, or else starts a new AVAssetDownloadTask and returns an AVAsset from that task to play. This seems to work fine, and I can't reproduce the issues our users and our logging tools are reporting. Is there some case I am missing where AVAssetDownloadTask and the associated AVAssets might become unreadable when the app transitions to the background? Or do these errors indicate a different problem entirely?
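A minimal observation sketch, assuming you already keep a reference to the AVPlayerItem being played: logging the full underlying NSError from the failure notification makes it easier to tell a file-permission problem (the -1102 case) from something else.

```swift
import AVFoundation

@discardableResult
func observePlaybackFailures(for item: AVPlayerItem) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemFailedToPlayToEndTime,
        object: item,
        queue: .main
    ) { note in
        let error = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? NSError
        // The underlying error often carries a more specific code
        // (e.g. a POSIX/OSStatus value) than the top-level AVFoundation error.
        print("Playback failed:",
              error?.domain ?? "?", error?.code ?? 0,
              "underlying:", error?.userInfo[NSUnderlyingErrorKey] ?? "none")
    }
}
```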
Replies: 0 · Boosts: 0 · Views: 214 · Activity: Mar ’25
SFSpeechRecognizer throws "User denied access to speech recognition"
I have created an app where you can speak using SFSpeechRecognizer, and it will recognize your speech as text, translate it, and then return it using speech synthesis. All locales for SFSpeechRecognizer, and switching between them, work fine while the app is in the foreground. But after I turn off my screen (the app is still running, I just turned the screen off) and try to create a new recognitionTask, it receives this error inside the recognition task: "User denied access to speech recognition". The weird thing is that this only happens with some languages: the error occurs with the Croatian or Hungarian locale for speech recognition, but not with the English or Spanish locale.
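A hedged sketch of one thing worth checking: on-device model availability differs per locale, and locales without on-device support fall back to server-based recognition, which may behave differently once the screen is locked. Forcing on-device recognition where it is supported at least removes that variable; it is not a confirmed explanation of the error.

```swift
import Speech

func makeRecognitionRequest(locale: Locale) -> SFSpeechAudioBufferRecognitionRequest? {
    guard let recognizer = SFSpeechRecognizer(locale: locale), recognizer.isAvailable else {
        print("Recognizer unavailable for \(locale.identifier)")
        return nil
    }
    // Croatian/Hungarian may lack on-device models, unlike English or Spanish.
    print("\(locale.identifier) supports on-device recognition:",
          recognizer.supportsOnDeviceRecognition)

    let request = SFSpeechAudioBufferRecognitionRequest()
    if recognizer.supportsOnDeviceRecognition {
        // Keep recognition local so it does not depend on network or server policy.
        request.requiresOnDeviceRecognition = true
    }
    return request
}
```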
Replies: 1 · Boosts: 0 · Views: 376 · Activity: Mar ’25
Missing Depth Frames When Recording with AVCaptureVideoDataOutputSampleBufferDelegate/AVCaptureDataOutputSynchronizerDelegate and AVAssetWriter
I’ve tried both AVCaptureVideoDataOutputSampleBufferDelegate (captureOutput) and AVCaptureDataOutputSynchronizerDelegate (dataOutputSynchronizer), but the number of depth frames and saved timestamps is significantly lower than the number of frames in the .mp4 file written by AVAssetWriter. In my code, I save:

- Timestamps for each frame to a metadata file
- Depth frames to a binary file
- Video to an .mp4 file

If I record a 4-second video at 30fps, the .mp4 file correctly plays for 4 seconds, but the number of stored timestamps and depth frames is much lower, around 70 frames instead of the expected 120. Does anyone know why this mismatch happens?

```swift
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Read all outputs
    guard let syncedDepthData: AVCaptureSynchronizedDepthData =
            synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData: AVCaptureSynchronizedSampleBufferData =
            synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        // only work on synced pairs
        return
    }

    if syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped {
        return
    }

    let depthData = syncedDepthData.depthData
    let depthPixelBuffer = depthData.depthDataMap
    let sampleBuffer = syncedVideoData.sampleBuffer

    guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        return
    }

    addToPreviewStream?(CIImage(cvPixelBuffer: videoPixelBuffer))

    if !canWrite() {
        return
    }

    // Extract the presentation timestamp (PTS) from the sample buffer
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

    // sessionAtSourceTime is the first buffer we will write to the file
    if self.sessionAtSourceTime == nil {
        // Make sure we don't start recording until the buffer reaches the correct time
        // (buffer is always behind; this will fix the difference in time)
        guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
        self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
        self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
    }

    if self.videoWriterInput!.isReadyForMoreMediaData {
        self.videoWriterInput!.append(sampleBuffer)
        self.videoTimestamps.append(
            Timestamp(
                frame: videoTimestamps.count,
                value: timestamp.value,
                timescale: timestamp.timescale
            )
        )

        let ddm = depthData.depthDataMap
        depthCapture.addDepthData(pixelBuffer: ddm, timestamp: timestamp)
    }
}
```
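A short diagnostic sketch rather than a fix: depth data is usually delivered at a lower rate than video, so inspecting the active depth format's frame-rate ranges (and disabling depth filtering) shows whether roughly 70 depth frames in 4 seconds is simply what the chosen configuration supports. `captureDevice` and `depthOutput` are assumed to be the already-configured objects.

```swift
import AVFoundation

func logDepthConfiguration(for captureDevice: AVCaptureDevice,
                           depthOutput: AVCaptureDepthDataOutput) {
    // Depth formats often top out well below the video frame rate.
    if let depthFormat = captureDevice.activeDepthDataFormat {
        for range in depthFormat.videoSupportedFrameRateRanges {
            print("Depth frame rate range:", range.minFrameRate, "-", range.maxFrameRate)
        }
    } else {
        print("No active depth data format")
    }
    // Filtering smooths/interpolates depth and can also affect delivery cadence.
    depthOutput.isFilteringEnabled = false
}
```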
Replies: 3 · Boosts: 0 · Views: 447 · Activity: Feb ’25
Audio of an ongoing AVCaptureSession video recording stops when a Push To Talk call initializes the audio session, while the video recording continues
We have a Push To Talk application which allows the user to record video and audio. When the user is recording a video using AVCaptureSession and receives a Push To Talk call, from the moment the call is received the audio in the video being captured stops, while the video capture is still in progress. After the PTT call is completed, we have tried restarting the audio session; no errors are printed, but we still don't see the audio restart in the video capture. We have also tried to add a new input to the AVCaptureSession, but we receive an error that results in the video capture stopping. The error is below:

```
[OS-PLT] [CameraManager] Movie file finished with error: Error Domain=AVFoundationErrorDomain Code=-11818 "Recording Stopped"
UserInfo={AVErrorRecordingSuccessfullyFinishedKey=true, NSLocalizedDescription=Recording Stopped,
NSLocalizedRecoverySuggestion=Stop any other actions using the recording device and try again.,
AVErrorRecordingFailureDomainKey=1,
NSUnderlyingError=0x3026bff60 {Error Domain=NSOSStatusErrorDomain Code=-16414 "(null)"}}, success
```

We have also raised a Feedback ticket on this: https://feedbackassistant.apple.com/feedback/16050598
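A hedged configuration sketch, assuming the app can manage the audio session itself: taking over session configuration from AVCaptureSession and using a mixable play-and-record category is a common starting point when another audio client (here, the PTT call) keeps taking the session. Whether the PTT framework honours `.mixWithOthers` is not guaranteed.

```swift
import AVFoundation

func configureAudioForCapture(_ captureSession: AVCaptureSession) throws {
    // Stop AVCaptureSession from resetting the category/options around the PTT call.
    captureSession.automaticallyConfiguresApplicationAudioSession = false

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .videoRecording,
                            options: [.mixWithOthers, .allowBluetooth])
    try session.setActive(true)
}
```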
Replies: 1 · Boosts: 0 · Views: 583 · Activity: Feb ’25
audioEngine.start() fails after the app goes to the background

```swift
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!

func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Sample rate mismatch; re-tapping with the hardware sample rate.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Audio engine couldn't start: \(error)")
    }
}
```

I send the app to the background and then call startAnalyzing(); it reports the error below even though the background recording permissions (the audio background mode) are configured:

```
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Audio engine couldn't start.
```

Is starting the engine from the background not allowed?
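A hedged sketch of the audio-session setup that generally has to happen before an AVAudioEngine input tap can start. The assumption (not confirmed by the post) is that the session is not active with a record-capable category at the moment start() is called from the background; iOS is also stricter about starting brand-new microphone I/O while backgrounded than about continuing a session that was started in the foreground.

```swift
import AVFoundation

func prepareSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    // A record-capable category must be set and the session activated before
    // the engine's input node can start I/O.
    try session.setCategory(.playAndRecord,
                            mode: .measurement,
                            options: [.mixWithOthers])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
}
```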
Replies: 1 · Boosts: 0 · Views: 515 · Activity: Jan ’25
3D scanning and 3D model with texture mapping
I am new here and would appreciate help in coding, or an explanation of what to use in Swift, for an app that will be able to capture LiDAR scans and RGB data from taken pictures, generate a 3D mesh, and create an .OBJ, .MTL, and .JPEG file set for further manipulation of the 3D model. I am able to create a 3D mesh and an .OBJ file from LiDAR scanning, but I cannot generate the .MTL and .JPEG for the texture of the 3D model.
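The .MTL side is just a small text file, so one minimal approach is to write it yourself and reference the JPEG texture from it. This sketch assumes you already have a texture image and that your exported .OBJ contains texture coordinates and `mtllib`/`usemtl` statements pointing at this material; producing the texture itself (for example by projecting camera images onto the mesh) is the harder part and is not shown.

```swift
import Foundation

/// Writes a minimal .mtl file that points at a JPEG texture next to the .obj.
func writeMTL(materialName: String, textureFileName: String, to directory: URL) throws -> URL {
    let mtl = """
    newmtl \(materialName)
    Ka 1.000 1.000 1.000
    Kd 1.000 1.000 1.000
    Ks 0.000 0.000 0.000
    d 1.0
    illum 1
    map_Kd \(textureFileName)
    """
    let url = directory.appendingPathComponent("\(materialName).mtl")
    try mtl.write(to: url, atomically: true, encoding: .utf8)
    return url
}
```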
Replies: 1 · Boosts: 0 · Views: 385 · Activity: Feb ’25
Intermittent Memory Leak Indicated in Simulator When Using AVAudioEngine with mainMixerNode Only
Hello, I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup (just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode), Xcode's memory diagnostics and Instruments sometimes flag a leak. Here is a simplified version of the code I'm using:

```objc
// This function is called when the user taps a button in the view controller:
#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
static AVAudioEngine *audioEngine = nil;

void soundCreate(void) {
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}
```

In the memory leak report, the following call stack is repeated, seemingly in a loop:

```
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*)                                      AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&)                                   AudioToolboxCore
AUListenerAddParameter                                                                                  AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool)   AudioToolboxCore
0x180178ddf
```
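A hedged isolation sketch in Swift rather than the poster's Objective-C: giving the engine an explicit teardown path makes it easier to tell a genuine leak from one-time allocations that AudioToolbox keeps alive for the life of the process.

```swift
import AVFoundation

final class EngineHolder {
    private var engine: AVAudioEngine?
    private var player: AVAudioPlayerNode?

    func start() throws {
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)
        try engine.start()
        self.engine = engine
        self.player = player
    }

    // Tear everything down and re-run the leaks check; allocations that
    // disappear here were caches, not leaks.
    func tearDown() {
        engine?.stop()
        if let player = player { engine?.detach(player) }
        player = nil
        engine = nil
    }
}
```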
Replies: 0 · Boosts: 0 · Views: 109 · Activity: Apr ’25
4k 120fps Showing Black Screen on iPhone 16
Hey - I am developing an app that uses the camera for recording video. I let the user choose a frame rate and resolution, and all combinations work perfectly fine except for 4K 120fps on the new iPhone 16 Pro: the preview just shows black. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120fps to work? My camera setup code is attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240fps and 4K up to 60fps)? Thanks so much for the help.

```swift
class CameraManager: NSObject {
    enum Errors: Error {
        case noCaptureDevice
        case couldNotAddInput
        case unsupportedConfiguration
    }

    enum Resolution {
        case hd1080p
        case uhd4K

        var preset: AVCaptureSession.Preset {
            switch self {
            case .hd1080p: return .hd1920x1080
            case .uhd4K: return .hd4K3840x2160
            }
        }

        var dimensions: CMVideoDimensions {
            switch self {
            case .hd1080p: return CMVideoDimensions(width: 1920, height: 1080)
            case .uhd4K: return CMVideoDimensions(width: 3840, height: 2160)
            }
        }
    }

    enum CameraType {
        case wide
        case ultraWide

        var captureDeviceType: AVCaptureDevice.DeviceType {
            switch self {
            case .wide: return .builtInWideAngleCamera
            case .ultraWide: return .builtInUltraWideCamera
            }
        }
    }

    enum FrameRate: Int {
        case fps60 = 60
        case fps120 = 120
        case fps240 = 240
    }

    let orientationManager = OrientationManager()
    let captureSession: AVCaptureSession
    let previewLayer: AVCaptureVideoPreviewLayer
    let movieFileOutput = AVCaptureMovieFileOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()
    private var videoCaptureDevice: AVCaptureDevice?

    override init() {
        self.captureSession = AVCaptureSession()
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        super.init()
        self.previewLayer.videoGravity = .resizeAspect
    }

    func configureSession(resolution: Resolution,
                          frameRate: FrameRate,
                          stabilizationEnabled: Bool,
                          cameraType: CameraType,
                          sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
        assert(Thread.isMainThread)

        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        captureSession.sessionPreset = resolution.preset

        if captureSession.canAddOutput(movieFileOutput) {
            captureSession.addOutput(movieFileOutput)
        } else {
            throw Errors.couldNotAddInput
        }

        videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate,
                                                queue: DispatchQueue(label: "VideoDataOutputQueue"))
        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            // Set the video orientation if needed
            if let connection = videoDataOutput.connection(with: .video) {
                //connection.videoOrientation = .portrait
            }
        } else {
            throw Errors.couldNotAddInput
        }

        guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType,
                                                               for: .video,
                                                               position: .back) else {
            throw Errors.noCaptureDevice
        }

        let useDimensions = resolution.dimensions
        guard let format = videoCaptureDevice.formats.first(where: { format in
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
            let frameRates = format.videoSupportedFrameRateRanges
            return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
        }) else {
            throw Errors.unsupportedConfiguration
        }

        self.videoCaptureDevice = videoCaptureDevice

        do {
            let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            } else {
                throw Errors.couldNotAddInput
            }

            try videoCaptureDevice.lockForConfiguration()
            videoCaptureDevice.activeFormat = format
            videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
            videoCaptureDevice.exposureMode = .locked
            videoCaptureDevice.unlockForConfiguration()
        } catch {
            throw error
        }

        configureStabilization(enabled: stabilizationEnabled)
    }
}
```
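A hedged diagnostic sketch rather than a fix: dumping the back camera's 4K formats shows whether a 3840x2160 format that reaches 120 fps actually exists on the device, and what pixel format and binning it uses; where present, 4K/120 formats can differ from the 60 fps ones, which may matter for the preview path.

```swift
import AVFoundation

func dump4KFormats(position: AVCaptureDevice.Position = .back) {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position) else { return }
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        guard dims.width == 3840, dims.height == 2160 else { continue }
        let maxRate = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
        let pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription)
        // Prints the media subtype as a raw FourCharCode number plus binning and max fps.
        print("4K format:", pixelFormat, "binned:", format.isVideoBinned, "max fps:", maxRate)
    }
}
```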
Replies: 0 · Boosts: 0 · Views: 464 · Activity: Jan ’25
Transparent overlay changes color in HDR video
When I'm trying to add an overlay to an image with AVMutableVideoComposition and the video is in HDR, the overlay colors change and white becomes grey. [Screenshots: the original HDR video; the result from the code with the wrong overlay color (the distorted colors); the result when reducing to SDR, with the right overlay color (the way it should look).] I'm creating the overlay with a CGContext.

```swift
class CustomHdrCompositor: NSObject, AVVideoCompositing {
    private let coreImageContext = CIContext(options: [CIContextOption.cacheIntermediates: false])
    let combinedFilter = CIFilter(name: "CISourceOverCompositing")!

    var sourcePixelBufferAttributes: [String: Any]? =
        [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var supportsWideColorSourceFrames = true
    var supportsHDRSourceFrames = true

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        return
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = request.renderContext.newPixelBuffer() else {
            print("No valid pixel buffer found. Returning.")
            request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage)
            return
        }

        guard let requiredTrackIDs = request.videoCompositionInstruction.requiredSourceTrackIDs,
              !requiredTrackIDs.isEmpty else {
            print("No valid track IDs found in composition instruction.")
            return
        }

        let sourceCount = requiredTrackIDs.count
        if sourceCount > 1 {
            request.finish(with: CustomCompositorError.notSupportingMoreThanOneSources)
            return
        }

        if sourceCount == 1 {
            let sourceID = requiredTrackIDs[0]
            let sourceBuffer = request.sourceFrame(byTrackID: sourceID.value(of: Int32.self)!)!
            let sourceCIImage = CIImage(cvPixelBuffer: sourceBuffer)

            var textImage = TextLayerPlayer.instance.getTextLayerAtTimesStamp(ts: request.compositionTime.seconds)
            combinedFilter.setValue(textImage, forKey: "inputImage")

            if let outputImage = combinedFilter.outputImage {
                let renderDestination = CIRenderDestination(pixelBuffer: outputPixelBuffer)
                do {
                    try coreImageContext.startTask(toRender: outputImage, to: renderDestination)
                } catch {
                }
            }
        }
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }
}

func regularCompositionHdr(asset: AVAsset) -> AVVideoComposition {
    self.isHdr = checkHdr(asset: asset)

    let avComposition = AVMutableComposition()
    let composition = AVMutableVideoComposition()
    composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_2020
    composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_2100_HLG
    composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_2020
    composition.renderSize = assetSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.customVideoCompositorClass = CustomHdrCompositor.self
    composition.perFrameHDRDisplayMetadataPolicy = .propagate
    return composition
}
```

I'm using this function to convert the transparent CGImage to a CIImage that supports HDR:

```swift
func convertToHDRCIImage(from cgImage: CGImage, maxBrightness: CGFloat = 3.0) -> CIImage? {
    // Create a CIImage from the input CGImage
    let baseImage = CIImage(cgImage: cgImage)

    // Create HDR color adjustment filter
    let colorAdjust = CIFilter(name: "CIColorMatrix")!
    colorAdjust.setValue(baseImage, forKey: kCIInputImageKey)

    // Calculate HDR multipliers based on maxBrightness
    // This will maintain color ratios while increasing brightness
    colorAdjust.setValue(CIVector(x: maxBrightness, y: 0, z: 0, w: 0), forKey: "inputRVector")
    colorAdjust.setValue(CIVector(x: 0, y: maxBrightness, z: 0, w: 0), forKey: "inputGVector")
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: maxBrightness, w: 0), forKey: "inputBVector")
    // Maintain alpha channel
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector")

    guard let adjustedImage = colorAdjust.outputImage else { return nil }

    // Apply color space transformation using CIImage's colorSpace property
    let transformedImage = adjustedImage.matchedFromWorkingSpace(to: hdrWorkingSpace)!

    // Create context with HDR color space
    let context = CIContext(options: [
        .workingColorSpace: hdrColorSpace,
        .outputColorSpace: hdrColorSpace
    ])

    // Get the image bounds
    let bounds = transformedImage.extent

    // Create a new pixel buffer with HDR format
    var pixelBuffer: CVPixelBuffer?
    let pixelBufferAttributes = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_64RGBAHalf,
        kCVPixelBufferMetalCompatibilityKey: true
    ] as CFDictionary

    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(bounds.width),
                        Int(bounds.height),
                        kCVPixelFormatType_64RGBAHalf,
                        pixelBufferAttributes,
                        &pixelBuffer)

    guard let destinationBuffer = pixelBuffer else { return nil }

    context.render(transformedImage,
                   to: destinationBuffer,
                   bounds: bounds,
                   colorSpace: hdrColorSpace)

    // Create final CIImage from the HDR pixel buffer
    let finalImage = CIImage(cvPixelBuffer: destinationBuffer, options: [.colorSpace: hdrColorSpace])
    return finalImage
}
```

When reducing the HDR to SDR, it keeps the right overlay color, but then it reduces the HDR effect, which I want to keep.
Replies: 0 · Boosts: 0 · Views: 396 · Activity: Jan ’25
VNDocumentCameraViewController not working on macOS
I have an iPad app that I want to run on Apple Silicon Macs. Everything works fine except for VNDocumentCameraViewController. According to the docs, this class is available on:

- iOS 13.0+
- iPadOS 13.0+
- Mac Catalyst 13.1+
- visionOS 1.0+

Yet when I try using it, I get "Document camera is not available" on my Mac Studio running macOS 15.2. Is this expected behaviour? Thanks
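A small defensive sketch: VNDocumentCameraViewController exposes an `isSupported` class property, so the scan entry point can be gated on it and a fallback offered where the document camera is unavailable (which is commonly the case for iPad apps running on a Mac). This is a workaround sketch, not a statement about whether the behaviour you see is intended.

```swift
import UIKit
import VisionKit

func presentDocumentScannerIfPossible(from presenter: UIViewController,
                                      delegate: VNDocumentCameraViewControllerDelegate) {
    guard VNDocumentCameraViewController.isSupported else {
        // Offer a photo-library or file-based import instead on unsupported platforms.
        print("Document camera is not supported on this platform")
        return
    }
    let scanner = VNDocumentCameraViewController()
    scanner.delegate = delegate
    presenter.present(scanner, animated: true)
}
```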
Replies: 0 · Boosts: 0 · Views: 367 · Activity: Jan ’25
AVFoundationErrorDomain Code=-11819 in AVPlayer – Causes & Fixes?
We are encountering an issue where AVPlayer throws the error:

```
Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action"
Underlying Error Domain[(null)]:Code[0]:Desc[(null)]
```

This error seems to occur intermittently during video playback, especially after extended usage or when switching between different streams. We observe error -11819 (AVFoundationErrorDomain) in the Conviva platform; some of our users experience it, but we couldn't reproduce it so far, and we need support to determine the root cause and/or best practices to prevent it. Some questions we have:

- What typically triggers this error?
- Could it be related to memory/resource constraints, network instability, or backgrounding?
- Are there any recommended ways to handle or recover from this error gracefully?

Any insights or guidance would be greatly appreciated. Thanks!
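For what it's worth, -11819 matches AVErrorMediaServicesWereReset, which is raised when the system media services daemon restarts; the commonly recommended recovery is to observe the media-services-reset notification and rebuild the player, item, and observers from scratch. A minimal sketch, assuming a `rebuild` closure that recreates your playback stack:

```swift
import AVFoundation

final class PlayerRecovery {
    private var observer: NSObjectProtocol?

    func installMediaServicesResetHandler(rebuild: @escaping () -> Void) {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.mediaServicesWereResetNotification,
            object: nil,
            queue: .main
        ) { _ in
            // Existing AVPlayer/AVPlayerItem instances are unusable after a reset;
            // recreate them and restore the playback position.
            rebuild()
        }
    }

    deinit {
        if let observer = observer { NotificationCenter.default.removeObserver(observer) }
    }
}
```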
Replies: 0 · Boosts: 0 · Views: 145 · Activity: Apr ’25
Use Metal to convert an HDR pixel buffer to an SDR pixel buffer
I have seen demos that convert HDR video to SDR pixel buffers using AVAssetReader, AVVideoComposition, AVComposition, and AVFoundation. But in some cases I want to render HDR pixel buffers and record video.

```objc
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;

AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice.activeFormat.videoHDRSupported) {
    NSError *error = nil;
    if ([videoDevice lockForConfiguration:&error]) {
        videoDevice.automaticallyAdjustsVideoHDREnabled = NO;
        videoDevice.videoHDREnabled = YES; // Enable HDR
        [videoDevice unlockForConfiguration];
    } else {
        NSLog(@"Error: %@", error.localizedDescription);
    }
}
```

Real-time processing of the HDR data requires processing the video frame data (for example applying filters) while making sure the processing chain supports 10-bit color depth and HDR metadata, and also using the image buffers for object tracking, etc. How can I solve this problem?
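Not a Metal shader, but a hedged Core Image sketch of the tone-mapping step the question describes: render the HDR pixel buffer into an 8-bit sRGB destination buffer, which performs the HDR-to-SDR conversion while keeping everything as Metal-compatible CVPixelBuffers for the rest of the pipeline (filters, tracking, recording).

```swift
import CoreImage
import CoreVideo

let ciContext = CIContext()

/// Converts an HDR (e.g. 10-bit HLG) pixel buffer into an 8-bit BGRA/sRGB buffer.
func makeSDRPixelBuffer(from hdrBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let width = CVPixelBufferGetWidth(hdrBuffer)
    let height = CVPixelBufferGetHeight(hdrBuffer)

    var sdrBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferMetalCompatibilityKey: true] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, attrs, &sdrBuffer)
    guard let output = sdrBuffer else { return nil }

    // CIImage typically picks up the source transfer function (HLG/PQ) from the
    // buffer's attachments; rendering into an sRGB destination does the down-conversion.
    let image = CIImage(cvPixelBuffer: hdrBuffer)
    ciContext.render(image, to: output,
                     bounds: CGRect(x: 0, y: 0, width: width, height: height),
                     colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
    return output
}
```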
Replies: 1 · Boosts: 0 · Views: 462 · Activity: Jan ’25