I am trying to use the SpeechDetector module from the Speech framework together with SpeechTranscriber, and it is giving me the following error:
Cannot convert value of type 'SpeechDetector' to expected element type 'Array.ArrayLiteralElement' (aka 'any SpeechModule')
Below is how I am using it
let speechDetector = Speech.SpeechDetector()
let transcriber = SpeechTranscriber(locale: Locale.current,
                                    transcriptionOptions: [],
                                    reportingOptions: [.volatileResults],
                                    attributeOptions: [.audioTimeRange])
speechAnalyzer = try SpeechAnalyzer(modules: [transcriber, speechDetector])
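A variant I have been considering (just a sketch, assuming the initializer expects [any SpeechModule] as the error message suggests) is to give the array an explicit element type instead of relying on inference:

```swift
// Sketch only: explicitly typing the array as [any SpeechModule],
// on the assumption that SpeechAnalyzer(modules:) expects that element type.
let modules: [any SpeechModule] = [transcriber, speechDetector]
speechAnalyzer = try SpeechAnalyzer(modules: modules)
```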
Hi everyone,
I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing.
I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement.
My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated!
Thanks in advance.
I am trying to use SpeechTranscriber from the Speech framework. Is it possible to use it on the iOS 26 Simulator (running on macOS Tahoe)? The supportedLocales function returns an empty array.
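For reference, the check is essentially the following (a sketch; I am assuming supportedLocales is the async static property on SpeechTranscriber, which is how I call it):

```swift
import Speech

// Sketch: query the locales SpeechTranscriber reports as supported.
// On the iOS 26 simulator this comes back empty for me.
func logSupportedLocales() async {
    let locales = await SpeechTranscriber.supportedLocales
    print("SpeechTranscriber.supportedLocales: \(locales.map(\.identifier))")
}
```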
Hi team,
In the Apple Music Feed datasets, we've noticed some unexpected values in the song and album tables.
The primaryartists column from either song or album may contain a "non-default" artist name such as the katakana name shown in the example below:
select id, name, namedefault, primaryartists from amf_song where id = '1698723329'
id | name | namedefault | primaryartists
----------------------------------------
1698723329 | {default=California} | California | [{id=1264818718, name=チャペル・ローン}]
select * from amf_artist where id = '1264818718'
id | name | namedefault | namepronunciation |
----------------------------------------------
1264818718 | {default=Chappell Roan, ja=チャペル・ローン} | Chappell Roan | {ja=チャペルローン} |
Shouldn't the primaryartists column be showing the namedefault instead of the Japanese language version?
When can we expect this bug to be resolved?
Thanks,
Hello,
Our users have started to see a new fatal AVPlayer error during playback starting with iOS/tvOS 18.0. The error is defined as "CoreMediaErrorDomain Code=-15486".
We have not been able to reproduce this issue locally within our development team.
Is there any documentation on the cause of this error or steps to recover from this error?
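For context, the kind of diagnostics we can capture around this failure looks roughly like this (a sketch using the standard AVPlayerItem failure notification and error log; the actual reporting hook is our own):

```swift
import AVFoundation

// Sketch: log the underlying error and the item's error log when playback fails.
// The notification, error key, and errorLog() are standard AVFoundation APIs;
// how the result is reported is up to the app.
func observePlaybackFailures(for item: AVPlayerItem) {
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemFailedToPlayToEndTime,
        object: item,
        queue: .main
    ) { notification in
        let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
        print("Playback failed: \(String(describing: error))")
        if let log = item.errorLog() {
            for event in log.events {
                print("errorLog: domain=\(event.errorDomain) code=\(event.errorStatusCode) comment=\(event.errorComment ?? "-")")
            }
        }
    }
}
```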
Thank you,
Howard
I'm experiencing a significant limitation with MusicKit's Dolby Atmos implementation on macOS and would appreciate clarification on whether this is intended behavior or if there are solutions available.
When streaming Dolby Atmos content through MusicKit's ApplicationMusicPlayer, the output is limited to 2-channel stereo, even when:
Audio MIDI Setup is configured for 7.1.4 (12-channel) output
The same tracks play in full multichannel through the native Apple Music app
Dolby Atmos is set to "Automatic" in Apple Music preferences
Please let me know if there is any way to enable this. If not, is this documented anywhere? Thanks!
I’m encountering a consistent crash in WebKit when using WKWebView to play a YouTube playlist in my iOS app. Playback starts successfully, but the web process terminates during the second video in the playlist. This only occurs on physical devices, not in the simulator.
Here’s a simplified Swift example of my setup:
import SwiftUI
import WebKit

struct ContentView: View {
    private let playlistID = "PLig2mjpwQBZnghraUKGhCqc9eAy0UbpDN"

    var body: some View {
        YouTubeWebView(playlistID: playlistID)
            .edgesIgnoringSafeArea(.all)
    }
}

struct YouTubeWebView: UIViewRepresentable {
    let playlistID: String

    func makeUIView(context: Context) -> WKWebView {
        let config = WKWebViewConfiguration()
        config.allowsInlineMediaPlayback = true
        let webView = WKWebView(frame: .zero, configuration: config)
        webView.scrollView.isScrollEnabled = true
        let html = """
        <!doctype html>
        <html>
        <head>
        <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0">
        <style>body,html{height:100%;margin:0;background:#000}iframe{width:100%;height:100%;border:0}</style>
        </head>
        <body>
        <iframe
          src="https://www.youtube-nocookie.com/embed/videoseries?list=\(playlistID)&controls=1&rel=0&playsinline=1&iv_load_policy=3"
          frameborder="0"
          allow="encrypted-media; picture-in-picture; fullscreen"
          webkit-playsinline
          allowfullscreen
        ></iframe>
        </body>
        </html>
        """
        webView.loadHTMLString(html, baseURL: nil)
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

#Preview {
    ContentView()
}
Observed behavior:
First video plays without issue.
Web process crashes when the second video in the playlist starts.
Console logs show WebProcessProxy::didClose and repeated memory status messages.
Using ProcessAssertion or background activity does not prevent the crash.
Only occurs on physical devices; simulators do not reproduce the issue.
Questions:
Is there something I should change or add in my WKWebView setup or HTML/iframe to prevent the crash when playing the second video in a playlist on physical iOS devices?
Is there an officially supported way to limit memory or prevent WebKit from terminating the web process during multi-video playback?
Are there recommended patterns for playing YouTube playlists in a WKWebView on iOS without risking crashes?
Any tips for debugging or configuring WKWebView to make it more stable for continuous playlist playback?
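For reference, one recovery hook I am considering (a sketch using WKNavigationDelegate's process-termination callback; it reloads after the crash but does not prevent the termination itself):

```swift
// Sketch: a Coordinator acting as WKNavigationDelegate so the view can at least
// recover when WebKit kills the web content process. This reloads the content;
// it does not address the underlying termination.
extension YouTubeWebView {
    final class Coordinator: NSObject, WKNavigationDelegate {
        func webViewWebContentProcessDidTerminate(_ webView: WKWebView) {
            print("Web content process terminated; reloading")
            webView.reload()
        }
    }

    func makeCoordinator() -> Coordinator { Coordinator() }
}

// In makeUIView(context:), the delegate would be wired up with:
// webView.navigationDelegate = context.coordinator
```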
Thanks in advance for any guidance!
I was testing audio playback from YouTube in Safari, and the sound was clipping heavily. At first, I thought it might be due to the poor quality of my small sound system. However, when I took a screenshot and the screenshot sound effect itself produced a loud clipping noise, it became clear that this is not a mechanical problem with my speakers, nor an issue specific to YouTube or Safari. This appears to be a system-wide audio issue in macOS Tahoe 26 - Beta 5.
Hi,
I’m trying to implement the new PhotoKit PHBackgroundResourceUploadExtension. I created the extension, enabled full photo library access in the host app, and registered the extension point using the string: com.apple.photos.background-upload.
However, when I attempted to enable the extension with:
try library.setUploadJobExtensionEnabled(true)
I received the following error:
Error Domain=PHPhotosErrorDomain Code=-1 "(null)"
This happens when running the app on Xcode 26.1 and 26.2 Beta, using the iPhone 17 Pro Max simulator (iOS 26.1 and 26.2).
My question is: Is this extension supported on the simulator?
I’m asking because at the moment it’s difficult for me to test this on a physical device.
Also, what does this error mean?
Thanks.
I'm seeing crashes in _MPRemoteCommandEventDispatch on iOS 26.x devices in 3 apps. According to Bugsnag logs they are:
NSInternalInconsistencyException: event dispatch <_MPRemoteCommandEventDispatch: <MPRemoteCommandEvent: 0x11c049500 commandID=THV0 command=<MPRemoteCommand: 0x109ad1ea0 type=Play (0) enabled=YES handlers=[0x109b6a310]> sourceID=(null) ([HostedRoutingSessionDataSource] handleControlSendingCommand<2W5E>)> state:201> deallocated without calling continuation
I attached a log from the Xcode Organizer matching the Bugsnag crash.
mpr_remote_command_event.crash
When I set a breakpoint on -[_MPRemoteCommandEventDispatch dealloc], I can see that it is hit every time I tap the play or pause button on the lock screen.
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000002370420cc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e975c810 pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x0000000198f8ff64 abort + 124 (abort.c:122)
3 libc++abi.dylib 0x000000018a7cf808 __abort_message + 132 (abort_message.cpp:66)
4 libc++abi.dylib 0x000000018a7be484 demangling_terminate_handler() + 304 (cxa_default_handlers.cpp:76)
5 libobjc.A.dylib 0x000000018a6cff78 _objc_terminate() + 156 (objc-exception.mm:496)
6 xxxxxxxxxxxxxx 0x00000001003a7db8 CPPExceptionTerminate() + 416 (BSG_KSCrashSentry_CPPException.mm:156)
7 libc++abi.dylib 0x000000018a7cebdc std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
8 libc++abi.dylib 0x000000018a7ceb80 std::terminate() + 108 (cxa_handlers.cpp:88)
9 CoreFoundation 0x000000018d7341c4 __CFRunLoopPerCalloutARPEnd + 256 (CFRunLoop.c:769)
10 CoreFoundation 0x000000018d70bb5c __CFRunLoopRun + 1976 (CFRunLoop.c:3179)
11 CoreFoundation 0x000000018d70aa6c _CFRunLoopRunSpecificWithOptions + 532 (CFRunLoop.c:3462)
12 GraphicsServices 0x000000022e31c498 GSEventRunModal + 120 (GSEvent.c:2049)
13 UIKitCore 0x00000001930ceba4 -[UIApplication _run] + 792 (UIApplication.m:3902)
14 UIKitCore 0x0000000193077a78 UIApplicationMain + 336 (UIApplication.m:5577)
15 xxxxxxxxxxxxxx 0x00000001000c0134 main + 308 (main.swift:15)
16 dyld 0x000000018a722e28 start + 7116 (dyldMain.cpp:1477)
Is the crash happening when the app is being terminated?
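For reference, the remote command handlers are registered roughly like this (a simplified sketch of the setup, not the exact production code; every handler returns a status synchronously):

```swift
import AVFoundation
import MediaPlayer

// Simplified sketch of how the lock screen play/pause commands are wired up.
// Every handler returns an MPRemoteCommandHandlerStatus synchronously.
func configureRemoteCommands(for player: AVPlayer) {
    let center = MPRemoteCommandCenter.shared()

    _ = center.playCommand.addTarget { _ in
        player.play()
        return .success
    }

    _ = center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}
```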
Thank you!
Hello, our application is unable to output FairPlay-protected content over HDMI to a TV via the official Lightning HDMI AV Adapter. Checking the console log of mediaplaybackd, we found that a CoreMediaErrorDomain Code=-19156 error is raised, but we are unable to find out what this error code means.
default 11:18:15.121584+0800 mediaplaybackd keyboss ckb_customURLReadCallback: 0x7fa62f800 60/0 customURLReqID 4 isComplete 1 err -19156 error <private> (0) dokeyCallbacksExist 0
default 11:18:15.121670+0800 mediaplaybackd keyboss ckb_processErrorForRequest: 0x7fa62f800 60/0 handler 4 err 0
default 11:18:15.121752+0800 mediaplaybackd <<<< FigCustomURLHandling >>>> curll_cancelRequestOnQueue: 0x7fa031360: requestID: 4
default 11:18:15.121932+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 reqFin err Error Domain=CoreMediaErrorDomain Code=-19156 (-19156) dokeyCallbacksExist 0
default 11:18:15.122025+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 retry
default 11:18:15.123195+0800 mediaplaybackd <<<< FigCPECryptorPKD >>>> PostKeyRequestErrorOccurred: 0x7fab7be80 029592C2-093D-400D-B57F-7AB06CC292D1 key request error: Error Domain=CoreMediaErrorDomain Code=-19160 (-19160)
I'm wondering if anyone has encountered issues with currentPlaybackRate in the released version of iOS 26.0?
There is no such issue on iOS 18.5.
When I change currentPlaybackRate on iOS 26.0, it seems to be forced back to 0 or 1.0 unexpectedly, for both DRM and non-DRM content. In addition, if currentPlaybackRate is not 1.0, playback becomes abnormal and unstable with noise, and it frequently switches between the stopped and playing states.
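For reference, the rate is set roughly like this (a sketch; I am assuming MPMusicPlayerController.applicationMusicPlayer here, which is the player I use):

```swift
import MediaPlayer

// Sketch of the repro, assuming MPMusicPlayerController.applicationMusicPlayer.
// On iOS 18.5 the requested rate sticks; on iOS 26.0 it appears to snap back
// to 0 or 1.0 and playback becomes unstable.
func setHalfSpeed() {
    let player = MPMusicPlayerController.applicationMusicPlayer
    player.currentPlaybackRate = 0.5
    print("requested 0.5, currentPlaybackRate is now \(player.currentPlaybackRate)")
}
```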
Hi everyone,
After updating my Apple TV HD (model A1625) to tvOS 26, I've noticed a significant spike in CPU usage, up to 3× higher than before the update, going from around 40% to 120%.
Model: Apple TV HD (A1625)
tvOS Version: 26 (stable release) and the 26.1 beta
Our app has to downgrade the stream due to the lack of CPU headroom.
If anyone else is experiencing this, please share your findings or workarounds.
Would love to hear from Apple engineers or other developers if this is a known regression or if there’s a recommended fix.
Thanks!
I'm developing an iPad app that is mostly dedicated to a certain external camera for visually impaired people.
The Linux UVC API (e.g. using guvcview) allows enabling automatic exposure for the camera. The iOS API isExposureModeSupported unfortunately returns false for every exposure mode.
Is it a bug? Or perhaps AVFoundation doesn't support UVC exposure yet?
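For reference, this is essentially how I discover the camera and query exposure support (a sketch; the .external device type is what I use for the UVC camera on iPadOS 17+):

```swift
import AVFoundation

// Sketch: discover the external (UVC) camera and check which exposure modes
// it reports as supported. For my device every mode returns false.
func checkExternalCameraExposure() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )
    guard let camera = discovery.devices.first else {
        print("No external camera found")
        return
    }
    let modes: [AVCaptureDevice.ExposureMode] = [.locked, .autoExpose, .continuousAutoExposure, .custom]
    for mode in modes {
        print("\(mode): supported = \(camera.isExposureModeSupported(mode))")
    }
}
```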
PLATFORM AND VERSION: iOS 18.5
I wanted to bring to your attention a critical issue some of our production users are experiencing with the CoinOut app. Specifically, users are encountering a problem when attempting to capture photos of receipts using the app's customized camera feature. The camera, which utilizes AVCaptureVideoPreviewLayer and AVCaptureDevice, occasionally fails to load the preview, resulting in a black screen instead of the expected camera view.
This camera blackout issue is significantly impacting the user experience as it prevents them from snapping photos of their receipts, which is a core functionality of the CoinOut app.
Any help/suggestion to this issue would be greatly appreciated.
STEPS TO REPRODUCE
Open the app and tap the camera icon.
The camera view is displayed to capture a photo.
The camera shows a black screen for a few production users.
import UIKit
import AVFoundation
import AudioToolbox // for AudioServicesDisposeSystemSoundID

class ViewController: UIViewController {
@IBOutlet private weak var captureButton: UIButton!
private var fillLayer: CAShapeLayer!
private var previewLayer : AVCaptureVideoPreviewLayer!
private var output: AVCapturePhotoOutput!
private var device: AVCaptureDevice!
private var session : AVCaptureSession!
private var highResolutionEnabled: Bool = false
private let sessionQueue = DispatchQueue(label: "session queue")
override func viewDidLoad() {
super.viewDidLoad()
setupCamera()
customiseUI()
}
@IBAction func startCamera(sender: UIButton) {
didTapTakePhoto()
}
private func setupCamera() {
let session = AVCaptureSession()
session.sessionPreset = AVCaptureSession.Preset.high
previewLayer = AVCaptureVideoPreviewLayer(session: session)
output = AVCapturePhotoOutput()
device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
if let device = self.device{
do{
let input = try AVCaptureDeviceInput(device: device)
if session.canAddInput(input){ session.addInput(input)}
else { print("\(#fileID):\(#function):\(#line) : Session Input addition failed") }
if session.canAddOutput(output){
output.isHighResolutionCaptureEnabled = self.highResolutionEnabled
session.addOutput(output)
} else { print("\(#fileID):\(#function):\(#line) : Session Input high resolution failed") }
previewLayer.videoGravity = .resizeAspectFill
previewLayer.session = session
sessionQueue.async { session.startRunning() }
self.session = session
self.session.accessibilityElementIsFocused()
try device.lockForConfiguration()
if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.autoWhiteBalance) {
device.whiteBalanceMode = .autoWhiteBalance
} else { print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported") }
if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.continuousAutoWhiteBalance) {
device.whiteBalanceMode = .continuousAutoWhiteBalance
} else { print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported") }
if device.isFocusModeSupported(.continuousAutoFocus) { device.focusMode = .continuousAutoFocus}
else if device.isFocusModeSupported(.autoFocus) { device.focusMode = .autoFocus }
device.unlockForConfiguration()
} catch { print("\(#fileID):\(#function):\(#line) : \(error.localizedDescription)") }
} else { print("\(#fileID):\(#function):\(#line) : Device found as nil") }
}
private func customiseUI() {
let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height), cornerRadius: 0)
let rectangleWidth = view.frame.width - (view.frame.width * 0.16)
let x = (view.frame.width - rectangleWidth) / 2
let rectangleHeight = view.frame.height - (view.frame.height * 0.16)
let y = (view.frame.height - rectangleHeight) / 2
let roundRect = UIBezierPath(roundedRect: CGRect(x: x, y: y, width: rectangleWidth, height: rectangleHeight), byRoundingCorners:.allCorners, cornerRadii: CGSize(width: 0, height: 0))
roundRect.move(to: CGPoint(x: self.view.center.x , y: self.view.center.y))
path.append(roundRect)
path.usesEvenOddFillRule = true
fillLayer = CAShapeLayer()
fillLayer.path = path.cgPath
fillLayer.fillRule = .evenOdd
fillLayer.opacity = 0.4
previewLayer.addSublayer(fillLayer)
previewLayer.frame = view.bounds
view.layer.addSublayer(previewLayer)
view.bringSubviewToFront(captureButton)
}
private func didTapTakePhoto() {
let settings = self.getSettings(camera: self.device)
if device.isAdjustingFocus {
do {
try device.lockForConfiguration()
device.focusMode = .continuousAutoFocus
device.unlockForConfiguration()
device.addObserver(self, forKeyPath: "adjustingFocus", options: [.new], context: nil)
} catch { print(error) }
} else { output.capturePhoto(with: settings, delegate: self) }
}
func getSettings(camera: AVCaptureDevice) -> AVCapturePhotoSettings {
var settings = AVCapturePhotoSettings()
if let rawFormat = output.availableRawPhotoPixelFormatTypes.first {
settings = AVCapturePhotoSettings(rawPixelFormatType: OSType(rawFormat))
}
settings.isHighResolutionPhotoEnabled = self.highResolutionEnabled
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType] as [String : Any]
settings.previewPhotoFormat = previewFormat
return settings
}
}
extension ViewController: AVCapturePhotoCaptureDelegate {
func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
AudioServicesDisposeSystemSoundID(1108)
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
guard let data = photo.fileDataRepresentation() else { return }
let image = UIImage(data: data)!
showImage(cropped: image)
}
func showImage(cropped: UIImage) {
let vc = self.storyboard?.instantiateViewController(withIdentifier: "ImagePreviewViewController") as? ImagePreviewViewController
vc?.captured = cropped
self.present(vc!, animated: true)
}
}
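One check we are planning to add while investigating (a sketch based on the standard AVFoundation authorization APIs; the assumption that some affected users hit a denied or restricted camera state is ours) is an explicit authorization check before calling setupCamera():

```swift
// Sketch: verify camera authorization before configuring the session.
// If the session is built while access is denied or restricted, the preview
// layer stays black, which matches what the affected users describe.
private func checkCameraAuthorization(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        completion(false) // denied or restricted
    }
}
```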
I often find that basic actions in MusicKit are incredibly slow compared to Apple's Music app. I've tried different versions, devices, networks, and Apple's sample code throughout the last several years, and it is always the same. Does anyone else have this issue?
Our multimedia application Boinx FotoMagico displays media files of various kinds with a Metal rendering engine. At the moment we still use .bgra8Unorm pixel format and sRGB color space and only render in SDR, which is increasingly a problem, as much of the video content is HDR nowadays (e.g. videos shot on an iPhone). For that reason we would like to switch to EDR rendering with .rgba16Float pixel format and extendedLinearDisplayP3 color space.
We have already worked out how to do this for HDR image files, but still have a technical problem when rendering HDR video files. We are using AVFoundation to get the video frames as CVPixelBuffers and convert them to MTLTexture using a CVMetalTextureCache. MTLTextures are then further processed in various compute shaders before being rendered to screen. However the pixel values in the texture are not what we expected. Video frames appear too bright/overexposed.
In WWDC21 session "Explore HDR rendering with EDR" Ken Greenebaum mentioned:
“AVFoundation does not presently decode HDR formats, such as HDR10, to EDR. Consequently, these need to be adapted for use with EDR rendering. This conversion is straightforward and involves two steps. First, converting to linear light by applying the inverse transfer function. And second, dividing by the medium's reference white.”
https://developer.apple.com/videos/play/wwdc2021/10161?time=1498
However, the session does not explain, how to get or calculate the correct value for "reference white". We could not find any relevant info on the web. This is why we need DTS assistance. We need the code that calculates the correct value for reference white for any kind of video, whether it is SDR or HDR, and regardless of codec and encoding. I assume that Ken Greenebaum is the best Apple engineer to ask in this case, because he recorded most of the EDR related WWDC sessions in recent years?
We have written a small test app that renders a short sample video (HLG encoding). The window contains two views. The upper view uses an AVPlayerLayer and renders the video natively just like QuickTime Player. The video content looks correct here. BTW, the window background is SDR white, so that bright EDR pixels can be clearly identified, e.g. the clouds just above the mountains in the upper left corner of the sample video. You may need to lower display brightness a bit if these clouds do not appear brighter than the white window background.
The bottom view uses a CAMetalLayer and low-level Metal rendering. The CVPixelBuffers we receive from AVFoundation still need to be scaled down so that SDR reference white reaches pixel value 1.0. Entering a value of 9.0 to 10.0 for reference white in the text field makes it look about right on my Studio Display. But that is just experimental for this sample video file. We need code to calculate the correct value for reference white for any kind of video file!
We have a couple of questions:
SDR videos should probably use 1.0 as reference white, as their encoded pixel values can already be used as is? Is this assumption correct?
Different video encoding of HDR video (HLG, PQ, etc) will probably lead to different values for reference white?
Is the value for reference white constant throughout a video, or can it vary over time, either scene by scene, or even frame by frame?
If it can vary, does the CVPixelBuffer of the current video frame contain all the necessary metadata to calculate the correct value?
Does the NSScreen.maximumExtendedDynamicRangeColorComponentValue also influence the reference white value?
The attached sample project is structured in a way that the only piece of code that needs to be modified is the ViewController.sdrReferenceWhiteValue() function. Please read the comments and the #warning in this function. This is where the code for calculating the reference white value should be inserted.
Here is the download link for the sample project:
https://www.dropbox.com/scl/fi/4w5gmftav5xhbixu9u6pb/HDRMetalTest.zip?rlkey=n8cm02soux3rx03vplgo6h1lm&dl=0
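For what it's worth, the direction we are experimenting with is reading the per-frame transfer function from the CVPixelBuffer (a sketch; the divisor values at the end are placeholders from our experiments, not the answer we are asking for):

```swift
import CoreVideo

// Sketch: read the transfer function attachment from the CVPixelBuffer that
// AVFoundation hands us, so we can branch between SDR, HLG, and PQ content.
// The returned divisors are experimental placeholders, NOT known-correct values.
func experimentalReferenceWhite(for pixelBuffer: CVPixelBuffer) -> Float {
    guard let tf = CVBufferCopyAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey, nil) as? String else {
        return 1.0 // no metadata: treat as SDR for now
    }
    if tf == (kCVImageBufferTransferFunction_ITU_R_2100_HLG as String) {
        return 9.0  // placeholder guessed from our HLG test clip
    }
    if tf == (kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ as String) {
        return 10.0 // placeholder, untested
    }
    return 1.0      // BT.709 / sRGB etc.: assume SDR
}
```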
Our streaming app uses FairPlay-protected video streams, which previously worked fine when using AVAssetResourceLoaderDelegate to provide CKCs.
Recently, we migrated to AVContentKeySession, and while everything works as expected during regular playback, we encountered an issue with AirPlay.
Our CKC has a 120-second expiry, so we renew it by calling renewExpiringResponseData.
This triggers the didProvideRenewingContentKeyRequest delegate callback, and we respond with an updated CKC.
However, when streaming via AirPlay, both video and audio freeze exactly after 120 seconds.
To validate the issue, I tested with AVAssetResourceLoaderDelegate and found that I can reproduce the same freeze if I do not renew the key. This suggests that AirPlay is not accepting the renewed CKC when using AVContentKeySession.
Additional Details:
This issue occurs across different iOS versions and various AirPlay devices.
The same content plays without issues when played directly on the device.
The renewal process is successful, and segments continue to load, but playback remains frozen.
We tried renewing the CKC a bit early (at 100 seconds).
I also tried setting player.usesExternalPlaybackWhileExternalScreenIsActive = true, but the issue persists.
We don't use persistentKey.
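For reference, the renewal flow looks roughly like this (a simplified sketch of our setup, not the exact production code; fetchCKC(for:) stands in for our license-server call):

```swift
import AVFoundation

// Simplified sketch of our renewal path. fetchCKC(for:) is a placeholder for
// our own SPC/CKC exchange with the license server; the rest is the standard
// AVContentKeySession API. setDelegate(_:queue:) and addContentKeyRecipient(_:)
// are called at player setup (not shown).
final class KeyManager: NSObject, AVContentKeySessionDelegate {
    let session = AVContentKeySession(keySystem: .fairPlayStreaming)
    private var currentRequest: AVContentKeyRequest?

    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        currentRequest = keyRequest
        respond(to: keyRequest)
    }

    func contentKeySession(_ session: AVContentKeySession,
                           didProvideRenewingContentKeyRequest keyRequest: AVContentKeyRequest) {
        currentRequest = keyRequest
        respond(to: keyRequest) // same path as the initial request
    }

    // Driven by a timer shortly before the 120-second CKC expiry.
    func renew() {
        guard let request = currentRequest else { return }
        session.renewExpiringResponseData(for: request)
    }

    private func respond(to keyRequest: AVContentKeyRequest) {
        let ckcData = fetchCKC(for: keyRequest) // placeholder for the license request
        let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckcData)
        keyRequest.processContentKeyResponse(response)
    }

    private func fetchCKC(for keyRequest: AVContentKeyRequest) -> Data { Data() } // placeholder
}
```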
Is there anything else that needs to be considered for proper key renewal when AirPlaying?
Any help on how to fix this or confirmation if this is a known issue would be greatly appreciated.
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput, with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]], to scan for faces.
After updating to iPadOS 26 beta 2 and iOS 26 beta 2, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue:
iPad (9th Gen)
iPad Air (4th Gen)
iPhone 15
This issue does not occur on any of the other devices I have.
I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs. https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required on the iPadOS 26 and iOS 26 betas, or is there a problem on the OS side?
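For reference, the relevant setup is essentially the following (a Swift sketch of the kind of configuration we use, not the exact Objective-C code):

```swift
import AVFoundation

// Sketch of the metadata output configuration whose delegate callback has
// stopped firing on the listed devices under the 26 betas. The output has to
// be added to the session before metadataObjectTypes is set, otherwise
// availableMetadataObjectTypes is empty.
func configureFaceMetadata(on session: AVCaptureSession,
                           delegate: AVCaptureMetadataOutputObjectsDelegate) {
    let metadataOutput = AVCaptureMetadataOutput()
    guard session.canAddOutput(metadataOutput) else { return }
    session.addOutput(metadataOutput)

    metadataOutput.setMetadataObjectsDelegate(delegate, queue: .main)

    // .face is listed as available on these devices, yet
    // metadataOutput(_:didOutput:from:) is never called under the 26 betas.
    if metadataOutput.availableMetadataObjectTypes.contains(.face) {
        metadataOutput.metadataObjectTypes = [.face]
    }
}
```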