I am currently developing an HEVC player using VideoToolbox on an iOS device.
I have successfully created an HEVC decoder that receives HEVC streams from our custom image capture and encoding device, and it can decode and display images properly.
However, when my image capture and encoding device configures the encoder to output HEVC streams with fragmented NALUs (i.e., an I-frame or P-frame is split and stored across multiple slice NALUs), the iOS decoder can be initialized successfully but fails to decode and output images.
Can VideoToolbox properly decode HEVC bitstreams when a single frame is split into multiple slice NALUs?
Key Observations:
1. Single-NALU frames work fine.
2. Multi-NALU frames (sliced I/P-frames) cause decoding failure.
3. The decoder session is created successfully (VTDecompressionSessionCreate returns no error).
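Additional context: my current theory (unconfirmed) is that VideoToolbox expects every slice NALU of a frame to be packed into a single CMSampleBuffer with 4-byte length prefixes, rather than delivered as separate sample buffers. Below is the packing helper I plan to try; makeSampleBuffer is my own name, and the approach is an assumption, not something I've verified against the decoder yet.

import CoreMedia

// Unverified sketch: pack all slice NALUs of one frame into a single
// CMSampleBuffer, converting Annex B start codes to the 4-byte big-endian
// length prefixes that an hvc1/hvcC format description implies.
func makeSampleBuffer(sliceNALUs: [Data],
                      formatDescription: CMVideoFormatDescription) -> CMSampleBuffer? {
    // Concatenate every slice as [4-byte length][NALU payload].
    var frameData = Data()
    for nalu in sliceNALUs {
        var length = UInt32(nalu.count).bigEndian
        withUnsafeBytes(of: &length) { frameData.append(contentsOf: $0) }
        frameData.append(nalu)
    }

    // Wrap the bytes in a CMBlockBuffer owned by CoreMedia.
    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: nil, // let CoreMedia allocate
            blockLength: frameData.count,
            blockAllocator: kCFAllocatorDefault,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: frameData.count,
            flags: 0,
            blockBufferOut: &blockBuffer) == kCMBlockBufferNoErr,
          let block = blockBuffer else { return nil }
    _ = frameData.withUnsafeBytes { bytes in
        CMBlockBufferReplaceDataBytes(with: bytes.baseAddress!,
                                      blockBuffer: block,
                                      offsetIntoDestination: 0,
                                      dataLength: frameData.count)
    }

    // One sample = one frame, regardless of how many slices it contains.
    var sampleSize = frameData.count
    var sampleBuffer: CMSampleBuffer?
    guard CMSampleBufferCreate(
            allocator: kCFAllocatorDefault,
            dataBuffer: block,
            dataReady: true,
            makeDataReadyCallback: nil,
            refcon: nil,
            formatDescription: formatDescription,
            sampleCount: 1,
            sampleTimingEntryCount: 0,
            sampleTimingArray: nil,
            sampleSizeEntryCount: 1,
            sampleSizeArray: &sampleSize,
            sampleBufferOut: &sampleBuffer) == noErr else { return nil }
    return sampleBuffer
}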
I am new to using Swift for a Mac application. I am trying to control the focus and other capabilities of an external UVC-compliant camera. However, I'm having trouble with this and don't know where to start. I have downloaded an application from the App Store that can control the focus and other capabilities, so it is clearly possible.
I've tried IOKit, but it seems complicated, and it does not return any capabilities or control the camera.
I also tried AVFoundation and was able to open the camera, but the following code did not work for me: device.isFocusPointOfInterestSupported returns false, and without that check the app crashes.
@IBAction func focusChanged(_ sender: NSSlider) {
    do {
        guard let device = videoDevice else { return }
        try device.lockForConfiguration()
        // Check if focus mode and point of interest are supported
        if device.isFocusModeSupported(.locked) {
            device.focusMode = .locked
        }
        if device.isFocusPointOfInterestSupported {
            // Map the slider value (0.0 to 1.0) to the focus point's X coordinate
            let focusX = CGFloat(sender.doubleValue)
            let focusPoint = CGPoint(x: focusX, y: 0.5) // Y coordinate is typically 0.5 (centered vertically)
            device.focusPointOfInterest = focusPoint
        } else {
            print("Focus point of interest is not supported on this device.")
        }
        device.unlockForConfiguration()
        // Log focus settings
        print("Focus point: \(device.focusPointOfInterest)")
        print("Focus mode: \(device.focusMode.rawValue)")
    } catch {
        print("Error adjusting focus: \(error)")
    }
}
Any help or advice is much appreciated.
I am developing an app that uses MusicKit to play music and then needs to play spoken words to the user, while ducking the audio coming from MusicKit (the application music player).
The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then play it back using AVAudioSession.
Sample code below.
The problem I am having is that .duckOthers is not ducking the Application Music Player output.
Is this a bug, or am I doing this wrong?
// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
// Request a short IO buffer duration for lower-latency speech playback
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)
// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()
// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
Hello Apple Developer Community,
We are developing a music management platform for restaurants and cafes in Saudi Arabia. Our app enables businesses to schedule playlists and allows visitors to request songs via barcodes. Music playback is powered by Apple Music, and users must have their own Apple Music subscriptions to access the music. Our service charges a monthly subscription fee for these management features, not for music access itself.
Project Overview and MusicKit Role
Our app integrates MusicKit to leverage Apple Music’s catalog and playback capabilities. Users log in with their Apple Music accounts, ensuring they have an active subscription for music playback. Our platform’s value lies in its tools—playlist scheduling and song requests—which are built on top of MusicKit’s APIs. We offer these features exclusively in Saudi Arabia.
Legal Context in Saudi Arabia
In Saudi Arabia, to our understanding, no special licenses are required for playing music in commercial venues like restaurants and cafes. This means our clients can use Apple Music subscriptions for playback without additional performance rights licenses. While this aligns with local laws, we recognize that Apple’s global policies may impose stricter requirements, prompting our need for clarification.
Subscription Model and Monetization Concerns
We charge a monthly subscription fee for access to our app’s features (e.g., scheduling playlists and managing song requests). This fee is separate from the Apple Music subscription, which users must maintain for playback. However, Apple’s MusicKit terms state: "You agree not to require payment for or indirectly monetize access to the Apple Music service." We’re concerned whether our subscription model might be interpreted as indirectly monetizing Apple Music access, given its reliance on MusicKit for functionality.
Scheduling Feature and Synchronization Rights
Our app allows businesses to schedule playlists for general time slots (e.g., “play this playlist from 6 PM to 8 PM”). It does not support precise scheduling, such as playing a specific song at an exact moment (e.g., “play this song at 7:30 PM”). Apple’s guidelines mention that “deeper or more complex music integration” may require additional licenses, like synchronization rights. We’re unsure if our general scheduling feature crosses this threshold or remains within MusicKit’s standard usage.
Questions for Clarification
We’d greatly appreciate expert input on the following:
Monetization: Does our subscription fee for management features (scheduling and song requests) violate Apple’s policy against indirectly monetizing Apple Music access?
Local Context: Given that Saudi Arabia requires no additional licenses for commercial music playback, does this impact our compliance with Apple’s global terms?
Scheduling: Does our playlist scheduling for general time slots (not exact moments) fall within MusicKit’s permitted scope, or does it require further licensing?
Thank you in advance for any insights or guidance to ensure our app aligns with Apple’s policies!
Topic: Media Technologies
SubTopic: General
Tags: Apple Music API, MusicKit, MusicKit JS, Apple Music Feed
I have an AUv3 that passes all validation and can be loaded into Logic Pro without issue. The UI for the plug-in can be any aspect ratio, but Logic insists on presenting it in a view with a fixed aspect ratio: when resizing, both the height and width are resized together. I have never managed to work out what I need to specify to tell Logic that the user may resize width or height independently of each other.
Can anyone tell me what I need to specify in the AU code that will inform Logic that the view can be resized from any side of the window/panel?
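For what it's worth, the direction I've been experimenting with, based on my reading of the AUAudioUnit documentation, is overriding supportedViewConfigurations(_:) so the audio unit accepts every configuration the host proposes, on the theory that the host derives resizability from which configurations are accepted. This is an unconfirmed sketch, not a known fix:

// In the AUAudioUnit subclass. Unconfirmed sketch: accept every
// host-proposed view configuration to signal a fully flexible UI.
override func supportedViewConfigurations(_ availableViewConfigurations: [AUAudioUnitViewConfiguration]) -> IndexSet {
    return IndexSet(integersIn: availableViewConfigurations.indices)
}

// The host then calls select(_:) with the configuration it chose;
// resize the plug-in view to the given width/height there.
override func select(_ viewConfiguration: AUAudioUnitViewConfiguration) {
    // e.g. forward viewConfiguration.width / .height to the view controller
}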
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of its features is that, on restart, if you had paused playback of a file when the app was previously shut down (or were playing one at the time of shutdown), the paused state and position in the file are restored exactly as they were.
The functionality works. However, it seems impossible to get the "now playing" information in iOS into the right state to reflect that via the MediaPlayer API. On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo.
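For reference, the restore path boils down to roughly this (simplified and reconstructed from the demo, so treat the details as approximate); my understanding, possibly wrong, is that a zero playback rate should present as paused:

import MediaPlayer

// Publish the stored position with a zero playback rate on restart.
func publishPausedState(position: TimeInterval, duration: TimeInterval) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPMediaItemPropertyPlaybackDuration] = duration
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = position
    info[MPNowPlayingInfoPropertyPlaybackRate] = 0.0 // 0 = paused, 1 = playing
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}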
What happens is that iOS's media view shows the audio as playing and offers a pause button - even though the play action is enabled and the pause action is disabled.
Once playback has been initiated, the controls behave correctly (my workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from controls in a car without initiating it once from the device first).
I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, and then start the app, lock your device and look at the playback controls on the lock screen. It will show a pause button - same behavior I've described.
https://github.com/timboudreau/ios-play-pause-demo
I've tried a few things to narrow down the source of the issue - for example, thinking that MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration might be the culprit (since the system interpolates elapsed time and it's recommended to update those properties infrequently), I tried omitting them on startup - but the result is the same, just without a duration or progress shown.
What governs this behavior, and is there some way to explicitly tell the media player API your current state is paused?
Following WWDC 2023 "Support HDR images in your app", I'm trying to save 48-megapixel ProRAWs (taken on an iPhone 14 Pro Max) as HDR HEICs to the Photo Library. After processing the ProRAW file using CIRAWFilter, whether I use CIContext.heif10Representation() or convert to a CGImage, then UIImage, and use UIImage.heicData(), I get photos that behave oddly in the Photo Library. They appear too dark, and visibly brighten when first viewed, but more problematic is that the photos brighten a great deal more when you edit them with the Photos editor. This is the behavior when using the itur_2100_PQ color space, but itur_2100_HLG behaves similarly, except that it gets dramatically darker when edited. This behavior occurs whether CIRAWFilter.extendedDynamicRangeAmount is set to 0.0, or 2.0, or not set at all.
So what am I doing wrong? Here is a minimal iOS app -- well, just the ContentView -- that demonstrates the issue. You also need a .dng ProRAW file included in the project directory named test.dng. I'd love to include such a file, but I can't.
Be prepared for a multi-second wait when you save the photo.
import SwiftUI
import Photos

struct ContentView: View {
    let context = CIContext()
    let hdrColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!

    var body: some View {
        VStack(spacing: 100) {
            Button("Save Photo From CGImage/UIImage") {
                savePhotoFromUIImage()
            }
            Button("Save Photo From CIImage") {
                savePhotoDirectFromCIImage()
            }
        }.padding(60)
    }

    // Convert RAW with CIRAWFilter to CIImage, then convert to CGImage, then UIImage, then HEIF
    private func savePhotoFromUIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            guard let outputCGImage = context.createCGImage(ciImage, from: ciImage.extent, format: .RGB10, colorSpace: hdrColorSpace) else { return }
            let uiImage = UIImage(cgImage: outputCGImage)
            if let heicData = uiImage.heicData() {
                saveHEIFPhotoToLibrary(imageData: heicData)
            } else {
                print("Failed to convert UIImage to HEIC")
            }
        }
    }

    // Convert RAW with CIRAWFilter to CIImage, then to HEIF
    private func savePhotoDirectFromCIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            do {
                let heif = try context.heif10Representation(of: ciImage, colorSpace: hdrColorSpace)
                saveHEIFPhotoToLibrary(imageData: heif)
            } catch {
                print("Failed to get HEIF representation from CIContext")
            }
        }
    }

    private func processRAW(url: URL) -> CIImage? {
        guard let coreRawFilter = CIRAWFilter(imageURL: url) else { return nil }
        coreRawFilter.extendedDynamicRangeAmount = 2.0 // the issue persists whether this is not set, or set to 0, or set to, say, 2.0
        guard let ciImage = coreRawFilter.outputImage else { return nil }
        return ciImage
    }

    private func saveHEIFPhotoToLibrary(imageData: Data) {
        PHPhotoLibrary.shared().performChanges({
            let creationRequest = PHAssetCreationRequest.forAsset()
            let options = PHAssetResourceCreationOptions()
            creationRequest.addResource(with: .photo, data: imageData, options: options)
        }) { success, error in
            if let error = error {
                print("Error saving photo: \(error.localizedDescription)")
            } else {
                print("Photo saved.")
            }
        }
    }
}
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: Photos and Imaging, Core Graphics, Core Image, EDR
Hello everyone,
I'm working on implementing a screen sharing feature using RPSystemBroadcastPickerView and a Broadcast Upload Extension to share the entire app screen in an iOS application.
The Broadcast Upload Extension is set up following Apple's ReplayKit guidelines. However, I’m encountering an issue during the broadcast startup sequence:
❗ Problem Description
The Screen Broadcast UI appears as expected
I tap “Start Broadcast”
The countdown (3 → 2 → 1) completes
Then it immediately reverts to the "Start Broadcast" screen, and screen sharing does not begin
No error messages are displayed
None of the extension lifecycle methods (broadcastStarted(withSetupInfo:), processSampleBuffer, etc.) are called
There are no logs or crash reports, neither in the main app nor in the extension
✅ What Has Been Verified
Info.plist of the Broadcast Upload Extension includes:
NSExtensionPointIdentifier = com.apple.broadcast-services-upload
NSExtensionPrincipalClass set correctly
RPBroadcastProcessMode = RPBroadcastProcessModeSampleBuffer
preferredExtension is set properly to the extension’s bundle identifier
Extension is listed in the main app's build settings under "Frameworks, Libraries, and Embedded Content"
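For reference, the picker itself is wired up roughly like this (simplified; the bundle identifier below is a placeholder, not our real one):

import ReplayKit

// Broadcast picker pointed at our upload extension.
let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
picker.preferredExtension = "com.example.app.BroadcastExtension" // placeholder bundle ID
picker.showsMicrophoneButton = false
view.addSubview(picker)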
⚠️ Additional Concern
We noticed that in Xcode (latest version), the Broadcast Upload Extension is listed under "Embedded Frameworks" with the setting "Embed Without Signing", and there is no option to change it to "Embed & Sign". We're wondering if this could be the reason the extension fails to launch correctly at runtime, despite being detected by the broadcast picker.
❓ Questions
Has anyone faced similar issues where the broadcast never starts despite correct setup?
Could the "Embed Without Signing" be causing the system to silently cancel or ignore the extension at runtime?
Are there any provisioning profile or entitlement requirements specific to Broadcast Upload Extensions that might trigger this behavior silently?
Any insights, suggestions, or workarounds would be greatly appreciated.
Thank you in advance!
For some users in production, there's a high probability that after launching the App, using AVPlayer to play any local audio resources results in the following error. Restarting the App doesn't help.
issue:
error: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (24), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x30311f270 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}
I've checked the code, and there aren't actually multiple AVPlayers playing simultaneously. What could be causing this?
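In case it's relevant: POSIX error 24 is EMFILE, meaning the process has run out of file descriptors, so presumably something is leaking open files even though players aren't playing simultaneously. One mitigation I'm considering (a sketch only, and it may not address the root cause) is reusing a single long-lived AVPlayer and swapping items so each file's handle is released promptly:

import AVFoundation

// Sketch: one long-lived player; swapping the item out releases the
// underlying file handle instead of accumulating players.
final class LocalAudioPlayback {
    private let player = AVPlayer()

    func play(url: URL) {
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        player.play()
    }

    func stop() {
        player.pause()
        player.replaceCurrentItem(with: nil) // drop the item and its file handle
    }
}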
It sounds simple, but searching for the name "Favorite Songs" is a non-starter because the playlist is called different names in different countries, even if I specify "&l=en_us" on the query.
So is there another property, relationship or combination thereof which I can use to tell me when I've found the right playlist?
Properties I've looked at so far:
canEdit: will always be false so narrows things down a little
inFavorites: not helpful, as it depends on whether the user has favourited the Favorites playlist itself, so not relevant
hasCatalog: seems always true so again may narrow things down a bit
isPublic: doesn't help
Adding the catalog relationship doesn't seem to show anything immediately useful either.
Can anyone help?
Ideally I'd like to see this as a "kind" or "type" as it has different properties to other playlists, but frankly I'll take anything at this point.
Hello,
I have a CarPlay navigation app and utilize AVSpeechSynthesizer to speak directions to the user. Everything works great on my CarPlay simulator as well as when plugged into my GMC truck. However, I found out yesterday that for one of my users with a Ford truck, the audio would cut in and out.
After much troubleshooting, I was able to replicate this on my own truck when using Bluetooth to connect to CarPlay. My user was also utilizing Bluetooth. Has anyone else experienced this? Is there a fix to the problem?
import SwiftUI
import AVFoundation

class TextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
    private var speechSynthesizer = AVSpeechSynthesizer()
    static let shared = TextToSpeechService()

    override init() {
        super.init()
        speechSynthesizer.delegate = self
    }

    func configureAudioSession() {
        speechSynthesizer.delegate = self
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .allowBluetooth])
        } catch {
            print("Failed to set audio session category: \(error.localizedDescription)")
        }
    }

    func speak(_ text: String) {
        Task(priority: .high) {
            let speechUtterance = AVSpeechUtterance(string: text)
            speechUtterance.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
            try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
            speechSynthesizer.speak(speechUtterance)
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        Task {
            stopSpeech()
            try AVAudioSession.sharedInstance().setActive(false)
        }
    }

    func stopSpeech() {
        speechSynthesizer.stopSpeaking(at: .immediate)
    }
}
Hello everyone,
I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes.
Following AVFoundation documentation, I’m configuring my audio session like this:
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)
When it’s time to play cues, I schedule playback on a DispatchQueue:
// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}
This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905 (which appears to be the FourCC '!pla', i.e. AVAudioSession.ErrorCode.cannotStartPlaying).
Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected.
I have added the required background audio mode to my Info plist file by including the key UIBackgroundModes with the value audio.
Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background?
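For context, the workaround I'm currently experimenting with (an untested sketch, based on my guess about how navigation apps behave) keeps the engine running with looping silence, so cues only ever get scheduled onto an already-running engine instead of starting one from the background:

import AVFoundation

// Untested sketch: start the engine in the foreground and keep it alive
// with a looping silent buffer; later cues then mix on top.
func startKeepAlive(engine: AVAudioEngine, player: AVAudioPlayerNode, format: AVAudioFormat) throws {
    guard let silence = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4410) else { return }
    silence.frameLength = silence.frameCapacity // ~0.1 s of zeros at 44.1 kHz
    try engine.start()
    player.play()
    player.scheduleBuffer(silence, at: nil, options: .loops, completionHandler: nil)
}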
Any advice or pointers would be greatly appreciated!
Our team conducted security testing and found one vulnerability in FairPlay license acquisition.
Our QA engineer manually changed the device's system date and time (setting it 4 days into the future) and was able to successfully obtain a license response and initiate playback on an iOS device. However, on an Android device, the license acquisition failed.
Can you please tell us if Time Manipulation Detection is available in FairPlay SDK?
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 24
What’s the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
Turn on flash in low-light scenarios
Lower framerate to improve exposure and reduce noise
Wait until the capture is in focus/notify your user that they need to get closer
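A minimal Vision sketch consistent with that advice (my own illustration, not code from the lab):

import Vision

// Run barcode detection on a captured pixel buffer.
func detectBarcodes(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectBarcodesRequest { request, _ in
        guard let results = request.results as? [VNBarcodeObservation] else { return }
        for barcode in results {
            print(barcode.symbology, barcode.payloadStringValue ?? "<no payload>")
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}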
Question 25
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
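In code, that answer amounts to something like this (my own minimal sketch):

import AVFoundation

// Prefer a virtual device; the system then performs the macro lens
// switch automatically as the subject gets close.
let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)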
Question 26
Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
File provider API
Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week
Question 1
When should I build my custom photo picker instead of using the system one?
Always start with the system picker -> try embeddable customization APIs -> fallback to custom picker for very special needs
Question 2
I'm building a new camera app for pros and I want to give my users the most un-processed image possible, and as much control over the capture as possible. How can I do that with AVCapture?
If stills: a brief Bayer RAW capture overview was given, or ProRAW if you want Apple's processing and dynamic range
If video: ProRes Log was discussed
Custom exposure settings are available through the APIs
Possibly also relevant: a global/local tone mapping discussion
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: Image I/O, Photos and Imaging, PhotoKit, Core Image
We are facing a strange issue where a small portion of our large userbase can not start the capture session in our app, as it gets interrupted with the following reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
Our users are all on iPhones; no one is using an iPad. Just to be sure, we have set
session.isMultitaskingCameraAccessEnabled = true
but it does not seem to make any difference.
Another weird interruption we are seeing
I’m an amateur developer working on a free utility for composers/producers, for which the macOS release needs to create and name RTP-MIDI sessions in Audio MIDI Setup from the command line (so I can ship a small C helper instead of telling users to click through the UI). Here’s what I’ve tried so far, without luck:
• Plist hacks: Injecting entries into ~/Library/Audio/MIDI Configurations/*.mcfg works when AMS is closed, but AMS immediately locks and reverts my changes when it’s open.
• CoreMIDI C API: I can create virtual ports with MIDISourceCreate, but attempting MIDIObjectGetDataProperty on the apple.midirtp.session plugin always returns err –10836.
• Obj-C & Swift: Loading MIDINetworkSession and calling defaultSession, init, setNetworkName: and setting enabled = YES doesn’t produce a new session object in the Network panel.
• dlopen/dlsym: I extracted the real CoreMIDI binary out of the dyld shared cache and tried binding _MIDINetworkSessionCreate, _SetName, _SetEnabled, etc., but all the symbols come back null or my tool segfaults.
• Plugin registration: I’ve pulled the factory UUID (70C9C5EA-7C65-11D8-B317-000393A34B5A) from /System/Library/Extensions/AppleMIDIRTPDriver.plugin/Contents/Info.plist and called CFPlugInRegisterFactories, but it still never exposes the session-creation calls.
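Here is roughly what the MIDINetworkSession attempt looked like (reconstructed from memory, so treat the details as approximate):

import CoreMIDI

// Reconstruction of the attempt above: enable the default network
// session and open it to all connections. This did not surface a new
// session in Audio MIDI Setup's Network panel on macOS.
let session = MIDINetworkSession.default()
session.isEnabled = true
session.connectionPolicy = .anyone
print("network session: \(session.networkName), port: \(session.networkPort)")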
At this point I’m convinced I’m either loading the wrong binary or missing one critical step in registering the RTP-MIDI plugin’s private API. Can anyone point me to:
The exact path of the dylib or bundle that actually exports the MIDINetworkSessionCreate/MIDINetworkSessionSetName/MIDINetworkSessionSetEnabled symbols?
A minimal working snippet (C or Obj-C) that reliably creates and names a Network-MIDI session?
Any pointers, sample code, or even ideas about where Apple hides this functionality on macOS 15 would be hugely appreciated. Thanks!
Hey - I am developing an app that uses the camera for recording video. I added the ability to choose a frame rate and resolution, and all combinations work perfectly fine except 4K 120fps on the new iPhone 16 Pro. This just shows black on the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120fps to work? My camera setup code is attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240fps and 4K up to 60fps)?
Thanks so much for the help.
class CameraManager: NSObject {
    enum Errors: Error {
        case noCaptureDevice
        case couldNotAddInput
        case unsupportedConfiguration
    }

    enum Resolution {
        case hd1080p
        case uhd4K

        var preset: AVCaptureSession.Preset {
            switch self {
            case .hd1080p:
                return .hd1920x1080
            case .uhd4K:
                return .hd4K3840x2160
            }
        }

        var dimensions: CMVideoDimensions {
            switch self {
            case .hd1080p:
                return CMVideoDimensions(width: 1920, height: 1080)
            case .uhd4K:
                return CMVideoDimensions(width: 3840, height: 2160)
            }
        }
    }

    enum CameraType {
        case wide
        case ultraWide

        var captureDeviceType: AVCaptureDevice.DeviceType {
            switch self {
            case .wide:
                return .builtInWideAngleCamera
            case .ultraWide:
                return .builtInUltraWideCamera
            }
        }
    }

    enum FrameRate: Int {
        case fps60 = 60
        case fps120 = 120
        case fps240 = 240
    }

    let orientationManager = OrientationManager()
    let captureSession: AVCaptureSession
    let previewLayer: AVCaptureVideoPreviewLayer
    let movieFileOutput = AVCaptureMovieFileOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()
    private var videoCaptureDevice: AVCaptureDevice?

    override init() {
        self.captureSession = AVCaptureSession()
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        super.init()
        self.previewLayer.videoGravity = .resizeAspect
    }

    func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
        assert(Thread.isMainThread)
        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }
        captureSession.sessionPreset = resolution.preset
        if captureSession.canAddOutput(movieFileOutput) {
            captureSession.addOutput(movieFileOutput)
        } else {
            throw Errors.couldNotAddInput
        }
        videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            // Set the video orientation if needed
            if let connection = videoDataOutput.connection(with: .video) {
                //connection.videoOrientation = .portrait
            }
        } else {
            throw Errors.couldNotAddInput
        }
        guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
            throw Errors.noCaptureDevice
        }
        let useDimensions = resolution.dimensions
        guard let format = videoCaptureDevice.formats.first(where: { format in
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
            let frameRates = format.videoSupportedFrameRateRanges
            return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
        }) else {
            throw Errors.unsupportedConfiguration
        }
        self.videoCaptureDevice = videoCaptureDevice
        do {
            let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            } else {
                throw Errors.couldNotAddInput
            }
            try videoCaptureDevice.lockForConfiguration()
            videoCaptureDevice.activeFormat = format
            videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
            videoCaptureDevice.exposureMode = .locked
            videoCaptureDevice.unlockForConfiguration()
        } catch {
            throw error
        }
        configureStabilization(enabled: stabilizationEnabled)
    }
}
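One thing I've since wondered about (just a guess, not something I've confirmed fixes 4K 120fps): whether the 4K session preset overrides the activeFormat I select afterwards, since high-frame-rate formats typically require letting the input dictate the configuration:

// Guess: switch the session to input-priority so the chosen
// activeFormat wins instead of the 4K preset.
captureSession.sessionPreset = .inputPriority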
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
The device is connected to a 2.4GHz Wi-Fi network.
A Bluetooth audio accessory is connected (e.g. headset).
AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
The user moves away from the access point, causing a disconnect.
Upon returning within range, the access point is no longer recognized or reconnected while AVAudioSession remains active.
However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again.
We confirmed this behavior not only in our app but also using Apple's built-in Voice Memos app, suggesting this is not specific to our implementation.
It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation with Wi-Fi and AVAudioSession interaction?
Test Environment:
Device: iPhone 13 mini
iOS: 17.5.1
Wi-Fi: 2.4GHz band
Accessories: Bluetooth headset
We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
Hi there, I have an app which accesses the media library to save and load files. Since iOS 18.2, access to the media library has stopped working.
Now, I've noticed that our app doesn't show in the list of apps with access to Files (Privacy & Security -> Files & Folders).
The weird behavior is that one iPhone with iOS 18.3.1 can access the files, but others on the same iOS version 18.3.1 cannot. Tests on simulators (Mac) work fine as well.
My Info.plist file has had the keys to access the media library for a long time and hasn't changed (at least in the last 4 years), including the key "Privacy - Media Library Usage Description".
Also, I've noticed that the message (popup) requesting access to the media library when using the app for the first time doesn't show up anymore. We request access to the network (Wi-Fi) and that message still shows up, but not the media library one.
I'm using Visual Studio with Xamarin on a Mac.
I'd really appreciate any help, because this is very odd behavior and it started with iOS 18.2.
Hi,
I have been working on a project that enables users to listen to their favorite music using a streaming service, which so far has been Spotify. The app had a programmable 3D/2D interface with the ability to connect to devices in your home and have them react to music. As of September 2024, Spotify decommissioned their Audio Analysis API. I have seen other posts mention playing Apple Music through AVFoundation, which would break DRM and so isn't supported. However, the Spotify Audio Analysis API did not allow for a full frequency reconstruction: it was entirely temporal data on beats, kicks, loudness, and timbre changes, which themselves are operators on the spectral data from the FFT. It would be very useful for the developer community to get this ability, and it would probably make Apple Music more popular among developers and those who use their apps.
Would love to hear your thoughts about this and Happy New Year!