My app is properly configured with MusicKit. I've generated a JWT using my valid credentials (Team ID, Key ID, private key), and I’ve ensured the time settings are correct via NTP.
When I call:
https://api.music.apple.com/v1/catalog/jp/search?term=ado&types=songs
I consistently receive a 500 Internal Server Error.
The JWT is generated using ES256 with valid iat and exp values. I’ve confirmed the token decodes properly using jwt.io, and it's passed via the Authorization: Bearer header.
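For reference, this is essentially how the token is constructed (shown here as a minimal Swift/CryptoKit sketch of the same ES256 flow; the Key ID, Team ID, and key contents are placeholders):

import Foundation
import CryptoKit

// Placeholder credentials; substitute your own Key ID, Team ID, and .p8 contents.
let keyID = "ABC123DEFG"
let teamID = "DEF123GHIJ"
let privateKeyPEM = """
-----BEGIN PRIVATE KEY-----
...contents of the .p8 file...
-----END PRIVATE KEY-----
"""

func base64URL(_ data: Data) -> String {
    data.base64EncodedString()
        .replacingOccurrences(of: "+", with: "-")
        .replacingOccurrences(of: "/", with: "_")
        .replacingOccurrences(of: "=", with: "")
}

func makeDeveloperToken() throws -> String {
    let header: [String: Any] = ["alg": "ES256", "kid": keyID]
    let now = Int(Date().timeIntervalSince1970)
    let payload: [String: Any] = ["iss": teamID, "iat": now, "exp": now + 15_000]

    let headerPart = base64URL(try JSONSerialization.data(withJSONObject: header))
    let payloadPart = base64URL(try JSONSerialization.data(withJSONObject: payload))
    let signingInput = Data("\(headerPart).\(payloadPart)".utf8)

    // CryptoKit's raw (r||s) ECDSA signature is the form JWT ES256 expects.
    let key = try P256.Signing.PrivateKey(pemRepresentation: privateKeyPEM)
    let signature = try key.signature(for: signingInput)
    return "\(headerPart).\(payloadPart).\(base64URL(signature.rawRepresentation))"
}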
Things I’ve confirmed:
Key ID, Team ID, private key are correct
App ID is configured with MusicKit capability
JWT is generated and signed correctly
macOS time is synced via NTP
Used both curl and Python to test — same result
Is there anything else I should check on the Apple Developer Console (like App ID, Certificates, or provisioning profile)?
Or could this be a backend issue on Apple’s side?
Any guidance would be appreciated.
My app encountered problems when trying to open an x86_64 Audio Unit v2 on an Apple Silicon Mac (even though Rosetta is installed).
There seems to be an XPC connection issue with the AUHostingService that I don't know how to fix.
I observed other host apps opening the same plugins without problems, so there is probably something wrong or incompatible in my code.
I noticed that:
The issue occurs whether or not the app is sandboxed.
The issue no longer occurs when the app itself runs under Rosetta.
No error is reported by CoreAudio during allocation and initialization of the audio unit. The first reported errors appear when AudioUnitRender is called on the unit from the rendering callback.
With most x86 plugins, the error on the first call is:
kAudioUnitErr_RenderTimeout
and on any subsequent call:
kAudioComponentErr_InstanceInvalidated
On the UI side, when the Cocoa View is loaded, it appears briefly, then disappears immediately, leaving its superview empty.
With another x86 plugin, the Cocoa View is loaded normally, but CoreAudio still emits
kAudioUnitErr_NoConnection
from AudioUnitRender, whether the view has been loaded or not, and the plugin produces no sound.
I also find these messages in the console (printed in that order):
CLIENT ERROR: RemoteAUv2ViewController does not override - and thus cannot react to catastrophic errors beyond logging them
AUAudioUnit_XPC.mm:641 Crashed AU possible component description: aumu/Helm/Tyte
My app uses the AUv2 API, and I suspect that working with the AUv3 API would spare me these problems.
However, considering how my audio system is built (audio units are wrapped in C++ classes, and most connections between units are managed on the fly from the rendering callback), converting would be a lot of work, and I'm not even sure that everything I do with the AUv2 API would be possible with the AUv3 API.
I could possibly find an intermediate solution, but in the immediate future I'm looking for the simplest and fastest possible fix. If I cannot find better, I see two fallback options:
In this part of the documentation: "Beginning with macOS 11, the system loads audio units into a separate process that depends on the architecture or host preference", does "host preference" mean that it would be possible to disable the "out of process" behavior, for example from the app's entitlements or Info.plist?
Otherwise, as a last resort, I could completely disable the use of x86 Audio Units when my app runs under arm64, at least to make things cleaner. But the Audio Component API doesn't give any information about the plug-in's architecture; how could I find it?
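For that second option, the rough idea I have is the following untested sketch. It assumes I can locate the plug-in's .component bundle, for example by scanning /Library/Audio/Plug-Ins/Components and matching the AudioComponents entries in each bundle's Info.plist against the AudioComponentDescription:

import CoreFoundation
import Foundation

// Sketch: report whether a .component bundle contains an arm64 slice.
// kCFBundleExecutableArchitectureARM64 is available from macOS 11.
func bundleContainsARM64(at url: URL) -> Bool {
    guard let bundle = CFBundleCreate(kCFAllocatorDefault, url as CFURL),
          let archs = CFBundleCopyExecutableArchitectures(bundle) as? [Int] else {
        return false
    }
    return archs.contains(Int(kCFBundleExecutableArchitectureARM64))
}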
Any tip or idea about this issue would be much appreciated.
Thanks in advance!
Hi,
I'm working on a project that uses the AVSpeechSynthesizer and AVSpeechUtterance.
I discovered by chance that AVSpeechSynthesizer automatically expands some words instead of speaking only the text it was given.
These are abbreviations for days of the week or months, but not all of them. I don't want any of them expanded automatically; I want only the specified text spoken. The expansion happens regardless of language.
I have written a short example program for demonstration purposes.
import SwiftUI
import AVFoundation
import Foundation

let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()

struct ContentView: View {
    var body: some View {
        VStack {
            Button {
                utter("mon")
            } label: {
                Text("mon")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("tue")
            } label: {
                Text("tue")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("thu")
            } label: {
                Text("thu")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("feb")
            } label: {
                Text("feb")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("feb", lang: "de-DE")
            } label: {
                Text("feb DE")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("wed")
            } label: {
                Text("wed")
            }
            .buttonStyle(.borderedProminent)
        }
        .padding()
    }

    private func utter(_ text: String, lang: String = "en-US") {
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice(language: lang)
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}

#Preview {
    ContentView()
}
Thank you
Christian
I'm developing an app that sends video captured on an iPhone to a browser app and displays it on screen.
When this app is used on an iPhone 17 Pro or 17 Pro Max, the video displayed in the browser becomes solid green, or a mostly green, colorful image.
From what I've looked into, the resolution of the ultra-wide and telephoto cameras changed on the 17 Pro and 17 Pro Max (12 MP to 48 MP), so I suspect the encoding is failing because of that (see the sketch after the environment details).
Please share any information you have.
Environment
WebRTC library: GoogleWebRTC version 1.1 (installed via CocoaPods)
Signaling server: AWS Kinesis Video Streams
Devices where the issue occurs:
Model: iPhone18,1, OS: 26.0
Model: iPhone18,1, OS: 26.1
Devices where the issue does not occur:
Many models up to and including iPhone17,5
Model: iPhone18,1, OS: 26.0
Model: iPhone18,3, OS: 26.0
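One mitigation that might be worth trying (my assumption, not a confirmed fix) is to pin the capture format to 1080p so the encoder never receives formats derived from the 48 MP sensors. A rough Swift sketch, where device is the AVCaptureDevice used for WebRTC capture:

import AVFoundation

// Sketch: explicitly choose a 1920x1080 format instead of letting the capturer
// pick one derived from the new 48 MP sensors. Assumption, not a confirmed fix.
func pinTo1080p(_ device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: { candidate in
        let dims = CMVideoFormatDescriptionGetDimensions(candidate.formatDescription)
        return dims.width == 1920 && dims.height == 1080
    }) else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
}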
Hello!
We stumbled upon a problem with our karaoke app: a user on iPhone 16e / iOS 18.5 has an issue with mic capture, and other users cannot hear him. Mic capture works fine on iOS 17.5 and 16.8. Is there something else we need to do when configuring AVAudioSession for iOS 18.5?
Currently it's set up like this:
override func viewDidLoad() {
    super.viewDidLoad()
    UIApplication.shared.isIdleTimerDisabled = true
    mRoomId = appDelegate.getRoomId()

    let audioSession = AVAudioSession.sharedInstance()
    try! audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
    try! audioSession.setPreferredSampleRate(48000)
    try! audioSession.setActive(true, options: [])
}
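For reference, a more defensive version of that setup, checking record permission and surfacing errors instead of using try! (assumptions on my part, not a documented iOS 18.5 requirement):

import AVFoundation

// Sketch: confirm mic permission and report session errors instead of crashing on try!.
AVAudioApplication.requestRecordPermission { granted in
    guard granted else { return } // without permission, nobody will hear the user
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .voiceChat,
                                options: [.defaultToSpeaker])
        try session.setPreferredSampleRate(48000)
        try session.setActive(true)
    } catch {
        print("AVAudioSession setup failed: \(error)")
    }
}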
Topic: Media Technologies, SubTopic: Audio
We have a Low-Latency HLS stream. On iOS 26, even though the bandwidth is sufficient, AVPlayer (used with AVPlayerViewController) still selects a low-bandwidth rendition (e.g., RESOLUTION=640x360) for playback instead of a higher-bandwidth one (e.g., RESOLUTION=1920x1080).
This works fine on iOS 18 and earlier versions. What could be the solution to this issue?
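One thing that might help narrow it down is logging which variant the player actually selects over time; a sketch, where streamURL stands in for the LL-HLS multivariant playlist URL:

import AVFoundation

// Sketch: observe the access log to see the indicated bitrate of the variant
// AVPlayer picks as playback progresses.
let item = AVPlayerItem(url: streamURL)
let player = AVPlayer(playerItem: item)

NotificationCenter.default.addObserver(
    forName: .AVPlayerItemNewAccessLogEntry,
    object: item,
    queue: .main
) { _ in
    if let event = item.accessLog()?.events.last {
        print("indicated bitrate:", event.indicatedBitrate,
              "observed bitrate:", event.observedBitrate)
    }
}
player.play()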
Topic: Media Technologies, SubTopic: Streaming
We're distributing a virtual camera with our app that does not benefit at all from automatically applied system video effects, either on the video going in (the physical camera device) or going out (the virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and of determining active video effects via the AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX.
To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I see properties like
AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported, whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats, I was hoping to find something in the extension APIs, but haven't been successful so far.
Can this be done?
I’m implementing FairPlay offline streaming on iOS and ran into a question about DRM expiration handling.
As far as I understand, when issuing a FairPlay offline license, there are typically two time windows:
1. The period during which the user can start offline playback (the longer “rental window”).
2. Once playback starts, the duration allowed to complete playback (the shorter “playback window”).
I’d like to display this information (the remaining validity or expiration time) in the app’s UI next to each downloaded asset.
My question is:
👉 Is there a way to programmatically check or retrieve the expiration time for a FairPlay offline asset on the client side (via AVFoundation or AVContentKeySession)?
Any guidance or best practices for surfacing DRM expiration info in the UI would be greatly appreciated.
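For concreteness, the fallback I can see is to have the key server report the two windows alongside the CKC and do the bookkeeping client-side, something like this sketch (my own struct and assumption, not an AVFoundation or AVContentKeySession API):

import Foundation

// Sketch of client-side bookkeeping, assuming the key server returns both windows.
struct OfflineLicenseInfo: Codable {
    let issuedAt: Date                 // when the persistable content key was issued
    let rentalWindow: TimeInterval     // seconds during which playback may start
    let playbackWindow: TimeInterval   // seconds to finish playback once started
    var firstPlayedAt: Date?           // set when the user first starts playback

    var expirationDate: Date {
        if let started = firstPlayedAt {
            return started.addingTimeInterval(playbackWindow)
        }
        return issuedAt.addingTimeInterval(rentalWindow)
    }

    var remainingTime: TimeInterval {
        max(0, expirationDate.timeIntervalSinceNow)
    }
}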
How can I use an iPhone 16 Pro to capture HDR Dolby Vision video and then use VideoToolbox to encode it with Dolby Vision metadata in the bitstream?
I know the iOS system can achieve this, but I'd like to know how to do it in my own code (ideally with sample source).
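For the capture side, my current understanding (an assumption, happy to be corrected) is that it comes down to picking a format whose supported color spaces include HLG BT.2020; a sketch:

import AVFoundation

// Sketch: select a capture format that supports HLG BT.2020, which is what the
// built-in camera's Dolby Vision (profile 8.4) recording is based on. This covers
// capture only, not the VideoToolbox metadata insertion. Note the session's
// automaticallyConfiguresCaptureDeviceForWideColor should be false, or the
// session may override the color space choice.
func selectHDRCaptureFormat(on device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: {
        $0.supportedColorSpaces.contains(.HLG_BT2020)
    }) else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeColorSpace = .HLG_BT2020
    device.unlockForConfiguration()
}

The VideoToolbox side (getting the Dolby Vision metadata into the encoded bitstream) is the part I'm still missing.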
Topic: Media Technologies, SubTopic: Video
Hi everyone,
I’m exploring using the iPhone 17 Pro with the Blackmagic ProDock in a custom capture app. The genlock functionality seems accessible via AVExternalSyncDevice and related APIs, which is great.
I’m specifically curious about external timecode coming in from the ProDock:
• Is there a public way to access the timecode feed in a custom app via AVFoundation or another Apple API?
• If so, what is the recommended approach to read or apply that timecode during capture?
• Are there any current limitations or entitlements required to access timecode from ProDock in a third-party app?
I’m excited to start integrating synchronized capture in my app, and any guidance or sample patterns would be greatly appreciated.
Thanks in advance!
— [Artem]
Topic: Media Technologies, SubTopic: Video
When I'm on FaceTime, my phone randomly ends my call. I have an iPhone 17 on iOS 26.1. Sometimes I won't even be touching my phone screen and it'll hang up. I'm not sure if this is a universal issue or just a me problem. It's getting really annoying.
Topic: Media Technologies, SubTopic: General
Hey everyone,
I'm stuck on a really frustrating AVFoundation problem. I'm building a video editor that uses a custom AVVideoCompositor to add effects, and I need the final output to be 60 FPS.
So basically, I create an AVMutableComposition to sequence my video clips. I create an AVMutableVideoComposition and set the frame rate to 60 FPS: videoComposition.frameDuration = CMTime(value: 1, timescale: 60)
I assign my CustomVideoCompositor class to the videoComposition.
I create an AVPlayerItem with the composition and video composition.
The Problem:
Playback Works: When I play the AVPlayerItem in an AVPlayer, it's perfect. It plays at a smooth 60 FPS, and my custom compositor's startRequest method is called 60 times per second.
Export Fails: When I try to export the exact same composition and video composition using AVAssetExportSession, the final .mp4 file is always 30 FPS (or 29.97).
I've logged inside my custom compositor during the export, and it's definitely being called only 30 times per second, so only 30 frames are generated. It seems like AVAssetExportSession is effectively dropping every other frame when it encodes the video.
My source videos are screen recordings, which I made with ScreenCaptureKit with the minimum frame interval set for 60 FPS.
Here is my export function. I'm using the AVAssetExportPresetHighestQuality preset:
func exportVideo(to outputURL: URL) async throws {
    guard let composition = composition,
          let videoComposition = videoComposition else {
        throw VideoCompositionError.noValidVideos
    }
    try? FileManager.default.removeItem(at: outputURL)

    guard let exportSession = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality // Is this the problem?
    ) else {
        throw VideoCompositionError.trackCreationFailed
    }

    exportSession.outputFileType = .mp4
    exportSession.videoComposition = videoComposition // This has the 60fps setting
    try await exportSession.export(to: outputURL, as: .mp4)
}
I've created a bare bones sample project that shows this exact bug in action. The resulting video is 60fps during playback, but only 30fps during the export. https://github.com/zaidbren/SimpleEditor
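For what it's worth, this is how I'm checking the exported file's frame rate (a sketch; outputURL is the same URL passed to exportVideo(to:), run from an async throwing context):

let exported = AVURLAsset(url: outputURL)
if let track = try await exported.loadTracks(withMediaType: .video).first {
    let fps = try await track.load(.nominalFrameRate)
    print("Exported nominal frame rate: \(fps)") // reports ~30, not 60
}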
My Question:
Why is AVAssetExportSession ignoring my 60 FPS frameDuration and defaulting to 30 FPS, even though AVPlayer respects it?
Environment
Device: iPhone 16e
iOS Version: 18.4.1 - 18.7.1
Framework: AVFoundation (AVAudioEngine)
Problem Summary
On iPhone 16e (iOS 18.4.1-18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.
Expected Behavior
After a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.
Actual Behavior
After resuming from phone call interruption:
Tap callback is no longer invoked
No audio data is captured
No errors are thrown
Engine appears to be running normally
Note: Normal pause/resume (without phone call interruption) works correctly.
Steps to Reproduce
Start audio recording on iPhone 16e
Receive or make a phone call (triggers AVAudioSession interruption)
End the phone call
Resume recording with audioEngine.start()
Result: Tap callback is not invoked
Tested devices:
iPhone 16e (iOS 18.4.1-18.7.1): Issue reproduces ✗
iPhone 14 (iOS 18.x): Works correctly ✓
iPhone SE 3 (iOS 18.x): Works correctly ✓
Code
Initial Setup (Works)
let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()
Interruption Handling
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: nil
) { notification in
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
        return
    }
    if type == .began {
        self.audioEngine.pause()
    } else if type == .ended {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start()
        // Tap callback doesn't work after this on iPhone 16e
    }
}
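For reference, a variant of the .ended branch that also respects the system's shouldResume hint (a sketch using the same session and engine objects as above; I'm assuming the hint may matter here):

if type == .ended {
    // Only restart when the system indicates that resuming is appropriate.
    let optionsValue = userInfo[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
    let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
    if options.contains(.shouldResume) {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start()
    }
}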
Workaround
Full engine restart is required on iPhone 16e:
func resumeAfterInterruption() {
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
        self.processAudioBuffer(buffer, at: time)
    }
    audioEngine.prepare()
    try? audioSession.setActive(true)
    try? audioEngine.start()
}
This works but adds latency and complexity compared to simple resume.
Questions
Is this expected behavior on iPhone 16e?
What is the recommended way to handle phone call interruptions?
Why does this only affect iPhone 16e and not iPhone 14 or SE 3?
Any guidance would be appreciated!
Hello, we have a video streaming app with HLS VOD content. We supply 1080p and 4K content to users. Users were able to watch 1080p content before tvOS 26, but can no longer watch it after updating to tvOS 26. We have not changed anything on the HLS playlist side or in the application version. This problem only occurs on the Apple TV 4th generation (A1625) running tvOS 26; there is no problem on newer Apple TV devices. Could you help us resolve this? Thanks in advance.
Topic: Media Technologies, SubTopic: Streaming. Tags: FairPlay Streaming, Apple TV, tvOS, HTTP Live Streaming
Hi,
I'm a fan of the gallery on Vision Pro, which has video as well as still photography, but I'm wondering if Apple has considered adding the projected media tags to HEIC so that we can go the next step from Spatial photos to Immersive photos. I have a device that can give me 12K x 6K fisheye images in HDR, but it can't do so at a frame rate or resolution that's good enough for video, so I want to cut my losses and show off immersive photos instead. Is there something Apple is already working on for APMP stills, or should I create my own app that reads metadata inside a HEIC and infers the projection in a similar way to what the "ProjectedMediaConversion" demo does for video? It would be great to have 180° VR photos, which could show as Spatial in a gallery view, but going immersive would half-surround you instead of floating in the blurred view. I think that would be a pretty amazing effect.
Topic: Media Technologies, SubTopic: Photos & Camera
I'm building a Swift video editor with AVFoundation and a custom compositor. Despite setting AVVideoComposition.frameDuration to 60 FPS, I'm seeing significant frame skipping during playback.
Console Output Shows Frame Skipping
Frame #0 at 0.0 ms (fps: 60.0)
Frame #2 at 33.333333333333336 ms (fps: 60.0)
Frame #6 at 100.0 ms (fps: 60.0)
Frame #10 at 166.66666666666666 ms (fps: 60.0)
Frame #32 at 533.3333333333334 ms (fps: 60.0)
Frame #62 at 1033.3333333333335 ms (fps: 60.0)
Frame #96 at 1600.0 ms (fps: 60.0)
Instead of frames every ~16.67ms (60 FPS), I'm getting irregular intervals, sometimes 33ms, 67ms, or hundreds of milliseconds apart.
Renderer.swift (Key Parts)
@MainActor
class Renderer: ObservableObject {
    @Published var playerItem: AVPlayerItem?
    private let assetManager: ProjectAssetManager?
    private let compositorId: String

    func buildComposition() async {
        // ... load mouse moves/clicks data ...
        let composition = AVMutableComposition()
        let videoTrack = composition.addMutableTrack(
            withMediaType: .video,
            preferredTrackID: kCMPersistentTrackID_Invalid
        )

        var currentTime = CMTime.zero
        var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = []

        // Insert video segments
        for videoURL in videoURLs {
            let asset = AVAsset(url: videoURL)
            let tracks = try await asset.loadTracks(withMediaType: .video)
            let assetVideoTrack = tracks.first
            let duration = try await asset.load(.duration)
            try videoTrack.insertTimeRange(
                CMTimeRange(start: .zero, duration: duration),
                of: assetVideoTrack,
                at: currentTime
            )
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
            let transform = try await assetVideoTrack.load(.preferredTransform)
            layerInstruction.setTransform(transform, at: currentTime)
            layerInstructions.append(layerInstruction)
            currentTime = CMTimeAdd(currentTime, duration)
        }

        let videoComposition = AVMutableVideoComposition()
        videoComposition.frameDuration = CMTime(value: 1, timescale: 60) // 60 FPS

        // Set render size from first video
        if let firstURL = videoURLs.first {
            let firstAsset = AVAsset(url: firstURL)
            let firstTrack = try await firstAsset.loadTracks(withMediaType: .video).first
            let naturalSize = try await firstTrack.load(.naturalSize)
            let transform = try await firstTrack.load(.preferredTransform)
            videoComposition.renderSize = CGSize(
                width: abs(naturalSize.applying(transform).width),
                height: abs(naturalSize.applying(transform).height)
            )
        }

        let instruction = CompositorInstruction()
        instruction.timeRange = CMTimeRange(start: .zero, duration: currentTime)
        instruction.layerInstructions = layerInstructions
        instruction.compositorId = compositorId
        videoComposition.instructions = [instruction]
        videoComposition.customVideoCompositorClass = CustomVideoCompositor.self

        let playerItem = AVPlayerItem(asset: composition)
        playerItem.videoComposition = videoComposition
        self.playerItem = playerItem
    }
}
class CompositorInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange = .zero
    var enablePostProcessing: Bool = false
    var containsTweening: Bool = false
    var requiredSourceTrackIDs: [NSValue]?
    var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
    var layerInstructions: [AVVideoCompositionLayerInstruction] = []
    var compositorId: String = ""
}

class CustomVideoCompositor: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String : Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    var requiredPixelBufferAttributesForRenderContext: [String : Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) {
        guard let sourceTrackID = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value,
              let sourcePixelBuffer = asyncVideoCompositionRequest.sourceFrame(byTrackID: sourceTrackID),
              let outputBuffer = asyncVideoCompositionRequest.renderContext.newPixelBuffer() else {
            asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoCompositor", code: -1))
            return
        }

        let videoComposition = asyncVideoCompositionRequest.renderContext.videoComposition
        let frameDuration = videoComposition.frameDuration
        let fps = Double(frameDuration.timescale) / Double(frameDuration.value)

        let compositionTime = asyncVideoCompositionRequest.compositionTime
        let seconds = CMTimeGetSeconds(compositionTime)
        let frameInMilliseconds = seconds * 1000
        let frameNumber = Int(round(seconds * fps))

        print("Frame #\(frameNumber) at \(frameInMilliseconds) ms (fps: \(fps))")

        asyncVideoCompositionRequest.finish(withComposedVideoFrame: outputBuffer)
    }

    func cancelAllPendingVideoCompositionRequests() {}
}
VideoPlayerViewModel
@MainActor
class VideoPlayerViewModel: ObservableObject {
    let player = AVPlayer()
    private let renderer: Renderer

    func loadVideo() async {
        await renderer.buildComposition()
        if let playerItem = renderer.playerItem {
            player.replaceCurrentItem(with: playerItem)
        }
    }
}
What I've Tried
Frame skipping is consistent—exact same timestamps on every playback
Issue persists even with minimal processing (just passing through buffers)
Occurs regardless of compositor complexity
Please note that I need every frame at exact millisecond intervals for my application. Frame loss or inconsistent frameInMillisecond values are not acceptable.
I use
https://api.music.apple.com/v1/me/library/playlists/${playlistId}/tracks
to add tracks to a playlist I created.
How do I DELETE tracks from the playlist?
The documentation does not mention a method for this. I have tried calling DELETE methods in various combinations but nothing seems to work.
Is this possible?
On older iOS versions, when the user taps the Mute/Volume button on AVPlayerViewController to unmute, the system restores the device volume to the level it had before the user muted.
On iOS 26, when the user taps the on-screen unmute button, the volume starts from 0 instead of being restored (but it is still restored if the user unmutes by pressing the physical volume buttons).
As I understand it, the volume bar/button on AVPlayerViewController is an MPVolumeView that I cannot control, so this is system behavior.
But I have received complaints that this is a bug, and I could not find any documentation describing this change in Mute button behavior.
I need some basis for explaining this situation. Thank you.
Hello,
We are developing a real-time speech recognition application and are utilizing AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature - specifically, the recorded video does not capture any audio when this mode is active.
Since we want users to be able to record their experience within our app, this issue significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that both voice processing and screen recording can function simultaneously?
Any guidance would be greatly appreciated.
Thank you!
In what scenario will an app receive the limitExceeded PHPhotosError code? This case was added in iOS 26.1 and is not currently documented. What PhotoKit APIs can encounter this error and how should it be handled?
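For context, the kind of handling I have in mind is sketched below; which calls can actually throw this error is exactly what I'm asking (the change-request code here is just an illustrative example):

import Photos

// Sketch: treat limitExceeded like other PHPhotosError codes coming out of a
// photo library change request.
func addAsset(from imageURL: URL) async {
    do {
        try await PHPhotoLibrary.shared().performChanges {
            PHAssetCreationRequest.forAsset()
                .addResource(with: .photo, fileURL: imageURL, options: nil)
        }
    } catch let error as PHPhotosError where error.code == .limitExceeded {
        // New in iOS 26.1; unclear what the limit is, so just log for now.
        print("Photos reported limitExceeded: \(error)")
    } catch {
        print("Photo library change failed: \(error)")
    }
}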