We've successfully implemented an AVAssetWriter to produce HLS streams (all code is Objective-C++ for interop with the existing codebase) but are struggling to extend the pipeline to use tagged buffers.
We're starting to wonder whether the tagged buffers required for an MV-HEVC signal are fully supported when producing HLS segments in a live-stream setting.
We generate a live stream of data using something like:
UTType *t = [UTType typeWithIdentifier:AVFileTypeMPEG4];
m_writer = [[AVAssetWriter alloc] initWithContentType:t];
// - videoHint describes HEVC and width/height
// - m_videoConfig includes compression settings and, when using MV-HEVC,
// the correct keys are added (i.e. kVTCompressionPropertyKey_MVHEVCVideoLayerIDs)
// The app was throwing an exception without these which was
// useful to know when we got the configuration right.
m_video = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:m_videoConfig sourceFormatHint:videoHint];
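For context, the writer is configured for segmented output and segments are delivered through an AVAssetWriterDelegate; a condensed sketch of that part of the setup (shown in Swift for brevity, our real code is Objective-C++, and the interval value is illustrative):

import AVFoundation
import CoreMedia
import UniformTypeIdentifiers

final class SegmentDelegate: NSObject, AVAssetWriterDelegate {
    // Called once per finished fMP4 segment (and once for the init segment).
    func assetWriter(_ writer: AVAssetWriter,
                     didOutputSegmentData segmentData: Data,
                     segmentType: AVAssetSegmentType,
                     segmentReport: AVAssetSegmentReport?) {
        // Hand the segment data to the HLS packaging/upload path.
    }
}

let segmentDelegate = SegmentDelegate() // keep a strong reference
let writer = AVAssetWriter(contentType: .mpeg4Movie)
writer.outputFileTypeProfile = .mpeg4AppleHLS        // fragmented-MP4 HLS segments
writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
writer.initialSegmentStartTime = .zero
writer.delegate = segmentDelegate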
On either path we produce CVPixelBufferRefs containing the raw pixel data (e.g., 32BGRA), so we use an adaptor to keep that as simple as possible.
If we use a single view and an AVAssetWriterInputPixelBufferAdaptor, things work out very well: we produce segments and the delegate is called.
However, if we use the AVAssetWriterInputTaggedPixelBufferGroupAdaptor, as demonstrated in the SideBySideToMVHEVC demo project, things go poorly.
We create the tagged buffers with something like:
CMTagCollectionRef collections[2];

CMTag leftTags[] = {
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)0),
    CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_LeftEye)
};
CMTagCollectionCreate(kCFAllocatorDefault, leftTags, 2, &(collections[0]));

CMTag rightTags[] = {
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)1),
    CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_RightEye)
};
CMTagCollectionCreate(kCFAllocatorDefault, rightTags, 2, &(collections[1]));

CFArrayRef tagCollections = CFArrayCreate(
    kCFAllocatorDefault, (const void **)collections, 2, &kCFTypeArrayCallBacks
);

// b and alt are CVPixelBufferRef* holding the left/right views.
CVPixelBufferRef buffers[] = {*b, *alt};
CFArrayRef pixelBuffers = CFArrayCreate(
    kCFAllocatorDefault, (const void **)buffers, 2, &kCFTypeArrayCallBacks
);

CMTaggedBufferGroupRef bufferGroup;
OSStatus res = CMTaggedBufferGroupCreate(
    kCFAllocatorDefault, tagCollections, pixelBuffers, &bufferGroup
);
Perhaps there's something about this Objective-C code that I've buggered up? Hopefully!
Anyway, when I submit this tagged buffer group to the adaptor:
if (![mvVideoAdapter appendTaggedPixelBufferGroup:bufferGroup withPresentationTime:pts]) {
    // report error...
}
Appending does not raise any errors; eventually the call just hangs and never returns...
Real issue:
Either the delegate assigned to the AVAssetWriter never fires its assetWriter callback (which should produce the segments),
or the adaptor hangs in appendTaggedPixelBufferGroup before a segment is ready to be completed (but succeeds for a number of buffer groups before this happens).
This is the same delegate class that's assigned on the non-multi-view code path when MV-HEVC is turned off, and that path works perfectly.
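For completeness, our append loop gates on input readiness along these lines; a condensed Swift sketch of the pattern (our actual code is Objective-C++, and nextTaggedGroup is a hypothetical stand-in for our frame source):

videoInput.requestMediaDataWhenReady(on: mediaQueue) {
    while videoInput.isReadyForMoreMediaData {
        // nextTaggedGroup() is hypothetical: it yields a CMTaggedBufferGroup
        // plus its presentation timestamp, or nil when the stream ends.
        guard let (group, pts) = nextTaggedGroup() else {
            videoInput.markAsFinished()
            break
        }
        if !adaptor.appendTaggedPixelBufferGroup(group, withPresentationTime: pts) {
            // Inspect writer.status / writer.error rather than looping forever.
            break
        }
    }
}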
I need to make an app for iOS; how do I get the code into Xcode?
Topic: Media Technologies, SubTopic: Streaming
Hi Team,
We are using AVFoundation to read metadata from a stream and have noticed some delay between when the stream provides metadata and when the app receives it. Could someone from the team advise on ways to reduce this?
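For context, we consume timed metadata roughly like this (condensed Swift sketch; names illustrative, playerItem is our AVPlayerItem), and are wondering whether advanceIntervalForDelegateInvocation is the right knob for reducing the delay:

import AVFoundation

final class MetadataListener: NSObject, AVPlayerItemMetadataOutputPushDelegate {
    func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                        didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                        from track: AVPlayerItemTrack?) {
        // Timestamp the arrival here to measure delivery latency.
    }
}

let listener = MetadataListener()
let output = AVPlayerItemMetadataOutput(identifiers: nil)
// Request callbacks up to 1 s ahead of the metadata's presentation time.
output.advanceIntervalForDelegateInvocation = 1.0
output.setDelegate(listener, queue: .main)
playerItem.add(output)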
Thanks
Hello,
We're seeing an intermittent issue when playing back FairPlay-protected HLS downloads while the device is offline.
Assets are downloaded using AVAggregateAssetDownloadTask with FairPlay protection.
After download, asset.assetCache.isPlayableOffline == true.
On first playback attempt (offline), ~8% of downloads fail.
Retrying playback always works. We recreate the asset and player on each attempt.
During the playback setup, we try to load variants via:
try await asset.load(.variants)
This call sometimes fails with:
Error Domain=NSURLErrorDomain Code=-1009 “The Internet connection appears to be offline.” UserInfo={NSUnderlyingError=0x105654a00 {Error Domain=NSURLErrorDomain Code=-1009 “The Internet connection appears to be offline.” UserInfo={NSDescription=The Internet connection appears to be offline.}}, NSErrorFailingURLStringKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, NSErrorFailingURLKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, NSURL=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, AVErrorFailedDependenciesKey=(
“assetProperty_HLSAlternates”
), NSLocalizedDescription=The Internet connection appears to be offline.}
This variant load is used to determine available audio tracks, check for Dolby support, and apply user language preferences.
After this step, the AVPlayerItem also fails via Combine’s publisher for .status.
However, retrying the entire process immediately after (same offline conditions, same asset path, new AVURLAsset) results in successful playback.
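Our current workaround mirrors what we observe working manually: retry with a fresh AVURLAsset per attempt. A simplified sketch (loadVariantsWithRetry is an illustration, not our production code):

import AVFoundation

// Simplified retry: a fresh AVURLAsset per attempt, as in our retry path.
// Assumes attempts > 0.
func loadVariantsWithRetry(url: URL, attempts: Int = 2) async throws -> [AVAssetVariant] {
    var lastError: Error?
    for _ in 0..<attempts {
        let asset = AVURLAsset(url: url)
        do {
            return try await asset.load(.variants)
        } catch {
            lastError = error // first attempt sometimes fails offline with -1009
        }
    }
    throw lastError!
}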
Assets are represented using the following class:
public class DownloadedAsset: AVURLAsset {
    public let id: String
    public let localFileUrl: URL
    public let fairplayLicenseUrlString: String?
    public let drmToken: String?

    var isProtected: Bool {
        return fairplayLicenseUrlString != nil
    }

    public init(id: String,
                localFileUrl: URL,
                fairplayLicenseUrlString: String?,
                drmToken: String?) {
        self.id = id
        self.localFileUrl = localFileUrl
        self.fairplayLicenseUrlString = fairplayLicenseUrlString
        self.drmToken = drmToken
        super.init(url: localFileUrl, options: nil)
    }
}
We use user-selected quality levels to control bitrate and multichannel (e.g. Dolby 5.1) downloads:
let downloadQuality = UserDefaults.standard.downloadVideoQuality
let bitrate: Int
let shouldDownloadMultichannelTracks: Bool

switch downloadQuality {
case .dataSaver:
    shouldDownloadMultichannelTracks = false
    bitrate = 596564
case .standard:
    shouldDownloadMultichannelTracks = false
    bitrate = 1503844
case .best:
    shouldDownloadMultichannelTracks = true
    bitrate = 7038970
}

var selections = multichannelIdentifiedMediaSelections
if !shouldDownloadMultichannelTracks {
    selections = selections.filter { !$0.isMultichannel }
}

let task = session.aggregateAssetDownloadTask(
    with: asset,
    mediaSelections: selections.map { $0.mediaSelection },
    assetTitle: title,
    assetArtworkData: nil,
    options: [AVAssetDownloadTaskMinimumRequiredMediaBitrateKey: bitrate]
)
Seen on devices running iOS 16, iOS 17, and iOS 18.
What could cause the initial failure of an otherwise valid, offline-ready FairPlay HLS asset?
Could .load(.variants) internally trigger a failed network resolution, even when offline?
Is there an internal caching or initialization behavior in AVFoundation that might explain why the second attempt works?
Any guidance would be appreciated.
Topic: Media Technologies, SubTopic: Streaming
Tags: FairPlay Streaming, iOS, HTTP Live Streaming, AVFoundation
Keywords: FairPlay, FPS Certificate, DRM, FairPlay Streaming, license server
Hi all,
We are currently using FairPlay Streaming in production and already have an FPS certificate in place.
However, the passphrase for the existing FPS certificate has unfortunately been lost.
We are now considering reissuing a new FPS certificate, and I would like to confirm a few points before proceeding:
1️⃣ If we reissue a new FPS certificate, will the existing certificate be automatically revoked?
Or will it remain valid until its original expiration date?
2️⃣ Is it possible to have both the newly issued and the existing certificates valid at the same time?
In other words, can we serve DRM licenses using either certificate depending on the packaging or client?
3️⃣ Are there any caveats or best practices we should be aware of when reissuing an FPS certificate?
For example, would existing packaged content become unplayable, or would CDN/packaging server configurations need to be updated carefully?
Since this affects our production environment, we would like to minimize any service disruption or compatibility issues.
Unfortunately, when we contacted Apple support directly, we were advised to post this question here in the Forums for additional guidance.
Any advice or experiences would be greatly appreciated!
Thank you in advance.
I’m building a music app using Apple Music streaming via ApplicationMusicPlayer.
My goal is to decrease the volume of the current song during the last 10 seconds, and when the next track begins, restore the volume to its normal level.
I know that ApplicationMusicPlayer doesn’t expose a volume API, and I want to avoid triggering the system volume HUD.
✅ Using Apple Music streaming (not local files)
❓ Is it possible to implement per-track fade-out/fade-in logic with ApplicationMusicPlayer?
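The closest approach we've sketched polls playbackTime and drives the hidden MPVolumeView slider, which adjusts system volume rather than the player; the slider lookup is an unofficial workaround and may break, and trackDuration is assumed to come from the queued item's metadata:

import MusicKit
import MediaPlayer
import UIKit

// Fragile sketch: ApplicationMusicPlayer exposes no volume API, so this
// drives the system volume through MPVolumeView's slider (the volume HUD
// is generally suppressed while the view is in the visible hierarchy).
final class TrackFader {
    private let player = ApplicationMusicPlayer.shared
    private weak var slider: UISlider?
    private var baseline: Float = 1.0
    private var timer: Timer?

    init(volumeView: MPVolumeView) {
        // Unofficial: dig the slider out of MPVolumeView's subviews.
        slider = volumeView.subviews.compactMap { $0 as? UISlider }.first
    }

    func start(trackDuration: TimeInterval) {
        baseline = slider?.value ?? 1.0
        timer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { [weak self] _ in
            guard let self, let slider = self.slider else { return }
            let remaining = trackDuration - self.player.playbackTime
            if remaining > 0 && remaining <= 10 {
                slider.setValue(self.baseline * Float(remaining / 10.0), animated: false)
            } else if remaining > 10 {
                slider.setValue(self.baseline, animated: false) // next track: restore
            }
        }
    }
}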
Appreciate any clarification or official guidance!
While validating a Dolby Vision Profile 5 playlist in CMAF format (with segments in MP4), the Media Stream Validator reported the following error in the MUXT-FIX-ISSUES list:
However, the playlist correctly specifies Dolby Vision Profile 5 in both the EXT-X-STREAM-INF and EXT-X-I-FRAME-STREAM-INF tags.
Playlist:
#EXTM3U
#EXT-X-VERSION:8
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-ec3",LANGUAGE="und",NAME="Undetermined",AUTOSELECT=YES,CHANNELS="6",URI="var14711339/aud1257/playlist.m3u8?device_profile=hls&seg_size=6&cmaf=1"
#EXT-X-STREAM-INF:BANDWIDTH=14680000,AVERAGE-BANDWIDTH=14676380,VIDEO-RANGE=PQ,CODECS="ec-3,dvh1.05.06",RESOLUTION=3840x2160,AUDIO="audio-ec3"
var14711339/vid/playlist.m3u8?device_profile=hls&seg_size=6&cmaf=1
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=1419881,URI="trk14711339/playlist.m3u8?device_profile=hls&cmaf=1",VIDEO-RANGE=PQ,CODECS="dvh1.05.06",RESOLUTION=3840x2160
Could you please review this and clarify:
Why is the Media Stream Validator reporting this error, even though the playlist correctly includes Dolby Vision Profile 5 parameters in CMAF format?
Why is this error not reported when using a playlist with TS segments instead of CMAF (MP4)?
Hello, our application is unable to output FairPlay-protected content over HDMI to a TV via the official Lightning HDMI AV Adapter. Checking the console log from mediaplaybackd, we found that a CoreMediaErrorDomain Code=-19156 error is raised, but we are unable to find out what this error code means.
default 11:18:15.121584+0800 mediaplaybackd keyboss ckb_customURLReadCallback: 0x7fa62f800 60/0 customURLReqID 4 isComplete 1 err -19156 error <private> (0) dokeyCallbacksExist 0
default 11:18:15.121670+0800 mediaplaybackd keyboss ckb_processErrorForRequest: 0x7fa62f800 60/0 handler 4 err 0
default 11:18:15.121752+0800 mediaplaybackd <<<< FigCustomURLHandling >>>> curll_cancelRequestOnQueue: 0x7fa031360: requestID: 4
default 11:18:15.121932+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 reqFin err Error Domain=CoreMediaErrorDomain Code=-19156 (-19156) dokeyCallbacksExist 0
default 11:18:15.122025+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 retry
default 11:18:15.123195+0800 mediaplaybackd <<<< FigCPECryptorPKD >>>> PostKeyRequestErrorOccurred: 0x7fab7be80 029592C2-093D-400D-B57F-7AB06CC292D1 key request error: Error Domain=CoreMediaErrorDomain Code=-19160 (-19160)
Hey there,
We've been seeing a high rate of 403 - Invalid Authentication errors on the v1/me/library/artists endpoint for a few days now.
Does anyone have the same issue?
We have a Low-Latency HLS stream, and on iOS 26, even though bandwidth is sufficient, AVPlayer still selects a low-bandwidth rendition (e.g., RESOLUTION=640x360) for playback instead of a higher-bandwidth one (e.g., RESOLUTION=1920x1080) when using AVPlayerViewController with AVPlayer.
This works fine on iOS 18 and earlier. What could be the solution to this issue?
Topic: Media Technologies, SubTopic: Streaming
Hi, I submitted the FairPlay Streaming Credentials Approval request, but it's been 15 days and I haven't received a response yet. Do you happen to know how long they usually take to reply to these requests?
Hi,
I'm trying to create a FairPlay Streaming Certificate for the SDK 26.x version.
Worth mentioning: we already have two certificates (1024-bit and 2048-bit), and we are only offered the option to use our previous 1024-bit certificate (which we do not want, since we need a 2048-bit cert).
Our main issue is that when we upload a new CSR file, the "Continue" button stays gray and we cannot move forward in the process.
The CSR file was created with this command:
openssl req -out csr_2048.csr -new -newkey rsa:2048 \
    -keyout priv_key_2048.pem \
    -subj "/CN=SubjectName/OU=OrganizationalUnit/O=Organization/C=US"
Some help will be appreciated.
Thanks in advance
Best,
Hello,
I'm investigating an issue with LL-HLS playback using AVPlayer, specifically during DVR Live seeking (seeking to a past time).
I noticed that in certain seeking scenarios, AVPlayer sends a Blocking Playlist Reload request that includes the _HLS_msn parameter but is missing the _HLS_part parameter.
While I understand this is compliant with the HLS spec, I would like to know the specific criteria AVPlayer uses to decide when to drop the _HLS_part parameter. Does AVPlayer intentionally omit the part info when it determines that loading a specific partial segment is unnecessary during a seek operation?
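For reference, the two request shapes we observe look like this (path and sequence numbers are illustrative):

GET /media.m3u8?_HLS_msn=1843&_HLS_part=2 (block until part 2 of segment 1843 is available)
GET /media.m3u8?_HLS_msn=1843 (block until segment 1843 itself is complete)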
Clarification on this behavior would help us greatly in debugging our stream delivery.
Thanks in advance.
I am working on the screen-record function on Apple Vision Pro using a broadcast upload extension. After I tap the record button, the Xcode console shows this exception:
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
We create and configure the project as follows:
1. Create an Apple Vision Pro project.
2. Create a Broadcast Upload Extension target.
3. Add an App Group for the project target and the extension target, both using the same identifier.
4. Add the "Main Camera Access" and "Passthrough in Screen Capture" capabilities to all targets.
5. Add "NSScreenCaptureUsageDescription" and "NSMicrophoneUsageDescription" to the Info.plist.
6. Add a record button to the view.
7. Run a debug build on the Apple Vision Pro device; after tapping the record button, the exception is thrown.
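For completeness, the extension's entry point is essentially the Xcode template; a condensed Swift sketch:

import ReplayKit
import CoreMedia

// Broadcast upload extension entry point (NSExtensionPrincipalClass).
class SampleHandler: RPBroadcastSampleHandler {
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            break // forward frames via the shared App Group
        case .audioApp, .audioMic:
            break // audio path; the -19224 log above fires before any samples arrive
        @unknown default:
            break
        }
    }
}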
Hi
Is it possible to have a playlist that starts with a stream in the clear, then switches to a DRM-encrypted period, and later switches back?
Can I just do the following (I've removed the video segment lines; I'm only interested in the parts where I signal the new DRM region)?
#EXT-X-MAP:URI="video_2_10000000_t17586401730000000_init.mp4"
#EXT-X-KEY:METHOD=NONE
...
#EXT-X-MAP:URI="video_2_10000000_t17587374640000000_init.mp4"
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="skd://5df0b36ac4bb4d0ff954a73b502ac332",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1"
...
#EXT-X-MAP:URI="video_2_10000000_t17587376740000000_init.mp4?"
#EXT-X-KEY:METHOD=NONE
Should I insert discontinuity tags or something else?
Right now what I observe is that I get some audio drops when I try this.
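For concreteness, the variant we're considering marks each transition explicitly; a sketch of the pattern (we have not confirmed this is the sanctioned approach):

#EXT-X-KEY:METHOD=NONE
#EXT-X-MAP:URI="video_2_10000000_t17586401730000000_init.mp4"
...
#EXT-X-DISCONTINUITY
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="skd://5df0b36ac4bb4d0ff954a73b502ac332",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1"
#EXT-X-MAP:URI="video_2_10000000_t17587374640000000_init.mp4"
...
#EXT-X-DISCONTINUITY
#EXT-X-KEY:METHOD=NONE
#EXT-X-MAP:URI="video_2_10000000_t17587376740000000_init.mp4"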
Our license service is based on version 4.5.4, and we make use of the sample .c/.h files for building the license service.
We are told that version 4.5.4 is going to be deprecated in 2026 and that we should migrate to the latest SDK, version 26.
When we explored the SDK, we noticed that only Python- and Swift-based SDKs are provided.
Does Apple also provide a C/C++-based SDK, as that would be easier for us to integrate?
If yes, please share the SDK package and a sample license service solution.
Hi,
I understand that AVPlayer/AVFoundation doesn’t natively play MPEG-DASH manifests (.mpd) today, while HLS is supported and widely documented by Apple.
I’m not asking for roadmap commitments, but I’d like to understand whether there is any publicly documented rationale for not supporting DASH/MPD in AVFoundation (e.g., technical constraints, platform integration, DRM ecosystem, power/performance considerations, etc.).
Questions:
Is there any Apple statement / documentation explaining why DASH (MPD) isn’t supported in AVFoundation?
Is Apple’s recommended approach still “provide HLS for Apple clients” (potentially sharing CMAF segments and generating separate manifests)?
If there’s no public rationale, is filing Feedback Assistant the best channel for requesting MPD playback support?
Thanks!
The ASk is used by the KSM to derive the dASk, which is then used to decrypt the SK...R1.
If the only thing we give the client is the certificate, how does it encrypt the SK...R1 so the server is able to process it?
It would be nice to know how this works in general, because I've been getting questions about it and can't provide a helpful answer.
Thanks in advance.
Hello,
I am currently developing a video player using Custom AVPlayer SDK and testing LL-HLS live streaming.
I encountered a specific error, CoreMediaErrorDomain -15418, during playback. I have searched through the official documentation and the forums, but I could not find any information regarding this error code.
I would like to inquire about the following:
Description & Cause: What does the error code -15418 specifically represent in the context of CoreMedia and LL-HLS?
Severity: Is this a critical error that halts playback, or is it merely a warning?
Environment Details:
iOS Version: iOS 26.2
Device: iPhone 15 Pro Max
Stream Type: LL-HLS (Low-Latency HLS)
Impact: Quality drops
Any insights or references to documentation would be greatly appreciated.
Thank you.
Hi
We’re updating our KSM to support SPC v2/v3 and currently operate with both legacy SDK4 credentials (ASK + 1024 cert) and SDK26 credentials (certificate bundle + provisioning data + 1024/2048 keys).
Our client apps run across a wide range of iOS/tvOS versions, so we want to follow Apple’s recommended client strategy for certificate selection. The docs describe SHA‑1 vs SHA‑256 in the SPC header, but do not specify which OS versions should use SDK4 vs SDK26 credentials.
Could you clarify:
Is there an official minimum iOS/tvOS version where you recommend SDK26 credentials for client apps?
For older OS versions (e.g. iOS 15), is SDK4 still the recommended choice for client apps?
Are there any official migration guidelines for client apps moving from SDK4 to SDK26 credentials?
Thanks in advance.