I'm trying to benchmark a Core Image filter chain's memory footprint and noticed a weird quirk in Instruments.
On a real device, even with a simple Core Image chain, memory balloons each time I run the filter. See the attached screenshots.
Running on iPhone 17 Pro:
Running on simulator (M2 MacBook Pro):
As you can see, there's a huge build-up of 4 MB "VM: IOSurface" allocations on the real device, but the simulator seems to clean them up correctly.
Here's my basic code:
func processImage() {
    guard let inputImage = ContentViewModel.loadImageFromBundle(name: "kitty.HEIC") else {
        print("Failed to load sample_image from bundle")
        return
    }

    var outputImage = inputImage
    outputImage = outputImage.applyingFilter("CIBloom", parameters: [
        kCIInputRadiusKey: 20,
        kCIInputIntensityKey: 0.8
    ])

    DispatchQueue.global(qos: .userInitiated).async {
        let data = self.context.jpegRepresentation(of: outputImage, colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
        if let data = data, let uiImage = UIImage(data: data) {
            DispatchQueue.main.async {
                self.displayImage = Image(uiImage: uiImage)
            }
        }
    }
}
Why is this happening? It seems like either a bug or an object I need to release. At the very least, it makes memory usage hard to measure.
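For what it's worth, one variation I've been meaning to try (unverified) is wrapping the render in an explicit autoreleasepool, on the theory that the IOSurface-backed buffers only get released when the surrounding pool drains:

// Unverified variation: wrap the JPEG render in an autoreleasepool so any
// IOSurface-backed buffers created by jpegRepresentation(of:colorSpace:)
// are released when the block ends, rather than whenever the queue's pool drains.
DispatchQueue.global(qos: .userInitiated).async {
    autoreleasepool {
        guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
              let data = self.context.jpegRepresentation(of: outputImage, colorSpace: colorSpace),
              let uiImage = UIImage(data: data) else { return }
        DispatchQueue.main.async {
            self.displayImage = Image(uiImage: uiImage)
        }
    }
}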
Any help is greatly appreciated.
Alex
Photos & Camera
Device: iPhone 17 Pro
iOS Version: iOS 26.1
Camera: Ultra-wide (0.5x) using AVCaptureSession
Our camera app freezes on iPhone 17 when switching frame rates (30fps ↔ 60fps). This works fine on iPhone 16 Pro and earlier.
What We've Observed:
Freeze happens on frame rate change - particularly when stabilization is enabled
Thread.sleep is used - to allow the camera hardware to settle before re-enabling stabilization
Works on older iPhones - only iPhone 17 exhibits this behavior
Console shows these errors before freeze:
<<<< FigXPCUtilities >>>> signalled err=18446744073709534335
<<<< FigCaptureSourceRemote >>>> err=-17281
Is Thread.sleep on the main thread causing the freeze? Should all camera configuration be on a background queue?
Is there something specific about iPhone 17 ultra-wide camera that requires different handling?
Should we use session.beginConfiguration() / session.commitConfiguration() instead of direct device configuration?
Is calling setFrameRate from a property's didSet (which runs synchronously) problematic?
Are the FigCaptureSourceRemote errors (-17281) indicative of the problem, and what do they mean?
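For reference, the alternative I have in mind for the configuration questions above is roughly this pattern (a sketch only; session, device, and the queue label are placeholders rather than our actual code, and it assumes the active format supports the requested rate):

import AVFoundation

// Sketch: the frame-rate change runs on a dedicated session queue inside a
// begin/commitConfiguration block, instead of synchronously from a didSet
// on the main thread with Thread.sleep.
final class CameraController {
    let session = AVCaptureSession()
    let sessionQueue = DispatchQueue(label: "camera.session.queue")
    var device: AVCaptureDevice?

    func setFrameRate(_ fps: Int32) {
        sessionQueue.async {
            guard let device = self.device else { return }
            self.session.beginConfiguration()
            do {
                try device.lockForConfiguration()
                let duration = CMTime(value: 1, timescale: fps)
                device.activeVideoMinFrameDuration = duration
                device.activeVideoMaxFrameDuration = duration
                device.unlockForConfiguration()
            } catch {
                print("Could not lock device for configuration: \(error)")
            }
            self.session.commitConfiguration()
        }
    }
}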
Hey,
There seems to be an inconsistency when capturing a photo using QualityPrioritization.Quality on the iPhone 17 Pro main wide lens. If you zoom above 2x, the output image always has a -2.0 EV bias in the metadata and looks underexposed. This does not happen at zoom levels below 2x, or if you set the QualityPrioritization to .Balanced.
See below:
with .Quality
with .Balanced
This does not happen on the other lenses.
I'm using a simple setup, and it is consistent across JPEG and ProRAW capture. I have a demo project if that would be useful.
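For reference, the capture path is essentially the standard AVCapturePhotoOutput setup; a trimmed sketch of the relevant part (names simplified for this post):

import AVFoundation

// Trimmed sketch of the capture call where the prioritization is chosen.
// `photoOutput` and `delegate` stand in for the app's real objects, and
// maxPhotoQualityPrioritization is assumed to have been raised to .quality
// when the output was configured.
func capturePhoto(with photoOutput: AVCapturePhotoOutput,
                  delegate: any AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    // .quality reproduces the -2.0 EV bias above 2x on the main camera;
    // .balanced does not.
    settings.photoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}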
Thanks,
Alex
PHPhotoLibrary.authorizationStatus(for: .readWrite) == .authorized
Info.plist Privacy - Photo Library Usage Description is set
I check authorization before attempting to get the photoPickerItem.itemIdentifier, but the return value from itemIdentifier is nil every time. It seems I'm missing some permission, but I'm unsure why the system is still keeping _shouldExposeItemIdentifier set to false.
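For context, a minimal sketch of the kind of setup involved, assuming a SwiftUI PhotosPicker; from what I've read, itemIdentifier is only populated when the picker is explicitly initialized with photoLibrary: .shared():

import SwiftUI
import PhotosUI

// Minimal sketch, assuming a SwiftUI PhotosPicker. Reportedly, itemIdentifier
// stays nil unless the picker is initialized with photoLibrary: .shared();
// without it, the out-of-process picker does not expose identifiers.
struct PickerView: View {
    @State private var selection: PhotosPickerItem?

    var body: some View {
        PhotosPicker(selection: $selection,
                     matching: .images,
                     photoLibrary: .shared()) {
            Text("Pick a photo")
        }
        .onChange(of: selection) { _, item in
            print("itemIdentifier:", item?.itemIdentifier ?? "nil")
        }
    }
}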
Topic: Media Technologies
SubTopic: Photos & Camera
I am new to Swift and iOS development, and I have a question about video capture performance.
Is it possible to capture video at a resolution of 4032×3024 while simultaneously running a vision/ML model on the video stream (e.g., using Vision or CoreML)?
I want to know:
whether iOS devices support capturing video at that resolution,
whether the frame rate drops significantly at that scale,
and whether it is practical to run a Vision/ML model in real time while recording at such a high resolution.
If anyone has experience with high-resolution AVCaptureSession setups or combining them with real-time ML processing, I would really appreciate guidance or sample code.
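For concreteness, the kind of pipeline I have in mind is roughly the sketch below (untested; the format selection and the face-detection request are placeholders for whatever model would actually be used):

import AVFoundation
import Vision

// Untested sketch: select the device format with the largest pixel dimensions,
// then run a Vision request on each frame delivered by an AVCaptureVideoDataOutput.
final class HighResCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let processingQueue = DispatchQueue(label: "video.processing.queue")

    func configure() throws {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .back) else { return }
        session.beginConfiguration()

        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        videoOutput.setSampleBufferDelegate(self, queue: processingQueue)
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

        // Pick the format with the largest dimensions the device offers
        // (4032x3024 where available); its supported frame-rate ranges may be
        // lower than those of smaller formats.
        if let format = device.formats.max(by: {
            let a = CMVideoFormatDescriptionGetDimensions($0.formatDescription)
            let b = CMVideoFormatDescriptionGetDimensions($1.formatDescription)
            return a.width * a.height < b.width * b.height
        }) {
            try device.lockForConfiguration()
            device.activeFormat = format
            device.unlockForConfiguration()
        }

        session.commitConfiguration()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let request = VNDetectFaceRectanglesRequest()   // placeholder for the real model
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        try? handler.perform([request])
    }
}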