iPhone 11 Cameras: What is “Semantic Rendering”?

What’s the difference between an iPhone X and an iPhone 11? Well, it looks like Apple has really focused on the new cameras and camera software.

An ultra-wide lens replaces the telephoto, and computational photography delivers a significant leap forward in image quality. One feature that comes with the new camera software is Semantic Rendering.

Put simply, Semantic Rendering is an intelligent approach to automatically adjusting highlights, shadows, and sharpness in specific areas of a photo.

In the field of AI, “semantics” refers to a system’s ability to segment and label information the way humans do. In photography, this process starts with subject recognition.

An iPhone 11 uses Semantic Rendering to look for people within the frame. But there’s more to it than that. According to Apple, the iPhone 11 can now differentiate between skin, hair, and even eyebrows. The software then renders these segments differently. This should, for example, improve your portrait photography.
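Apple hasn’t published how this works under the hood, but the general idea of per-segment rendering can be sketched in a few lines of Python. Everything here, from the mask names to the adjustment values, is an illustrative assumption rather than Apple’s actual pipeline:

```python
import numpy as np

# Conceptual sketch only: Apple has not published the internals of Semantic
# Rendering. Assume an upstream segmentation model has already produced one
# soft mask (values 0..1) per region of a portrait, e.g. "skin" and "hair".
def render_by_segment(image, masks, adjustments):
    """Blend a differently tuned version of the image into each region.

    image:       float32 array, shape (H, W, 3), values in 0..1
    masks:       dict of region name -> float32 mask, shape (H, W)
    adjustments: dict of region name -> function applied to that region
    """
    result = image.copy()
    for region, mask in masks.items():
        tuned = adjustments[region](image)      # e.g. soften skin, sharpen hair
        weight = mask[..., np.newaxis]          # broadcast mask over RGB channels
        result = weight * tuned + (1.0 - weight) * result
    return np.clip(result, 0.0, 1.0)

# Hypothetical per-region tweaks: lift skin slightly, add contrast to hair.
example_adjustments = {
    "skin": lambda img: np.clip(img * 1.05, 0.0, 1.0),
    "hair": lambda img: np.clip((img - 0.5) * 1.15 + 0.5, 0.0, 1.0),
}
```

The important point is the blending step: each region gets its own version of the photo, weighted by how confident the model is that a pixel belongs to that region.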

So, what’s the difference?

Normally, a camera has no idea what it’s aiming at. With new developments in software, cameras are beginning to “see” like humans. In other words, cameras with AI are beginning to understand what they are seeing.

With regular High Dynamic Range (HDR) processing, cameras simply apply one set of adjustments across the whole image. Highlights are lowered, shadows are raised, and perhaps midrange contrast is enhanced. But the camera does not understand that one area of the image might need different treatment to another.
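As a rough illustration (not Apple’s actual Smart HDR code), a global tone map is just one curve applied to every pixel, with no idea of what those pixels show:

```python
import numpy as np

def global_tone_map(image, shadow_lift=0.15, highlight_cut=0.15):
    """Apply one tone curve to every pixel, regardless of content.

    image is a float32 array with values in 0..1. The curve lifts the
    darkest tones and compresses the brightest ones, but it has no notion
    of "sky" or "face" -- which is exactly the limitation described above.
    """
    shadows = shadow_lift * (1.0 - image) ** 2     # affects mostly dark pixels
    highlights = highlight_cut * image ** 2        # affects mostly bright pixels
    return np.clip(image + shadows - highlights, 0.0, 1.0)
```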

This is good for landscapes. For portraits, not so good.

However, with Semantic Rendering, the iPhone 11 applies local (rather than global) adjustments. It can pull a sky’s brightness down to maintain color and detail, while reducing highlights on a face far less, preserving depth in the subject. It can also sharpen skin and hair at different strengths.
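Again as a sketch rather than Apple’s implementation, “local” simply means weighting those corrections by soft masks, so the sky and the face receive different treatment:

```python
import numpy as np

def local_tone_map(image, sky_mask, face_mask):
    """Apply the same kind of corrections at different strengths per region.

    sky_mask and face_mask are soft 0..1 masks from a (hypothetical)
    segmentation step; the strengths below are illustrative guesses,
    not Apple's actual parameters.
    """
    sky = sky_mask[..., np.newaxis]
    face = face_mask[..., np.newaxis]
    result = image - sky * 0.25 * image ** 2       # pull sky highlights down hard
    result = result - face * 0.08 * result ** 2    # ease face highlights only gently
    return np.clip(result, 0.0, 1.0)
```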

Normally, this kind of work would be done by hand using programs like Adobe Photoshop. But with the iPhone 11, these enhancements are applied instantly.

Just for portraits

It’s probably worth noting that Semantic Rendering is currently only used for portraits; when shooting landscapes, regular HDR is used. Having said that, it’s not limited to Portrait mode: the software recognises any human within the frame and kicks into action.

The great thing about computational photography is how it compensates for the limitations of small lenses and sensors. With Semantic Rendering software, smartphone cameras can surpass their physical limitations.

While Apple’s Semantic Rendering is part of the next evolution of these technologies, Google has already been using similar machine learning to power the camera in its Pixel smartphones.

I think this is what separates smartphone cameras from DSLRs, mirrorless models and other bigger, more expensive cameras: they are faster and easier to use. That said, one wonders when DSLR and other camera manufacturers will start to employ AI software in their own products.

Because, right now, smartphones are creeping ever closer to (and in some respects surpassing) traditional DSLRs. Meanwhile, camera manufacturers appear to be caught in the smartphone headlights.

One problem is that those larger sensors would need far more computing power to run AI, which could slow down your photography and cause overheating. So, while we are wishing for larger sensors in our smartphones, maybe we should be careful what we wish for…
