Why Samsung’s Live Focus Fake Video Bokeh Doesn’t Work
Samsung recently took a jab at Apple with a video showing off their fake bokeh mode, or “Live Focus” as they call it. The iPhone 11 (all models) only has fake bokeh for stills and has yet to introduce it for video. So Samsung were keen to point this out, hoping to pick up some extra sales.
Only one problem – it’s really bad. Smartphone makers often present their new models with various gimmicks to try to edge out the competition. Hey, I just discovered Samsung’s AR Emoji on my S9 so I can now create animated emoji GIFs of myself.
Whoop. Not sure if AR Emoji would steer me towards buying a Samsung, though. But at least it works pretty well.
However, Samsung shipping Live Focus for video… and even boasting about it… before it really works looks more like an act of desperation.
How is it supposed to work?
The idea is that AI software in the camera works out what the main subject of the video is supposed to be. Apple’s iPhone AI does this too, but for the purposes of adding extra detail and dynamic range. For fake blurry backgrounds, however, the AI has to draw a mask around the subject, and everything outside the mask has a blur effect added to it.
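To make that concrete, here’s a minimal sketch of the masking-and-blurring step in Python with OpenCV. This is not Samsung’s actual pipeline; the frame.png and mask.png filenames are hypothetical, and I’m assuming a segmentation model has already produced the subject mask:

```python
# Minimal sketch of mask-based fake bokeh.
# Assumes a segmentation model has already written a subject mask
# (white = subject) to mask.png -- both filenames are hypothetical.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                       # original image
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # white = subject

# Blur the whole frame, then composite: subject pixels come from the
# sharp original, background pixels from the blurred copy.
blurred = cv2.GaussianBlur(frame, (21, 21), 0)
alpha = (mask.astype(np.float32) / 255.0)[..., None]  # (H, W, 1) weights
composite = (alpha * frame + (1.0 - alpha) * blurred).astype(np.uint8)

cv2.imwrite("fake_bokeh.png", composite)
```

The composite only looks convincing if the mask is accurate, which is exactly where the trouble starts.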
Now, we’ve all got used to this effect being applied to our selfie photos. And for that purpose it works pretty well. It usually takes a moment for the AI to work out where to mask, and then you snap the pic.
This is why it doesn’t work for video
For video, the AI simply isn’t fast enough to keep up with any movement. I was watching a video on the YouTube channel Camera Conspiracies where he demonstrated the problem. While he remained motionless, Samsung’s Live Focus worked OK. But as soon as he moved, the mask lost track of his position and shape, so the blur spread across his face and body.
You can see from this frame grab how the blur spills over into the wrong places, so it looks like the lens is smudged with grease. When you see this in motion, it looks even worse as you become aware of the mask shifting around.
If only it were true
If you’ve done much rotoscoping (as I have), after a couple of days’ tedious, back-pain-inducing work, you begin to pray for a machine that will do it for you. Because even doing this work by hand, going frame by frame, doesn’t always come out well.
The AI used in smartphones to create fake bokeh is similar to the Roto Brush tool in Adobe After Effects. But even that takes some time to instruct, telling the software what you want to mask. Then you skip ahead 10 frames to find the AI has glitched out and needs fixing.
Samsung’s S10 AI has to guess from scratch, for every single frame. So it’s no wonder the mask goes crazy, picking out the wrong objects. Camera Conspiracies demonstrates this perfectly here:
Will it ever work?
Is this software actually getting any better at the job? I’ve been using Roto Brush for about five years and it doesn’t seem to have improved. The problem is that all it has to go on is 2D shapes and colours. If there’s clear definition between the subject and the background, it’s fine. But once similar colours cross over each other, the auto mask struggles.
The other thing is that video noise adds to the AI’s problems. And as we know, smartphone cameras are among the worst culprits for producing it.
The only way this AI fake bokeh can work in the future is if the camera can create a 3D depth map of each frame. Then it can calculate how much blur each pixel needs, and even let you choose and place a different focal depth later. I remember a cinema camera being developed a couple of years ago which could do this, but Google is failing me right now.
Anyway, this is not an easy thing to do, and doing it live while shooting video would require a lot of processing power.
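If you’re curious what that depth-based approach might look like, here’s a rough sketch, assuming the camera could hand us a per-pixel depth map (the depth.png file here is hypothetical; a real device would get this from stereo cameras or a time-of-flight sensor). Blur strength grows with each pixel’s distance from a chosen focal depth:

```python
# Rough sketch of depth-driven fake bokeh.
# Assumes a hypothetical depth.png where 0 = near, 255 = far.
import cv2
import numpy as np

frame = cv2.imread("frame.png")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

focal_depth = 0.3  # the depth we want sharp (chosen for illustration)

# Precompute a few blur levels: sharp, mild, medium, strong.
levels = [frame] + [cv2.GaussianBlur(frame, (0, 0), s) for s in (2.0, 5.0, 9.0)]

# Each pixel picks a blur level based on its distance from the focal plane.
error = np.abs(depth - focal_depth) / max(focal_depth, 1.0 - focal_depth)
idx = np.minimum((error * len(levels)).astype(int), len(levels) - 1)

out = np.zeros_like(frame)
for i, layer in enumerate(levels):
    out[idx == i] = layer[idx == i]

cv2.imwrite("refocused.png", out)
```

Even this crude version has to blur the frame several times over; doing it properly, per pixel, at 30 frames per second is exactly the processing burden I mean.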
How to make it look OK
One thing I learned from months of rotoscoping work is that the lighter the effect you apply to the unmasked part of the image, the less noticeable an inaccurate mask becomes. For example, if I added 2 or 3 pixels of blur to the background, I could get away with a rough mask. But if I wanted to add a very deep blur and perhaps a colour grade, the mask edges really showed up.
With Samsung’s Live Focus you can adjust the effect strength. So my tip is to keep the effect at a lower setting, and you might just get away with some shots.
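For the DIY-minded, the same trick can be sketched in code: feather the mask edge and keep the background blur small, so a mask that’s off by a few pixels disappears into the softness. Again, the filenames are hypothetical and this is just an illustration of the principle:

```python
# Sketch of the "keep the blur subtle" tip: a feathered mask edge
# plus a small blur radius hides mask errors of a few pixels.
# Uses the same hypothetical frame.png / mask.png as before.
import cv2
import numpy as np

frame = cv2.imread("frame.png")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Feather the hard mask edge so the transition is gradual.
soft = cv2.GaussianBlur(mask, (15, 15), 0).astype(np.float32) / 255.0

# Only a gentle background blur -- roughly the 2-3 pixel level
# I could get away with when rotoscoping by hand.
background = cv2.GaussianBlur(frame, (7, 7), 0)

alpha = soft[..., None]
out = (alpha * frame + (1.0 - alpha) * background).astype(np.uint8)

cv2.imwrite("subtle_bokeh.png", out)
```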
The Holy Grail of Shallow Depth of Field
One of the areas where DSLRs, with their proper lenses, still have the edge over smartphone cameras is their ability to capture shallow depth of field. It will be a huge leap for smartphone videography if AI is ever developed that can fake this convincingly, without the need for DoF adapters.
But it’s not just about blurry backgrounds. Having long lenses means you can do nice focus pulls, which are key to modern film language. You can still do focus pulls with a smartphone, but you’ll need manual control and you’ll need to be pretty close to the subject.
Ultimately, you can create beautiful video images with your smartphone without shallow depth of field or fake bokeh. Every year, we at Mobile Motion Film Festival receive many stunning-looking films, and they’re getting better and better. What I’m noticing is that lighting and framing are key to creating many of these images.
You might have to work a bit harder. But in the end that work will stand out from those who simply use shallow depth of field as an easy cinematography trick.
Eager to learn more?
Join our weekly newsletter featuring inspiring stories, no-budget filmmaking tips and comprehensive equipment reviews to help you turn your film projects into reality!
Simon Horrocks
Simon Horrocks is a screenwriter & filmmaker. His debut feature THIRD CONTACT was shot on a consumer camcorder and premiered at the BFI IMAX in 2013. His shot-on-smartphones sci-fi series SILENT EYE featured on Amazon Prime. He now runs a popular Patreon page which offers online courses for beginners, customised tips and more: www.patreon.com/SilentEye