Depth Of Field Explained

What is depth of field, portrait mode, or the bokeh effect?

A camera lens can only focus at a single point, but there is an area stretching in front of and behind this point where objects remain sharp. This area of sharpness is known as depth of field, and it can be described as shallow (where only a narrow zone around the focus point is in focus) or deep (where a wider zone around the focus point is in focus).

A DSLR can create a very natural, lifelike depth of field with a single lens thanks to its large sensor and big lenses. Small smartphone cameras cannot, so they use other means to achieve this shallow depth of field. But we will come to that later.

On a DSLR camera, the depth of field can be adjusted by widening the aperture and reducing the focusing distance. This creates a shallower depth of field, which in turn keeps only a small part of the image sharp and blurs the majority of it, because the sharp zone around the focus point is smaller.

The aperture refers to the size of the hole that light passes through to reach the sensor. Wide apertures correspond to smaller f-stop numbers, so f/1.6 is a wide aperture that lets more light in, while f/22 is a small aperture that lets less light reach the sensor.
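
To make that relationship concrete, here is a rough Python sketch. The f-number is simply the focal length divided by the diameter of the opening; the 50 mm lens is just an example, not taken from any specific camera:

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Diameter of the aperture opening for a given focal length and f-stop."""
    return focal_length_mm / f_number

for f_stop in (1.6, 2.8, 8, 22):
    d = aperture_diameter_mm(50, f_stop)  # a 50 mm lens as an example
    print(f"f/{f_stop}: aperture diameter = {d:.1f} mm")

# f/1.6 opens to ~31 mm while f/22 opens to ~2.3 mm, which is why
# far less light reaches the sensor at f/22.
```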

Focusing distance also plays a role, because a wide aperture offers considerably more depth of field when focused on a subject far away than when focused on a subject close to the lens. So changing the focusing distance lets you control what you want in focus, especially when taking close-up shots. The focal length of the lens also appears to have a significant impact on depth of field, with longer lenses producing much more blur.
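
Here is a rough Python sketch of the standard depth-of-field formulas, so you can see how the aperture and focusing distance levers interact numerically. The 0.03 mm circle of confusion is an assumed full-frame value and the numbers are purely illustrative:

```python
def dof_limits(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Return the (near, far) limits of acceptable sharpness in mm."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = (focus_dist_mm * (hyperfocal - focal_mm)
            / (hyperfocal + focus_dist_mm - 2 * focal_mm))
    if focus_dist_mm >= hyperfocal:
        return near, float("inf")  # everything beyond the near limit stays sharp
    far = (focus_dist_mm * (hyperfocal - focal_mm)
           / (hyperfocal - focus_dist_mm))
    return near, far

# A 50 mm lens focused at 2 m: opening up from f/8 to f/1.8
# shrinks the zone of sharpness from roughly 0.8 m to under 0.2 m.
for f_stop in (8, 1.8):
    near, far = dof_limits(50, f_stop, 2000)
    print(f"f/{f_stop}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")
```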

The reason longer lenses seem to produce a shallower depth of field (a larger part of the image is blurred out and a smaller part is in focus) is their narrow angle of view: compared to a wide lens, a telephoto fills the frame with a much smaller area of background, so any blur appears magnified too.

The larger the sensor, the shallower the depth of field will be at any given aperture. This is because you need to use a longer focal length, or be physically closer to the subject, to achieve the same image size as you would get with a camera that has a smaller sensor.
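
As a rough illustration, here is how sensor size translates the framing requirement into focal length. The crop factors are typical published values, used here as assumptions:

```python
# Crop factors are typical published values, used here as assumptions.
CROP_FACTORS = {
    "full frame": 1.0,
    "APS-C": 1.5,
    "1-inch": 2.7,
    "typical phone sensor": 6.0,
}

full_frame_focal = 50.0  # mm: the framing we want to match
for sensor, crop in CROP_FACTORS.items():
    needed = full_frame_focal / crop
    print(f"{sensor}: a {needed:.1f} mm lens gives the same framing")
```

Feeding these much shorter focal lengths into the depth-of-field formulas above (at the same f-stop and framing) shows why a phone's zone of sharpness comes out so much deeper.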

Shifting from a large aperture to a smaller one can produce less sharp photos overall. Because the aperture is small, less light enters, so the shutter has to stay open longer than normal to let more light in, compensating for the small aperture size and keeping the image bright enough. The length of time the shutter stays open is called the shutter speed. A small aperture therefore forces a slower shutter speed, which is not suitable for shooting fast-moving objects if you want a sharp image.
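
This trade-off follows from the fact that the light admitted is proportional to the aperture area, and so to 1/N². A quick sketch, with illustrative numbers:

```python
def equivalent_shutter(base_shutter_s: float, base_f: float, new_f: float) -> float:
    """Shutter time that keeps exposure constant after changing the f-stop."""
    # Light gathered scales with aperture area, i.e. with 1 / N^2,
    # so the shutter time must scale with (new_f / base_f) ** 2.
    return base_shutter_s * (new_f / base_f) ** 2

t = equivalent_shutter(1 / 500, base_f=2.0, new_f=8.0)
print(f"1/500 s at f/2 becomes ~1/{round(1 / t)} s at f/8")  # ~1/31 s, 16x slower
```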

[Image: depth of field illustration]

The areas in green are within the depth-of-field zone. Those in blue are just slightly outside that zone, so the blur is minimal, while the rest of the image is far from the zone of focus and the blur is at its maximum.

Given that a smartphone is more limited than a DSLR in aperture size, sensor size, and focusing distance, how is it still able to take photos with a bokeh effect?

They do this by using a 2-camera rig.

Most smartphones come with a dual rear camera, with a few even having 3 rear cameras, such as the LG V40 ThinQ. The secondary camera is mostly used to process depth. It creates a 3D map of the scene and works like our eyes.

We can see depth because our eyes view the world from slightly different perspectives, which helps us perceive depth. In the same way, the system can roughly tell how far the objects in front of it are with respect to each other. This information is then used to separate the foreground from the background, and a blur is applied to the background.
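
At its core this is classic stereo triangulation: a point shifts between the two views by a "disparity", and depth falls out of similar triangles as baseline x focal length / disparity. A toy sketch, with a made-up baseline, focal length, and disparities:

```python
def depth_from_disparity(baseline_mm: float, focal_px: float,
                         disparity_px: float) -> float:
    """Depth in mm of a point seen with the given pixel disparity."""
    return baseline_mm * focal_px / disparity_px

# A nearby subject shifts more between the two views than a distant wall.
for label, disparity in (("subject", 40.0), ("background wall", 4.0)):
    z = depth_from_disparity(baseline_mm=10, focal_px=1500, disparity_px=disparity)
    print(f"{label}: ~{z / 1000:.2f} m away")

# Doing this for every pixel yields a depth map, which is then used
# to split the foreground from the background.
```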

This system is not perfect, and it sometimes causes the edges of the in-focus subject to be blurred too. It also doesn't create a realistic blur, because the entire background gets the same level of blur, whilst with a DSLR the intensity of the blur increases with distance from the focus point.

The most common method now is using a telephoto camera as the secondary camera. This is the opposite of a wide-angle camera: its field of view is narrow.

The main camera is paired with a telephoto camera, usually with a 2x factor. This means the secondary lens has twice the focal length of the primary lens, giving you an instant 2x optical zoom.

This system is advantageous because the 2x optical zoom gets you twice as close to your subject. Zooming on smartphones had largely been digital until now, but with this you can quickly move 2x closer to your subject with very little quality loss. Any further zooming is still done digitally, but because the digital zoom is now applied on top of the 2x optical zoom, it gives much better results.

Telephoto lenses also have less lens distortion, which makes them good for shooting portraits.

To create the depth-of-field effect on a camera rig with a telephoto lens, the main camera now acts as the depth sensor, and in combination with the telephoto lens it gives far better depth-of-field results.

The main disadvantage of the telephoto lens is that it is more limited in capabilities, such as aperture size and OIS, than the main lens. The Note 8 was the first phone to have OIS on the telephoto lens. Shooting with a telephoto lens in low light is still a problem because of the limited aperture size.

But as time goes on, these limitations and issues will be resolved.


How does the Google Pixel, with just a single rear camera, still create depth of field?

Portrait mode had actually been around on Android smartphones with single cameras for a while, but it only got publicized when Apple added it to the iPhone with the release of the iPhone 7 Plus.

Usually, manufacturers use two lenses to estimate the distance of every point and blur the pixels that form the background. The foreground becomes the sole area in focus.

Since the Pixel 2 and Pixel 2 XL, the Google Pixel has been able to create portrait mode with a single camera.

So how do the Pixel phones, having just one camera, give you that DSLR-like shallow depth-of-field effect?

For the front-facing camera, it's pure segmentation. Though the Pixel 3 has 2 front cameras, the second is a wide-angle camera and is not used for portrait mode. Using machine learning, the phone creates a segmentation mask, which is in essence a silhouette: a black-and-white image with the subject in focus shown in white and the rest of the image in pitch black. Once it creates this segmentation mask, it applies a uniform blur to the background (the pitch-black areas).
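
Here is a minimal sketch of that idea using NumPy and SciPy. A made-up circular mask stands in for the learned segmentation mask, and random noise stands in for the photo:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 240, 320
image = np.random.rand(h, w, 3)  # stand-in photo

# Fake segmentation mask: 1.0 on the "subject", 0.0 on the background.
yy, xx = np.mgrid[0:h, 0:w]
mask = (((yy - h / 2) ** 2 + (xx - w / 2) ** 2) < 60 ** 2).astype(float)

# Uniformly blur the whole frame, then keep the sharp pixels wherever the
# mask says "subject" -- every background pixel gets the same level of blur.
blurred = gaussian_filter(image, sigma=(6, 6, 0))
portrait = mask[..., None] * image + (1 - mask[..., None]) * blurred
```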

For the rear camera, the camera is a dual-pixel camera, which means every pixel is subdivided into 2 smaller pixels. Though dual-pixel technology was originally intended to increase focusing speed, Google took the tech one step further and used it to create portrait mode.

Google uses the dual-pixel configuration because each pixel is in essence split in 2, with the two halves looking through parts of the lens about 1 mm apart. This creates 2 slightly different perspectives, which enables the phone to compute stereo just like having 2 physical cameras.
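
To get a feel for how two such views become depth, here is a toy patch-matching sketch in Python. Real pipelines estimate sub-pixel shifts (with a ~1 mm baseline the disparities are tiny) and are far more robust; this just shows the principle:

```python
import numpy as np

def patch_disparity(left_row: np.ndarray, right_row: np.ndarray,
                    x: int, patch: int = 8, max_shift: int = 4) -> int:
    """Best integer shift (in px) matching a patch at column x."""
    ref = left_row[x:x + patch]
    # Slide the patch across the other view and keep the shift with the
    # smallest squared difference.
    errors = [np.sum((right_row[x - s:x - s + patch] - ref) ** 2)
              for s in range(max_shift + 1)]
    return int(np.argmin(errors))

# Synthetic rows where the "right" view is the left view shifted by 2 px.
left = np.sin(np.linspace(0, 20, 200))
right = np.roll(left, -2)
print(patch_disparity(left, right, x=50))  # -> 2
```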

The rear camera uses both stereo and segmentation. First, the camera takes a photo with HDR+ and then uses machine learning to identify the main subjects and create the segmentation mask.

Then, using the dual-pixel technology, it is able to compute depth as if it had 2 physical cameras and create a depth map of the scene, which it uses to blur the image in proportion to how far objects are from the depth-of-field zone.

[Image: depth of field]

Adding its image processing to the mix, it is able to produce a depth map of the scene in front of the camera, which it then uses to create the depth-of-field effect.

Thanks to dual-pixel technology, it can blur the background in proportion to how far each object is from the depth-of-field zone: objects further behind or in front of the zone are blurred more than objects closer to it, where things remain in focus, just as a real DSLR and the human eye would do.
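
A simple way to sketch this depth-proportional blur, assuming the depth map and focus distance are already known (all values here are made up for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 120, 160
image = np.random.rand(h, w)                        # stand-in photo
depth = np.tile(np.linspace(1.0, 5.0, w), (h, 1))   # fake depth map, in metres
focus_dist = 2.0                                    # the plane we keep sharp

# Precompute a few blur levels, then pick one per pixel based on how far
# that pixel's depth is from the in-focus plane.
sigmas = [0.0, 1.0, 2.0, 4.0]
layers = [image if s == 0 else gaussian_filter(image, s) for s in sigmas]
level = np.clip((np.abs(depth - focus_dist) * 1.5).astype(int),
                0, len(sigmas) - 1)
result = np.choose(level, layers)
```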