Despite everything going on with supply chains, lockdowns and chip shortages right now, Apple is by all accounts on track to launch the iPhone 14 and 14 Pro in September this year, just like clockwork. And it looks like we’re getting big changes to selfies.

Step away from the keys that spell “Clickbait” and I’ll explain.

Until now, iPhone has featured autofocus (AF) cameras for the main shooters on the back. The selfie camera, however, has had a fixed focus distance, because of the tiny space it has to fit in and because Apple can assume the phone will be held at roughly arm’s length. That means your face is almost always in focus, wherever you are, for selfies. It also meant Apple tuned the lens so that basically everything was pretty much in focus, webcam-style, so that being just the wrong distance away didn’t ruin things.
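To put some very rough numbers on that “everything is pretty much in focus” behaviour, here’s a minimal sketch of the standard hyperfocal-distance formula. The focal length, aperture and circle of confusion below are illustrative guesses for a small front-facing camera module, not Apple’s published specs.

```swift
import Foundation

/// Hyperfocal distance: focus here and everything from roughly half this
/// distance out to infinity is acceptably sharp.
/// H = f² / (N · c) + f, with all lengths in millimetres.
func hyperfocalDistance(focalLength f: Double,
                        fNumber n: Double,
                        circleOfConfusion c: Double) -> Double {
    (f * f) / (n * c) + f
}

// Illustrative guesses for a tiny selfie-camera module (not Apple's specs):
// ~2.7 mm focal length, f/2.2, and a ~0.003 mm circle of confusion.
let h = hyperfocalDistance(focalLength: 2.7, fNumber: 2.2, circleOfConfusion: 0.003)
print("Hyperfocal distance ≈ \(h / 1000) m")           // ≈ 1.1 m
print("Acceptably sharp from ≈ \(h / 2000) m onwards") // ≈ 0.55 m to infinity
```

With numbers in that ballpark, anything from about half a metre out to infinity stays acceptably sharp, which is why a fixed-focus selfie camera gets away with never focusing at all.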

This year, however, according to Ming-Chi Kuo, Apple is moving to a shallower depth of field in the selfie, or FaceTime, camera. You may have heard of shallow depth of field: it’s basically what gives photos that cool blurry background while keeping the subject perfectly in focus. It looks great.
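Continuing the back-of-the-envelope numbers from above, here’s what happens to the in-focus band once the lens can actually focus close and open up a bit wider. The ƒ/1.9 aperture and 0.45 m focus distance here are illustrative guesses, not confirmed specs.

```swift
import Foundation

/// Approximate near and far limits of acceptable sharpness when focused at
/// distance s, using the standard thin-lens depth-of-field formulas
/// (focal length is ignored next to the focus distance to keep things simple;
/// valid while s is less than the hyperfocal distance).
func depthOfField(focalLength f: Double, fNumber n: Double,
                  circleOfConfusion c: Double, focusDistance s: Double)
    -> (near: Double, far: Double) {
    let h = (f * f) / (n * c)                 // hyperfocal distance, in mm
    return (near: h * s / (h + s), far: h * s / (h - s))
}

// Illustrative guess: the same ~2.7 mm lens, but at f/1.9 and focused at ~0.45 m.
let dof = depthOfField(focalLength: 2.7, fNumber: 1.9,
                       circleOfConfusion: 0.003, focusDistance: 450)
print("Sharp roughly from \(dof.near / 1000) m to \(dof.far / 1000) m")
// ≈ 0.33 m to 0.69 m: your face is sharp, a background a couple of metres away is genuinely soft.
```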

It looks so good that Apple used the depth-mapping capabilities of the TrueDepth camera system that arrived with Face ID to offer Portrait mode selfies. Portrait mode was originally introduced with the iPhone 7 Plus, the first iPhone with dual rear cameras: the 7 Plus used the stereoscopic data from the two cameras to build a depth map and artificially add blur to areas further from the camera. It was imperfect, but it was revolutionary nonetheless. Sometimes the edges wouldn’t be detected accurately, and anything with transparency would really mess with the system, but for faces it was surprisingly good from day one.
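For a rough sense of how two side-by-side cameras give you distance, here’s a toy version of the usual stereo relation: depth is inversely proportional to how far a feature shifts between the two views. The baseline and focal length below are placeholder numbers for illustration, not the iPhone 7 Plus’s actual geometry.

```swift
import Foundation

/// Classic pinhole stereo relation: depth = (baseline × focal length in pixels) / disparity.
/// A feature that shifts a lot between the two views is close; one that barely
/// shifts is far away. Apply this per pixel and you get a coarse depth map.
func depthInMetres(disparityPixels: Double,
                   baselineMetres: Double,
                   focalLengthPixels: Double) -> Double? {
    guard disparityPixels > 0 else { return nil } // zero disparity: effectively at infinity
    return (baselineMetres * focalLengthPixels) / disparityPixels
}

// Placeholder numbers purely for illustration:
// a ~1 cm baseline between the lenses and a ~1500 px focal length.
if let d = depthInMetres(disparityPixels: 30, baselineMetres: 0.01, focalLengthPixels: 1500) {
    print("Feature at roughly \(d) m")   // 0.5 m
}
```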

When the iPhone X brought this tech to the selfie camera the next year, it made use of the dot projector and infrared camera to build a more accurate depth map and got pretty good at faces. While the selfie camera itself in the iPhone 12 and 13 is a fixed-focus ƒ/2.2 lens, the iPhone’s processor uses the data it has about the distances to different parts of the image to artificially add lens blur, or bokeh, to the image.
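That “artificially add lens blur” step boils down to something like the sketch below: keep pixels near the subject’s depth sharp and blend everything else toward a blurred copy, with more blur the further a pixel sits from the focus plane. This is a bare-bones illustration on plain arrays, not Apple’s actual pipeline, which works on full images with far more sophisticated blur kernels.

```swift
import Foundation

/// Blend a sharp image toward a pre-blurred copy, pixel by pixel, based on how
/// far each pixel's depth is from the chosen focus depth. Values are greyscale
/// intensities in 0...1; `depth` holds metres per pixel.
func syntheticBokeh(sharp: [Float], blurred: [Float], depth: [Float],
                    focusDepth: Float, falloff: Float) -> [Float] {
    precondition(sharp.count == blurred.count && sharp.count == depth.count)
    return (0..<sharp.count).map { i in
        // 0 at the focus plane, approaching 1 as the depth difference grows.
        let blurAmount = min(abs(depth[i] - focusDepth) / falloff, 1.0)
        return sharp[i] * (1 - blurAmount) + blurred[i] * blurAmount
    }
}

// Tiny 4-pixel example: subject at 0.5 m, background at 3 m.
let sharp: [Float]   = [0.9, 0.8, 0.2, 0.1]
let blurred: [Float] = [0.6, 0.6, 0.4, 0.4]
let depth: [Float]   = [0.5, 0.5, 3.0, 3.0]
print(syntheticBokeh(sharp: sharp, blurred: blurred, depth: depth,
                     focusDepth: 0.5, falloff: 1.0))
// Subject pixels keep their sharp values; background pixels take the blurred ones.
```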

It’s still imperfect. Hair, for example, is a challenge: the dot projector is projecting just that, dots, rather than capturing the natural blur that a lens with a shallower depth of field would see, so you often get a sharp background showing through semi-transparent areas while the rest of the background, right up to the edge, is blurred.

I know, I know, this is a lot of background to say that the images from the iPhone 14’s selfie camera will most likely look more natural, with real bokeh on backgrounds and sharper focus on faces.

For years, phone cameras have been getting closer and closer to what you can do with a real large-sensor camera, often through computational photography: taking multiple images and combining them to simulate things like better dynamic range (there’s a toy sketch of that frame-merging idea below), or using depth data to simulate depth of field. But this is a little different. Just like when Apple added sensor-shift image stabilisation to the main cameras, where tiny motors move the sensor to compensate for your weak, shaky human hands, this is an improvement that gives the computer better data straight from the sensor to work with. As good as computational photography gets, starting with better data to process will always give better results, so whenever Apple can squeeze in a better lens or a more capable sensor, it’s always going to help.

So, if you want blurry, AF selfies, the iPhone 14 may well be the iPhone for you.
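As promised, here’s the simplest possible version of that “take multiple images and combine them” trick: merge a bracketed set of exposures, weighting each frame’s pixels by how far they sit from clipping. Real HDR pipelines are vastly more involved; this is only meant to show the shape of the idea.

```swift
import Foundation

/// Naive exposure fusion: average several frames of the same scene, weighting
/// each pixel by how far it is from pure black or pure white, so blown-out or
/// crushed pixels contribute less. Frames are greyscale arrays in 0...1.
func fuseExposures(_ frames: [[Float]]) -> [Float] {
    guard let first = frames.first else { return [] }
    return (0..<first.count).map { i in
        var weightedSum: Float = 0
        var weightTotal: Float = 0
        for frame in frames {
            let v = frame[i]
            let weight = max(1 - abs(v - 0.5) * 2, 0.05) // well-exposed pixels count most
            weightedSum += v * weight
            weightTotal += weight
        }
        return weightedSum / weightTotal
    }
}

// Three bracketed "frames" of the same 3-pixel scene: under-, normally and over-exposed.
let under: [Float]  = [0.05, 0.20, 0.45]
let normal: [Float] = [0.20, 0.50, 0.95]
let over: [Float]   = [0.60, 0.90, 1.00]
print(fuseExposures([under, normal, over]))
```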

Thanks so much for watching, and we’ll be answering your questions again soon, so leave your iCaveAnswers down in the comments, and send me your Apple desk setups over on Twitter to be featured on the show in the future. Thanks to the Patrons; join them at iCaveDave.com/Patreon. See you in the next one.

Sources:

https://appleinsider.com/inside/iphone-14-pro