We are starting a new series called Deep Look, where we reveal the technology inside gadgets that always manage to catch everyone's eye. For our first topic, we chose the technology people are most interested in today: smartphone cameras.
Smartphone cameras have improved steadily year by year, thanks to the engineers building those phones, and one name you can easily recognize today for the best smartphone camera is the Google Pixel. The Google Pixel is the best camera smartphone you can buy today, thanks to its killer camera, but its photo smarts go beyond the specs. Like many of Google's greatest hits, the magic is in the algorithms.
Image courtesy: google.com
So, what exactly happens inside a Google Pixel after you hit the shutter button? Let's take a Deep Look!
According to independent analysis from DxOMark, the Pixel devices feature the best smartphone cameras yet, despite having specs that, while competitive, don't scream "best camera ever." The secret to this camera prowess lies deep inside software Google calls HDR+. HDR+ does more than just provide a greater amount of dynamic range. Instead of taking a single photo, it takes several photos of the subject in rapid sequence, analyzes their attributes, works some tone-mapping magic and produces a single optimized image. This all happens in the background, so you don't even have to think about it.

In an article for the Graduate Series exploring the tech that comes out of the X division, Google says the camera tech inside Pixel smartphones was actually intended for Google Glass. The tech, named GCam, first surfaced in 2011 when researchers began looking for a high-resolution camera that could fit inside a pair of eyeglass frames for Google Glass. Since adding a giant camera to the side of the glasses was out of the question, the team instead turned to computational photography.
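To make that capture, merge and tone-map sequence concrete, here is a minimal Python sketch of the general idea. The `capture_frame` callable, the frame count and the gamma value are placeholders of mine, not Google's pipeline, which also aligns frames and uses far more sophisticated merging and tone mapping.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of frames; random sensor noise partially cancels out.
    (Real HDR+ also aligns the frames and rejects moving objects first.)"""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

def tone_map(image, gamma=0.6):
    """Compress the merged result into a displayable range with a simple
    gamma curve that lifts shadows more than highlights."""
    normalized = image / max(image.max(), 1e-6)
    return (normalized ** gamma * 255).astype(np.uint8)

def hdr_plus_like(capture_frame, n_frames=8):
    """Capture several frames in rapid sequence, merge them, tone-map once."""
    frames = [capture_frame() for _ in range(n_frames)]
    return tone_map(merge_burst(frames))
```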
So, what is this computational photography that everyone is so crazy about?
Image courtesy: wikipedia.com
It is digital processing that gets more out of your camera hardware, for example by improving color and lighting while pulling details out of the dark. That's really important given the limitations of the tiny image sensors and lenses in our phones, and the increasingly central role those cameras play in our lives. Google isn't the only one using computational photography to improve camera shots; Apple does it too. Apple's marketing chief Phil Schiller boasted that the iPhone 11's new computational photography abilities are "mad science."
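A tiny taste of what "improving color" can mean in practice: the classic gray-world white balance removes a color cast with nothing more than arithmetic on pixel values. This is a textbook technique, not a claim about what any particular phone ships.

```python
import numpy as np

def gray_world_white_balance(image):
    """Scale each color channel so its average matches the overall gray level,
    removing a uniform color cast (assumes an 8-bit H x W x 3 RGB array)."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    balanced = img * (gray / channel_means)
    return np.clip(balanced, 0, 255).astype(np.uint8)
```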
Shrinking a camera doesn't just create resolution problems: a smaller lens captures less light, which makes it struggle with both bright and dim areas in a scene. Google's HDR+ tackles this by not just capturing and combining shots taken at dark, ordinary and bright exposures; it captures a large number of dark, underexposed frames as well. Artfully stacking all these shots together builds up to the correct exposure and cuts down the color noise that can otherwise mess up an image. Apple embraced the same idea with the iPhone XS generation, calling it Smart HDR.
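The payoff of stacking dark frames is easy to demonstrate with synthetic numbers: averaging N noisy frames cuts random noise by roughly the square root of N, so many underexposed shots can be brightened without the grain blowing up. The scene, noise level and frame count below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat synthetic scene, deliberately underexposed at 1/8 of the target brightness.
true_scene = np.full((100, 100), 32.0)

# Each short frame is the scene plus random sensor noise.
frames = [true_scene + rng.normal(0, 8, true_scene.shape) for _ in range(8)]

single = frames[0] * 8                 # one underexposed frame, brightened 8x
stacked = np.mean(frames, axis=0) * 8  # eight frames averaged, then brightened 8x

print("noise in the single brightened frame:", (single - 256).std())   # about 64
print("noise in the stacked result:         ", (stacked - 256).std())  # about 64 / sqrt(8), roughly 23
```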
Another major computational photography technique is seeing in 3D. Apple uses dual cameras to see the world the way a pair of human eyes does, from two slightly different viewpoints. Google, however, uses only its main camera module, relying on image-sensor tricks and AI algorithms to figure out how far away the elements of a scene are. The biggest benefit of this trick is Portrait Mode, or the bokeh effect, a technique famous among SLR enthusiasts.
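To show how a depth estimate turns into that background blur, here is a crude single-channel sketch. The depth map could come from dual cameras, dual-pixel sensors or a learned model; the box-blur kernel, the depth tolerance and the function names are all my own placeholders, not how Portrait Mode is actually implemented.

```python
import numpy as np

def box_blur(image, size=9):
    """Very crude blur: average each pixel's size x size neighborhood
    (stands in for a proper bokeh-shaped blur kernel)."""
    pad = size // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

def portrait_blur(image, depth, subject_depth, tolerance=1.0):
    """Keep pixels near the subject's depth sharp and blur everything else."""
    background = np.abs(depth - subject_depth) > tolerance
    return np.where(background, box_blur(image), image.astype(np.float64))
```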
Image courtesy: google.com
The third most talked-about feature in Google Pixels has to be Night Sight, or Night Mode among Apple fans. These techniques let users take photos in low light without a flash. The picture comes out a lot brighter than on other smartphones, but it does require the user to hold the phone steady for a few seconds. It works just like HDR+, taking several shots and combining them, but this time with long exposures that allow more light to hit the camera's sensor. Granted, Night Sight photos have less detail and more noise than daylight shots, but they're a huge step forward compared to other smartphones.
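Holding the phone for several seconds means hand shake has to be undone before the frames are merged. The sketch below estimates a whole-pixel shift between frames with phase correlation and rolls each frame back into place before averaging; it assumes grayscale frames and purely translational shake, whereas the real pipeline also handles rotation, local motion and sub-pixel offsets.

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate the whole-pixel (dy, dx) roll that re-aligns `frame` with
    `reference`, using phase correlation."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the halfway point correspond to negative shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx

def align_and_merge(frames):
    """Align every frame to the first one, then average the burst."""
    reference = frames[0]
    aligned = [np.roll(f, estimate_shift(reference, f), axis=(0, 1)) for f in frames]
    return np.mean(aligned, axis=0)
```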
Another feature, which Google calls Super Res Zoom, benefits from computational photography as well. It grew out of "super resolution" research, which relied on improving a core digital camera process called demosaicing. An image sensor captures only one color, red, green or blue (RGB), at each pixel; demosaicing fills in the missing color data so every pixel has values for all three components. Computational photography also turned what is usually a flaw, the slight wobble of a user's hands while taking photos, into an asset: those tiny shifts give the camera slightly different views of the scene to combine, letting Pixel cameras zoom in digitally better than the usual methods. Google also added a technology called RAISR to squeeze out even more image quality: by training AI ahead of time on countless photos, it can use patterns spotted in other images to zoom in farther than the camera can physically.
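Here is what the demosaicing step looks like in its simplest form, assuming the common RGGB Bayer layout and plain neighbor averaging. Production demosaicing, let alone Super Res Zoom's multi-frame reconstruction, is far more elaborate.

```python
import numpy as np

def box_sum3(img):
    """Sum each pixel's 3 x 3 neighborhood (zero padding at the borders)."""
    padded = np.pad(img, 1)
    return sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3))

def demosaic_bilinear(mosaic):
    """Each pixel keeps its one measured color and gets the other two by
    averaging the nearest measured samples of those colors."""
    h, w = mosaic.shape
    mask = np.zeros((h, w, 3))
    mask[0::2, 0::2, 0] = 1        # red sites
    mask[0::2, 1::2, 1] = 1        # green sites on red rows
    mask[1::2, 0::2, 1] = 1        # green sites on blue rows
    mask[1::2, 1::2, 2] = 1        # blue sites

    rgb = mask * mosaic.astype(np.float64)[:, :, None]
    for c in range(3):
        counts = box_sum3(mask[:, :, c])
        filled = box_sum3(rgb[:, :, c]) / np.maximum(counts, 1)
        rgb[:, :, c] = np.where(mask[:, :, c] == 1, rgb[:, :, c], filled)
    return rgb
```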
The iPhone 11 launch also brought Apple's Deep Fusion, a more sophisticated variation of the same multi-photo approach for low and medium lighting conditions. It takes four pairs of images, four long exposures and four short ones, and then one longer-exposure shot. It finds the best combinations, analyzes the shots to figure out what kind of subject matter it should optimize for, and then merges all the frames together.
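Apple hasn't published how Deep Fusion decides which frames to lean on, but one small, concrete ingredient of any such merge is judging sharpness. A common trick is to score frames by the variance of their Laplacian and merge onto the sharpest one, roughly as below; this is an illustration of the general idea, not Apple's algorithm.

```python
import numpy as np

def sharpness(frame):
    """Score sharpness as the variance of a finite-difference Laplacian;
    a blurrier frame produces a flatter Laplacian and a lower score."""
    f = frame.astype(np.float64)
    lap = (-4.0 * f[1:-1, 1:-1]
           + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()

def pick_reference(frames):
    """Choose the sharpest frame of the burst as the base to merge onto."""
    return max(frames, key=sharpness)
```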
Image courtesy: apple.com
The computational photography behind the Google Glass camera and HDR+ is now inside the Google Pixel, as well as the Google Photos app, YouTube and Jump, Google's virtual reality capture platform. Computational photography is useful, but the limits of hardware and the laws of physics still matter. Stitching shots into panoramas and zooming digitally are all well and good, but smartphones with multiple cameras have a better foundation for computational photography.