
Analyzing the Google Pixel camera: why are the single-lens cameras of the Pixel 3 and Pixel 3a so strong?

Since the Pixel 2 generation, Google's Pixel phones have pushed the limits of single-lens imaging with a dedicated visual coprocessor and AI technology, and the subsequent Pixel 3 family still delivers excellent image quality under the siege of multi-lens flagships: its low-light and single-lens shooting stand out even against models with far more complex camera hardware. More recently, the cheaper Pixel 3a family applies the same AI photography technology, enough to punch above its class and challenge high-end models.

Software-defined: bringing innovation and breakthroughs to existing hardware. Google Taiwan invited Marc Levoy, a Distinguished Engineer at Google with a deep background in digital imaging and the VMware Founders Professor of Computer Science (Emeritus) at Stanford University, to explain to the Taiwanese media the key to the Pixel camera's excellence. He began by pointing out that the Pixel phone's imaging technology overturns the hardware-defined camera of the past and turns it into a software-defined camera.

The Pixel's approach breaks away from reliance on fixed-function hardware: it leverages computational photography and burst shooting, replaces traditional algorithms with efficient machine learning, and builds models on Google's massive data and machine-learning infrastructure, achieving remarkable computation on contemporary high-performance Pixel phones. At the same time, Google publishes rather than hides its imaging technology, in order to drive innovation and attract more talent.

Marc Levoy pointed out that a camera app on a phone must obey several basic principles: it has to run fast, the default mode must not fail, the special situations consumers encounter when shooting must be reproducible, and occasional failures are acceptable only in special modes. In terms of speed, the live viewfinder needs to run above 15 fps, shutter lag must stay below 150 ms, and imaging must complete in under 5 seconds.

Advanced HDR+ combining burst shooting and AI. Traditional HDR relies on the exposure bracketing long used in cameras: capturing frames at different exposure levels and merging them to produce an image with clear detail from shadows to highlights. However, because the frames must be aligned precisely, it is difficult to shoot a successful HDR image handheld, without a tripod, when the phone shakes.

The Pixel's HDR+ instead builds on a burst of identically underexposed frames: rather than merging photos taken at different exposures, it merges frames with the same exposure, so the images are more similar, easier to align, and yield a better signal-to-noise ratio with less shadow noise. Tone mapping is then used to strengthen shadows and rein in highlights, sacrificing some global tone and contrast but preserving local contrast. Through this conceptual breakthrough, the success rate and image quality of the Pixel's HDR+ shots have surpassed traditional HDR modes.
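The idea described above can be sketched in a few lines. This is a toy illustration, not Google's actual HDR+ pipeline: averaging N aligned, identically underexposed frames raises the signal-to-noise ratio roughly by a factor of sqrt(N), and a simple tone curve then lifts the shadows while compressing the highlights. The `shadow_gain` parameter and the tone curve are invented for illustration.

```python
import numpy as np

def hdr_plus_merge(frames, shadow_gain=4.0):
    """Toy HDR+-style merge: average same-exposure frames, then tone map.

    frames: list of aligned grayscale images in [0, 1], all underexposed.
    """
    stack = np.stack(frames).astype(np.float64)   # shape (N, H, W)
    merged = stack.mean(axis=0)                   # averaging N frames cuts noise ~sqrt(N)
    # Reinhard-like curve: maps 0 -> 0 and 1 -> 1, boosts dark values.
    tone_mapped = merged * shadow_gain / (1.0 + merged * (shadow_gain - 1.0))
    return np.clip(tone_mapped, 0.0, 1.0)
```

With a dark scene at brightness 0.1, the merged value of about 0.1 is lifted to roughly 0.31 by the curve, while values near 1.0 are left almost unchanged, which mirrors the "strengthen shadows, preserve highlights" trade-off described above.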

Single-lens portrait mode combining machine learning and dual-pixel technology. Because of the inherently small image sensor in a phone, a hardware-defined camera can hardly capture the shallow depth of field of a professional camera, so a computational portrait mode is the approach contemporary phones take. Early implementations used two lenses to capture two images from slightly different viewpoints, computed depth through stereo matching, chose a plane as the in-focus reference, and blurred everything beyond that plane; but this increases hardware complexity and requires processing data from two lenses.

The Pixel phone, with only one rear lens, relies on machine learning instead: depth information comes from combining the main lens with its dual-pixel autofocus sensor, while the front camera is analyzed purely by machine learning, so single-lens portraits are still possible. Google estimates which pixels belong to a person with a convolutional neural network trained on more than a million labeled images of people and their accessories, letting the AI model outline the subject in the image.
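Once a depth map exists, the final rendering step is conceptually simple. The following is a minimal, hypothetical sketch (not the Pixel's real renderer, which uses a variable-size blur kernel): pixels whose estimated depth is close to the chosen focal plane stay sharp, and everything else is replaced by a blurred copy of the image.

```python
import numpy as np

def synthetic_bokeh(image, depth, focus_depth, tolerance=0.1):
    """Depth-based background blur sketch.

    image, depth: 2-D arrays of the same shape; depth in arbitrary units.
    focus_depth:  depth of the plane that should stay sharp.
    """
    # 3x3 box blur via padded neighbourhood averaging (pure NumPy).
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    in_focus = np.abs(depth - focus_depth) <= tolerance
    return np.where(in_focus, image, blurred)
```

A real implementation would scale the blur radius with distance from the focal plane rather than switching between two fixed versions, but the two-branch form shows the core idea: the depth map decides, per pixel, between the sharp and the defocused image.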

Excellent high-resolution zoom from computational technology. In the past, because phones had a single fixed-focal-length lens, distant subjects could only be shot with digital zoom, i.e. digital cropping, which is equivalent to keeping only part of the image, so quality drops. Google's high-resolution zoom on the Pixel actually uses no AI-related technology; instead it synthesizes a higher-resolution image through high-speed burst shooting, making it comparable to current models equipped with a 2x telephoto lens.

The key is the Bayer array used in contemporary digital camera sensors, which forms an image from a mosaic of R, G and B pixels. When a high-speed burst is shot handheld, natural hand tremor shifts successive frames by sub-pixel amounts horizontally and vertically, so the color samples of the burst frames complement one another, and resolution is improved by reconstructing the missing color information. In a perfectly still environment with no shake, the optical image stabilization hardware is instead driven in reverse to introduce the needed shifts.
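The complementary-samples idea can be shown with a toy reconstruction. This sketch is illustrative only: it assumes the sub-pixel shift of each burst frame is already known exactly and simply drops each frame's samples onto a finer grid, whereas the real pipeline must estimate the shifts from hand tremor and merge the frames robustly.

```python
import numpy as np

def superres_from_shifts(frames_and_shifts, scale=2):
    """Toy multi-frame super-resolution from known sub-pixel shifts.

    frames_and_shifts: list of (low-res 2-D frame, (dy, dx)) pairs,
    where dy, dx are shifts in low-res pixel units (e.g. 0.5).
    """
    h, w = frames_and_shifts[0][0].shape
    hi = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in frames_and_shifts:
        # Place each low-res sample at its sub-pixel position on the fine grid.
        ys = np.arange(h) * scale + int(round(dy * scale))
        xs = np.arange(w) * scale + int(round(dx * scale))
        hi[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    return hi / np.maximum(weight, 1.0)
```

With four frames shifted by half a pixel in each direction, every position on the 2x grid receives a real sample, so the fine image is recovered without interpolation; that is exactly why hand tremor (or a deliberately wobbled OIS module) makes the burst frames complementary.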

Night mode: turning darkness into day. Google's night mode is still based on high-speed burst technology: after the shutter is pressed, it captures up to 15 frames. The capture plan is also adjusted according to how much the hand shakes and whether a moving subject is detected in the frame: if shake or motion is detected, the per-frame exposure is shortened, otherwise it is lengthened. From the Pixel 3 onward it also borrows the techniques of the high-resolution zoom function, and finally the tones are adjusted with learning-based white balance.
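The shake/motion trade-off described above can be sketched as a small planning function. The specific numbers (333 ms vs. 1000 ms per frame, a 6-second budget) are invented for illustration; only the structure of the decision follows the text: steadier hands and static scenes allow longer per-frame exposures, while shake or subject motion forces shorter ones, capped at 15 frames.

```python
def plan_night_burst(shake_score, motion_detected,
                     max_frames=15, max_total_ms=6000):
    """Hypothetical Night Sight capture planner (values illustrative).

    shake_score: 0.0 (tripod-still) .. 1.0 (very shaky).
    Returns (number of frames, per-frame exposure in ms).
    """
    # Shake or a moving subject -> short exposures to avoid blur;
    # a steady, static scene -> long exposures to gather more light.
    per_frame_ms = 333 if (shake_score > 0.5 or motion_detected) else 1000
    frames = min(max_frames, max_total_ms // per_frame_ms)
    return frames, per_frame_ms
```

Under these toy numbers, a steady tripod shot yields 6 frames of 1000 ms each, while a shaky handheld shot yields the full 15 frames of 333 ms each: roughly the same total light, traded against motion blur.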

The starting point of the night mode is to render colorful night scenes that are difficult for the human eye to recognize, or even to see at all. Taking classical painting as its teacher, Google draws on three characteristics of that tradition: strengthening contrast, letting shadows fall to black, and surrounding the scene with darkness, then restores color through tone mapping to present an excellent night mode.

Without the Pixel Visual Core, the Pixel 3a differs from the Pixel 3 only in speed. Some consumers may suspect that without the Pixel Visual Core the Pixel 3a's image quality must suffer. Marc Levoy, however, confidently guarantees that apart from the processing efficiency the Pixel Visual Core brings, since the camera and the AI models are identical, the Pixel 3a series is merely slower to process, with no difference in image quality.