
From RGGB to RYYB: can smartphone sensors take on SLRs?

[PConline Miscellaneous Talk] I hadn't planned to tear down another smartphone, but last time, in order to check whether the Meizu 16s really had a glue issue, I got hooked and ended up taking a whole phone apart. Sisan believes the camera module is one of the most complex components in a modern smartphone: it usually consists of a PCB, a CMOS sensor, a bracket and the lens assembly, with the CMOS doing the actual work of capturing photos and video. This article is a bit of popular science about the most basic design choice inside that CMOS: how its pixels are arranged.

First of all, we all know that photography is the art of light: light has to pass through the lens before it reaches the sensor. A phone lens is built from a stack of lens elements (five or more; generally the more elements the better, and glass beats resin), and behind it the CMOS sensor (using either the RGB primary-color separation method or the CMYK complementary-color method, explained in detail later) records the picture you want to shoot. Different apertures and focal lengths simply make it easier to get the depth of field and framing you want in different scenes. OIS optical image stabilization, a major selling point for many manufacturers, requires an extra motor and gyroscope to be built in.

For smartphones, the ultimate goal of many manufacturers is imaging quality comparable to an SLR. But a phone body is generally only about 8 mm thick, which puts a hard physical ceiling on how large the lens and CMOS can be. How is that supposed to compete with the CMOS dedicated to an SLR? So there is really only one way to narrow the gap: keep improving the imaging quality of the CMOS itself. Anyone who plays with photography knows the saying that "a sensor one size bigger crushes everything below it". Since we can't break through the physical limits, is simply enlarging the CMOS sensor feasible?

So here's the problem: the race to build the thinnest smartphone runs directly against making the CMOS bigger. Take the Nokia 808 PureView, the first phone in history to carry a 1/1.2-inch sensor: to fit that 1/1.2-inch CMOS, the body around the camera is roughly 17 mm thick. That is why this year's flagship sensors, the IMX586 and the IMX600/IMX650, are only 1/2.0 inch and 1/1.7 inch. That already counts as a "big" sensor next to the 1/2.x-inch CMOS commonly used in ordinary phones, but it is still far smaller than the sensor in an SLR.

So let's look at the second route: is increasing the light intake of the CMOS sensor feasible? More light means brighter, cleaner photos with less noise in the same scene. There are quite a few ways to get more light onto a CMOS: enlarge the sensor, widen the lens aperture, enlarge the photosensitive area of each individual pixel, or introduce an UltraPixel "super pixel" camera (as HTC did with the One M7).

However, the market never really accepted the UltraPixel camera, and lens aperture and sensor size have the same problem: for a phone body of barely 8 mm, f/1.6 is already close to the limit. The IMX586 achieves an equivalent single-pixel photosensitive area of 1.6 μm through its Quad Bayer array (Sony's "four pixels in one" binning), and even the higher-end IMX600 only reaches about 2.0 μm, so there is a ceiling here too.
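To make the "four pixels in one" idea concrete, here is a minimal sketch, assuming a Quad Bayer layout in which every 2x2 group of photosites sits under the same color filter; the function name `bin_quad_bayer` and the toy numbers are illustrative assumptions, not Sony's actual readout pipeline.

```python
import numpy as np

def bin_quad_bayer(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of same-colored photosites into one output pixel.

    In a Quad Bayer layout, every 2x2 group under one color filter can be
    read out as a single 'big' pixel: four times the collecting area per
    output pixel (e.g. four 0.8 um sites behaving like one 1.6 um site),
    at the cost of a quarter of the resolution.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even dimensions"
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))

# Toy mosaic: random photon counts per photosite.
rng = np.random.default_rng(0)
raw = rng.poisson(lam=20, size=(8, 8)).astype(np.float64)
binned = bin_quad_bayer(raw)
print(raw.shape, "->", binned.shape)                       # (8, 8) -> (4, 4)
print("mean signal per pixel:", raw.mean(), "->", binned.mean())
```

The binned output carries roughly four times the signal per pixel, which is exactly the trade the Quad Bayer sensors make in low light.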

Under the current technology tree, if you want to take on SLRs, the only place left to start is the underlying architecture of the CMOS sensor itself.

Before diving into that architecture, let's talk about the Bayer array. We can see different colors because our eyes contain cells that respond to different frequencies of light. In a camera, the CMOS has its own "cells" that sense color, but we call them pixels, and they are laid out in the form of a Bayer array.

Historically, Bryce Bayer, an imaging scientist at Kodak, was the first to exploit the fact that among the three primaries of red, green and blue the human eye is most sensitive to green. He placed a color filter on top of the CMOS, arranged as 1 red, 2 green and 1 blue (RGGB), to turn the sensor's black-and-white readings into color information that is closer to what the human eye perceives. Ever since, almost all CMOS sensors have used this RGGB layout, which is what we usually call the "Bayer array" or "Bayer filter".

The Bayer array has an inherent flaw, though. A CMOS photosite cannot tell colors apart during photoelectric conversion; the Bayer filter does the color separation for it. When light hits the filter, each photosite only lets its own color through: red light reaches the red pixels, green light the green pixels, and everything else is blocked. So each pixel records brightness for one color only; the filtering loses some light intensity, and all of the other color information is thrown away. Remember, that word is "all". A Bayer-filtered CMOS can therefore never perfectly reproduce the colors of the real scene, only approach them. To get as close as possible, the missing colors at each pixel have to be "guessed" from the color information of neighboring pixels. This "color guessing" step is called de-Bayering, or demosaicing. It also explains why some photos show a "color cast": that is the guessing step getting things wrong.
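To make the "color guessing" step concrete, here is a minimal demosaicing sketch, assuming an RGGB mosaic and plain neighbor averaging; real ISPs use far more sophisticated, edge-aware algorithms, so treat this only as an illustration of the principle.

```python
import numpy as np

def neighborhood_sum(a: np.ndarray) -> np.ndarray:
    """Sum of each pixel's 3x3 neighborhood (edges wrap; fine for a demo)."""
    return sum(np.roll(np.roll(a, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Crude bilinear demosaic of an RGGB mosaic.

    Every photosite recorded only one color; the two missing channels are
    'guessed' by averaging nearby photosites that did record them.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True   # R sites
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True   # B sites
    g_mask = ~(r_mask | b_mask)                                  # G sites
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        measured = np.where(mask, raw, 0.0)
        counts = neighborhood_sum(mask.astype(float))
        rgb[..., ch] = neighborhood_sum(measured) / np.maximum(counts, 1e-9)
    return rgb

# A uniform gray scene: every interpolated channel should come back equal.
raw = np.full((6, 6), 100.0)
print(demosaic_bilinear(raw)[2, 2])   # -> [100. 100. 100.]
```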

Having said all that, how can phone manufacturers improve on this? The first answer is the RGBW CMOS structure.

As mentioned above, the human eye is most sensitive to green, which is why Bayer put two green pixels in the RGGB array. So what if we replace one of those green pixels (G) with a completely transparent white pixel (W)? That is how the first four-color sensor was born. OmniVision (OV) was the first manufacturer to bring an RGBW CMOS to market; Motorola used it in the Moto X, the Moto Droid Mini and other models, though at the time Motorola simply branded it "ClearPixel" technology. Unfortunately, with OV fading in the sensor market and Motorola fading as a phone maker, that RGBW CMOS never became widely known.

What really pushed RGBW forward was Sony's IMX278 sensor (the Huawei Mate 8, OnePlus 3 and vivo X7 Plus all used it, and the IMX298 is also described as an RGBW-structured CMOS). The old manufacturer slogan of "sensitivity improved by 32% and low-light noise reduced by 78%" was how phones such as the Huawei P8 and the Meizu Blue Charm 6T armed themselves.

If a white pixel gathers even more light than a green one, why not go further: wouldn't replacing both green pixels with white ones be better still? In 2015, when MediaTek released the Helio P10, it launched an image engine called "TrueBright" built around exactly this idea: a CMOS with an RWWB structure, which would gather even more light than an RGBW sensor. MediaTek pushed the technology again when it released the Helio X20, yet to this day no RWWB sensor has actually shipped.
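To see why RWWB should out-gather RGBW, here is a deliberately crude back-of-the-envelope comparison. It assumes an idealized model in which each primary-color filter passes about one third of visible light and a white filter passes all of it; these fractions are illustrative assumptions, not measured transmission curves, and they overstate real-world gains, but they show the ordering the text describes.

```python
# Rough relative light gathered by one 2x2 filter cell under a toy model.
PASS_FRACTION = {"R": 1 / 3, "G": 1 / 3, "B": 1 / 3, "W": 1.0}

PATTERNS = {
    "RGGB (Bayer)": ["R", "G", "G", "B"],
    "RGBW":         ["R", "G", "B", "W"],
    "RWWB":         ["R", "W", "W", "B"],
}

baseline = sum(PASS_FRACTION[c] for c in PATTERNS["RGGB (Bayer)"])
for name, cell in PATTERNS.items():
    total = sum(PASS_FRACTION[c] for c in cell)
    print(f"{name:14s} relative light gathered: {total / baseline:.2f}x")
```

Under these toy assumptions an RGBW cell gathers roughly 1.5 times the light of an RGGB cell, and an RWWB cell roughly 2 times, which is the whole motivation behind the design.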

Although MediaTek's RWWB sensor only ever existed on paper, there is a lesson to take from it: why not throw away the Bayer filter altogether? Wouldn't letting all the light through greatly increase the amount reaching the CMOS? That is what Sony did with its dedicated IMX Mono black-and-white sensor, which has very high light intake and can record far more shadow detail in a dark environment.

Of course there are trade-offs. Without a color-separation system, a monochrome black-and-white camera cannot record color at all, so it has to be paired with a second, color CMOS. That dual-camera-plus-algorithm combination produces better night shots than a traditional single RGGB or RGBW CMOS. The black-and-white + color dual-camera combo is still popular today, while the single-camera RGBW sensor has been left behind in the long river of history...
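As an illustration of what the "dual camera + algorithm" fusion is doing, here is a toy sketch assuming two already-aligned frames: chroma is kept from the color camera and luminance is taken from the filter-free monochrome camera. The function and weights are illustrative assumptions; real pipelines add registration, denoising and tone mapping on top.

```python
import numpy as np

def fuse_mono_color(color_rgb: np.ndarray, mono: np.ndarray) -> np.ndarray:
    """Toy mono+color fusion: keep chroma from the color camera, take
    luminance from the cleaner, filter-free monochrome camera.

    Assumes both frames are aligned and normalized to [0, 1].
    """
    # Rec.601 luma weights for the color frame.
    luma = color_rgb @ np.array([0.299, 0.587, 0.114])
    # Chroma = color relative to its own luma; re-light it with the mono frame.
    chroma = color_rgb - luma[..., None]
    return np.clip(mono[..., None] + chroma, 0.0, 1.0)

# Tiny synthetic example: a dim, noisy color frame and a brighter mono frame.
rng = np.random.default_rng(1)
color = np.clip(0.2 + 0.05 * rng.standard_normal((4, 4, 3)), 0, 1)
mono = np.full((4, 4), 0.25)      # pretend the mono camera caught more light
print(fuse_mono_color(color, mono).shape)   # (4, 4, 3)
```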

Whether because dual cameras paved the way or because the market ultimately chose multi-camera modules, RGBW was eliminated. But single-sensor CMOS design never gave up the pursuit of more light, and further narrowing the night-shooting gap between phones and professional SLRs (and between competing phones) remains the key direction for smartphones. The Huawei P30 series and the Honor 20 series are arguably the current benchmarks for smartphone night photography. Setting aside sensor size, aperture and single-pixel photosensitive area, these phones are essentially betting their reputation on one change: RYYB simply replaces the two green pixels (G) with yellow pixels (Y).

Compared with RGGB, RYYB reduces the light lost to the Bayer filter and increases light intake by up to 40%. Take the Huawei P30 Pro: its ISO can reach 409,600, 64 times that of the iPhone XS Max! So only a tiny amount of light is needed to capture color detail in a nearly pitch-black scene.

Now comes the tricky part. Back when we discussed the original RGGB Bayer array, we said light is built from the three additive primaries red, green and blue, and yellow is not one of them. So how can true colors be restored without the all-important green? The answer is that red and green light add up to yellow (R + G = Y): a yellow pixel records the combined brightness of red and green rather than discarding green entirely. Even so, reshuffling the primaries like this changes how an RYYB sensor handles color at a fundamental level compared with an RGGB sensor. RGGB works directly with the additive primaries of light, where each filter passes one primary and R + G + B together make white. Yellow belongs to the complementary, subtractive side of color: a yellow filter passes red and green and blocks only blue, so green has to be computed afterwards (roughly G = Y − R) instead of being measured directly.
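Here is a tiny worked example of that arithmetic, under the idealized assumption Y = R + G; the helper `ryyb_cell_to_rgb` is hypothetical and is not Huawei's actual ISP pipeline, but it shows why green must be inferred rather than measured on an RYYB sensor.

```python
import numpy as np

def ryyb_cell_to_rgb(r: float, y1: float, y2: float, b: float) -> np.ndarray:
    """Recover an RGB triple from one 2x2 RYYB cell, using the idealized
    model Y = R + G (a yellow filter passes both red and green light).

    Green is not measured directly: it is estimated as Y - R, which is one
    reason RYYB sensors need heavier ISP correction to avoid color casts.
    """
    y = 0.5 * (y1 + y2)      # average the two yellow photosites
    g = max(y - r, 0.0)      # G is inferred, not measured
    return np.array([r, g, b])

# A patch lit by pure green light of intensity 1.0: the R and B sites read
# about 0, while each Y site picks up the full green signal.
print(ryyb_cell_to_rgb(r=0.0, y1=1.0, y2=1.0, b=0.0))   # -> [0. 1. 0.]
```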

In theory, then, although the RYYB filter raises total light intake, in essence it does so by letting extra red through in disguise, which is what improves night-scene performance. At the same time, with so many yellow pixels and no dedicated green ones, color casts appear more easily, and both image quality and saturation become harder to get right.

To tame the RYYB sensor properly, Huawei needed a more powerful hardware ISP and stronger algorithms. The president of Huawei's handset product line said the company spent three years tuning the RYYB filter to get its color reproduction right. Even so, our early tests of P30 photos still showed occasional color casts, which gradually diminished with later firmware updates.

On the smartphone imaging road, specialization brings real competitiveness, and a custom CMOS filter structure is a clear showcase of a manufacturer's technical strength. Sisan also hopes more manufacturers will bring their own ideas about photography, challenge tradition, and bring phones closer to the dream of matching SLRs.