
Photographic gamma ray

Is a photo from a digital camera truly "straight out of the camera," with no post-adjustment on a computer?

The answer is: no!

Camera manufacturers build many processing parameters into the firmware. As long as we pick up a digital camera and shoot, even if we never touch a computer afterward, the camera's preset adjustments and algorithms are already at work. From the moment you press the shutter, those presets take part in your creative process: color correction, gamma correction, saturation adjustment, sharpening, and so on. The camera uses data to approximate, as closely as it can, the scene your naked eye sees.

The picture you get is therefore not identical to what your naked eye saw. Some people still don't grasp this: you have simply taken a photo according to the post-processing parameters the camera manufacturer set for you.

In fact, the imaging principle of a digital photo is not that complicated. Once you know how a digital camera records a scene and simulates its colors, this truth is easy to accept.

That requires understanding a topic central to digital imaging: color. The topic is broad, so this article touches on many aspects, including how color arises, color gamut, color space, and so on.

Let's start with the first keyword: what is color?

Here is a definition quoted from Baidu Encyclopedia: "Color is the visual effect produced when light passes through the eyes and brain, combined with our life experience."

This explanation is rather abstract, so I won't dwell on it. Let's discuss the colors in digital photos within the framework of photography: what are they, and where do they actually come from? I will try to express my understanding in plain, everyday language.

As we all know, color is a characteristic of light. When light strikes a colored object, the object reflects part of that light into our eyes; the same ambient light also reaches our eyes directly. The brain then forms the impression of a particular color based on the ambient light and the object's reflection.

Different light sources emit light of different colors, and the essence of this light is the electromagnetic wave.

Nature is full of electromagnetic waves of different frequencies. Since they are waves, they also have wavelengths, and different frequencies correspond to different wavelengths. Many familiar terms all refer to electromagnetic waves: radio signals, TV signals, radar waves, infrared, visible light, ultraviolet, X-rays, gamma rays, and so on.

Our eyes cannot detect all of these wavelengths, only a narrow range of roughly 380-780 nanometers. This range is called "visible light": what we can see with the naked eye. It too is electromagnetic radiation; seen this way, the visible light around us and a radio broadcast signal are the same kind of thing, differing only in frequency.
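As a toy illustration of these bands, here is a sketch that classifies a wavelength given in nanometers (the 380-780 nm visible range is from the text above; the function name and the simplified boundaries of the neighboring bands are my own):

```python
def classify_wavelength(nm):
    """Roughly classify an electromagnetic wavelength given in nanometers.

    The 380-780 nm visible band matches the range quoted in the article;
    the neighboring bands are simplified, illustrative labels only.
    """
    if nm < 380:
        return "ultraviolet (or shorter: X-ray, gamma ray)"
    elif nm <= 780:
        return "visible light"
    else:
        return "infrared (or longer: radar, TV, radio)"

print(classify_wavelength(550))   # a mid-spectrum wavelength: visible light
print(classify_wavelength(10))    # far shorter than visible
print(classify_wavelength(1e6))   # far longer than visible
```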

Within the visible spectrum, electromagnetic waves of different frequencies appear as different colors: the lowest-frequency visible light appears red, and the highest-frequency appears violet. Let's look at a simple spectrum chart. Light with a frequency lower than red's is invisible to us; it lies beyond the red end of the spectrum, so it is called infrared. Light with a frequency higher than violet's is also invisible; it lies beyond the violet end, so it is called ultraviolet.

So color itself is an attribute of light; in short, it is the frequency of the light.

When light of different frequencies enters the human eye, different colors are mapped in the brain. These colors are human sensations. That is why some scientists propose the "color blindness paradox": there is no color in the world at all, only light of different frequencies "dyeing" our brains.

We now know that, for our eyes, different colors are simply light of different frequencies. So can a digital camera perceive these colors directly?

It cannot!

Because our everyday digital cameras are far less capable than the human eye!

The camera has no way to sample the color of visible light directly; all it can do is sample its intensity.

How does it sample, specifically? Put simply: manufacturers place a huge number of tiny red, green, and blue filters inside the body. These filters form a regular array positioned in front of the CMOS sensor.

After visible light enters through the lens, it must pass through this array of red, green, and blue filters. The intensity of each of the three colors is divided into 256 levels, numbered 0 to 255; each brightness level of each color gets a corresponding number, and these numbered intensity values are passed on to the CMOS sensor.
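The 0-255 numbering described above is plain 8-bit quantization. A minimal sketch, assuming a normalized intensity between 0 and 1 (the function and variable names are illustrative, not actual camera firmware):

```python
def quantize(intensity, levels=256):
    """Map a normalized light intensity in [0.0, 1.0] to an integer level.

    This mirrors the 8-bit quantization described in the text: each color
    channel's intensity becomes one of 256 numbers, from 0 to 255.
    """
    intensity = max(0.0, min(1.0, intensity))  # clip out-of-range readings
    return round(intensity * (levels - 1))

print(quantize(0.0))   # darkest:  0
print(quantize(1.0))   # brightest: 255
print(quantize(0.25))  # a quarter of full intensity: 64
```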

The CMOS sensor records these numbered color values in a RAW file, producing a "digital negative." This negative contains no image, only numbers, and it is finally written to the memory card for storage.

It can be said that what we shot was not the scene itself, not the mountains and rivers or the flowers and trees; what the camera recorded is just a string of numbers.

A RAW digital negative by itself is of little use, because no software displays RAW data as-is without first interpreting it. Even if your camera is set to shoot RAW, the image shown on the LCD after you press the shutter is not the RAW data but a JPG preview the camera generates on the fly with its own algorithms, so that you can review the shot.

To convert the invisible RAW file into a viewable JPG, the camera applies, during the format conversion, many necessary and cosmetic algorithms according to the manufacturer's program: color space conversion, gamma correction, color correction, sharpening, saturation adjustment, and so on.
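Gamma correction, one of the steps just listed, can be sketched with the standard sRGB transfer curve. The constants below come from the sRGB specification; the function name is my own, and a real camera's tone curve may differ:

```python
def srgb_encode(linear):
    """Apply the standard sRGB transfer curve ("gamma correction") to a
    linear intensity value in [0.0, 1.0].

    Constants are from the sRGB specification (IEC 61966-2-1).
    """
    if linear <= 0.0031308:
        return 12.92 * linear                      # linear toe near black
    return 1.055 * (linear ** (1 / 2.4)) - 0.055   # power-law segment

# A mid-grey linear value of 0.18 encodes to roughly 0.46, which is why
# gamma-encoded images look brighter than the raw linear sensor data:
print(round(srgb_encode(0.18), 2))
```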

So, from the moment you press the shutter: the digital camera records only the intensity of visible light, not its color; it simulates color by passing the incoming light through three color filters and numbering each intensity level; the CMOS sensor saves these numbers to the memory card in RAW format; and after a series of elaborate in-camera algorithms, the file is converted into a temporary JPG for you to view.

Have you noticed that the entire journey, from light entering the camera to a photo appearing on the LCD screen, is full of words like "simulation," "digitization," and "algorithm"? Which of those steps is not post-processing? Aren't all of these parameters preset by the manufacturer and baked into the camera's firmware, so that we shoot with them whether we notice or not?

If you don't believe me, look at your camera's white balance settings. Take a Nikon camera as an example: in the white balance menu, options such as incandescent, fluorescent, sunny, flash, cloudy, and shade are all manufacturer presets.
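What a white balance preset does can be sketched as a set of per-channel gain multipliers. The gain values below are made up for illustration; real cameras derive their own per-preset, per-model gains:

```python
# Hypothetical white-balance gains (red, green, blue) per preset.
# These numbers are illustrative only, not any real camera's values.
WB_PRESETS = {
    "incandescent": (0.6, 1.0, 1.6),  # tame the warm orange cast
    "daylight":     (1.0, 1.0, 1.0),  # neutral reference
    "cloudy":       (1.2, 1.0, 0.8),  # warm up the bluish cast
}

def apply_white_balance(rgb, preset):
    """Scale each 8-bit channel by the preset's gain, clipping at 255."""
    gains = WB_PRESETS[preset]
    return tuple(min(255, round(c * g)) for c, g in zip(rgb, gains))

print(apply_white_balance((200, 180, 120), "daylight"))      # unchanged
print(apply_white_balance((200, 180, 120), "incandescent"))  # cooler result
```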

Now look at the Standard, Neutral, Vivid, Monochrome, Portrait, Landscape and other options in the "Set Picture Control" menu: aren't these preset parameters too?

During shooting you can never escape the data and algorithms these manufacturers wrote in advance. The various parameters have already been implemented for you, and you must choose one of them as your shooting setting. Whichever you select, isn't that just Photoshop inside the camera body?

To misquote a poem by Cangyang Jiacuo: whether you acknowledge it or not, the post-processing is right there, following you.

Whether you know it or not, the moment you press the shutter, the camera's internal program is already post-processing your photo. With so many in-camera algorithms involved, is your photo still "straight out of the camera"? Can that concept even stand, on a digital camera?

Post-processing is not only what happens later on the computer. It quietly begins the instant you press the shutter, and it is unavoidable.

Because the camera cannot sample color, it must simulate it from the intensity of visible light and then, through various elaborate algorithms, reconstruct something close to what your eyes saw. So are the colors of the scene in front of you really the same as the colors captured in your camera?

In other words, when you take a picture, your eyes perceive the relatively real colors of nature, while the camera captures colors simulated by its own algorithms. Are those numbered colors real?

That is the process and principle by which color arises in a digital photo: simulation, numbers, and algorithms. Remember these three words; we will return to them later.

Having covered how digital photos generate color, let's turn to the software and display hardware we use to view photos, and how they affect the notion of a photo being "straight out of the camera." This involves two concepts: "color space" and "color gamut."

"Color space": simply put, a color space is a mathematical way of naming colors.

For example, Adobe RGB, sRGB, and ProPhoto RGB, which are common in photography and post-production, all belong to the RGB family of color spaces.

RGB is the family most closely tied to photography. In daily shooting we set the camera's color space, most commonly sRGB or Adobe RGB, and when photos are imported into a computer for retouching, the color settings of the editing software and the monitor mostly revolve around RGB. So we will focus on RGB color spaces. (CMYK is used far less in photographic post-production, so we skip it.)

"Color gamut": the range of colors each color space covers.

Let me show you a chart first:

This is a chromaticity diagram with an irregular outline; it covers the color and brightness information of all light visible to the human eye. The range is called the CIE color space: every color the human eye can recognize.

In the diagram I marked the coverage of several RGB color spaces with triangles and labels in different colors. The yellow triangle, ProPhoto RGB, is the widest; parts of its edge even exceed the range the human eye can recognize. So among RGB color spaces, ProPhoto RGB has the largest gamut.

Next is Adobe RGB, the white triangle. Its gamut is smaller than ProPhoto RGB's, but it contains almost every color the human eye encounters in daily life; in most cases the eye cannot tell colors just inside the gamut from those just outside it.

The smallest is sRGB, the pink triangle. Though smallest, it is the most common and practical. Jointly developed by Microsoft, Hewlett-Packard, Epson, and other manufacturers, it covers almost all the colors needed for online images, web pages, and games, and is by far the most widely adopted.

These are the three RGB color spaces most often seen in photography and post-production. Remember the ordering: ProPhoto RGB has the largest gamut, Adobe RGB is next, and sRGB is the smallest.
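That ordering can even be checked numerically. Each space's gamut triangle is defined by the CIE xy chromaticity coordinates of its red, green, and blue primaries (the coordinates below are the published values for each space, quoted from memory), so comparing triangle areas with the shoelace formula gives a rough sketch of relative gamut size:

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle from (x, y) vertices."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# CIE xy chromaticities of each space's red, green, and blue primaries.
PRIMARIES = {
    "sRGB":         [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
    "Adobe RGB":    [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)],
    "ProPhoto RGB": [(0.7347, 0.2653), (0.1596, 0.8404), (0.0366, 0.0001)],
}

for name, pts in PRIMARIES.items():
    print(name, round(triangle_area(*pts), 3))
```

Running this confirms ProPhoto RGB covers the largest triangle, Adobe RGB the next, and sRGB the smallest, matching the diagram.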

A monitor's job is to convert the photo data in the computer into colored light we can see; that is, to turn virtual digital information back into color in the real world.

In this conversion, the software on the computer also applies its own algorithms to interpret colors that the camera has already simulated once.

The monitors we use today all mix their colors from red, green, and blue. A display's limit is the maximum brightness of those three primaries; the roughly 16 million other colors mixed from red, green, and blue must lie inside the triangle formed by connecting the three primary points. That triangle represents every color the current display can show. This is the display's color gamut, an indicator of how rich the display's colors can be.
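The "16 million colors" figure follows directly from the 0-255 numbering of each channel described earlier:

```python
LEVELS = 256  # 8 bits per channel, matching the 0-255 numbering above

# Every combination of a red, a green, and a blue level is one mixable color.
total_colors = LEVELS ** 3
print(total_colors)  # 16777216, the "16 million colors" of a typical display
```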

At the scene, the real view your naked eye sees is simulated by the camera's program into a pile of data, forming an electronic negative. Computer software then converts that data back into visible light, rendered by different display devices with different algorithms. With so many artificial, preset parameters along the way, dear friends, do you still believe in a photo that is "straight out of the camera"?