
If photos taken by a digital camera never receive any post-production adjustment on a computer, do they count as straight-out-of-camera shots?


The answer is: No!

Because the camera manufacturer has already baked a set of processing parameters into the camera's firmware. The moment you press the shutter, those preset algorithms are already taking part in your creative process: colour correction, gamma correction, saturation adjustment, sharpening, and so on. Even if you never touch a computer afterwards, the camera has already done post-production internally, using that data to simulate, as closely as it can, the scene your naked eye saw.

So the photo you see is not the same as what your naked eye saw. Some people still fail to realise this: you have simply taken a picture according to the post-production parameters set by the camera manufacturer.

In fact, the imaging principle behind digital photos is not that complicated. Once you know how a digital camera records a scene and how it simulates the colours in that scene, the principle becomes easy to understand.

This requires understanding one topic central to digital imaging: colour. It is a broad topic, so this article touches on several aspects of it. The parts relevant to photography include how colour is produced, colour gamut, colour space, and so on.

Let's start with the first keyword: what is colour? Baidu Encyclopedia puts it this way: "Colour is a visual effect of light produced through the eyes, the brain and our life experience." That explanation is fairly abstract, so let's not overthink it and discuss it only within the framework of photography: what is the colour in a digital photo, and where does it come from? I will try to express my understanding in plain language for everyone to discuss.

As is well known, colour is a property of light. When light strikes a coloured object, the object reflects part of that light into the human eye; at the same time, the ambient light itself also reaches the eye. From the current ambient light and the object's reflection, the brain constructs a picture and forms the concept of a particular colour.

Different light sources emit light of different colours, and the essence of these colours is electromagnetic waves. Nature is full of electromagnetic waves of different frequencies, and since they are waves, they also have wavelengths: light of different frequencies has different wavelengths. Here are a few familiar examples along that spectrum: broadcast signals, television signals, radar waves, infrared, visible light, ultraviolet, X-rays, gamma rays, and so on.

Our eyes cannot recognise light of every wavelength, only wavelengths within a certain range, roughly 380-780 nanometres. This range is called "visible light", and it is what we see with the naked eye. As you can see, it is really just one kind of electromagnetic wave. Put another way, visible light and the broadcast signals floating through the air are the same kind of thing, only at different frequencies.
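The wavelength-frequency relationship mentioned above can be checked with a few lines of Python. This is my own illustration, not part of the article, using the standard relation c = λ·f:

```python
# Convert the visible-light wavelength range quoted above (~380-780 nm)
# into frequencies, using c = wavelength x frequency.
C = 299_792_458  # speed of light in a vacuum, metres per second

def wavelength_nm_to_frequency_thz(wavelength_nm):
    """Frequency in terahertz for a given wavelength in nanometres."""
    return C / (wavelength_nm * 1e-9) / 1e12

violet_edge = wavelength_nm_to_frequency_thz(380)  # highest visible frequency
red_edge = wavelength_nm_to_frequency_thz(780)     # lowest visible frequency
```

Running this shows violet light near 789 THz and red light near 384 THz, matching the claim below that red is the lowest-frequency visible light and violet the highest.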

Within the visible spectrum, electromagnetic waves of different frequencies appear as different colours. The visible light with the lowest frequency appears red, and the visible light with the highest frequency appears violet. On a simple spectrum chart, the light we cannot see whose frequency is lower than red sits just outside the red end, which is why it is called infrared. The light we cannot see whose frequency is higher than violet sits just outside the violet end, which is why it is called ultraviolet.

So colour itself is an attribute of light; put simply, it is the frequency of light.

When light of different frequencies enters the human eye, different colours are mapped in the brain. The colours that appear in the brain are human sensations. That is why some scientists have proposed the "colour blindness paradox": there are no colours in the world at all, only light of different frequencies colouring our brains.

First of all, we know that different colours are light of different frequencies, at least as far as our eyes are concerned. So can a digital camera perceive these colours?

No! The digital cameras we commonly use are far less capable than the human eye. The camera cannot sample the colour of visible light; all it can do is sample its intensity. Briefly, here is how that sampling works: the manufacturer places a great many tiny red, green and blue filters inside the body. These filters are arranged into a regular array and placed in front of the CMOS sensor.

So after visible light enters through the lens, it first passes through this array of red, green and blue filters. For each of the three colours, the intensity of the light is divided into 256 levels, from 0 to 255, and each brightness level of each colour is assigned a corresponding number. These numbered intensities of visible light are then passed on to the CMOS sensor.
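As a toy illustration of the sampling just described (my own sketch, not the camera's actual firmware logic): each photosite sits behind a single red, green or blue filter and records only an intensity, quantised into the 256 levels mentioned above. The 2x2 cell layout below follows the common Bayer pattern, which is an assumption; the article does not name a specific filter arrangement.

```python
# Quantise a normalised light intensity into one of 256 levels (0-255),
# the way the article describes each colour channel being numbered.
def quantize(intensity, levels=256):
    """Map an intensity in [0.0, 1.0] to an integer level 0..levels-1."""
    intensity = min(max(intensity, 0.0), 1.0)  # clamp out-of-range input
    return min(int(intensity * levels), levels - 1)

# A 2x2 cell of the (assumed) Bayer filter pattern: two green, one red, one blue.
bayer_cell = [["G", "R"],
              ["B", "G"]]

# Simulated light hitting the four photosites (normalised intensities).
light = [[0.50, 0.25],
         [1.00, 0.00]]

# What ends up in the RAW file is just these numbers, not a picture.
raw_numbers = [[quantize(v) for v in row] for row in light]
# raw_numbers == [[128, 64], [255, 0]]
```

The point of the sketch is the next paragraph's claim: the "digital negative" contains only numbered intensities, one per photosite, with no image content yet.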

The CMOS sensor records these colour numbers into a RAW file, producing a digital negative. There is no image content on this RAW negative, only numbers, which are finally written to the memory card for storage.

You could say that the photos we take are not recordings of scenes, mountains, rivers, flowers or trees. What the camera records is just a pile of numbers.

A RAW digital negative is of no use by itself, because no software can display a RAW file directly. Even if your camera's shooting format is set to RAW, the result shown on the LCD screen the moment you finish a shot is not the RAW data. It is merely a JPG temporarily generated by the camera's own algorithms for you to preview and review.

Converting an unviewable RAW file into a viewable JPG is a format conversion, and during it the camera applies a large number of necessary processing and beautification algorithms according to the manufacturer's programming: colour space conversion, gamma correction, colour correction, sharpening, saturation adjustment, and so on.
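Here is a hedged sketch of just one of the in-camera steps named above, gamma correction. The 1/2.2 exponent is the common sRGB-style approximation and is my assumption; each manufacturer's actual tone curve is its own.

```python
# Gamma-encode an 8-bit linear intensity level, as one illustrative
# example of the RAW-to-JPG processing steps listed in the article.
def gamma_correct(level, gamma=2.2):
    """Apply gamma encoding to a linear level in 0..255."""
    normalized = level / 255.0
    return round((normalized ** (1.0 / gamma)) * 255)

# Linear mid-grey (128) comes out noticeably brighter once gamma-encoded,
# while pure black (0) and pure white (255) are unchanged.
encoded_mid_grey = gamma_correct(128)
```

This is exactly the kind of silent, preset transformation the article is arguing about: the numbers in the file change before you ever see them.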

So from the moment you press the shutter, what the digital camera records for you is only the intensity of visible light, not its colour. It passes the incoming light through three colour filters to simulate colours, assigns a number to each intensity of each colour, and the CMOS sensor saves those numbers to the memory card in RAW format. Then, through a series of complex algorithms in the camera's firmware, that file is turned into a temporary JPG for you to view.

Have you noticed that from the light entering the camera to the photo appearing on the LCD screen, the whole act of taking a picture is full of words and processes like "simulation", "digitisation" and "algorithm"? Which of those links is not post-production? Aren't the parameters effectively "Photoshopped" in advance by the manufacturer, placed in the camera's firmware, and then used by us when we shoot?

If you don't believe me, take a look at your camera's white balance settings. Using a Nikon camera as an example, the "White Balance" menu offers "Incandescent", "Fluorescent", "Direct sunlight", "Flash", "Cloudy", "Shade" and so on, all of them parameters preset by the manufacturer.
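To make the white-balance point concrete, here is an illustrative guess at what such a preset boils down to: per-channel gain multipliers applied to the raw red, green and blue numbers. The gain values below are invented for the example; they are not Nikon's actual data.

```python
# Apply white-balance gains to an 8-bit RGB triple: scale each channel
# by its (preset) gain and clamp the result to the 0-255 range.
def apply_white_balance(rgb, gains):
    """Return the gain-adjusted, clamped RGB triple."""
    return tuple(min(round(c * g), 255) for c, g in zip(rgb, gains))

# Hypothetical "incandescent" preset: suppress red, boost blue to
# counteract the orange cast of tungsten light.
incandescent = (0.60, 1.00, 1.80)
neutral_grey = (128, 128, 128)

balanced = apply_white_balance(neutral_grey, incandescent)
# (77, 128, 230): the "same" grey, rendered differently by the preset
```

Whichever preset you pick, some set of numbers like these is applied for you, which is the article's point.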

Now look at the "Set Picture Control" menu: settings such as "Standard", "Neutral", "Vivid", "Monochrome", "Portrait" and "Landscape" are likewise preset parameters, aren't they?

Throughout the shooting process you cannot escape the data and algorithms pre-written by these manufacturers. The various parameters have already been tuned, and you must pick one of them to shoot with. And no matter which parameter you choose, isn't that Photoshop happening inside the camera body?

To adapt a poem by Tsangyang Gyatso: whether you acknowledge it or not, post-production is there, following you like a shadow.

Whether you realise it or not, the instant you press the shutter, the camera's internal program is already doing post-processing on your photo. With so many in-camera algorithms involved, is your photo still "accurate"? Can the notion of "straight out of camera" really hold up for digital cameras?

Post-production does not begin when you bring the photos to a computer for manipulation. It quietly started the moment you pressed the shutter, and that is unavoidable.

Precisely because the camera cannot sample colour, it has to simulate it from the intensity of visible light, then use a variety of complex algorithms to restore, as closely as possible, the colours your eyes saw. At that point, are the colours in the scene before you still the same colours captured in your camera?

In other words, when you take a photo, what your eyes see are the relatively real colours of nature that the human eye can perceive, while what the camera captures are colours simulated by its own algorithms. Are these numbered colours real?

This is the process and principle by which colour is produced in a digital photo: simulation, numbers and algorithms. Remember these words; we will come back to them later.

Having covered how digital photos produce colour, let's talk about the software and display devices used to view photos, and what effect they have on the so-called straight-out-of-camera picture. This involves the concepts of "colour space" and "colour gamut".

"Color space": Simply put, color space is the way we name colors mathematically.

For example, Adobe RGB, sRGB and ProPhoto RGB, which are common in photography and post-production work, all belong to the RGB family of colour spaces.

The family most closely tied to photography is RGB. When shooting, we set the colour space on the camera, most commonly sRGB or Adobe RGB. The photos are then imported into a computer, and when retouching, both the colour settings of the editing software and the colour settings of the monitor mostly involve RGB, so we will focus on the RGB colour spaces. (CMYK is used far less in photographic post-production, so I will skip it.)

"Color gamut": It is the color coverage included in each color space.

Here is a chart for everyone:

This is a colour gamut chart. The irregular coloured shape underneath covers the full range visible to the human eye, all of its colour and brightness information. This range is called the CIE colour space: every colour the human eye can recognise.


As you can see from the chart, I have used triangles and labels of different colours to mark the ranges covered by several RGB colour spaces. The yellow triangle, ProPhoto RGB, has the widest gamut; some of its edges even exceed the range of colours the human eye can recognise. So among the RGB colour spaces, ProPhoto RGB has the largest gamut.

Next is the white triangle, Adobe RGB. Its gamut is smaller than ProPhoto RGB's, but it covers almost all the colours the human eye encounters in daily life; in most cases the eye cannot tell the difference between what falls inside this gamut and what falls outside it.

The smallest is the pink triangle, sRGB. Although it is the smallest, it is the most common and practical. Jointly developed by Microsoft, HP, Epson and other manufacturers, it covers nearly all the colours needed for online images, web pages and games, and it is by far the most widely adopted.

These are the three RGB colour spaces most often seen in photography and post-production. For now, just remember one thing: ProPhoto RGB has the largest gamut, Adobe RGB is next, and sRGB is the smallest.

The monitor's job is to convert the photo data in the computer into coloured light we can see, that is, to turn virtual digital information back into colour in the real world.

In fact, during this conversion, the software on the computer also applies its own algorithms to reinterpret the colours that the camera originally simulated.

The monitors we currently use mix a wider range of colours from red, green and blue light. A monitor's limit is therefore set by how pure and bright its red, green and blue primaries can be, and the sixteen-million-plus colours mixed from them can only fall inside the triangle whose three vertices are those primaries. The area inside this triangle represents every colour the display can show. This is the display's colour gamut, an indicator of how rich a range of colours it can reproduce.
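For what it's worth, the "more than 16 million colours" figure above is just the three 256-level channels multiplied together:

```python
# With 256 brightness levels per red, green and blue channel,
# the number of possible mixes is 256 cubed.
levels_per_channel = 256
total_colors = levels_per_channel ** 3  # 16,777,216
```

That count says nothing about which colours they are; the triangle on the gamut chart decides that.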

At the shooting scene, the real scene you see with your naked eye is simulated by the camera's firmware into a pile of data, forming an electronic negative.

That data is then converted by computer software into light we can see, displayed through different devices and different algorithms. With so many man-made, preset parameters along the way, friends, do you still believe there is such a thing as a straight-out-of-camera photo?