The working principle of digital camera.
The lens:
Nature contains light of many colors, but all of them can be treated as combinations of red, green and blue light at different intensities. Light can be regarded simply as an electromagnetic wave, and different colors of light have different wavelengths.
We usually speak of color as a property of the object itself. Strictly speaking, however, the color we perceive depends on both the object and the ambient lighting. Different objects reflect different parts of the spectrum, so they appear as different colors to our eyes. This holds only under white light; under a colored light source the result will be different. For example, a cloth we normally call red will, when illuminated only by a red light source, appear white to our eyes. When light of various colors passes through a color filter, only light of the filter's own color passes through in quantity; light of other colors is absorbed by the filter and converted into heat.
The role of the lens is to focus light onto the photosensitive device. The photosensitive device of a digital camera is very small, and ambient light alone often cannot deliver enough intensity for it to gather sufficient information. The lens uses its specific shape to refract the light reflected from the subject onto the photosensitive device, much as a magnifying glass concentrates sunlight to burn ants, as many of us did as children.
A lens is actually made up of many lens elements, most of which have different shapes, so each element plays a different role within the lens. Generally speaking, using multiple elements brings the image closer to the real scene without unduly reducing the lens's transmittance.
We mentioned "lens transmittance" above. Simply put, it is how much of the incoming light the lens passes through. A lens is made up of many elements with smooth surfaces, which themselves reflect light. This reduces the total amount of light entering the lens and degrades the image formed on the CCD/CMOS sensor. Digital cameras therefore apply special coatings to the lens elements to minimize reflection. Because a coating can only reduce the reflection of light of a particular color, it cannot let all light into the lens; coatings therefore mainly target green light, to which the human eye is most sensitive. Some coatings also improve wear resistance, making the front element less prone to scratches.
The main function of using multiple elements is to correct the aberrations produced by a single lens. Light of different wavelengths has different refractive indices in the same glass, so after passing through a single lens the rays do not converge perfectly, producing aberrations. There are many kinds of aberration, such as spherical aberration, halation, vignetting and so on. In photos taken with phone cameras or cheap cameras we can sometimes see a small circle in the center: with only a single lens, the lens's diffraction and other optical errors cannot be corrected, and this aberration results. Image distortion arises for the same reason, an uncorrected optical path.
After choosing a subject, we aim the camera lens at it. The lens group then adjusts the distance between the lens and the photosensitive device according to the control signal from the autofocus system (which is run by the camera's central controller, introduced later), so that the image of the subject falls exactly on the CCD/CMOS and a sharp image is formed. A key parameter of a lens is its focal length: the distance from the optical center of the lens to the point where parallel incoming light converges. Many digital cameras have zoom lenses, which change the focal length by changing the distances between the elements inside the lens, so that the camera can magnify or shrink the subject like a telescope. However, because such a lens performs best at its normal focal length, images taken at the extremes of the zoom range may show deformation or distortion caused by the fixed physical shapes of the elements.
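The focusing behaviour described above follows the thin-lens equation 1/f = 1/u + 1/v, relating focal length f, subject distance u and image distance v. A minimal sketch (illustrative only; a real camera lens is multi-element and the autofocus loop is far more involved):

```python
def image_distance(focal_mm, object_mm):
    """Thin-lens equation 1/f = 1/u + 1/v, solved for v: where the
    image plane must sit so an object at distance u is in focus."""
    if object_mm <= focal_mm:
        raise ValueError("object inside the focal length forms no real image")
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

# A 50 mm lens focused on a subject 2 m away: the sharp image forms
# slightly behind the focal point, so the lens must move outward.
print(round(image_distance(50, 2000), 2))   # ~51.28 mm
```

This is why the lens physically moves during focusing: as the subject gets closer, v grows, and the lens-to-sensor distance must grow with it.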
In the optical path, the intensity of light must be controlled to suit different shooting environments. This "light control" is the job of the aperture: a set of "valves" inside the lens, a circle enclosed by several opaque blades. By changing the diameter of this circle, the amount of light passing through the lens is controlled. The main functions of the aperture are: 1. regulating the light and controlling the luminous flux; 2. stopping down reduces the residual aberrations of the lens; 3. stopping down increases the depth of field, makes the incident light more uniform and reduces darkening at the four corners of the image; 4. a large aperture reduces the depth of field, blurring out-of-focus areas and making the subject stand out. Roughly speaking, depth of field describes how sharply the scene in front of and behind the subject is rendered. Aperture is written with an f, such as f/8 or f/5.6; the larger the number, the less light is admitted and the smaller the aperture.
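The "larger f-number, less light" rule above follows from geometry: the light admitted is proportional to the area of the aperture opening, i.e. to 1 over the f-number squared. A small illustrative sketch:

```python
def relative_light(f_number):
    """Light admitted through the aperture, proportional to the opening's
    area and hence to 1 / f_number**2 (in arbitrary units)."""
    return 1.0 / f_number ** 2

# Going from f/5.6 to f/8 -- one full stop -- roughly halves the light:
ratio = relative_light(5.6) / relative_light(8)
print(round(ratio, 2))   # ~2.04
```

This is why the standard f-stop series (1.4, 2, 2.8, 4, 5.6, 8, ...) advances by factors of the square root of 2: each step doubles or halves the luminous flux.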
Aperture control is generally automatic: the central controller uses the metering system to compute the optimal aperture for the current shutter speed and sensitivity, then drives the aperture to that value. Some cameras also offer a manual mode in which the user sets the aperture directly.
The CCD/CMOS sensor:
The CCD/CMOS sensor is one of the most important devices in a digital camera, and it is the fundamental difference between a digital camera and a traditional film camera. CCD stands for charge-coupled device; CMOS stands for complementary metal oxide semiconductor. The working principles of CCD and CMOS share a common point: both use photodiodes as the photoelectric conversion element.
As mentioned above, different colors of light pass through a color filter in different amounts. If we place a green filter over a photodiode, only green light reaches it, and the strength of its signal depends on the green content of the incident light. We therefore use a group of four photodiodes to sample the light reflected from the object: the R unit measures red light, the B unit measures blue light, and the two G units measure green light. The color of the original light is obtained by processing the signals of the four units (the two G units each contribute 50% of the green value).
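The four-photodiode grouping described above (commonly known as a Bayer pattern) can be sketched in a few lines; the 50% weighting of the two G units is simply their average. This is an illustrative simplification, not a camera's actual demosaicing algorithm:

```python
def bayer_group_to_rgb(r, g1, g2, b):
    """Combine one R / G / G / B photodiode group into a single RGB
    value, giving each green sample 50% weight as described above."""
    return (r, (g1 + g2) // 2, b)

# Sample readings from the four photodiodes of one group:
print(bayer_group_to_rgb(200, 150, 160, 90))   # (200, 155, 90)
```

Using two green samples per group matches the eye's greater sensitivity to green, the same reason lens coatings prioritize green light.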
The CCD sensor has an important working characteristic: its output is a continuous serial current signal. Unlike a CMOS sensor, a CCD has no signal amplifier at each pixel; instead, a buffer shifts the pixel signals out one by one, according to a fixed clock period, into a single continuously varying current signal. At the output end, the image processor determines the physical position of each sample from the period of the clock signal.
A photodiode is an analog component: for incident light of different intensities it outputs a continuously varying current or voltage signal. Quantizing these signals, that is, "digitizing" them, means classifying the current or voltage by its intensity. For example, the voltage output when the photodiode receives light of maximum intensity is defined as level 255, and the output with no light as level 0. This gives 256 levels between the minimum and the maximum, and the image processor uses a method similar to rounding to assign each signal to a level, finally turning the continuously varying analog current/voltage signal into a discrete, stable digital signal. Today's digital cameras generally quantize the output of each photodiode into 256 levels. In this scheme, three photodiodes can represent 256*256*256 colors. Since 256 levels correspond to an 8-bit binary number, i.e. an 8-bit channel, such a camera produces 8 bit + 8 bit + 8 bit = 24-bit color.
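The 256-level quantization described above can be sketched directly; `v_max` here is a hypothetical full-scale voltage, since the actual value depends on the sensor:

```python
def quantize(voltage, v_max=1.0, levels=256):
    """Quantize an analog photodiode voltage into one of 256 discrete
    levels (0..255) by rounding, as described above.
    v_max is a hypothetical full-scale voltage for illustration."""
    voltage = max(0.0, min(voltage, v_max))      # clip out-of-range input
    return round(voltage / v_max * (levels - 1))

print(quantize(0.0))   # no light      -> level 0
print(quantize(1.0))   # full light    -> level 255
print(quantize(0.6))   # 60% intensity -> level 153
print(256 ** 3)        # colors from three 8-bit channels: 16777216
```

This is exactly where the familiar 0-255 RGB values and the "16.7 million colors" figure come from.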
A CMOS sensor also uses photodiodes to convert light signals into electrical signals; the differences are that a CMOS sensor outputs voltage signals, and each of its photodiodes has its own independent amplifier. This is because the CMOS substrate cannot block the free movement of electrons the way a CCD's material can, so the signals on a CMOS sensor interfere with each other strongly, producing considerable parasitic noise. To amplify the extremely weak, easily disturbed voltage from each photodiode before it can be corrupted, an amplifier is placed next to each photodiode, so that any interference picked up afterwards has only a weak effect. However, the parameters of these many amplifiers are difficult to make exactly identical, and their differences show up in the final result. This is why images from cameras using low-grade CMOS sensors often contain white noise or specks of other colors: signal interference and amplifier mismatch prevent the signals from being amplified uniformly.
In a digital camera, sensitivity is adjusted by changing the gain of the photodiode amplifiers. For example, when light is insufficient we can increase the amplifier gain, so that the analog-to-digital converter downstream receives a higher voltage/current signal and the picture comes out brighter than it would at the base gain.
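A toy model of this gain adjustment (all numbers hypothetical, chosen only for illustration): raising the gain makes the same dim scene quantize to a brighter level, up to the converter's full-scale limit:

```python
def sensed_level(signal_mv, gain, full_scale_mv=1000):
    """Model of sensitivity adjustment: amplify the photodiode signal
    by `gain`, clip at the ADC's full scale, quantize to 8 bits.
    All voltages are hypothetical illustration values."""
    amplified = min(signal_mv * gain, full_scale_mv)
    return round(amplified / full_scale_mv * 255)

print(sensed_level(100, 1))   # dim scene at base gain -> a dark level
print(sensed_level(100, 4))   # same scene at 4x gain  -> a brighter level
```

Note that real amplifiers boost noise along with the signal, which is why high-sensitivity shots are grainier; the model above omits that.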
In ordinary digital cameras, sensors are built according to the principles above; at most, manufacturers vary the arrangement of the photodiodes.
The central controller:
The central controller is the brain of the digital camera: all of the camera's actions, such as the power-on self-test and error handling, are issued by it. The central controller is a programmable DSP (digital signal processor) with a small FLASH memory nearby or on-chip that stores its program. Following this program, the central controller responds to the camera's various operations: judging the ambient light level, adjusting the gain of the photodiode amplifiers, deciding whether to fire the flash, choosing the shutter speed and aperture, and so on.
The image processor:
Besides calculating the color of each pixel, the image processor must arrange the pixels according to a clock cycle to assemble a complete image, and in some cases compress the image in a given format to reduce its size. The image processor is, in essence, also a programmable DSP, and the quality of its algorithms has a great influence on the quality of the processed images.
After the voltage/current signals are quantized, the image processor computes the color of each pixel. For example, if the R unit yields 255, the G unit 153 and the B unit 51, the image processor substitutes these three values into its own algorithm and produces the color with R = 255, G = 153 and B = 51.
In image processing, "interpolation" algorithms are commonly used. Interpolation means supplementing discrete data with estimated values in between, so that the data set approximates a continuous function: from the function's values at a finite number of points we estimate its values elsewhere, obtaining a complete description from limited data. Interpolation is typically used when enlarging the pixel count of a picture. The picture has only so many pixels, but software can compute an intermediate value between two neighboring pixels and insert it between them. This method cannot truly add resolution or detail, but the interpolated pixels are usually not far from reality, which is useful in some situations (for example, enlarging a picture without producing blocky jaggies). When a camera advertisement quotes a maximum pixel count, check whether it refers to effective pixels: a figure obtained only by interpolation means little, since in theory interpolation can be carried on indefinitely.
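The simplest form of the interpolation described above inserts the average of each pair of neighbouring pixels between them (shown here on a single row of brightness values):

```python
def midpoint(a, b):
    """Estimate one new pixel between two neighbours by averaging.
    The result is an estimate, not real detail captured by the sensor."""
    return (a + b) / 2

row = [10, 20, 40]          # three real pixel values
upscaled = []
for left, right in zip(row, row[1:]):
    upscaled += [left, midpoint(left, right)]
upscaled.append(row[-1])    # keep the final real pixel
print(upscaled)             # [10, 15.0, 20, 30.0, 40]
```

Real cameras use more elaborate schemes (bilinear, bicubic), but the principle, and the limitation, are the same: the inserted values are guesses constrained by their neighbours.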
The pixels thus generated are arranged according to the physical positions of the photodiodes that produced them, yielding a complete uncompressed picture, which is stored in dynamic RAM. If no compression is required, it is written to FLASH storage or transmitted to other devices through an interface.
For compression, JPG is the format of choice for digital cameras, because it offers a very high compression ratio and the image quality can be set according to the user's capacity requirements. In practice, a TIFF picture with complex content and a JPG of the same content whose difference from it is hard to detect with the naked eye can differ in size by a factor of 5:1 or more.
JPG compression can be divided roughly into three steps (note that the discrete cosine transform is applied to each color component separately, not to a combined color value): 1. the discrete cosine transform (DCT), which removes the correlation within the image; 2. quantization, which discards information according to the physiological characteristics of the human eye, with a standardized quantization table determining how; 3. coding, which statistically compresses the data itself to minimize the data stream of the compressed image. For the DCT, the image is first divided into 8*8 blocks, and each block is transformed. The DCT is an orthogonal transform with the following characteristics: first, it is distortion-free and fully reversible; second, it removes correlation; third, it redistributes the energy, concentrating it in the upper-left corner of the block in a roughly triangular distribution. An 8*8 block contains 8*8 = 64 sample values, and after the DCT it is still 64 values, so the transform by itself achieves no compression. In the quantization step, however, the quantization table matches the characteristics of the human eye: the low-frequency components in the upper-left corner are quantized finely, while the rest, the high-frequency components, are quantized coarsely. After this, most of the coefficients in the block are zero. The data are then read out in a zigzag scan, so that only the front of the resulting string contains large values while the rest is small or zero; at this point zero-run-length coding compresses the bit rate effectively.
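The energy-concentration property described above can be verified with a pure-Python sketch of the 8*8 DCT: transforming a flat block (all pixels equal) puts all of its energy into the single upper-left "DC" coefficient, leaving the other 63 coefficients at zero:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block, the transform JPEG applies
    to each block of each color component."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * s
    return out

flat = [[128] * 8 for _ in range(8)]   # a featureless mid-gray block
coeffs = dct2(flat)
print(round(coeffs[0][0]))   # DC coefficient holds all the energy: 1024
print(round(coeffs[0][1]))   # every other coefficient: 0
```

A real encoder uses a fast DCT rather than this O(n^4) loop, but the output, and the reason coarse quantization of the lower-right coefficients costs so little, is the same.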
In areas of strong contrast, such as sharp edges, we may find that the pixels of adjacent blocks do not line up; there are also "halo" and "ghosting" artifacts, introduced when the blocks are quantized. If the compression ratio is modest, these distortions are very small and generally go unnoticed. After quantization the image must be encoded, that is, the stream of values is compressed losslessly using the principles of probability. Huffman coding is the most widely used method, a statistical code; "variable-length coding" generally refers to Huffman coding. The code must be agreed in advance and stored in a code table, so that the decoder can correctly look up what each code represents. The method is to list the symbols by probability of occurrence, add the two smallest probabilities to form a new probability, re-insert it into the list, and repeat until the total probability is 1, assigning "0" and "1" to the two branches of each merge. Reading from the symbol back toward the root and arranging the 0s and 1s encountered along the way from lowest bit to highest gives the symbol's Huffman code. The binary numbers generated this way are the substantive data of a JPEG file. But we usually do more than transmit the image: the data stream must also be organized and packaged. Organizing the stream combines the various marker codes and coded image data into frames, convenient for transmission, storage and decoding; packaging annotates the binary data so that a decoder can decode the image correctly. The package usually also includes data about the shot, such as the camera model, aperture, shutter speed, resolution and date.
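The merge-the-two-smallest-probabilities procedure described above can be sketched with Python's heapq (the symbol frequencies here are made up for illustration; a real JPEG uses standardized or image-derived tables):

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table by repeatedly merging the two
    lowest-probability nodes, prepending '0' to one branch and '1'
    to the other, exactly the procedure described above."""
    heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)                 # unique id so ties never compare dicts
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for sym in lo[2]:
            lo[2][sym] = "0" + lo[2][sym]
        for sym in hi[2]:
            hi[2][sym] = "1" + hi[2][sym]
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, {**lo[2], **hi[2]}])
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10})
print(codes)   # most frequent symbol gets the shortest code
```

Frequent symbols receive short codes and rare ones long codes, which is what makes the encoding step a statistical compression.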
These data can then be sent to the interface circuit, written to FLASH, or transmitted to other external processing devices.
Memory:
Memory is usually a removable peripheral: a digital camera has only a small built-in FLASH chip, not enough for high-resolution photos. Common memory cards include CF (Compact Flash), SM (Smart Media), MMC (Multi Media Card), SD (Secure Digital Card), MS Duo (Memory Stick Duo) and IBM's Microdrive. With the exception of the IBM product, all of these use FLASH as the storage element. Let us look at how flash memory saves data, starting from its internal microstructure.
We know that storing binary numbers ultimately comes down to simple switches. Flash memory is no different: internally it is a series of "switches" that survive power loss. The on/off state of each switch represents a binary 0 or 1, so a series of switches can represent many binary numbers, which in turn encode the meaningful data we normally see.
A FLASH chip is composed of an array of insulated-gate MOS transistors in a fixed arrangement, and it is these transistors that realize the chip's "on/off" states. At the bottom of such a transistor is the NP junction of a transistor; above it floats a piece of polysilicon, the floating gate, surrounded on all sides by an insulating oxide layer. Whether the floating gate holds charge determines whether a conductive channel forms between the source and drain of the transistor: if enough charge sits on the floating gate, the source and drain conduct without depending on the power supply, so the data are retained even through a power failure. Applying a direct voltage between the source and the gate causes the charge on the floating gate to diffuse back to the source, leaving the source and drain non-conductive. Applying a direct voltage U-1 between the source and the gate while simultaneously applying a smaller direct voltage U-2 between the source and the drain drives charge onto the floating gate, so that the source and drain conduct. Because the floating gate is "floating", with no discharge path, its charge cannot leak away for a very long time even with the power off, and the source-drain path keeps its on/off state.
In this way, the memory controller, connected to the image processor through an interface, responds to a write command by switching the voltages on the source, gate and drain of the appropriate MOS transistors, turning each one on or off and thereby storing the data.
Through the above analysis we have a general understanding of how each part of a digital camera works. Although some products on the market claim to adopt many so-called new technologies and to outperform the competition, the basic working principle of digital cameras remains much the same: most of those new technologies are minor "improvements" that do not really change it.
The popularity of digital cameras is a blessing for modern people. The appearance of digital cameras and camcorders lets more people enjoy the pleasure of art; art is no longer the privilege of those with expensive SLR cameras and deep pockets. As prices fall, more and more people are using high-quality digital cameras to record fleeting moments quickly and well. It is these casual stories that keep the breath of our times alive in people's memory. We have to say: technology has changed the world.