There are many high-definition movies and TV programs that are only a few hundred megabytes, yet video shot with my own camera runs to several gigabytes. Why?

A high-definition TV program (an American TV episode, say) that weighs in at a few hundred megabytes has already been compressed; those few hundred megabytes are the size of the compressed data.

If you shoot footage of the same length yourself, it will come to several gigabytes, because a different compression method, at a much higher bitrate, is used, as the rough arithmetic below shows.
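
To make the gap concrete, here is a minimal sketch of the arithmetic: file size is just bitrate times duration. The bitrates are illustrative assumptions (roughly a downloaded episode versus a consumer camera), not measured values.

```python
# Rough file-size estimate: size (bytes) = bitrate (bits/s) * duration (s) / 8.
# Both bitrates below are assumed ballpark figures, not measurements.

def file_size_mb(bitrate_mbps: float, duration_min: float) -> float:
    """Approximate size in megabytes of a constant-bitrate stream."""
    bits = bitrate_mbps * 1_000_000 * duration_min * 60
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# A 45-minute episode re-encoded for download at ~2 Mbps:
print(f"downloaded episode: ~{file_size_mb(2, 45):,.0f} MB")   # ~675 MB
# The same 45 minutes straight off a camera recording at ~28 Mbps:
print(f"camera original:    ~{file_size_mb(28, 45):,.0f} MB")  # ~9,450 MB, i.e. ~9 GB
```

Same duration, same picture content; the only difference is how many bits per second each encoder was allowed to keep.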

At present, most cameras record with broadcast-grade H.264 encoding, and H.264 is itself divided into several profiles and levels; the same duration of footage yields a different amount of data at different levels.
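
For a sense of scale, the H.264 specification caps the video bitrate at each level. The figures below are the Baseline/Main-profile maxima from Annex A of the standard, quoted from memory as an illustration (the High profile allows 1.25x these), so verify them against the spec before relying on them.

```python
# Maximum video bitrate (Mbps) per H.264 level for the Baseline/Main profiles
# (Annex A of the H.264 standard; High profile permits 1.25x these caps).
# Quoted from memory as an illustration -- check the spec for exact values.
H264_LEVEL_MAX_MBPS = {
    "3.0": 10,    # SD-class streams
    "3.1": 14,    # 720p-class streams
    "4.0": 20,    # 1080p broadcast-style streams
    "4.1": 50,    # 1080p Blu-ray / camera footage
    "5.1": 240,   # 4K-class streams
}

for level, cap in H264_LEVEL_MAX_MBPS.items():
    # One minute of video at that level's ceiling, in megabytes:
    print(f"level {level}: up to {cap} Mbps = {cap * 60 / 8:,.0f} MB per minute")
```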

Back to your question: MPG (an MPEG container format for video) and AVI (also a container format, developed by Microsoft, whose specification is openly documented and which can hold many different codecs) will produce different amounts of data even for the same video, because the codec and settings inside each container differ.
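
One way to see this on your own files is to ask ffprobe (shipped with ffmpeg) for each container's overall bitrate. This sketch assumes ffprobe is installed and on PATH; the file names are placeholders.

```python
# Compare the overall bitrate of two encodings of the same clip.
# Assumes ffprobe (part of ffmpeg) is installed; file names are placeholders.
import subprocess

def overall_bitrate_mbps(path: str) -> float:
    """Read the container-level bit_rate field reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out) / 1_000_000

for path in ["same_clip.mpg", "same_clip.avi"]:
    print(f"{path}: {overall_bitrate_mbps(path):.1f} Mbps")
```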

So, does the difference in data volume affect the picture? Yes. The more data there is per unit of time, the more detail each frame can carry: the picture is richer in color and more clearly layered in tone. The less data there is, the blurrier the image, the less latitude it retains, and the more compression artifacts distort it. Dividing the bitrate by the frame rate, as below, shows the budget each frame actually gets.
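
A quick way to see the per-frame budget: divide the bitrate by the frame rate. The bitrates and the 25 fps frame rate below are the same assumed figures as above.

```python
# Average data budget per frame = bitrate / frame rate.
# Bitrates and the 25 fps frame rate are assumed for illustration.

def kb_per_frame(bitrate_mbps: float, fps: float) -> float:
    """Average kilobytes available to encode one frame."""
    return bitrate_mbps * 1_000_000 / fps / 8 / 1_000

for label, mbps in [("downloaded episode", 2), ("camera original", 28)]:
    print(f"{label}: ~{kb_per_frame(mbps, 25):.0f} KB per frame at 25 fps")
# downloaded episode: ~10 KB per frame -- fine detail has to be thrown away
# camera original:   ~140 KB per frame -- far more detail survives
```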

If your camera's original footage is in MPG format, then do all post-processing (editing, special effects) on that most-original picture. Only after the whole piece is finished should you compress it into a smaller format with video compression software (such as Format Factory, which another reply mentioned).
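
If you prefer a command-line tool instead, a final compression pass might look like the sketch below. It assumes ffmpeg is installed; the file names are placeholders, and the quality settings are just reasonable starting points.

```python
# Final compression pass after editing, calling ffmpeg from Python.
# Assumes ffmpeg is installed and on PATH; file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "finished_edit_master.avi",  # full-quality edited master
        "-c:v", "libx264",                 # re-encode video as H.264
        "-crf", "23",                      # quality target: lower = better, larger
        "-preset", "medium",               # encoding speed vs. compression trade-off
        "-c:a", "aac",                     # compress audio as AAC
        "-b:a", "128k",
        "delivery_copy.mp4",               # much smaller file for sharing
    ],
    check=True,
)
```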

Finally, "compressed" and "uncompressed" are relative terms, and you can read them broadly or narrowly, but one rule does not change: the higher the bitrate (that is, the more data per unit of time), the better the picture, other things being equal.