
Influencing factors of geometric deformation errors in remote sensing images

Geometric deformation errors of remote sensing images can be divided into static errors and dynamic errors. Static errors are the deformation errors that arise while the sensor is stationary relative to the ground during imaging; dynamic errors are mainly the image deformations caused by the rotation of the Earth during imaging. Static errors can be further divided into internal and external errors. Internal errors are caused by deviations of the sensor's actual performance and technical specifications from their nominal values; they vary with the structure of the sensor and are generally small, so this book does not discuss them. For a frame aerial camera, for example, they include changes of the lens focal length, displacement of the image principal point, and optical distortion of the lens. For a multispectral scanner (MSS), they include the imaging time difference between the first and last points of a scan line, the imaging time difference of the same scan line in different bands, uneven rotation speed of the scanning mirror, non-linearity and non-parallelism of scan lines, and misalignment of the photoelectric detectors. External deformation errors are those caused, under normal working conditions, by factors other than the sensor itself, such as changes in the sensor's exterior orientation (position and attitude), inhomogeneity of the transmission medium, the curvature of the Earth, topographic relief, and the Earth's rotation.

1. Deformation caused by imaging geometry of sensor

The common geometric imaging modes of sensors include central projection, panoramic projection, oblique projection, and parallel projection. Of these, a central projection taken vertically over flat terrain and a parallel projection under vertical conditions are free of geometric deformation, because such a central-projection image remains geometrically similar to the ground scene, whereas panoramic and oblique projections deform the image. The central projection of vertical photography and the parallel (orthographic) projection are therefore generally taken as reference images, and the deformation laws of panoramic and oblique projection can be derived by comparing their results with central-projection or orthographic images. For this reason, the analytical theory of aerial photographs is the analytical basis for all kinds of remote sensing images.

2. The general process of geometric correction of remote sensing digital images

The purpose of geometric correction of a remote sensing digital image is to correct the geometric deformation of the original image and generate a new image that conforms to a given map projection or graphic representation. The general steps of geometric correction are shown in Figure 5-1.

Figure 5-1 General process of remote sensing digital image correction

(1) Preparatory work

This includes collecting and analyzing image data, map data, geodetic results, spacecraft orbit parameters, and sensor attitude parameters, as well as selecting and measuring the required control points. If the image is a film image, it must first be scanned and digitized.

(2) Input of the original digital image

The remote sensing digital image is read into the computer by a dedicated program according to the prescribed format.
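As one concrete possibility, the sketch below reads a single band of a GeoTIFF into a NumPy array with the rasterio library; the file name "scene.tif" and the band number are placeholders, not from the source.

```python
import rasterio  # widely used library for geo-referenced raster formats

# Read band 1 of a GeoTIFF into a 2-D NumPy array ("scene.tif" is a placeholder).
with rasterio.open("scene.tif") as src:
    image = src.read(1)        # pixel gray values as a NumPy array
    transform = src.transform  # affine mapping from pixel to map coordinates
```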

(3) Establishing the correction transformation function

The correction transformation function establishes the mathematical relationship between image coordinates and ground (or map) coordinates, that is, the coordinate transformation between the input image and the output image. Correction methods vary with the mathematical model adopted and generally include the polynomial method, the collinearity equation method, the random field interpolation method, and so on. The coefficients of the correction transformation function can be solved from control-point data, or formed directly from satellite orbit parameters, sensor attitude parameters, or the interior and exterior orientation elements of aerial images.
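As a minimal sketch of the polynomial method in its simplest (first-order, affine) form, the code below solves for the coefficients by least squares from control points; the function name fit_affine and the sample points are illustrative, not from the source.

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares fit of X = a0 + a1*x + a2*y and Y = b0 + b1*x + b2*y.

    img_xy, map_xy: (n, 2) arrays of control-point coordinates, n >= 3.
    Returns the coefficient triples (a, b) for X and Y.
    """
    x, y = img_xy[:, 0], img_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])  # design matrix [1, x, y]
    a, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)
    return a, b

# Hypothetical control points (image x, y) -> (map X, Y):
img_pts = np.array([[10.0, 20.0], [500.0, 40.0], [480.0, 600.0], [30.0, 580.0]])
map_pts = np.array([[1000.0, 5000.0], [1980.0, 5010.0], [1950.0, 6120.0], [1010.0, 6100.0]])
a, b = fit_affine(img_pts, map_pts)
```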

(4) Determining the output image range

If the output image range is defined improperly, the corrected image may not be completely contained within it, or the output image may contain too much blank space, as shown in Figure 5-2. If the range is defined appropriately, the entire corrected image falls inside it and the blank area is kept as small as possible, as shown in Figure 5-3.

Figure 5-2 Improper boundary range of output image

How, then, do we obtain an appropriate image boundary range?

The four corner points a, b, c, d of the original image are projected into the map coordinate system by the correction transformation function, giving eight coordinate values (four coordinate pairs). The maximum and minimum values of X and Y are then found, which determine the range of the output image.
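A minimal sketch of this corner-projection step, assuming a forward correction function forward(x, y) that maps image coordinates to map coordinates (for instance, one built from the affine coefficients above):

```python
def output_range(width, height, forward):
    """Project the four image corners and return (Xmin, Xmax, Ymin, Ymax)."""
    corners = [(0, 0), (width - 1, 0), (width - 1, height - 1), (0, height - 1)]
    xs, ys = zip(*(forward(x, y) for x, y in corners))
    return min(xs), max(xs), min(ys), max(ys)
```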

(5) Transformation of the geometric positions of pixels

Pixel geometric position transformation maps the original digital image, pixel by pixel, to the corresponding positions in the output image according to the selected correction transformation function. The transformation methods are divided into direct correction and indirect correction (i.e., the forward and inverse solutions), as shown in Figure 5-4.

The two methods differ not only in their starting point but also in how gray values are assigned to the corrected image pixels: in the direct method the assignment of corrected pixel values is called gray reconstruction, while in the indirect method it is called gray resampling. In practice, the indirect method is used most often; a sketch of it follows Figure 5-4.

Figure 5-3 Suitable output image boundary range

Figure 5-4 Correction schemes of the direct and indirect methods
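A minimal sketch of the indirect scheme: each output pixel is mapped back into the original image with an inverse transform and assigned a resampled gray value there. Here inverse(X, Y) and resample(img, x, y) are placeholders for the chosen inverse correction function and for one of the resampling methods described in step (6).

```python
import numpy as np

def indirect_correction(src, out_shape, inverse, resample):
    """Fill the output image pixel by pixel via the inverse transform."""
    out = np.zeros(out_shape, dtype=src.dtype)
    rows, cols = out_shape
    for i in range(rows):
        for j in range(cols):
            x, y = inverse(j, i)  # corresponding position in the original image
            if 0 <= x < src.shape[1] - 1 and 0 <= y < src.shape[0] - 1:
                out[i, j] = resample(src, x, y)  # gray resampling
    return out
```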

(6) Gray-level resampling of pixels

Because a digital image is a discrete sampling of a continuous scene (or photograph), the gray value at a non-sampling point must be interpolated from the sampling points (known pixels); this is called resampling. In resampling, the influence (weight) of the gray values of nearby pixels (sampling points) on the point being sampled can be expressed by a resampling function.

If the position in the original image corresponding to a pixel of the output image has integer coordinates, the gray value of that original pixel is assigned directly to the output pixel. Otherwise, the gray value of the output pixel must be computed by an appropriate method from the gray levels of several nearby pixels in the original image, taking the influence of each of them into account. Gray resampling is likewise carried out pixel by pixel.

According to the sampling theorem, when the sampling interval Δx is smaller than 1/(2fmax), where fmax is the highest spatial frequency contained in the signal, the gray value at any point (one-dimensional case) can be recovered exactly as

g(x) = Σk g(kΔx)·sinc[(x − kΔx)/Δx] (5-1)

where g(kΔx) are the sampled values. The reconstruction is exact: it is the convolution of the sample values with the sinc function. This, however, is the ideal case. Because the sinc function is defined on an infinite domain and involves trigonometric evaluation, the computation is expensive, so for convenience it is often replaced by approximate convolution kernels such as the triangle function and cubic spline functions; simpler still are the nearest-neighbor pixel method and the double-pixel method.
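As an illustration of the ideal case, here is a minimal one-dimensional sketch of equation (5-1); NumPy's np.sinc computes sin(πt)/(πt), so the reconstruction is a direct transcription of the formula.

```python
import numpy as np

def sinc_reconstruct(samples, dx, x):
    """Reconstruct g(x) from equally spaced samples g(k*dx), per equation (5-1)."""
    k = np.arange(len(samples))
    # Weighted sum of the samples; np.sinc(t) = sin(pi*t)/(pi*t).
    return float(np.sum(samples * np.sinc((x - k * dx) / dx)))
```

The commonly used pixel gray resampling methods are as follows.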

1) Bilinear interpolation method. The convolution kernel of bilinear interpolation is a triangle function, expressed as

w(x) = 1 − |x|, 0 ≤ |x| ≤ 1 (5-2)

It can be shown that this kernel approximates the sinc function. As shown in Figure 5-5, if a point p(x, y) lies among the four pixels pi,j, pi,j+1, pi+1,j and pi+1,j+1, its gray value is obtained by bilinear interpolation:

g(x, y) = (1 − dx)(1 − dy)·gi,j + dx(1 − dy)·gi,j+1 + (1 − dx)dy·gi+1,j + dx·dy·gi+1,j+1 (5-3)

where dx = x − INT(x), dy = y − INT(y), and INT( ) denotes taking the integer part.

Figure 5-5 Bilinear Interpolation Method
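A minimal sketch of equation (5-3), assuming img is a 2-D NumPy array indexed as img[row, column] (row corresponds to y, column to x); the clamping of the corner indices is an added safeguard at image edges, not part of the formula.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear gray resampling at a non-integer position (x, y), per (5-3)."""
    j = min(int(x), img.shape[1] - 2)  # INT(x), clamped so column j+1 stays in range
    i = min(int(y), img.shape[0] - 2)  # INT(y), clamped so row i+1 stays in range
    dx, dy = x - j, y - i              # fractional offsets
    return ((1 - dx) * (1 - dy) * img[i, j]
            + dx * (1 - dy) * img[i, j + 1]
            + (1 - dx) * dy * img[i + 1, j]
            + dx * dy * img[i + 1, j + 1])
```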

2) Bicubic convolution method. Using the cubic spline function

w(x) = 1 − 2|x|² + |x|³, 0 ≤ |x| < 1
w(x) = 4 − 8|x| + 5|x|² − |x|³, 1 ≤ |x| < 2
w(x) = 0, |x| ≥ 2 (5-4)

as the convolution kernel gives a closer approximation to the sinc function. In this case 16 original pixels take part in the calculation (Figure 5-6), and

Figure 5-6 Bicubic Convolution Method

g(x, y) = Σi Σj ωij·gij, i, j = 1, 2, 3, 4 (5-5)

where

ωij = ω(xj)·ω(yi)

ω(x1) = −dx + 2dx² − dx³

ω(x2) = 1 − 2dx² + dx³

ω(x3) = dx + dx² − dx³

ω(x4) = −dx² + dx³

ω(y1) = −dy + 2dy² − dy³

ω(y2) = 1 − 2dy² + dy³

ω(y3) = dy + dy² − dy³

ω(y4) = −dy² + dy³

dx = x − INT(x), dy = y − INT(y)

gij = g(xj, yi)

Note that the four x weights sum to 1 for any dx, as do the four y weights.
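A minimal sketch of equations (5-4) and (5-5), again assuming img[row, column] indexing; the point (x, y) is assumed to lie at least one pixel inside the image border so that the 4×4 window exists.

```python
import numpy as np

def cubic_weights(d):
    """Four convolution weights for fractional offset d, per (5-4)/(5-5)."""
    return np.array([
        -d + 2 * d**2 - d**3,   # neighbor at distance 1 + d
        1 - 2 * d**2 + d**3,    # neighbor at distance d
        d + d**2 - d**3,        # neighbor at distance 1 - d
        -d**2 + d**3,           # neighbor at distance 2 - d
    ])

def bicubic(img, x, y):
    """Bicubic convolution resampling over a 4x4 pixel neighborhood."""
    j, i = int(x), int(y)
    wx = cubic_weights(x - j)   # weights along x (columns)
    wy = cubic_weights(y - i)   # weights along y (rows)
    window = img[i - 1:i + 3, j - 1:j + 3].astype(float)  # the 16 pixels g_ij
    return float(wy @ window @ wx)  # sum of w(y_i) * g_ij * w(x_j)
```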

3) Nearest-neighbor pixel method. The gray value of the pixel N(xN, yN) closest to the point P(x, y) is taken directly as the resampled value:

g(P) = g(N) (5-6)

where

xN = INT(x + 0.5)

yN = INT(y + 0.5)
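The corresponding sketch, a direct transcription of (5-6):

```python
def nearest(img, x, y):
    """Nearest-neighbor resampling: gray value of the closest pixel, per (5-6)."""
    xN = int(x + 0.5)   # round to the nearest column
    yN = int(y + 0.5)   # round to the nearest row
    return img[yN, xN]
```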

Of these three methods, the nearest-neighbor method is the simplest and fastest to compute but the least accurate. The bicubic convolution method gives a smaller sampling error than bilinear interpolation but requires much more computation and time, so bilinear interpolation is the most commonly used.

Figure 5-7 Double-pixel resampling method

4) Double-pixel resampling method. Spectrum analysis shows that the bilinear and bicubic interpolation methods above act as low-pass filters: they remove the high-frequency components of the signal and therefore smooth (blur) the image. With the steady growth of computer memory and external storage, it has been suggested to first enlarge the original digital image by a factor of two in both the X and Y directions and then resample the enlarged image, as sketched below. Figure 5-7 shows the example of a rotated straight line of gray level 100: Fig. 5-7(b) is the result of bilinear interpolation of the original image 5-7(a), while Fig. 5-7(d) is the result of bilinear interpolation of the enlarged image (c). As the figure shows, double-pixel resampling preserves the "sharpness" of the image better.
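A hedged sketch of this idea, reusing the bilinear function above: the image is first doubled in size by bilinear interpolation, after which a point (x, y) of the original is resampled at (2x, 2y) in the enlarged grid.

```python
import numpy as np

def double_pixel_resample(img, x, y):
    """Resample at (x, y) after enlarging the image 2x in both directions."""
    h, w = img.shape
    # Enlarge by a factor of two with bilinear interpolation.
    big = np.empty((2 * h - 1, 2 * w - 1), dtype=float)
    for i in range(big.shape[0]):
        for j in range(big.shape[1]):
            big[i, j] = bilinear(img, j / 2.0, i / 2.0)
    # Original coordinates (x, y) map to (2x, 2y) in the enlarged image.
    return bilinear(big, 2 * x, 2 * y)
```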

Resampling is needed not only in the geometric correction of remote sensing digital images but is also widely used throughout digital remote sensing image processing.

(7) Output of the corrected digital image

The output image data obtained by the pixel-by-pixel geometric position transformation and gray resampling are written to the corrected output image file in the required format (or in a commonly used image file format).