Data processing
4.3.1.1 Satellite image data
The data source for this project is 2005-2007 SPOT 5 2.5 m resolution imagery provided by the Information Center of the Ministry of Land and Resources. The SPOT 5 data covering the work area comprise 79 scenes (Figure 4-2), all with an overlap of more than 4% between adjacent scenes. The images are rich in information, with no obvious noise, spots or bad lines; cloud and snow cover is below 10% and does not obscure key areas such as the urban-rural fringe. Most images of the eastern plain are affected by fog or haze to varying degrees, but land information remains distinguishable overall. The viewing angle at acquisition is generally less than 15°, relaxed to less than 25° in plain areas and less than 20° in mountainous areas, which basically meets the image acquisition technical specifications.
Figure 4-2 Schematic Diagram of the Distribution of SPOT 5 Image Data in Henan Province
Figure 4-3 Image Receiving Time Distribution
Because the SPOT 5 images were acquired over a long time span with large differences in acquisition phase, the 79 scenes are concentrated mostly in spring and autumn (Figure 4-3). Some images nevertheless suffer from various problems because they were not acquired in the optimal season for Henan, as shown in Table 4-1.
Table 4-1 Image Acquisition Information and Data Quality Review Table
4.3.1.2 DEM data
There are 464 sheets of 1:50,000 digital elevation model (DEM) data covering Henan Province.
First, the completeness and current condition of the DEM data were checked comprehensively. Second, adjacent DEM sheets were checked for overlap areas, for consistent elevations within those overlaps, and for cracks after edge matching. Third, the project team checked each DEM sheet for complete metadata, geodetic basis, accuracy and grid size.
The original 1:50,000 DEM data are in the national grid standard format, with the 1980 Xi'an coordinate system, the 1985 national elevation datum and 6° zoning as the mathematical basis. Given this data format and the project requirements, the project team processed the 464 DEM sheets covering the work area by zone (zones 19 and 20), performing mosaicking, zone conversion and projection transformation. The result is a 1:50,000 DEM covering the whole of Henan Province that meets the orthorectification requirements of the project area, with a central meridian of 114°, the Beijing 1954 coordinate system and the 1985 national elevation datum (Figure 4-4).
Figure 4-4 Henan Province 1:50,000 DEM
A comprehensive inspection of the mosaicked DEM showed that the DEM data used in this project cover the whole of Henan Province with no gaps or black edges, basically meeting the orthorectification needs of the project's image data.
4.3.2 Data registration
Image registration techniques currently fall broadly into two categories: grey-level-based methods and feature-based methods. Most grey-level-based methods rely on cross-correlation or the Fourier transform. The automatic registration module (AutoSync) in ERDAS 9.1 was used for image registration. After automatic detection, considerable work is still needed to locate each point on the reference image. Even if matching cannot be made fully automatic, the workload can be greatly reduced if the area to be searched and fine-tuned can be roughly predicted. This can be done by using a polynomial to model the rough correspondence between the two images.
According to the requirements of the ERDAS system, at least three points are needed to establish a rough correspondence between two satellite images. Once a forward polynomial model has been built from at least three points, automatically detected control points can be mapped quickly onto the reference image, and only small adjustments are needed to mark their exact positions there. In Figure 4-5, the left side shows an automatically detected point on the original image and the right side its coarse position on the reference image, which still needs adjustment.
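As a sketch of this rough-mapping idea (not the actual AutoSync internals), a first-order polynomial (affine) model can be fitted by least squares from three or more manually marked tie points and then used to project automatically detected points onto the reference image. All coordinates below are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares first-order polynomial (affine) fit from >= 3 tie points.
    src, dst: (N, 2) arrays of (x, y) pixel coordinates."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Design matrix [x, y, 1] for each source point
    A = np.column_stack([src, np.ones(len(src))])
    # Solve A @ coef = dst for both output coordinates at once
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef  # shape (3, 2)

def apply_affine(coef, pts):
    """Map points through the fitted model to the reference image."""
    pts = np.asarray(pts, float)
    A = np.column_stack([pts, np.ones(len(pts))])
    return A @ coef

# Three manually marked tie points (hypothetical coordinates)
src = [(0, 0), (100, 0), (0, 100)]
dst = [(10, 20), (110, 22), (8, 121)]
coef = fit_affine(src, dst)
# Map an automatically detected point to its rough reference position
rough = apply_affine(coef, [(50, 50)])
```

With exactly three non-collinear tie points the fit interpolates them exactly; additional points make the rough model more robust.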
Figure 4-5 Registration
Although computers save a great deal of labour, technical limitations mean they cannot solve every problem in correction and registration, and so cannot yet fully replace surveying and mapping personnel.
In production, the SPOT 5 10 m multispectral data were resampled to a 2.5 m interval by bilinear interpolation. With the scene as the registration unit and the SPOT 5 2.5 m panchromatic data as the reference, the multispectral data were registered to the panchromatic data. Identical points were sampled at random on the registered panchromatic and multispectral data; the registration error is less than 0.5 pixel in plain and hilly areas, relaxed to 1 pixel in mountainous areas. The registration control-point file is named "scene number + multi and pan", e.g. "287267MULTI"; the registration file is named "scene number + match", e.g. "287267Match".
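The 10 m to 2.5 m bilinear resampling step can be sketched as follows; this is a minimal NumPy implementation for a single band, whereas production work used the resampling built into the registration software.

```python
import numpy as np

def bilinear_resample(band, factor):
    """Upsample a single band by `factor` using bilinear interpolation
    (e.g. factor=4 takes a 10 m multispectral band to the 2.5 m pan grid)."""
    h, w = band.shape
    # Target sample positions expressed in source-pixel coordinates
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = band[np.ix_(y0, x0)] * (1 - wx) + band[np.ix_(y0, x1)] * wx
    bot = band[np.ix_(y1, x0)] * (1 - wx) + band[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

ms = np.array([[0., 10.], [20., 30.]])  # toy 2x2 multispectral patch
up = bilinear_resample(ms, 4)           # 2x2 -> 8x8
```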
The automatic registration module (AutoSync) in ERDAS 9.1 was used for image registration. First, four common control points are selected manually at the four corners of a single scene; the software then generates automatic registration control points, points with large errors are removed, and automatic registration is run (Figure 4-6). After registration, the accuracy of the whole image is checked from top to bottom and left to right using the "curtain" (swipe) tool provided by the software (Figure 4-7).
In summary, registration consists of the following steps: ① mark at least three rough matching control points; ② set the detection parameters; ③ run automatic detection; ④ manually adjust and save the control points; ⑤ register. Step ④ still requires manual work, with two main problems: first, whether feature points judged by eye are genuinely accurate; second, the control points on the reference image are only rough marks and cannot always be adjusted manually to the exact corresponding position. For now, therefore, registration only partially reduces the manual workload and cannot be completed entirely by computer.
Figure 4-6 Image Registration
Figure 4-7 "Curtain-pulling" Inspection of Image Registration Accuracy
4.3.3 Data fusion
4.3.3.1 Data preprocessing before fusion
When the satellite image data for a complete project area have been obtained, the spectral and texture characteristics differ considerably from scene to scene, owing to the long acquisition time span, large phase differences, interference from cloud, fog or haze, and uneven ground illumination. To make image texture clear, bring out detail and improve visual interpretation accuracy, the data must be preprocessed before fusion.
The purpose of SPOT 5 panchromatic band data processing is to enhance local gray contrast, highlight texture, strengthen texture energy and improve texture details through filtering.
(1) Linear transformation. Image data processed by linear stretching not only gain local grey-level contrast but also preserve the relative relationships between the original grey levels.
Figure 4-8 Linear Transformation
Let A1 and A2 be the clipping thresholds of the input image and B1 and B2 the lowest and highest brightness values of the transformed image (Figure 4-8). The brightness values of the input image are stretched to the range B1-B2: input values from 0 to A1 map to 0, input values from A2 upwards map to 255, and values between A1 and A2 are stretched linearly. This preserves the relative relationship between A1 and A2, expands the dynamic range of the histogram, and enhances subtle structural information in the image.
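A minimal sketch of this piecewise-linear stretch, with A1 and A2 as the input clipping thresholds and the output fixed to 0-255:

```python
import numpy as np

def linear_stretch(img, a1, a2, b1=0, b2=255):
    """Piecewise-linear stretch: input values <= a1 map to b1, >= a2 map
    to b2, and values in between are stretched linearly (cf. Figure 4-8)."""
    img = np.asarray(img, float)
    out = (img - a1) / (a2 - a1) * (b2 - b1) + b1
    return np.clip(out, b1, b2)

x = np.array([10, 60, 110, 160, 210])
y = linear_stretch(x, a1=60, a2=160)  # [0, 0, 127.5, 255, 255]
```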
(2) Texture enhancement. Texture-energy enhancement is currently achieved mainly by high-pass filtering, and filter selection is the key to spatial enhancement. Different images, landforms and features call for different filter kernels. In general, where geographical units are large and macroscopic, a larger kernel is used so that their macroscopic character is reflected; a kernel that is too small would break up the overall landscape. Where geographical units are finely distributed and the landform is intricate, a smaller kernel is used; otherwise the fine texture structure cannot be expressed. Over-enhancement must be avoided: if texture energy is enhanced too strongly, image detail saturates and texture is lost, defeating the purpose. The edge-enhancement filter operator used this time gave good results, as shown in Figure 4-9.
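Since the project's actual kernel appears only in Figure 4-9, the sketch below uses the classic 3×3 edge-enhancement kernel as a stand-in to illustrate the filtering step:

```python
import numpy as np

# Stand-in for the project's kernel (which appears only in Figure 4-9):
# the classic 3x3 edge-enhancement (sharpening) kernel.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], float)

def filter2d(img, kernel):
    """Direct 2-D correlation with edge replication (equivalent to
    convolution here because the kernel is symmetric); no SciPy needed."""
    k = kernel.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, float)
    for dy in range(kernel.shape[0]):
        for dx in range(kernel.shape[1]):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

flat = np.full((5, 5), 100.0)
sharpened = filter2d(flat, KERNEL)  # kernel sums to 1, so flat areas are unchanged
```

Because the kernel coefficients sum to 1, homogeneous areas keep their brightness while grey-level differences at edges are amplified.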
Figure 4-9 Filter Enhancement
(3) Multispectral data processing. In the fused image, the contribution of the multispectral data is its spectral information. Before fusion, colour enhancement is the main processing: the colour contrast between land types is widened by adjusting brightness, hue and saturation. Local texture matters less here, and some texture information may even be sacrificed to preserve spectral colour.
4.3.3.2 Image fusion
There are many methods for fusing multi-source remote sensing data. By processing level they fall into three categories: pixel-level, feature-level and decision-level fusion. Pixel-level methods include the IHS transform, principal component transform, false-colour composition, wavelet transform and weighted fusion. Feature-level methods include Bayesian methods, decision methods, neural networks, ratio operations and cluster analysis. Decision-level methods include knowledge-based fusion, neural networks and filtering fusion. By algorithm, fusion methods fall into three groups: first, algebraic methods such as weighted fusion, product fusion and the Brovey transform; second, methods based on spatial transformations, such as IHS, PCA and Lab transform fusion; third, methods based on pyramid decomposition and reconstruction, such as Laplacian pyramid and wavelet transform fusion.
The data used in this project are SPOT 5 data, whose multispectral bands lack blue; natural colour therefore had to be simulated. In land use surveys, multispectral information highlights the essential characteristics of land use types, improves image interpretability, and supports comprehensive discrimination from shape, texture and spectral characteristics. The multispectral sensors of most remote sensing satellites cover the whole visible spectrum (blue, green and red bands), but the SPOT series covers only the green to red part of the visible spectrum and lacks the blue band. When satellite imagery is used for land use surveys, multispectral information must be presented in natural colours visible to the human eye, because false-colour and infrared-colour composites are difficult for non-specialists to use in interpretation and field investigation. The usual natural-colour simulation methods for SPOT imagery adjust tone through band combinations and the visual judgement of the operator. This relies on the operator's prior experience, and the result is badly distorted when experience is lacking; moreover, the standard is hard to quantify, and perceptual differences across operators, sessions and scenes make consistent results difficult. After analysing the characteristics of the SPOT 5 data across the province, this project mainly adopted product-transform fusion and Andorre fusion.
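The exact product-transform formula used in the project is not given here; one common form of product fusion, the per-band geometric mean with the panchromatic band, can be sketched as:

```python
import numpy as np

def product_fusion(ms, pan):
    """One common form of product-transform fusion: the geometric mean of
    each multispectral band with the panchromatic band.
    ms: (bands, H, W) multispectral stack already resampled to the pan grid;
    pan: (H, W) panchromatic band."""
    ms = np.asarray(ms, float)
    pan = np.asarray(pan, float)
    # sqrt keeps the fused values in the same radiometric range as the inputs
    return np.sqrt(ms * pan[None, :, :])

ms = np.full((3, 2, 2), 64.0)    # toy 3-band patch
pan = np.full((2, 2), 144.0)     # toy pan patch
fused = product_fusion(ms, pan)  # sqrt(64 * 144) = 96 everywhere
```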
Andorre fusion follows the method provided by Spot Image; the specific steps are as follows:
Step 1: normalise the panchromatic image. This is equivalent to Wallis filtering and enhances both local (texture) and global contrast.
Step 2: fuse according to the following formulas (P is the normalised panchromatic image, B1 the green band, B2 the red band, and B3 the near-infrared band).
Module calculation formula in ERDAS:
Formula 1 (blue channel):
Formula 2 (green channel):
Formula 3 (red channel):
Step 3: Complete the pseudo-natural color conversion according to the following formula:
Module calculation formula in ERDAS:
Formula 1 (red channel):
Formula 2 (green channel):
Formula 3 (blue channel):
Step 4: stretch the histogram of each channel generated in Step 3. Linear histogram stretching is usually sufficient for this colour adjustment; the thresholds must be chosen from the visual appearance of the image, avoiding pixel saturation caused by balancing the other colours. Alternatively, adjust the hue, brightness and contrast of the image in Photoshop until it meets the requirements.
The algorithm is realised as a model in ERDAS (Figure 4-10).
4.3.3.3 Post-processing of the fused image
Post-processing mainly adopts the following five methods:
(1) Histogram adjustment. For fused images with low contrast and dark tones, the input and output ranges are adjusted and the contrast coefficient changed for linear stretching, so that the histogram of each colour approaches a normal distribution. The output range is generally set to 0-255; when choosing the input range, the cut-off at the low-brightness end should be applied carefully, as it can remove some noise.
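A sketch of such a histogram adjustment, using percentile-based input thresholds (the 2%/98% cut-offs here are illustrative, not from the source) so that the low-brightness tail is truncated automatically:

```python
import numpy as np

def percentile_stretch(band, low=2, high=98):
    """Linear stretch to 0-255 with the input range taken from histogram
    percentiles; truncating the low-brightness tail suppresses some noise."""
    a1, a2 = np.percentile(band, [low, high])
    out = (band.astype(float) - a1) / (a2 - a1) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

band = np.arange(101)            # synthetic low-contrast band
out = percentile_stretch(band)   # spans the full 0-255 output range
```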
(2) USM sharpening. Edge features of ground objects are enhanced by adjusting the threshold, radius and amount. The threshold and radius should not be set too large, and the amount can be chosen to suit the image characteristics of each region; the software's preview function shows whether the parameters are appropriate. The urban-rural fringe, residential areas, roads and cultivated-land boundaries are the features to highlight, and they must be clearly identifiable to further improve the overall effect.
(3) Color balance. The fused image will have a certain degree of color cast, which needs to be corrected by adjusting the color balance.
(4) Hue-saturation adjustment. The fused SPOT 5 image contains a strong magenta cast that does not match true colour; by adjusting hue, saturation and lightness it can be shifted towards khaki, closer to the real colour.
(5) Contrast enhancement. By adjusting brightness and contrast, the contrast between ground objects can be enhanced, which makes it easier to distinguish different land types.
Through the post-processing of the fused image, the visual effect of the image is further improved, so that the color of the whole scene image is true and uniform, the brightness is moderate and clear, and the thematic information, especially the texture information, is enhanced.
Figure 4-10 Fusion processing algorithm
4.3.4 Selection and processing of the orthorectification model
4.3.4.1 Basic models of orthorectification
Orthorectification of pushbroom satellite imagery generally uses one of two model types: rigorous correction models and transformation-relation correction models. A rigorous model uses the satellite orbit parameters, camera characteristics and imaging geometry to recover the position and attitude of the sensor at the moment of imaging, establishes the collinearity relationship between image point and ground point, and solves the collinearity equations to correct image or ground points. A transformation-relation model is the traditional geometric correction approach: it ignores the imaging geometry and instead computes transformation coefficients from ground control points and their image counterparts, fitting the deformed original image to ground coordinates.
Rigorous models include the polynomial-based collinearity equation method, correction based on satellite orbit parameters, and bundle block adjustment. Transformation-relation models include polynomial correction, rational function correction, and rational-function block adjustment. Block adjustment is a correction method that forms a block from multiple scenes and therefore needs fewer control points.
(1) Polynomial-based collinearity equation method. The geometric distortion of the original image is corrected by transforming pixel coordinates so that the image conforms to a map projection and graphic representation, and the pixel brightness values are resampled. At the moment of imaging, the collinearity equations between sensor, image and ground express the one-to-one correspondence between ground points and image points.
Because pushbroom imaging is the mainstream imaging mode of current remote sensing satellites, a whole scene is a multi-centre projection in which each scan line is a central projection, expressed by the collinearity equations.
Each scan line of a pushbroom image has its own exterior orientation elements, and the y value within a line is always 0. In orthorectification, the exterior orientation elements of each line must be solved; the image coordinates corresponding to each ground point are then obtained from the collinearity equations, and the image is corrected with the aid of the DEM.
Generally, a remote sensing satellite's attitude changes smoothly once it has been in orbit for some time, so the change of the six exterior orientation elements is a function of time. Because the y coordinate of a pushbroom image corresponds directly to time (each line takes the same scan time), the exterior orientation elements of line i can be expressed as functions of the initial exterior orientation elements and the line number y, typically as quadratic polynomial functions of y.
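The quadratic form described above can be sketched as follows (the coefficient names are generic, not taken from the source):

```latex
\begin{aligned}
\varphi_i &= \varphi_0 + a_1 y + a_2 y^2, &
\omega_i  &= \omega_0  + b_1 y + b_2 y^2, &
\kappa_i  &= \kappa_0  + c_1 y + c_2 y^2,\\
X_{Si}    &= X_{S0}    + d_1 y + d_2 y^2, &
Y_{Si}    &= Y_{S0}    + e_1 y + e_2 y^2, &
Z_{Si}    &= Z_{S0}    + f_1 y + f_2 y^2.
\end{aligned}
```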
The initial exterior orientation elements required by this method can be obtained from ephemeris files, such as the SPOT image ephemeris files in DIM and CAP formats.
(2) Polynomial correction method. Polynomial correction is a traditional transformation-relation method. It computes the transformation between two-dimensional ground control points and image points: let the coordinates of a pixel in the original image be (x, y) and the corresponding ground point coordinates be (X, Y); the relationship is expressed as x = Fx(X, Y) and y = Fy(X, Y), where Fx and Fy are polynomial functions.
where (a0, a1, a2, a3, ..., an) and (b0, b1, b2, b3, ..., bn) are the transformation coefficients.
The polynomial order n is generally 1 to 3. The minimum number of control points N for order n is N = (n + 1)(n + 2)/2: order 1 needs 3 control points, order 2 needs 6, and order 3 needs 10.
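The control-point count rule can be checked directly:

```python
def min_control_points(n):
    """Minimum ground control points for an order-n 2-D polynomial:
    N = (n + 1)(n + 2) / 2 coefficient pairs must be determined."""
    return (n + 1) * (n + 2) // 2

counts = {n: min_control_points(n) for n in (1, 2, 3)}
# order 1 -> 3 points, order 2 -> 6 points, order 3 -> 10 points
```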
Polynomial correction considers only the two-dimensional plane, so it cannot correct the relief displacement caused by terrain in areas with large height differences, and its accuracy there is poor. Considering the influence of the incidence angle as well, polynomial correction is unsuitable for areas of high relief.
(3) Rational function correction method. Rational function correction is a transformation-relation geometric model that relates a ground point P(La, Lb, Hc) to its image point (l, s) through rational function coefficients. For a ground point P, computation of its image coordinates (l, s) begins with normalisation of the latitude, longitude and height.
The normalised image coordinates (x, y) are then computed from the rational functions,
and the final image coordinates are obtained by de-normalisation.
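The elided formulas follow the standard RPC convention; the offset and scale names below come from the usual RPC metadata, not from the source:

```latex
\begin{aligned}
P &= \frac{La - \mathrm{LAT\_OFF}}{\mathrm{LAT\_SCALE}}, \quad
L = \frac{Lb - \mathrm{LONG\_OFF}}{\mathrm{LONG\_SCALE}}, \quad
H = \frac{Hc - \mathrm{HEIGHT\_OFF}}{\mathrm{HEIGHT\_SCALE}},\\[4pt]
x &= \frac{\mathrm{Num}_L(P, L, H)}{\mathrm{Den}_L(P, L, H)}, \qquad
y = \frac{\mathrm{Num}_S(P, L, H)}{\mathrm{Den}_S(P, L, H)},\\[4pt]
l &= x \cdot \mathrm{LINE\_SCALE} + \mathrm{LINE\_OFF}, \qquad
s = y \cdot \mathrm{SAMP\_SCALE} + \mathrm{SAMP\_OFF},
\end{aligned}
```

where Num and Den are cubic polynomials in the normalised ground coordinates (P, L, H).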
Rational function correction performs the object-to-image transformation with high precision; compared with polynomial correction it takes ground elevation into account, and compared with the collinearity equation model it simplifies the complex actual sensor model and is easy to implement.
(4) Block adjustment correction method. In block adjustment, the three-dimensional space model is first reduced to image space by a similarity transformation, then projected with parallel light onto the horizontal plane through the centre of the original image, and finally transformed back to the original oblique image. Error equations are formed through an affine transformation, covering corrections to the parameters of each scene and to the ground point coordinates; the normal equations are then formed and the adjustment computed. Model-based block adjustment compensates the systematic errors of the rational function model through the constraints between images. Control points should be laid out sensibly, a certain number of tie points is needed between scenes, and relatively few control points are required.
4.3.4.2 Orthorectification
The LPS orthophoto module of the professional remote sensing software ERDAS was used to orthorectify the imagery. The correction workflow is shown in Figure 4-11.
Figure 4-11 Orthorectification workflow
To stay compatible with the earlier county-level land use database, the plane coordinate system remains the Beijing 1954 coordinate system and the height system the 1985 national elevation datum. The projection is Gauss-Krüger with 3° zoning.
The project involves 79 scenes of contiguous, homologous image data, so whole-area correction was adopted with the work area as the correction unit. The LPS module in ERDAS, which supports block adjustment, was used to adjust the block, and a block file was built according to the image distribution so as to generate accurately stitched, seamless orthoimages quickly (Figure 4-12). Because the work area spans three Gauss-Krüger zones (37, 38 and 39), and to ease province-wide data stitching, the whole project uses zone 38 with a central meridian of 114°.
The correction uses the SPOT 5 physical model, with 25 control points distributed evenly across the whole scene and more than two control points in the overlap areas of adjacent scenes.
The distribution of control points in the work area is shown in Figure 4-13.
Orthorectification is based on the measured control points and the 1:50,000 DEM, with the work area as the rectification unit and a sampling interval of 2.5 m.
Control points and tie points that exceed the tolerance should be checked and removed. A point found to be out of tolerance should be recomputed after being set as a check point; if the computation passes, it is resolved by adjustment. If accuracy still exceeds the limit, the cause should be found, points near the large-error point changed or added, and the work redone as necessary until the requirements are met. Control point acquisition is shown in Figure 4-14.
For each whole scene, the SPOT 5 orbital pushbroom sensor model was selected in LPS, with the Gauss-Krüger projection and the Krasovsky ellipsoid, and DEM data were used for orthorectification. The achieved accuracy meets the requirements for SPOT 5 2.5 m digital orthophoto maps; the corrected control point errors are shown in Table 4-2.
Figure 4-12 Schematic diagram of whole-area correction control point selection
Figure 4-13 Block adjustment correction project diagram
Figure 4-14 Control Point Acquisition
Table 4-2 Error of Orthophoto Correction Control Points
Image mosaicking
The edge-matching accuracy of orthoimages of adjacent scenes is checked with the projection zone as the unit; once it passes, the orthoimages are mosaicked by projection zone.
Because the project used block adjustment in the LPS orthophoto module of ERDAS, more than two common control points were collected for each pair of adjacent images, which correspondingly improves mosaic accuracy.
In the overlap areas of adjacent scenes in the project area, 30 pairs of evenly distributed check points were selected at random in plain, hilly and mountainous areas respectively to check edge-matching accuracy. The point-to-point error of each check-point pair was computed from the point coordinates; see Table 4-3.
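The point-to-point error computation can be sketched as follows (check-point coordinates below are hypothetical):

```python
import numpy as np

def mosaic_seam_errors(pts_a, pts_b):
    """Point-to-point discrepancy between check-point pairs measured on two
    adjacent orthoimages, plus the RMSE over all pairs.
    pts_a, pts_b: (N, 2) arrays of ground coordinates of the same points."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    d = np.hypot(*(pts_a - pts_b).T)   # per-pair planar distance
    rmse = np.sqrt(np.mean(d ** 2))
    return d, rmse

a = [(100.0, 200.0), (300.0, 400.0)]   # coordinates on scene A
b = [(100.3, 200.4), (300.0, 400.0)]   # same points on scene B
dists, rmse = mosaic_seam_errors(a, b)
```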
Table 4-3 Image Mosaic Error
Image mosaicking takes the work area as the unit. Obvious boundaries such as linear features or parcel boundaries are chosen as seam lines between scenes so as to eliminate visible seams as far as possible, and areas of relatively poor quality, such as those with cloud or fog, are avoided. The mosaic therefore shows no cracks, blurring or ghosting at the seams, the colour transitions naturally, and the texture of a given parcel stays consistent across images of different dates, which aids category interpretation and boundary delineation. The mosaicked image is shown in Figure 4-15.