
22. Space forward intersection of stereo image pairs

Determining the ground coordinates of a point from the interior and exterior orientation elements and the image-point coordinates of the two photos of a stereo pair is called space forward intersection.

The calculation steps of space forward intersection are: (1) compute the image-space auxiliary coordinates from the known exterior orientation elements and the measured image-point coordinates; (2) compute the photographic baseline components Bx, By and Bz from the linear elements of exterior orientation; (3) compute the projection coefficients N1 and N2; (4) compute the photogrammetric coordinates of the ground point with the forward intersection formulas. Because N1 and N2 are determined independently, YA is taken as the mean of the values given by the two rays, which suppresses the influence of the residual y-parallax left by relative orientation.
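A minimal numeric sketch of these four steps, assuming one common photogrammetric rotation-matrix convention and the collinearity model (all numbers in the usage below are hypothetical):

```python
import numpy as np

def rotation(phi, omega, kappa):
    """Rotation matrix R(phi) R(omega) R(kappa), one common photogrammetric order."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Ro = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Rk = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rp @ Ro @ Rk

def forward_intersection(f, eo1, eo2, pt1, pt2):
    """Space forward intersection of one point from a stereo pair.
    eo = (Xs, Ys, Zs, phi, omega, kappa); pt = (x, y) image coordinates."""
    S1, S2 = np.array(eo1[:3]), np.array(eo2[:3])
    # step 1: image-space auxiliary coordinates of both image points
    u1 = rotation(*eo1[3:]) @ np.array([pt1[0], pt1[1], -f])
    u2 = rotation(*eo2[3:]) @ np.array([pt2[0], pt2[1], -f])
    # step 2: baseline components from the linear exterior orientation elements
    Bx, By, Bz = S2 - S1
    # step 3: projection coefficients N1 and N2
    den = u1[0] * u2[2] - u2[0] * u1[2]
    N1 = (Bx * u2[2] - Bz * u2[0]) / den
    N2 = (Bx * u1[2] - Bz * u1[0]) / den
    # step 4: ground coordinates; Y is taken as the mean from both rays
    X = S1[0] + N1 * u1[0]
    Y = S1[1] + (N1 * u1[1] + By + N2 * u2[1]) / 2.0
    Z = S1[2] + N1 * u1[2]
    return np.array([X, Y, Z])
```

With noise-free synthetic data the intersected point reproduces the ground point exactly; with real measurements the two Y values differ, which is why the mean is taken.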

23. Space resection-forward intersection method

(1) Field photo-control survey.

(2) Measure the image-point coordinates with a stereocomparator.

(3) Compute the exterior orientation elements of each photo by space resection.

(4) Compute the ground coordinates of the unknown points by space forward intersection.

24. Relative orientation by the analytical method

Setting aside the absolute position and attitude of the photos for the moment, only the relative position and attitude between the two photos are recovered. The stereo model built this way is called a relative stereo model; its scale and orientation are arbitrary. On this basis the two photos are then scaled, translated and rotated as a whole into their absolute position. This procedure is called relative orientation followed by absolute orientation.

Solving the relative orientation elements by analytical computation is called analytical relative orientation. No control points are needed, because the absolute position of the photos is not involved.

The intersection of homologous ray pairs is the theoretical basis of relative orientation.

Continuous-pair relative orientation takes the left image as reference and determines the relative orientation elements of the right image with respect to it:

Left: Xs1 = 0, Ys1 = 0, Zs1 = 0; φ1 = ω1 = κ1 = 0.

Right: Xs2 = Bx, Ys2 = by, Zs2 = bz; φ2, ω2, κ2.

The five relative orientation elements are by, bz, φ2, ω2 and κ2.

The relative orientation of a single (independent) image pair takes the photographic baseline as the X axis of the image-space auxiliary coordinate system, the left projection centre S1 as the origin, and the left principal epipolar plane (formed by the principal optical axis of the left photo and the baseline B) as the XZ plane, forming a right-handed rectangular coordinate system. The relative orientation elements of the left and right photos are then:

Left: Xs1 = 0, Ys1 = 0, Zs1 = 0; φ1, ω1 = 0, κ1.

Right: Xs2 = Bx = B, Ys2 = By = 0, Zs2 = Bz = 0; φ2, ω2, κ2.

The five relative orientation elements are φ1, κ1, φ2, ω2 and κ2.

Intersection of a pair of same-name rays means that the rays S1a1 and S2a2 and the photographic baseline B lie in one plane, i.e. that the three vectors S1a1, S2a2 and B are coplanar. By vector algebra, three vectors are coplanar when their mixed (scalar triple) product is zero:

B · (S1a1 × S2a2) = 0, i.e.

| Bx  By  Bz |
| X1  Y1  Z1 | = 0,
| X2  Y2  Z2 |

where (X1, Y1, Z1) and (X2, Y2, Z2) are the image-space auxiliary coordinates of the image points a1 and a2.
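The coplanarity condition can be checked numerically; all coordinates below are hypothetical:

```python
import numpy as np

# Hypothetical numbers: ground point A seen from two camera stations S1, S2.
S1 = np.array([0.0, 0.0, 1000.0])
S2 = np.array([600.0, 15.0, 995.0])
A  = np.array([420.0, -250.0, 80.0])

B  = S2 - S1          # photographic baseline
u1 = A - S1           # direction of ray S1 -> A (any scale)
u2 = A - S2           # direction of ray S2 -> A

# Coplanarity: the mixed (scalar triple) product of B, u1, u2 vanishes,
# here written as the determinant of the 3x3 matrix of the three vectors
F = np.linalg.det(np.vstack([B, u1, u2]))
print(F)              # ~0 for corresponding (same-name) rays

# A non-corresponding ray violates the condition
u2_bad = (A + np.array([0.0, 30.0, 0.0])) - S2
print(np.linalg.det(np.vstack([B, u1, u2_bad])))   # clearly nonzero
```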

Calculation process of relative orientation elements of continuous image pairs

① Measure the image-point coordinates (x1, y1) and (x2, y2) of the six selected orientation points on a stereocomparator.

② Determine initial values: the left photo is assumed horizontal, so its rotation matrix R1 is the identity matrix; the initial values of the right photo's angular elements φ, ω, κ and of μ, ν are zero; Bx is taken as the x-parallax (x1 - x2) of orientation point 1.

③ From the current values, compute the rotation matrix R2 of the right photo.

④ Compute the image-space auxiliary coordinates from the measured image-point coordinates.

⑤ Compute by and bz from the current values, and compute the projection coefficients N1 and N2 of each point from the image-space auxiliary coordinates.

⑥ Using the relative orientation formulas for continuous pairs, compute the constant term and the coefficients of the error equation of each orientation point, forming the error equations.

⑦ Form the normal-equation coefficient matrix and constant terms, and solve the normal equations for the corrections to the unknowns.

⑧ Compute the new values of the unknowns: the previous values plus the corrections.

⑨ Check whether any correction exceeds the tolerance; if so, repeat steps ③ to ⑧ until all corrections are below the tolerance.
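The iteration above can be sketched as a simplified Gauss-Newton solution of the coplanarity condition. This sketch replaces the analytic error-equation coefficients of step ⑥ with a numeric Jacobian for brevity, and the rotation convention and test geometry are assumptions, not the notes' exact formulas:

```python
import numpy as np

def rot(phi, omega, kappa):
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Ro = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Rk = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rp @ Ro @ Rk

def residuals(p, pts1, pts2, f, bx):
    """Coplanarity residual of every orientation point (left photo horizontal)."""
    by, bz, phi, omega, kappa = p
    B = np.array([bx, by, bz])
    R2 = rot(phi, omega, kappa)
    F = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        u1 = np.array([x1, y1, -f])            # R1 = I for the left photo
        u2 = R2 @ np.array([x2, y2, -f])
        F.append(np.linalg.det(np.vstack([B, u1, u2])))
    return np.array(F)

def relative_orientation(pts1, pts2, f, bx=1.0, tol=1e-8, max_iter=30):
    """Iterate: linearize, solve for corrections, add them to the unknowns,
    and repeat until all corrections fall below the tolerance."""
    p = np.zeros(5)                            # zero initial values (step 2)
    for _ in range(max_iter):
        F = residuals(p, pts1, pts2, f, bx)
        J = np.empty((len(F), 5))
        eps = 1e-7
        for j in range(5):                     # numeric partial derivatives
            dp = np.zeros(5)
            dp[j] = eps
            J[:, j] = (residuals(p + dp, pts1, pts2, f, bx) - F) / eps
        dx = np.linalg.lstsq(J, -F, rcond=None)[0]
        p += dx
        if np.max(np.abs(dx)) < tol:           # tolerance check (step 9)
            break
    return p                                   # by, bz, phi, omega, kappa
```

With six well-distributed orientation points the iteration converges in a few steps from the zero initial values, mirroring steps ③ to ⑨.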

25. Calculation of model point coordinates

The photogrammetric coordinate system P-XpYpZp is established with its axes parallel to those of the image-space auxiliary coordinate system; the origin P lies on the Z1 axis at a distance mf from S1, where m is the photo scale denominator and f is the principal distance of the camera. The photogrammetric coordinates of the point S1 are therefore (0, 0, mf).

26. Absolute orientation by analytical method

The purpose of absolute orientation by analytical method is to convert photogrammetric coordinates obtained after relative orientation into ground survey coordinates.

The solution process of absolute orientation

(1) Determine the initial values of the unknowns: φ0 = ω0 = κ0 = 0, λ0 = 1, ΔX = ΔY = ΔZ = 0.

(2) Compute the centroid coordinates and the centroid-reduced coordinates in the ground photogrammetric coordinate system.

(3) Compute the centroid coordinates and the centroid-reduced coordinates in the model photogrammetric coordinate system.

(4) Compute the constant terms.

(5) Form the error equations.

(6) Form the normal equations point by point, accumulate them, and solve for the corrections to the unknown parameters.

(7) Compute the new values of the unknown parameters.

(8) Check whether dφ, dω and dκ are all smaller than the given limit ε; if any exceeds ε, repeat the computation, otherwise the computation ends.
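The scheme above solves the seven-parameter similarity transform iteratively by linearization. As a compact cross-check, the same transform can also be obtained in closed form from the centroid-reduced coordinates; this Horn/Kabsch-style sketch is an alternative to, not a transcription of, the iterative method described in the notes:

```python
import numpy as np

def absolute_orientation(Pm, Pg):
    """Closed-form 7-parameter similarity transform: Pg ~ lam * R @ Pm + t.
    Pm are model (photogrammetric) points and Pg the corresponding ground
    points, both (n, 3) arrays with n >= 3 non-collinear points."""
    cm, cg = Pm.mean(axis=0), Pg.mean(axis=0)         # centroids
    Qm, Qg = Pm - cm, Pg - cg                         # centroid-reduced coordinates
    lam = np.sqrt((Qg ** 2).sum() / (Qm ** 2).sum())  # scale factor
    U, _, Vt = np.linalg.svd(Qg.T @ Qm)               # rotation from the SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                                    # proper rotation, det = +1
    t = cg - lam * R @ cm                             # translation
    return lam, R, t
```

For noise-free data this reproduces the scale, rotation and translation exactly; with redundant noisy points it gives the least-squares rotation, so it is also a useful way to generate initial values for the iterative adjustment.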

27. Comparison of three commonly used methods in double-image analytical photogrammetry

(1) In the first method, the forward intersection result depends on the precision of the space resection, and the redundant observations are not fully used in an adjustment during the forward intersection.

(2) The second method involves many formulas; the accuracy of the final points depends on the accuracy of both relative and absolute orientation, so this method cannot strictly recover the exterior orientation elements of an image.

(3) The third method is the most rigorous in theory and the most accurate: the coordinates of the points to be determined are obtained strictly according to the principle of least squares.

28. Definition, purpose, significance and classification of analytical aerial triangulation

In dual-image analytical photogrammetry, four ground control points must be surveyed in the field for every image pair. Such fieldwork is heavy and inefficient. Could one instead survey only a few field control points within a strip of a dozen or more image pairs, or within a block formed by several strips, densify the control points needed by each image pair by analytical photogrammetry, and then use them for mapping? Analytical aerial triangulation solves exactly this problem.

Analytical aerial triangulation refers to determining the exterior orientation elements of all images in a block by photogrammetric analytical methods.

The significance of measuring (densifying) point coordinates photogrammetrically lies in:

the position and geometry of any target visible on the images can be determined without direct contact with the target and without being limited by ground visibility conditions;

points over a large area can be measured quickly and simultaneously, saving a great deal of field survey work;

in the photogrammetric adjustment, the internal precision of the densified block is uniform and little affected by the size of the area.

Traditionally, according to the mathematical model used in the adjustment, it is divided into three methods: the strip method, the independent model method and the bundle method.

According to the adjustment extent, analytical aerial triangulation is divided into the single-model method, the single-strip method and the block (regional network) method.

The free-strip method builds a free strip through relative orientation and model connection, takes the photogrammetric coordinates of the strip points as observations, and brings the free network into the required ground coordinate system by solving the parameters of a nonlinear polynomial transformation, minimizing the sum of squares of the discrepancies at common points.

Independent model adjustment builds unit models through relative orientation, takes the model point coordinates as observations, and brings each unit model into the specified ground coordinate system by a spatial similarity transformation, minimizing the sum of squares of the residuals at the model connection points.

The bundle method starts directly from the bundle of rays of each image, takes the image-point coordinates as observations, and, by translating and rotating each bundle in three-dimensional space, makes the same-name rays intersect optimally in object space and brings them into the specified coordinate system, thereby densifying the object-space coordinates of the points to be determined and the orientation elements of the images.

29. Principle of GPS-assisted aerial triangulation

In GPS-assisted aerial triangulation, an airborne GPS receiver and a receiver at a ground reference station record the same GPS satellite signals simultaneously, rapidly and continuously. Off-line post-processing based on relative positioning yields high-precision three-dimensional coordinates of the camera stations at the instants of exposure; these enter the block adjustment as additional non-photogrammetric observations, so that aerial control replaces (or reduces) ground control. A unified mathematical model and algorithm are used to determine the positions of all points in the block and to evaluate their quality.

Chapter 4

30. Definition of digital photogrammetry

Digital gray-level signals and digital correlation techniques are used to measure same-name image points; on this basis, relative and absolute orientation are carried out by analytical computation and a digital stereo model is built, from which a digital elevation model is established, contour lines are drawn, orthophoto maps are produced, and basic data for GIS are provided. This is all-digital photogrammetry.

31. Image digitization and image resampling

Placing a transparent positive (or negative) film on an image digitizer and recording the gray values of the image points in digital form is called image digitization.

The gray level of an image, also called its optical density D, reflects the degree of transparency, i.e. the ability to transmit light (the transmittance T); it is expressed as the logarithm of the opacity, D = lg(1/T).

Measuring a continuous function model at discrete points is sampling; the measured points are the sample points, and the distance between them is the sampling interval. Rounding the gray value of each sample point to an integer is called gray-level quantization.
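Sampling and quantization can be illustrated on a synthetic continuous density function; the function, the sampling interval and the 8-bit gray range below are all hypothetical:

```python
import numpy as np

# Hypothetical continuous "image": optical density as a smooth function of position (mm)
def density(x, y):
    return 0.5 + 0.4 * np.sin(0.8 * x) * np.cos(0.5 * y)   # density in [0.1, 0.9]

dx = 0.025                      # sampling interval: 25 micrometres (pixel size)
xs = np.arange(0, 5, dx)        # sample points along x
ys = np.arange(0, 5, dx)        # sample points along y
X, Y = np.meshgrid(xs, ys)
D = density(X, Y)               # sampling: discrete measurements of the continuous model

# Quantization: map each sampled density to an 8-bit integer gray level 0..255
g = np.round((D - D.min()) / (D.max() - D.min()) * 255).astype(np.uint8)
print(g.shape, g.dtype)         # (200, 200) uint8
```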

32. Digital image orientation

When a photo is scanned and digitized, the scanner coordinate system is generally not parallel to the image-plane coordinate system and has a different origin, so the image-plane coordinates x, y of an image point are not equal to its scanner coordinates x', y' and must be transformed. This transformation is called the interior orientation of the digital image.

33. Gray-level-based image correlation

Finding same-name image points in the left and right digital images, i.e. digital image correlation, is the core problem of all-digital photogrammetry. A small window of image signal centred on the target point is taken from one image; windows in the corresponding search area of the other image are taken in turn; the correlation function between them is computed, and the centre of the search window that maximizes the correlation function is taken as the same-name point.

Methods: the correlation coefficient method, the covariance method and high-precision least-squares matching.
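A minimal sketch of the correlation coefficient method, searching along one image row (the epipolar direction is assumed horizontal here, and the test image is synthetic):

```python
import numpy as np

def corr_coeff(a, b):
    """Correlation coefficient of two equal-sized gray-level windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_along_row(left, right, row, col, half=3, search=8):
    """Slide a (2*half+1)^2 window along the same row of the right image and
    return the column with the maximum correlation coefficient."""
    target = left[row - half:row + half + 1, col - half:col + half + 1]
    best_c, best_rho = None, -2.0
    lo = max(half, col - search)
    hi = min(right.shape[1] - half - 1, col + search)
    for c in range(lo, hi + 1):
        window = right[row - half:row + half + 1, c - half:c + half + 1]
        rho = corr_coeff(target, window)
        if rho > best_rho:
            best_rho, best_c = rho, c
    return best_c, best_rho
```

On a right image that is simply the left image shifted by a few pixels, the maximum correlation coefficient is found exactly at the shifted column.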

34. Epipolar planes, epipolar lines and epipolar correlation

The plane formed by the photographic baseline S1S2 and any object point A is called the epipolar plane through A; the epipolar plane containing a principal point is called a principal epipolar plane. In a stereo pair, the left and right images each have their own principal epipolar plane, and in general the two do not coincide. The intersection of an epipolar plane with an image plane is called an epipolar line. For any point on an epipolar line, its same-name image point on the other image must lie on the corresponding same-name epipolar line.

Epipolar correlation: matching is carried out along corresponding epipolar lines, so the two-dimensional search for same-name points is reduced to a one-dimensional search.

35. Concept and common methods of feature-based image matching

Concept: feature-based image matching registers feature points, lines or surfaces. It proceeds in three steps: ① feature extraction; ② description of the features by a set of parameters; ③ matching of the features using those parameters.

Point feature extraction operators: the Moravec operator and the Förstner operator.

Line feature extraction operators: commonly used ones include difference operators, the Laplacian, and the LoG (Laplacian of Gaussian) operator.

Comparison results of edge detection operators

36. Bridging-mode image matching

1. Feature extraction: point or line features are extracted from the left and right images along the epipolar lines, a feature on an epipolar line being defined as a feature segment consisting of three feature points (the zero crossing Z and the two inflection points S1, S2).

2. Forming the bridging matching window: two or more adjacent features are connected to form a window.

3. Cross image matching

(1) On the left image, Fb and Fe are respectively the already-registered feature and the feature to be matched that form the target window.

(2) On the right image, Fb is already registered; several candidate features are selected.

(3) The feature parameters of the feature Fe to be matched are compared with those of the candidates, and similar features are retained.

(4) On the right image, Fb serves as one end of the window and each retained candidate as the other end, forming different matching windows.

(5) Each matching window is resampled so that its length always equals that of the left image's target window, eliminating the influence of geometric distortion on the correlation.

(6) The correlation coefficient between the target window and each resampled matching window is computed, and the same-name feature of Fe is chosen by the maximum correlation coefficient criterion.

4. Edge tracking, transfer matching features.

37. Development, composition and functions of digital photogrammetric systems

In the 1960s, shortly after the first analytical plotter, the AP-1, appeared, the United States developed the fully digital mapping system DAMC.

In 1988, the commercial digital photogrammetric workstation DSP-1 was exhibited at the 16th ISPRS Congress in Kyoto.

In August 1992, at the 17th ISPRS Congress in Washington, a number of fairly mature products were exhibited; digital photogrammetric workstations were moving from the experimental stage into photogrammetric production.

In July 1996, at the 18th ISPRS Congress in Vienna, more than a dozen digital photogrammetric workstations were exhibited, showing that they had entered the stage of practical use.

Hardware composition: computer;

External equipment: stereo viewing devices; operation control devices.

Input devices: image digitizer.

Output devices: vector plotter; raster plotter.

Software composition:

Digital image processing software: image rotation; image filtering; image enhancement; feature extraction.

Pattern recognition software: feature recognition and location, fiducial-mark recognition and location; image matching (recognition of same-name points, lines and surfaces); target recognition.

Analytical photogrammetry software: computation of orientation parameters; aerial triangulation; epipolar-geometry computation; coordinate computation and transformation; numerical interpolation; digital differential rectification; projection transformation.

Auxiliary software: data input and output; data format conversion; annotation; quality reporting; profile generation; human-machine interaction.

Functions: image digitization, image processing, measurement (single image, double image and multi-image), image orientation (internal orientation, relative orientation and absolute orientation), epipolar image, image matching, automatic aerial triangulation, establishment of digital elevation model, automatic isoline drawing, orthophoto making, orthophoto mosaic and restoration, digital drawing, image map making, perspective drawing, landscape drawing, etc.

Chapter 5

38. Concepts and application fields of DTM and the digital elevation model

A digital terrain model (DTM) is an ordered array of numbers representing the spatial distribution of terrain attributes. The most common form consists of the plane coordinates X, Y of a series of ground points together with their elevations Z or other attributes.

If the points are arranged in a regular grid, the plane coordinates X, Y of every point can be computed from the grid origin and need not be recorded, so the surface form is represented by the point elevations Z alone; this is called a digital elevation model (DEM).

A DTM is a finite sequence of n-dimensional vectors defined over a region D. If only the terrain component of the DTM is considered, it is usually called a digital elevation model (DEM).

39. DEM representation

A digital elevation model (DEM) is a finite sequence of three-dimensional vectors {Vi = (Xi, Yi, Zi), i = 1, 2, ..., n} representing the terrain over a region D, where (Xi, Yi) ∈ D are plane coordinates and Zi is the elevation at (Xi, Yi).

(1) Regular rectangular grid: the elevations Z of terrain points arranged at equal intervals in the X and Y directions represent the terrain, forming a rectangular-grid DEM.

(2) Triangulated irregular network: if the points, collected at terrain feature locations, are connected according to certain rules into triangles covering the whole area, a DEM represented by a triangulated irregular network (TIN) is formed.

(3) Grid-TIN hybrid network: Professor Ebner and others in Germany proposed the Grid-TIN hybrid DEM: a rectangular-grid data structure is used in general, with a triangulated structure attached along the terrain feature elements.
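Representation (1) can be sketched as a structure that stores only the elevations plus the grid origin and spacing, computing node plane coordinates on demand; the class name and all numbers below are hypothetical:

```python
import numpy as np

class GridDEM:
    """Regular-grid DEM: only elevations are stored; the plane coordinates of
    node (i, j) follow from the grid origin and the grid spacing."""
    def __init__(self, x0, y0, dx, dy, z):
        self.x0, self.y0, self.dx, self.dy = x0, y0, dx, dy
        self.z = np.asarray(z, dtype=float)    # z[i, j]: row i (Y), column j (X)

    def node_xy(self, i, j):
        return self.x0 + j * self.dx, self.y0 + i * self.dy

    def node_xyz(self, i, j):
        x, y = self.node_xy(i, j)
        return x, y, self.z[i, j]

# Hypothetical 3x4 DEM with 25 m spacing and origin (5000, 3000)
dem = GridDEM(5000.0, 3000.0, 25.0, 25.0, np.arange(12.0).reshape(3, 4))
print(dem.node_xyz(2, 3))   # (5075.0, 3050.0, 11.0)
```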

40. DEM data acquisition methods

To build a DEM, the three-dimensional coordinates of a set of points, called data points, must be measured.

(1) Ground survey: field measurement with automatic-recording electronic tacheometers.

(2) Digitization of existing maps: digitizing the information on existing maps with digitizers: manual tracking digitizers, scanning digitizers and semi-automatic tracking digitizers.

(3) Space sensors: data collection with GPS, radar and laser altimeters.

41. Concept, methods and characteristics of DEM interpolation

DEM interpolation computes the elevations of points to be determined from the elevations of reference points; mathematically it is an interpolation problem.

The main methods are the moving surface fitting method, the weighted average method and the least-squares collocation method.

(1) The relief of the whole earth's surface cannot be fitted by a single low-order polynomial, while high-order polynomial fits are unstable and produce unrealistic oscillations.

(2) The terrain surface is generally continuous and smooth, but may be discontinuous owing to natural forces or man-made causes.

(3) Computer memory limitations make it impossible to interpolate a mathematical model of a very large area all at once.

(4) Therefore, the survey area or map sheet is usually divided into smaller computation units and local function interpolation is used; ordinary data points as well as terrain feature points and lines are taken into account, and the interpolation method is chosen according to how the data points were acquired.

42. Moving surface fitting method

This is a point-by-point interpolation method: with each point to be determined as the centre, a local function is defined to fit the surrounding data points, and the elevation of the point is then computed from the fitted surface. The coordinate origin is moved to the point to be determined, and the data points used must fall within a circle of radius R around it.

(1) Establish local coordinates: for each DEM grid point, retrieve the data points in the corresponding blocks of the data-point partition and move the coordinate origin to the grid point P(Xp, Yp).

(2) Select neighbouring data points: a circle of radius R is drawn around the point to be determined P, and all data points falling inside it are selected.

(3) List the error equations.

(4) Compute the weight of each data point.

(5) Form and solve the normal equations.
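A compact sketch of steps (1) to (5), fitting a weighted second-order surface around the point to be determined; the weight function (R - d)^2 used here is one common choice, not the only one:

```python
import numpy as np

def moving_surface_fit(xd, yd, zd, xp, yp, R):
    """Interpolate the elevation at DEM node (xp, yp) from a weighted
    second-order surface fitted to the data points within radius R."""
    # (1) local coordinates: move the origin to the node P
    dx, dy = xd - xp, yd - yp
    # (2) select the neighbouring data points inside the circle of radius R
    d = np.hypot(dx, dy)
    sel = d < R
    dx, dy, z, d = dx[sel], dy[sel], zd[sel], d[sel]
    if sel.sum() < 6:
        raise ValueError("a quadric surface needs at least 6 data points")
    # (3) error equations for z = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
    A = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx*dy, dy**2])
    # (4) weights decreasing with distance
    w = (R - d) ** 2
    # (5) form and solve the normal equations
    N = A.T @ (A * w[:, None])
    a = np.linalg.solve(N, A.T @ (w * z))
    return a[0]       # at the local origin the fitted surface height is a0
```

Because the surface is evaluated at the local origin, the interpolated elevation is simply the constant term a0 of the fitted polynomial.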

43. Multiquadric function DEM interpolation

A surface (a surface of revolution) is set up at each data point; these surfaces are superimposed with suitable coefficients so as to best describe the required surface, and the summed surface passes exactly through every data point.

44. DEM data compression methods

Integer storage: subtract a constant Z0 from the elevation data (Z0 may be the mean elevation of the area or its first elevation), multiply by 10 or 100 according to the accuracy requirement, round, and keep the integer part.

Difference storage: store the increments between adjacent values; their range is small, so each datum fits into a single byte, greatly reducing the storage required.

Compression coding: codes are designed according to the probability of each value: the most frequent value gets the shortest code and rarer values get longer codes, so the average number of bits per datum is smaller than the original fixed length (16 or 8 bits).
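The integer- and difference-storage schemes can be sketched as follows (the elevations are hypothetical; a factor of 100 gives centimetre precision):

```python
import numpy as np

row = np.array([512.34, 512.71, 513.05, 512.88, 512.40])  # elevations in metres
z0 = row[0]                                               # constant to subtract

# Integer storage: subtract z0, scale by 100 (centimetres), round to integers
scaled = np.round((row - z0) * 100).astype(np.int32)
print(scaled)            # [ 0 37 71 54  6]

# Difference storage: keep the first value and the increments between
# neighbours; the small increments each fit into a single signed byte
diff = np.diff(scaled).astype(np.int8)
print(diff)              # [ 37  34 -17 -48]

# Reconstruction to centimetre precision
restored = z0 + np.concatenate([[0], np.cumsum(diff)]) / 100.0
```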

45. Storage of triangulated digital terrain models

The storage of a TIN differs greatly from that of a regular-grid DTM: besides the elevation of each node, its plane coordinates and the topological relations among nodes, triangles and adjacent triangles must also be stored. Three storage structures are common: direct representation of node adjacency; direct representation of triangles and their adjacency; and mixed representation of node and triangle adjacency.

46. Applications of digital terrain models

(1) In surveying and mapping, it can be used to draw contour lines, slope and aspect maps and perspective views, to produce orthophoto maps, stereo landscape maps, stereomates and terrain models, and to revise maps.

(2) It can be used to calculate the volume and area, draw various sections and design lines in various projects.

(3) It can be used in military navigation (including missile and aircraft navigation), communication, combat mission planning, etc.

(4) It can be used as auxiliary data for remote sensing classification.

(5) In terms of environment and planning, it can be used for land use status analysis, various planning and flood risk prediction.


47. Contour drawing based on a regular rectangular grid

Automatic contouring from a regular-grid DEM involves two main steps:

(1) Using the elevations at the DEM grid nodes, interpolate the contour points on the grid edges and arrange them in sequence (contour tracing):

1) Determine the contour elevations.

2) Compute the state matrix.

3) Handle the entry and exit points of the contour lines.

4) Interpolate the contour points.

5) Search for the next contour point.

6) Detect the end point of the contour.
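Sub-step 4), interpolating a contour point on a grid edge, is a linear interpolation between the elevations at the two edge nodes (the function and its numbers are illustrative):

```python
def contour_crossing(z1, z2, zc):
    """Linear interpolation of a contour point on a grid edge.
    Returns the fractional position t in [0, 1] measured from the node with
    elevation z1 towards the node with elevation z2, or None if the contour
    of elevation zc does not cross this edge."""
    if (z1 - zc) * (z2 - zc) > 0:    # both nodes on the same side: no crossing
        return None
    if z1 == z2:                     # degenerate edge lying on the contour level
        return None
    return (zc - z1) / (z2 - z1)

# A 100 m contour crosses the edge between 95.0 m and 105.0 m nodes halfway
print(contour_crossing(95.0, 105.0, 100.0))   # 0.5
print(contour_crossing(101.0, 104.0, 100.0))  # None: no crossing on this edge
```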

(2) Using the plane coordinates x, y of the sequenced contour points, interpolate additional points between them (i.e. densify the contour points) and draw a smooth curve through them (contour smoothing).

To obtain a smooth contour, interpolation (densification) between these discrete contour points is necessary.

Interpolation method has the following requirements:

Curves pass through known contour points (often called nodes);

The curve is smooth at the node, that is, its first derivative (or second derivative) is continuous;

The curve between two adjacent nodes has no unnecessary swing;

The same contour lines cannot intersect.

48. Main steps in drawing a perspective view from a DEM

1. Choose a suitable reference-plane elevation Z0 and a magnification m for the elevations above it; this exaggerates the three-dimensional form of the terrain where needed.

2. Choose a suitable viewpoint position Xs, Ys, Zs, the azimuth t of the line of sight (the viewing direction) and its depression angle j.

3. From the chosen or derived parameters Xs, Ys, Zs, the rotation-matrix elements a1, a2, ..., c2, c3 and the principal distance f, compute the perspective transformation from object to image, obtaining the "image point" coordinates x, y of every DEM node through the collinearity equations.

4. Processing of hidden lines.

5. Start with the part of the DTM nearest the viewpoint and draw it section by section. Each grid point of the first section is connected only to the grid point before it; each grid point of every later section is connected both to the previous grid point of the same section and to the adjacent grid points of the preceding section (hidden parts, of course, are not drawn).

6. By adjusting the parameter values, perspective views of different forms can be drawn from different directions and distances to make an animation. When the computer is fast enough, animated DTM perspective views can be generated in real time.
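Steps 1 to 3 can be sketched as follows; the way the azimuth t and depression angle j are assembled into a rotation matrix here is one plausible choice, not the only convention:

```python
import numpy as np

def perspective_view(X, Y, Z, xs, ys, zs, t, j, f, m=1.0, z0=0.0):
    """Project DEM nodes into a perspective view via the collinearity
    equations.  t: azimuth of the line of sight, j: depression angle,
    m: vertical exaggeration about the reference plane z0."""
    Zm = z0 + m * (Z - z0)                        # step 1: exaggerate the relief
    ct, st = np.cos(t), np.sin(t)
    cj, sj = np.cos(j), np.sin(j)
    Rt = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])  # azimuth of the view
    Rj = np.array([[1, 0, 0], [0, cj, -sj], [0, sj, cj]])  # tilt the camera down
    R = Rt @ Rj                                   # step 2: viewing direction
    d = np.stack([X - xs, Y - ys, Zm - zs])       # vectors viewpoint -> nodes
    v = np.tensordot(R.T, d, axes=1)              # into the viewing system
    x = -f * v[0] / v[2]                          # step 3: collinearity equations
    y = -f * v[1] / v[2]
    return x, y
```

A node directly along the optical axis projects to the image origin, and nodes off to the side project proportionally to their offset and inversely to their depth, which is exactly the perspective effect the plotting in steps 4 to 6 relies on.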