Basic implementation method of rasterization
The most basic rasterization algorithm renders a three-dimensional scene, represented by polygons, to a two-dimensional surface. Polygons are represented as collections of triangles, and each triangle is defined by three vertices in three-dimensional space. In its simplest form, the rasterizer maps the vertex data to corresponding 2D coordinate points on the viewer's display, and then fills in the transformed 2D triangles as appropriate. Once the triangle vertices are converted to their 2D positions, some of these positions may lie outside the viewing window. Clipping is the process of cutting triangles down to fit the display area.
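The vertex-to-screen mapping can be sketched as a simple perspective projection. This is a minimal illustration, not a full pipeline: it assumes a pinhole camera at the origin looking down the negative z-axis, a focal length `f`, and a screen centered at `(width/2, height/2)`; all of these conventions are assumptions for the example.

```python
# Hedged sketch: project a camera-space vertex to 2D pixel coordinates.
# Assumes the camera sits at the origin looking down -z (so z < 0 is in
# front of the viewer) and the screen center is (width/2, height/2).
def project_vertex(x, y, z, f=1.0, width=640, height=480):
    """Map a camera-space point to pixel coordinates by perspective division."""
    # Perspective divide: points farther away land closer to the center.
    sx = (f * x / -z) * (width / 2) + width / 2
    sy = (-f * y / -z) * (height / 2) + height / 2  # flip y: screen y grows downward
    return sx, sy

# A point straight ahead of the camera projects to the screen center.
print(project_vertex(0.0, 0.0, -5.0))  # -> (320.0, 240.0)
```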
The most commonly used technique is the Sutherland-Hodgman clipping algorithm. In this approach, the polygon is clipped against the four edges of the image plane one at a time, and each vertex is tested against the current edge. If a vertex lies outside the boundary, it is eliminated. For a triangle edge that crosses an edge of the image plane, that is, one of its endpoints is inside the image and the other outside, a new vertex is inserted at the intersection point and the outside vertex is removed. The final step in the traditional rasterization process is to fill in the two-dimensional triangles on the image plane; this process is called scan conversion.
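The clip-against-one-edge step can be sketched as follows. The helper names (`inside`, `intersect`) are illustrative; repeating this routine once per window edge clips the polygon against the full rectangle, as the Sutherland-Hodgman algorithm does.

```python
# Hedged sketch of Sutherland-Hodgman clipping against a single boundary.
# `inside(p)` tests whether a point is on the kept side of the boundary;
# `intersect(p, q)` returns the crossing point of segment p->q with it.
def clip_against_edge(polygon, inside, intersect):
    output = []
    for i, current in enumerate(polygon):
        previous = polygon[i - 1]  # wraps to the last vertex when i == 0
        if inside(current):
            if not inside(previous):
                output.append(intersect(previous, current))  # entering: add crossing
            output.append(current)
        elif inside(previous):
            output.append(intersect(previous, current))      # leaving: add crossing
    return output

# Example: clip a triangle against the left window edge x = 0 (keep x >= 0).
def inside_left(p):
    return p[0] >= 0.0

def intersect_left(p, q):
    t = (0.0 - p[0]) / (q[0] - p[0])
    return (0.0, p[1] + t * (q[1] - p[1]))

tri = [(-1.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
print(clip_against_edge(tri, inside_left, intersect_left))
# -> [(0.0, 1.0), (0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
```

Note that the outside vertex is replaced by intersection points, so the clipped result here is a quadrilateral rather than a triangle.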
The first question to consider is whether a given pixel needs to be drawn at all. A pixel to be rendered must lie inside a triangle, must not have been clipped, and must not be obscured by other pixels. There are many algorithms that can be used to fill the interior of a triangle, the most popular of which is the scanline algorithm.
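One common way to answer "is this pixel inside the triangle?" is the edge-function (half-space) test: a point is inside when it lies on the same side of all three edges. This is a minimal sketch of that test, not the scanline algorithm the text names, offered here only to make the inside/outside question concrete.

```python
# Hedged sketch: edge-function test for point-in-triangle.
def edge(a, b, p):
    # Signed area of the parallelogram spanned by a->b and a->p;
    # positive means p is to the left of the directed edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside_triangle(a, b, c, p):
    w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    # Accept either winding order by requiring consistent signs.
    return (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)

a, b, c = (0, 0), (4, 0), (0, 4)
print(inside_triangle(a, b, c, (1, 1)))  # -> True
print(inside_triangle(a, b, c, (3, 3)))  # -> False
```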
Since it is difficult to guarantee that the rasterization engine will draw all pixels strictly from front to back, there must be some way to ensure that pixels closer to the viewer are not overwritten by pixels farther away. One of the most commonly used methods is the depth buffer: a two-dimensional array, matching the image plane, that stores a depth value for each pixel. Whenever a pixel is drawn, its depth is written into the depth buffer, and every new pixel must be checked against the buffered depth value before drawing. Pixels closer to the observer are drawn, while pixels farther away are discarded.
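The depth test described above can be sketched in a few lines. This is a minimal illustration, assuming smaller depth values mean closer to the viewer; buffer sizes and the `plot` helper name are invented for the example.

```python
# Hedged sketch of a depth (z-) buffer: nearer pixels overwrite,
# farther ones are discarded. Smaller z = closer to the viewer.
WIDTH, HEIGHT = 4, 4
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    if z < depth_buffer[y][x]:        # closer than what's stored: draw
        depth_buffer[y][x] = z
        color_buffer[y][x] = color
    # otherwise the pixel is hidden and simply discarded

plot(1, 1, 5.0, (255, 0, 0))   # red pixel at depth 5
plot(1, 1, 2.0, (0, 255, 0))   # green is closer -> overwrites red
plot(1, 1, 9.0, (0, 0, 255))   # blue is farther -> discarded
print(color_buffer[1][1])      # -> (0, 255, 0)
```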
In order to determine the pixel color, texture or shading calculations are required. A texture map is a bitmap that defines the surface appearance of a triangle. In addition to a position coordinate, each triangle vertex is associated with a texture and a 2D texture coordinate (u,v). Each time a pixel of a triangle is rendered, the corresponding texel must be looked up in the texture; this is done by interpolating the texture coordinates of the triangle's vertices, weighted by the pixel's on-screen distance to each vertex. Under perspective projection, the interpolation is performed on texture coordinates divided by the vertex depth, which avoids the distortion caused by perspective foreshortening.
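The divide-by-depth trick can be shown with a one-dimensional sketch: interpolate u/z and 1/z linearly in screen space, then divide to recover u. The function name and two-vertex setup are illustrative assumptions, not part of any particular API.

```python
# Hedged sketch of perspective-correct interpolation along one edge.
# `t` is the screen-space fraction between the two vertices; u0, u1 are
# their texture coordinates and z0, z1 their depths.
def perspective_correct_u(t, u0, z0, u1, z1):
    u_over_z = (1 - t) * (u0 / z0) + t * (u1 / z1)    # interpolate u/z linearly
    one_over_z = (1 - t) * (1 / z0) + t * (1 / z1)    # interpolate 1/z linearly
    return u_over_z / one_over_z                       # divide to recover u

# Halfway across the screen between a near (z=1) and far (z=3) vertex,
# the correct u is biased toward the near vertex rather than being 0.5.
print(perspective_correct_u(0.5, 0.0, 1.0, 1.0, 3.0))
```

Naively interpolating u itself in screen space would give 0.5 here; the perspective-correct result is noticeably smaller, which is exactly the effect that keeps textures from appearing to slide on foreshortened triangles.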
Before the final color of a pixel is determined, the lighting at that pixel must be calculated from all light sources in the scene. There are usually three types of light source in a scene. A directional light travels in a fixed direction and has constant intensity throughout the scene; in real life, sunlight can be treated as directional light, because the sun is so far away that its rays appear parallel to an observer on Earth and attenuate very little. A point light emits light in all directions from a well-defined position in space, and the light reaching distant objects is attenuated. The last type is the spotlight, which, like a spotlight in real life, has a definite spatial position, a direction, and a cone angle. In addition, it is common to add an ambient light term after the lighting calculation to compensate for global illumination effects that rasterization cannot compute correctly.
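As one concrete example of the point-light case, the diffuse contribution with inverse-square distance attenuation can be sketched as below. The inverse-square falloff is a common model but an assumption here (real engines use various attenuation formulas), and all function names are invented for the example.

```python
import math

# Hedged sketch: diffuse contribution of a point light, attenuated by
# the square of the distance. Vectors are plain 3-tuples.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def point_light_diffuse(surface_point, normal, light_pos, intensity):
    to_light = tuple(l - p for l, p in zip(light_pos, surface_point))
    distance2 = dot(to_light, to_light)
    direction = normalize(to_light)
    lambert = max(0.0, dot(normalize(normal), direction))  # Lambert's cosine law
    return intensity * lambert / distance2                  # inverse-square attenuation

# Light directly above a horizontal surface, 2 units away:
print(point_light_diffuse((0, 0, 0), (0, 0, 1), (0, 0, 2), 8.0))  # -> 2.0
```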
There are many shading algorithms that can be used for rasterization. All of them must take into account the distance to the light source, the normal vector of the lit surface, and the angle of incidence of the light. The fastest approach uses the same brightness for all pixels of a triangle, but it cannot produce smooth-looking surfaces. Alternatively, brightness can be computed at each vertex separately and then interpolated across the interior pixels. The slowest and most realistic approach is to calculate the brightness of every pixel individually. Commonly used shading models include Gouraud shading and Phong shading.
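The middle option, computing brightness at the vertices and interpolating it across interior pixels, is the essence of Gouraud shading. A minimal one-scanline sketch (function names are illustrative):

```python
# Hedged sketch of Gouraud-style shading along a single scanline:
# brightness is known only at the left and right edge intersections
# and is linearly interpolated for the pixels in between.
def lerp(a, b, t):
    return a + (b - a) * t

def gouraud_scanline(i_left, i_right, x_left, x_right):
    """Yield (x, intensity) for each pixel on the scanline."""
    for x in range(x_left, x_right + 1):
        t = (x - x_left) / (x_right - x_left) if x_right != x_left else 0.0
        yield x, lerp(i_left, i_right, t)

# Intensity ramps smoothly from 0.2 at x=0 to 1.0 at x=4.
print([round(i, 2) for _, i in gouraud_scanline(0.2, 1.0, 0, 4)])
# -> [0.2, 0.4, 0.6, 0.8, 1.0]
```

Phong shading instead interpolates the normal vector per pixel and evaluates the lighting equation at every pixel, which is the slower but more realistic option described above.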