A Look at Unity's Shadows via the Banded-Shadow Artifact
Reproducing the "banded shadow" is actually very simple; we can trigger the phenomenon without modifying any Unity default settings. As an experiment, create a new Unity project (2019), create two Quad objects, and overlap them one above the other: set the top Quad's height (world-space Y) to 0.05 and the bottom Quad's to 0, then rotate the main directional light to point straight down, i.e. rotation (90, 0, 0). If all goes well, the banded shadow appears.
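For convenience, the same layout can also be built from a script. A minimal sketch of the setup described above (the rotation of the quads so they lie flat is my assumption, since Unity's Quad primitive stands upright by default; only standard Unity API calls are used):

```csharp
using UnityEngine;

// Minimal sketch of the repro scene described above (Unity 2019-era API).
// Assumption: both quads are rotated to lie flat in the XZ plane, since
// Unity's Quad primitive stands upright in the XY plane by default.
public class BandedShadowRepro : MonoBehaviour
{
    void Start()
    {
        // Bottom quad at world height 0.
        var bottom = GameObject.CreatePrimitive(PrimitiveType.Quad);
        bottom.transform.position = Vector3.zero;
        bottom.transform.rotation = Quaternion.Euler(90f, 0f, 0f);

        // Top quad at world height 0.05, overlapping the bottom one.
        var top = GameObject.CreatePrimitive(PrimitiveType.Quad);
        top.transform.position = new Vector3(0f, 0.05f, 0f);
        top.transform.rotation = Quaternion.Euler(90f, 0f, 0f);

        // Main directional light pointing straight down: rotation (90, 0, 0).
        var lightGO = new GameObject("Directional Light");
        var light = lightGO.AddComponent<Light>();
        light.type = LightType.Directional;
        light.shadows = LightShadows.Soft;
        lightGO.transform.rotation = Quaternion.Euler(90f, 0f, 0f);
    }
}
```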
We keep the light and the planes where they are, and adjust two important parameters: bias and normal bias. In the built-in render pipeline, Unity exposes both in the shadow settings of the Light component; in URP they can be found in the render pipeline asset's shadow settings. In short, find them, set bias = 0 and normal bias = 0, and see what happens:
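For reference, the same two values can be zeroed from script in the built-in pipeline; Light.shadowBias and Light.shadowNormalBias are the standard Unity properties (using RenderSettings.sun to find the main light is an assumption about the scene setup):

```csharp
using UnityEngine;

// Sketch: zero both shadow offsets on the main light (built-in pipeline).
public class ZeroShadowBias : MonoBehaviour
{
    void Start()
    {
        Light mainLight = RenderSettings.sun; // assumes the scene's sun source is set
        if (mainLight != null)
        {
            mainLight.shadowBias = 0f;       // offset along the light direction
            mainLight.shadowNormalBias = 0f; // offset along the receiver normal
        }
    }
}
```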
A quick recap: with direct light arriving at a certain angle, two planes placed very close together show the upper plane's projection onto the lower one, and we observe "banded shadows"; after resetting bias and normal bias to zero, the shadow cast between the planes returns to normal, but "banded shadows" now appear in the regions of the planes that were previously unshadowed.
Before interpreting it, let's be clear that this phenomenon comes from how Unity handles shadows, since we only adjusted two parameters that control shadow rendering. I won't elaborate here on how Unity, or game engines in general, render object shadows, since material on the subject is abundant online. In short, the general rendering logic can be organized into three steps, as follows:
Expanding step 1 slightly: place a camera at the light source's position and generate a depth map from the light's point of view. The shadowed regions of the scene are exactly the places this camera cannot see. The ShadowMap is essentially a depth map: for each position visible from the light, it records the depth of the surface closest to the light source.
The left side of the figure above is the depth map. When the scene is drawn from the camera on the right, the visible point Va corresponds to position A in the shadow map. Va's depth is not greater than the depth stored at A, so Va is not in shadow.
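The comparison just described is the heart of the whole algorithm. Here is a minimal, self-contained sketch of the test with a toy one-dimensional "light space"; all names and numbers are illustrative, not Unity's implementation:

```csharp
using System;

// Self-contained sketch of the shadow-map test described above, using a toy
// one-dimensional "light space": x picks a texel column, depth is the
// distance from the light. All numbers are illustrative.
static class ShadowTestSketch
{
    const int SSize = 4;                                  // shadow-map resolution
    static readonly float[] shadowMap = new float[SSize]; // stored depths

    // A point is shadowed when its light-space depth exceeds the depth the
    // light recorded for its texel, i.e. something sits between it and the light.
    static bool InShadow(float lightSpaceX, float lightSpaceDepth)
    {
        int texel = Math.Min(SSize - 1, Math.Max(0, (int)(lightSpaceX * SSize)));
        return lightSpaceDepth > shadowMap[texel];
    }

    static void Main()
    {
        for (int i = 0; i < SSize; i++) shadowMap[i] = 1.0f; // ground plane at depth 1
        shadowMap[1] = 0.5f;                                 // an occluder over texel 1

        Console.WriteLine(InShadow(0.1f, 1.0f)); // False: a Va-style visible point
        Console.WriteLine(InShadow(0.4f, 1.0f)); // True: ground behind the occluder
    }
}
```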
Given that shadow-map rendering can be traced back as far as 1978, its advantages and disadvantages for graphics rendering are well understood by now. Let's return to the phenomenon.
With some grounding in the shadow-mapping algorithm and the data structures it uses, we can start examining the phenomenon.
Another black dot is marked to the right of the red dot on PL; it undoubtedly falls in the lit region. Moreover, since the light shines vertically downward, the shortest distance from the light source to this black dot matches that of the red dot beside it: the length of the red connecting line in the figure, i.e. Depth.
Things get interesting when the light is tilted and meets the object plane (PL) at an angle. As we know, because of perspective, objects near the camera occupy more pixels in screen space than distant ones; the light space that generates the ShadowMap shows a similar effect, but here it is not caused by perspective projection (directional lights use orthographic projection); it comes from the interaction of angle and resolution:
Now consider the triangle BCP shaded with yellow dashes in the figure above: the shortest distance from any point in this region (such as vertex P1) to the light source must be less than the depth value depth marked by the red solid line. Similarly, in the triangle PED shaded with black dashes, the distance from every point to the light source must be greater than depth. By the shadow-mapping test, the light-space depth of point P1 inside triangle BCP is smaller than the light depth recorded there, so P1 is not in shadow, which is the expected result; but P2 on the other side, inside triangle PED, is marked as shadowed by the same test, which is clearly wrong.
From the analysis above we can see that, at a given light angle and shadow-map resolution, points on the plane PL that should be unshadowed (those in the triangle PED region, such as P2) are judged shadowed by the standard shadow-mapping test, and the figure shows where the error comes from. To present the problem more intuitively and quantitatively, we can simplify the diagram as follows:
In this figure, the thick black line is the plane in the scene and the thick yellow line is the light source's near plane, which corresponds to the generated shadow map. AB is the region of the near plane covered by one texel of the shadow map. Suppose the near plane is a square of side length FSize and the shadow-map resolution is SSize; then the extent of AB is: AB = FSize / SSize.
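As a worked example of that formula (the concrete numbers are illustrative):

```csharp
using System;

// Worked example of AB = FSize / SSize (illustrative numbers).
static class TexelSize
{
    // FSize: side length of the light's square near plane, in world units.
    // SSize: shadow-map resolution in texels.
    static float TexelWorldSize(float FSize, float SSize) => FSize / SSize;

    static void Main()
    {
        // A 40-unit shadow frustum rendered into a 1024x1024 map:
        Console.WriteLine(TexelWorldSize(40f, 1024f)); // ≈ 0.039 world units per texel
    }
}
```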
We want, by some means, the projection CD of texel AB onto the object plane to lie entirely to the left of the blue dashed line in the figure. The solution is simple and intuitive: slightly increase the depth value stored in the shadow map, by just enough to guarantee the CD segment lands in the unshadowed region.
This is the first correction, an offset applied along the light direction:
Observe the segment (vector) GD: it is the shortest translation along the light direction. After the translation, the blue dashed line sits just to the right of segment CD, which guarantees that when the shadow test runs, the light-space depth of every point on CD is less than the sampled shadow-map depth (the length of segment BD in the figure).
Solving for GD:
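The formula itself appears only as a figure in the original. Under the construction above it reduces to the classic slope-scaled depth bias, so the following sketch assumes GD = AB · tan(α), with α the angle between the light direction and the surface normal (as defined later in the text):

```csharp
using System;

// Hedged reconstruction of GD: the classic slope-scaled depth bias, i.e. the
// depth spread of one texel footprint across a plane tilted away from the
// light. Assumption: GD = AB * tan(alpha), alpha being the light/normal angle.
static class SlopeScaledBias
{
    static double GD(double texelSize, double alphaRadians)
        => texelSize * Math.Tan(alphaRadians);

    static void Main()
    {
        double AB = 40.0 / 1024.0;                          // texel size from above
        Console.WriteLine(GD(AB, 30.0 * Math.PI / 180.0));  // modest bias at 30 degrees
        Console.WriteLine(GD(AB, 80.0 * Math.PI / 180.0));  // blows up at grazing angles
    }
}
```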
Besides pushing the depth directly along the light direction, we can also offset along the normal direction, which affects the depth value indirectly. This is the normal bias we examine next.
Normal offset can in fact be implemented in two ways. The first moves the surface point along its normal before transforming it into light space, thereby reducing the fragment's depth; see the figure below:
The other shifts the object's vertices in the opposite direction of the normal when generating the shadow map, which effectively increases the depth values stored in the map. See the figure below:
Let's take the first way first. Its mechanism is not to modify the depth values in the shadow map directly, but to reduce the receiver's depth in light space, so it has to be implemented in the fragment shader. Referring to figure [normal 1], the surface moves a distance GM along its normal, giving the new position C'G, and the whole original CM segment is lifted to the left of the blue dashed line. The depth of the C'G region is then at most the depth EF at point F, so it is now lit by the light source normally; as for the segments to the right of point G, don't worry: they are covered by the adjacent ShadowMap texel.
Here α refers to the angle between the light direction and the normal.
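The expression for GM is likewise given only as a figure. The standard normal-offset-shadows result, consistent with the bounded sinθ curve discussed further below, is an offset of texel · sin(α); the sketch below assumes that form:

```csharp
using System;

// Hedged reconstruction of GM: assumes the standard normal-offset-shadows
// form, offset = texel * sin(alpha), which matches the bounded sin-theta
// curve this article discusses further below.
static class NormalOffsetSketch
{
    static double GM(double texelSize, double alphaRadians)
        => texelSize * Math.Sin(alphaRadians); // bounded: sin never exceeds 1

    static void Main()
    {
        double AB = 40.0 / 1024.0;                          // texel size from above
        Console.WriteLine(GM(AB, 30.0 * Math.PI / 180.0));  // 30 degrees
        Console.WriteLine(GM(AB, 89.0 * Math.PI / 180.0));  // ~89 degrees: still <= AB
    }
}
```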
The second method is intuitive to understand, because it acts directly on the shadow map, modifying (increasing) the depth value in each texel. Referring to figure [normal 2], the corrected blue dashed line moves down-left by the distance FN, which guarantees the CD segment lies to the left of the shadow-depth line and is lit normally. Computing the offset MN takes two steps:
Here α is the angle between the light direction and the normal; φ is the angle between the light source's direction and the normal.
As for why Unity introduces two bias variables to control shadow depth, I think there are at least two reasons. One is flexibility: users can adjust the depth correction along the light direction or the normal direction according to the characteristics of the scene. Imagine if Unity provided only normal bias as a parameter: in some extreme cases, an excessive normal offset makes the shadow cast by an object detach from the object itself (the "Peter Panning" artifact).
For the other reason, refer to the figure above. Bias corresponds to the tanθ curve in the figure and normal bias to the sinθ curve, where θ is the angle between the light source's plane and the object plane. The more parallel the light direction is to the object's surface, the more perpendicular the light's plane is to that surface, and the closer θ gets to 90 degrees (π/2). The correction distance computed from bias then tends to infinity along with tanθ (the tilted blue arrow in the figure). Introducing sinθ at this point balances out the bias term, for a very simple reason: the sin function has an upper bound (the horizontal blue arrow in the figure). So we can let the bias term dominate while θ is below 45 degrees (π/4), and shift the weight to the normal-bias side once θ exceeds 45 degrees.
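A quick numeric check of the two curves makes the handover concrete (values are illustrative):

```csharp
using System;

// Numeric illustration of the tan-theta (bias) vs sin-theta (normal bias)
// trade-off: tan grows without bound as theta nears 90 degrees, sin is capped at 1.
static class BiasCurves
{
    static void Main()
    {
        foreach (double deg in new[] { 15.0, 45.0, 75.0, 89.0 })
        {
            double rad = deg * Math.PI / 180.0;
            Console.WriteLine($"theta={deg,4}deg  tan={Math.Tan(rad):F3}  sin={Math.Sin(rad):F3}");
        }
    }
}
```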
Some may ask: since the offset computed from bias tends to infinity, why not rely on normal bias alone? That was answered earlier: we need to avoid the Peter Panning artifact.
The two application points of normal bias do not behave identically in practice. In some cases, changing the shadow-map depth in the vertex shader renders the normal offset ineffective. Referring to the figure above, the leftmost diagram is the common shadow-acne case: because the whole texel compares against the depth of point C, the ED segment is wrongly shadowed. The middle diagram shows the approach of increasing the shadow-map depth through the vertex shader. Here, writing the depth difference from C to D as d, and the angle between the light and the normal of segment AB as θ, the minimum offset along the negative normal that eliminates the shadow acne is computed as follows:
In this scheme, as θ approaches 90 degrees (that is, when the light is almost parallel to AD), the minimum distance tends to infinity, and the normal offset can hardly grow large enough to eliminate the acne. The diagram on the right shows the fragment-shader approach of moving the sampling position instead: however the angle changes, the required distance never approaches infinity. The trade-off is that this per-fragment computation costs more than the previous method.
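A sketch contrasting the two minimum offsets (the d / cos θ form follows from the middle diagram's construction; the bounded fragment-route form reuses the sin-based offset from earlier; all names and numbers are assumptions):

```csharp
using System;

// Sketch contrasting the two minimum offsets discussed above.
// Vertex route: pushing the caster along -normal raises the stored depth by
// n * cos(theta), so covering a depth gap d needs n = d / cos(theta).
// Fragment route: the receiver offset stays bounded, ~ texel * sin(theta).
static class MinOffsetComparison
{
    static double VertexRouteMin(double d, double thetaRad) => d / Math.Cos(thetaRad);
    static double FragmentRouteMin(double texel, double thetaRad) => texel * Math.Sin(thetaRad);

    static void Main()
    {
        double d = 0.02, texel = 0.039; // illustrative numbers
        foreach (double deg in new[] { 30.0, 60.0, 85.0, 89.9 })
        {
            double rad = deg * Math.PI / 180.0;
            Console.WriteLine($"theta={deg,5}deg  vertex={VertexRouteMin(d, rad):F3}  " +
                              $"fragment={FragmentRouteMin(texel, rad):F3}");
        }
        // As theta -> 90deg the vertex-route minimum explodes while the
        // fragment-route offset stays below one texel.
    }
}
```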
Strip "light and shade" gradient, in Unity20 19.4. 13 version, use comments to render the pipeline, open soft shadows and then take screenshots.
For acne-like or file-like black spots, in Unity20 19.4. 13 version, the label rendering pipeline is used, and the soft shadows are turned off before the screen is taken.
An aside: I used to wonder why the nickname for self-shadowing is shadow acne rather than the more obvious shadow stripes, until I turned off soft shadows, cleared the bias corrections, and saw the picture above. Indeed, this is what a two-dimensional grid of texels looks like when the test goes wrong. As shown in the figure below, the region corresponding to each individual texel should divide into squares (orthographic projection), and when it meets light arriving at an angle across the two-dimensional plane, it forms the regular, acne-like dark spots shown above.
With the analysis above, we now understand this phenomenon.
When the scene contains objects that legitimately cast shadows (the blue rectangle in the figure above marks an occluder), a globally applied bias fixes the self-shadowing problem shown above, but shadows that should appear get eaten away too. As the bias grows from small to large, the shadow of a normally shadow-casting object starts disappearing from the end nearest the object. This phenomenon has its own term: "light bleeding".
Next we set up the scene for phenomenon <1>. Again the light arrives obliquely; this time a new plane (UD) is added directly below the object plane (PL). If the shadow depth is not corrected (bias is 0), the iso-depth line CD projected from texel A of the shadow map intersects the PL plane at point P, and we see the self-shadowing effect on PL: the yellow triangle bounded by points B, C, P and the black triangle bounded by points P, D, H. When bias is enabled, the iso-depth line CD is translated twice (red arrows in the figure), once along the light direction and once against the object's normal, ending up at segment GI. When the UD plane is close enough to the PL plane, segment GI intersects the UD plane at point P2, forming two regions bounded by points E, G, P2 and by points I, J, P2. At this point we say "light leaking" occurs on plane UD: the region (EP2) that should not be lit is now lit by mistake.
Removing both offsets eliminates the abnormal projection onto the lower plane UD. Looking at the figure above, the shadow-map iso-depth line represented by segment CD does not reach the right side of segment EJ on plane UD (it does not intersect segment EJ). Evidently, in this configuration the shadow cast by PL shows up correctly on UD.
Still with the offsets removed, if we shrink the distance between the two planes further, the anomaly reappears. At that point the two planes can be treated approximately as one plane, and the lower plane UD receives self-shadowing from PL.
If we want to fix this phenomenon, then, as the green arrow in the figure above shows, we need to nudge the plane UD slightly along its normal in the fragment stage, keeping it away from the compressed iso-depth line GI. There is one difficulty: how do we decide that a given fragment needs the extra offset? Put differently, how do we know the extra offset should go to the object plane UD rather than the object plane PL? The simplest approach is to push the problem to the CPU: mark the relevant vertices in a preprocessing pass and bring the marks into the GPU to participate in the computation at runtime.
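As a sketch of that "push it to the CPU" idea: a preprocessing step could write a per-vertex flag into a spare mesh channel, which a shader would then read to decide whether to apply the extra offset. The vertex-color channel and the white/black encoding are assumptions:

```csharp
using UnityEngine;

// Sketch of the CPU-side preprocessing suggested above: write a per-vertex
// flag into the vertex-color channel of meshes that need the extra offset
// (e.g. the lower plane UD). A shader would read the flag and apply the
// normal-direction nudge only to tagged fragments. Channel choice is an assumption.
public class TagOffsetReceivers : MonoBehaviour
{
    [SerializeField] bool needsExtraOffset = true; // set per object in the editor

    void Awake()
    {
        var mesh = GetComponent<MeshFilter>().mesh; // instance copy, safe to modify
        var colors = new Color[mesh.vertexCount];
        Color tag = needsExtraOffset ? Color.white : Color.black;
        for (int i = 0; i < colors.Length; i++)
            colors[i] = tag;                        // white = apply offset, black = skip
        mesh.colors = colors;
    }
}
```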