Approximating Dynamic Global Illumination in Image Space


Abstract Physically plausible illumination at real-time frame rates is often achieved using approximations. One popular example is ambient occlusion (AO), for which very simple and efficient implementations are used extensively in production. Recent methods approximate AO between nearby geometry in screen space (SSAO). The key observation described in this chapter is that screen-space occlusion methods can be used to compute many more types of effects than just occlusion, such as directional shadows and indirect color bleeding. The proposed generalization has only a small overhead compared to classic SSAO, approximates direct and one-bounce light transport in screen space, can be combined with other methods that simulate transport for macro structures, and is visually equivalent to SSAO in the worst case without introducing new artifacts. Since our method works in screen space, it does not depend on the geometric complexity. Plausible directional occlusion and indirect lighting effects can be displayed for large and fully dynamic scenes at real-time frame rates.

Keywords: radiosity, global illumination, constant time

1 Introduction Real-time global illumination is still an unsolved problem for large and dynamic scenes. Currently, sufficient frame rates are only achieved through approximations. One such approximation is ambient occlusion (AO), which is often used in feature films and computer games because of its high speed and simple implementation. However, AO decouples visibility and illumination, allowing only for a coarse approximation of the actual illumination. AO typically displays a darkening of cavities, but all directional information of the incoming light is ignored. We extend recent developments in screen-space AO towards a more realistic illumination we call screen-space directional occlusion (SSDO). The present work explains how SSDO a) accounts for the direction of the incoming light, b) includes one bounce of indirect illumination, c) complements standard, object-based global illumination, and d) requires only minor additional computation time.

This paper is structured as follows: first, we review existing work in Section 2. In Section 3 we describe our generalization of ambient occlusion for the illumination of meso-structures. Section 4 explains extensions to improve the visual quality. In Section 5 the integration of our method into a complete global illumination simulation is described. We present our results in Section 6 and discuss them in Section 7 before concluding in Section 8.

2 Previous work Approximating physically plausible illumination at real-time frame rates has recently received much attention. Ambient occlusion (AO) [Cook and Torrance 1981; Zhukov et al. 1998] is used extensively in production [Landis 2002] because of its speed, simplicity and ease of implementation. While physically correct illumination computes the integral over a product of visibility and illumination for every direction, AO computes a product of two individual integrals: one for visibility and one for illumination. For static scenes, AO allows visibility to be pre-computed and stored as a scalar field over the surface (using vertices or textures). Combining static AO and dynamic lighting using a simple multiplication gives perceptually plausible results [Langer and Bülthoff 2000] at high frame rates. To account for dynamic scenes, Kontkanen et al. [2005] introduced AO fields, which allow rigid translation and rotation of objects, and specialized solutions for animated characters exist [Kontkanen and Aila 2006]. Deforming surfaces and bounces of indirect light are addressed by Bunnell [2005] using a set of disks to approximate the geometry. A more robust version was presented by Hoberock and Jia [2007], which was further extended to point-based ambient occlusion and interreflections by Christensen [2008]. Méndez et al. [2006] compute simple color bleeding effects using the average albedo of the surrounding geometry.

These methods either use a discretization of the surface or rely on ray tracing, both of which do not scale well to the amount of dynamic geometry used in current interactive applications like games. Therefore, instead of computing occlusion over surfaces, recent methods approximate AO in screen space (SSAO) [Shanmugam and Arikan 2007; Mittring 2007; Bavoil et al. 2008; Filion and McNaughton 2008]. The popularity of SSAO is due to its simple implementation and high performance: it is output-sensitive, applied as a post-process, requires no additional data (e.g. surface descriptions, spatial acceleration structures for visibility such as BVHs, kd-trees or shadow maps) and works with many types of geometry (e.g. displacement/normal maps, vertex/geometry shaders, iso-surface ray casting). Image-space methods can also be used to efficiently simulate subsurface scattering [Mertens et al. 2005]. At the same time, SSAO is an approximation with many limitations that also apply to this work, as we will detail in the following sections.

AO is a coarse approximation to general light transport, as e.g. in PRT [Lehtinen and Kautz 2003], which also supports directional occlusion (DO) and interreflections. Pre-computation requires storing large amounts of data in a compressed form, often limiting the spatial or directional resolution. Our approach resolves both very small surface details and all angular resolutions: "no-frequency" AO, all-frequency image-based lighting, and sharp shadows from point lights. While PRT works well with distant lighting and static geometry of low to moderate complexity, its adaptation to real applications can remain involved, while SSAO is of uncompromised simplicity. In summary, our work takes advantage of information that is already computed during the SSAO process [Shanmugam and Arikan 2007] to approximate two significant effects which contribute to the realism of the results: directional occlusion and indirect bounces, both in real time, which was previously impossible for dynamic, high-resolution geometry.

3 Near-field light transport in image space To compute light transport in image space, our method uses a framebuffer with positions and normals [Segovia et al. 2006] as input, and outputs a framebuffer with illuminated pixels using two rendering passes: one for direct light and another one for indirect bounces.

Direct lighting using DO Standard SSAO illuminates a pixel by first computing an average visibility value from a set of neighboring pixels. This occlusion value is then multiplied with the unoccluded illumination from all incoming directions. We propose to remove this decoupling of occlusion and illumination in the following way:

For every pixel at 3D position P with normal n, the direct radiance L_dir is computed from N sampling directions ω_i, uniformly distributed over the hemisphere, each covering a solid angle of Δω = 2π / N:

L_dir(P) = Σ_{i=1..N} (ρ/π) L_in(ω_i) V(ω_i) cos θ_i Δω
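The uniform distribution of the N sampling directions over the hemisphere can be sketched as follows (a minimal Python illustration, not the paper's GPU code; the function name and the random-number generator argument are our own):

```python
import math
import random

def uniform_hemisphere_samples(n, rng=random):
    """Draw n directions uniformly over the hemisphere around +z.

    Each direction covers a solid angle of delta_omega = 2*pi / n,
    since the whole hemisphere subtends 2*pi steradians. The local
    frame is assumed to be already aligned with the surface normal.
    """
    samples = []
    for _ in range(n):
        z = rng.random()                     # cos(theta), uniform in [0, 1)
        phi = 2.0 * math.pi * rng.random()   # azimuth, uniform in [0, 2*pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        samples.append((r * math.cos(phi), r * math.sin(phi), z))
    return samples, 2.0 * math.pi / n
```

Summing the per-sample weight Δω over all N samples recovers the full 2π steradians of the hemisphere.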

Each sample computes the product of incoming radiance L_in, visibility V and the diffuse BRDF ρ/π. We assume that L_in can be efficiently computed from point lights or an environment map. To avoid the use of ray tracing to compute the visibility V, we approximate occluders in screen space instead. For every sample, we take a step of random length from P in direction ω_i, where r_max is a user-defined radius. This results in a set of sampling points located in a hemisphere, centered at P and oriented around n. Since we generate the sampling points as 3D positions in the local frame around P, some of them will be above and some of them will be below the surface. In our approximate visibility test, all the sampling points below the surface of the nearby geometry are treated as occluders. Fig. 2 (left) shows an example with N = 4 sampling points A, B, C and D: the points A, B and D are below the surface, therefore they are classified as occluders for P, while sample C is above the surface and classified as visible. To test whether a sampling point is below the surface, the sampling points are back-projected to the image. Now the 3D position can be read from the position buffer and the point can be projected onto the surface (red arrows). A sampling point is classified as below the surface if its distance to the viewer decreases by this projection onto the surface. In the example in Fig. 2, the samples A, B and D are below the surface because they move towards the viewer, while sample C moves away from the viewer. In contrast to SSAO, we do not compute the illumination from all samples, but only from the visible directions (sample C). Including this directional information can improve the result significantly, especially in case of incoming illumination with different colors from different directions. As shown in Fig. 3, we can correctly display the resulting colored shadows, whereas SSAO simply displays a grey shadow at each location.

Figure 2: Left: For direct lighting with directional occlusion, each sample is tested as an occluder. In the example, point P is only illuminated from direction C. Right: For indirect light, a small patch is placed on the surface for each occluder, and the direct light stored in the framebuffer is used as sender radiance.
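The back-projection test can be sketched in a few lines (a toy Python illustration under our own naming; in a real implementation, `surface_depth_at` would be a lookup into the position buffer at the pixel a sample projects to):

```python
def classify_samples(sample_points, surface_depth_at):
    """Classify hemisphere samples as visible or occluding.

    sample_points: list of (x, y, depth) tuples, where depth is the
    sample's distance to the viewer. A sample is 'below' the surface,
    and hence treated as an occluder, if projecting it onto the visible
    surface moves it towards the viewer, i.e. the surface depth stored
    at that pixel is smaller than the sample's own depth.
    """
    visible, occluders = [], []
    for x, y, depth in sample_points:
        if surface_depth_at(x, y) < depth:   # surface closer: sample is below
            occluders.append((x, y, depth))
        else:
            visible.append((x, y, depth))
    return visible, occluders
```

With a flat floor at depth 10, for example, a sample at depth 11 is classified as an occluder and a sample at depth 9 as visible; the direct radiance is then accumulated over the visible directions only.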

Figure 3: The top row shows the difference between no AO, standard AO, our method with directional occlusion (SSDO), and one additional bounce. In this scene an environment map and an additional point light with a shadow map are used for illumination. The insets in the bottom row show the differences in detail. With SSDO, red and blue shadows are visible, whereas the AO shadows are completely grey (bottom left). The images on the bottom right show the indirect bounce. Note the yellow light bouncing from the box to the ground. The effect of dynamic lighting is best seen in the supplemental video.

Indirect bounces To include one indirect bounce of light, the direct light stored in the framebuffer from the previous pass can be used: for each sampling point which is treated as an occluder (A, B, D), the corresponding pixel color L_pixel is used as the sender radiance of a small patch, oriented at the surface (see Fig. 2, right). We consider the sender normal here to avoid color bleeding from back-facing sender patches. The additional radiance from the surrounding geometry can be approximated as:

L_ind(P) = Σ_{i=1..N} (ρ/π) L_pixel(i) (1 − V(ω_i)) (A_s cos θ_s_i cos θ_r_i) / d_i²

where d_i is the distance between P and occluder i, θ_s_i and θ_r_i are the angles between the transmittance direction and the sender and receiver normal, respectively, and A_s is the area associated with each sender patch.
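The contribution of a single sender patch can be sketched as follows (a Python sketch with our own function name; rejecting cos θ_s ≤ 0 is what suppresses color bleeding from back-facing senders):

```python
import math

def sender_patch_radiance(l_pixel, sender_pos, sender_normal,
                          receiver_pos, receiver_normal, patch_area):
    """One sender patch's contribution to the indirect bounce.

    Approximates the form factor of a small patch of area patch_area at
    sender_pos, oriented along sender_normal, as seen from receiver_pos:
    l_pixel * A_s * cos(theta_s) * cos(theta_r) / d^2.
    Back-facing senders or receivers (cos <= 0) contribute nothing.
    """
    to_recv = [r - s for r, s in zip(receiver_pos, sender_pos)]
    d2 = sum(t * t for t in to_recv)
    d = math.sqrt(d2)
    tdir = [t / d for t in to_recv]          # sender -> receiver direction
    cos_s = sum(n * t for n, t in zip(sender_normal, tdir))
    cos_r = -sum(n * t for n, t in zip(receiver_normal, tdir))
    if cos_s <= 0.0 or cos_r <= 0.0:
        return 0.0
    return l_pixel * patch_area * cos_s * cos_r / d2
```

Two facing patches at unit distance with area 0.1 and unit sender radiance exchange 0.1; flipping the sender normal makes the contribution drop to zero.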

Implementation details Note that classic SSAO [Shanmugam and Arikan 2007] has similar steps and computational costs. Our method requires more computation to evaluate the shading model, but a similar visibility test. In our examples, we use additional samples for known important light sources (e.g. the sun), applying shadow maps that capture shadows from distant geometry instead of screen-space visibility. We use an M×N texture to store M sets of N pre-computed low-discrepancy samples. At runtime, every pixel uses one of the M sets. In a final pass we apply a geometry-sensitive blur [Segovia et al. 2006] to remove the noise which is introduced by this reduction of samples per pixel.
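The per-pixel sample-set selection and the geometry-sensitive rejection of blur neighbours could be sketched like this (a Python sketch; the hash and the tolerances are our own assumptions, not values from the paper):

```python
def sample_set_index(px, py, m):
    """Pick one of M pre-computed sample sets per pixel, so that
    neighbouring pixels use different sets (interleaved sampling).
    Any per-pixel permutation of the M sets works; this hash is an
    arbitrary illustrative choice."""
    return (px * 7 + py * 13) % m

def blur_weight(depth_c, normal_c, depth_n, normal_n,
                depth_tol=0.1, normal_tol=0.9):
    """Geometry-sensitive blur weight: only average a neighbour that
    lies on the same surface as the center pixel, i.e. has a similar
    depth and a similar normal. Tolerances are illustrative, not tuned."""
    if abs(depth_c - depth_n) > depth_tol:
        return 0.0                      # depth discontinuity: reject
    cos_nn = sum(a * b for a, b in zip(normal_c, normal_n))
    return 1.0 if cos_nn > normal_tol else 0.0
```

Averaging only over neighbours with non-zero weight removes the structured noise from the interleaved sample sets without blurring across geometric edges.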
