
When light hits the surface of an object, it can be separated into three parts: (a) specular reflection, (b) diffuse reflection, and (c) light absorbed by the surface. Specular and diffuse reflection are a major source of environment light, because they become new lights that hit the other objects in the virtual world. Another source is the background, known as the skybox. It is not an object, but it is the environment in which the objects exist, so it can itself be regarded as environment light for the world. Both sources are taken into account when calculating the color of every surface of every object. In this article, I will discuss in detail how to calculate the specular reflection and how to properly combine it with the background light to calculate the reflection on every object. Diffuse reflection is a much broader topic and one of the most challenging problems in computer graphics; it was also implemented by our team, and the report is here: http://1drv.ms/1Pt1zNp.

1. Physically Based Rendering

Physically based rendering (PBR) is widely used in realistic rendering. It is built on the well-known rendering equation, and the Cook-Torrance model boosted the adoption of the technique. The model uses the light direction, light color, microfacet direction (normal), and surface roughness to calculate the color of every pixel.
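
As a hedged illustration (not code from this project), here is a minimal sketch of a Cook-Torrance specular term in HLSL, assuming a GGX normal distribution, Schlick Fresnel, and a Smith-Schlick geometry term; all function and parameter names below are illustrative.

static const float PI = 3.14159265f;

// GGX / Trowbridge-Reitz normal distribution.
float D_GGX(float NoH, float roughness)
{
    float a = roughness * roughness;
    float a2 = a * a;
    float d = NoH * NoH * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * d * d);
}

// Smith geometry term with Schlick-GGX approximation.
float G_SmithSchlickGGX(float NoV, float NoL, float roughness)
{
    float k = (roughness + 1.0f) * (roughness + 1.0f) / 8.0f;
    float gV = NoV / (NoV * (1.0f - k) + k);
    float gL = NoL / (NoL * (1.0f - k) + k);
    return gV * gL;
}

// Schlick Fresnel approximation; F0 is the specular color at normal incidence.
float3 F_Schlick(float VoH, float3 F0)
{
    return F0 + (1.0f - F0) * pow(1.0f - VoH, 5.0f);
}

// Cook-Torrance microfacet specular: D * G * F / (4 * NoV * NoL).
// N: surface normal, V: direction to camera, L: direction to light.
float3 CookTorranceSpecular(float3 N, float3 V, float3 L, float roughness, float3 F0)
{
    float3 H = normalize(V + L);
    float NoL = saturate(dot(N, L));
    float NoV = saturate(dot(N, V));
    float NoH = saturate(dot(N, H));
    float VoH = saturate(dot(V, H));

    float  D = D_GGX(NoH, roughness);
    float  G = G_SmithSchlickGGX(NoV, NoL, roughness);
    float3 F = F_Schlick(VoH, F0);

    return D * G * F / max(4.0f * NoV * NoL, 1e-4f);
}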

2. Environment Light Capture

After calculating the properties of the environment lights, we need a way to determine how these lights hit the other surfaces. This could be computed with ray tracing, but ray tracing relies on geometric intersection tests and is expensive. Because most objects in the world are static, it is reasonable to pre-compute most of the reflection. So, as in any probe lighting approach, the first step is the same as traditional environment mapping: sample the scene and store the scene radiance into a cube map texture. A cube map is a special kind of texture: a cube with a texture on each of its six faces. A camera is fixed at one position and takes six pictures in the six directions around it (front, back, up, down, left, and right), each with a field of view (FOV) of 90 degrees.



// Camera setup for the six cube map faces.
// Each entry is interpreted as { position, view direction, up vector }.
const CreateCubemap::CameraStruct CreateCubemap::DefaultCubemapCameraStruct[6] = {
    { DefaultPosition, Float3(1.0f, 0.0f, 0.0f), Float3(0.0f, 1.0f, 0.0f) },  // Left
    { DefaultPosition, Float3(-1.0f, 0.0f, 0.0f), Float3(0.0f, 1.0f, 0.0f) }, // Right
    { DefaultPosition, Float3(0.0f, 1.0f, 0.0f), Float3(0.0f, 0.0f, -1.0f) }, // Up
    { DefaultPosition, Float3(0.0f, -1.0f, 0.0f), Float3(0.0f, 0.0f, 1.0f) }, // Down
    { DefaultPosition, Float3(0.0f, 0.0f, 1.0f), Float3(0.0f, 1.0f, 0.0f) },  // Front
    { DefaultPosition, Float3(0.0f, 0.0f, -1.0f), Float3(0.0f, 1.0f, 0.0f) }, // Back
};

For each direction, the camera captures the radiance coming from the environment and renders it into the corresponding texture. These six textures are then combined by attaching them to the faces of the cube. At shading time, every object maps its per-pixel normal (or reflection vector) onto the cube map texture to obtain the final reflection result.
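
As a minimal sketch of that lookup (assuming a TextureCube named Cubemap and a LinearSampler, as in the shader snippets later in this article), a mirror-like surface can simply sample along the reflected view direction:

TextureCube  Cubemap       : register(t0);
SamplerState LinearSampler : register(s0);

// PositionWS / NormalWS: world-space position and normal of the shaded pixel.
// CameraPosWS: world-space camera position (assumed to come from a constant buffer).
float3 SampleEnvironmentMirror(float3 PositionWS, float3 NormalWS, float3 CameraPosWS)
{
    float3 viewDir = normalize(PositionWS - CameraPosWS); // camera -> surface
    float3 reflDir = reflect(viewDir, NormalWS);          // mirror reflection direction
    return Cubemap.Sample(LinearSampler, reflDir).rgb;
}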

3. Roughness Reflection

After sampling the scene and storing its radiance in a cube map texture, two major problems remain with this approach in the context of global illumination and the Cook-Torrance BRDF.

1. In the context of the Cook-Torrance BRDF, the original cube map texture cannot respond to changes in roughness.

2. The maximum number of specular probes in the scene is limited, so objects far away from any probe cannot capture reflections correctly.

Cube map convolution: The first problem is solved using cube map convolution. The basic idea is to store evaluations of the lighting at varying roughness in the different mipmap levels of the cube map texture. These evaluations are the solution of the radiance integral with the Cook-Torrance BRDF. According to [1], the integral can be approximated using the following equation.

Figure 1. Convolution Equation
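
The original equation image is not reproduced here. For reference, the standard split-sum form of this approximation, which the next paragraph refers to as the first and second sums, can be written (my own restatement, not the article's figure) as:

\[
\frac{1}{N}\sum_{k=1}^{N}\frac{L_i(\mathbf{l}_k)\, f(\mathbf{l}_k,\mathbf{v})\,\cos\theta_{k}}{p(\mathbf{l}_k,\mathbf{v})}
\;\approx\;
\left(\frac{\sum_{k=1}^{N} L_i(\mathbf{l}_k)\,\cos\theta_{k}}{\sum_{k=1}^{N}\cos\theta_{k}}\right)
\left(\frac{1}{N}\sum_{k=1}^{N}\frac{f(\mathbf{l}_k,\mathbf{v})\,\cos\theta_{k}}{p(\mathbf{l}_k,\mathbf{v})}\right)
\]

Here \(L_i\) is the incoming radiance from the cube map, \(f\) is the Cook-Torrance BRDF, the \(\mathbf{l}_k\) are GGX importance samples, and \(p\) is their probability density. The first factor is what gets stored in the cube map mip levels, and the second factor lives in a 2D lookup texture.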

The first sum is stored in each mipmap level of the cube map. In this process, instead of computing the vector lk from the cube map center to each cube map texel, I rasterized a proxy sphere and used its hardware-interpolated normal as the vector lk. This let me take advantage of hardware-accelerated cube map sampling by feeding the interpolated lk directly into the sampler. Although this approach was faster than the traditional method of computing cube map texture coordinates, it had a caveat: many sphere models found online do not have vertex normals that pass through the sphere center. Before we discovered this issue, the wrong interpolation of lk gave us a distorted convolution result. Our quick fix was to compute lk by subtracting the sphere center from the vertex position instead of using the normal; this way lk always passes through the center of the sphere, and as long as the sphere is perfectly shaped we get the correct lk.

For the second sum, since the result lives in a 2D texture, there was not much room for optimization, so we simply performed the regular summation for each texel of the 2D texture. Sampling at shading time is then done by multiplying the two values, looked up with the input roughness and specular values. The screenshot shows how different roughness values affect the result.

Figure 2. The reflection with roughness coefficients of 0.5, 0.75, and 1

Here is the implementation of the convolution equation in the shader:


// Van der Corput radical inverse, used to build a low-discrepancy sequence.
float radicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

// Hammersley point set: the i-th of NumSamples quasi-random 2D samples.
float2 Hammersley(uint i, uint NumSamples)
{
    return float2((float)i / (float)NumSamples, radicalInverse_VdC(i));
}

// Pre-filters the environment cube map for a given roughness by importance
// sampling the GGX distribution around the reflection direction R (with N = V = R).
float3 PrefilterEnvMap(float Roughness, float3 R)
{
    float3 N = R;
    float3 V = R;

    float3 PrefilteredColor = float3(0, 0, 0);

    const uint NumSamples = 1024;
    float TotalWeight = 0;

    for (uint i = 0; i < NumSamples; i++)
    {
        float2 Xi = Hammersley(i, NumSamples);
        float3 H = ImportanceSampleGGX(Xi, Roughness, N);
        float3 L = 2 * dot(V, H) * H - V;

        float NoL = saturate(dot(N, L));

        if (NoL > 0)
        {
            // Sample the base mip of the source cube map (not mip i) and
            // accumulate radiance weighted by NoL.
            PrefilteredColor += Cubemap.SampleLevel(LinearSampler, L, 0).rgb * NoL;
            TotalWeight += NoL;
        }
    }

    return PrefilteredColor / TotalWeight;
}
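
The snippet above calls ImportanceSampleGGX, which is not listed in the article. A commonly used GGX half-vector importance sampling routine (a sketch of what such a function typically looks like, not necessarily this project's exact code) is:

// Generates a half-vector H distributed according to the GGX normal distribution
// around N, given a 2D low-discrepancy sample Xi and a perceptual Roughness.
float3 ImportanceSampleGGX(float2 Xi, float Roughness, float3 N)
{
    float a = Roughness * Roughness;

    float Phi = 2.0f * 3.14159265f * Xi.x;
    float CosTheta = sqrt((1.0f - Xi.y) / (1.0f + (a * a - 1.0f) * Xi.y));
    float SinTheta = sqrt(1.0f - CosTheta * CosTheta);

    // Spherical to Cartesian, in tangent space around +Z.
    float3 H;
    H.x = SinTheta * cos(Phi);
    H.y = SinTheta * sin(Phi);
    H.z = CosTheta;

    // Build an orthonormal basis around N and transform H into world space.
    float3 UpVector = abs(N.z) < 0.999f ? float3(0, 0, 1) : float3(1, 0, 0);
    float3 TangentX = normalize(cross(UpVector, N));
    float3 TangentY = cross(N, TangentX);

    return TangentX * H.x + TangentY * H.y + N * H.z;
}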

4. Parallax Correction

The second problem can be alleviated by performing parallax correction on the cube map during shading. When an object picks its nearest probe and samples the cube map, it is not correct to sample using only the reflection direction derived from the object's normal, since that ignores the spatial relationship between the probe and the object. According to [2], the correction is illustrated in Figure 3: C is the probe position, and the camera looks at a point on the ground. The black box is an approximation of the boundary of the object. R is the reflection of the view vector at the ground, and it intersects the boundary at point P. Using the reflection equation R = V - 2 (N · V) N (with V the view direction from the camera to the shaded point and N the surface normal), we obtain R; then the ray equation P = Position + Direction * Distance (Position: the reflecting position, Direction: the direction of R, Distance: the distance from the position to the boundary) gives the position of P. Finally, we sample the cube map in the direction of the vector CP, which is the parallax-corrected direction.

Figure 3. Parallax Correction

Probe Editor: The equations in the previous section leave two parameters unspecified: the position of the probe and the size of its boundary box. There is currently no method to determine these two parameters automatically; in the game industry they are mostly hand-tweaked by artists. We made a probe editor, shown in Figure 5, to speed up our parameter tweaking: the boundary of the object is drawn as a green box, and the probes are drawn in red, except the selected one, which is blue. Changes to the probes are shown in real time on screen, which makes it easy to adjust them until the reflection looks correct, as in Figure 4.

Figure 4. Correct reflection result

Figure 5. Probe Editor

Here is the implementation of parallax correction in the shader:



// Parallax-corrected cube map lookup direction, following [2].
// PositionWS: shaded point in world space, newProbePositionWS: probe position,
// NormalWS: surface normal, boxMax/boxMin: world-space AABB proxy of the environment.
// CameraPosWS is assumed to come from a constant buffer.
float3 parallaxCorrection(float3 PositionWS, float3 newProbePositionWS, float3 NormalWS, float3 boxMax, float3 boxMin)
{
    float3 DirectionWS = normalize(PositionWS - CameraPosWS);
    float3 ReflDirectionWS = reflect(DirectionWS, NormalWS);

    // Ray/AABB intersection using the slab method: distances to the box planes
    // along the reflection ray, keeping the exit point of the box.
    float3 FirstPlaneIntersect = (boxMax - PositionWS) / ReflDirectionWS;
    float3 SecondPlaneIntersect = (boxMin - PositionWS) / ReflDirectionWS;

    float3 FurthestPlane = max(FirstPlaneIntersect, SecondPlaneIntersect);

    float Distance = min(min(FurthestPlane.x, FurthestPlane.y), FurthestPlane.z);

    // Intersection point P, and the corrected lookup direction from the probe to P.
    float3 IntersectPositionWS = PositionWS + ReflDirectionWS * Distance;

    ReflDirectionWS = IntersectPositionWS - newProbePositionWS;

    return ReflDirectionWS;
}
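
Putting sections 3 and 4 together, here is a hedged sketch of how the corrected direction might be combined with the roughness-prefiltered mips at shading time; the linear roughness-to-mip mapping and the NumMipLevels parameter are assumptions for illustration, not the project's exact code:

// Sample the prefiltered cube map with the parallax-corrected direction,
// picking a mip level from the surface roughness (linear mapping assumed).
float3 SampleProbe(float3 PositionWS, float3 NormalWS,
                   float3 probePositionWS, float3 boxMax, float3 boxMin,
                   float roughness, float NumMipLevels)
{
    float3 dir = parallaxCorrection(PositionWS, probePositionWS, NormalWS, boxMax, boxMin);
    float mip = roughness * (NumMipLevels - 1.0f);
    return Cubemap.SampleLevel(LinearSampler, dir, mip).rgb;
}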

Binary:https://onedrive.live.com/redir?resid=D757EA2B031B4C43!7299&authkey=!ABTWSUAZd26gXaE&ithint=file%2crar

REFERENCES
[1] Robert L. Cook and Kenneth E. Torrance: A reflectance model for computer graphics, ACM Transactions on Graphics (TOG) 1.1 (1982): 7-24.
[2] Sébastien Lagarde and Antoine Zanuttini: Local image-based lighting with parallax-corrected cubemaps, ACM SIGGRAPH 2012 Talks, ACM, 2012.
