The intersection tests are virtual functions of the base Geometry class. Each test returns a bool indicating whether the ray intersects the object and, on a hit, computes the intersection time and surface normal.
In order to handle arbitrary rotations, translations, and scaling of these geometries, I perform intersection tests in the object’s local space rather than world space.
I implement a function called findNear in the Raytracer class that returns the nearest geometry intersecting a given ray. It traverses all the geometries in the scene, runs each one's intersection test against the ray, and records the hit times in order to find the smallest.
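A minimal sketch of this traversal, using stand-in types (the struct layouts, the `intersect` signature, and the test Sphere are my assumptions, not the report's actual code):

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Minimal stand-ins for the report's types (names and layout are assumptions).
struct Ray { double ox, oy, oz, dx, dy, dz; };

struct Geometry {
    virtual ~Geometry() = default;
    // Returns true on a hit and writes the intersection time t.
    virtual bool intersect(const Ray& r, double& t) const = 0;
};

// A simple sphere, used here only to exercise findNear.
struct Sphere : Geometry {
    double cx, cy, cz, rad;
    bool intersect(const Ray& r, double& t) const override {
        double lx = cx - r.ox, ly = cy - r.oy, lz = cz - r.oz;
        double b = lx * r.dx + ly * r.dy + lz * r.dz;  // assumes unit direction
        double d2 = lx*lx + ly*ly + lz*lz - b*b;       // squared distance to center
        if (d2 > rad * rad) return false;
        double h = std::sqrt(rad * rad - d2);
        t = b - h;
        if (t < 1e-6) t = b + h;                       // ray starts inside
        return t > 1e-6;
    }
};

// findNear: walk every geometry, keep the smallest positive hit time.
const Geometry* findNear(const std::vector<const Geometry*>& scene,
                         const Ray& ray, double& tNear) {
    const Geometry* nearest = nullptr;
    tNear = std::numeric_limits<double>::infinity();
    for (const Geometry* g : scene) {
        double t;
        if (g->intersect(ray, t) && t < tNear) { tNear = t; nearest = g; }
    }
    return nearest;
}
```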
The Geometry class has three texture-related virtual functions: getU, getV, and getTexture. The calculations are performed in local space. For the triangle and model classes, I implement a non-virtual function named getBarycentric to compute the barycentric coordinates alpha, beta, and gamma.
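One common way to compute the barycentric coordinates is via signed areas (Cramer's rule) on a 2D projection of the triangle; the report does not show its formula, so the sketch below is an assumption, with hypothetical names:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Solves p = alpha*a + beta*b + gamma*c with alpha + beta + gamma = 1,
// using signed areas (Cramer's rule) for a triangle projected to 2D.
void getBarycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                    double& alpha, double& beta, double& gamma) {
    double denom = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    alpha = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / denom;
    beta  = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / denom;
    gamma = 1.0 - alpha - beta;  // the three coordinates always sum to 1
}
```

A point is inside the triangle exactly when all three coordinates are non-negative, which is also how the coefficients interpolate per-vertex texture coordinates.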
I set the maximum recursion depth to 5.
For every sample of a pixel, I call the trace_ray function with the eye ray as one of its parameters. trace_ray contains most of the ray-tracing logic. It first calls findNear to get the nearest geometry intersecting the ray. If nothing is hit, trace_ray returns the scene's background color and tracing moves on to the next eye ray. On a hit, I first check whether the ray is entering or leaving the geometry, using the dot product between the surface normal and the ray direction: if the dot product is less than zero, the ray is entering. Knowing this, I compute the refraction ray; if the condition is total internal reflection, I set the Fresnel coefficient R to 1. Additionally, I offset the ray origin by a small epsilon (a "slop" factor) to prevent the new ray from immediately colliding with the surface it originated from.
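The entering/leaving test and the refraction step can be sketched as below. The function name and vector type are my own; the math is standard Snell's law, with a `false` return signalling total internal reflection (at which point the caller sets R = 1 as described above):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-() const { return {-x, -y, -z}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// d is the unit ray direction, n the unit outward surface normal,
// ior the object's refractive index (relative to air).
// Returns false on total internal reflection.
bool refractDir(Vec3 d, Vec3 n, double ior, Vec3& out) {
    double cosi = dot(d, n);
    double eta;
    if (cosi < 0) { cosi = -cosi; eta = 1.0 / ior; }   // entering: d against n
    else          { n = -n;       eta = ior; }         // leaving: flip the normal
    double k = 1.0 - eta * eta * (1.0 - cosi * cosi);  // negative => TIR
    if (k < 0.0) return false;
    out = d * eta + n * (eta * cosi - std::sqrt(k));
    return true;
}
```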
Then I use the object's refractive index to test whether it is fully opaque or fully transparent. If it is opaque, I compute the diffuse, ambient, and reflected colors; if it is transparent, I compute the reflected and refracted colors. Under total internal reflection there is only reflection and no refraction. Both reflection and refraction are recursive processes.
For the diffuse color, I traverse all the lights in the scene, compute the diffuse contribution of each light, and sum the results. For shadow checking, I cast a shadow ray toward the light and compare the hit time of the nearest object along the shadow ray against the time at which the shadow ray reaches the light: if an occluder is hit first, the point is in shadow.
I wrote a function named ShadowRay in the Raytracer class. It first samples a point on the spherical light source, and repeats this to generate several random shadow-ray samples. For these sample shadow rays, I accumulate the color and divide by the number of samples to produce soft shadows.
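The averaging step can be sketched as follows. The function names are my own, the occlusion test is abstracted behind a callback, and the light sampling uses the standard normalized-Gaussian trick for a uniform point on a sphere (the report does not say which sampling method it uses):

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <random>

struct P3 { double x, y, z; };

// Uniform sample on the surface of a spherical light
// (normalize a 3D Gaussian sample, then scale by the radius).
P3 sampleSphereLight(P3 center, double radius, std::mt19937& rng) {
    std::normal_distribution<double> g(0.0, 1.0);
    double x = g(rng), y = g(rng), z = g(rng);
    double len = std::sqrt(x * x + y * y + z * z);
    if (len < 1e-12) { x = 1; len = 1; }  // guard against a degenerate sample
    return { center.x + radius * x / len,
             center.y + radius * y / len,
             center.z + radius * z / len };
}

// Average the shadow-ray results over nSamples random light points.
// 'visible' stands in for the scene's shadow-ray occlusion test.
double softShadowFactor(P3 shadePoint, P3 lightCenter, double lightRadius,
                        int nSamples, std::mt19937& rng,
                        const std::function<bool(P3, P3)>& visible) {
    double sum = 0.0;
    for (int i = 0; i < nSamples; ++i) {
        P3 lp = sampleSphereLight(lightCenter, lightRadius, rng);
        if (visible(shadePoint, lp)) sum += 1.0;
    }
    return sum / nSamples;  // 1 = fully lit, 0 = fully shadowed
}
```

Points near the penumbra see only some of the light samples, so the averaged factor falls smoothly between 0 and 1 instead of producing a hard shadow edge.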
I wrote a function named ReflectRay in the Raytracer class. To simulate glossy reflection, I randomly perturb the ideal specular reflection ray. If a perturbed ray falls below the surface it reflects from, I set its color to (0, 0, 0). I then choose a number of samples, generate that many random reflection rays, accumulate their colors, and divide by the sample count to soften the result. I use the dot product between the reflection direction and the surface normal to test whether a ray is below the surface; note that when the ray is leaving the object, the normal must be reversed.
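The two building blocks of that test, the ideal mirror direction and the below-surface check, can be sketched as below (names are my own; the perturbation and averaging loop are omitted):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };
double dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ideal mirror direction: d - 2(d.n)n, with d the incoming unit direction
// and n the unit surface normal.
V3 reflectDir(V3 d, V3 n) {
    double k = 2.0 * dot3(d, n);
    return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}

// A perturbed sample is kept only if it stays above the surface; when the
// ray is leaving the object, the caller passes the reversed normal.
bool aboveSurface(V3 dir, V3 n) { return dot3(dir, n) > 0.0; }
```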
Photon Mapping (Global Illumination)
Programming Language: C++
For this project, I took the ray tracer I wrote previously and used it as the starting point for a basic photon mapper, in order to achieve caustics and diffuse inter-reflection.
The first thing I wrote for photon tracing is the random_sphere function, which returns a vector uniformly selected from the unit ball. I then trace each photon, storing its normal, color, position, and incident direction. Every time a photon intersects a diffuse object, I stop tracing it with a probability based on the direct-lighting component at that point.
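A minimal sketch of random_sphere, assuming rejection sampling (the report does not say which method it uses): draw points uniformly in the enclosing cube and keep the first one that lands inside the unit ball.

```cpp
#include <cassert>
#include <cmath>
#include <random>

struct Pt { double x, y, z; };

// Uniform point inside the unit ball via rejection sampling. Normalizing
// the result instead would give a uniform direction on the unit sphere.
Pt random_sphere(std::mt19937& rng) {
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    while (true) {
        Pt p{ u(rng), u(rng), u(rng) };                    // uniform in the cube
        if (p.x*p.x + p.y*p.y + p.z*p.z <= 1.0) return p;  // keep points in the ball
    }
}
```

The acceptance rate is the ball-to-cube volume ratio (about 52%), so the loop terminates quickly in practice.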
a) Building the tree
I use recursion and iterators to build the KD-tree. First, I choose a split axis based on the variance along the x, y, and z dimensions, picking the dimension with the largest variance. I then choose the median along that dimension. I wrote a function called chooseSplit that selects the split axis and the median. To keep the tree compact, I do not store left and right child pointers in each node.
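The axis-selection part of chooseSplit can be sketched as below (the Photon layout and function name are assumptions; the median selection along the chosen axis is omitted):

```cpp
#include <cassert>
#include <vector>

struct Photon { double p[3]; };

// Pick the split axis with the largest variance across the photon set.
// Returns 0 for x, 1 for y, 2 for z.
int chooseSplitAxis(const std::vector<Photon>& pts) {
    double mean[3] = {0, 0, 0}, var[3] = {0, 0, 0};
    for (const Photon& ph : pts)
        for (int a = 0; a < 3; ++a) mean[a] += ph.p[a];
    for (int a = 0; a < 3; ++a) mean[a] /= pts.size();
    for (const Photon& ph : pts)
        for (int a = 0; a < 3; ++a) {
            double d = ph.p[a] - mean[a];
            var[a] += d * d;  // sum of squared deviations is enough to compare
        }
    int best = 0;
    if (var[1] > var[best]) best = 1;
    if (var[2] > var[best]) best = 2;
    return best;
}
```

Splitting on the highest-variance axis keeps the tree's cells closer to cubical, which tightens the nearest-neighbor search later.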
b) Nearest-Neighbor Search
I use recursion to implement the KNN search. I wrote another function called insertNeighbour to insert neighbors into the neighbor list; it also tracks the maximum distance from the target position to any photon in the list.
I replace the ambient estimate with a radiance estimate obtained via KNN on the photon map. The indirect illumination is also scaled by the number of photons and the light's wattage.
Programming Language: C, C++
For this project, I used shaders to render the mesh with a mirror-like material. With each press of the 'y' key, the mesh is subdivided using the Loop subdivision algorithm.
OpenGL Mesh Rendering
How I implemented:
I use gluPerspective to specify the camera projection and gluLookAt to set up the modelview matrix.
I set the shade model to GL_SMOOTH, which looks more realistic than GL_FLAT, and create different materials for different objects.
I set the resolution to 64, meaning the water mesh has 64 × 64 vertices. I store all the vertices in a single array in column-major order. I then create a triangle list holding all the triangles of the water mesh, with the three vertices of each triangle stored in counterclockwise order.
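The triangle list for such a grid can be built as sketched below. The exact index layout (column-major, `col * n + row`) and the diagonal choice per cell are my assumptions:

```cpp
#include <cassert>
#include <vector>

// Build the index buffer for an n x n grid of vertices stored column-major
// (vertex index = col * n + row). Each grid cell becomes two CCW triangles.
std::vector<unsigned> buildGridIndices(int n) {
    std::vector<unsigned> idx;
    for (int col = 0; col < n - 1; ++col)
        for (int row = 0; row < n - 1; ++row) {
            unsigned v0 = col * n + row;            // lower-left of the cell
            unsigned v1 = (col + 1) * n + row;      // lower-right
            unsigned v2 = (col + 1) * n + row + 1;  // upper-right
            unsigned v3 = col * n + row + 1;        // upper-left
            idx.insert(idx.end(), { v0, v1, v2 });  // counterclockwise triangle 1
            idx.insert(idx.end(), { v0, v2, v3 });  // counterclockwise triangle 2
        }
    return idx;
}
```

For n = 64 this yields 63 × 63 cells, i.e. 7938 triangles, which is the list later handed to glDrawElements.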
When computing normals, I use the same method for both the pool and the water heightmap. First, I use the three vertices of each triangle to compute that triangle's plane normal. Since each vertex is shared by more than one triangle, I then sum the normals of all adjacent triangles at each vertex to produce the vertex normal.
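The two steps above can be sketched as one pass over the triangle list (type names are my own; a final normalization is added, which the report does not mention explicitly but fixed-function lighting effectively expects via GL_NORMALIZE or unit normals):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct N3 { double x, y, z; };

// One face normal per triangle (cross product of two edges, CCW winding),
// accumulated into each of the triangle's three vertices, then normalized.
std::vector<N3> vertexNormals(const std::vector<N3>& verts,
                              const std::vector<unsigned>& tris) {
    std::vector<N3> normals(verts.size(), {0, 0, 0});
    for (size_t i = 0; i + 2 < tris.size(); i += 3) {
        const N3& a = verts[tris[i]];
        const N3& b = verts[tris[i + 1]];
        const N3& c = verts[tris[i + 2]];
        N3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
        N3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
        N3 fn{ e1.y * e2.z - e1.z * e2.y,   // face normal via cross product
               e1.z * e2.x - e1.x * e2.z,
               e1.x * e2.y - e1.y * e2.x };
        for (int k = 0; k < 3; ++k) {
            N3& vn = normals[tris[i + k]];
            vn.x += fn.x; vn.y += fn.y; vn.z += fn.z;
        }
    }
    for (N3& n : normals) {
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```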
I use glDrawElements to draw both the pool and the water, with glVertexPointer and glNormalPointer specifying where to read the vertex and normal data.
I enable depth testing with glEnable(GL_DEPTH_TEST).