Computer Graphics

Raytracing

Programming Language: C++
For this project, I implemented a basic ray tracer that can handle ambient and diffuse lighting, soft shadows, reflections, and refractions.

  • 1. Intersection tests
    The intersection tests are virtual functions of the base Geometry class. Each one returns a bool indicating whether the ray intersects the object and, on a hit, computes the intersection time and surface normal.
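    A minimal sketch of what one of these tests could look like for a unit sphere, using simplified stand-in types (Vector3, Ray, and the small helper functions below are illustrative, not the project's actual classes):

        #include <cmath>

        struct Vector3 { double x, y, z; };
        static double  dot(const Vector3& a, const Vector3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vector3 add(const Vector3& a, const Vector3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
        static Vector3 sub(const Vector3& a, const Vector3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
        static Vector3 scale(const Vector3& a, double s)       { return { a.x*s, a.y*s, a.z*s }; }
        static Vector3 normalize(const Vector3& a)             { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

        struct Ray { Vector3 origin, direction; };

        struct Geometry {
            virtual ~Geometry() {}
            // Returns true on a hit and fills in the intersection time and surface normal.
            virtual bool intersect(const Ray& ray, double& t, Vector3& normal) const = 0;
        };

        // Unit sphere at the origin: solve |o + t*d|^2 = 1 for the smallest positive t.
        struct Sphere : Geometry {
            bool intersect(const Ray& ray, double& t, Vector3& normal) const override {
                double a = dot(ray.direction, ray.direction);
                double b = 2.0 * dot(ray.origin, ray.direction);
                double c = dot(ray.origin, ray.origin) - 1.0;
                double disc = b * b - 4.0 * a * c;
                if (disc < 0.0) return false;
                double t0 = (-b - std::sqrt(disc)) / (2.0 * a);
                double t1 = (-b + std::sqrt(disc)) / (2.0 * a);
                t = (t0 > 1e-6) ? t0 : t1;              // nearest hit in front of the origin
                if (t <= 1e-6) return false;
                normal = add(ray.origin, scale(ray.direction, t)); // on a unit sphere the point is the normal
                return true;
            }
        };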
  • 2. Instancing
    In order to handle arbitrary rotations, translations, and scaling of these geometries, I perform intersection tests in the object’s local space rather than world space.
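    A hedged sketch of that idea, reusing the types from the sphere sketch above and assuming Matrix4, transformPoint, and transformDirection helpers that are not part of the original write-up: the world-space ray is mapped into the object's local space before the test, and the local normal is mapped back with the inverse-transpose matrix.

        // Sketch: intersect in local space, then bring the normal back to world space.
        bool intersectInstanced(const Matrix4& worldToLocal,   // inverse of the object's transform
                                const Matrix4& normalMatrix,   // inverse transpose of localToWorld
                                const Geometry& geom, const Ray& worldRay,
                                double& t, Vector3& worldNormal) {
            Ray localRay;
            localRay.origin    = transformPoint(worldToLocal, worldRay.origin);
            localRay.direction = transformDirection(worldToLocal, worldRay.direction);

            Vector3 localNormal;
            if (!geom.intersect(localRay, t, localNormal)) return false;

            worldNormal = normalize(transformDirection(normalMatrix, localNormal));
            return true;
        }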
  • 3. Find the nearest object
    I implemented a function called findNear in the Raytracer class that returns the nearest geometry intersected by the given ray. It traverses all the geometries in the scene, runs each intersection test against the ray, and keeps track of the smallest hit time.
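    A minimal sketch of that loop, assuming the scene stores its geometries in a std::vector of pointers (the container and signature are illustrative):

        #include <vector>
        #include <limits>

        Geometry* findNear(const std::vector<Geometry*>& geometries, const Ray& ray,
                           double& nearestT, Vector3& nearestNormal) {
            Geometry* nearest = nullptr;
            nearestT = std::numeric_limits<double>::infinity();
            for (Geometry* geom : geometries) {
                double t;
                Vector3 normal;
                if (geom->intersect(ray, t, normal) && t < nearestT) {
                    nearestT = t;                 // smallest positive hit time seen so far
                    nearestNormal = normal;
                    nearest = geom;
                }
            }
            return nearest;                       // null when the ray misses everything
        }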
  • 4. Texture
    The Geometry class has three virtual functions for texturing: getU, getV, and getTexture. The calculations are done in local space. For the triangle and model classes, I implemented a non-virtual helper named getBarycentric to compute the barycentric coordinates alpha, beta, and gamma.
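    For reference, a hedged sketch of a getBarycentric-style helper; the area-ratio form below is one common way to compute alpha, beta, and gamma, not necessarily the project's exact code:

        // Barycentric coordinates of point p inside triangle (a, b, c).
        static Vector3 cross(const Vector3& u, const Vector3& v) {
            return { u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x };
        }

        void getBarycentric(const Vector3& a, const Vector3& b, const Vector3& c,
                            const Vector3& p, double& alpha, double& beta, double& gamma) {
            Vector3 n   = cross(sub(b, a), sub(c, a));           // normal; its length is twice the area
            double area = dot(n, n);
            alpha = dot(cross(sub(c, b), sub(p, b)), n) / area;  // sub-triangle opposite vertex a
            beta  = dot(cross(sub(a, c), sub(p, c)), n) / area;  // sub-triangle opposite vertex b
            gamma = 1.0 - alpha - beta;
        }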
  • 5. Recursion Depth
    I set the recursion depth to 5.
  • 6. Ray Tracing
    For every sample of a pixel, I call the trace_ray function with the eye ray as one of its parameters. The trace_ray function contains most of the ray tracing mechanism. The first thing it does is call findNear to get the nearest geometry intersecting the ray. If nothing is hit, trace_ray returns the background color of the scene and tracing continues with the next eye ray. If there is a hit, I first check whether the ray is entering or leaving the geometry by looking at the dot product between the surface normal and the ray direction: if the dot product is less than zero, the ray is entering. Knowing whether the ray is entering or leaving, I compute the refraction ray; in the case of total internal reflection, I set the Fresnel coefficient R to 1. I also offset rays by a small slop factor (epsilon) to prevent spurious self-intersections at the ray origin.
    Then I use the object's refractive index to decide whether it is fully opaque or fully transparent. If it is opaque, I compute the diffuse, ambient, and reflected color. If it is transparent, I compute the reflected and refracted color; under total internal reflection there is only reflection and no refraction. Both reflection and refraction are computed recursively.
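    A small sketch of the entering/leaving test and the total-internal-reflection case described above, in the usual Snell's-law form (variable names are illustrative; total internal reflection is signalled by returning false, after which R is set to 1):

        #include <cmath>
        #include <utility>

        // d is the (unit) ray direction, n the outward (unit) surface normal.
        bool refractDirection(const Vector3& d, Vector3 n, double objectIndex,
                              Vector3& refracted) {
            double n1 = 1.0, n2 = objectIndex;
            double cosI = -dot(d, n);
            if (cosI < 0.0) {                    // dot(d, n) > 0: the ray is leaving the object
                n = scale(n, -1.0);              // flip the normal so it opposes the ray
                std::swap(n1, n2);
                cosI = -cosI;
            }
            double eta = n1 / n2;
            double k = 1.0 - eta * eta * (1.0 - cosI * cosI);
            if (k < 0.0) return false;           // total internal reflection
            refracted = add(scale(d, eta), scale(n, eta * cosI - std::sqrt(k)));
            return true;
        }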
  • 7. Diffuse
    For the diffuse color, I traverse all the lights in the scene, compute the diffuse contribution from each light, and add the contributions together. For shadow checking, I compare the distance (ray time) from the hit point to the light along the shadow ray with the distance to the nearest object hit by the shadow ray; if the nearest hit is closer than the light, the point is in shadow.
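    A hedged sketch of that shadow comparison, reusing the findNear sketch above; the point is lit only if the nearest hit along the shadow ray lies beyond the light:

        bool isLit(const std::vector<Geometry*>& geometries,
                   const Vector3& hitPoint, const Vector3& lightPos) {
            Vector3 toLight = sub(lightPos, hitPoint);
            double lightT = std::sqrt(dot(toLight, toLight));   // time at which the ray reaches the light
            Ray shadowRay;
            shadowRay.origin    = hitPoint;                     // offset by a small slop factor in practice
            shadowRay.direction = scale(toLight, 1.0 / lightT);

            double blockerT;
            Vector3 unusedNormal;
            Geometry* blocker = findNear(geometries, shadowRay, blockerT, unusedNormal);
            return blocker == nullptr || blockerT > lightT;     // nothing closer than the light
        }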
  • 8. Soft Shadow
    I wrote a function named ShadowRay in the Raytracer class. It first samples a point on the spherical light source, and repeats this sampling to generate several random shadow rays. The colors from these sample shadow rays are added together and divided by the number of samples, which produces the soft shadow.
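    A sketch of that sampling loop under the assumption of a spherical light given by a center and radius; here the averaged quantity is a visibility fraction, which is equivalent to adding the per-sample colors and dividing by the sample count:

        #include <cstdlib>

        double softShadowFactor(const std::vector<Geometry*>& geometries,
                                const Vector3& hitPoint, const Vector3& lightCenter,
                                double lightRadius, int numSamples) {
            int visible = 0;
            for (int i = 0; i < numSamples; ++i) {
                // Random point on the spherical light: rejection-sample the unit ball,
                // then project onto the sphere surface.
                Vector3 v;
                double lenSq;
                do {
                    v = { 2.0 * rand() / RAND_MAX - 1.0,
                          2.0 * rand() / RAND_MAX - 1.0,
                          2.0 * rand() / RAND_MAX - 1.0 };
                    lenSq = dot(v, v);
                } while (lenSq > 1.0 || lenSq == 0.0);
                Vector3 samplePos = add(lightCenter, scale(normalize(v), lightRadius));
                if (isLit(geometries, hitPoint, samplePos)) ++visible;
            }
            return static_cast<double>(visible) / numSamples;
        }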
  • 9. Glossy Reflection
    I wrote a function named ReflectRay in the Raytracer class. To simulate glossy reflection, I randomly perturb the ideal specular reflection ray. If the perturbed ray falls below the surface it is reflected from, I set that sample's color to (0,0,0). I then choose a number of samples, generate random reflection rays, add their colors together, and divide by the number of samples to soften the result. I test whether a ray is below the surface using the dot product between the reflection direction and the surface normal; note that when the ray is leaving the object, the normal has to be reversed.
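    A hedged sketch of the perturbation and below-surface test described above (the glossiness amount and the uniform jitter are assumptions; the project may distribute the perturbation differently):

        // Perturb the ideal mirror direction; samples that end up below the surface
        // contribute (0,0,0), as described above.
        Vector3 perturbReflection(const Vector3& idealReflect, const Vector3& surfaceNormal,
                                  double glossiness, bool& belowSurface) {
            Vector3 jitter = { (2.0 * rand() / RAND_MAX - 1.0) * glossiness,
                               (2.0 * rand() / RAND_MAX - 1.0) * glossiness,
                               (2.0 * rand() / RAND_MAX - 1.0) * glossiness };
            Vector3 r = normalize(add(idealReflect, jitter));
            // Below the surface when the perturbed ray points against the normal;
            // when the ray is leaving the object, the caller passes the reversed normal.
            belowSurface = dot(r, surfaceNormal) < 0.0;
            return r;
        }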

Photon Mapping (Global Illumination)

Programming Language: C++
For this project, I took the ray tracer I wrote before and used it as the starting point for a basic photon mapper, in order to achieve caustics and diffuse inter-reflection.

  • 1. Fire photons from each light source at the scene and store them in the photon map
    The first thing I wrote for tracing the photons is the random_sphere function, which returns a vector uniformly selected from the unit ball. Then I trace each photon. For each photon, I store the normal, color, position, and incidence direction. Every time a photon intersects a diffuse object, I stop tracing it with a probability based on the direct lighting component at that point.
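    A sketch of a random_sphere-style sampler (rejection sampling from the unit ball) and of the roulette-style termination mentioned above; the survival probability is assumed to be derived elsewhere from the direct lighting at the hit point:

        #include <cstdlib>

        // Uniformly selected vector from the unit ball (normalize it when a direction is needed).
        Vector3 random_sphere() {
            Vector3 v;
            double lenSq;
            do {
                v = { 2.0 * rand() / RAND_MAX - 1.0,
                      2.0 * rand() / RAND_MAX - 1.0,
                      2.0 * rand() / RAND_MAX - 1.0 };
                lenSq = dot(v, v);
            } while (lenSq > 1.0 || lenSq == 0.0);
            return v;
        }

        // Stop tracing a photon at a diffuse hit with probability 1 - survivalProbability.
        bool terminatePhoton(double survivalProbability) {
            return static_cast<double>(rand()) / RAND_MAX > survivalProbability;
        }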
  • 2. Organize the photon map into a KD-tree
    a) Building the tree
    I use recursion and iterators to build the KD-tree. First, I choose a split axis based on the variance along the x, y, and z dimensions, picking the dimension with the largest variance. Then I choose the median along this split dimension. I wrote a function called chooseSplit for choosing the split axis and median; a sketch of this split step is shown after this item. To keep the structure compact, I do not store left and right child pointers in each node of the KD-tree.
    b) Nearest-Neighbor Search
    I use recursion to implement the KNN search. I wrote another function called insertNeighbour to insert neighbors into the neighbor list; it also keeps track of the maximum distance from the target position to any photon in the list.
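    A hedged sketch of a chooseSplit-style step: pick the axis with the largest spread, then split the photon range at its median along that axis (the Photon layout and the pointer-free, position-based node storage are assumptions for illustration):

        #include <vector>
        #include <algorithm>

        struct Color  { double r, g, b; };
        struct Photon { Vector3 position, normal, incidence; Color color; };

        int chooseSplit(std::vector<Photon>::iterator first,
                        std::vector<Photon>::iterator last) {
            double mean[3] = {0, 0, 0}, spread[3] = {0, 0, 0};
            const double n = static_cast<double>(last - first);
            for (auto it = first; it != last; ++it) {
                const double p[3] = { it->position.x, it->position.y, it->position.z };
                for (int a = 0; a < 3; ++a) mean[a] += p[a] / n;
            }
            for (auto it = first; it != last; ++it) {
                const double p[3] = { it->position.x, it->position.y, it->position.z };
                for (int a = 0; a < 3; ++a) spread[a] += (p[a] - mean[a]) * (p[a] - mean[a]);
            }
            int axis = 0;                                     // dimension with the largest variance
            if (spread[1] > spread[axis]) axis = 1;
            if (spread[2] > spread[axis]) axis = 2;

            // Put the median photon in the middle; the two halves become the subtrees,
            // so the tree can be stored by position without explicit child pointers.
            auto mid = first + (last - first) / 2;
            std::nth_element(first, mid, last, [axis](const Photon& a, const Photon& b) {
                const double pa[3] = { a.position.x, a.position.y, a.position.z };
                const double pb[3] = { b.position.x, b.position.y, b.position.z };
                return pa[axis] < pb[axis];
            });
            return axis;
        }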
  • 3. Radiance estimate
    I replaced the ambient estimate with a radiance estimate that uses the KNN search over the photon map. The indirect illumination also depends on the number of photons and the light's wattage.
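    As an assumption about the form of that estimate, the standard photon-map density estimate sums the power of the k nearest photons and divides by the disc area pi * r^2, where r is the distance to the farthest neighbour returned by the KNN search:

        Color radianceEstimate(const std::vector<const Photon*>& neighbours, double maxDistance) {
            const double kPi = 3.14159265358979323846;
            Color sum = {0, 0, 0};
            for (const Photon* p : neighbours) {      // photon power carried in its color (wattage)
                sum.r += p->color.r;
                sum.g += p->color.g;
                sum.b += p->color.b;
            }
            double area = kPi * maxDistance * maxDistance;
            return { sum.r / area, sum.g / area, sum.b / area };
        }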

Subdivision

Programming Language: C, C++
For this project, I used shaders to render the mesh with a mirror-like material. With each press of the ‘y’ key, the mesh is subdivided using the Loop subdivision algorithm.

  • Firstly, I traverse all the triangles and use their information to create all the edges, adding each edge to an edge list; if an edge already exists in the list, it is not added again. This produces a list of all the edges, along with whether each one is a boundary edge.
  • Then, based on the edge information, I calculate the positions of the new vertices and append them to the end of the vertex list (the standard Loop rules are sketched after this list).
  • After the new vertices are created, the previous triangle list is no longer valid, so I build a new one: each old triangle becomes four smaller ones. Because the new vertices were appended in the order the edges were traversed, the index of a new vertex in the vertex list is the index of its edge plus the size of the old vertex list. I traverse all the old triangles, create the four new triangles for each, and add them to the new triangle list. After that, I replace the old triangle list and free the memory it used.
  • Next is the modification of the old vertices. Firstly, for every edge, I mark whether its two vertices are boundary vertices. If a vertex is not a boundary vertex, I put all of its neighbouring vertices into its neighbour list; if it is a boundary vertex, I only put the neighbouring vertices that are also on the boundary into its list. Then, following the Loop subdivision rules sketched below, I compute the new positions of the old vertices.
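    A sketch of the standard Loop subdivision position rules referred to in the steps above, reusing the small illustrative Vector3 helpers from the ray tracer sketches:

        #include <vector>
        #include <cmath>

        // New (edge) vertex: interior edges use the 3/8-1/8 mask over the edge's two endpoints
        // and the two opposite vertices; boundary edges use the edge midpoint.
        Vector3 newEdgeVertex(const Vector3& v0, const Vector3& v1,
                              const Vector3& opposite0, const Vector3& opposite1,
                              bool isBoundary) {
            if (isBoundary)
                return scale(add(v0, v1), 0.5);
            return add(scale(add(v0, v1), 3.0 / 8.0),
                       scale(add(opposite0, opposite1), 1.0 / 8.0));
        }

        // Old (even) vertex: interior vertices are blended with all n neighbours using the Loop
        // weight beta; boundary vertices use only their two boundary neighbours.
        Vector3 updateOldVertex(const Vector3& v, const std::vector<Vector3>& neighbours,
                                bool isBoundary) {
            if (isBoundary)
                return add(scale(v, 3.0 / 4.0),
                           scale(add(neighbours[0], neighbours[1]), 1.0 / 8.0));
            const double kPi  = 3.14159265358979323846;
            const double n    = static_cast<double>(neighbours.size());
            const double c    = 3.0 / 8.0 + 0.25 * std::cos(2.0 * kPi / n);
            const double beta = (5.0 / 8.0 - c * c) / n;
            Vector3 sum = {0, 0, 0};
            for (const Vector3& nb : neighbours) sum = add(sum, nb);
            return add(scale(v, 1.0 - n * beta), scale(sum, beta));
        }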

OpenGL Mesh Rendering

Programming Language: C, C++
For this project, I implemented basic camera and lighting functionality, and rendered an imported object (a pool) together with an animated water surface driven by a heightmap.

How I implemented it:

  • Set the projection, modelview and transform matrices in order to let the camera correctly see the object.
    I use “gluPerspective” to specify the perspective projection and “gluLookAt” to set the modelview matrix.
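    A sketch of that setup with placeholder values (the actual field of view, eye position, and look-at point are not part of the original write-up):

        #include <GL/gl.h>
        #include <GL/glu.h>

        void setupCamera(int windowWidth, int windowHeight) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            // vertical field of view, aspect ratio, near plane, far plane
            gluPerspective(45.0, (double)windowWidth / windowHeight, 0.1, 100.0);

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            // eye position, look-at point, up vector
            gluLookAt(0.0, 5.0, 10.0,   0.0, 0.0, 0.0,   0.0, 1.0, 0.0);
        }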
  • Use OpenGL fixed-functionality to add lights to the scene to correctly shade the scene.
  • Create the materials for the pool and water, including the ambient, diffuse and shininess effects.
    I set the shade model to “smooth”, which gives a more realistic result than “flat”, and create different materials for the different objects.
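    A fixed-function sketch of the lighting and material setup described in the two points above; the light position and the material values are placeholders, not the project's actual numbers:

        #include <GL/gl.h>

        void setupLightingAndMaterial() {
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glShadeModel(GL_SMOOTH);                           // smooth (Gouraud) shading

            GLfloat lightPos[]  = { 0.0f, 10.0f, 0.0f, 1.0f };
            GLfloat lightDiff[] = { 1.0f, 1.0f, 1.0f, 1.0f };
            glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
            glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiff);

            GLfloat matAmbient[] = { 0.1f, 0.2f, 0.3f, 1.0f }; // e.g. a water-like tint
            GLfloat matDiffuse[] = { 0.2f, 0.4f, 0.8f, 1.0f };
            glMaterialfv(GL_FRONT, GL_AMBIENT,  matAmbient);
            glMaterialfv(GL_FRONT, GL_DIFFUSE,  matDiffuse);
            glMaterialf(GL_FRONT,  GL_SHININESS, 64.0f);
        }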
  • Generate the mesh for the water heightmap.
    I set the resolution to 64, which means there are 64 × 64 vertices in the water mesh. I store all the vertices in an array in column-major order. Then I create a triangle list to store all the triangles of the water mesh, with the three vertices of each triangle stored in counter-clockwise order.
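    A sketch of building the index list for such a grid, assuming column-major vertex storage and counter-clockwise winding (the exact orientation depends on the axes used):

        #include <vector>

        std::vector<unsigned int> buildWaterIndices(int resolution) {   // resolution = 64 here
            std::vector<unsigned int> indices;
            for (int col = 0; col < resolution - 1; ++col) {
                for (int row = 0; row < resolution - 1; ++row) {
                    // Column-major: vertex (col, row) is stored at index col * resolution + row.
                    unsigned int i00 = col * resolution + row;
                    unsigned int i01 = col * resolution + (row + 1);
                    unsigned int i10 = (col + 1) * resolution + row;
                    unsigned int i11 = (col + 1) * resolution + (row + 1);
                    // Two triangles per grid cell.
                    indices.insert(indices.end(), { i00, i10, i11 });
                    indices.insert(indices.end(), { i00, i11, i01 });
                }
            }
            return indices;
        }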
  • Compute per-vertex normals for the pool model and water heightmap’s meshes.
    When computing the normals, I use the same method for both the pool and the water heightmap. First, I use the three vertices of each triangle to compute that triangle's plane normal. Since every vertex belongs to more than one triangle, I then sum all of a vertex's face normals to generate its vertex normal.
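    A sketch of that accumulation, using a small Vector3 type with cross/sub/add/normalize helpers like the ones in the earlier sketches (assumed here, not the project's actual math classes):

        #include <vector>

        void computeVertexNormals(const std::vector<Vector3>& vertices,
                                  const std::vector<unsigned int>& indices,
                                  std::vector<Vector3>& normals) {
            normals.assign(vertices.size(), {0, 0, 0});
            for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
                const Vector3& a = vertices[indices[i]];
                const Vector3& b = vertices[indices[i + 1]];
                const Vector3& c = vertices[indices[i + 2]];
                Vector3 faceNormal = cross(sub(b, a), sub(c, a));    // plane normal of this triangle
                for (int k = 0; k < 3; ++k)                          // add it to the triangle's vertices
                    normals[indices[i + k]] = add(normals[indices[i + k]], faceNormal);
            }
            for (Vector3& n : normals) n = normalize(n);             // averaged direction per vertex
        }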
  • Draw the objects in the scene.
    I use “glDrawElements” to draw both the pool and the water, and “glVertexPointer” and “glNormalPointer” to specify where to get the vertex and normal data.
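    A sketch of that draw path with placeholder array names:

        #include <GL/gl.h>

        void drawMesh(const float* vertexArray, const float* normalArray,
                      const unsigned int* indexArray, int indexCount) {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, vertexArray);   // where to read vertex positions
            glNormalPointer(GL_FLOAT, 0, normalArray);      // where to read per-vertex normals
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indexArray);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }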
  • Implement the depth testing.
    I call glEnable(GL_DEPTH_TEST) to enable depth testing.