By the end of this lab, you will:
In this lab, we're going to take a virtual picture of a bird flying over a lake (similar to this picture). I couldn't find a model of a heron, so we'll use a raven model, which I found here.
To get started with the lab, click on the link in the Lab 4 Assignment Link post on our Ed discussion board. When you preview the initial template (via Go Live), you should see a blue canvas, which is the color of the lake.
We're going to develop our own virtual camera in this lab, point it at the bird, and take a picture. We'll also model the situation in which the bird is rotating, and add a reflection of the bird in the water.
Most of the implementation for this lab will be in the Camera takePicture function as well as the Triangle constructor in camera.js. Of course, you may want to write additional helper functions since some functionality is reused, like calculating the color of the bird from our lighting model.
There are two objects in the scene: the bird and the lake. Each of these has a center and a color attribute. The details of the underlying classes for these two objects are implemented in utils.min.js, which you do not need to inspect in this lab. Just note that both of these objects have an intersect method which takes in a Ray object and returns information about the closest intersection point in JSON format:
{
  t: {Number}   // ray parameter of intersection
  p: {vec3}     // surface point (which is equal to ray.origin + t * ray.direction)
  n: {vec3}     // surface normal (in world space) at intersection
  km: {vec3}    // material diffuse reflection coefficient
}
A lot of the setup will look similar to what we had in previous labs. In this lab, however, the dimensions of the image plane (width w and height h) are already calculated for you.
Please create rays and intersect them with the bird object, using the equations in the notes for pointing a camera at a specific target point. The target point is bird.center and the camera is placed at this.eye (of the Camera object). Use an "up" direction of (0, 1, 0). You can either compute the change-of-basis vectors yourself, or use the glMatrix targetTo function (but be careful that it includes a translation).
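As a minimal sketch (not a required implementation), here is one way to extract the change-of-basis vectors from targetTo, assuming mat4 and vec3 are available as in previous labs; the names u, v and wBasis are illustrative, and the conventions should be matched to the notes:

const m = mat4.create();
mat4.targetTo(m, this.eye, bird.center, vec3.fromValues(0, 1, 0));
// glMatrix matrices are column-major: the first three columns hold the basis
// vectors, and the fourth column holds the translation (the eye), which we ignore.
const u = vec3.fromValues(m[0], m[1], m[2]);       // "right"
const v = vec3.fromValues(m[4], m[5], m[6]);       // "up"
const wBasis = vec3.fromValues(m[8], m[9], m[10]); // points from the target back toward the eye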
As a first debugging step, please just check for an intersection with the bird object using ixnB = bird.intersect(ray). If there is an intersection, set the color to ixnB.km, which should look like the following image:
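A minimal sketch of this check inside takePicture, assuming intersect returns a falsy value when there is no hit and that the template's handling of the background color may differ:

let color = lake.color;   // fall back to the lake color (the blue background) if nothing is hit
const ixnB = bird.intersect(ray);
if (ixnB) {
  color = ixnB.km;        // paint the raw diffuse coefficient for now; shading comes next
}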
We will now add a light, which will be modeled as the distant sun, so we only have a direction. Use a direction to the light of
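As a sketch of what a helper for the diffuse term might look like (the function name is hypothetical, L stands for the normalized light direction given above, and the lighting model in the notes may include additional terms):

// ixn is an intersection record as described earlier; L is a normalized vec3 toward the light.
function shadeDiffuse(ixn, L) {
  const ndotl = Math.max(vec3.dot(ixn.n, L), 0.0);
  return vec3.scale(vec3.create(), ixn.km, ndotl);  // km * max(n . l, 0)
}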
If the camera ray does not intersect the bird, check for an intersection between the camera ray and the lake: ixnL = lake.intersect(ray). Then cast a reflection ray off of the lake and check if the reflected ray intersects the bird. If so, set the color of the pixel to the shaded bird color from this new intersection. No need to make your color calculation recursive since there will only be one bounce for this scene. If the reflection ray does not intersect the bird, set the color directly to ixnL.km (no need to shade the lake).
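One way to build the reflection ray is the mirror formula r = d - 2(d.n)n. The sketch below assumes the Ray constructor takes an origin and a direction, as in earlier labs, and offsets the origin slightly along the normal to avoid re-hitting the lake due to floating-point error:

const d = ray.direction;                 // incoming camera ray direction
const n = ixnL.n;                        // lake normal at the hit point
const r = vec3.scaleAndAdd(vec3.create(), d, n, -2.0 * vec3.dot(d, n));
const origin = vec3.scaleAndAdd(vec3.create(), ixnL.p, n, 1e-4);
const reflRay = new Ray(origin, r);
const ixnB2 = bird.intersect(reflRay);   // if non-null, shade the bird; otherwise use ixnL.km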
This last part should be implemented entirely in the Triangle constructor. The bird model is represented with a soup of triangles called a mesh, which is read from the raven.obj file (OBJ files are a common format for representing 3D models). Each triangle is constructed from three vec3's as we did in Lab 2, which are saved as a, b and c in each Triangle object. The constructor now takes a fourth parameter n, which is the outward normal vector to use for the triangle (saved in this.normal). When an intersection occurs, this.normal is returned as the normal to the surface at the intersection point.
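For reference, a minimal sketch of what the constructor stores, based on the description above (the template may organize this slightly differently):

class Triangle {
  constructor(a, b, c, n) {
    this.a = a;        // first vertex (vec3)
    this.b = b;        // second vertex (vec3)
    this.c = c;        // third vertex (vec3)
    this.normal = n;   // outward normal, returned at intersection points
  }
}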
The axis of the bird from tail to beak is aligned with the z-axis. Your job in this part is to rotate the bird about the z-axis, with the bird center as the center of rotation, by transforming the a, b and c points directly in the Triangle constructor. Hint: think about translating, rotating and then translating back.
There are various ways (and glMatrix functions) to do this. Please review the notes on homogeneous coordinates in order to transform a, b and c. Note that there is a very convenient function in glMatrix called vec3.transformMat4 (please look up the documentation).
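A hedged sketch of this preprocessing step, where theta and center are placeholders for the rotation angle and the bird center you are given:

// Build M = T(center) * Rz(theta) * T(-center). glMatrix post-multiplies,
// so reading the calls top to bottom composes the matrices left to right.
const M = mat4.create();
mat4.translate(M, M, center);                              // 3) move back to the original position
mat4.rotateZ(M, M, theta);                                 // 2) rotate about the z-axis
mat4.translate(M, M, vec3.negate(vec3.create(), center));  // 1) move the bird center to the origin
// Applied to a point, the bottom translation acts first.
this.a = vec3.transformMat4(vec3.create(), a, M);
this.b = vec3.transformMat4(vec3.create(), b, M);
this.c = vec3.transformMat4(vec3.create(), c, M);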
Here is what a rotation of
Yes, this means we are calculating the same transformation matrix for every triangle, but it's okay because it's just a preprocessing step to our rendering algorithm.
Remember to transform the incoming normal vector too! Please see the notes for a description of how to transform normal vectors. Is vec3.transformMat4 appropriate for this transformation? Think about whether vectors are affected by a translation.
This question can be answered at any point while working on the lab. Similar to Labs 2 and 3, please upload a picture of your response to your repository (or type it in the README.md file using LaTeX).
In the notes we have the
Let's omit the
Please express this as a transformation of the pixel indices i and j in the following form:
where
There is an additional flag that you can pass to the Lake constructor in index.html. Find where the Lake object is created and change the last parameter passed to the constructor to true. This will take a bit longer to render (about 2x for me) because the km value returned by the lake intersection will be looked up in the water.jpeg image, which takes a little extra time. More on techniques to do this in a few weeks! You can also add shadows if you want, but this is optional since we just practiced that in Lab 3.
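For illustration only, the argument list below is hypothetical; keep whatever arguments the template already passes and flip only the final flag:

// in index.html -- change only the last argument from false to true
const lake = new Lake(lakeCenter, lakeColor, true);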
The initial submission for the lab is due on Thursday 3/27 at 11:59pm EDT. Please see the Setup page for instructions on how to commit and push your work to your GitHub repository and then submit your repository to Gradescope (in the Lab 4 assignment). I will then provide feedback within 1 week so you can edit your submission.