Our main task is to animate snow, building upon the particle animation we did in class on Tuesday. The initial template for this lab doesn't have any code (see the assignment link on our discussion board). After accepting the assignment, you should start by creating a new file and copying in the contents of the index.html you wrote in class on Tuesday. If you weren't in class, start from the template here and then go through the reading to complete the exercise (all the code snippets you need are in the reading, starting from the Animating Particles (efficiently) section).
Your repository contains an image (snowflake.png) to texture the snowflakes (instead of texturing the particles as gummy bears), as well as some images to either add a background cabin scene (cabin.jpeg) or texture the Earth (earth.jpg) for the features in Part 5.
This part should be quick - update your code to load snowflake.png and render snowflakes.
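If it helps, here is a minimal sketch of loading the image as a texture; the uniform name u_Snowflake and the variable names are assumptions, so adapt them to your setup from Tuesday's class:

const image = new Image();
image.src = "snowflake.png";
image.onload = () => {
  const texture = gl.createTexture();
  gl.activeTexture(gl.TEXTURE0); // use texture unit 0
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  gl.generateMipmap(gl.TEXTURE_2D);
  gl.useProgram(program);
  gl.uniform1i(gl.getUniformLocation(program, "u_Snowflake"), 0);
};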
In the example from Tuesday, we assumed the velocity was constant. Now, the velocity should be updated using the gravity and drag forces. Specifically, the acceleration is

$$\vec{a} = \vec{g} - k\,\vec{v},$$

where $\vec{g}$ is the gravitational acceleration, $k$ is a drag coefficient, and $\vec{v}$ is the current particle velocity (the attribute a_Velocity).

Recall that the Euler update to the velocity (to calculate the varying v_Velocity) is:

$$\vec{v}_{\text{new}} = \vec{v} + \Delta t\,\vec{a}.$$

The update to the position (to calculate the varying v_Position) is:

$$\vec{x}_{\text{new}} = \vec{x} + \Delta t\,\vec{v}_{\text{new}}.$$
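Putting this together, here is a sketch of what the transform feedback vertex shader might look like; the uniform u_dt and the gravity and drag constants are assumptions (tune them until the motion looks right):

const updateVertexShader = `#version 300 es
in vec3 a_Position;
in vec3 a_Velocity;
out vec3 v_Position;
out vec3 v_Velocity;
uniform float u_dt; // time step (assumed uniform name)
void main() {
  vec3 gravity = vec3(0.0, -9.81, 0.0); // gravitational acceleration
  float k = 0.5; // drag coefficient (assumed value)
  vec3 a = gravity - k * a_Velocity; // acceleration from gravity + drag
  v_Velocity = a_Velocity + u_dt * a; // Euler update to the velocity
  v_Position = a_Position + u_dt * v_Velocity; // Euler update to the position
  // set gl_Position / gl_PointSize here if this pass also renders the points
}`;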
You'll need to create initial velocity data on the JavaScript side (and buffer it to the GPU). One option is to set it to zero, but I would recommend using random velocities for all the particles to start the simulation:
let velocity = new Float32Array(dim * nParticles);
for (let i = 0; i < nParticles * dim; i++) {
  velocity[i] = -0.5 + Math.random(); // random component in [-0.5, 0.5)
}
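This data is buffered to the GPU like any other attribute; here is a sketch (the attribute name matches the shader sketch above, and gl.DYNAMIC_COPY is used since transform feedback will rewrite the buffer):

const velocityBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, velocityBuffer);
gl.bufferData(gl.ARRAY_BUFFER, velocity, gl.DYNAMIC_COPY);
const aVelocity = gl.getAttribLocation(program, "a_Velocity");
gl.vertexAttribPointer(aVelocity, dim, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(aVelocity);
// (with transform feedback ping-ponging you'll need a second buffer, as in class)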
Once this is implemented (with all the transform feedback stuff complete), the snow should move! Try increasing the number of particles (nParticles) to 10,000 or so.
Similar to the example from Tuesday, particles are contained within a cube with corners at (-1, -1, -1) and (+1, +1, +1). When a particle falls below the bottom of the cube (i.e. when v_Position.y < -1), please "respawn" the snow particle to the top of the domain. In other words, set the y-coordinate of v_Position to 1 when this happens.
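In the update shader, this can be a couple of lines at the end of main(), after the Euler update (a sketch):

if (v_Position.y < -1.0) {
  v_Position.y = 1.0; // respawn the flake at the top of the cube
}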
Let's make the snow accumulate on a sphere with radius r centered at the origin. When v_Position is inside the sphere (i.e. length(v_Position) < r), set v_Velocity to vec3(0.0) so the flake stops moving and the snow piles up on the sphere.
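Again, a sketch of the shader logic, assuming the sphere is centered at the origin (the radius value here is an assumption - use whatever your scene uses):

float radius = 0.5; // assumed sphere radius
if (length(v_Position) < radius) { // the flake is inside the sphere
  v_Velocity = vec3(0.0); // freeze it so the snow accumulates
}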
For a complete lab, please pick one of the following features to implement. You're also free to propose your own feature. For Features 2, 3 or 4, you might want to use a second shader program during the second rendering pass (and make sure only the necessary attributes are enabled). The video at the bottom of this page shows what these features should look like.
Attract the snow to the mouse. This involves (1) listening for mouse movement (with a mousemove event listener), (2) determining the world coordinates from the mouse (screen) coordinates, (3) writing these world mouse coordinates to the shader program (as a uniform) and then (4) adding another force that attracts snow to the mouse. For step (2), I would recommend using event.offsetX and event.offsetY for the screen coordinates of the mouse. You'll also need canvas.width and canvas.height to determine the relative coordinates in the screen (and remember the HTML canvas has its origin at the top-left corner, with the y-axis pointing down). Note: A nice way to transform the mouse coordinates to world space is by casting a ray from the eye through the pixel where the mouse is. Then you can intersect this ray with a plane in the scene to determine where the mouse is in world space. One option is to use a plane that passes through the origin with the normal pointing towards the eye; a sketch of these steps is below.
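Here is a sketch of steps (1)-(3), under some assumptions: the camera sits at eye on the +z-axis looking down the -z-axis with a vertical field of view fov, the plane is z = 0, and the uniform is named u_Mouse (adjust all of these to match your scene):

const eye = [0, 0, 3]; // assumed camera position
const fov = Math.PI / 4; // assumed vertical field of view
canvas.addEventListener("mousemove", (event) => {
  // (2) pixel coordinates -> normalized device coordinates in [-1, 1]
  //     (flip y since the canvas origin is at the top-left)
  const x = (2 * event.offsetX) / canvas.width - 1;
  const y = 1 - (2 * event.offsetY) / canvas.height;
  // cast a ray from the eye through this pixel (camera looks down -z)
  const aspect = canvas.width / canvas.height;
  const h = Math.tan(fov / 2); // half-height of the image plane at distance 1
  const dir = [x * aspect * h, y * h, -1];
  // intersect the ray eye + t * dir with the plane z = 0
  const t = -eye[2] / dir[2];
  const mouse = [eye[0] + t * dir[0], eye[1] + t * dir[1], 0];
  // (3) write the world mouse coordinates to the shader program
  gl.useProgram(program);
  gl.uniform3fv(gl.getUniformLocation(program, "u_Mouse"), mouse);
});

For step (4), one option is to add a term proportional to u_Mouse - a_Position to the acceleration in the update shader.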
Add a background using the cabin.jpeg image. You'll need two rendering pipeline passes to do this: one for the particles and another for a "full-screen quad" to render the background. A "full-screen quad" is defined as a rectangle with vertices (-1, -1), (+1, -1), (+1, +1) and (-1, +1) and triangles (0, 1, 2) and (0, 2, 3). For the z-coordinate of the quad, one option is to set it to 0 and disable DEPTH_TEST before drawing the two triangles for the background, and then re-enable DEPTH_TEST before drawing the particles. Since the quad vertices are already defined in the viewing space, there is no need to transform them by a model, view or projection matrix here. The texture coordinates can be computed in the fragment shader from a variable called gl_FragCoord, which holds the pixel coordinates: vec2 texcoord = gl_FragCoord.xy / vec2(width, height), where width and height are the number of pixels in the horizontal and vertical directions of the canvas. You can write these to your shader as uniforms, or you can hard-code them (and make sure you set these hard-coded values when creating the <canvas> element).
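Here is a sketch of the background pass, assuming the quad data has been buffered, cabin.jpeg has been bound as a texture, and a simple second shader program for the quad is in use:

const quad = new Float32Array([-1, -1, +1, -1, +1, +1, -1, +1]); // quad vertices
const tris = new Uint16Array([0, 1, 2, 0, 2, 3]); // the two triangles
// ... buffer quad and tris, bind the cabin texture, use the quad's program ...
gl.disable(gl.DEPTH_TEST); // the background should not hide the particles
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);
gl.enable(gl.DEPTH_TEST); // restore before drawing the particles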
Ray trace the sphere as the Earth using earth.jpg. Again, you should use two rendering pipeline passes to do this (i.e. two calls to gl.drawArrays or gl.drawElements, depending on how you implement this). You should use a full-screen quad as described in Feature #2. It's probably simpler to use the same shader program and add some logic (via a uniform) to determine if the render pass is for the snow particles or the sphere. Recall that gl_FragCoord has the pixel coordinates, which you can use to cast rays. Another hint: see the "Exercise: a ray tracer in a rasterizer?" example in the Lecture 08 notes.
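As a starting point, here is a sketch of the ray-sphere test in the fragment shader; it assumes the ray origin e and normalized direction d have already been computed from gl_FragCoord, the sphere has center c and radius r, and the Earth texture is in a sampler called u_Earth (all names are assumptions):

// ray: p(t) = e + t * d, sphere: |p - c|^2 = r^2
vec3 ec = e - c;
float B = dot(d, ec);
float C = dot(ec, ec) - r * r;
float disc = B * B - C; // discriminant (d is assumed normalized)
if (disc < 0.0) discard; // the ray misses the sphere
float t = -B - sqrt(disc); // closest intersection
vec3 n = normalize(e + t * d - c); // unit normal at the intersection point
// spherical coordinates -> texture coordinates for earth.jpg
float pi = 3.141592653589793;
vec2 texcoord = vec2(0.5 + atan(n.z, n.x) / (2.0 * pi), 0.5 - asin(n.y) / pi);
color = texture(u_Earth, texcoord); // color is the fragment shader output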
Add some model of your choosing (maybe one you proposed for the final rendering project)! Again, you'll need two rendering passes to do this. You might need to scale your model and translate it to the origin (i.e. set a model matrix to do this) in order to fit it into the scene.
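For example, here is a sketch of a model matrix (column-major, as WebGL expects) that scales the model and moves its center to the origin; center and extent are assumed to come from your model's bounding box:

const s = 1.0 / extent; // scale so the model fits in a unit cube
const modelMatrix = new Float32Array([
  s, 0, 0, 0,
  0, s, 0, 0,
  0, 0, s, 0,
  -s * center[0], -s * center[1], -s * center[2], 1,
]);
gl.uniformMatrix4fv(gl.getUniformLocation(program, "u_ModelMatrix"), false, modelMatrix);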
Here's an idea for a post-semester project :)
To really make this look like a snow globe (even though the snow is on the outside of the sphere here...), we can give the sphere a glass-like material and use refraction to model how rays bend when they (1) enter the sphere and (2) exit the sphere. After exiting, they will intersect the background, which we can then use to look up the fragment color.
Rays will refract in the direction defined in the Chapter 07 notes. Luckily, GLSL has a special built-in function called refract which computes this refraction direction for us. Be careful with the ratio of refraction indices when either entering or exiting the material. You can set the refraction index of glass to 1.5. This feature will allow you to render a scene like the one at the top-right of this page - refractive materials make things look upside down (do a Google image search for "refractive ball").
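Here is a sketch of how refract might be used in the fragment shader, where d is the normalized ray direction and n and n2 are the outward unit normals at the entry and exit points (these names are assumptions):

vec3 d1 = refract(d, n, 1.0 / 1.5); // entering: eta = n_air / n_glass
// ... intersect the ray (entry point, direction d1) with the sphere again
//     to find the exit point and its outward normal n2 ...
vec3 d2 = refract(d1, -n2, 1.5); // exiting: eta = n_glass / n_air
// the normal passed to refract must point against the incoming ray,
// hence the -n2 when exiting the sphere
// finally, intersect d2 with the background to look up the fragment color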
The initial submission for the lab is due on Thursday 5/8 at 11:59pm EDT. Please see the Setup page for instructions on how to commit and push your work to your GitHub repository and then submit your repository to Gradescope (in the Lab 10 assignment).
Remember, if you see a git error when you try to sync changes, you might need to run the following two commands in the Terminal before being able to sync (push) your changes:
git config --global pull.rebase true
git pull