Lab 10: Snow Globe

Goals

  • Practice some more with textures.
  • Use transform feedback to ping-pong the updated position and velocity buffers.
  • Implement Euler's method to animate particles representing snow.
  • Practice independently setting up the WebGL rendering pipeline.
  • (Maybe) Revisit ray tracing!

Our main task is to animate snow, building upon the particle animation we did in class on Tuesday. The initial template for this lab doesn't have any code (see the assignment link on our discussion board). After accepting the assignment, start by creating a new file and copying in the contents of index.html from the code you wrote in class on Tuesday. If you weren't in class, start from the template here and then go through the reading to complete the exercise (all the code snippets you need are in the reading, starting from the Animating Particles (efficiently) section).

Your repository contains an image (snowflake.png) to texture the snowflakes (instead of texturing the particles as gummy bears) as well as some images to either add a background cabin scene (cabin.jpeg) or texture the Earth (earth.jpg) for the features in Part 5.

Part 1: Render snowflakes instead of gummy bears.

This part should be quick - update your code to load snowflake.png and render snowflakes.

Part 2: Extend your code to update velocity in the vertex shader (and capture the result with transform feedback).

In the example from Tuesday, we assumed the velocity was constant. Now, the velocity should be updated using the gravity and drag forces. Specifically, the acceleration a is:

a = g − (ρ C_d A)/(2m) ‖v_k‖ v_k,

where g = (0, −9.81, 0) m/s², C_d = 0.5 (the drag coefficient), ρ = 1.022 kg/m³, m = 3×10⁻⁶ kg and A = 2.83×10⁻⁵ m². The velocity v_k is the current particle velocity (an input to the shader, which you might call a_Velocity).

Recall that the Euler update to the velocity (to calculate the varying v_Velocity) is:

v_{k+1} = v_k + Δt a.

The update to the position (to calculate the varying v_Position) is:

p_{k+1} = p_k + Δt v_k.

Δt can be set to 5×10⁻⁴ seconds.
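To make the update concrete, here is a CPU-side sketch of the same math in JavaScript (in the lab the update happens in the vertex shader; the constants are the ones listed above, and the function name eulerStep is just for illustration):

```javascript
// One explicit Euler step for a single snow particle (CPU-side sketch of
// the math the vertex shader performs). Constants from the lab handout.
const g = [0, -9.81, 0];   // gravity (m/s^2)
const Cd = 0.5;            // drag coefficient
const rho = 1.022;         // air density (kg/m^3)
const m = 3e-6;            // particle mass (kg)
const A = 2.83e-5;         // cross-sectional area (m^2)
const dt = 5e-4;           // time step (s)

function eulerStep(p, v) {
  // acceleration: a = g - (rho * Cd * A)/(2m) * |v| * v
  const speed = Math.hypot(v[0], v[1], v[2]);
  const k = (rho * Cd * A) / (2 * m);
  const a = [
    g[0] - k * speed * v[0],
    g[1] - k * speed * v[1],
    g[2] - k * speed * v[2],
  ];
  // v_{k+1} = v_k + dt * a, then p_{k+1} = p_k + dt * v_k
  const vNext = [v[0] + dt * a[0], v[1] + dt * a[1], v[2] + dt * a[2]];
  const pNext = [p[0] + dt * v[0], p[1] + dt * v[1], p[2] + dt * v[2]];
  return [pNext, vNext];
}
```

Note that with zero initial velocity the drag term vanishes and the first step is pure gravity.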

You'll need to create initial velocity data on the JavaScript side (and buffer it to the GPU). One option is to set it to zero, but I would recommend using random velocities for all the particles to start the simulation:

let velocity = new Float32Array(dim * nParticles);
for (let i = 0; i < nParticles * dim; i++) {
  velocity[i] = -0.5 + Math.random();
}

Once this is implemented (with all the transform feedback stuff complete), the snow should move! Try increasing the number of particles (nParticles) to 10,000 or so.

Part 3: Respawn snow particles once they fall below y = −1.

Similar to the example from Tuesday, particles are contained within a cube with corners at (±1, ±1, ±1), i.e. p ∈ [−1, 1]³. Unfortunately, the particles will usually fall below the screen after a certain time, specifically when the y-coordinate of a particle drops below −1. When this happens (v_Position.y < -1.), please "respawn" the snow particle at the top of the domain. In other words, set the y-coordinate of v_Position to 1 when this happens.
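The respawn check is a single conditional; here is a CPU-side sketch in JavaScript (the GLSL version in your vertex shader is the same test on v_Position):

```javascript
// Respawn a particle at the top of the [-1, 1]^3 domain once it falls
// below y = -1 (mirrors the check done in the vertex shader).
function respawn(p) {
  if (p[1] < -1.0) {
    return [p[0], 1.0, p[2]]; // reset y to the top, keep x and z
  }
  return p; // still inside the domain: leave the particle alone
}
```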

Part 4: Accumulate snow on a sphere.

Let's make the snow accumulate on a sphere with radius R=0.2 centered at the origin (0,0,0). When v_Position is inside the sphere, set v_Velocity to (0,0,0). You'll need to come up with an expression to determine whether the particle is inside (or on the surface of) the sphere. When this works, you should see something like the following:
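One way to write the inside-the-sphere test, sketched here in JavaScript (in GLSL you can use dot(v_Position, v_Position) for the squared distance):

```javascript
// Test whether a particle is inside (or on) a sphere of radius R centered
// at the origin: compare the squared distance with R^2 to avoid a sqrt.
const R = 0.2;
function insideSphere(p) {
  const d2 = p[0] * p[0] + p[1] * p[1] + p[2] * p[2];
  return d2 <= R * R;
}
```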

Part 5: Choose your own adventure.

For a complete lab, please pick one of the following features to implement. You're also free to propose your own feature. For Features 2, 3 or 4, you might want to use a second shader program to use during the second rendering pass (and make sure only the necessary attributes are enabled). The video at the bottom of this page shows what these features should look like.

  1. Add a wind force that attracts snow to the mouse location. This will involve (1) adding a callback for the mouse motion (mousemove event listener), (2) determining the world coordinates from the mouse (screen) coordinates, (3) writing these world mouse coordinates to the shader program (as a uniform) and then (4) adding another force that attracts snow to the mouse. For step (2), I would recommend using event.offsetX and event.offsetY for the screen coordinates of the mouse. You'll also need canvas.width and canvas.height to determine the relative coordinates in the screen (and remember the HTML canvas has y pointing downwards). For step (4), you can use the wind force f_w = α r/‖r‖³, where r = m − p (m are the coordinates of the mouse in world space and p is still the particle position, i.e. a_Position). I found α = 10⁻⁴ gave nice results, but please feel free to experiment with the constant and the wind force model.

Note: A nice way to transform the mouse coordinates to world space is by casting a ray from the eye through the pixel where the mouse is. Then you can intersect this ray with a plane in the scene to determine where the mouse is in world space. One option is to use a plane with the normal n=(0,0,1) centered at c=(0,0,0) - see the notes from Chapter 03 on how to calculate a ray-plane intersection.
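The pieces of this feature can be sketched as follows. The eye position and ray direction here are placeholders for illustration; in your code, build the ray from your own camera parameters:

```javascript
// Step (2a): screen -> normalized device coordinates.
// Canvas y points down, so the y-coordinate is flipped.
function screenToNdc(offsetX, offsetY, width, height) {
  const x = 2 * offsetX / width - 1;
  const y = 1 - 2 * offsetY / height;
  return [x, y];
}

// Step (2b): intersect the ray p(t) = e + t d with the plane through c
// with normal n, using t = dot(c - e, n) / dot(d, n).
function rayPlane(e, d, c, n) {
  const dot = (u, v) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
  const t = dot([c[0] - e[0], c[1] - e[1], c[2] - e[2]], n) / dot(d, n);
  return [e[0] + t * d[0], e[1] + t * d[1], e[2] + t * d[2]];
}

// Step (4): wind force f_w = alpha * r / |r|^3 with r = mouse - p.
function windForce(mouse, p, alpha) {
  const r = [mouse[0] - p[0], mouse[1] - p[1], mouse[2] - p[2]];
  const len = Math.hypot(r[0], r[1], r[2]);
  const s = alpha / (len * len * len);
  return [s * r[0], s * r[1], s * r[2]];
}
```

In the lab, only the force itself lives in the shader; the screen-to-world conversion runs in JavaScript and the resulting mouse position is written as a uniform.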

  2. Add a background using the cabin.jpeg image. You'll need two rendering pipeline passes to do this: one for the particles and another for a "full-screen quad" to render the background. A "full-screen quad" is a rectangle with vertices (-1, -1), (+1, -1), (+1, +1) and (-1, +1) and triangles (0, 1, 2) and (0, 2, 3). For the z-coordinate of the quad, one option is to set it to 0, disable DEPTH_TEST before drawing the two triangles for the background, and then re-enable DEPTH_TEST before drawing the particles. Since the quad vertices are already defined in the viewing space, there is no need to transform them by a model, view or projection matrix here. The texture coordinates can be computed in the fragment shader from a variable called gl_FragCoord, which holds the pixel coordinates: vec2 texcoord = gl_FragCoord.xy / vec2(width, height), where width and height are the number of pixels in the horizontal and vertical directions of the canvas. You can write these to your shader as uniforms, or you can hard-code them (and make sure you set these hard-coded values when creating the <canvas> element).
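The quad geometry described above can be written directly as typed arrays, ready to buffer to the GPU like any other attribute and index data (variable names are just suggestions):

```javascript
// Full-screen quad in clip coordinates: 4 vertices (x, y) and two triangles.
const quadVertices = new Float32Array([
  -1, -1, // vertex 0: lower-left
  +1, -1, // vertex 1: lower-right
  +1, +1, // vertex 2: upper-right
  -1, +1, // vertex 3: upper-left
]);
const quadTriangles = new Uint16Array([
  0, 1, 2, // lower-right triangle
  0, 2, 3, // upper-left triangle
]);
```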

  3. Ray trace the sphere as the Earth using earth.jpg. Again, you should use two rendering pipeline passes to do this (i.e. two calls to gl.drawArrays or gl.drawElements depending on how you implement this). You should use a full-screen quad as described in Feature #2. It's probably simpler to use the same shader program and add some logic (via a uniform) to determine if the render pass is for the snow particles or the sphere. Recall that gl_FragCoord has the pixel coordinates which you can use to cast rays. Another hint: see the Exercise: a ray tracer in a rasterizer? example in the Lecture 08 notes.
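For the ray-sphere intersection itself, the quadratic for a sphere of radius R at the origin reduces to a compact form when the ray direction is normalized. A sketch (in your lab this would be GLSL in the fragment shader):

```javascript
// Ray-sphere intersection for a sphere of radius R centered at the origin:
// substitute p(t) = e + t d into |p|^2 = R^2 and solve the quadratic.
// With d a unit vector this is t^2 + 2 b t + c = 0, b = dot(e, d),
// c = dot(e, e) - R^2, so t = -b +/- sqrt(b^2 - c).
function raySphere(e, d, R) {
  const dot = (u, v) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
  const b = dot(e, d);          // assumes d is unit length
  const c = dot(e, e) - R * R;
  const disc = b * b - c;
  if (disc < 0) return null;    // ray misses the sphere
  return -b - Math.sqrt(disc);  // nearest intersection parameter t
}
```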

  4. Add some model of your choosing (maybe one you proposed for the final rendering project)! Again, you'll need two rendering passes to do this. You might need to scale your model and translate it to the origin (i.e. set a model matrix to do this) in order to fit it into the [−1, 1]³ domain of the particles.
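One possible shape for that model matrix, assuming you want to scale by a factor s and move the model's center to the origin (column-major layout, as WebGL expects; the function name is just a suggestion):

```javascript
// Column-major 4x4 model matrix: scale by s, then translate so that the
// model's center lands at the origin (translation = -s * center).
function modelMatrix(s, center) {
  return new Float32Array([
    s, 0, 0, 0,                                         // column 0
    0, s, 0, 0,                                         // column 1
    0, 0, s, 0,                                         // column 2
    -s * center[0], -s * center[1], -s * center[2], 1,  // column 3
  ]);
}
```

Choose s from your model's bounding box so the scaled model fits inside [−1, 1]³.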

Extending the lab even further.

Here's an idea for a post-semester project :)

To really make this look like a snow globe (even though the snow is on the outside of the sphere here...), we can give the sphere a glass-like material and use refraction to model how rays bend when they (1) enter the sphere and (2) exit the sphere. After exiting, they will intersect the background, which we can then use to look up the fragment color.

Rays will refract in the direction defined in the Chapter 07 notes. Luckily, GLSL has a special built-in function called refract which computes this refraction direction for us. Be careful with the ratio of refraction indices when either entering or exiting the material. You can set the refraction index of glass to 1.5. This feature will allow you to render a scene like the one at the top-right of this page - refractive materials make things look upside down (do a Google image search for "refractive ball").
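For reference, GLSL's built-in refract can be reproduced in a few lines, which is handy for checking your understanding of the eta ratio (this JavaScript version mirrors the GLSL specification; I and N are assumed to be unit vectors):

```javascript
// JavaScript version of GLSL's refract(I, N, eta): I is the (unit) incident
// direction, N the (unit) surface normal, eta the ratio of refraction
// indices (e.g. 1.0/1.5 when entering glass, 1.5/1.0 when exiting).
function refract(I, N, eta) {
  const dot = (u, v) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
  const nDotI = dot(N, I);
  const k = 1 - eta * eta * (1 - nDotI * nDotI);
  if (k < 0) return [0, 0, 0]; // total internal reflection: no refracted ray
  const s = eta * nDotI + Math.sqrt(k);
  return [eta * I[0] - s * N[0], eta * I[1] - s * N[1], eta * I[2] - s * N[2]];
}
```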

Submission

The initial submission for the lab is due on Thursday 5/8 at 11:59pm EDT. Please see the Setup page for instructions on how to commit and push your work to your GitHub repository and then submit your repository to Gradescope (in the Lab 10 assignment).

Remember, if you see a git error when you try to sync changes, you might need to run the following two commands in the Terminal before being able to sync (push) your changes:

git config --global pull.rebase true
git pull

© Philip Caplan, 2025