Lab 10: Snow Globe

Goals

By the end of this lab, you will:

  • practice some more with textures,
  • use transform feedback to ping-pong the updated position and velocity buffers,
  • implement Euler's method to animate particles representing snow,
  • (maybe) revisit ray tracing!

The initial template for this lab will look similar to the Complete Example from the end of the notes for this week. However, the code related to transform feedback has been omitted. One of your tasks for this lab is to re-add the transform feedback code that updates the particle positions and velocities. You'll then implement the actual position and velocity updates in the vertex shader, and finally add some features of your choice.

There are a few differences between our in-class example and the initial template. First, everything is wrapped within the window.onload handler so that any images (e.g. for the snowflake or background) are loaded before the script runs. Next, an animate / pause button has been added to either start or stop the animation (search for the toggleAnimation and animateParticles functions).

Also, there are some utilities in the utils.js file for (1) compiling a shader program, which may also involve setting which varyings to capture, and (2) setting up a texture. The setupTexture function is a bit different from what we have used before since it now takes a fmt parameter specifying the format of the image. Generally, for .png images (like the snowflake image), you can pass gl.RGBA. For .jpg images, you can pass gl.RGB.

Part 1: Render snowflakes instead of squares.

This part should be quick - let's start by making the particles look like snowflakes. A texture for the snow.png image has already been set up and the associated sampler2D has been declared in the fragment shader. Please use gl_PointCoord in the fragment shader to sample the tex_Snowflake texture. When this works, you should see snowflakes in the canvas.
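As a sketch, the fragment shader might sample the texture like this (the sampler name tex_Snowflake comes from the template; the output variable name is an assumption):

```glsl
// PART 1 sketch: sample the snowflake texture across each point sprite.
uniform sampler2D tex_Snowflake;
out vec4 fragColor; // output name is an assumption

void main() {
  // gl_PointCoord runs from (0, 0) to (1, 1) across the point sprite.
  fragColor = texture(tex_Snowflake, gl_PointCoord);
}
```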

Part 2: Set up transform feedback object and buffers.

Functions you will need for this part: gl.createTransformFeedback, gl.bindTransformFeedback, gl.bindBufferBase, gl.beginTransformFeedback and gl.endTransformFeedback. The code you add here will be very similar to what was done in class (and in the example). Note that the varyings v_Position and v_Velocity are already set up to be captured by the call to compileProgram, so you don't need to change this.

In PART 2A, create pNext and vNext buffers to hold the updated particle positions and velocities, respectively. This should be similar to what we did in class. When calling gl.bufferData, remember that we will pass the size (in bytes) to allocate for the buffers. Then create a transform feedback object.
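A minimal sketch of PART 2A (the helper and function names here are hypothetical; pNext and vNext follow the handout). Each particle stores a vec3, so the buffer size in bytes is the particle count times three floats:

```javascript
// Bytes needed to store one vec3 (3 floats) per particle.
const particleBytes = (numParticles) =>
  numParticles * 3 * Float32Array.BYTES_PER_ELEMENT;

// PART 2A sketch: allocate empty destination buffers for the captured
// positions and velocities, then create a transform feedback object.
function setupFeedbackBuffers(gl, numParticles) {
  const size = particleBytes(numParticles);

  const pNext = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, pNext);
  gl.bufferData(gl.ARRAY_BUFFER, size, gl.DYNAMIC_COPY); // size in bytes, no data yet

  const vNext = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vNext);
  gl.bufferData(gl.ARRAY_BUFFER, size, gl.DYNAMIC_COPY);

  const tf = gl.createTransformFeedback();
  return { pNext, vNext, tf };
}
```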

In PART 2B, bind the transform feedback object and then bind the pNext and vNext buffers to the transform feedback buffer. Next, wrap the call to gl.drawArrays within gl.beginTransformFeedback and gl.endTransformFeedback. Again, these steps should be similar to what we did in class.

For PART 2C, please swap the positionBuffer with pNext and the velocityBuffer with vNext.
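Parts 2B and 2C together might look like the following sketch (the state object and variable names are assumptions; the binding slots must match the order of the captured varyings):

```javascript
// PARTS 2B & 2C sketch: capture v_Position/v_Velocity into pNext/vNext
// while drawing, then ping-pong the buffers for the next frame.
function drawAndSwap(gl, state, numParticles) {
  gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, state.tf);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, state.pNext); // slot 0: v_Position
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, state.vNext); // slot 1: v_Velocity

  gl.beginTransformFeedback(gl.POINTS);
  gl.drawArrays(gl.POINTS, 0, numParticles);
  gl.endTransformFeedback();

  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, null);
  gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);

  // PART 2C: swap so the freshly captured buffers become next frame's inputs.
  [state.positionBuffer, state.pNext] = [state.pNext, state.positionBuffer];
  [state.velocityBuffer, state.vNext] = [state.vNext, state.velocityBuffer];
}
```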

It won't be immediately apparent that this works, but please check that there are no WebGL warnings about transform feedback by opening the web page in a new tab (outside of replit) and checking the console output. There may be warnings about the attributes, but that is likely because Part 3 still needs to be implemented.

Part 3: Implement Euler's method to update position and velocity (M status).

For PART 3A, let's now calculate v_Position and v_Velocity in the vertex shader. Specifically, the acceleration is modeled as:

$$ \vec a = \vec{g} - \frac{\rho C_d A}{2m} \lVert \vec v \rVert \vec v, $$

where $\vec g = (0, -9.81, 0)\ m/s^2$, $C_d = 0.5$ (the drag coefficient), $\rho = 1.022\ kg/m^3$ (the air density), $m = 3 \cdot 10^{-6}\ kg$ (the mass) and $A = 2.83 \cdot 10^{-5}\ m^2$ (the cross-sectional area). The velocity $\vec v$ is a_Velocity. Note that the a_Velocity attribute has been written to the velocityBuffer buffer (on the JavaScript side) and has already been enabled. The snow has been given initially random positions and velocities.

Recall that the Euler update to the velocity (to calculate the varying v_Velocity) is:

$$ \vec v^{k+1} = \vec v^k + \Delta t\ \vec a. $$

And the update to the position (to calculate the varying v_Position) is:

$$ \vec p^{k+1} = \vec p^k + \Delta t\ \vec v^k. $$

$\Delta t$ has been set to $5\cdot 10^{-4}$ seconds. Once PART 3A is complete, the snow should move!
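Using the constants above, the shader-side update might look like this sketch (the attribute and varying names follow the handout; everything else is an assumption):

```glsl
// PART 3A sketch: Euler update in the vertex shader.
const vec3  g   = vec3(0.0, -9.81, 0.0); // gravity (m/s^2)
const float rho = 1.022;                 // air density (kg/m^3)
const float Cd  = 0.5;                   // drag coefficient
const float A   = 2.83e-5;               // cross-sectional area (m^2)
const float m   = 3e-6;                  // mass (kg)
const float dt  = 5e-4;                  // time step (s)

// a = g - (rho * Cd * A / (2 m)) * |v| * v
vec3 a = g - (rho * Cd * A / (2.0 * m)) * length(a_Velocity) * a_Velocity;
v_Velocity = a_Velocity + dt * a;
v_Position = a_Position + dt * a_Velocity;
```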

For PART 3B, recall that particles are contained within a cube with corners at $(\pm 1, \pm 1, \pm 1)$, i.e. $\vec p \in [-1, 1]^3$. Unfortunately, the particles will usually fall below the screen after a certain time, specifically when the y-coordinate of a particle is less than -1. When this happens (v_Position.y < -1.), please "respawn" the snow particle at the top of the domain. In other words, set the y-coordinate of v_Position to 1 when this happens. You can also randomize the x- and z-coordinates if you want, but this will require some research into how to generate random numbers in GLSL.
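A minimal sketch of the respawn logic is below. The hash-based rand function is a common GLSL idiom (not something from the template), and it must be declared at file scope, outside main:

```glsl
// A common pseudo-random hash in GLSL (an idiom, not from the template);
// declare at file scope. Returns a value in [0, 1).
float rand(vec2 seed) {
  return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453);
}

// PART 3B sketch (inside main): respawn particles that fall below y = -1.
if (v_Position.y < -1.0) {
  v_Position.y = 1.0;
  // Optional: randomize x and z in [-1, 1], seeding with the old position.
  v_Position.x = 2.0 * rand(a_Position.xy) - 1.0;
  v_Position.z = 2.0 * rand(a_Position.yz) - 1.0;
}
```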

For PART 3C, let's make the snow accumulate on a sphere with radius $R = 0.2$ centered at the origin $(0, 0, 0)$. When v_Position is inside the sphere, set v_Velocity to $(0, 0, 0)$. Please come up with an expression to determine whether the particle is inside (or on the surface of) the sphere.
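One way to express the inside-the-sphere test is to compare squared lengths, which avoids a square root (a sketch, assuming the varying names from the handout):

```glsl
// PART 3C sketch: particles inside (or on) the sphere stop moving.
const float R = 0.2;
if (dot(v_Position, v_Position) <= R * R) {
  v_Velocity = vec3(0.0);
}
```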

When this works, you should see something like the following:

Part 4: Choose your own adventure (E status).

For E status, please pick one of the following features to implement. I really recommend the first one! Of course, you are free to propose your own extension.

  1. Add a wind force that attracts snow to the mouse location. This will involve (1) adding a callback for the mouse motion (mousemove event listener), (2) determining the world coordinates from the mouse (screen) coordinates, (3) writing these world mouse coordinates to the shader program (as a uniform) and then (4) adding another force that attracts snow to the mouse. For step (2), I would recommend using event.offsetX and event.offsetY for the screen coordinates of the mouse. You'll also need canvas.width and canvas.height to determine the relative coordinates in the screen (and remember the HTML canvas has $y$ pointing downwards). For step (4), you can use the wind force $\vec f_w = \alpha \vec r / \lVert \vec r \rVert^3$, where $\vec r = \vec m - \vec p$ ($\vec m$ are the coordinates of the mouse in world space and $\vec p$ is still the particle position, i.e. a_Position). I found $\alpha = 10^{-4}$ gave nice results, but please feel free to experiment with the constant and the wind force model.

Note: A nice way to transform the mouse coordinates to world space is by casting a ray from the eye through the pixel where the mouse is. Then you can intersect this ray with a plane in the scene to determine where the mouse is in world space. One option is to use a plane with the normal $\vec n = (0, 0, 1)$ centered at $\vec c = (0, 0, 0)$ - see the notes from Lecture 02 on how to calculate a ray-plane intersection. You can come up with another way to calculate the world mouse coordinates, but the snow may not concentrate near the mouse.

  2. Ray trace the sphere as the Earth. You should use two rendering pipeline passes to do this (i.e. two calls to gl.drawArrays or gl.drawElements depending on how you implement this). You should use a full-screen quad (see the hints in the Lab 08 features on how to do this). It's probably simpler to use the same shader program and add some logic (via a uniform) to determine if the render pass is for the snow particles or the sphere. Recall that gl_FragCoord has the pixel coordinates which you can use to cast rays. Another hint: see the Exercise: a ray tracer in a rasterizer? example in the Lecture 07 notes.

  3. Rasterize the sphere as the Earth. You'll need to load the sphere.obj mesh, create buffers and draw (with the Earth texture) similarly to what was done in Lab 08.

  4. Add a background using the cabin.jpeg image. Similar to the other features, you'll also need two rendering pipeline passes to do this: one for the particles and another for the full-screen quad to render the background. Note that the cabin.jpeg image is loaded and you can set this up using setupTexture(gl, program, "background", gl.RGB, "tex_Background", 1) (you can change the texture unit index and name for the sampler2D variable). Again, please see the hints on how to implement this in the Lab 08 features.
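For the first option, the screen-to-world conversion can be prototyped in plain JavaScript. The function names below are hypothetical, and the ray origin and direction would come from your own camera setup:

```javascript
// Convert mouse (screen) coordinates to normalized device coordinates
// in [-1, 1], flipping y since the HTML canvas y-axis points downwards.
function mouseToNDC(offsetX, offsetY, width, height) {
  const x = 2 * offsetX / width - 1;
  const y = 1 - 2 * offsetY / height;
  return [x, y];
}

// Intersect a ray p(t) = e + t * d with the plane through point c with
// normal n, by solving dot(n, e + t * d - c) = 0 for t.
function rayPlane(e, d, c, n) {
  const dot = (u, v) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
  const denom = dot(n, d);
  if (Math.abs(denom) < 1e-8) return null; // ray parallel to plane
  const t = dot(n, [c[0] - e[0], c[1] - e[1], c[2] - e[2]]) / denom;
  return [e[0] + t * d[0], e[1] + t * d[1], e[2] + t * d[2]];
}
```

For example, a ray cast from an eye at $(0, 0, 5)$ straight down the $-z$ axis hits the plane $\vec n = (0, 0, 1)$, $\vec c = (0, 0, 0)$ at the origin.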

Extending the lab even further.

Here's an idea for a post-semester project :)

To really make this look like a snow globe (even though the snow is on the outside of the sphere here...), we can give the sphere a glass-like material and use refraction to model how rays bend when they (1) enter the sphere and (2) exit the sphere. After exiting, they will intersect the background, which we can then use to look up the fragment color.

Rays will refract in the direction defined in the Week 5 slides here (see slide 5). Luckily, GLSL has a special built-in function called refract which computes this refraction direction for us. Be careful with the ratio of refraction indices when either entering or exiting the material. You can set the refraction index of glass to 1.5. Also note that the normal vector in the refraction equation points into the first material (where the ray is coming from). This feature will allow you to render a scene like the one at the top-right of this page - refractive materials make things look upside down (do a Google image search for "refractive ball").

Submission

The initial submission for the lab is due on Wednesday 12/06 at 11:59pm EST. I will then provide feedback by Monday 12/11 so you can edit your submission.

When you and your partner are ready, please submit the assignment on replit. I will then make comments (directly on your repl) and enter your current grade status at the top of the index.html file.

Please also remember to submit your reflection for this week in this Google Form.


© Philip Claude Caplan, 2023 (Last updated: 2023-11-29)