Lab 10: Snow Globe

Goals

By the end of this lab, you will:
The initial template for this lab will look similar to the Complete Example from the end of the notes for this week. However, the code related to transform feedback has been omitted. One of your tasks for this lab is to re-add the code that uses transform feedback to update particle positions and velocities. You'll then implement the actual position and velocity updates in the vertex shader, and then add some features of your choice.
There are a few differences between our in-class example and the initial template. First, everything is wrapped within the window.onload method so any images (e.g. for the snowflake or background) are loaded before the script runs. Next, an animate/pause button has been added to either start or stop the animation (search for the toggleAnimation and animateParticles functions).
Also, there are some utilities in the utils.js file for (1) compiling a shader program, which may also involve setting which varyings to capture, and (2) setting up a texture. The setupTexture function is a bit different from what we have used before since it now takes a specified fmt for the format of the image. Generally, for .png images (like the snowflake image), you can pass gl.RGBA. For .jpg images, you can pass gl.RGB for the format.
This part should be quick - let's start by making the particles look like snowflakes. A texture for the snow.png image has already been set up and the associated sampler2D has been declared in the fragment shader. Please use gl_PointCoord in the fragment shader to sample the tex_Snowflake texture. When this works, you should see snowflakes in the canvas.
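The change amounts to a single texture lookup; here is a sketch, assuming the GLSL ES 3.00 shaders that WebGL2 transform feedback requires (the discard line is an optional nicety, not something the lab asks for):

```glsl
#version 300 es
precision mediump float;

uniform sampler2D tex_Snowflake; // already declared in the template

out vec4 fragColor;

void main() {
  // gl_PointCoord runs from (0, 0) to (1, 1) across each point sprite
  fragColor = texture(tex_Snowflake, gl_PointCoord);
  if (fragColor.a < 0.1) discard; // optional: hide transparent corners
}
```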
Functions you will need for this part: gl.createTransformFeedback, gl.bindTransformFeedback, gl.bindBufferBase, gl.beginTransformFeedback and gl.endTransformFeedback. The code you add here will be very similar to what was done in class (and in the example). Note that the varyings v_Position and v_Velocity are already set up to be captured by the call to compileProgram, so you don't need to change this.
In PART 2A, create pNext and vNext buffers to hold the updated particle positions and velocities, respectively. This should be similar to what we did in class. When calling gl.bufferData, remember that we will pass the size (in bytes) to allocate for the buffers. Then create a transform feedback object.
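The steps above might look like the following sketch, where the function name, the particle count parameter, and the vec3 layout (3 floats of 4 bytes each per particle) are assumptions based on the in-class example:

```javascript
// PART 2A (sketch): allocate output buffers and a transform feedback object.
// Assumes each particle stores a vec3 position and a vec3 velocity.
function setupTransformFeedback(gl, numParticles) {
  const nBytes = numParticles * 3 * 4; // vec3 of 32-bit floats

  const pNext = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, pNext);
  // pass a size (in bytes) instead of data to allocate empty storage
  gl.bufferData(gl.ARRAY_BUFFER, nBytes, gl.DYNAMIC_COPY);

  const vNext = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vNext);
  gl.bufferData(gl.ARRAY_BUFFER, nBytes, gl.DYNAMIC_COPY);

  const tf = gl.createTransformFeedback();
  return { pNext, vNext, tf };
}
```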
In PART 2B, bind the transform feedback object and then bind the pNext and vNext buffers to the transform feedback buffer. Next, wrap the call to gl.drawArrays within gl.beginTransformFeedback and gl.endTransformFeedback. Again, these steps should be similar to what we did in class.
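A sketch of how these calls fit together (the function name is hypothetical; the binding-point indices must match the order of the varyings passed to compileProgram, i.e. 0 for v_Position and 1 for v_Velocity):

```javascript
// PART 2B (sketch): capture v_Position and v_Velocity while drawing.
function drawParticles(gl, tf, pNext, vNext, numParticles) {
  gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, pNext);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, vNext);

  gl.beginTransformFeedback(gl.POINTS);
  gl.drawArrays(gl.POINTS, 0, numParticles);
  gl.endTransformFeedback();

  // unbind so later draws are not captured by accident
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, null);
  gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);
}
```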
For PART 2C, please swap the positionBuffer with pNext and the velocityBuffer with vNext.
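A plain JavaScript destructuring swap does the trick; the string values below are just stand-ins for the WebGLBuffer objects so the idea is visible:

```javascript
// PART 2C (sketch): ping-pong the buffer references so the transform
// feedback output becomes next frame's vertex attribute input.
let positionBuffer = "A", pNext = "B"; // stand-ins for WebGLBuffer objects
let velocityBuffer = "C", vNext = "D";

[positionBuffer, pNext] = [pNext, positionBuffer];
[velocityBuffer, vNext] = [vNext, velocityBuffer];
// positionBuffer and velocityBuffer now refer to the freshly written buffers
```

If the template re-binds the attribute pointers from these variables on every frame, the swap alone is enough for the next frame to read the updated data.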
It won't immediately be apparent that this works, but please check that there are no WebGL warnings about transform feedback by opening the web page in a new tab (outside of replit) and checking the console output. There may be warnings about the attributes, but that is likely because Part 3 has not been implemented yet.
For PART 3A, let's now calculate v_Position and v_Velocity in the vertex shader. Specifically, the acceleration a is modeled as a function of the current velocity a_Velocity (see the class notes for the exact model). Note that the a_Velocity attribute has been written to the velocityBuffer buffer (on the JavaScript side) and has already been enabled. The snow has been given initially random positions and velocities.
Recall that the Euler update to the velocity (to calculate the varying v_Velocity) is:

v_Velocity = a_Velocity + dt * a

And the update to the position (to calculate the varying v_Position) is:

v_Position = a_Position + dt * a_Velocity

where dt is the time step and a is the acceleration.
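Putting PART 3A together, the vertex shader might look like the sketch below; the uniform name u_dt, the point size, and the placeholder acceleration are assumptions (substitute the acceleration model from the notes):

```glsl
// PART 3A (sketch): explicit Euler update in the vertex shader.
in vec3 a_Position;
in vec3 a_Velocity;

out vec3 v_Position;  // captured by transform feedback
out vec3 v_Velocity;  // captured by transform feedback

uniform float u_dt;   // time step (hypothetical uniform name)

void main() {
  // placeholder acceleration: replace with the model from the notes,
  // which depends on a_Velocity
  vec3 a = vec3(0.0, -1.0, 0.0);

  v_Velocity = a_Velocity + u_dt * a;
  v_Position = a_Position + u_dt * a_Velocity;

  gl_Position = vec4(v_Position, 1.0);
  gl_PointSize = 10.0;
}
```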
Once PART 3A is complete, the snow should move!
In this lab, particles are contained within a cube with corners at (-1, -1, -1) and (1, 1, 1). For PART 3B, when a particle falls below the bottom of the cube (i.e. when v_Position.y < -1), please "respawn" the snow particle to the top of the domain. In other words, set the y-coordinate of v_Position to 1 when this happens. You can also randomize the x- and z-coordinates if you want, but this will require some research to determine how to calculate random numbers in GLSL.
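One common way to get pseudo-random numbers in GLSL is a fract/sin hash (an assumption here, not something the template provides); the respawn check itself might look like:

```glsl
// PART 3B (sketch): respawn a flake that falls out of the cube.
// GLSL has no built-in random; a fract/sin hash is a common workaround.
float rand(vec2 seed) {
  return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453);
}

// ... inside main(), after the Euler updates:
if (v_Position.y < -1.0) {
  v_Position.y = 1.0; // back to the top of the domain
  // optional: scatter x and z in [-1, 1], seeded by the old position
  v_Position.x = 2.0 * rand(a_Position.xz) - 1.0;
  v_Position.z = 2.0 * rand(a_Position.zx) - 1.0;
}
```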
For PART 3C, let's make the snow accumulate on a sphere. When v_Position is inside the sphere, set v_Velocity to zero so that the flake stops and accumulates.
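The check might look like the sketch below, assuming the sphere is centered at the origin and u_Radius is a hypothetical uniform holding its radius:

```glsl
// PART 3C (sketch): snow sticks when it reaches the sphere.
uniform float u_Radius; // hypothetical uniform for the sphere radius

// ... inside main(), after the position update:
if (length(v_Position) < u_Radius) {
  v_Velocity = vec3(0.0); // stop the flake so it accumulates
}
```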
When this works, you should see something like the following:
For E status, please pick one of the following features to implement. I really recommend the first one! Of course, you are free to propose your own extension.
Attract the snow towards the mouse. This involves (1) tracking the mouse position (with a mousemove event listener), (2) determining the world coordinates from the mouse (screen) coordinates, (3) writing these world mouse coordinates to the shader program (as a uniform) and then (4) adding another force that attracts snow to the mouse. For step (2), I would recommend using event.offsetX and event.offsetY for the screen coordinates of the mouse. You'll also need canvas.width and canvas.height to determine the relative coordinates in the screen (and remember that the HTML canvas has its origin at the top-left, with the y-axis pointing down). Note: A nice way to transform the mouse coordinates to world space is by casting a ray from the eye through the pixel where the mouse is. Then you can intersect this ray with a plane in the scene to determine where the mouse is in world space. One option is to use a plane that faces the camera.
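Steps (1) and (2) can be sketched as follows; mouseToNDC is a hypothetical helper name, and the y-flip accounts for the canvas origin being at the top-left:

```javascript
// Sketch: convert mouse (screen) coordinates in pixels to normalized
// device coordinates in [-1, 1] x [-1, 1]. The y-axis is flipped because
// the canvas origin is at the top-left with y pointing down.
function mouseToNDC(offsetX, offsetY, width, height) {
  const x = (2.0 * offsetX) / width - 1.0;
  const y = 1.0 - (2.0 * offsetY) / height;
  return [x, y];
}

// usage inside a mousemove listener:
// canvas.addEventListener("mousemove", (e) => {
//   const [x, y] = mouseToNDC(e.offsetX, e.offsetY, canvas.width, canvas.height);
//   // then cast a ray from the eye through (x, y) to get world coordinates
// });
```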
Ray trace the sphere as the Earth. You should use two rendering pipeline passes to do this (i.e. two calls to gl.drawArrays or gl.drawElements, depending on how you implement this). You should use a full-screen quad (see the hints in the Lab 08 features on how to do this). It's probably simpler to use the same shader program and add some logic (via a uniform) to determine whether the render pass is for the snow particles or the sphere. Recall that gl_FragCoord has the pixel coordinates, which you can use to cast rays. Another hint: see the "Exercise: a ray tracer in a rasterizer?" example in the Lecture 07 notes.
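If you go the ray-tracing route, the core intersection test might look like this sketch (the function name is an assumption, and dir must be normalized):

```glsl
// Sketch: ray-sphere intersection via the quadratic formula.
// Returns the nearest positive t along the ray, or -1.0 on a miss.
float intersectSphere(vec3 origin, vec3 dir, vec3 center, float radius) {
  vec3 oc = origin - center;
  float b = dot(oc, dir);
  float c = dot(oc, oc) - radius * radius;
  float disc = b * b - c;      // assumes dir is normalized
  if (disc < 0.0) return -1.0; // ray misses the sphere
  float t = -b - sqrt(disc);   // nearest of the two roots
  return (t > 0.0) ? t : -1.0;
}
```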
Rasterize the sphere as the Earth. You'll need to load the sphere.obj mesh, create buffers, and draw (with the Earth texture) similarly to what was done in Lab 08.
Add a background using the cabin.jpeg image. Similar to the other features, you'll need two rendering pipeline passes to do this: one for the particles and another for the full-screen quad that renders the background. Note that the cabin.jpeg image is already loaded, and you can set this up using setupTexture(gl, program, "background", gl.RGB, "tex_Background", 1) (you can change the texture unit index and the name of the sampler2D variable). Again, please see the hints on how to implement this in the Lab 08 features.
Here's an idea for a post-semester project :)
To really make this look like a snow globe (even though the snow is on the outside of the sphere here...), we can give the sphere a glass-like material and use refraction to model how rays bend when they (1) enter the sphere and (2) exit the sphere. After exiting, they will intersect the background, which we can then use to look up the fragment color.
Rays will refract in the direction defined in the Week 5 slides here (see slide 5). Luckily, GLSL has a special built-in function called refract which computes this refraction direction for us. Be careful with the ratio of refraction indices when either entering or exiting the material. You can set the refraction index of glass to 1.5. Also note that the normal vector in the refraction equation points into the first material (where the ray is coming from). This feature will allow you to render a scene like the one at the top-right of this page - refractive materials make things look upside down (do a Google image search for "refractive ball").
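A sketch of the two refract calls (the variable names are hypothetical; note the eta ratio and the flipped normal at the exit point):

```glsl
// Sketch: refraction on entering and exiting a glass sphere.
// GLSL's refract(I, N, eta) uses eta = n_incident / n_transmitted and
// expects the normal N to face the incoming ray.
const float ETA_GLASS = 1.5;

// entering the sphere (air -> glass): the outward normal faces the ray
vec3 insideDir = refract(rayDir, normal, 1.0 / ETA_GLASS);

// exiting the sphere (glass -> air): flip the outward normal so it
// points back into the glass, and invert the ratio
vec3 outDir = refract(insideDir, -exitNormal, ETA_GLASS);
```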
The initial submission for the lab is due on Wednesday 12/06 at 11:59pm EST. I will then provide feedback by Monday 12/11 so you can edit your submission.
When you and your partner are ready, please submit the assignment on replit. I will then make comments (directly on your repl) and enter your current grade status at the top of the index.html file.
Please also remember to submit your reflection for this week in this Google Form.
© Philip Claude Caplan, 2023 (Last updated: 2023-11-29)