In our first lab, we saw how to use curves to prescribe how scene parameters (e.g. motion or size of objects) change over time. The techniques we used were not necessarily realistic, but they created effects suitable for an animation, like the squash-and-stretch effect. Using curves, we gave the artist control over the animation.
If we wanted to simulate snow, smoke, fire, or a liquid, it isn't really clear how these objects should move over time. Even if we did know, there are just too many knobs to turn to control every detail of their movement.
In the next two lectures, the artist will be the laws of physics. Our job is to respect the physical laws that govern the motion of objects in the scene, and we'll represent these objects as a collection of particles. In some cases, the motion may have an analytic solution, but in most cases (like the motion of a fluid), we'll need to use numerical methods to calculate the motion.
The image at the top-right of this page is from a paper on using Voronoi cells (more specifically, Power cells) to represent fluid particles. Please see this video for some demos!
Although this isn't a physics course, we do need to know a little bit of physics to make particles move according to the forces that act upon them. The motion of objects is described by Newton's laws of motion. Specifically, we need Newton's second law, which states that the change of motion of an object is proportional to the net force acting upon it. This change of motion is "how much the velocity changes in time": the acceleration $\vec{a}$.
The "net force" in Newton's second law means that we need to add up any force acting on our object. The second law can then be written mathematically as:
Each
Let's go back to the Pixar ball from the first lab and try to use physics to describe its height instead of using a curve. The free-body diagram is shown on the left below, and the only force acting on the ball (for now) is the gravitational force, which is equal to the mass of the object times the gravitational acceleration $g \approx 9.81~\mathrm{m/s^2}$, pointing downwards: $\vec{f}_g = -mg\,\hat{y}$.
Here we are assuming that the origin is at the ground and that the height $y$ increases upwards. Newton's second law then gives $m\ddot{y} = -mg$, and integrating twice yields the analytic solution $y(t) = y_0 + v_0 t - \tfrac{1}{2} g t^2$, where $y_0$ and $v_0$ are the initial height and vertical velocity.
If we have an equation, why don't we just evaluate it the same way we evaluated curves in the first lab?
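In principle we could: here is a minimal sketch that evaluates the analytic solution directly, just like a curve (y0 and v0 are hypothetical initial conditions):
// evaluate the analytic height at time t
const g = 9.81; // gravitational acceleration (m/s^2)
const height = (t, y0, v0) => y0 + v0 * t - 0.5 * g * t * t;
console.log(height(0.5, 10, 0)); // height after 0.5 s, dropped from rest at y = 10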
As soon as we add other forces, things get more complicated. Depending on the force, we might still be able to derive a closed-form analytic expression (after a lot of tedious math), but for more general forces, we probably can't.
For example, consider the case in which we add a drag force to model the height of the ball over time. Drag opposes the motion (velocity) of the ball, so it points upwards for our falling ball (in the rightmost picture above). Without getting too much into the aerodynamics, the drag force can be modeled as opposing the velocity, for example $\vec{f}_d = -c\,\vec{v}$, where $c$ is a drag coefficient that depends on the shape of the object and the properties of the air.
To model the motion when more complicated forces are present, we can use numerical methods, which consist of approximating the equations of motion and taking small steps to update the motion of our objects. For example, we can assume that we will take time steps of size $\Delta t$ and update the position and velocity of each particle as:

$$\vec{x}_{k+1} = \vec{x}_k + \Delta t\,\vec{v}_k, \qquad \vec{v}_{k+1} = \vec{v}_k + \Delta t\,\vec{a}_k,$$

where $\vec{x}_k$, $\vec{v}_k$ and $\vec{a}_k$ are the position, velocity and acceleration at time step $k$ (i.e. at time $t_k = k\,\Delta t$). How are we going to get the acceleration $\vec{a}_k$? From Newton's second law: $\vec{a}_k = \frac{1}{m}\sum_i \vec{f}_i$. For particles subjected to gravity and drag, this is $\vec{a}_k = -g\,\hat{y} - \frac{c}{m}\,\vec{v}_k$.
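For concreteness, here is a minimal sketch (not the demo code) of one such step for a single particle with gravity and the linear drag model above; the drag coefficient c, the mass and the time step are made-up values for illustration:
// one update step for a single particle subject to gravity and linear drag
const g = 9.81, c = 0.1, mass = 1.0, deltaT = 0.01; // illustrative constants
const eulerStep = (p, v) => {
  // acceleration from gravity and drag: a = -g * yhat - (c / m) * v
  const a = [-(c / mass) * v[0], -g - (c / mass) * v[1], -(c / mass) * v[2]];
  const pNext = [p[0] + deltaT * v[0], p[1] + deltaT * v[1], p[2] + deltaT * v[2]];
  const vNext = [v[0] + deltaT * a[0], v[1] + deltaT * a[1], v[2] + deltaT * a[2]];
  return { pNext, vNext };
};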
The scheme we just derived is called Euler's method. It works fine, but just know that it's not the most accurate. In fact, it's a first-order scheme, which means that the error between the numerical solution and the actual solution is proportional to the time step $\Delta t$: halving the time step only halves the error.
The Runge-Kutta method is a fourth-order accurate scheme which is a little more complicated to implement - the position and velocity updates are done in four "stages" instead of a single stage like Euler's method. In the demo below, a satellite is orbiting the Earth. Using Euler's method to model the orbit causes the satellite (in the model) to fly off because of the error in Euler's method. The Runge-Kutta method is more accurate and keeps the satellite closer to the analytic solution. For reference, we are just solving Newton's second law with the gravitational attraction between the Earth and the satellite, $\vec{f} = -\frac{GMm}{r^2}\hat{r}$.
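For reference, a minimal sketch of a single fourth-order Runge-Kutta (RK4) step for an orbit like this is shown below; this is not the demo code, and the 2d setup, the value of GM and the state layout are assumptions for illustration:
// state is [position, velocity]; its derivative is [velocity, acceleration]
const GM = 1.0; // assumed gravitational parameter
const accel = (p) => {
  const r = Math.hypot(p[0], p[1]);
  return [-GM * p[0] / (r * r * r), -GM * p[1] / (r * r * r)];
};
const deriv = ([p, v]) => [v, accel(p)];
// add s * derivative to a state
const addScaled = ([p, v], [dp, dv], s) => [
  [p[0] + s * dp[0], p[1] + s * dp[1]],
  [v[0] + s * dv[0], v[1] + s * dv[1]],
];
const rk4Step = (state, dt) => {
  // four stages, each evaluating the derivative at an intermediate state
  const k1 = deriv(state);
  const k2 = deriv(addScaled(state, k1, 0.5 * dt));
  const k3 = deriv(addScaled(state, k2, 0.5 * dt));
  const k4 = deriv(addScaled(state, k3, dt));
  // combine: next = state + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
  let next = addScaled(state, k1, dt / 6);
  next = addScaled(next, k2, dt / 3);
  next = addScaled(next, k3, dt / 3);
  next = addScaled(next, k4, dt / 6);
  return next;
};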
WebGL
So far, we have seen how to draw triangles (gl.TRIANGLES) with WebGL. When drawing particles, we don't have triangle connectivity information - we just have particle positions. Instead of using gl.drawElements, we'll use gl.drawArrays to draw gl.POINTS.
// create buffer for particle positions and write to the GPU
let positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(position), gl.STATIC_DRAW);
Assuming we have enabled some a_Position attribute in a WebGLProgram, we can then draw the particles using:
// draw nParticles particles - for 3d points, this will be position.length / 3
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.drawArrays(gl.POINTS, 0, nParticles);
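For reference, enabling the a_Position attribute mentioned above might look something like this (a sketch, assuming program is our linked WebGLProgram):
// look up and enable the a_Position attribute, then point it at positionBuffer
let a_Position = gl.getAttribLocation(program, "a_Position");
gl.enableVertexAttribArray(a_Position);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0); // 3 floats per particle position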
Our vertex shader transforms points from object space to clip space as usual by the model-view-projection matrix. To draw the points, we also need to set a new vertex shader output called gl_PointSize. When drawing points, WebGL will draw a little square around each point, and the size of the square is influenced by gl_PointSize. If we want far-away points to appear smaller, we can set the size to be inversely proportional to the depth after the projection:
attribute vec3 a_Position; // remember to enable this attribute on the JavaScript side!
uniform mat4 u_ProjectionMatrix;
uniform mat4 u_ViewMatrix;
uniform mat4 u_ModelMatrix; // if there is one
void main() {
gl_Position = u_ProjectionMatrix * u_ViewMatrix * u_ModelMatrix * vec4(a_Position, 1.0);
gl_PointSize = 50.0 / gl_Position.w; // inversely proportional to depth after projection
}
In our fragment shader, we can either set a constant color for the little square, or we can use another special input to the fragment shader (when drawing points) called gl_PointCoord. This will be the relative coordinates within the square (in [0, 1] along each axis), which we can use, for example, to look up a texture so each particle is drawn as a sprite:
precision mediump float;
uniform sampler2D tex_Sprite;
void main() {
gl_FragColor = texture2D(tex_Sprite, gl_PointCoord); // use an image for each particle
//gl_FragColor = vec4(1, 1, 1, 1); // use a constant color for each particle
}
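The tex_Sprite sampler needs to be set up on the JavaScript side. A minimal sketch, assuming spriteImage is an Image that has already loaded and program is our WebGLProgram (currently in use via gl.useProgram):
// create a texture for the point sprite and bind it to texture unit 0
let spriteTexture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, spriteTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, spriteImage);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.uniform1i(gl.getUniformLocation(program, "tex_Sprite"), 0); // sampler uses texture unit 0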
WebGL2 transform feedback
To simulate more realistic particle systems like snow or rain, we want to use thousands or even millions of particles! For the examples we are considering today, each particle's motion is only influenced by external forces (and its own previous position and velocity), and does not depend on the positions of other particles. If we were to write a particle animation in JavaScript, it might look something like this:
// initialize particle positions and velocity
let nParticles = 1000;
let position = new Float32Array(nParticles * 3);
let velocity = new Float32Array(nParticles * 3);
// initialize position and velocity with random or known data...
const draw = (position) => {
// assume gl is some WebGLRenderingContext
let positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(position), gl.STATIC_DRAW);
gl.clear(gl.DEPTH_BUFFER_BIT | gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.POINTS, 0, nParticles);
}
const mass = 1; // some mass in kg
const nSteps = 1000;
const tFinal = 10;
const deltaT = tFinal / nSteps;
for (let k = 0; k < nSteps; k++) {
// draw particles at this time step
draw(position);
// for each particle, calculate the update
for (let i = 0; i < nParticles; i++) {
const ak = vec3.fromValues(0, -9.81, 0); // only gravity in this example
for (let d = 0; d < 3; d++) {
const vNext = velocity[3 * i + d] + deltaT * ak[d];
const pNext = position[3 * i + d] + deltaT * velocity[3 * i + d];
velocity[3 * i + d] = vNext;
position[3 * i + d] = pNext;
}
}
}
The problem with this is that we are rewriting the position data to the GPU at every time step, which is not very efficient. Furthermore, the loop over every particle i can be done in parallel, since the equations for particle i only depend on i, and not on any other information from some particle j.
To make this more efficient, we can use a WebGL2 feature called transform feedback. Here, we will use WebGL to both draw and update particle positions and velocities. Transform feedback allows us to write varyings to buffers. So we can write the updated position and velocity (pNext and vNext) to varyings (which will be captured during transform feedback), and then swap the buffers so these values become the inputs (a_Position and a_Velocity) on the next time step.
The first thing we need to do is tell our program (before linking) that we will capture varyings into a buffer. We can do this with a function called gl.transformFeedbackVaryings, which accepts an array of strings with the names of the varyings we want to capture. For example, assuming we have created a vertexShader and a fragmentShader (each a WebGLShader object):
// create shader program
let program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
let varyings = ["v_Position"]; // must match the names of the varyings in the shader
gl.transformFeedbackVaryings(program, varyings, gl.SEPARATE_ATTRIBS);
gl.linkProgram(program);
Here we are using gl.SEPARATE_ATTRIBS, which means the varyings will be captured into separate buffers instead of being "interleaved" within a single buffer (gl.INTERLEAVED_ATTRIBS). Having the attributes in separate buffers is convenient for swapping the data at the end of each time step.
As an example, we'll just focus on updating the position using a constant velocity, so the only varying we need to capture is v_Position. The vertex shader is:
attribute vec3 a_Position;
varying vec3 v_Position;
uniform mat4 u_ViewMatrix;
uniform mat4 u_ProjectionMatrix;
float deltaT = 5e-4;
void main() {
// for rendering
gl_Position = u_ProjectionMatrix * u_ViewMatrix * vec4(a_Position, 1.0);
gl_PointSize = 10.0 / gl_Position.w;
// for updating
vec3 v0 = vec3(0, -5, 0); // constant velocity of each particle
// (if velocity was another input, we would use a_Velocity instead)
v_Position = a_Position + v0 * deltaT;
// if necessary, calculate v_Velocity (vNext) from a_Velocity and the forces
}
The fragment shader can be the same one defined above. On the JavaScript side, we need to (1) create buffers to hold v_Position during the transform feedback capturing and (2) enable transform feedback:
// create and allocate (but do not fill) buffers for transform feedback
const nComponentsPerVarying = 3; // 3 because position is a vec3
const bytesPerComponent = 4; // # bytes per component (4 for float)
const totalBufferBytes = nParticles * nComponentsPerVarying * bytesPerComponent;
let nextPositionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, nextPositionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, totalBufferBytes, gl.DYNAMIC_DRAW);
// if necessary, create a similar "nextVelocityBuffer"
// create transform feedback object
let feedback = gl.createTransformFeedback();
const draw = () => {
// bind the buffers to the appropriate index in the transform feedback object
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, feedback);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, nextPositionBuffer);
// if necessary, do something similar for nextVelocityBuffer using the index of "v_Velocity" passed to transformFeedbackVaryings
// clear the screen
gl.clear(gl.DEPTH_BUFFER_BIT | gl.COLOR_BUFFER_BIT);
// point the "a_Position" attribute to the positionBuffer
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0);
// if necessary, do something similar for a_Velocity and velocityBuffer
// draw with transform feedback
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, nParticles);
gl.endTransformFeedback();
// swap the buffers for the next time step
[nextPositionBuffer, positionBuffer] = [positionBuffer, nextPositionBuffer];
// if necessary, swap the velocity buffers as well
}
The way we are using bufferData is slightly different than how we've used it before. We're simply allocating enough memory (on the GPU) for the data, but not writing anything to it. Note that we can also use totalBufferBytes = position.byteLength, assuming position is a typed array (e.g. a Float32Array).
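To run the simulation, we can then call draw every frame, for example with requestAnimationFrame. A minimal sketch:
// each frame both renders the particles and advances them on the GPU
const step = () => {
  draw();                      // draws gl.POINTS and captures v_Position via transform feedback
  requestAnimationFrame(step); // schedule the next time step
};
requestAnimationFrame(step);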
Note: the indices of the varyings specified in the varyings array correspond to the indices passed to gl.bindBufferBase. For example, we specified 0 when calling bindBufferBase because "v_Position" is the first (and only) varying captured in the array passed to gl.transformFeedbackVaryings. If we had another varying to capture, e.g. the velocity v_Velocity, we would use 1 (assuming we set up the varyings as ["v_Position", "v_Velocity"]). Of course, this assumes we had created buffers to hold the current and next velocity values (e.g. velocityBuffer and nextVelocityBuffer).
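For example, a sketch of what capturing both varyings might look like (the velocity buffers here are hypothetical):
// tell the program to capture both varyings (before linking)
let varyings = ["v_Position", "v_Velocity"];
gl.transformFeedbackVaryings(program, varyings, gl.SEPARATE_ATTRIBS);
gl.linkProgram(program);
// later, in draw(): index 0 corresponds to "v_Position", index 1 to "v_Velocity"
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, nextPositionBuffer);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, nextVelocityBuffer);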
Transform feedback is also useful for retrieving the data (on the JavaScript side) that was written by the vertex shader. Here is how to retrieve the updated position data:
let x = new Float32Array(nParticles * nComponentsPerVarying); // allocate space for the data
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, nextPositionBuffer);
gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, x); // read the data
console.log(x); // the result of v_Position
Why would we want to do this? It means we can do General-Purpose GPU (GPGPU) programming - i.e. run programs on the GPU that are not necessarily intended for drawing. You would typically use something like CUDA or OpenCL for that kind of computation, but it's pretty cool that we can do it with WebGL.
Things are more complicated when the equations of motion for each particle depend on other particles. Sometimes particle interactions require finding the nearest particles during an update step, which could involve using a kd-tree to find the nearest neighbors. In other situations, particles may be directly connected to each other, for example in the simulation of cloth. We'll talk about modeling cloth next class, in which each particle is connected to other particles by springs.