Lecture 11: Animation 2 (Particles) (slides)

Learning Objectives

By the end of this lecture, you will be able to:

  • implement Euler's method for updating the position and velocity of particles,
  • draw particles as either squares or using a sprite,
  • use transform feedback to ping-pong updated position and velocity data during an animation.

In the last topic, we saw how to use curves to prescribe how scene parameters (e.g. motion or size of objects) change over time. The techniques we used were not necessarily realistic, but they perhaps created an effect that was suitable for an animation, like the squash-and-stretch effect. Using curves, we gave the artist control over the animation.

If we were simulating snow, smoke, fire or some liquid, it isn't really clear how these objects would move over time. Even if we did know, there are just too many knobs to turn to control every detail of their movement.

In the next two lectures, the artist will be the laws of physics. Our job is to respect the physical laws that govern the motion of objects in the scene, and we'll represent these objects as a collection of particles. In some cases, the motion may have an analytic solution, but in most cases (like the motion of a fluid), we'll need to use numerical methods to calculate the motion.

The image at the top-right of this page is from a paper on using Voronoi cells (more specifically, Power cells) to represent fluid particles. Please see this video for some demos!

Newton's second law.

Although this isn't a physics course, we do need to know a little bit of physics to make particles move according to the forces that act upon them. The motion of objects is described by Newton's laws of motion. Specifically, we need Newton's second law, which states that the change of motion of an object is proportional to the net force acting upon it. This change of motion is "how much the velocity $\vec v$ (the speed and direction) changes ($\Delta \vec v$) over some time change ($\Delta t$)", also known as the acceleration ($\vec a$). This should technically be written using a derivative, but it's fine for us to use finite differences (with $\Delta$):

$$ \vec a = \frac{\Delta \vec v}{\Delta t}. $$

The "net force" in Newton's second law means that we need to add up any force acting on our object. The second law can then be written mathematically as:

$$ m \vec a = \sum\vec f_i. $$

Each $\vec f_i$ is a force acting on our object and $m$ is the mass of the object. When using Newton's second law, it's a very good idea to use a free-body diagram to (1) annotate all the forces acting on an object and (2) display the coordinate system and assumed motion of the object.

Back to the Pixar ball.

Let's go back to the Pixar ball from the last lecture and lab and try to use physics to describe its height instead of using a curve. The free-body diagram is shown on the left below, and the only force acting on the ball (for now) is the gravitational force, which is equal to the mass of the object times the gravitational acceleration $\vec g$ (at the Earth's surface, $g = 9.81\ \mathrm{m/s^2}$, pointing towards the ground). The height of the ball is $y$, so the gravitational force only points in the $y$-direction and is equal to $f_{g,y} = -mg$.

Here we are assuming that the origin $y = 0$ is at the ground. Assuming we release the ball with some initial $y$-velocity $v_{y,0}$ from a height $y_0$, the height of the ball at some time $t$ can be derived analytically to be:

$$ y = y_0 + v_{y,0} t - \frac{1}{2} g t^2. $$

If we have an equation, why don't we just evaluate it the same way we evaluated curves in the last lecture?
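We can - for pure gravity, the height is just a function of time that we can sample at every frame, in the same spirit as evaluating a curve. A minimal sketch (the initial values y0 and vy0 are assumptions):

// evaluate the analytic height at time t (y0 and vy0 are assumed values)
const g = 9.81; // gravitational acceleration (m/s^2)
const y0 = 2.0; // assumed initial height (m)
const vy0 = 0.0; // assumed initial y-velocity (m/s)
const height = (t) => y0 + vy0 * t - 0.5 * g * t * t;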

Adding other forces.

As soon as we add other forces, things get more complicated. Depending on the force, we might still be able to derive a closed-form analytic expression (after a lot of tedious math), but for more general forces, we probably can't.

For example, consider the case in which we add a drag force to model the height of the ball over time. Drag opposes the motion (velocity) of the ball, so it points upwards for our ball (in the rightmost picture above). Without getting too much into the aerodynamics, the drag force $\vec f_d$ depends on the density of the fluid $\rho$, the velocity of the ball $\vec v$, and the cross-sectional area of the ball $A$ (facing the flow). It also depends on the drag coefficient of the ball $C_d$. $C_d$ depends on a lot of factors, and we'll just take this to be a constant equal to $0.5$ - see regime 4 in this figure for a sphere.
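Putting these ingredients together, the drag force opposing the velocity can be written as:

$$ \vec f_d = -\frac{1}{2} \rho C_d A \lVert \vec v \rVert^2 \frac{\vec v}{\lVert \vec v \rVert} = -\frac{1}{2} \rho C_d A \lVert \vec v \rVert\, \vec v, $$

which is the form that reappears in the acceleration derived below.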

Euler's method: approximating the solution with a numerical method.

To model the motion when more complicated forces are present, we can use numerical methods, which consists of approximating the equations of motion and taking small steps to update the motion of our objects. For example, we can assume that we will take time steps of $\Delta t$ to update the position and velocity of the ball. Velocity is the rate of change of position over time, so:

$$ \vec v^k = \frac{\Delta \vec p}{\Delta t} = \frac{\vec p^{k + 1} - \vec p^k}{\Delta t} \quad \rightarrow \quad \vec p^{k+1} = \vec p^k + \Delta t\ \vec v^k, $$

where $k$ is used to represent the "step" (or iteration) in the simulation. Similarly, acceleration is the rate of change of velocity over time:

$$ \vec a^k = \frac{\Delta \vec v}{\Delta t} = \frac{\vec v^{k + 1} - \vec v^k}{\Delta t} \quad \rightarrow \quad \vec v^{k+1} = \vec v^k + \Delta t\ \vec a^k. $$

How are we going to get the acceleration $\vec a^k$? Newton's second law. Putting $m$ onto the other side, we have:

$$ \vec a^k = \frac{1}{m} \sum \vec f_i. $$

For particles subjected to gravity and drag, this is

$$ \vec a^k = \vec g - \frac{\rho C_d A}{2m} \lVert \vec v \rVert^2 \frac{\vec v}{\lVert \vec v \rVert} = \vec g - \frac{\rho C_d A}{2m} \lVert \vec v \rVert\, \vec v. $$
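As a concrete sketch, here is one Euler step for a single particle in JavaScript (the values of rho, A and mass are assumptions for illustration):

// one Euler step for a single particle under gravity and drag (a sketch)
const rho = 1.225; // assumed air density (kg/m^3)
const Cd = 0.5; // drag coefficient (regime 4 for a sphere)
const A = 0.01; // assumed cross-sectional area (m^2)
const mass = 0.1; // assumed particle mass (kg)

const eulerStep = (p, v, dt) => {
  const speed = Math.hypot(v[0], v[1], v[2]);
  const c = (rho * Cd * A) / (2 * mass); // drag factor in a^k = g - c ||v|| v
  const a = [-c * speed * v[0], -9.81 - c * speed * v[1], -c * speed * v[2]];
  for (let d = 0; d < 3; d++) {
    p[d] += dt * v[d]; // p^{k+1} = p^k + dt * v^k (uses the old velocity)
    v[d] += dt * a[d]; // v^{k+1} = v^k + dt * a^k
  }
};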

The scheme we just derived is called Euler's method. It works fine, but just know that it's not the most accurate. In fact, it's a first-order scheme, which means that the error between the numerical solution and the actual solution is $\mathcal{O}(\Delta t)$. Euler's method made an appearance in the movie Hidden Figures (see this clip).

The Runge-Kutta method is a fourth-order accurate scheme which is a little more complicated to implement - the position and velocity updates are done in four "stages" instead of a single stage like Euler's method. In the demo below, a satellite is orbiting the Earth. Using Euler's method to model the orbit causes the satellite to fly off (in the model) because of the accumulated error in Euler's method. The Runge-Kutta method is more accurate and keeps the satellite closer to the analytic solution. For reference, we are just solving $\frac{\Delta \vec x}{\Delta t} = \vec h(x, y)$ for the state $\vec x = (x, y)$, where $\vec h(x, y) = (-y, x)$ - the analytic solution to this equation is a circle.
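For reference, here is a sketch of one step of each scheme for the orbit equation (the helper names are mine):

// right-hand side of the orbit equation: h(x, y) = (-y, x)
const h = ([x, y]) => [-y, x];

// Euler: a single stage
const eulerStep = (u, dt) => {
  const k = h(u);
  return [u[0] + dt * k[0], u[1] + dt * k[1]];
};

// classical Runge-Kutta: four stages, combined with weights 1/6, 2/6, 2/6, 1/6
const rk4Step = (u, dt) => {
  const k1 = h(u);
  const k2 = h([u[0] + 0.5 * dt * k1[0], u[1] + 0.5 * dt * k1[1]]);
  const k3 = h([u[0] + 0.5 * dt * k2[0], u[1] + 0.5 * dt * k2[1]]);
  const k4 = h([u[0] + dt * k3[0], u[1] + dt * k3[1]]);
  return [
    u[0] + (dt / 6) * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
    u[1] + (dt / 6) * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]),
  ];
};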




Animating particles (efficiently).

Drawing particles with WebGL

So far, we have seen how to draw triangles (gl.TRIANGLES) with WebGL. When drawing particles, we don't have triangle connectivity information. We just have particle positions. Instead of using gl.drawElements, we'll use gl.drawArrays to draw gl.POINTS.

// create buffer for particle positions and write to the GPU
let positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(position), gl.STATIC_DRAW);

Assuming we have enabled some a_Position attribute in a WebGLProgram, we can then draw the particles using:

// draw nPoints particles - for 3d points, nPoints = position.length / 3
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.drawArrays(gl.POINTS, 0, nPoints);

Our vertex shader transforms points from object space to clip space as usual by the model-view-projection matrix. To draw the points, we also need to set a new vertex shader output called gl_PointSize. When drawing points, WebGL will draw a little square around each point, and the size of the square is influenced by gl_PointSize. If we want far-away points to appear smaller, we can set the size to be inversely proportional to the depth after the projection:

attribute vec3 a_Position; // remember to enable this attribute on the JavaScript side!

uniform mat4 u_ProjectionMatrix;
uniform mat4 u_ViewMatrix;
uniform mat4 u_ModelMatrix; // if there is one

void main() {
  gl_Position = u_ProjectionMatrix * u_ViewMatrix * u_ModelMatrix * vec4(a_Position, 1.0);
  gl_PointSize = 50.0 / gl_Position.w; // inversely proportional to depth after projection
}

In our fragment shader, we can either set a constant color for the little square, or we can use another special input to the fragment shader (when drawing points) called gl_PointCoord. This will be the relative coordinates within the square (in $[0, 1]^2$), which means we can use a texture!

precision highp float;

uniform sampler2D tex_Sprite;

void main() {
  gl_FragColor = texture2D(tex_Sprite, gl_PointCoord); // use an image for each particle
  //gl_FragColor = vec4(1, 1, 1, 1); // use a constant color for each particle
}
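On the JavaScript side, the sprite texture is set up like any other texture. A minimal sketch, assuming the image has already been loaded into an <img> element with id "sprite" and has power-of-two dimensions:

// upload the sprite image to texture unit 0 and point tex_Sprite at it
let texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
const image = document.getElementById("sprite"); // assumed <img> element
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
gl.generateMipmap(gl.TEXTURE_2D);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.uniform1i(gl.getUniformLocation(program, "tex_Sprite"), 0);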

Updating particle position and velocity with WebGL2 transform feedback.

To simulate more realistic particle systems like snow or rain, we want to use thousands or even millions of particles! For the examples we are considering today, each particle's motion is only influenced by external forces (and its previous position and velocity), but does not depend on other particle positions. If we were to write a particle animation in JavaScript, it might look something like this:

// assume gl is some WebGLRenderingContext
const draw = (position) => {
  let positionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(position), gl.STATIC_DRAW);
  gl.drawArrays(gl.POINTS, 0, nParticles);
}

const mass = 1; // some mass in kg
const nSteps = 1000;
const tFinal = 10;
const deltaT = tFinal / nSteps;

// initialize particle positions and velocity
let nParticles = 1000;
let position = new Array(nParticles * 3);
let velocity = new Array(nParticles * 3);
// initialize position and velocity with random or known data...

draw(position);
for (let k = 0; k < nSteps; k++) {
  // for each particle, calculate the update
  for (let i = 0; i < nParticles; i++) {
    const ak = vec3.fromValues(0, -9.81, 0); // only gravity in this example
    for (let d = 0; d < 3; d++) {
      const v_k_plus_1 = velocity[3 * i + d] + deltaT * ak[d];
      const p_k_plus_1 = position[3 * i + d] + deltaT * velocity[3 * i + d];

      velocity[3 * i + d] = v_k_plus_1;
      position[3 * i + d] = p_k_plus_1;
    }
  }
  draw(position);
}

The problem with this is that we are rewriting the position data to the GPU at every time step, which is not very efficient. Furthermore, the loop over every particle i can be done in parallel, since the update for particle i only depends on particle i, and not on any other information from some particle j.

To make this more efficient, we can use a WebGL2 feature called transform feedback. Here, we will use WebGL to both draw and update particle positions and velocities. Transform feedback allows us to write varyings to buffers. So we can write the updated position and velocity to varyings (which will be captured into the p_k_plus_1 and v_k_plus_1 buffers during transform feedback), and then swap the buffers so these values become the inputs (a_Position and a_Velocity) on the next time step.

As an example, we'll just focus on updating position using one of the kinematic formulas for a particle that starts at rest (the initial velocity is zero: $\vec v_0 = \vec 0$). The update equation here is $\vec p = \vec p_0 + \frac{1}{2}\vec a t^2$. We'll approximate the current particle velocity on the fly. Here is the vertex shader:

attribute vec3 a_Position;

varying vec3 v_Position;
varying vec3 v_Velocity;

uniform mat4 u_ViewMatrix;
uniform mat4 u_ProjectionMatrix;

uniform int u_iteration;
float deltaT = 5e-4;

void main() {
  // for rendering
  gl_Position = u_ProjectionMatrix * u_ViewMatrix * vec4(a_Position, 1.0);
  gl_PointSize = 10.0 / gl_Position.w;

  // for updating
  float time = float(u_iteration) * deltaT;
  v_Position = a_Position + 0.5 * vec3(0, -9.81, 0) * time * time;
  v_Velocity = (v_Position - a_Position) / deltaT;
}

The fragment shader can be the same one defined above. On the JavaScript side, we need to (1) create buffers to hold v_Position and v_Velocity during the transform feedback capturing and (2) enable transform feedback:

// create and allocate (but do not fill) buffers for transform feedback
const nb = 4; // # of bytes per component (4 for float)
const nc = 3; // # of components (3 for vec3)
let p_k_plus_1 = gl.createBuffer();
let v_k_plus_1 = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, p_k_plus_1);
gl.bufferData(gl.ARRAY_BUFFER, nParticles * nc * nb, gl.DYNAMIC_DRAW);

gl.bindBuffer(gl.ARRAY_BUFFER, v_k_plus_1);
gl.bufferData(gl.ARRAY_BUFFER, nParticles * nc * nb, gl.DYNAMIC_DRAW);

// create transform feedback and associate with buffers
let feedback = gl.createTransformFeedback();

let iteration = 0;
const draw = () => {
  iteration++;
  let u_iteration = gl.getUniformLocation(program, "u_iteration");
  gl.uniform1i(u_iteration, iteration);

  // bind the buffers to the first and second locations in the feedback object
  gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, feedback);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, p_k_plus_1);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, v_k_plus_1);

  gl.clear(gl.DEPTH_BUFFER_BIT | gl.COLOR_BUFFER_BIT);
  gl.beginTransformFeedback(gl.POINTS);
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0);
  //gl.enable(gl.RASTERIZER_DISCARD); // uncomment to disable rasterization (no drawing)
  gl.drawArrays(gl.POINTS, 0, nParticles);
  gl.endTransformFeedback();

  // swap the buffers for the next time step
  [p_k_plus_1, positionBuffer] = [positionBuffer, p_k_plus_1];
}

There is one thing missing here: we need to tell the program (before linking) that we want to capture certain varyings. We can do this with a function called gl.transformFeedbackVaryings, which accepts an array of strings with the names of the varyings we want to capture. For example, assuming we have created a vertexShader and fragmentShader (each a WebGLShader object):

// create shader program
let program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);

let varyings = ["v_Position", "v_Velocity"]; // must match the names of the varyings in the shader
gl.transformFeedbackVaryings(program, varyings, gl.SEPARATE_ATTRIBS);

gl.linkProgram(program);

Here we are using gl.SEPARATE_ATTRIBS, which means the varyings will be captured to separate buffers instead of "interleaved" within a single buffer (gl.INTERLEAVED_ATTRIBS). Having the attributes separate is convenient for swapping the data at the end of each time step.
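For comparison, a sketch of the interleaved variant (interleavedBuffer is a hypothetical buffer):

// with INTERLEAVED_ATTRIBS, both varyings are packed into a single buffer,
// per vertex as [px, py, pz, vx, vy, vz, ...], using only binding point 0
gl.transformFeedbackVaryings(program, ["v_Position", "v_Velocity"], gl.INTERLEAVED_ATTRIBS);
// later: gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, interleavedBuffer);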

It's also possible to retrieve the data after transform feedback!

Transform feedback is also useful for retrieving the data (on the JavaScript side) that was written by the vertex shader. Here is how to retrieve the updated position and velocity data:

let x = new Float32Array(nParticles * nc); // allocate space for the data
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, p_k_plus_1);
gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, x); // read the data
console.log(x); // the result of v_Position

let v = new Float32Array(nParticles * nc);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, v_k_plus_1);
gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, v);
console.log(v); // the result of v_Velocity

Why would we want to do this? It means we can do General-Purpose GPU (GPGPU) programming - i.e. run programs on the GPU that are not necessarily intended for drawing. As an example, consider calculating $\vec c = \vec a + \vec b$ for some $n$-dimensional vectors $\vec a$, $\vec b$ and $\vec c$, where $n$ gets really big (like a billion). Each component of $\vec c$ ($c_i = a_i + b_i$) can be calculated independently in the vertex shader, and we can capture the result of $\vec c$ using transform feedback. Usually, you would use languages like CUDA or OpenCL to do this, but it's pretty cool that we can do this with WebGL.
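As a sketch of what this could look like (the attribute and varying names here are mine, not a library API), each point invocation adds one pair of components, and rasterization can be turned off entirely:

// hypothetical vertex shader for c = a + b: one addition per "vertex"
const addShaderSource = `
  attribute float a_A;
  attribute float a_B;
  varying float v_C;
  void main() {
    v_C = a_A + a_B;
  }`;
// capture v_C before linking, then draw n POINTS with rasterization disabled:
// gl.transformFeedbackVaryings(program, ["v_C"], gl.SEPARATE_ATTRIBS);
// gl.enable(gl.RASTERIZER_DISCARD);
// gl.drawArrays(gl.POINTS, 0, n);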

Other things to consider.

Things are more complicated when the equations of motion for each particle depend on other particles. Sometimes particle interactions require finding the nearest particle during an update step, which could involve using a kd-tree to find the nearest neighbors. In other situations, particles may be directly connected to each other, for example in the simulation of cloth. We'll talk about modeling cloth next class, in which each particle is connected to other particles by springs.

Complete example
<!DOCTYPE html>
<html>

<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width" />
  <title>Week 12: Particles</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.8.1/gl-matrix-min.js"></script>
</head>

<body>
  <div style="width: 100%; position: relative">
    <canvas id="animation-canvas" style="width: inherit"></canvas>
  </div>
  <script type="text/javascript">
    const dim = 3;
    const nParticles = 1e2;
    let position = new Float32Array(dim * nParticles);
    for (let i = 0; i < nParticles * dim; i++)
      position[i] = -1 + 2 * Math.random();

    let canvas = document.getElementById("animation-canvas");
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
    let gl = canvas.getContext("webgl2");

    // create vertex shader
    const vertexShaderSource = `
        attribute vec3 a_Position;

        varying vec3 v_Position;
        varying vec3 v_Velocity;

        uniform mat4 u_ViewMatrix;
        uniform mat4 u_ProjectionMatrix;

        uniform int u_iteration; // k
        float dt = 5e-4;

        void main() {
          // for rendering
          gl_Position = u_ProjectionMatrix * u_ViewMatrix * vec4(a_Position, 1.0);
          gl_PointSize = 50.0 / gl_Position.w;

          // PART 3: calculate v_Position and v_Velocity as updates for next time step
          float time = float(u_iteration) * dt;
          v_Position = a_Position + 0.5 * vec3(0, -9.81, 0) * time * time; // p^{k+1}
          v_Velocity = (v_Position - a_Position) / dt; // v^{k+1}
        }`;
    let vertexShader = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vertexShader, vertexShaderSource);
    gl.compileShader(vertexShader);
    if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS))
      throw "Error in vertex shader: " + gl.getShaderInfoLog(vertexShader);

    // create fragment shader
    const fragmentShaderSource = `
        precision highp float;
        void main() {
          gl_FragColor = vec4(1, 1, 1, 0.5);
        }`;
    let fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(fragmentShader, fragmentShaderSource);
    gl.compileShader(fragmentShader);
    if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS))
      throw "Error in fragment shader: " + gl.getShaderInfoLog(fragmentShader);

    // create shader program
    let program = gl.createProgram();
    gl.attachShader(program, vertexShader);
    gl.attachShader(program, fragmentShader);

    // PART 1: declare varyings to capture
    gl.transformFeedbackVaryings(program, ["v_Position", "v_Velocity"], gl.SEPARATE_ATTRIBS);

    gl.linkProgram(program);
    gl.useProgram(program);

    gl.clearColor(0, 0, 0, 1);
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
    gl.disable(gl.DEPTH_TEST);

    // create buffers for initial data
    let positionBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, position, gl.STATIC_DRAW);

    // enable attributes
    let a_Position = gl.getAttribLocation(program, "a_Position");
    gl.enableVertexAttribArray(a_Position);

    // setup view and write uniforms
    const aspectRatio = canvas.width / canvas.height;
    let projectionMatrix = mat4.create();
    mat4.perspective(projectionMatrix, Math.PI / 4.0, aspectRatio, 1e-3, 1000);

    const eye = vec3.fromValues(0, 0, 3);
    const center = vec3.fromValues(0, 0, 0);
    const up = vec3.fromValues(0, 1, 0);
    const viewMatrix = mat4.lookAt(mat4.create(), eye, center, up);

    let u_ProjectionMatrix = gl.getUniformLocation(program, "u_ProjectionMatrix");
    let u_ViewMatrix = gl.getUniformLocation(program, "u_ViewMatrix");
    gl.uniformMatrix4fv(u_ProjectionMatrix, false, projectionMatrix);
    gl.uniformMatrix4fv(u_ViewMatrix, false, viewMatrix);

    // PART 2
    // create and allocate (but do not fill) buffers for transform feedback
    // and create transform feedback object
    let pNext = gl.createBuffer();
    let vNext = gl.createBuffer();

    gl.bindBuffer(gl.ARRAY_BUFFER, pNext);
    gl.bufferData(gl.ARRAY_BUFFER, 3 * 4 * nParticles, gl.DYNAMIC_DRAW);

    gl.bindBuffer(gl.ARRAY_BUFFER, vNext);
    gl.bufferData(gl.ARRAY_BUFFER, 3 * 4 * nParticles, gl.DYNAMIC_DRAW);

    let feedback = gl.createTransformFeedback();

    // draw
    let maxIteration = 150;
    let iteration = 0;
    let animate = true;
    const draw = () => {
      iteration++;
      let u_iteration = gl.getUniformLocation(program, "u_iteration");
      gl.uniform1i(u_iteration, iteration);
      gl.clear(gl.DEPTH_BUFFER_BIT | gl.COLOR_BUFFER_BIT);

      // PART 4a: bind feedback buffers and start transform feedback
      gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, feedback);
      gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, pNext);
      gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, vNext);

      gl.beginTransformFeedback(gl.POINTS);

      gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
      gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.POINTS, 0, nParticles);

      // PART 4b: end transform feedback, swap buffers and animate
      gl.endTransformFeedback();
      if (animate && iteration < maxIteration) {
        [pNext, positionBuffer] = [positionBuffer, pNext];
        requestAnimationFrame(draw);
      }
    };
    draw();

    // PART 5: read the data from the buffers (if not animating)
    if (!animate) {
      let p = new Float32Array(nParticles * 3);
      gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, pNext);
      gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, p);
      console.log(p);

      let v = new Float32Array(nParticles * 3);
      gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, vNext);
      gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, v);
      console.log(v);
    }
  </script>
</body>

</html>

© Philip Claude Caplan, 2023 (Last updated: 2023-11-28)