WebGL
Our main goal for today is to be able to run the entire pipeline below from start to finish using WebGL. The notes here mostly serve as a reference for building complete WebGL programs. In class, we'll build our own application to render a cube with our cut-and-paste texturing technique.
In last week's lab, we focused on implementing a fragment shader. Today, we'll also build a vertex shader which takes in certain inputs that we'll need to store in the GPU's memory. We'll also talk about how we pass data from the vertex shader to the fragment shader and control global variables that are common to our shader programs.
WebGL is a state machine: manage the state using the "context".

WebGL is a state machine in the sense that it is a large collection of variables that define how it should operate. The state is managed by the context we introduced in the last lecture. Recall that we can retrieve a context from the canvas like this:
let gl = canvas.getContext("webgl"); // retrieves a WebGL context
This context has a collection of functions and constants that we can use and/or modify to change the behavior of the rendering pipeline.
Recall that in Lab 8, we had to enable depth testing (gl.enable(gl.DEPTH_TEST)). This changed the state so that whenever we ran the rendering pipeline, WebGL would use depth testing and draw stuff in the correct order. In this example, gl.enable was the function that allowed us to change the state.
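Since WebGL is a state machine, we can also query the current state from the context. A small sketch (gl.isEnabled and gl.disable are standard context functions):

gl.enable(gl.DEPTH_TEST);
console.log(gl.isEnabled(gl.DEPTH_TEST)); // true - depth testing stays on until we turn it off
gl.disable(gl.DEPTH_TEST);
console.log(gl.isEnabled(gl.DEPTH_TEST)); // false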
In addition to global variables, we can also manage the state used to describe how data flows through the rendering pipeline using buffers and attributes. We can also manage which shader program is currently active in the rendering pipeline.
To better understand this concept of state, please open the following link. In the dropdown at the top-right, click on the "Rainbow Triangle" example.
https://webglfundamentals.org/webgl/lessons/resources/webgl-state-diagram.html
If you click the two rightwards-pointing arrows at the top right, you should see the complete state of the example. Please take a moment to investigate all the different boxes - we'll talk about the details of how this relates to the code we write in the next sections.
Usually, the first thing you'll do is create a program that represents how vertices and fragments are processed. Each shader (vertex, fragment) will be compiled individually and then attached to this shader program. To compile a shader:
const compileShader = (gl, shaderSource, type) => {
  let shader = gl.createShader(type);
  gl.shaderSource(shader, shaderSource);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    let error = gl.getShaderInfoLog(shader);
    gl.deleteShader(shader);
    throw ("Unable to compile " + (type === gl.VERTEX_SHADER ? "vertex" : "fragment") + " shader: " + error);
  }
  return shader;
};
This returns a WebGLShader object that can be attached to a program (a WebGLProgram object):
const compileProgram = (gl, vertexShaderSource, fragmentShaderSource) => {
  let vertexShader = compileShader(gl, vertexShaderSource, gl.VERTEX_SHADER);
  let fragmentShader = compileShader(gl, fragmentShaderSource, gl.FRAGMENT_SHADER);
  let program = gl.createProgram();
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS))
    throw ("Unable to link the shader program: " + gl.getProgramInfoLog(program));
  gl.useProgram(program);
  return program;
};
At the end of the compileProgram function, note that gl.useProgram(program) will change the state of WebGL so that any rendering calls use this program. We will usually have a single program in our course, but it's possible to have multiple programs to create different effects for different objects in your scene.
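For example, a minimal usage sketch (assuming vertexShaderSource and fragmentShaderSource hold GLSL source strings like the ones shown below; the skyProgram and meshProgram names are hypothetical):

// compile and link both shaders; compileProgram also calls gl.useProgram(program)
let program = compileProgram(gl, vertexShaderSource, fragmentShaderSource);

// with multiple programs, switch the active one before drawing each object:
// gl.useProgram(skyProgram); // ... draw the sky ...
// gl.useProgram(meshProgram); // ... draw the mesh ...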
Here is an example of a vertex shader and fragment shader. Don't worry about the attribute stuff for now, we'll talk about that soon. Right now, I just want to highlight the idea of a varying. This is a way to specify that we want to pass a variable called v_Color from the vertex shader to the fragment shader. Wait a second. If the vertex shader operates on every vertex and the fragment shader operates on every fragment, how can we get a value for v_Color at a fragment (which is some portion of a triangle that overlaps a particular pixel)? Interpolation! Using barycentric coordinates (again), the values of the varying at the three vertices of a triangle will be interpolated to create a value at the fragment. Again, WebGL does the interpolation for us - we just need to specify that a variable will be passed from the vertex shader to the fragment shader using the varying declaration.
Vertex Shader:
attribute vec3 a_Position;
attribute vec3 a_Color;
varying vec3 v_Color; // declare varyings before main()
void main() {
  gl_Position = vec4(a_Position, 1.0);
  v_Color = a_Color; // we need to assign this as an output of the vertex shader
}
Fragment Shader:
precision mediump float; // fragment shaders have no default float precision in WebGL, so this line is required
varying vec3 v_Color; // declare incoming interpolated varying

void main() {
  gl_FragColor = vec4(v_Color, 1); // use the varying
}
We can also specify qualifiers to tell WebGL how to do the interpolation. In the example above, we will use the default perspective-correct interpolation.
We need to tell WebGL where to look for our scene data. This means we need to upload the meshes in our scenes to the GPU. We do this with buffers. Think of a buffer as a chunk of memory where we will put our mesh data. Sometimes we need to upload floating-point data (for vertices), and sometimes it will be integer data (for triangle and/or edge indices).
Create the buffers (gl.createBuffer):

let pointBuffer = gl.createBuffer();
let triangleBuffer = gl.createBuffer();
Bind the buffers (gl.bindBuffer):

Binding a buffer refers to the idea of setting the state of which buffer WebGL considers "active". We will use gl.ARRAY_BUFFER for floating-point data (vertices) and gl.ELEMENT_ARRAY_BUFFER for integer data (triangle/edge indices).
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
// OR (if doing an operation with triangle indices)
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
A very important (and bug-prone) aspect of buffers is that every operation on a buffer will work with the currently bound buffer. This relates to the concept of a state. WebGL can only hold one state at a time for the ARRAY_BUFFER, and one for the ELEMENT_ARRAY_BUFFER, so we need to remember which one was the last one we bound.

The picture below shows the difference between when a positionBuffer and colorBuffer are independently bound to the ARRAY_BUFFER.
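In code, this pitfall looks like the following sketch (assuming positionBuffer and colorBuffer were created with gl.createBuffer, and positions and colors are 1d arrays; gl.bufferData is introduced in the next section):

gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW); // writes into positionBuffer

gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW); // now writes into colorBuffer

// any further ARRAY_BUFFER operation still targets colorBuffer until another bindBuffer call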
Write data to the buffers (gl.bufferData):

Say we have vertices stored in a 1d array called vertices and triangle indices stored in a 1d array called triangles. We want to upload both of these arrays to the GPU so WebGL can use them when running through the pipeline.
Unfortunately, raw JavaScript arrays (Array() or []) contain no information about the type of each element in the array. The GPU needs to know how much data we are uploading, so we need to convert these to typed arrays. We will use Float32Array for vertex coordinates and either Uint16Array or Uint32Array for integer data. These typed arrays can be constructed directly from the original arrays (e.g. vertices_f32 = new Float32Array(vertices)).
Now the data can be written to the GPU using the gl.bufferData function. You can always set the last argument to gl.STATIC_DRAW:
// write vertex coordinates to the GPU
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
// write triangle indices to the GPU
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(triangles), gl.STATIC_DRAW);
We're almost ready to draw. We first need to tell WebGL how the data we uploaded to the buffers feeds into the inputs of our shader program. This is done using attributes. There are two steps we need to do here: (1) enable the attribute in the shader program, and (2) associate the attribute with a buffer. In order to enable the attribute, we need to know the "location" of the attribute in the program. Think of this as a pointer to the data. Here is a complete example that associates our pointBuffer with the a_Position attribute in the vertex shader. Assume the program is the same one we created earlier.
// assuming our vertex shader looks like this:
const vertexShaderSource = `
attribute vec3 a_Position; // declare the position attribute in the vertex shader (add "attribute" before declaring type and name of variable)
void main() {
  gl_Position = vec4(a_Position, 1.0);
}`;
let a_Position = gl.getAttribLocation(program, "a_Position"); // the second argument (String) should match the name of the attribute in the vertex shader
gl.enableVertexAttribArray(a_Position); // enable the attribute
// now, associate the attribute with some buffer (here, we want to use the pointBuffer so we need to bind that one)
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0);
//                     ^           ^  ^
The first three arguments (underlined with ^) are the most important. The last three can always be set to false, 0, 0 (in this class).
The first argument is the location of the attribute in the shader program. Next we have the stride of the data that is bound to the pointBuffer. This is 3 because there are three coordinates (x, y, z) for every vertex. The third argument is the type of the data (gl.FLOAT because we buffered a Float32Array for the vertices when we did the call to gl.bufferData).
If you want to attach data to vertices (and use it in the pipeline), you'll need to remember the stride of the data. For 3d vertex normals and 3d vertex coordinates, this will be 3. For 2d texture coordinates (next week), this will be 2. You can also attach colors to each vertex, in which case, the stride will also be 3 for the RGB components.
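For example, a sketch of the corresponding calls for normals and texture coordinates (the normalBuffer, texcoordBuffer, a_Normal and a_TexCoord names are my own):

// 3d vertex normals: 3 values per vertex
gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
gl.vertexAttribPointer(a_Normal, 3, gl.FLOAT, false, 0, 0);

// 2d texture coordinates: 2 values per vertex
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.vertexAttribPointer(a_TexCoord, 2, gl.FLOAT, false, 0, 0);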
Draw the mesh with gl.drawElements.

We're finally ready to invoke the rendering pipeline. We'll generally use gl.drawElements to draw our meshes. Again, before each call to gl.drawElements, we need to bind the buffer with the indices that will be used. For our triangleBuffer declared above, this will be:
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
gl.drawElements(gl.TRIANGLES, triangles.length, gl.UNSIGNED_SHORT, 0);
The first argument specifies that we are drawing triangles. If you want to draw edges, this should be gl.LINES. The next argument corresponds to how much data there is to draw. You would think that WebGL would remember this length since we wrote it earlier! But it doesn't. This sounds annoying but it's actually convenient because we may not always want to draw all the data. The last parameter is the offset in the indexed data, which gives control over where to start the drawing call. Again, this allows us to draw a subset of the triangles. In our course, we'll always draw everything, so you can set this to 0.
The second-to-last argument can be a source of bugs. It refers to the type of the data that we are drawing. In our case, we wrote a Uint16Array to represent our triangle indices as unsigned 16-bit integers. These are also called unsigned shorts, hence, the use of gl.UNSIGNED_SHORT. Note that this means the maximum number of vertices is 65,536 - for larger meshes, use gl.UNSIGNED_INT (unsigned 32-bit integer) with triangle indices buffered as a Uint32Array.
Note there is another function called gl.drawArrays to invoke the rendering pipeline, but we won't really use it in our course.
Before drawing, we may also need to set some constants to control the global state. The most frequent ones we will use are gl.viewport, gl.enable, gl.polygonOffset and gl.clearColor.
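For example, a typical setup before drawing might look like this sketch (the exact values depend on your scene):

gl.viewport(0, 0, canvas.width, canvas.height); // map clip coordinates to the full canvas
gl.clearColor(1, 1, 1, 1); // white background
gl.enable(gl.DEPTH_TEST); // turn on depth testing
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // clear the color and depth buffers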
We may also want to calculate variables on the JavaScript side and pass them into our shaders. For example, if the view and/or model matrices are modified when a user clicks and drags the mouse (we can apply a rotation to the model matrix), we need to transform points accordingly in the vertex shader. These variables can be passed into the pipeline as uniforms. We can pass int, float, vec3, vec4, mat3 and mat4. These uniforms can be used in either the vertex or fragment shader and need the uniform qualifier before declaring the type. For example:
uniform int u_shader; // as we saw in Lab 8
uniform float u_transparency; // in case you want to adjust the transparency value (4th component in gl_FragColor)
uniform vec3 u_km; // specify the material km on the JavaScript side
uniform mat4 u_ModelViewProjectionMatrix; // 4x4 matrix to transform points by the MVP matrix (in the vertex shader).
I usually prefix uniforms with u_ to remember that they are uniforms. This helps when your shaders have a lot of variables. Similar to attributes, we need to retrieve the "location" of the uniform in the shader program, this time using gl.getUniformLocation. Here are some examples for float, vec3 and mat4. Let the vertex and fragment shaders be:
Vertex Shader:
attribute vec3 a_Position;
uniform mat4 u_ModelViewProjectionMatrix;
void main() {
  gl_Position = u_ModelViewProjectionMatrix * vec4(a_Position, 1.0);
}
Fragment Shader:
precision mediump float; // fragment shaders need a default float precision in WebGL
uniform float u_transparency;
uniform vec3 u_km;

void main() {
  gl_FragColor = vec4(u_km, u_transparency);
}
We can set these uniforms from the JavaScript side like this:
let u_transparency = gl.getUniformLocation(program, "u_transparency");
gl.uniform1f(u_transparency, 0.6); // use gl.uniform1i if the uniform is an integer
let u_km = gl.getUniformLocation(program, "u_km");
gl.uniform3f(u_km, 0.6, 0.4, 0.5);
// OR
gl.uniform3fv(u_km, vec3.fromValues(0.6, 0.4, 0.5)); // the "v" at the end of uniform3fv means it is a vector
// assuming you have computed the model-view-projection (MVP) matrix somewhere
let u_mvp = gl.getUniformLocation(program, "u_ModelViewProjectionMatrix");
gl.uniformMatrix4fv(u_mvp, false, mvp); // assuming "mvp" is a mat4
Note the difference between writing scalars and vectors (uniform[1234][fi][v]) and how matrices are passed (uniformMatrix[234]fv). This is often a source of bugs (it's easy to forget the transpose option for matrices, which we have set to false in this example). For more information, please see the documentation for these functions.
After setting a uniform, we can use it in the shader program when the rendering pipeline runs.
Textures in WebGL

There are also mechanisms to write and use textures in our WebGL-based rendering pipeline. Just like with mesh data, we need to write texture data to the GPU so we can use it in our shaders.
Assume we have the image of Spot loaded into our HTML document using an img tag:
<img id="spot-texture" src="spot.png" hidden/>
Recall the hidden property means that it won't be rendered in the HTML document - but we'll be able to retrieve it on the JavaScript side using the id. Here is how to set up a texture for WebGL using this image (assume we have a WebGLRenderingContext called gl and a WebGLProgram called program):
// retrieve the image
let image = document.getElementById("spot-texture");
// create the texture and activate it
let texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0); // we are using texture index 0 <-- make a note of this!
gl.bindTexture(gl.TEXTURE_2D, texture);
// define the texture to be that of the requested image
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, 1); // may or may not need depending on the y-axis of the texture image
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, image);
Remember, we also need to tell WebGL how to do the lookup for texels (ingredient #3 when we talked about textures). For both minification and magnification, we'll specify that we want WebGL to use the nearest filter (gl.NEAREST):
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
Now, on the GLSL side, assuming we calculated the s and t (float) values, we can use a sampler2D to "sample" our texture object. This will actually be a uniform, and we'll need to tell WebGL what to use for this uniform (just like we did last class for scalars, vectors, matrices). This is where the texture "index" noted above is important. If we want to bind this sampler2D to our texture at gl.TEXTURE0, we need to set the value of this uniform to be 0. If we activated and wanted to use gl.TEXTURE8, this would be 8.
So for a shader with the following declaration (in GLSL):
uniform sampler2D tex_Image;
the corresponding JavaScript to bind our texture in unit 0 with this sampler2D would be:
gl.uniform1i(gl.getUniformLocation(program, 'tex_Image'), 0); // pass the N in gl.TEXTUREN when activating the texture unit
Finally, to perform the texel lookup, we can use the following within the main() of our shader:
vec3 km = texture2D(tex_Image, vec2(s, t)).rgb;
The texture2D function in GLSL returns a vec4, so we extract the first three components of the result with .rgb (does the same as .xyz).
For general models, we'll pass texture coordinates at each vertex (e.g. read from an OBJ file) and send them (using a varying) to be interpolated and, subsequently, input to the fragment shader. Next, the fragment shader will look up which color (or other value) to use in the lighting model.
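As a sketch, the shader pair for this might look like the following (the a_TexCoord and v_TexCoord names are my own):

const vertexShaderSource = `
attribute vec3 a_Position;
attribute vec2 a_TexCoord;
varying vec2 v_TexCoord;
void main() {
  gl_Position = vec4(a_Position, 1.0);
  v_TexCoord = a_TexCoord; // interpolated across each triangle
}`;

const fragmentShaderSource = `
precision mediump float;
varying vec2 v_TexCoord;
uniform sampler2D tex_Image;
void main() {
  gl_FragColor = texture2D(tex_Image, v_TexCoord); // texel lookup at the interpolated coordinates
}`;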
Just like we have been writing vertex coordinates, normals, and colors to the GPU, we'll now write texture coordinates. Assume that we have a 1d array in our mesh called texcoords with a stride of 2 to store the texture coordinates at each vertex:
// enable the attribute in the program
let a_TexCoord = gl.getAttribLocation(program, "a_TexCoord");
gl.enableVertexAttribArray(a_TexCoord);
// create a buffer for the texture coordinates and write to it
let texcoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(mesh.texcoords), gl.STATIC_DRAW);
// associate the data in the texcoordBuffer with the a_TexCoord attribute whenever we want to use them
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer); // redundant here, but safe!
gl.vertexAttribPointer(a_TexCoord, 2, gl.FLOAT, false, 0, 0);
Note that we are using 2 instead of 3 since there are only two values per vertex.
Mipmaps in WebGL

A mipmap can be generated for the currently bound texture (assuming it is bound to gl.TEXTURE_2D) in WebGL using:
gl.generateMipmap(gl.TEXTURE_2D);
Note that since the width and height are halved at each level of the mipmap, the original image width and height should be a power of 2 (e.g. 512 x 1024). For more information on the texture parameters that can be set, please see this WebGL documentation.
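Once a mipmap exists, we can tell WebGL to actually use it during minification, e.g. with trilinear filtering (a sketch; gl.LINEAR_MIPMAP_LINEAR blends both within and between mipmap levels):

gl.bindTexture(gl.TEXTURE_2D, texture);
gl.generateMipmap(gl.TEXTURE_2D);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);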
The level of the mipmap can be determined by sampling the nearby pixels and determining if there is a large change in texel value. This works because the GPU actually processes 2x2 batches of pixels (a quad) so it can use the texels retrieved by sampling these 4 pixels.
In most of the exercises and labs we have done so far, the models have rotated when we clicked and dragged the mouse. This can be done by adding callbacks that are used when a certain "event" happens on the canvas. These events can be: mousedown (mouse button is clicked), mouseup (mouse button is released), mousemove (mouse is moved), mousewheel (mouse wheel is scrolled) or keydown (some key is pressed). Here are some common callbacks that were set in the labs/exercises:
const mouseMove = (event, renderer) => {
  if (!renderer.dragging) return;
  let R = rotation(
    (event.pageX - renderer.lastX) / renderer.canvas.width,
    (event.pageY - renderer.lastY) / renderer.canvas.height
  );
  mat4.multiply(renderer.modelMatrix, R, renderer.modelMatrix);
  renderer.draw();
  renderer.lastX = event.pageX;
  renderer.lastY = event.pageY;
};
const mouseDown = (event, renderer) => {
  renderer.dragging = true;
  renderer.lastX = event.pageX;
  renderer.lastY = event.pageY;
};

const mouseUp = (event, renderer) => {
  renderer.dragging = false;
};
const mouseWheel = (event, renderer) => {
  event.preventDefault();
  let scale = 1.0;
  if (event.deltaY > 0) scale = 0.9;
  else if (event.deltaY < 0) scale = 1.1;
  let direction = vec3.create();
  vec3.subtract(direction, renderer.eye, renderer.center);
  vec3.scaleAndAdd(renderer.eye, renderer.center, direction, scale);
  mat4.lookAt(
    renderer.viewMatrix,
    renderer.eye,
    renderer.center,
    vec3.fromValues(0, 1, 0)
  );
  renderer.draw();
};
let canvas = document.getElementById(canvasId);
let renderer = new Renderer(canvas);
renderer.dragging = false;

canvas.addEventListener("mousemove", (event) => {
  mouseMove(event, renderer);
});
canvas.addEventListener("mousedown", (event) => {
  mouseDown(event, renderer);
});
canvas.addEventListener("mouseup", (event) => {
  mouseUp(event, renderer);
});
canvas.addEventListener("mousewheel", (event) => {
  mouseWheel(event, renderer);
});
The rotation function can be customized based on your application. The idea is to return a 4x4 rotation matrix from the difference in x and y screen coordinates. One option is to assume these changes represent angles about the y- and x-axis, respectively. In other words, the relative difference dx from dragging along the width (x-direction) represents a small rotation about the y-axis and the relative difference dy from dragging along the height represents a rotation about the x-axis, which we can compound:
const rotation = (dx, dy) => {
  const speed = 4;
  const R = mat4.fromYRotation(mat4.create(), speed * dx);
  return mat4.multiply(
    mat4.create(),
    mat4.fromXRotation(mat4.create(), speed * dy),
    R
  );
};
Keep in mind that if you want to center the rotation about a different point, you'll need to translate to the origin, rotate and then translate back (as we did in Lecture 5).
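A sketch of this compound transformation using the gl-matrix functions from our labs (the rotationAbout helper and its center parameter are my own):

const rotationAbout = (dx, dy, center) => {
  // translate center to the origin, rotate, then translate back: T * R * T^(-1)
  const T = mat4.fromTranslation(mat4.create(), center);
  const Tinv = mat4.fromTranslation(mat4.create(), vec3.negate(vec3.create(), center));
  const R = rotation(dx, dy); // the rotation function above
  let M = mat4.multiply(mat4.create(), R, Tinv); // R * T^(-1)
  return mat4.multiply(M, T, M); // T * R * T^(-1)
};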
Also remember that the HTML canvas has y pointing downwards, so a positive dy is a positive rotation about the x-axis (in the samples above).
In order to set key callbacks, we can use something like this:
canvas.addEventListener("keydown", (event) => {
  const key = event.key; // a character
  // do something when this key was pressed
});
Please see some information about the KeyboardEvent object here. In Thursday's lab, we will use the arrow keys, for which the event.key will be ArrowRight, ArrowLeft, ArrowUp or ArrowDown.
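For example, a sketch of an arrow-key handler that nudges the model rotation (reusing the rotation helper and renderer from above; the step size is arbitrary):

canvas.addEventListener("keydown", (event) => {
  const step = 0.05; // small relative rotation per key press
  let R = null;
  if (event.key === "ArrowLeft") R = rotation(-step, 0);
  else if (event.key === "ArrowRight") R = rotation(step, 0);
  else if (event.key === "ArrowUp") R = rotation(0, -step);
  else if (event.key === "ArrowDown") R = rotation(0, step);
  if (!R) return;
  mat4.multiply(renderer.modelMatrix, R, renderer.modelMatrix);
  renderer.draw();
});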
To summarize, the main steps to follow when setting up a WebGL application are:

1. Create a WebGLProgram from a vertex shader and fragment shader (each a WebGLShader).
2. Create a WebGLBuffer for each array of mesh data and write to it (createBuffer, bindBuffer, bufferData).
3. Enable the attributes of the shader program (getAttribLocation, enableVertexAttribArray).
4. Associate each attribute with a buffer (bindBuffer, vertexAttribPointer).
5. Set up the global state (viewport, enable, clear).
6. Write any uniforms used by the shaders (getUniformLocation, uniform[1234][fi][v] and uniformMatrix[234]fv).
7. Invoke the pipeline (bindBuffer for the element indices, then drawElements).

Depending on your application and/or scene, some of these steps might be omitted or merged with others.
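Here is a compact sketch of these steps for a single triangle (assuming the compileShader/compileProgram helpers from above and a gl context retrieved from the canvas; no uniforms are needed in this tiny example, so that step is skipped):

const vertexShaderSource = `
attribute vec3 a_Position;
void main() { gl_Position = vec4(a_Position, 1.0); }`;
const fragmentShaderSource = `
precision mediump float;
void main() { gl_FragColor = vec4(0.2, 0.6, 1.0, 1.0); }`;

// 1. create the shader program (compileProgram also calls gl.useProgram)
let program = compileProgram(gl, vertexShaderSource, fragmentShaderSource);

// 2. create buffers and write the mesh data
let vertices = [0, 0, 0, 1, 0, 0, 0, 1, 0];
let triangles = [0, 1, 2];
let pointBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
let triangleBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(triangles), gl.STATIC_DRAW);

// 3-4. enable the attribute and associate it with the point buffer
let a_Position = gl.getAttribLocation(program, "a_Position");
gl.enableVertexAttribArray(a_Position);
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0);

// 5. global state
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clearColor(1, 1, 1, 1);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

// 7. draw
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
gl.drawElements(gl.TRIANGLES, triangles.length, gl.UNSIGNED_SHORT, 0);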
There are a lot of places bugs can happen in WebGL programs. If your application doesn't work, please go through the following checklist:

- Did you declare the inputs to your vertex shader with the attribute qualifier? Did you enable them on the JavaScript side? Did you use the correct string name and location when enabling the attribute?
- Did you declare the varying variables in both shaders? Did you set the value of the varying in your vertex shader?
- Did you write your data to the GPU (bufferData)? Are you using the correct buffer type? Did you convert the arrays to the correct typed array? Are you using the intended buffer? Check for instances of accidentally typing gl.ARRAY_BUFFER when you intended gl.ELEMENT_ARRAY_BUFFER and vice versa.
- Double-check the parameters passed to vertexAttribPointer, bufferData and drawElements.
- Check which buffer is currently bound - it's easy to bind gl.ARRAY_BUFFER when you intended gl.ELEMENT_ARRAY_BUFFER.
- Check that variable names are consistent between your JavaScript and GLSL code.

The most common bug I see (and also produce) is the result of forgetting which buffer is currently bound. Bugs also arise when incorrect parameters are passed to bufferData, vertexAttribPointer and drawElements, so please double-check these thoroughly when your application doesn't work.
We'll put all these pieces together in class. Just in case something isn't working, here is the completed exercise (right click and View Page Source).