Lecture 09: Textures
The methods we have used so far for calculating the color at a surface point are pretty limited. This is mostly because our base model color, described by the diffuse reflection coefficient km, is constant over the entire surface.
Our main goal for today is to retrieve a better value for km at each point on the surface.
The main idea of texturing is to sample the pixels in an image to determine the properties (usually, a color) of a point on our surface. To distinguish the screen pixels (in our destination image) from this texturing image, the pixels in this texture image are called texels. We need the following ingredients to texture our surfaces:
Ingredient #1: We need an image to sample and paste, like the image at the top-right of these notes. We'll also need to write the data in this image to the GPU.
Ingredient #2: We need to associate our surface points with a "location" in this image, so we can look up the color. This is done using texture coordinates, which are 2d coordinates (since they are defined with respect to the reference image). We will denote these texture coordinates as (s, t).
The image below shows how a point on Spot's eye, which has particular texture coordinates (s, t), maps to a location in the texture image.
Ingredient #3: We need to determine how we will look up the texel values in the image. The texture coordinates on the surface will almost never align exactly with the center of a texel. Furthermore, we need to consider the relative size of a pixel (associated with our fragment) and the texels in the texture image. We'll revisit this concept later, and WebGL will take care of this for us, but we need to tell it what to do. For now, let's assume we always look up the texel whose center is nearest to the (s, t) location.
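To make "nearest" concrete, here is a small illustrative helper (the GPU does this lookup for us; this sketch only shows the idea of snapping (s, t) to a texel index):

```javascript
// Illustrative nearest-texel lookup: maps texture coordinates (s, t) in [0, 1]
// to the column/row of the nearest texel in a width x height image.
// (WebGL performs this for us; this function is only for illustration.)
function nearestTexel(s, t, width, height) {
  // scale to texel space, then clamp so (s, t) = (1, 1) stays inside the image
  const i = Math.min(width - 1, Math.max(0, Math.floor(s * width)));
  const j = Math.min(height - 1, Math.max(0, Math.floor(t * height)));
  return [i, j];
}
```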
As we have seen with our graphics programs, it's usually a good idea to break up our task into smaller pieces. Let's leave ingredient #2 aside for now and assume that we can calculate texture coordinates analytically. To do so, we'll assume our model is a sphere, just like we did in Lecture 2.
The surface of a sphere can be parametrized by two variables, which are actually angles: θ and φ. Let's assume the sphere is centered on the origin with a radius of 1, so that

x = sin(φ) cos(θ), y = sin(φ) sin(θ), z = cos(φ).

The range of θ is [0, 2π) and the range of φ is [0, π]. We can also go the other way around, i.e. from (x, y, z) to (θ, φ):

θ = atan(y, x) + π, φ = acos(z).

Note that the range of the GLSL atan(y, x) function is (-π, π], which is why we add π to bring θ into the [0, 2π] range; the range of acos is already [0, π]. To obtain texture coordinates in [0, 1], we then set s = θ / (2π) and t = φ / π. WebGL, however, knows the width and height of our texture image, so we just need to get s and t into [0, 1]; the conversion to actual texel indices is handled by WebGL.
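The same conversion can be sketched in JavaScript (mirroring the spherical-coordinate formulas above, for a unit sphere centered at the origin):

```javascript
// Convert a point (x, y, z) on the unit sphere to texture coordinates (s, t).
// Math.atan2 returns values in (-pi, pi], so we shift by +pi to get theta in [0, 2*pi);
// Math.acos returns values in [0, pi].
function sphereTexCoords(x, y, z) {
  const theta = Math.atan2(y, x) + Math.PI; // azimuthal angle, [0, 2*pi)
  const phi = Math.acos(z);                 // polar angle, [0, pi]
  return [theta / (2 * Math.PI), phi / Math.PI]; // (s, t) in [0, 1]
}
```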
Before we can look up texels in our shader programs, we need to write the image data to the GPU. Let's first assume we have the image of Spot loaded into our HTML document using an img tag:
<img id="spot-texture" src="spot.png" hidden/>
The hidden attribute means that the image won't be rendered in the HTML document, but we'll be able to retrieve it on the JavaScript side using its id. Here is how to set up a texture in WebGL using this image (assume we have a WebGLRenderingContext called gl and a WebGLProgram called program):
// retrieve the image
let image = document.getElementById("spot-texture");
// create the texture and activate it
let texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0); // we are using texture index 0 <-- make a note of this!
gl.bindTexture(gl.TEXTURE_2D, texture);
// define the texture to be that of the requested image
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, 1); // may or may not need depending on y-axis of texture image
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, image);
Remember, we also need to tell WebGL how to do the lookup for texels (ingredient #3). For both minification and magnification (more about this soon), we'll specify that we want WebGL to use the nearest filter (gl.NEAREST):
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
Now, on the GLSL side, assuming we have calculated the s and t (float) values, we can use a sampler2D to "sample" our texture object. This sampler will actually be a uniform, and we'll need to tell WebGL what to use for this uniform (just like we did last class for scalars, vectors, and matrices). This is where the texture "index" noted above is important. If we want to bind this sampler2D to our texture at gl.TEXTURE0, we need to set the value of this uniform to 0. If we had activated and wanted to use gl.TEXTURE8, this would be 8.
So for a shader with the following declaration (in GLSL):
uniform sampler2D tex_Image;
the corresponding JavaScript to bind our texture in unit 0 with this sampler2D would be:
gl.uniform1i(gl.getUniformLocation(program, 'tex_Image'), 0); // pass the N in gl.TEXTUREN when activating the texture unit
Finally, to perform the texel lookup, we can use the following within the main() of our shader:
vec3 km = texture2D(tex_Image, vec2(s, t)).rgb;
The texture2D function in GLSL returns a vec4, so we extract the first three components of the result with .rgb (which does the same as .xyz).
For more general surfaces represented by a mesh, we don't have an analytic way of getting the texture coordinates. Instead, we'll store texture coordinates at each vertex of the mesh, and WebGL will use interpolation to figure out the texture coordinates at each fragment. So what we'll do is pass the texture coordinates from the vertex shader (as a varying) to be interpolated and, subsequently, input to the fragment shader. Next, the fragment shader will look up which color (or other value) to use in the lighting model.
Just like we have been writing vertex coordinates, normals, and colors to the GPU, we'll now write texture coordinates. Assume that we have a 1d array in our mesh called texcoords with a stride of 2 to store the (s, t) values at each vertex:
// enable the attribute in the program
let a_TexCoord = gl.getAttribLocation(program, "a_TexCoord");
gl.enableVertexAttribArray(a_TexCoord);
// create a buffer for the texture coordinates and write to it
let texcoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(mesh.texcoords), gl.STATIC_DRAW);
// associate the data in the texcoordBuffer with the a_TexCoord attribute whenever we want to use them
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer); // redundant here, but safe!
gl.vertexAttribPointer(a_TexCoord, 2, gl.FLOAT, false, 0, 0);
Note that we are using 2 instead of 3 since there are only two values per vertex: s and t.
The Wavefront (.obj) files we have been using are a common format for computer graphics models. The first characters of every line represent what the remaining values on that line correspond to:

v: vertex coordinates
vt: texture coordinates
vn: normal components
f: a face, where each vertex in the face is written as v, v/vt, v//vn or v/vt/vn, depending on the number of forward slashes in each vertex description. The number of vertices listed on each f line determines the type of polygon (three for the triangles we use).
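As a sketch (a hypothetical helper, not part of our renderer or of webgl-obj-loader), parsing one f line with v/vt/vn-style indices might look like:

```javascript
// Parse one "f" line of a Wavefront .obj file, e.g. "f 1/1/1 2/3/2 3/4/3".
// Each vertex is "v/vt/vn" (1-based indices); missing entries (e.g. "v//vn")
// come back as null. Returns one [v, vt, vn] triple per vertex on the line.
function parseFaceLine(line) {
  return line.trim().split(/\s+/).slice(1).map((vertex) =>
    vertex.split("/").map((index) => (index === "" ? null : parseInt(index, 10)))
  );
}
```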
Let's re-interpret our texturing process at the level of a texel. In other words, we are mapping our texel colors to the final pixel (which is associated with the fragments obtained from the rasterization process). At a certain view, the size of a texel will exactly match the size of a pixel, though this rarely happens. When the surface we are painting is very far away, each pixel we are processing can cover many texel values (size of pixel > size of texel), a phenomenon known as minification. When we are closer to the surface, a single texel can cover many pixels (size of pixel < size of texel), which is known as magnification.
Minification (left) versus Magnification (right) (Interactive Computer Graphics, Angel & Schreiner, 2012)
In the demo below, try to zoom into the chess board (by scrolling the mouse). As we get closer, the effects of magnification start to become apparent, and we can see the "blockiness" (aliasing) in the final image. This is because we were initially using the nearest sample when retrieving a texel (gl.NEAREST). If you change the dropdown to LINEAR, the magnification filter will be changed to gl.LINEAR, which uses a weighted average of the surrounding texels. This is more expensive but will smooth out the blocky artifacts in the image.
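To make the "weighted average" concrete, here is a sketch of what a linear (bilinear) lookup computes for a single-channel texture; getTexel(i, j) is a hypothetical accessor, and the GPU does all of this for us:

```javascript
// Bilinear filtering sketch for a single-channel width x height texture.
// getTexel(i, j) is a hypothetical accessor returning the texel value at column i, row j.
function sampleLinear(s, t, width, height, getTexel) {
  // position in texel space, offset so texel centers sit at integer + 0.5
  const x = s * width - 0.5;
  const y = t * height - 0.5;
  const i = Math.floor(x), j = Math.floor(y);
  const fx = x - i, fy = y - j; // fractional offsets = interpolation weights
  const clampI = (a) => Math.min(width - 1, Math.max(0, a));
  const clampJ = (a) => Math.min(height - 1, Math.max(0, a));
  const v00 = getTexel(clampI(i), clampJ(j));
  const v10 = getTexel(clampI(i + 1), clampJ(j));
  const v01 = getTexel(clampI(i), clampJ(j + 1));
  const v11 = getTexel(clampI(i + 1), clampJ(j + 1));
  // weighted average of the 4 surrounding texels
  return (1 - fy) * ((1 - fx) * v00 + fx * v10) + fy * ((1 - fx) * v01 + fx * v11);
}
```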
The difference between "nearest" and "linear" filters in terms of the magnification problem is also demonstrated in the image below:
Now try scrolling away from the chess board, and notice that the colors don't look as patterned. If you rotate (click and drag), you should also start to see some "flickering" effects. We are now seeing the effects of minification: the texel lookup has many texels to pick from for a single pixel, and the one it picks appears somewhat arbitrary. Now change the dropdown to MIPMAP. The chess board should appear to have the correct pattern again, regardless of the distance or how you rotate the square. A mipmap is a minification filter in which a sequence of images is first generated by halving the width and height of the image at each level. Texels will then be looked up at the appropriate level, which can, again, use either nearest or linear filters. One disadvantage of mipmaps, however, is that the resulting rendering can look a bit blurry.
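The halving sequence is easy to sketch with a small illustrative helper (not a WebGL API): each level's width and height are half the previous level's, down to 1 x 1.

```javascript
// Compute the sizes of each mipmap level by halving width and height
// until reaching 1 x 1 (dimensions are rounded down, never below 1).
function mipmapLevels(width, height) {
  const levels = [[width, height]];
  while (width > 1 || height > 1) {
    width = Math.max(1, width >> 1);
    height = Math.max(1, height >> 1);
    levels.push([width, height]);
  }
  return levels;
}
```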
A mipmap can be generated for the currently bound texture (assuming it is bound to gl.TEXTURE_2D) in WebGL using:
gl.generateMipmap(gl.TEXTURE_2D);
Note that since the width and height are halved at each level of the mipmap, the original image width and height should each be a power of 2 (e.g. 512 x 1024). For more information on the texture parameters that can be set, please see the WebGL documentation.
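A quick way to check the power-of-2 requirement before calling generateMipmap is the classic bit trick (a small helper, not a WebGL API):

```javascript
// Returns true when v is a positive power of two (1, 2, 4, 8, ...).
// A power of two has exactly one bit set, so v & (v - 1) clears it to zero.
function isPowerOfTwo(v) {
  return v > 0 && (v & (v - 1)) === 0;
}
```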
The level of the mipmap can be determined by checking how quickly the texture coordinates change between nearby pixels. This works because the GPU actually processes 2x2 batches of pixels (a quad), so it can use the texture coordinates sampled at these 4 pixels to estimate how many texels a single pixel covers.
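The idea can be sketched as follows: convert the (s, t) differences between adjacent pixels into texel units, take the larger footprint, and take its log2 (this mirrors the idea; the exact rule a GPU uses is implementation-defined).

```javascript
// Estimate a mipmap level from texture-coordinate differences across a 2x2 quad.
// (ds_dx, dt_dx) and (ds_dy, dt_dy) are the changes in (s, t) between horizontally
// and vertically adjacent pixels; width/height convert them to texel units.
// This mirrors the idea of GPU level selection, not the exact hardware rule.
function mipmapLevel(ds_dx, dt_dx, ds_dy, dt_dy, width, height) {
  const lenX = Math.hypot(ds_dx * width, dt_dx * height); // texels covered horizontally
  const lenY = Math.hypot(ds_dy * width, dt_dy * height); // texels covered vertically
  const rho = Math.max(lenX, lenY); // footprint of one pixel, in texels
  return Math.max(0, Math.log2(rho)); // level 0 when one pixel covers <= 1 texel
}
```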
Another method for determining the color at a surface point is called procedural texturing, which consists of defining an explicit function (procedure) to describe the relationship between the surface coordinates and the color. This can work using either the 3d surface coordinates or the 2d parametric description of the surface. Note that we already did a form of procedural texturing! Recall Lab 3, when we assigned a kmFunction for BB-8.
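For example, a checkerboard is a classic procedural texture. A sketch of such a function over (s, t) (analogous in spirit to the kmFunction from Lab 3, not the exact one we used) is:

```javascript
// Procedural checkerboard: returns a dark or light km depending on which
// cell of an n x n grid the texture coordinates (s, t) fall in.
function checkerboard(s, t, n) {
  const cell = Math.floor(s * n) + Math.floor(t * n);
  return cell % 2 === 0 ? [0.1, 0.1, 0.1] : [0.9, 0.9, 0.9];
}
```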
Texturing isn't restricted to looking up the color of the surface. We can look up other items that might go into our lighting model. For example, we might want to use an image to look up the normal vector at a point on the surface, or we might want to displace the surface by some amount defined in an image:
It's also possible to look up the specular coefficient (or the shininess exponent) at a surface point using an image.
Other texturing methods are also possible, such as environment maps (looking up the background image assuming the scene is enclosed in a cube or sphere) and projective texturing, whereby an input image (with known camera orientation and perspective settings) is pasted onto our model.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width" />
<title>Week 09: Textures</title>
<!-- load gl-matrix and webgl-obj-loader -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.8.1/gl-matrix-min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/webgl-obj-loader@2.0.8/dist/webgl-obj-loader.min.js"></script>
</head>
<body>
<canvas id="renderer-canvas" width="500" height="500"></canvas>
<img id="spot-texture" src="spot.png" hidden />
<script type="text/javascript">
window.onload = () => {
OBJ.downloadMeshes({
sphere: "sphere3.obj",
spot: "spot.obj",
}, (meshes) => {
// retrieve the mesh
const useSphere = 0; // 0 for spot.obj, 1 for sphere mesh
let mesh = (useSphere) ? meshes.sphere : meshes.spot;
mesh.triangles = mesh.indices.slice();
mesh.indices = [];
const nTriangles = mesh.triangles.length / 3;
const nVertices = mesh.vertices.length / 3;
console.log(`mesh has ${nTriangles} triangles and ${nVertices} vertices`);
if (mesh.textures.length === 0) mesh.textures = new Array(2 * mesh.vertices.length / 3).fill(0);
let canvas = document.getElementById("renderer-canvas");
let gl = canvas.getContext("webgl");
// create vertex shader
const vertexShaderSource = `
attribute vec3 a_Position;
attribute vec2 a_TexCoord;
uniform mat4 u_ViewMatrix;
uniform mat4 u_ProjectionMatrix;
uniform mat4 u_ModelMatrix;
varying vec3 v_Position;
varying vec2 v_TexCoord;
void main() {
gl_Position = u_ProjectionMatrix * u_ViewMatrix * u_ModelMatrix * vec4(a_Position, 1.0);
v_Position = a_Position;
v_TexCoord = a_TexCoord;
}`;
let vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexShaderSource);
gl.compileShader(vertexShader);
if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS))
throw ("error in vertex shader: " + gl.getShaderInfoLog(vertexShader));
// create fragment shader
const fragmentShaderSource = `
precision highp float;
#define PI 3.14159265358979
uniform sampler2D tex_Image;
varying vec3 v_Position;
varying vec2 v_TexCoord;
uniform float u_sphere;
void main() {
float x = v_Position.x;
float y = v_Position.y;
float z = v_Position.z;
float theta = atan(y, x) + PI;
float phi = acos(z / 1.0);
float s = theta / (2.0 * PI);
float t = phi / PI;
vec2 texcoord = u_sphere * vec2(s, t) + (1.0 - u_sphere) * v_TexCoord;
vec3 km = (texture2D(tex_Image, texcoord)).rgb;
gl_FragColor = vec4(km, 1);
}`;
let fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragmentShaderSource);
gl.compileShader(fragmentShader);
if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS))
throw ("error in fragment shader: " + gl.getShaderInfoLog(fragmentShader));
// create shader program
let program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);
// enable attributes
let a_Position = gl.getAttribLocation(program, "a_Position");
gl.enableVertexAttribArray(a_Position);
let a_TexCoord = gl.getAttribLocation(program, "a_TexCoord");
gl.enableVertexAttribArray(a_TexCoord);
// create buffer for vertices
let vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(mesh.vertices), gl.STATIC_DRAW);
// create buffer for texture coordinates
let texcoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(mesh.textures), gl.STATIC_DRAW);
// create buffer for triangles
let triangleBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(mesh.triangles), gl.STATIC_DRAW);
// associate the buffer data in vertices to a_Position
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0);
// associate the buffer data in texture coordinates to a_TexCoord
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.vertexAttribPointer(a_TexCoord, 2, gl.FLOAT, false, 0, 0);
// retrieve the image
let image = document.getElementById("spot-texture");
// create the texture and activate it
let texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
// define the texture to be that of the requested image
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, 1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, image);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.uniform1i(gl.getUniformLocation(program, 'tex_Image'), 0);
// retrieve uniform locations
let u_ModelMatrix = gl.getUniformLocation(program, "u_ModelMatrix");
let u_ViewMatrix = gl.getUniformLocation(program, "u_ViewMatrix");
let u_ProjectionMatrix = gl.getUniformLocation(program, "u_ProjectionMatrix");
// projection matrix is always the same, so write this to the program here
const projectionMatrix = mat4.perspective(mat4.create(), Math.PI / 4., 1.0, 1e-3, 1000);
gl.uniformMatrix4fv(u_ProjectionMatrix, false, projectionMatrix);
// view parameters and model matrix
let eye = vec3.fromValues(5, 0, 0);
let up = vec3.fromValues(0, 0, 1);
let center = vec3.fromValues(0, 0, 0);
let modelMatrix = mat4.create(); // identity matrix
// set the uniform that defines whether we are using spot.obj or a sphere mesh
gl.uniform1f(gl.getUniformLocation(program, "u_sphere"), useSphere);
// renders the scene whenever the view or model matrices change
let draw = () => {
gl.clearColor(0, 0, 1, 0.4);
gl.enable(gl.DEPTH_TEST);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// write the current view and model matrices
gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix);
gl.uniformMatrix4fv(u_ViewMatrix, false, mat4.lookAt(mat4.create(), eye, center, up));
// draw triangles
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
gl.drawElements(gl.TRIANGLES, mesh.triangles.length, gl.UNSIGNED_SHORT, 0);
//console.log(gl.getError());
}
draw(); // initial draw
// set up the mouse click and motion listeners
let dragging = false;
let mouseX, mouseY;
canvas.addEventListener("mousedown", (event) => {
dragging = true;
mouseX = event.pageX;
mouseY = event.pageY;
});
canvas.addEventListener("mouseup", () => {
dragging = false;
});
canvas.addEventListener("mousemove", (event) => {
if (!dragging) return;
const speed = 4;
const dx = speed * (event.pageX - mouseX) / canvas.width;
const dy = speed * (event.pageY - mouseY) / canvas.height;
modelMatrix = mat4.multiply(mat4.create(), mat4.fromZRotation(mat4.create(), dx), modelMatrix);
modelMatrix = mat4.multiply(mat4.create(), mat4.fromYRotation(mat4.create(), dy), modelMatrix);
draw();
mouseX = event.pageX;
mouseY = event.pageY;
});
canvas.addEventListener("wheel", (event) => {
eye[0] += 0.01 * event.deltaY;
draw();
});
})
}
</script>
</body>
</html>
© Philip Claude Caplan, 2023 (Last updated: 2023-11-07)