The methods we have used so far for calculating the color at a surface point are pretty limited. This is mostly related to the fact that our base model color, described by the diffuse reflection coefficient $k_m$, has been a single constant value for the entire surface. Our main goal for today is to retrieve a better value for $k_m$ at each point on the surface.
The main idea of texturing is to sample the pixels in an image to determine the properties (usually, a color) of a point on our surface. To distinguish the screen pixels (in our destination image) from this texturing image, the pixels in this texture image are called texels. We need the following ingredients to texture our surfaces:
Ingredient #1: We need an image to sample and paste, like the image at the top-right of these notes.
Ingredient #2: We need to associate our surface points with a "location" in this image, so we can look up the color. This is done using texture coordinates, which are 2d coordinates (since they are defined with respect to the reference image). We will denote these texture coordinates as $(u, v)$, with both coordinates ranging from $0$ to $1$ across the image.
The image below shows how a point on Spot's eye is associated with texture coordinates, which pick out a specific location in the texture image.
Ingredient #3: We need to determine how we will look up the texel values in the image. The texture coordinates on the surface will almost never align exactly with the center of a texel. Furthermore, we need to consider the relative size of a pixel (associated with our fragment) and the texels in the texture image. We'll revisit this concept later when we talk about rasterization - the framework we'll use (WebGL) will take care of this for us (but we need to tell it what to do). For now, let's assume we always look up the texel whose center is nearest to the requested texture coordinates.
As we have seen with our graphics programs, it's usually a good idea to break up our task into smaller pieces. Let's leave ingredient #2 aside for now and assume that we can calculate texture coordinates analytically. To do so, we'll assume our model is a sphere, just like we did in Chapter 3.
The surface of a sphere can be parametrized by two variables, which are actually angles. Let's assume the sphere is centered on the origin with radius $r$, so a point on the surface can be written as

$$x = r\sin\theta\cos\phi, \qquad y = r\sin\theta\sin\phi, \qquad z = r\cos\theta.$$

The range of $\theta$ (the polar angle, measured from the $+z$ axis) is $[0, \pi]$, and the range of $\phi$ (the azimuthal angle) is $[-\pi, \pi]$. However, we may wish to switch how these angles are defined (for example, measuring $\theta$ from the equator instead of the pole), depending on how the texture image is laid out. We can also go from a point on the surface back to the angles:

$$\theta = \arccos\left(\frac{z}{r}\right), \qquad \phi = \operatorname{atan2}(y, x).$$

Again, these might be different based on your problem setup. Note that the range of Math.atan2 is $[-\pi, \pi]$ and the range of the Math.acos function is $[0, \pi]$. One way to map these angles to a texel location $(s, t)$ is

$$s = \frac{\phi + \pi}{2\pi}\,\mathrm{width}, \qquad t = \frac{\theta}{\pi}\,\mathrm{height},$$

where width and height are the width and height of the input texture image (not the dimensions of the destination canvas). We can then use the getImageData function defined for the CanvasRenderingContext2D. For example, this might look like:
// 1. setup the texture image
let imgCanvas = document.getElementById("image-canvas"); // replace "image-canvas" with the "id" you assigned to the HTML Canvas
let imgContext = imgCanvas.getContext("2d", { willReadFrequently: true });
const img = document.getElementById("earth-texture"); // replace "earth-texture" with the "id" you assigned to your img element
imgCanvas.width = img.width;
imgCanvas.height = img.height;
imgContext.drawImage(img, 0, 0); // draw the image to the canvas so we can look it up
// 2. loop over pixels:
// 2a. determine intersection point as usual from ray-scene intersections
// 2b. calculate s, t from equations above (and round to integers), or use mesh information
s = ...
t = ...
// 2c. retrieve texture color from img
const colorData = imgContext.getImageData(s, t, 1, 1);
color[0] = colorData.data[0] / 255;
color[1] = colorData.data[1] / 255;
color[2] = colorData.data[2] / 255;
// assign 'color' to the pixel
// ...
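For step 2b, here is one way the missing s and t calculations could be filled in for a sphere centered on the origin - a rough sketch that assumes the intersection point is stored as an array [x, y, z] and uses the same angle conventions as the equations above (the helper name sphereTexel is made up for this illustration):

function sphereTexel(point, r, width, height) {
  // polar angle in [0, pi]; you may want to clamp point[2] / r to [-1, 1] against round-off
  const theta = Math.acos(point[2] / r);
  // azimuthal angle in [-pi, pi]
  const phi = Math.atan2(point[1], point[0]);
  let s = Math.floor(((phi + Math.PI) / (2 * Math.PI)) * width);
  let t = Math.floor((theta / Math.PI) * height);
  s = Math.min(s, width - 1); // clamp so phi = pi or theta = pi stays in bounds
  t = Math.min(t, height - 1);
  return [s, t];
}

The s and t returned by this helper could then be passed to getImageData as in step 2c.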
This method is a bit slow because we are calling getImageData for each ray-scene intersection, but it highlights the general concept of texturing. We can actually make this a bit more efficient by doing a single call to getImageData to retrieve all the texture image data (before the loop over the pixels), and then simply look up the color ourselves. This is how we'll design it in class:
class Texture {
  constructor() {
    // offscreen canvas used to read back the pixel data of the texture image
    this.canvas = document.createElement("canvas");
  }

  texImage2D(image) {
    let context = this.canvas.getContext("2d", {
      willReadFrequently: true,
    });
    this.width = image.width;
    this.height = image.height;
    this.canvas.width = this.width;
    this.canvas.height = this.height;
    context.drawImage(image, 0, 0);
    // save the RGBA texel values: a flat array of length 4 * width * height
    this.data = context.getImageData(0, 0, this.width, this.height).data;
  }

  texture2D(uv) {
    // nearest-texel lookup: convert (u, v) in [0, 1] to integer texel indices,
    // clamping so that u = 1 or v = 1 does not index past the last texel
    let us = Math.min(Math.floor(this.width * uv[0]), this.width - 1);
    let vs = Math.min(Math.floor(this.height * uv[1]), this.height - 1);
    const idx = this.width * vs + us; // row-major index of the texel
    let color = vec3.create();
    for (let j = 0; j < 3; j++) {
      color[j] = this.data[4 * idx + j] / 255; // normalize RGB to [0, 1]
    }
    return color;
  }
}
With this utility class, an example of setting up a texture would then be:
let img = document.getElementById("earth-image"); // or whatever image ID you want to use
let texture = new Texture();
texture.texImage2D(img);
This texture can be used to find the color (step 2c above):
const km = texture.texture2D(uv);
These functions might seem like they have strange names, but there is a very specific reason we are calling them texImage2D and texture2D: this is a preview of the WebGL functions we'll use in a few weeks.
For more general surfaces represented by a mesh, we don't have an analytic way of getting the texture coordinates. Instead, the texture coordinates are stored with the mesh itself - for example, in the vt lines in an OBJ file - and associated with the vertices of each triangle.
Let's re-interpret our texturing process at the level of a texel. In other words, we are mapping our texel colors to the final pixel. At just the right viewing distance, a texel will be exactly the size of a pixel, though this rarely happens. When the surface we are painting is very far away, each pixel we are processing can cover many texels (size of pixel > size of texel), a phenomenon known as minification. When we are closer to the surface, a single texel may cover many pixels (size of pixel < size of texel), which is known as magnification.
Minification (left) versus Magnification (right) (Interactive Computer Graphics, Angel & Schreiner, 2012)
In the demo below, try to zoom into the chess board (by scrolling the mouse). As we get closer, the effects of magnification start to become apparent, and we can see the "blockiness" (aliasing) in the final image. This is because we were initially using the nearest sample when retrieving a texel. If you change the dropdown to LINEAR, the magnification filter will use a weighted average of the surrounding texels. This is more expensive but will smooth out the blocky artifacts in the image.
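To make the idea concrete, here is a rough sketch of what a linear (bilinear) lookup could look like as an extra method on the Texture class defined above. This is only an illustration of the filter, not the WebGL implementation; the method name texture2DLinear and the half-texel offset convention are assumptions:

// a possible additional method for the Texture class above
texture2DLinear(uv) {
  // continuous texel coordinates, shifted so texel centers sit at integer + 0.5
  const x = this.width * uv[0] - 0.5;
  const y = this.height * uv[1] - 0.5;
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const fx = x - x0, fy = y - y0; // fractional position between the four nearest texels
  const texel = (i, j) => {
    // clamp to the image borders
    i = Math.min(Math.max(i, 0), this.width - 1);
    j = Math.min(Math.max(j, 0), this.height - 1);
    const idx = 4 * (this.width * j + i);
    return [this.data[idx] / 255, this.data[idx + 1] / 255, this.data[idx + 2] / 255];
  };
  const c00 = texel(x0, y0), c10 = texel(x0 + 1, y0);
  const c01 = texel(x0, y0 + 1), c11 = texel(x0 + 1, y0 + 1);
  let color = vec3.create();
  for (let j = 0; j < 3; j++) {
    const top = (1 - fx) * c00[j] + fx * c10[j];
    const bottom = (1 - fx) * c01[j] + fx * c11[j];
    color[j] = (1 - fy) * top + fy * bottom; // weighted average of the four texels
  }
  return color;
}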
The difference between "nearest" and "linear" filters in terms of the magnification problem is also demonstrated in the image below:
Now try scrolling away from the chess board, and notice that the colors don't look as patterned. If you rotate (click and drag), you should also start to see some "flickering" effects. We are now seeing the effects of minification: the texel lookup has many texels to pick from for a single pixel, and the one it picks appears somewhat arbitrary. Now change the dropdown to MIPMAP. The chess board should appear to have the correct pattern again, regardless of the distance or how you rotate the square. A mipmap is a minification filter in which a sequence of images is first generated by halving the width and height of the image at each level. Texels are then looked up at the appropriate level, which can, again, use either nearest or linear filters. One disadvantage of mipmaps, however, is that the resulting rendering can look a bit blurry.
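As a sketch of how such a sequence of images could be generated, the helper below repeatedly halves the image using an offscreen canvas and stores the pixel data of each level. The function name buildMipmap and the use of drawImage for the downscaling are assumptions for this illustration, not the WebGL mipmap API:

function buildMipmap(image) {
  const levels = [];
  let w = image.width, h = image.height;
  let src = image;
  while (w >= 1 && h >= 1) {
    const canvas = document.createElement("canvas");
    canvas.width = w;
    canvas.height = h;
    const ctx = canvas.getContext("2d", { willReadFrequently: true });
    ctx.drawImage(src, 0, 0, w, h); // downscale the previous level into this one
    levels.push(ctx.getImageData(0, 0, w, h).data);
    src = canvas; // the next level is built from this one
    w = Math.floor(w / 2);
    h = Math.floor(h / 2);
  }
  return levels; // levels[0] is the full-resolution image, each later level is half the size
}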
Texturing isn't restricted to looking up the color of the surface. We can look up other items that might go into our lighting model. For example, we might want to use an image to look up the normal vector at a point on the surface, or we might want to displace the surface by some amount defined in an image:
It's also possible to look up the specular coefficient (the shininess exponent) from an image, so that some parts of the surface appear shinier than others.
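Here is a rough sketch of the normal-map idea mentioned above, using the Texture class from earlier: the stored color components in [0, 1] are remapped to normal components in [-1, 1]. The function name lookupNormal is made up, and the resulting normal is expressed in whatever frame the map was authored in (often tangent space):

function lookupNormal(normalTexture, uv) {
  const c = normalTexture.texture2D(uv); // color stored in the normal map
  let n = vec3.fromValues(2 * c[0] - 1, 2 * c[1] - 1, 2 * c[2] - 1);
  vec3.normalize(n, n); // re-normalize after quantization
  return n;
}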
Other texturing methods are also possible, such as environment maps (looking up a background image, assuming the scene is enclosed in a cube or sphere) and projective texturing, whereby an input image (with known camera orientation and perspective settings) is projected onto our model.
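As a sketch of the environment-map idea for a spherical enclosure, a ray direction that misses the scene could be converted to angles and then to texture coordinates, reusing the sphere equations from earlier. The function name environmentColor and the angle conventions are assumptions:

function environmentColor(envTexture, dir) {
  const d = vec3.normalize(vec3.create(), dir);
  const theta = Math.acos(d[2]);      // polar angle in [0, pi]
  const phi = Math.atan2(d[1], d[0]); // azimuthal angle in [-pi, pi]
  const uv = [(phi + Math.PI) / (2 * Math.PI), theta / Math.PI];
  return envTexture.texture2D(uv); // look up the background color in the environment image
}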
Another method for determining the color at a surface point is called procedural texturing, which consists of defining an explicit function (procedure) to describe the relationship between the surface coordinates and the color. This can work using either the 3d surface coordinates or the 2d parametric description of the surface. Note that we already did a form of procedural texturing! Recall Lab 3 when we assigned a kmFunction for BB-8.
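As a reminder of what such a procedure can look like, here is a small checkerboard example in the spirit of the Lab 3 kmFunction, computed directly from the texture coordinates. The function name and the number of squares are made up for this illustration:

function checkerboard(uv, n = 8) {
  const i = Math.floor(n * uv[0]);
  const j = Math.floor(n * uv[1]);
  // alternate between two colors depending on the parity of the cell
  return (i + j) % 2 === 0 ? vec3.fromValues(0.9, 0.9, 0.9) : vec3.fromValues(0.1, 0.1, 0.1);
}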
Some of the more popular procedural texturing techniques involve Worley noise (left), Voronoi diagrams (middle) or Perlin noise (right).
A Voronoi pattern can be created by distributing some points (called sites, seeds or generators) on your surface and defining a color for each site. Then, for every ray-surface intersection point, find the closest site and use its color for your base model color ($k_m$). This would make for a great Final Rendering project :)
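A rough sketch of the closest-site lookup described above might look like the following, assuming the site positions (in texture coordinates) and their colors are given as arrays:

function voronoiColor(uv, sites, siteColors) {
  let best = 0;
  let bestDist = Infinity;
  for (let i = 0; i < sites.length; i++) {
    const dx = uv[0] - sites[i][0];
    const dy = uv[1] - sites[i][1];
    const d = dx * dx + dy * dy; // squared distance is enough for comparisons
    if (d < bestDist) {
      bestDist = d;
      best = i;
    }
  }
  return siteColors[best]; // km for this intersection point
}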
You'll implement the Perlin noise procedural texturing algorithm in the lab this week. The main idea is to calculate the contributions to the noise from a sequence of grids with random vectors defined at the vertices of the grids (this will be described in detail in the lab).
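As a preview (the lab will define the exact interface), here is a rough sketch of a single grid level of 2d gradient noise: random unit vectors are stored at the grid vertices, and the noise value at a point blends the dot products of those vectors with the offsets to the point. The helper names and the grid representation here are assumptions:

function makeGradientGrid(n) {
  // random unit gradient vectors at the vertices of an n x n grid, stored row by row
  const grad = [];
  for (let j = 0; j <= n; j++) {
    for (let i = 0; i <= n; i++) {
      const angle = 2 * Math.PI * Math.random();
      grad.push([Math.cos(angle), Math.sin(angle)]);
    }
  }
  return { n, grad };
}

function noise2d(grid, x, y) {
  // x, y are assumed to be in [0, n]
  const i = Math.min(Math.floor(x), grid.n - 1);
  const j = Math.min(Math.floor(y), grid.n - 1);
  const fx = x - i, fy = y - j;
  const fade = (t) => t * t * t * (t * (t * 6 - 15) + 10); // smooth interpolation weights
  const dot = (gi, gj, dx, dy) => {
    const g = grid.grad[gj * (grid.n + 1) + gi];
    return g[0] * dx + g[1] * dy;
  };
  // dot product of each corner gradient with the offset from that corner to (x, y)
  const n00 = dot(i, j, fx, fy);
  const n10 = dot(i + 1, j, fx - 1, fy);
  const n01 = dot(i, j + 1, fx, fy - 1);
  const n11 = dot(i + 1, j + 1, fx - 1, fy - 1);
  const u = fade(fx), v = fade(fy);
  const nx0 = n00 + u * (n10 - n00);
  const nx1 = n01 + u * (n11 - n01);
  return nx0 + v * (nx1 - nx0); // roughly in [-1, 1]; remap to [0, 1] to use as a color
}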