Shadertoy Project

Part 1: due Friday 11/14 (final) on Canvas here
Part 2: due Friday 12/5 (final) on Canvas here

Goals

This project contains two parts, both of which will be graded based on effort/completion. The main goal of this assignment is to explore how the concepts in our course can be applied creatively. By the end of the project, you will:

This entire assignment is equivalent (in grade) to 1 lab. Completing Part 1 is sufficient for an R (Revise); completing Part 2 as well is sufficient for a C (Complete).

Getting started with Shadertoy

In the first two-thirds of the course, we learned about ray tracing. After that, we learned about rasterization and implemented shaders to process (1) geometry in a vertex shader and (2) pixels in a fragment shader. Shadertoy is a blend of the concepts we have learned. In Shadertoy, we write fragment shaders using GLSL. The scenes we render, however, are described either procedurally or by uploading data in a texture.

First, start by creating an account here: https://www.shadertoy.com/

Then click "Browse" and enter keywords for some applications you want to investigate. You can pick any shader you want as long as it contains at least one element we have covered in the course (see the calendar for a list of topics). Once you find a Shadertoy, click on the button that looks like this, which will "fork" the shader to your account. Send me an email if you're not sure whether the shader is appropriate for this assignment.

How does Shadertoy work?

Shadertoy processes each fragment (pixel) through the customizable mainImage function (defined in the Image tab). Each invocation of the shader is provided an input fragCoord, which holds the pixel coordinates (similar to our i and j pixel indices when we did ray tracing). The iResolution uniform (a vec3, of which we typically use only the .xy components) holds the number of pixels in the horizontal and vertical directions respectively (i.e. nx and ny in our notation). It's then up to us to determine the color of this pixel and assign it to the output fragColor (a vec4). This is similar to how we implemented custom rendering techniques within the pixel loops of the labs.
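
For reference, here is a minimal sketch of what the Image tab might contain (the variable name uv and the gradient are my own; fragCoord, iResolution, and fragColor are the built-ins described above):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalize pixel coordinates to [0,1] (like i/nx and j/ny in our ray tracer).
    vec2 uv = fragCoord / iResolution.xy;

    // Determine a color for this pixel; here, a simple gradient over the screen.
    vec3 color = vec3(uv.x, uv.y, 0.5);

    // Write the final color (the alpha component is typically 1).
    fragColor = vec4(color, 1.0);
}
```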

For more information, see this documentation.

Ray Marching using a Signed Distance Function (SDF)

A lot of Shadertoys use a slightly different rendering technique called "Ray Marching." This is similar to ray tracing in the way camera rays are set up, but differs in the way intersections are calculated.

Some geometries can be described by a "signed distance function" (SDF): a function that is < 0 if a point is inside a surface, > 0 if a point is outside a surface, and zero if the point is exactly on the surface. A brute-force way to ray-march is to take small, fixed steps along the ray (starting from the eye/camera) until the SDF evaluates to $\approx 0$, at which point you have found an intersection. A better way is to advance along the ray by a distance equal to the current SDF value. This is sometimes called Sphere Tracing because we are effectively advancing spheres towards the closest surface until the sphere radius (which is the SDF value) is close to zero:

(Sphere tracing illustration from Wikipedia.)
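
In code, sphere tracing might look like the following sketch, where ro and rd are the camera ray's origin and direction, and sceneSDF is a hypothetical function returning the signed distance from a point to the closest surface in the scene (an example is given below):

```glsl
// March along the ray ro + t * rd, advancing by the SDF value each step.
// Returns the distance t to the first intersection, or -1.0 on a miss.
float sphereTrace( vec3 ro, vec3 rd )
{
    float t = 0.0;
    for (int i = 0; i < 100; i++) {  // cap the number of steps
        vec3  p = ro + t * rd;       // current point along the ray
        float d = sceneSDF(p);       // radius of the largest "safe" sphere
        if (d < 1e-4) return t;      // close enough to the surface: a hit
        t += d;                      // safe to advance by the SDF value
        if (t > 100.0) break;        // give up beyond some maximum distance
    }
    return -1.0;
}
```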

The advantage of this approach is that complex shapes can be described by combining SDFs of simpler shapes. For example, the union of two shapes $s_1$ and $s_2$ with corresponding SDFs $d_{s_1}(\vec{p})$ and $d_{s_2}(\vec{p})$ can be represented with the SDF $d_{s_1 \cup s_2}(\vec{p}) = \min(d_{s_1}(\vec{p}), d_{s_2}(\vec{p}))$. Functions for SDFs are often prefixed with sd, e.g. sdSphere(p, c, R), which would be $\lVert \vec{p} - \vec{c} \rVert - R$ for some arbitrary point $\vec{p}$ measured from a sphere of radius $R$ centered at $\vec{c}$.
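
As a sketch (the centers and radii below are arbitrary, and sceneSDF is my own name for the scene's combined SDF), these two ideas might be written as:

```glsl
// Signed distance from point p to a sphere of radius R centered at c.
float sdSphere( vec3 p, vec3 c, float R )
{
    return length(p - c) - R;  // ||p - c|| - R
}

// The union of two spheres: take the minimum of the two distances.
float sceneSDF( vec3 p )
{
    float d1 = sdSphere(p, vec3(-0.5, 0.0, 0.0), 0.4);
    float d2 = sdSphere(p, vec3( 0.5, 0.0, 0.0), 0.4);
    return min(d1, d2);
}
```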

Once intersections are calculated, surfaces can be shaded similarly to what we've been doing in our course.
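
One common trick (a sketch, not necessarily what your chosen shader does) is to estimate the surface normal from the gradient of the SDF using central differences, then apply a Lambertian (diffuse) term as in our labs:

```glsl
// Estimate the surface normal at p as the normalized gradient of the SDF.
vec3 estimateNormal( vec3 p )
{
    const float h = 1e-3;  // small step for the finite differences
    return normalize(vec3(
        sceneSDF(p + vec3(h, 0.0, 0.0)) - sceneSDF(p - vec3(h, 0.0, 0.0)),
        sceneSDF(p + vec3(0.0, h, 0.0)) - sceneSDF(p - vec3(0.0, h, 0.0)),
        sceneSDF(p + vec3(0.0, 0.0, h)) - sceneSDF(p - vec3(0.0, 0.0, h))));
}

// Diffuse (Lambertian) shading at a hit point p.
vec3 shade( vec3 p, vec3 lightDir, vec3 albedo )
{
    vec3 n = estimateNormal(p);
    return albedo * max(dot(n, normalize(lightDir)), 0.0);
}
```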

Part 1: Reverse engineer a Shadertoy using AI

In this part, you will use a generative AI tool to understand what's going on in the shader you picked. You can use any AI you like, but note that you have access to Microsoft Copilot and Google Gemini using your Middlebury account.

Your submission for Part 1 will consist of two components (submitted on Canvas here):

Start by copying the contents of your Shadertoy to a .txt file. You'll attach this file to the prompt in the AI tool instead of pasting the code directly (there may be a character limit on the prompt, but attaching a .txt file should work).

Here are some prompts I would suggest:

Now you should extract which functions are used to render various components in the shader. Here are some suggested prompts:

Continue asking questions to understand which functions are used to render the scene so you can annotate them onto the screenshot of your Shadertoy.

Here is an example of an annotated image using this Shadertoy (which means you'll need to pick a different one).
Don't worry about capturing all the details. Just try to annotate the main functions/components that are used to render the image. You can either annotate the frame digitally or you can annotate a hand-drawn frame of the Shadertoy.

Part 2: Add a feature to your forked Shadertoy

Again, I would recommend using AI to help you with this part. First, think about an extension you want to add. For example, maybe you'd like to add another geometry, or change how the texturing is done. Or perhaps you want to add mouse controls to interact with the scene or make some components animate.

There are no requirements on the complexity of the feature you implement (i.e. in terms of lines of code), but you cannot simply change a color or a parameter that defines the geometry. Your feature should at least involve adding some new lines of code.
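
As a sketch of the kind of small feature that would qualify, the hypothetical sceneSDF from earlier could be modified using Shadertoy's built-in iTime (elapsed seconds) and iMouse (mouse position in pixels) uniforms to make a sphere animate and follow the mouse:

```glsl
// Bob a sphere up and down over time, and shift it horizontally with the mouse.
float sceneSDF( vec3 p )
{
    vec2 mouse  = iMouse.xy / iResolution.xy;  // normalize mouse to [0,1]
    vec3 center = vec3(mouse.x - 0.5,          // horizontal: follow the mouse
                       0.3 * sin(iTime),       // vertical: animate over time
                       0.0);
    return sdSphere(p, center, 0.4);
}
```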

For example, the shader linked above was extended by changing the background to use a noise-like feature to create clouds (which also animate from left to right) and by adding a terrain.

Submission

Part 1: complete this Canvas quiz by Friday November 14th.
Part 2: complete this Canvas quiz by Friday December 5th.


© Philip Caplan, 2025