Shadertoy Project
Part 1: due Friday 11/14 (final) on Canvas here
Part 2: due Friday 12/5 (final) on Canvas here
This project contains two parts, which will both be graded based on effort/completion. The main goal of this assignment is to explore how the concepts in our course can be applied creatively. By the end of the project, you will:
- understand a Shadertoy (Part 1)
- extend a Shadertoy (Part 2)
This entire assignment is equivalent (in grade) to 1 lab. Completing Part 1 is sufficient for an R (Revise), and then completing Part 2 is sufficient for a C (Complete).
In the first two-thirds of the course, we learned about ray tracing. After that, we learned about rasterization and implemented shaders to process (1) geometry in a vertex shader and (2) pixels in a fragment shader. Shadertoy will be a blend of the concepts we have learned. In Shadertoy, we write fragment shaders using GLSL. The scenes we render, however, are either described procedurally or by uploading data in a texture.
First, start by creating an account here: https://www.shadertoy.com/
Then click "Browse" and enter keywords for some applications you want to investigate. You can pick any shader you want as long as it contains at least 1 element we have covered in the course (see the calendar for a list of topics). Once you find a Shadertoy, click the fork button, which will "fork" (copy) the shader to your account. Send me an email if you're not sure whether the shader is appropriate for this assignment.
How does Shadertoy work?
Shadertoy processes each fragment (pixel) through the customizable mainImage function (defined in the Image tab). Each invocation of the shader is provided an input fragCoord, which holds the pixel coordinates (similar to our i and j pixel indices when we did ray tracing). The iResolution uniform (a vec3) stores the number of pixels in the horizontal and vertical directions in its .xy components (i.e. nx and ny in our notation). It's then up to us to determine the color of this pixel and assign it to the output fragColor (a vec4). This is similar to how we implemented custom rendering techniques within the pixel loops of the labs.
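The flow described above can be sketched with a minimal mainImage that colors each pixel by its normalized coordinates (a hypothetical starter, not a course shader):

```glsl
// Minimal Shadertoy fragment shader: color each pixel by its
// normalized coordinates. fragCoord is in pixels; dividing by
// iResolution.xy maps it to [0, 1] in each direction.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;      // like (i/nx, j/ny)
    fragColor = vec4(uv.x, uv.y, 0.5, 1.0);    // red/green gradients
}
```

Pasting this into the Image tab of a new shader produces a simple color gradient, which is a handy way to confirm how fragCoord and iResolution relate.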
For more information, see this documentation.
A lot of Shadertoys use a slightly different rendering technique called "Ray Marching." This is similar to ray tracing in the way camera rays are set up, but differs in the way intersections are calculated.
Some geometries can be described by a "signed distance function" (SDF). This is a function that is < 0 if a point is inside a surface, > 0 if a point is outside a surface, and zero if exactly on the surface. A very brute-force way to ray-march is to take mini steps along the ray (starting from the eye/camera) until your SDF evaluates to (approximately) zero, meaning you have hit the surface.
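The brute-force marching loop described above could be sketched like this (sdScene is an assumed SDF for the whole scene; the step size, tolerance, and far limit are illustrative choices):

```glsl
// Brute-force ray march: step along the ray in small fixed increments
// until the scene SDF is (nearly) zero. Returns the distance t to the
// hit, or -1.0 if the ray escapes. sdScene is a hypothetical scene SDF.
float rayMarch(vec3 ro, vec3 rd)
{
    float t = 0.0;
    for (int i = 0; i < 1000; i++) {
        vec3 p = ro + t * rd;       // current point along the ray
        float d = sdScene(p);       // signed distance to nearest surface
        if (d < 0.001) return t;    // close enough: count it as a hit
        t += 0.01;                  // fixed mini step (the brute force part)
        if (t > 100.0) break;       // give up past the far limit
    }
    return -1.0;                    // no intersection found
}
```

In practice, most Shadertoys step by the SDF value itself (t += d, a technique called sphere tracing), since the SDF guarantees no surface lies closer than d, making that the largest safe step.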
The advantage of this is that complex shapes can be described by combining the SDFs of simpler shapes. For example, the union of two shapes is the minimum of their individual sd functions, e.g. sdSphere(p, c, R), which would be length(p - c) - R for a sphere centered at c with radius R.
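A sketch of combining primitives this way (the box formula is the standard exact box SDF; the function names are illustrative, not from any particular course shader):

```glsl
// Sphere SDF: negative inside, positive outside, zero on the surface.
float sdSphere(vec3 p, vec3 c, float R)
{
    return length(p - c) - R;
}

// Axis-aligned box SDF with half-extents b, centered at the origin.
float sdBox(vec3 p, vec3 b)
{
    vec3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// Union of two shapes: the nearest surface wins, so take the min.
float sdScene(vec3 p)
{
    return min(sdSphere(p, vec3(0.0, 1.0, 0.0), 1.0),
               sdBox(p, vec3(0.75)));
}
```

Other combinations follow the same pattern: max gives the intersection of two shapes, and max(a, -b) subtracts shape b from shape a.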
Once intersections are calculated, surfaces can be shaded similarly to what we've been doing in our course.
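Shading models like Phong need a surface normal, and with SDFs this is commonly estimated as the normalized gradient of the distance function via central differences (sdScene is again an assumed scene SDF, and h is an illustrative offset):

```glsl
// Estimate the surface normal at p as the normalized gradient of the
// SDF, sampled with central differences along each axis.
// sdScene is a hypothetical scene SDF.
vec3 estimateNormal(vec3 p)
{
    const float h = 0.0005;   // small finite-difference offset
    vec2 e = vec2(h, 0.0);
    return normalize(vec3(
        sdScene(p + e.xyy) - sdScene(p - e.xyy),
        sdScene(p + e.yxy) - sdScene(p - e.yxy),
        sdScene(p + e.yyx) - sdScene(p - e.yyx)));
}
```

With the hit point and normal in hand, the diffuse and specular terms work exactly as they did in our ray tracer.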
Part 1: Understanding a Shadertoy using AI
In this part, you will use a generative AI tool to understand what's going on in the shader you picked. You can use any AI you like, but note that you have access to Microsoft Copilot and Google Gemini through your Middlebury account.
Your submission for Part 1 will consist of two components (submitted on Canvas here):
Start by copying the contents of your Shadertoy to a .txt file. You'll attach this to the prompt in the AI tool instead of copying the code (there may be a character limit for the prompt, but attaching a .txt file should work).
Here are some prompts I would suggest:
Prompt 1: Attached is a Shadertoy. I'd like to reverse engineer what it's doing. Can you provide a high-level description of what it's doing please?
Prompt 2: Thank you, here is a list of topics I covered in my computer graphics class when we talked about ray tracing: camera setup, transformations, acceleration structures, texturing (from an image or procedurally like Perlin noise), recursive ray tracing, Phong reflection model, shadows, subsurface scattering, refraction. Can you help me relate these topics to what is done in this Shadertoy please?
Now you should extract which functions are used to render various components in the shader. Here are some suggested prompts:
Continue asking questions to understand which functions are used to render the scene so you can annotate them onto the screenshot of your Shadertoy.
Here is an example of an annotated image using this Shadertoy (which means you'll need to pick a different one).
Don't worry about capturing all the details. Just try to annotate the main functions/components that are used to render the image. You can either annotate the frame digitally or you can annotate a hand-drawn frame of the Shadertoy.
Part 2: Extending your Shadertoy
Again, I would recommend using AI to help you with this part. First, think about an extension you want to add. For example, maybe you'd like to add another geometry, or change how the texturing is done. Or perhaps you want to add mouse controls to interact with the scene or make some components animate.
There are no requirements on the complexity of the feature you implement (i.e. in terms of lines of code) but you cannot simply change a color or a parameter that defines the geometry. Your feature should at least involve adding some new lines of code.
For example, the shader linked above was extended by changing the background to use a noise-like feature to create clouds (which also animate from left-to-right) and to add a terrain:
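An animation like that left-to-right cloud drift typically keys off Shadertoy's built-in iTime uniform (seconds since the shader started). A hedged sketch, where noise2 stands in for whatever 2D noise function the shader already defines (it is not a Shadertoy built-in):

```glsl
// Hypothetical extension: sample a noise-based cloud pattern at a
// lookup point that drifts with time, so the clouds scroll
// left-to-right. noise2 is an assumed 2D noise helper.
float clouds(vec2 uv)
{
    vec2 q = uv + vec2(0.05 * iTime, 0.0);  // drift the lookup over time
    return noise2(4.0 * q);                 // cloud density in [0, 1]
}
```

The same idea (offsetting or rotating coordinates by a function of iTime) is the simplest route to animating almost any component of a shader.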
Part 1: complete this Canvas quiz by Friday November 14th.
Part 2: complete this Canvas quiz by Friday December 5th.