5.1 Introduction
A shader is one of the main components of WebGL, and it is also one of the main reasons why native WebGL is so hard. A shader is a program, written in GLSL, that is sent to the GPU. Shaders are used to position each vertex of a geometry and to colorize each visible pixel of that geometry. The term "pixel" is not accurate, because each point in the render doesn't necessarily match a pixel of the screen, which is why the term "fragment" is used. We send a lot of data to the shader, such as the vertex coordinates, the mesh transformation, information about the camera and its field of view, and parameters like the color, the textures, the lights, the fog, etc.
The GPU then processes all of this data following the shader instructions, and our geometry appears in the render. There are two types of shaders: the vertex shader and the fragment shader. [3]
5.2 Vertex shader
The vertex shader’s purpose is to position the vertices of the geometry. The idea is to send the vertices’ positions, the mesh transformations (like its position, rotation, and scale), and the camera information (like its position, rotation, and field of view).
Then, the GPU will follow the instructions in the vertex shader to process all of this information in order to project the vertices on a 2D space that will become the render.
When using a vertex shader, its code will be applied to every vertex of the geometry.
Some data, like the vertex position, changes between vertices. This type of data, the one that changes between vertices, is called an attribute. Other data does not need to change between vertices, like the position of the mesh: the location of the mesh affects all the vertices, but in the same way. This type of data, the one that doesn't change between vertices, is called a uniform. The vertex shader runs first. Once the vertices are placed, the GPU knows which fragments of the geometry are visible and can proceed to the fragment shader.
The gl_Position variable already exists in the vertex shader; we only need to assign it.
This variable will contain the position of the vertex on the screen. The goal of the instructions in the main function is to set this variable properly.
Figure 5.1: Code of the vertex shader used for the window light in the scene.

When setting the values, we do not truly move the plane in 3D space as if we were changing its position in Three.js. We only move the projected plane in a 2D space. We need 4 values for gl_Position because the coordinates are not precisely in 2D space; they are in what is called clip space, which requires 4 dimensions.
Clip space is a space that goes in all 3 directions (x, y, and z) in a range from -1 to +1. It's like positioning everything in a 3D box. Anything outside of this range will be "clipped" and disappear. The fourth value (w) is responsible for the perspective. All of this is done automatically. The same code applies to every vertex of the geometry.
Attributes are the only variables that change between vertices. The same vertex shader is applied to each vertex, and the position attribute contains the x, y, and z coordinates of that specific vertex. This vec3 is then converted to a vec4.
Each matrix will transform the position until we get the final clip space coordinates.
There are 3 matrices in our code, and because their values are the same for all the vertices of the geometry, we retrieve them by using uniforms.
Each matrix will do a part of the transformation:
• The modelMatrix will apply all transformations relative to the Mesh. If we scale, rotate, or move the Mesh, these transformations will be contained in the modelMatrix and applied to the position.
• The viewMatrix will apply transformations relative to the camera. If we rotate the camera to the left, the vertices should appear to move to the right. If we move the camera towards the Mesh, the vertices should get bigger, etc.
• The projectionMatrix will finally transform our coordinates into the final clip space coordinates, as sketched in the code below.
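The vertex shader of Figure 5.1 is not reproduced here, but a minimal sketch of the idea, assuming the variables that Three.js automatically injects into a ShaderMaterial (position, modelMatrix, viewMatrix, projectionMatrix), could look like this:

    // GLSL vertex shader as a JavaScript template string (illustrative sketch)
    const vertexShader = `
        void main()
        {
            // apply the Mesh transformations (position, rotation, scale)
            vec4 modelPosition = modelMatrix * vec4(position, 1.0);

            // apply the camera transformations
            vec4 viewPosition = viewMatrix * modelPosition;

            // project into clip space
            gl_Position = projectionMatrix * viewPosition;
        }
    `

Applying the matrices one after the other keeps each step of the transformation readable, instead of multiplying all three matrices in a single line.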
5.3 Fragment shader
The fragment shader’s purpose is to color each visible fragment of the geometry.
The same fragment shader will be used for every visible fragment of the geometry.
We can send data to it like a color by using uniforms, just like the vertex shader, or we can send data from the vertex shader to the fragment shader. We call this type of data, the one that comes from the vertex shader to the fragment shader, varying.
The most straightforward instruction in a fragment shader is to color all the fragments with the same color; we would then get the equivalent of MeshBasicMaterial with only its color property set. Or we can send more data to the shader, for instance a light position; we can then color the fragments according to how much the face is oriented towards the light source. We would get the MeshPhongMaterial equivalent if we had one light in the scene.
Figure 5.2: Code of fragment shader used in the window light of the scene.
The gl_FragColor variable is like gl_Position but for the color. It is already declared, and we need to assign it in the main function. It's a vec4 whose first three values are the red, green, and blue channels (r, g, b) and whose fourth value is the alpha (a).
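As a minimal sketch (not the exact code of Figure 5.2), a fragment shader that colors every fragment with the same flat color could be written as follows; the color values are purely illustrative:

    // GLSL fragment shader as a template string (illustrative sketch)
    const flatColorFragmentShader = `
        void main()
        {
            // red, green, blue, alpha
            gl_FragColor = vec4(0.9, 0.6, 0.3, 1.0);
        }
    `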
Attributes are values that change between each vertex. We already have one attribute named position that contains a vec3 of the coordinates of each vertex. There is a way of sending data from the vertex shader to the fragment shader called varyings.
Uniforms are a way to send data from JavaScript to the shader. That can be valuable if we want to use the same shader with different parameters, and it also lets us have parameters that change during the experience. We can use uniforms with both vertex and fragment shaders, and the data will be the same for every vertex and every fragment. We already have uniforms in our code with projectionMatrix, viewMatrix, and modelMatrix, but we didn't create these because Three.js does that automatically. The fragment shader code has a uniform passed in from the JavaScript, which is stored in the variable uColor. uColor contains the RGB values used to paint the window light in the scene.
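A minimal sketch of how such a uniform could be wired up on the JavaScript side with a THREE.ShaderMaterial; the uniform name uColor comes from the text, while the color value and the shader string variables are illustrative:

    import * as THREE from 'three'

    const windowLightMaterial = new THREE.ShaderMaterial({
        vertexShader,        // the vertex shader string (Figure 5.1)
        fragmentShader,      // the fragment shader string (Figure 5.2)
        uniforms: {
            // uColor holds the RGB values used to paint the window light
            uColor: { value: new THREE.Color('#ffd9a0') }   // illustrative color
        }
    })

Inside the fragment shader, the uniform is then declared with uniform vec3 uColor; and written out with gl_FragColor = vec4(uColor, 1.0).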
The figure below showcases the result.
Figure 5.3: Mimicking the effect of light-emitting through the window using shaders.
5.4 Particles
Particles are precisely what you expect from that name. They are very popular and can be used to achieve various effects such as stars, smoke, rain, dust, fire, and many other things. The good thing with particles is that we can have hundreds of thousands of them on screen with a reasonable frame rate. The downside is that each particle is composed of a plane (two triangles) always facing the camera. Creating particles is as simple as making a Mesh: we need a BufferGeometry, a material that can handle particles, and, instead of producing a Mesh, we create a Points object. Each vertex of the geometry will become a particle. The special material is called PointsMaterial and has multiple properties specific to particles, like size to control the size of all particles and sizeAttenuation to specify whether distant particles should be smaller than close ones.
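A minimal sketch of this workflow, assuming a scene object already exists; the particle count, the random positions, and the material values are illustrative:

    import * as THREE from 'three'

    // geometry: one vertex per particle
    const particlesGeometry = new THREE.BufferGeometry()
    const count = 500
    const positions = new Float32Array(count * 3)
    for (let i = 0; i < count * 3; i++) {
        positions[i] = (Math.random() - 0.5) * 10
    }
    particlesGeometry.setAttribute('position', new THREE.BufferAttribute(positions, 3))

    // material that can handle particles
    const particlesMaterial = new THREE.PointsMaterial({
        size: 0.1,              // size of every particle
        sizeAttenuation: true   // distant particles appear smaller
    })

    // Points instead of Mesh
    const particles = new THREE.Points(particlesGeometry, particlesMaterial)
    scene.add(particles)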
Figure 5.4: Fireflies geometry code
The fireflies’ geometry was a custom geometry using THREE.BufferGeometry().
The position of each firefly requires x, y, and z coordinates, so in the Float32Array the number of fireflies is multiplied by 3 so that the array can store all the coordinates. In the for loop, I used Math.random() to generate the x, y, and z coordinates and stored them in the array. The position and scale of the fireflies are custom attributes, so setAttribute was used to apply these properties.
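A sketch of that geometry code, in the spirit of Figure 5.4; the firefly count and the ranges of the random coordinates are illustrative, and aScale is the per-firefly scale attribute mentioned above:

    import * as THREE from 'three'

    const firefliesGeometry = new THREE.BufferGeometry()
    const firefliesCount = 30
    const positionArray = new Float32Array(firefliesCount * 3)   // x, y, z per firefly
    const scaleArray = new Float32Array(firefliesCount)          // one scale per firefly

    for (let i = 0; i < firefliesCount; i++) {
        positionArray[i * 3 + 0] = (Math.random() - 0.5) * 4
        positionArray[i * 3 + 1] = Math.random() * 1.5
        positionArray[i * 3 + 2] = (Math.random() - 0.5) * 4
        scaleArray[i] = Math.random()
    }

    firefliesGeometry.setAttribute('position', new THREE.BufferAttribute(positionArray, 3))
    firefliesGeometry.setAttribute('aScale', new THREE.BufferAttribute(scaleArray, 1))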
Figure 5.5: Fireflies materials code
The above code shows the uniforms passed into the vertex shader. uTime is used to perform positional animation of the fireflies, uPixelRatio makes sure the fireflies remain consistent regardless of the display they are rendered on, and uSize simply adjusts the size of the fireflies. When fireflies are behind one another, the light needs to pass through from the one behind to the one in front. The effect needs to be additive, therefore the blending mode is set to AdditiveBlending. In order to see this effect, transparent is set to true. depthWrite is set to false because writing to the depth buffer causes a clipping effect when particles are behind one another.
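A sketch of that material setup, assuming the shader strings of Figures 5.6 and 5.7 are available as firefliesVertexShader and firefliesFragmentShader; the uSize value is illustrative:

    import * as THREE from 'three'

    const firefliesMaterial = new THREE.ShaderMaterial({
        uniforms: {
            uTime: { value: 0 },
            uPixelRatio: { value: Math.min(window.devicePixelRatio, 2) },
            uSize: { value: 100 }
        },
        vertexShader: firefliesVertexShader,
        fragmentShader: firefliesFragmentShader,
        transparent: true,                  // needed to see the additive glow
        blending: THREE.AdditiveBlending,   // light accumulates where fireflies overlap
        depthWrite: false                   // avoid clipping between overlapping particles
    })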
Figure 5.6: Vertex shader code for the fireflies
There are 3 uniforms being passed and 1 attribute. Inside the main function, uTime is multiplied with the modelPosition inside a sin function to create a floating animation. This value is then multiplied with aScale to reduce the amplitude of the fireflies' movement: the smaller fireflies move less while the bigger ones move more. gl_PointSize simply controls the size of the fireflies, and to activate the size attenuation the last formula was used.
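A sketch of a vertex shader along those lines; the exact constants and the way uTime and the model position are combined inside the sin are illustrative:

    // GLSL vertex shader for the fireflies (illustrative sketch)
    const firefliesVertexShader = `
        uniform float uTime;
        uniform float uPixelRatio;
        uniform float uSize;

        attribute float aScale;

        void main()
        {
            vec4 modelPosition = modelMatrix * vec4(position, 1.0);

            // floating animation, scaled by aScale so small fireflies move less
            modelPosition.y += sin(uTime * modelPosition.x) * aScale * 0.2;

            vec4 viewPosition = viewMatrix * modelPosition;
            gl_Position = projectionMatrix * viewPosition;

            // point size, then size attenuation based on distance to the camera
            gl_PointSize = uSize * aScale * uPixelRatio;
            gl_PointSize *= (1.0 / - viewPosition.z);
        }
    `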
Figure 5.7: Fragment shader code for the fireflies
The distanceToCenter variable holds the distance between gl_PointCoord and vec2(0.5), i.e. the distance of the fragment from the center of the point. This value is used to calculate the alpha value, which is then passed into gl_FragColor to apply colors to the fireflies.
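A sketch of that fragment shader; the falloff formula used to derive the alpha from distanceToCenter is illustrative:

    // GLSL fragment shader for the fireflies (illustrative sketch)
    const firefliesFragmentShader = `
        void main()
        {
            // distance from this fragment to the center of the point
            float distanceToCenter = distance(gl_PointCoord, vec2(0.5));

            // bright in the middle, fading quickly towards the edge
            float alpha = 0.05 / distanceToCenter - 0.1;

            gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);
        }
    `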
5.5 Perlin Noise - Water and Portal
The Perlin noise algorithm is an Oscar-winning noise algorithm created for the movie "Tron" by Ken Perlin during the 1980s. Perlin noise is instrumental in recreating natural shapes like clouds, water, fire, and terrain elevation, but it can also be used to animate grass or snow moving in the wind. There are many Perlin noise algorithms with different results and different dimensions (2D, 3D, and even 4D); some repeat themselves, others are more performant, etc. The noise produces random values that are related to their neighbouring values, and this relationship is what gives the result its smoothness. We enter two float values, usually x and y, into the algorithm, and a float value is returned, which is then used in the shaders. [2]
The vertex shader code for the water is similar to the house light shader code. There is a varying vec2 variable (vUv) which allows the vertex shader to pass values of the UV coordinates into the fragment shader.
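A sketch of such a vertex shader, where the built-in uv attribute is copied into the vUv varying so the fragment shader can read it:

    // GLSL vertex shader for the water (illustrative sketch)
    const waterVertexShader = `
        varying vec2 vUv;

        void main()
        {
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

            // pass the UV coordinates to the fragment shader
            vUv = uv;
        }
    `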
Figure 5.8: Vertex code for the water
Figure 5.9: Fragment code for the water
For the water, the Perlin noise algorithm is used to form patterns. The strength variable contains a float value calculated by the cnoise function of the Perlin noise implementation. It takes in two values, and the returned float is used as the alpha when mixing the colors. The mixed value is then placed in gl_FragColor to apply the colors in the render.
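A sketch of that fragment shader, assuming a Perlin noise function cnoise is pasted above main() and that the two colors arrive as uniforms; the uniform names and the noise frequency here are illustrative:

    // GLSL fragment shader for the water (illustrative sketch)
    const waterFragmentShader = `
        uniform vec3 uDepthColor;     // illustrative uniform names
        uniform vec3 uSurfaceColor;

        varying vec2 vUv;

        // float cnoise(vec2 P) { ... }  -- Perlin noise implementation pasted here

        void main()
        {
            // noise pattern based on the UV coordinates
            float strength = cnoise(vUv * 10.0);

            // use the noise as the mix alpha between the two colors
            vec3 color = mix(uDepthColor, uSurfaceColor, strength);
            gl_FragColor = vec4(color, 1.0);
        }
    `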
The vertex shader for the Portal is the same as for the water, but the fragment shader is written differently. For the Portal, the 3D Perlin noise was used just like for the water, but the patterns are drastically different.
The UV coordinates passed from the vertex shader are used to displace the original UV coordinates, together with the noise produced by the Perlin noise algorithm. The result is multiplied by uTime to allow animation at any desired speed. The effect is enhanced further by adding another Perlin noise to the displaced value. To create the outer glow, the distance from the center is calculated, and the desired gradient is pushed towards the edges by multiplying and offsetting it with satisfactory values.
Then we clamp the values between 0.0 and 1.0 to ensure that the edges are completely white. A step() function is also applied to the value of the strength. The first parameter of the step() function is a limit (also called edge): when the value of the second parameter is above this limit, we get 1.0, and when it is below this limit, we get 0.0. Instead of replacing the strength with that step() result, we add it to the initial strength, and we can multiply it by a lower value to dim the step effect a little. The two uniforms uColorStart and uColorEnd are mixed into a final color variable using the newly calculated strength. The final color is used to render the colors of the pattern in the scene.
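A sketch of a portal fragment shader following those steps, assuming a 3D Perlin noise function cnoise(vec3) is defined above main(); all constants are illustrative:

    // GLSL fragment shader for the portal (illustrative sketch)
    const portalFragmentShader = `
        uniform float uTime;
        uniform vec3 uColorStart;
        uniform vec3 uColorEnd;

        varying vec2 vUv;

        // float cnoise(vec3 P) { ... }  -- 3D Perlin noise implementation pasted here

        void main()
        {
            // displace the UV coordinates with a first, animated noise
            vec2 displacedUv = vUv + cnoise(vec3(vUv * 7.0, uTime * 0.1));

            // second noise on the displaced coordinates
            float strength = cnoise(vec3(displacedUv * 5.0, uTime * 0.2));

            // outer glow: push the gradient towards the edges
            float outerGlow = distance(vUv, vec2(0.5)) * 5.0 - 1.4;
            strength += outerGlow;

            // sharpen the pattern with step(), added (dimmed) to the initial strength
            strength += step(-0.2, strength) * 0.8;

            // clamp so the edges stay completely white
            strength = clamp(strength, 0.0, 1.0);

            vec3 color = mix(uColorStart, uColorEnd, strength);
            gl_FragColor = vec4(color, 1.0);
        }
    `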
Figure 5.10: Fragment shader code for the Portal.

5.6 Debugging Tool

An essential aspect of every creative project is making it easy to debug and to tweak the code. The developer (in this case, me) and other actors working on the project (like designers or even the client) must be able to change as many parameters as possible. We have to take this into account so that they can find the perfect color, speed, quantity, etc. for the best experience. There might even be unexpected results that look great. For this project, I used lil-gui to enable the tweaking process. To add an element to the panel, we use gui.add(). The first parameter is an object and the second parameter is the property of that object we want to tweak, and it has to be added after the concerned object has been created. When it comes to colors, we need to use addColor() instead of add(), because lil-gui cannot tell whether we want to tweak a text, a number, or a color just from the type of the property. A color picker then appears in the panel for the created object. The problem is that changing this color does not affect the material: it only changes the color property of the parameter variable, which is not used by the material. To fix that, we need lil-gui to alert us when the value changes. We can do that by chaining the onChange() method and updating the material color using material.color.set().
The set() method is very useful because of how many color formats it accepts, like '#ff0000', '#f00', 0xff0000, or even 'red'.
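A sketch of such a color tweak with lil-gui; the parameter object, the starting color, and the material variable are illustrative:

    import GUI from 'lil-gui'

    const gui = new GUI()

    const parameters = {
        color: '#ffd9a0'   // illustrative starting value
    }

    gui
        .addColor(parameters, 'color')
        .onChange(() =>
        {
            // keep the material in sync with the tweaked value
            material.color.set(parameters.color)
        })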
Figure 5.11: Code for tweaks - lil.gui
5.7 Hosting and Deployment
In order to share the project online, I had to host and deploy it. The project is hosted using Vercel. Vercel is one of those "modern" hosting solutions and features continuous integration (automation of testing, deployment, and other development steps). It is very developer-friendly and easy to set up. It can be used for complex projects but also for very simple "single page" websites. Other good alternatives worth mentioning are Netlify and GitHub Pages. Vercel links directly with the GitHub repository, which makes it very easy to update the page.
The service needs instructions on where to place the build files for deployment. In this project, everything is built using ‘npm run build’ and the resulting files are placed in the ‘dist’ folder.
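These settings can be entered in the Vercel dashboard, or sketched in a vercel.json file at the root of the repository, for instance:

    {
        "buildCommand": "npm run build",
        "outputDirectory": "dist"
    }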
Figure 5.12: Final rendered scene along with debugging tools