[![Review Assignment Due Date](https://classroom.github.com/assets/deadline-readme-button-22041afd0340ce965d47ae6ef1cefeee28c7c493a6346c4f15d667ab976d596c.svg)](https://classroom.github.com/a/cZkyWhKO) [![Open in Codespaces](https://classroom.github.com/assets/launch-codespace-2972f46106e565e64193e422d61a12cf1da4916b45550586e14ef0a7c637dd04.svg)](https://classroom.github.com/open-in-codespaces?assignment_repo_id=15928466)

# Letter Rendering and Animation using WebGL

> render some letters, animate them

![App Preview](./images/final-preview.gif)

**Live Demo:** https://demo.taufan.dev/webgl-letters/

**Source Code:** https://git.taufan.dev/cg/webgl-letters/

> [!NOTE]
> Source code will be made public no earlier than Mon, 2024-09-22

## What I came up with

I figured the requirements themselves are pretty lax: as long as the three letters are shown, one set is made out of lines while the other is made out of triangles, and the background color changes while they cycle, I am technically on the safe side. I infer one thing from this: since there is no restriction against adding more things, I should be free to do so. That brings us to:

- A handmade animation timeline API
- Animated character and color transitions using linear interpolation
- A custom (naive) parser to convert OBJ data into an OpenGL VBO

Let me explain how I came up with each of these.

### The Absolute Minimum

The requirement dictates that a total of 6 characters be made: 3 using lines and 3 using triangles. I handwrote the vertices required to form the lines (I made an attempt at an SVG parser to generate these; more on that later). Also, since nothing in the requirement forbids computer-generated vertices, I decided to create an OBJ parser to do it for me (also, more on that later).

My base color is Madrasah Green (`#3A5A40`) and I got the complementary color from [Adobe Color](https://colors.adobe.com). To convert all the hex colors to normalized RGB, I used an [online converter](https://www.rapidtables.com/convert/color/hex-to-rgb.html) (sorry Pak Onggo 🙏).

Shapes and colors are done; I think that covers most of our needs.

### Vertex, Buffer, and Shader Organization using VAO

I found out about this cool feature called the Vertex Array Object from a [YouTube tutorial](https://youtu.be/lLa6XkVLj0w?si=NluccVo1DW_ORJ06) and [this website](https://webgl2fundamentals.org/webgl/lessons/webgl-fundamentals.html). I highly recommend checking them out; they are invaluable resources!

Basically, a VAO binds together a collection of attribute state to reduce code duplication. A VAO can then be bound to the WebGL global state before making a draw call. This is especially important if you want to treat each character as its own object, as opposed to a collection of 3 characters.

Keep in mind, though, that VAOs are natively available only in WebGL 2. You can technically use VAOs in WebGL 1 through an extension, but I decided to use WebGL 2 as it is more convenient, and baseline support is widely available anyway. Another good thing about WebGL 2 is that it lets me use GLSL ES 3.00, which in my opinion has more beautiful syntax (particularly `in` and `out`).

Other than VAOs, the basic functionality like buffer loading, shader parsing, and program linking looks largely the same as the WebGL code from the other day. I split some functions into different files to make things a little neater, and got rid of the program object as it is now handled by my VAO generator.
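To make the "record attribute state once, rebind before drawing" idea concrete, here is a minimal TypeScript sketch of how a per-letter VAO could be set up in WebGL 2. The names (`createLetterVAO`, `positionLoc`) are illustrative, not the ones used in this repo.

```typescript
// Minimal sketch: wrap one letter's vertex data in its own VAO (WebGL 2).
// `createLetterVAO` and `positionLoc` are illustrative names only.
function createLetterVAO(
  gl: WebGL2RenderingContext,
  positions: Float32Array, // flat [x0, y0, x1, y1, ...] pairs
  positionLoc: number      // attribute location from the linked program
): WebGLVertexArrayObject {
  const vao = gl.createVertexArray()!;
  gl.bindVertexArray(vao);

  // The buffer binding and attribute pointer below are recorded into the VAO.
  const vbo = gl.createBuffer()!;
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

  gl.bindVertexArray(null); // done recording state
  return vao;
}

// Later, per frame: rebind the letter's VAO and draw.
// gl.bindVertexArray(letterVAO);
// gl.drawArrays(gl.LINES, 0, vertexCount);
```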
### The Shape Object

Initially, my idea was to create a base Shape2D class that would then be inherited by each shape type (e.g. letter K, letter E, and so on). Each of these implementations would contain its own set of vertices, rendering logic, position update logic, and so on. Later on I decided that this design would only really make sense if I were to instantiate each shape multiple times, which is clearly not the case in this assignment. Thus, I settled on the current design: a generic Shape2D class that holds together a position vector, color, size, and a very minimal amount of WebGL rendering state. The decision on the shape class design was largely influenced by Daniel Shiffman, the author of the book [Nature of Code](https://natureofcode.com/).

### Shaders

You may also notice that the shape object holds position and scale information. This is possible because the vertex shader does the transformation later, using the scale as the dilation factor and the position as the offset. The resulting position is then normalized into the -1.0 to 1.0 range based on the canvas size (provided via a uniform). The fragment shader doesn't do much: it asks the programmer for the shape color via a uniform and applies that value as the fragment color output. I learned these techniques from the same [YouTube tutorial](https://youtu.be/lLa6XkVLj0w?si=NluccVo1DW_ORJ06) and [website](https://webgl2fundamentals.org/webgl/lessons/webgl-fundamentals.html).

### Animation

I wanted to create a timeline-based animation API like the one in [Motion](https://motion.dev). Of course, I can't really steal their code, as it is built on top of the native Web Animations API, so I had to build it myself. I've worked with animations before, my [personal site](https://taufan.dev) being one example, so I knew right away that I had to at least get the interpolation right. I was thinking of doing a generalized `(t) => t'` transition function (inspired by Svelte), but in the end I just settled on lerp as it is dead simple to implement.

The first iteration was focused on making things work, and I was actually quite satisfied with the result. There were some problems, though: the loop didn't work, and shapes overshot their intended final position because I forgot to clamp the interpolated value. The overall code was a mess, too. Python programmers usually judge code by how "pythonic" it is, and if there is such a thing in TypeScript, my code wasn't exactly "TypeScript-ic", if that makes any sense.

I used Claude 3.5 Sonnet to tidy up my code and fix that clamping issue. Here are the prompts I used:

```
I am trying to create some sort of generalized animation timeline for my WebGL project. Here's what I have so far:

The problem is, once a shape has reached its final position, it will go past through its intended final position when there's still more time
```

and

```
Is there any advice on how to make the code cleaner, and is there any redundant part of the code?
```

The result turned out quite nice.
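For the curious, here is a minimal sketch of the clamped-lerp idea described above. The names (`Keyframe`, `lerp`, `sample`) are illustrative and not necessarily the ones used in the repo.

```typescript
// Minimal sketch of clamped linear interpolation for a timeline keyframe.
// `Keyframe`, `lerp`, and `sample` are illustrative names, not the repo's API.
interface Keyframe {
  start: number; // ms, when the transition begins
  end: number;   // ms, when the transition ends
  from: number;  // starting value (e.g. an x position or a color channel)
  to: number;    // target value
}

function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

function sample(kf: Keyframe, now: number): number {
  // Normalize elapsed time into [0, 1]...
  const t = (now - kf.start) / (kf.end - kf.start);
  // ...and clamp it, so the value never overshoots past `to`
  // once the keyframe's duration has elapsed.
  const clamped = Math.min(Math.max(t, 0), 1);
  return lerp(kf.from, kf.to, clamped);
}

// e.g. sample({ start: 0, end: 1000, from: -200, to: 200 }, 1500) === 200
```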
### SVG Parser

Now, creating lines isn't that hard, and I guess the same could be said for the triangles, but I'm sure you'll agree that writing them by hand is pretty tedious, particularly if you want to make them pretty. So I came up with an idea: instead of wasting 15 minutes handwriting the triangle vertices, why not spend more than 10 times that researching and building some way to automate the process? Sure, it's the weekend anyway.

My thought was to generate VBOs from a well-known format, so I could create the text itself in some sort of software and then parse it. My first instinct was to write an SVG parser, since SVG is human readable and I have worked with it before. I did some research on this:

- [Reddit - How do I convert SVG image into an array of vertex](https://www.reddit.com/r/opengl/comments/1c31bmv/how_do_i_convert_a_svg_image_into_an_array_of/)
- [Gamedev StackExchange - How can I generate vertex data from SVG](https://gamedev.stackexchange.com/questions/152442/how-can-i-generate-vertex-data-from-an-svg)
- [Google Group - SVG path to vertices/indices array](https://groups.google.com/g/webgl-dev-list/c/S17ad3jFbek)
- [Processing - SVG to vertex code for P5JS](https://discourse.processing.org/t/svg-to-vertex-code-for-p5js/14942)

Eventually, I played around with it and even asked Claude (an LLM) to create the base implementation. The general idea is that, because an SVG path behaves like a pen, we just need to convert each end position into vertex data and normalize it at the end of parsing.

![SVG Conversion Result](./images/svg-preview.webp)

This works somewhat okay for lines, but since SVGs aren't typically triangulated during the creation process (unless you explicitly triangulate them using some fancy software), you'd need to triangulate those vertices yourself. Triangulation isn't exactly easy and could be a topic on its own, so for now I decided to ditch the idea completely and think of something else, which brings us to OBJ.

### Wavefront Parser

Now, you might think that creating a Wavefront parser is unreasonable given that my attempt at an SVG parser failed, but let's recall that:

- Wavefront OBJ is just as human readable as SVG, if not more so
- Because it is designed to work with graphics APIs, there must be quite a lot of people who have written their own parsers
- 3D software like Blender typically has a built-in triangulation function, so I do not have to do it myself
- While what we're aiming for is a 2D VBO, we can achieve that just by ignoring the third axis of the generated Wavefront file

After reading its entry on [Wikipedia](https://en.wikipedia.org/wiki/Wavefront_.obj_file), I got a general sense of how the format works. My first attempt at this parser was to grab every XZ vertex from the file (conveniently already normalized) and take that as the VBO. Of course, since that only contains vertices, it still lacks the information required to actually render them as triangles. Though, when I rendered it using line loops, the shapes turned out quite good.

I consulted some resources, like [this website](https://webgl2fundamentals.org/webgl/lessons/webgl-load-obj.html) and Claude Sonnet, on how to go about creating a Wavefront parser, and they led me towards creating separate buffers for the vertex array and the index array. That is a valid and even more optimized approach, and probably not that hard to implement considering I had already set up VAOs, but I wanted to keep things simple, so I just used the face information to push vertices into the final vertex buffer. Is it optimized? No. Does it work? Absolutely.
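To illustrate the face-expansion approach, here is a rough sketch of the same idea in TypeScript (the repo's actual tool is the Python script mentioned below, and the function name `objToFlatVertices` is made up for this example). It assumes the OBJ has already been triangulated (e.g. by Blender) and keeps only the X and Z coordinates for a 2D VBO.

```typescript
// Sketch of "expand OBJ faces into a flat 2D vertex buffer".
// Assumes triangulated faces; keeps X and Z, drops Y.
function objToFlatVertices(objText: string): Float32Array {
  const positions: [number, number][] = []; // [x, z] per `v` line
  const out: number[] = [];

  for (const rawLine of objText.split("\n")) {
    const line = rawLine.trim();
    if (line.startsWith("v ")) {
      // `v x y z` -> keep x and z, ignore y (the letters lie on the XZ plane)
      const [, x, , z] = line.split(/\s+/);
      positions.push([parseFloat(x), parseFloat(z)]);
    } else if (line.startsWith("f ")) {
      // `f 1 2 3` or `f 1/1/1 2/2/2 3/3/3` -> indices are 1-based;
      // the part before the first `/` is the position index
      for (const token of line.split(/\s+/).slice(1)) {
        const index = parseInt(token.split("/")[0], 10) - 1;
        out.push(...positions[index]);
      }
    }
  }
  // Ready to upload with gl.bufferData and draw with gl.TRIANGLES
  return new Float32Array(out);
}
```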
Just in case you want to use it for yourself, navigate to the `/scripts` directory and use the provided Python script:

```bash
cat ./in/yourfile.obj | python ./objParser.py > ./out/yourfile.out
```

## Acknowledgements

These resources helped me a lot in doing this assignment:

- [Nature of Code (book)](https://natureofcode.com/) by [Daniel Shiffman](https://thecodingtrain.com/)
- [webgl2fundamentals.org](https://webgl2fundamentals.org/)
- [Indigo Code (YouTube)](https://www.youtube.com/@IndigoCode)
- [Gamedev StackExchange](https://gamedev.stackexchange.com/)
- [OpenGL Subreddit](https://www.reddit.com/r/opengl/)

### AI Involvement

I tried to avoid AI as much as possible so I could fail and learn, but there were some instances where I used LLMs to save some time:

- Refactoring the animation API
- Base implementation of the SVG parser (unused)
- Base implementation of the Wavefront parser

All of these were done using [Claude 3.5 Sonnet](https://claude.ai) by [Anthropic](https://www.anthropic.com/).

### More Resources

While not directly related to this assignment in particular, these people have helped me learn about computer graphics in the past, so check them out!

- [Yan Chernikov](https://www.youtube.com/@TheCherno) - ex-EA, now creating the Hazel game engine
- [Daniel Shiffman](https://thecodingtrain.com/) - board member of the Processing Foundation, professor at NYU, founder of The Coding Train
- [Grant Sanderson](https://www.3blue1brown.com/) - founder of 3Blue1Brown, has created tons of videos explaining various topics in math
- [Sebastian Lague](https://www.youtube.com/@SebastianLague) - has created tons of videos explaining topics in CS and math