
David Grace


The Art of Vector Graphics

This article walks through the building of a Javascript 3D vector graphics engine. I'll discuss the basics of how to get 3D objects on the screen and the structure of a simple 3D engine.

One of my projects for the last Ludum Dare of 2015 was a Javascript game engine based entirely upon vector graphics. But what sort of vector graphics?

I'm talking about very old school vector graphics, such as what you would find in an arcade game from the late 70s: games like Battlezone, Tempest, or Star Wars. Back when memory was at a premium and bitmaps were costly to store, some early arcade games used dedicated hardware for drawing 2D vector lines on a display. The display wasn't raster-scanned like every CRT since that time; the images were built from lines drawn by the electron beam, racing around the phosphors and tracing patterns that were distinctly unique to that era. It's really hard to describe to today's generation what those early vector graphics felt like. Flickery, smeared, and blindingly bright at times, they had a visual style which has been emulated for nostalgic value ever since.

My goal was to create a similar look for the web, utilizing Javascript and WebGL. I decided to use the PIXI engine as the basis for all of my rendering, as it contains seamless calls which work on both WebGL and a standard HTML5 canvas. I also borrowed linear algebra classes (such as Vector3 and Matrix4) from Three.js as a time saver.

Beginning Steps

First, how do you design a vector engine? The first thing you will need is a simple geometric format for storing your 3D data. I decided upon a simple "model" format which just consists of an array of vertices, along with an array of lines. The lines are the actual rendered portion: they are simple lists of connected vertex indices. [0, 1, 1, 2] would draw a line from vertex #0 to vertex #1, and from vertex #1 to vertex #2.
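
To make this concrete, here is a rough sketch of what such a model might look like in memory (the property names here are illustrative, not necessarily the engine's actual ones):

// A unit square in the XY plane, stored as vertices plus line index pairs.
var squareModel = {
  // Each vertex is [x, y, z].
  vertices: [
    [0, 0, 0],  // #0
    [1, 0, 0],  // #1
    [1, 1, 0],  // #2
    [0, 1, 0]   // #3
  ],
  // Each consecutive pair of indices is one line segment.
  lines: [0, 1, 1, 2, 2, 3, 3, 0]
};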

To build these in-memory structures, I decided to write a simple Wavefront Obj parser. Wavefront Obj is an ancient 3D model format, created for the Advanced Visualizer (a forerunner to Maya). The format is very simple: it contains a list of vertices and then a list of faces which use those vertices. Vertices are just three floating point numbers, and faces are polygons (usually triangles, but they can have any number of sides) which refer to index numbers in the vertex list. In my engine, I turn the polygons into a series of line pairs without caring about the faces or their ordering; a sketch of that conversion follows the listing below.

o Cube
v 1.000000 -1.000000 -1.000000
v 1.000000 -1.000000 1.000000
v -1.000000 -1.000000 1.000000
v -1.000000 -1.000000 -1.000000
v 1.000000 1.000000 -1.000000
v 1.000000 1.000000 1.000000
v -1.000000 1.000000 1.000000
v -1.000000 1.000000 -1.000000
f 1 2 3 4
f 5 8 7 6
f 1 5 6 2
f 2 6 7 3
f 3 7 8 4
f 5 1 4 8
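
Converting this into the in-memory format takes only a handful of lines. Here is a hypothetical sketch of the approach, ignoring normals, texture coordinates, and the duplicate edges that adjacent faces produce:

function parseObj(text) {
  var vertices = [], lines = [];
  text.split('\n').forEach(function (line) {
    var parts = line.trim().split(/\s+/);
    if (parts[0] === 'v') {
      vertices.push([+parts[1], +parts[2], +parts[3]]);
    } else if (parts[0] === 'f') {
      // Obj indices are 1-based, so subtract one. parseInt also stops
      // at the '/' in faces written as "f 1/1/1 2/2/2 ...".
      var idx = parts.slice(1).map(function (p) {
        return parseInt(p, 10) - 1;
      });
      // Connect each vertex to the next, wrapping around to close the loop.
      for (var j = 0; j < idx.length; j++) {
        lines.push(idx[j], idx[(j + 1) % idx.length]);
      }
    }
  });
  return { vertices: vertices, lines: lines };
}

A more careful parser would also de-duplicate shared edges so they are not drawn twice.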

Later, I added materials. Materials are just groupings of lines: in effect, you take the lines array and split it into separate arrays which are processed individually. This allows you to apply different effects to different parts of a 3D object.
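
In memory, that might look something like this (again, the names are illustrative, and shipVertices is a stand-in for the parsed vertex array):

var ship = {
  vertices: shipVertices,  // one vertex array shared by all materials
  materials: [
    { color: 0x00ff00, lines: [0, 1, 1, 2] },  // hull lines
    { color: 0xffffff, lines: [4, 5, 5, 6] }   // cockpit lines
  ]
};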

From A Local 3D Representation to A 3D World

Now that you’ve got a series of lines and their associated vertices, what do you do with them? This requires a step up in complexity. I’m not going to cover 3D engines and how they represent 3D objects in detail, but here is an overview.

Objects in 3D space require spatial parameters such as their size (scale), which direction they are facing (orientation), and where they are located in the 3D world (position). These properties are stored individually on the model, but before rendering they are combined to form an ObjectMatrix. The ObjectMatrix is a representation of our model expressed in world coordinates.
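
Using the Three.js-style math classes mentioned earlier, composing that matrix might look like the following sketch (model.position, model.quaternion, and model.scale are assumed property names):

// Build the ObjectMatrix from the model's position, orientation, and scale.
var objectMatrix = new THREE.Matrix4().compose(
  model.position,    // THREE.Vector3
  model.quaternion,  // THREE.Quaternion holding the orientation
  model.scale        // THREE.Vector3
);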

There are two additional matrices which are combined with the ObjectMatrix. The first is the inverted ViewMatrix, which we'll cover later when we talk about the camera. The second is the ProjectionMatrix, which defines the "shape" of the 3D camera used to view the scene. Most of the time this is a perspective camera (where lines recede to a point on the horizon), but it can take other shapes, such as a parallel projection (think: isometric view). The ProjectionMatrix also sets us up for the final perspective divide, by calculating the W component of each transformed vertex.
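
For reference, here is how a standard OpenGL-style perspective matrix can be constructed. This is a generic sketch rather than the engine's exact matrix; the -1 in the bottom row is what copies depth into W, setting up the divide:

function makePerspective(fovY, aspect, near, far) {
  var f = 1 / Math.tan(fovY / 2);
  // Matrix4.set() takes its sixteen values in row-major order.
  return new THREE.Matrix4().set(
    f / aspect, 0, 0,                           0,
    0,          f, 0,                           0,
    0,          0, (far + near) / (near - far), (2 * far * near) / (near - far),
    0,          0, -1,                          0
  );
}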

When combined with the ObjectMatrix, these two matrices create the ModelViewMatrix: a single matrix aggregating every operation we need to go from the model's original 3D mesh coordinates (as defined in the Wavefront Obj file) to the final view, covering the model's own displacements (rotation, scale, and position) plus the view and projection transforms.
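
In code, that combination is just a couple of matrix multiplies (invViewMatrix is covered in the camera section below):

// Combine everything: Projection * InvView * Object.
var modelViewMatrix = new THREE.Matrix4()
  .multiplyMatrices(projectionMatrix, invViewMatrix)
  .multiply(objectMatrix);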

At this point, it’s fairly easy to take our model’s local vertices and then multiply them by the combined ModelViewMatrix. This results in a new set of vertices, which I will call the transformed vertices. These coordinates are in what is typically called the clip space.
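
A sketch of that step, using Vector4 so the W component survives for the perspective divide later:

// Transform each local vertex into clip space, keeping W intact.
var transformed = model.vertices.map(function (v) {
  return new THREE.Vector4(v[0], v[1], v[2], 1).applyMatrix4(modelViewMatrix);
});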

Rendering 3D Lines as 2D

Now that we have a series of vertices which are in a uniform coordinate space shared between all objects (world space) and have been altered by the Projection and View matrices (into clip space), we can render their associated lines to the screen.

If you are familiar with 3D, you have likely heard of the perspective divide. Since we have combined the ProjectionMatrix with our ObjectMatrix, it’s a simple step to convert our transformed vertices into 2D points that have been projected onto the screen’s 2D surface. Since W has been calculated for us, we can just divide the X and Y coordinates by the W factor. After that, we have screen-space coordinates and it’s relatively easy to draw lines between points. That’s all there is to it! (Or is there? We’ll cover clipping shortly.)
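
A minimal sketch of that divide, mapping one clip-space vertex to pixel coordinates (width and height are the dimensions of the render surface):

function clipToScreen(v, width, height) {
  // The perspective divide: clip space -> normalized device coordinates.
  var x = v.x / v.w;
  var y = v.y / v.w;
  // NDC (-1..1 on both axes) -> pixel coordinates, flipping Y so +Y is up.
  return {
    x: (x + 1) * 0.5 * width,
    y: (1 - y) * 0.5 * height
  };
}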

Drawing lines is straightforward. I'm using PIXI as the rendering backend, so I utilize its Canvas-like drawing tools, which can draw arbitrary 2D lines.
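
With PIXI, drawing one line of the wireframe boils down to something like this:

// PIXI.Graphics exposes Canvas-like calls that work on both the WebGL
// and canvas renderers. a and b are the line's screen-space end points.
var g = new PIXI.Graphics();
g.lineStyle(1, 0x00ff00);  // 1px bright green, in the spirit of Battlezone
g.moveTo(a.x, a.y);
g.lineTo(b.x, b.y);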

If I just wanted to draw 3D objects on the screen and didn't care about a lot of things (such as clipping to a visible view port, or flying the camera so close that an object is rendered from the inside out), I could stop here. But of course, if we're going to make a 3D engine, we have to do it right!

View Frustum Clipping

Once you have transformed the model's local vertices by the model's own world transform, the inverted ViewMatrix, and the ProjectionMatrix, you are now in what is called clip space. Here we can clip our lines against the view frustum, which results in 3D lines that do not extend outside of this region.

In the case of my 3D engine, I only cared about the near clipping plane. The reason is that anything behind the near plane is effectively behind the camera; if you move the camera inside an object (or move an object really close to the camera), lines will extend through to the other side of this plane and become distorted.

To solve this, I implement near-plane clipping after the transformed vertices have been calculated. Since I only care about the near plane, I take the W coordinates of the line's two end-points and check them against the plane. If both are behind it, I'm done: don't render the line. If both are in front of it, I'm also done: the line can be rendered as-is. If only one is behind it, I perform a single line-to-plane intersection that shortens the line so it never extends behind the near plane.

This generates a clipped W coordinate for the end point behind the plane and then I am able to draw the line normally using this coordinate.
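
Here is a hypothetical sketch of that test and clip; the epsilon stands in for the near plane's W threshold:

function clipLineToNearPlane(a, b) {
  // a and b are clip-space end points (x, y, z, w).
  // W greater than a small epsilon means "in front of the near plane".
  var EPSILON = 1e-5;
  var aIn = a.w > EPSILON;
  var bIn = b.w > EPSILON;
  if (aIn && bIn) return [a, b];   // both in front: draw unchanged
  if (!aIn && !bIn) return null;   // both behind: skip this line entirely
  // Exactly one end point is behind: find where W crosses the plane
  // and interpolate all four components to that point.
  var t = (EPSILON - a.w) / (b.w - a.w);
  var hit = new THREE.Vector4(
    a.x + (b.x - a.x) * t,
    a.y + (b.y - a.y) * t,
    a.z + (b.z - a.z) * t,
    EPSILON
  );
  return aIn ? [a, hit] : [hit, b];
}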

View Port Clipping

It is also possible to clip lines to the dimensions of the view port. This allows you to have a viewing area smaller than the actual rendered surface. While this is not strictly necessary (the PIXI Graphics object will automatically clip lines drawn to its viewing region), I included it in my engine for completeness.

Note that view port clipping is a purely 2D operation; it occurs after all 3D-to-2D conversions have been performed and works in what is called screen space: the final 2D mapping of lines to your screen coordinates.

I use the Cohen-Sutherland method for line clipping. In short, this method takes the end points of the line and checks whether they lie outside any of the edges of the view port. If so, it clips them. The algorithm repeats until no more clipping occurs, since it's possible for a line to be clipped by two edges.
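
A sketch of the classic algorithm, adapted to Javascript:

// Region outcodes: which view port edges a point lies beyond.
var INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8;

function outcode(x, y, xmin, ymin, xmax, ymax) {
  var code = INSIDE;
  if (x < xmin) code |= LEFT; else if (x > xmax) code |= RIGHT;
  if (y < ymin) code |= BOTTOM; else if (y > ymax) code |= TOP;
  return code;
}

// Returns [x0, y0, x1, y1] clipped to the view port, or null if rejected.
function clipToViewPort(x0, y0, x1, y1, xmin, ymin, xmax, ymax) {
  var c0 = outcode(x0, y0, xmin, ymin, xmax, ymax);
  var c1 = outcode(x1, y1, xmin, ymin, xmax, ymax);
  while (true) {
    if (!(c0 | c1)) return [x0, y0, x1, y1];  // both inside: accept
    if (c0 & c1) return null;                 // both beyond one edge: reject
    // Pick an end point that is outside and slide it onto the crossed edge.
    var out = c0 || c1, x, y;
    if (out & TOP)         { x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax; }
    else if (out & BOTTOM) { x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin; }
    else if (out & RIGHT)  { y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax; }
    else                   { y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin; }
    if (out === c0) { x0 = x; y0 = y; c0 = outcode(x0, y0, xmin, ymin, xmax, ymax); }
    else            { x1 = x; y1 = y; c1 = outcode(x1, y1, xmin, ymin, xmax, ymax); }
  }
}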

The 3D View, aka Camera

Earlier I mentioned the camera. It’s a bit different from the ProjectionMatrix, as the camera has a distinct location and orientation in our 3D world. Therefore, it is handled separately.

The ViewMatrix is calculated much like the ObjectMatrix, by combining these properties (scale, rotation, and position) into a single matrix. We also calculate an inverse of the ViewMatrix, which we will call the InvViewMatrix. When you apply the InvViewMatrix to an object's world coordinates, it converts them to view space: centered at the origin, looking out along the view direction of the camera.

Doing it this way gives us two benefits. First, we can control the camera in much the same way as we control an object. Second, it simplifies a lot of the math used above for things like view frustum clipping, because we're always viewing the world from the origin. (Calculating things like the planes which make up the view frustum therefore becomes easy.)

A little aside about how I handle creating the InvViewMatrix: I do not actually calculate the inverse of the ViewMatrix. I can get away with a cheat because, in my engine, the camera is never scaled. In that case, I can take the transpose of the camera's rotation matrix and combine it with a negated version of the camera's translation matrix.
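
In code, the cheat looks something like this (camera.quaternion and camera.position are assumed property names):

// inv(Translation * Rotation) = transpose(Rotation) * Translation(-position).
// Only valid because the camera is never scaled.
var invRotation = new THREE.Matrix4()
  .makeRotationFromQuaternion(camera.quaternion)
  .transpose();
var invTranslation = new THREE.Matrix4()
  .makeTranslation(-camera.position.x, -camera.position.y, -camera.position.z);
var invViewMatrix = new THREE.Matrix4()
  .multiplyMatrices(invRotation, invTranslation);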

The Result

There was a lot to cover here! I'm going to talk more about this engine as I work on it. My code-name for it is the Smoking Mirror.

[Demo of Smoking Mirror]

As always, you can find my code on GitHub.