Last time we described the `graphics pipeline` and the `Metal pipeline`. It is time we looked deeper inside the pipeline and understood how vertices are really processed at a lower level. For this, we need to learn a few `3D math` concepts such as transformations.

In the world of `3D graphics` we often think in terms of 3 or 4 dimensions for our data. As you remember from our previous episodes, `location` and `color` were both of type `vector_float4` (4-dimensional). In order to draw 3D geometry on the screen, vertices undergo a series of transformations - from `object space` to `world space`, then to `camera/eye space`, then to `clip space`, then to `normalized device coordinates` space, and finally to `screen space`. We are only looking at the first stage in this episode.

The vertices of our `triangle` are expressed in `object space` (local coordinates). They are currently specified about the triangle's origin, which lies at the center of the screen. In order to position and move the triangle in a larger scene (world space), we need to apply `transformations` to these vertices. The `transformations` we will look at are: scaling, translation and rotation.

The translation matrix is similar to an identity matrix (with values of 1 on its main diagonal), except that positions `[3]`, `[7]` and `[11]` in row-major order - which become positions [12], [13] and [14] once stored in `column-major order`, as Metal expects - are populated with the components of a vector D representing the distances the vertex is moved along the x, y and z axes, respectively.
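To make the column-major layout concrete, here is a small sketch in Swift (the helper names are illustrative, not library calls): we build a flat 16-element column-major array, put the distances at indices 12, 13 and 14, and apply it to the point (0, 0, 0, 1).

```swift
import Foundation

// Column-major 4x4 translation by (dx, dy, dz): the distances live at
// flat indices 12, 13 and 14. (Illustrative helper, not a library call.)
func translationMatrix(_ dx: Float, _ dy: Float, _ dz: Float) -> [Float] {
    var m: [Float] = [1, 0, 0, 0,
                      0, 1, 0, 0,
                      0, 0, 1, 0,
                      0, 0, 0, 1]
    m[12] = dx; m[13] = dy; m[14] = dz
    return m
}

// Multiply a column-major matrix by a column vector (x, y, z, w).
func transform(_ m: [Float], _ v: [Float]) -> [Float] {
    var out = [Float](repeating: 0, count: 4)
    for row in 0..<4 {
        for col in 0..<4 {
            out[row] += m[col * 4 + row] * v[col]   // column-major indexing
        }
    }
    return out
}

let moved = transform(translationMatrix(1, 2, 3), [0, 0, 0, 1])
// moved is now [1.0, 2.0, 3.0, 1.0]
```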

The scaling matrix is also similar to an identity matrix, with positions [0], [5] and [10] (on the main diagonal, so the same in either storage order) populated with the components of a vector S representing the scale factors applied to the vertex. The x, y and z values are usually the same float, since scaling is most often done proportionally on all axes.

The rotation matrix is also similar to an identity matrix where, depending on which axis we are rotating about, different positions are populated with either the sine or cosine of the angle we are rotating by. If we are rotating about the x axis, positions [5], [6], [9] and [10] are populated. If we are rotating about the y axis, positions [0], [2], [8] and [10] are populated. Finally, if we are rotating about the z axis, positions [0], [1], [4] and [5] are populated. Remember, these positions need to be transposed into `column-major order`.

Alright, we've had enough math for a whole week, so let's put these matrices into code. We will continue with the code from where we left off in part 3. It comes in handy to create a `struct` named Matrix that includes these `transformations`:
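A sketch of such a struct, assuming column-major storage in a flat `[Float]` array; the helper names (`scalingMatrix`, `translationMatrix`, `rotationMatrix`, `modelMatrix`) are illustrative and the rotation shown is about the z axis only:

```swift
import Foundation

struct Matrix {
    var m: [Float]

    // identity matrix: 1s on the main diagonal
    init() {
        m = [1, 0, 0, 0,
             0, 1, 0, 0,
             0, 0, 1, 0,
             0, 0, 0, 1]
    }

    // uniform scale by s: diagonal positions 0, 5 and 10
    func scalingMatrix(_ matrix: Matrix, _ s: Float) -> Matrix {
        var matrix = matrix
        matrix.m[0] = s
        matrix.m[5] = s
        matrix.m[10] = s
        return matrix
    }

    // translate by (x, y, z): column-major positions 12, 13 and 14
    func translationMatrix(_ matrix: Matrix, _ position: [Float]) -> Matrix {
        var matrix = matrix
        matrix.m[12] = position[0]
        matrix.m[13] = position[1]
        matrix.m[14] = position[2]
        return matrix
    }

    // rotate about the z axis by angle (radians): the upper-left 2x2 block
    func rotationMatrix(_ matrix: Matrix, _ angle: Float) -> Matrix {
        var matrix = matrix
        matrix.m[0] = cosf(angle)
        matrix.m[1] = sinf(angle)
        matrix.m[4] = -sinf(angle)
        matrix.m[5] = cosf(angle)
        return matrix
    }

    // combine the transformations into one model matrix; the scaling,
    // translation and rotation calls will be added here step by step
    func modelMatrix(_ matrix: Matrix) -> Matrix {
        return matrix
    }
}
```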

Let's walk through this code. We first create the `struct` and declare an `array` of `floats`. Then we provide an initializer for it, which produces the identity matrix (all 1's on the main diagonal). Next, we create the transformation matrices. Finally, we create a modelMatrix function which will combine all the transformations into a single output matrix.

In order for these transformations to work, we need to send them to the `GPU` via a `shader`. To do that, we first need to create a new buffer. Let's name it uniform_buffer. `Uniforms` are constructs we can use when we want to send data that applies to the entire model rather than to each vertex. It only makes sense that we save space by using `uniforms` instead and sending one final `model matrix` containing all the transformations. So at the very beginning of our `MetalView` class, create the new buffer:
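A sketch of how the declaration might look, assuming the `MetalView` class and `vertex_buffer` property from the earlier parts of this series:

```swift
import MetalKit

class MetalView: MTKView {
    var vertex_buffer: MTLBuffer!
    var uniform_buffer: MTLBuffer!   // new: will hold the model matrix
    // ... rest of the class unchanged
}
```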

Inside the createBuffers() function, allocate memory for the buffer, enough to hold a 4x4 matrix:
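Something along these lines, assuming the `device` (an `MTLDevice`) was created earlier in the series and that `Matrix` is the struct holding our transformations; 16 floats are enough for one 4x4 matrix:

```swift
// inside createBuffers()
uniform_buffer = device!.makeBuffer(length: MemoryLayout<Float>.size * 16,
                                    options: [])
let bufferPointer = uniform_buffer.contents()
// copy the current model matrix into the buffer
memcpy(bufferPointer, Matrix().modelMatrix(Matrix()).m,
       MemoryLayout<Float>.size * 16)
```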

Inside the sendToGPU() function, after setting the `vertex_buffer` in the `command encoder`, also set the uniform_buffer:
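For example (assuming the `command_encoder` variable from the earlier parts), binding the uniforms right after the vertices, at buffer index 1:

```swift
// inside sendToGPU()
command_encoder.setVertexBuffer(vertex_buffer, offset: 0, index: 0)
command_encoder.setVertexBuffer(uniform_buffer, offset: 0, index: 1)
```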

Finally, let’s move to Shaders.metal for the last part of the configuration. Below the `Vertex` struct, create a new struct named Uniforms that will hold our model matrix:
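A minimal version of that struct in the Metal Shading Language:

```metal
struct Uniforms {
    float4x4 modelMatrix;
};
```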

Modify the `vertex shader` to include the transformations we passed along from the `CPU`:
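A sketch of the updated shader, assuming the `Vertex` struct from the earlier parts has `position` and `color` members and that the vertex function is named `vertex_func`:

```metal
vertex Vertex vertex_func(constant Vertex *vertices [[buffer(0)]],
                          constant Uniforms &uniforms [[buffer(1)]],
                          uint vid [[vertex_id]]) {
    float4x4 matrix = uniforms.modelMatrix;
    Vertex in = vertices[vid];
    Vertex out;
    out.position = matrix * in.position;   // apply the model transform
    out.color = in.color;
    return out;
}
```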

All we did here was pass the uniforms as the second argument (buffer index 1), and then multiply the model matrix with each vertex position. If you run the app now, you will see our good old triangle friend taking up the entire space of the view.

Let’s scale it down to a quarter of its original size. Add this line to the modelMatrix function:
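With the helper naming sketched earlier, the line might read `matrix = scalingMatrix(matrix, 0.25)` (an illustrative name). In terms of the raw column-major array, it boils down to setting the diagonal:

```swift
var m: [Float] = [1, 0, 0, 0,
                  0, 1, 0, 0,
                  0, 0, 1, 0,
                  0, 0, 0, 1]
// quarter-size on all three axes: diagonal indices 0, 5 and 10
m[0] = 0.25; m[5] = 0.25; m[10] = 0.25
```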

Run the app again and notice that the triangle is way smaller now:

Next, let's translate the triangle up on the y axis by half the screen height:
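For instance `matrix = translationMatrix(matrix, [0.0, 0.5, 0.0])` (again, an illustrative helper name). In the flat column-major array, a y-only translation is a single assignment:

```swift
var m: [Float] = [1, 0, 0, 0,
                  0, 1, 0, 0,
                  0, 0, 1, 0,
                  0, 0, 0, 1]
// move up on y: column-major index 13
m[13] = 0.5
```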

Run the app again and notice that the triangle is now higher than before:

Finally, let’s rotate the triangle about the z axis:
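For a z-axis rotation, only the upper-left 2x2 block of the matrix changes. A sketch with an illustrative angle of 0.1 radians:

```swift
import Foundation

var m: [Float] = [1, 0, 0, 0,
                  0, 1, 0, 0,
                  0, 0, 1, 0,
                  0, 0, 0, 1]
let angle: Float = 0.1   // radians; illustrative value
// z rotation in column-major order: indices 0, 1, 4 and 5
m[0] = cosf(angle); m[4] = -sinf(angle)
m[1] = sinf(angle); m[5] =  cosf(angle)
```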

Run the app again and notice that the triangle is now also rotated:

Next week we will finally get to drawing 3D objects (such as cubes or spheres). The source code is posted on GitHub as usual.

Until next time!