Last time we looked at how to quickly prototype a particle-like object directly inside a shader, using distance functions. That approach worked well for moving an object based on elapsed time. However, if we want to work with vertices, we need to define the particles on the CPU and send the vertex data to the GPU. We again use the minimal playground we have used in the past for 3D rendering, and we start by creating a Particle struct in our Metal view delegate class:

struct Particle {
    var initialMatrix = matrix_identity_float4x4
    var matrix = matrix_identity_float4x4
    var color = float4()
}

Next, we create an array of particles and a buffer to hold the data. Here we also give each particle a nice blue color and a random position to start at:

particles = [Particle](repeatElement(Particle(), count: 1000))
particlesBuffer = device.makeBuffer(length: particles.count * MemoryLayout<Particle>.stride, options: [])!
var pointer = particlesBuffer.contents().bindMemory(to: Particle.self, capacity: particles.count)
for _ in particles {
    pointer.pointee.initialMatrix = translate(by: [Float(drand48()) / 10, Float(drand48()) * 10, 0])
    pointer.pointee.color = float4(0.2, 0.6, 0.9, 1)
    pointer = pointer.advanced(by: 1)
}

Note: we divide the x coordinate by 10 to gather the particles within a narrow horizontal range, and we multiply the y coordinate by 10 for the opposite effect, spreading the particles out vertically.
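The translate(by:) helper used above is not shown in this post; a minimal sketch, assuming it simply builds a standard column-major translation matrix with simd, might look like this:

```swift
import simd

// A sketch of the translate(by:) helper used above (an assumption -
// the original implementation is not shown in this post).
// It places the translation vector in the fourth column of an
// identity matrix, the usual column-major convention.
func translate(by t: SIMD3<Float>) -> float4x4 {
    var matrix = matrix_identity_float4x4
    matrix.columns.3 = SIMD4<Float>(t.x, t.y, t.z, 1)
    return matrix
}
```

Multiplying this matrix by a point such as float4(0, 0, 0, 1) moves the point by t, which is all the particle setup above needs.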

The next step is to create a small sphere that will serve as the mesh for each particle:

let allocator = MTKMeshBufferAllocator(device: device)
let sphere = MDLMesh(sphereWithExtent: [0.01, 0.01, 0.01], segments: [8, 8], inwardNormals: false, geometryType: .triangles, allocator: allocator)
do {
    model = try MTKMesh(mesh: sphere, device: device)
} catch let e {
    print(e)
}

Next, we need an update function to animate the particles on the screen. Inside, we increment the timer each frame by 0.01 and update the y coordinate using the timer value, creating a falling motion:

func update() {
    timer += 0.01
    var pointer = particlesBuffer.contents().bindMemory(to: Particle.self, capacity: particles.count)
    for _ in particles {
        pointer.pointee.matrix = translate(by: [0, -3 * timer, 0]) * pointer.pointee.initialMatrix
        pointer = pointer.advanced(by: 1)
    }
}

At this point we are ready to call this function inside the draw method and then send the data to the GPU:

let submesh = model.submeshes[0]
commandEncoder.setVertexBuffer(model.vertexBuffers[0].buffer, offset: 0, index: 0)
commandEncoder.setVertexBuffer(particlesBuffer, offset: 0, index: 1)
commandEncoder.drawIndexedPrimitives(type: .triangle, indexCount: submesh.indexCount, indexType: submesh.indexType, indexBuffer: submesh.indexBuffer.buffer, indexBufferOffset: 0, instanceCount: particles.count)

In the Shaders.metal file we have a struct for the incoming and outgoing vertices, as well as one for the particle instances:

struct VertexIn {
    float4 position [[attribute(0)]];
};

struct VertexOut {
    float4 position [[position]];
    float4 color;
};

struct Particle {
    float4x4 initial_matrix;
    float4x4 matrix;
    float4 color;
};

The vertex shader uses the instance_id attribute, which lets us draw many instances of the single sphere we sent to the GPU in the vertex buffer at index 0. Each instance is then transformed and colored using the per-particle data we stored in the buffer at index 1.

vertex VertexOut vertex_main(const VertexIn vertex_in [[stage_in]],
                             constant Particle *particles [[buffer(1)]],
                             uint instanceid [[instance_id]]) {
    VertexOut vertex_out;
    Particle particle = particles[instanceid];
    vertex_out.position = particle.matrix * vertex_in.position;
    vertex_out.color = particle.color;
    return vertex_out;
}

Finally, in the fragment shader we return the color we passed through in the vertex shader:

fragment float4 fragment_main(VertexOut vertex_in [[stage_in]]) {
    return vertex_in.color;
}

If you run the app, you should be able to see the particles falling down like a water stream:

There is yet another, much more efficient approach to rendering particles on the GPU; we'll look into that next time. I want to thank Caroline for her valuable assistance with instancing. The source code is posted on GitHub, as usual.

Until next time!