You’re looking at a beautiful 4K ultra widescreen monitor, admiring the glorious scenery and intricate detail.
Ever wondered just how those graphics got there?
Curious about what the game made your PC do to make them?

We’ll start with the end result and ask ourselves: “What am I looking at?”
From there, we’ll analyze each step performed to get that picture we see.
In the image below, we’re looking at a camera shot of the monitor displaying the game.
This picture is typically called a frame, but what exactly is it that we’re looking at?
And just like with cooking, rendering needs some basic ingredients.
But this allows us to more easily see what this asset is made from.
Vertices can also carry values describing how shiny the surface is, whether it is translucent or not, and so on.
One specific set of values that vertices always have has to do with texture maps.
So in a 3D rendered world, everything seen will start as a collection of vertices and texture maps.
This all forms the required framework that will be used to create the final grid of colored pixels.
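To make that framework concrete, here is a minimal sketch of how a single vertex might be stored. The field names and layout are illustrative, not any engine’s actual format; real engines pack this data into tightly aligned GPU buffers, but the ingredients are the same.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    position: tuple  # (x, y, z) location in 3D space
    color: tuple     # (r, g, b) surface color
    uv: tuple        # (u, v) texture map coordinates, usually in the 0..1 range

# Three vertices are enough to describe one triangle.
triangle = [
    Vertex((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0)),
    Vertex((-1.0, -1.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0)),
    Vertex((1.0, -1.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.0)),
]
```

The `uv` pair is what links each corner of the triangle to a spot on a texture map, which is why texture coordinates travel with every vertex.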
Let’s start with a very basic ‘game’: one cuboid on the ground.
One triangle, or even one whole object, is known as a primitive.
Then, it’s time for a spot of coloring.
Every object in this image is modelled by vertices connected together, forming primitives made of triangles.
Onto the next stage.
For most games, this process involves at least two steps: screen space projection and rasterization.
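The projection half of that can be sketched in a few lines. This is a simplified, hypothetical version: it assumes the camera sits at the origin looking down the +z axis, whereas a real pipeline would first run each vertex through model and view matrices.

```python
def project(vertex, width, height, focal=1.0):
    """Perspective-project a 3D point into 2D screen space."""
    x, y, z = vertex
    # Perspective divide: points farther away move toward the center.
    sx = (focal * x) / z
    sy = (focal * y) / z
    # Map from the -1..1 range onto the pixel grid.
    px = int((sx + 1.0) * 0.5 * width)
    py = int((1.0 - (sy + 1.0) * 0.5) * height)  # screen y runs downward
    return px, py

print(project((0.0, 0.0, 5.0), 1920, 1080))  # a point on the axis lands mid-screen: (960, 540)
```

Rasterization then takes each projected triangle and works out which pixels of that grid it actually covers.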
The removal of primitives is called culling, and it can make a significant difference to how quickly the whole frame is rendered.
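One common form is back-face culling, which can be sketched with a single sign test on the triangle’s screen-space winding order. The convention here (counter-clockwise means front-facing) is just an assumption for this sketch; real APIs let you pick the winding.

```python
def signed_area(p0, p1, p2):
    """Twice the signed area of a screen-space triangle.

    The sign reveals the winding order of the three points.
    """
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def cull_backfaces(triangles):
    # Keep only triangles wound counter-clockwise (front-facing here);
    # the rest never reach the rasterizer, saving per-pixel work.
    return [t for t in triangles if signed_area(*t) > 0]

tris = [
    ((0, 0), (4, 0), (0, 4)),  # counter-clockwise: kept
    ((0, 0), (0, 4), (4, 0)),  # clockwise: culled
]
print(len(cull_backfaces(tris)))  # 1
```

For a closed object like our cuboid, roughly half the triangles face away from the camera at any moment, so this one test can halve the rasterization work.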
The above image shows a very simple example of a frame containing one primitive.
This has resulted in a problem called aliasing, although there are plenty of ways of dealing with it.
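The oldest of those fixes is supersampling: test several points inside each pixel instead of one, and blend the result. A toy version, with an illustrative `inside` function standing in for a real triangle test:

```python
def coverage(inside, px, py, samples=4):
    """Estimate how much of pixel (px, py) a shape covers by testing
    several sub-pixel sample points (a supersampling sketch).

    `inside` is any function that says whether a point lies in the shape.
    """
    grid = int(samples ** 0.5)  # e.g. 4 samples -> a 2x2 grid
    hits = 0
    for i in range(grid):
        for j in range(grid):
            sx = px + (i + 0.5) / grid
            sy = py + (j + 0.5) / grid
            if inside(sx, sy):
                hits += 1
    return hits / (grid * grid)

# An edge running through x = 10.5: with one sample per pixel, the edge
# pixel is all-or-nothing; with four samples it blends to 50% coverage.
half_plane = lambda x, y: x < 10.5
print(coverage(half_plane, 10, 0))  # 0.5
```

Instead of a hard stair-step, the edge pixel gets an in-between color, which is exactly what smooths the jaggies.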
Yet more math is employed to account for this, but the results can generate some weird problems.
The result is a jarring mess, with aliasing rearing its ugly head again.
In lots of games, the pixel stage needs to be run a few times.
Fortunately, there is help in the form of what is called an application programming interface, or API for short.
While there are differences in terms of the wording of instructions and operations (e.g.
Where there will be a difference comes down to what hardware is used to do all the rendering.
CPUs ultimately aren’t designed for this, as they have to be general-purpose by design.
all of the routines done to move and color triangles) using the CPU.
In other words, it was more than 20 times faster.
With CPU processed vertex shaders, the average result was a paltry 3.1 fps!
Bring in the GPU and the average frame rate rises to 1388 fps: nearly 450 times quicker.
What would it be like if it were modern and the whole lot was done in software?
The reason for such a difference lies in the math and data format that 3D rendering uses.
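At its core, that math is the same small matrix-times-vector operation repeated independently for every vertex, millions of times per frame — exactly the pattern a GPU’s thousands of parallel cores are built for. A sketch of that core operation (a plain 3×3 rotation; real pipelines use 4×4 matrices to fold in translation and projection):

```python
import math

def transform(matrix, vertex):
    """Multiply a 3x3 matrix by a 3-component vector: the basic
    operation behind moving, rotating, and scaling every vertex."""
    return tuple(sum(matrix[r][c] * vertex[c] for c in range(3))
                 for r in range(3))

# Rotate about the z axis by 90 degrees.
a = math.pi / 2
rot_z = [[math.cos(a), -math.sin(a), 0.0],
         [math.sin(a),  math.cos(a), 0.0],
         [0.0,          0.0,         1.0]]

# Every vertex runs the identical instruction stream on different data,
# which is why the work parallelizes so well across GPU cores.
mesh = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
rotated = [transform(rot_z, v) for v in mesh]
```

A CPU chews through that list a handful of vertices at a time; a GPU transforms them all at once, which is where frame rate gaps like the one above come from.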