# Why FPS is a bad performance metric

Hello everyone.

This time I am writing about a concept that is more about programming than modelling. I will try to make clear why FPS is a bad performance metric and help you understand what we should actually measure in a graphics engine.

Frames per second (FPS for short) is a simple metric that gives end-users a way to gauge the overall performance of an application. While it provides understandable information, it is neither precise nor particularly helpful when we want to measure the actual performance of the application.

For the average end-user, FPS can be measured by counting how many frames were produced within one second of execution. The problem with this method is that it tells us nothing about how much time any individual frame needed. No frame is identical to the rest; some may have more geometry to process or more physics calculations, and these introduce discrepancies between frame times. With a plain FPS counter we can't identify, let alone optimize, these heavily loaded frames!

Enter frametime. We can measure the time needed to generate every frame and compute the average of those times over a second. Dividing 1 by the average frametime (in seconds) then gives us the average number of frames generated per second. If the frametime is measured in milliseconds (ms), we divide 1000 by the average frametime instead. This way we obtain information about each individual frame and still have a means to display FPS to the end-user.
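The conversion above can be sketched in a few lines. This is a minimal illustration, not engine code; the function name and the sample window are made up for the example.

```python
# Sketch: deriving an FPS figure from per-frame times in milliseconds.
def fps_from_frametimes(frametimes_ms):
    """Average the per-frame times over a window and convert to FPS."""
    avg_ms = sum(frametimes_ms) / len(frametimes_ms)
    return 1000.0 / avg_ms  # 1000 ms in a second / average frametime in ms

# One second's worth of frames: most take 16 ms, a few spike to 40 ms.
window = [16.0] * 55 + [40.0] * 5
print(round(fps_from_frametimes(window), 1))  # -> 55.6
```

Note how the five slow frames drag the average down even though most frames were fast; a bare FPS counter would hide exactly this detail.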

If we ask the graphics card a question about time, it will answer in nanoseconds. We can convert the answer to milliseconds or seconds, or not convert it at all; that is up to us. Picture a plot of how much time each frame in a sequence takes: the spikes that take longer are likely the heavily loaded frames that need optimization! The discrepancy can be dramatic or much more subtle.
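A small sketch of what spotting those spikes might look like, assuming we already have per-frame GPU timings in nanoseconds (the sample data and the 1.5× threshold are arbitrary choices for illustration, not a standard):

```python
# Sketch: flagging frametime spikes from timer queries that report nanoseconds.
NS_PER_MS = 1_000_000

frame_times_ns = [16_200_000, 16_500_000, 41_000_000, 16_100_000, 33_400_000]
frame_times_ms = [t / NS_PER_MS for t in frame_times_ns]

# Flag frames that take noticeably longer than the average of the window.
avg_ms = sum(frame_times_ms) / len(frame_times_ms)
spikes = [(i, ms) for i, ms in enumerate(frame_times_ms) if ms > 1.5 * avg_ms]
print(spikes)  # frame index and time of each heavily loaded frame
```

These flagged frames are the ones worth inspecting in a profiler, since they are the ones an FPS counter averages away.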

Let’s examine a more qualitative difference between FPS and frametime. Imagine that we have two graphics engines: engine A, which produces steady frames at 25ms each, and engine B, which produces steady frames at 15ms each. In other words, every individual frame generated by A takes 25ms and every individual frame generated by B takes 15ms. To calculate the FPS for each engine we simply use the method shown before:

• Engine A: 1000 / 25 = 40 FPS
• Engine B: 1000 / 15 = 66.7 ~= 67 FPS

These two engines, among all their work, share one identical operation: the lighting calculations (just off the top of my head). The lighting calculations need 4ms to complete, and that cost is the same in each engine. Somehow (after sweat and hard work) we manage to optimize the lighting calculations and bring the time needed down to 2ms! We saved 2 whole milliseconds from each frame. If we apply the optimized version to both engines we get the new metrics:

• Engine A: 1000 / (25 – 2) = 1000 / 23 ~= 43.5 FPS
• Engine B: 1000 / (15 – 2) = 1000 / 13 ~= 76.9 FPS

You notice that something is not quite right. We saved 2 milliseconds in both engines but we didn’t get the same increase in both. For A we gained about 3.5 FPS while for B we gained about 10 FPS! FPS does not change linearly with the time saved, so we can’t count on it as a metric when we analyze our application (the process known as profiling).
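The non-linearity is easy to demonstrate, since FPS = 1000 / frametime is a reciprocal, not a linear function. A quick sketch using the numbers from the example:

```python
# Sketch: the same 2 ms saving yields different FPS gains depending on the
# starting frametime, because FPS = 1000 / frametime is not linear.
def fps(frametime_ms):
    return 1000.0 / frametime_ms

for before_ms in (25.0, 15.0):
    after_ms = before_ms - 2.0
    gain = fps(after_ms) - fps(before_ms)
    print(f"{before_ms} ms -> {after_ms} ms: +{gain:.1f} FPS")
```

The faster the engine already is, the bigger the apparent FPS jump for the same real saving, which is exactly why comparing FPS deltas is misleading while comparing frametime deltas is not.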

By now I should have convinced you to look further into this matter if you develop a graphics application and haven’t already thought about it. But let’s discuss one more example of where measuring frametime while profiling comes in handy.

By measuring frametime we can place upper bounds on how much time each operation of our engine may take to produce a frame. Most game development companies set a maximum frametime target (or a minimum FPS target) that they want to hit to produce a plausible (and playable) result, and then work their way toward it. Imagine that we are indeed a game dev company (ah, sweet), have created a flashy new engine, and have set our target to a minimum of 60 FPS. This translates to a maximum budget of 1000 / 60 = 16.6ms to complete each frame, regardless of what the frame has to show! That means that whether a frame carries a huge geometry load (like a city with many cars and pedestrians) or a huge physics load (like many objects crashing into each other), the maximum target time is the same and we cannot exceed it!
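Converting a target FPS into a per-frame time budget is one division; a tiny sketch (the function name is illustrative):

```python
# Sketch: turning a minimum-FPS target into a maximum per-frame budget in ms.
def frame_budget_ms(target_fps):
    return 1000.0 / target_fps

print(round(frame_budget_ms(60), 2))  # -> 16.67
print(frame_budget_ms(30))            # -> 33.333... for a 30 FPS target
```

Everything the engine does in a frame has to fit inside that budget, which is what makes it a useful number to profile against.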

In order to do this, we set a maximum time target for each of these operations so that we know exactly what we generate and how long it takes to draw it. Let’s divide a frame from our new engine into the following operations, each with its maximum time:

1. Networking: 4ms
2. Physics: 5ms
3. Lighting: 6ms
4. Post-processing: 1.6ms

These operations run sequentially, and each is independent of the others. If we manage to cut the time needed for the physics calculations by 1ms, our maximum frametime drops to 15.6ms. The rest of the operations keep their maximum times, since they are not affected by the change. It seems we are well ahead of our target time and can clearly leave the engine as is… or can we?
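Keeping per-operation budgets honest against the frame budget can be sketched with a plain dictionary (operation names and times come from the example above; the structure is illustrative, not a real engine API):

```python
# Sketch: tracking per-operation budgets against the overall frame budget.
FRAME_BUDGET_MS = 16.6

budgets = {"networking": 4.0, "physics": 5.0, "lighting": 6.0, "post": 1.6}
budgets["physics"] -= 1.0  # the 1 ms saved by the physics optimization

total = sum(budgets.values())
headroom = FRAME_BUDGET_MS - total
print(f"total {total:.1f} ms, headroom {headroom:.1f} ms")
```

The headroom is the millisecond we just freed up, and the question that follows in the text is what to spend it on.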

Why not use that 1ms we saved from the physics operation to improve one of the other operations? We would still be at our target time. After (not so much) thought, the decision is made to improve the visual fidelity of our game. Our lighting programmers now have 1 additional millisecond to squeeze in more processing, so we still hit our target time and deliver a more beautiful game! Our new times would be:

1. Networking: 4ms
2. Physics: 4ms
3. Lighting: 7ms
4. Post-processing: 1.6ms