I completely agree with this blog post by Humus on the uselessness of the performance numbers in most rendering papers. This is something that often comes up when reviewing papers. Frames-per-second (fps) numbers are less than useless: they include extraneous information (the time taken to render parts of the scene not using the technique in question), and because fps is a rate rather than a time, they make meaningful comparisons very difficult. The performance measurement game developers care about is the time to execute the technique, in milliseconds.
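To put some made-up, round numbers on that: a technique that costs 1 millisecond per frame takes a 5 ms frame (200 fps) to 6 ms (167 fps), a drop of 33 fps, but takes a 33 ms frame (30 fps) to 34 ms (about 29.4 fps), a drop of less than 1 fps. The cost is identical in both cases, yet the fps deltas suggest wildly different impacts, which is exactly why milliseconds are the right unit.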
Some papers do get it right, for example this one. The authors use milliseconds for detailed performance comparisons, only using fps to show how overall performance varies with camera and light position (which is a rare legitimate use of fps).
“Therefore, SSDO is actually 3% more costly than SSAO, rather than 2.4%.”
Hmmm… I think another example would have been more compelling.
It’s also worth mentioning *how* you determine the time spent. A naive implementation will measure a “frame” as the time between the first rendering call and presenting the back buffer (e.g. Present()), or measure the time it takes to make a single graphics API call (e.g. DrawIndexedPrimitive()). Since the CPU and GPU work asynchronously, that won’t do: the call may return immediately, long before the GPU has actually processed the work. Generally you want to measure over several frames; since the CPU is only allowed to get a few frames “ahead” of the GPU, the average over a long enough window converges on the true frame time.
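Here is a minimal sketch of that “average over several frames” approach (my own illustration, not from the comment above); RenderAndPresentFrame() and MeasureAverageFrameTime() are hypothetical names standing in for whatever issues the frame’s draw calls and ends with Present():

```cpp
#include <windows.h>
#include <cstdio>

void RenderAndPresentFrame();  // placeholder: issues draw calls, ends with Present()

void MeasureAverageFrameTime(int frameCount)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    // Time many whole frames; the CPU can only run a few frames ahead of the
    // GPU, so over a long enough window the average wall-clock time per frame
    // converges on the true frame time.
    for (int i = 0; i < frameCount; ++i)
        RenderAndPresentFrame();

    QueryPerformanceCounter(&end);
    double totalMs = 1000.0 * double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    printf("average frame time: %.3f ms over %d frames\n",
           totalMs / frameCount, frameCount);
}
```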
I should add that the above has implications for CPU-side profiling. For example, a profiler may report that Present() is taking a long time, when the GPU is actually spending that time finishing previously submitted work, not presenting the back buffer; the call is simply where the CPU ends up waiting.
I don’t know. When I see a msec value for a GPU technique it always makes me a bit nervous. How can I be sure they actually timed the GPU work through to completion, and didn’t just time how long it took to submit the commands to the command stream? With fps, assuming they’re measuring the whole period from frame start to frame start, I know there’s less wiggle room for measurement error.
The discussion assumes the researchers do the measurement properly. It is quite possible to correctly measure true GPU costs of individual operations at sub-millisecond precision; tools such as PIX do it all the time.
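As a rough sketch of how one could do this with Direct3D 9 timestamp queries (my own illustration, not how PIX works internally; RenderSSAOPass() is a hypothetical stand-in for the technique being measured):

```cpp
// Bracket one technique with GPU timestamp queries so the reported
// milliseconds reflect GPU execution time, not command submission time.
#include <d3d9.h>

void RenderSSAOPass(IDirect3DDevice9* device);  // hypothetical pass being timed

double MeasureTechniqueGpuMs(IDirect3DDevice9* device)
{
    IDirect3DQuery9 *disjoint, *freq, *tsBegin, *tsEnd;
    device->CreateQuery(D3DQUERYTYPE_TIMESTAMPDISJOINT, &disjoint);
    device->CreateQuery(D3DQUERYTYPE_TIMESTAMPFREQ, &freq);
    device->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &tsBegin);
    device->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &tsEnd);

    disjoint->Issue(D3DISSUE_BEGIN);
    tsBegin->Issue(D3DISSUE_END);   // GPU writes a timestamp when it reaches this point

    RenderSSAOPass(device);         // the pass whose cost we want

    tsEnd->Issue(D3DISSUE_END);     // GPU timestamp after the pass
    freq->Issue(D3DISSUE_END);
    disjoint->Issue(D3DISSUE_END);

    // Wait until the GPU has executed everything up to the last query.
    // (In a real app you'd buffer the queries and read them a frame or two
    // later instead of stalling, but a stall is fine for offline measurement.)
    BOOL wasDisjoint = TRUE;
    while (disjoint->GetData(&wasDisjoint, sizeof(wasDisjoint), D3DGETDATA_FLUSH) == S_FALSE) {}

    UINT64 begin = 0, end = 0, frequency = 1;
    tsBegin->GetData(&begin, sizeof(begin), D3DGETDATA_FLUSH);
    tsEnd->GetData(&end, sizeof(end), D3DGETDATA_FLUSH);
    freq->GetData(&frequency, sizeof(frequency), D3DGETDATA_FLUSH);

    tsBegin->Release(); tsEnd->Release(); freq->Release(); disjoint->Release();

    // A disjoint result means the counter frequency changed mid-measurement;
    // throw that sample away.
    if (wasDisjoint)
        return -1.0;
    return 1000.0 * double(end - begin) / double(frequency);
}
```

Averaging that value over many frames gives a per-technique cost in milliseconds that is directly comparable between techniques, which is what you want in a paper.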