I have been recording CPU and disk usage for the entire duration of this run, at a rate of one sample per second. Now I'm faced with a dilemma about how to represent that data graphically. It's obviously impractical to plot one point per sample, so multiple samples have to be combined somehow, and what I can't decide is whether to combine them by average or by peak utilization. Average certainly shows the overall load better, but it does little justice to the spiky utilization of a program like y-cruncher in swap mode, and it's misleading in that it makes the program appear not to be maximizing system resources. I'll show you what I mean:

[Figures: Averaged Graph / Peaked Graph]

Even for graphs as wide as the ones above, each "macrosample" still represents just shy of two minutes of time, i.e. roughly 120 raw one-second samples collapsed into a single point.
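To make the trade-off concrete, here's a minimal sketch of the two downsampling strategies in Python. The bucket size, the sample values, and the function names are my own illustration, not anything from y-cruncher's actual instrumentation.

```python
# A minimal sketch of the average-vs-peak downsampling trade-off,
# assuming per-second utilization samples stored as a flat list of
# floats in the range 0.0-100.0. The 120-sample bucket corresponds to
# the "just shy of two minutes" per macrosample mentioned above.

from statistics import mean

def downsample(samples: list[float], bucket: int, reduce) -> list[float]:
    """Collapse each run of `bucket` consecutive samples into one macrosample."""
    return [reduce(samples[i:i + bucket]) for i in range(0, len(samples), bucket)]

# Hypothetical spiky swap-mode workload: short bursts of 100% CPU
# alternating with long stalls waiting on disk (8 minutes at 1 Hz).
spiky = ([100.0] * 10 + [5.0] * 110) * 4

averaged = downsample(spiky, 120, mean)  # ~12.9 per macrosample: looks idle
peaked   = downsample(spiky, 120, max)   # 100.0 per macrosample: looks saturated

print(averaged)  # [12.916..., 12.916..., 12.916..., 12.916...]
print(peaked)    # [100.0, 100.0, 100.0, 100.0]
```

Neither number is wrong, of course: the average captures the total work done over each interval, while the peak captures whether the program ever saturated the resource within it.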

Fun Stats: 2.39 quadrillion cycles of CPU time consumed so far, 24.1 trillion bytes read, and 23.5 trillion bytes written.