First, here’s the tl;dr summary … Benchmarking is for losers, Profiling rulez!
Both of these entries discuss examples of benchmarking. Programmers love benchmarks. After all, it’s a great chance to whip out one’s performance-penis and compare sizes, trying to come up with the fastest algorithm.
Unfortunately, this is pointless posturing. Who cares that one version of a strip-whitespace operation is three times faster than another? The important question is whether the speed matters.
Until you answer that question, all the benchmarking in the world won’t help you, and that brings us to profiling.
Profiling is a lot harder than benchmarking, which may be why people talk about it less often. Profiling doesn’t compare multiple versions of the same operation; it tells us where the slowest parts of our code base are.
To make profiling useful, we need to write code that simulates how end users typically exercise the code we’re profiling. Then we run that code under a profiler, and we know what’s actually worth optimizing.
Once we know that, we can start speeding up our code. At this point, benchmarking might be handy. If, for example, in some crazy bizarro world, our program spent a lot of its runtime trimming whitespace from strings, we could benchmark different approaches and use the fastest.
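For that bizarro-world case, Perl’s core Benchmark module makes the comparison easy. Here’s a sketch; the two trim implementations and the sample string are made up for illustration, not taken from any real code base:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $s = "   some   text   ";    # sample input, made up for illustration

# Two hypothetical ways to strip leading/trailing whitespace
sub trim_one_pass { my ($x) = @_; $x =~ s/^\s+|\s+$//g; return $x }
sub trim_two_pass { my ($x) = @_; $x =~ s/^\s+//; $x =~ s/\s+$//; return $x }

# Run each sub 200_000 times and print a table comparing their rates
cmpthese(
    200_000,
    {
        one_pass => sub { trim_one_pass($s) },
        two_pass => sub { trim_two_pass($s) },
    }
);
```

cmpthese prints a little table showing each variant’s rate and how much faster or slower it is than the others, which answers the “which is fastest?” question directly, once profiling has told you the question is worth asking.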
Of course, in the real world, this will never be the slowest thing your program is doing. In most cases, the slowest parts of a program are the parts doing IO, such as reading files, talking to a DBMS, or making network calls. If that isn’t the case, we’re probably operating on a lot of data in memory with some sort of non-trivial algorithm, and that’s the slowest part.
Either way, without profiling, benchmarking is just a pointless distraction.
Of course, I’d be remiss if I didn’t point out that Perl has an absolutely fantastic profiler available these days, Devel::NYTProf. It actually works (no segfaults!) and produces wonderfully useful reports, so use it.
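Getting a report out of Devel::NYTProf is a two-step affair: run your program under the profiler, then turn the resulting data file into HTML. Something like this, where my_app.pl stands in for whatever script exercises your code the way real users do (the name is a placeholder):

```shell
# Run the program under the profiler; this writes a nytprof.out data file
perl -d:NYTProf my_app.pl

# Turn nytprof.out into an HTML report and open it in your browser
nytprofhtml --open
```

The HTML report breaks time down by subroutine and by line, which is exactly the “where is it slow?” answer that benchmarking alone can never give you.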