In my last entry, I proposed doing away with DateTime::Locale entirely.

I’ve since realized that I will want to keep it around as a place to integrate both CLDR and glibc locale data in one unified interface. I’m still going to work on my new Locale::CLDR module, but the DateTime::Locale API will probably stick around more or less as-is.

The one thing I will want to get rid of is the custom locale registration system. However, custom locales would still be usable. They would be loadable by id, or you could pass an already-instantiated custom locale object to a DateTime object.
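
Here’s a rough sketch of what I have in mind; My::Company::Locale and its id are entirely made up, and the details could still change:

    use DateTime;
    use My::Company::Locale;    # made-up in-house locale class

    # Proposed: load a custom locale by its id ...
    my $dt = DateTime->now( locale => 'my-company' );

    # ... or construct the locale object yourself and pass it in.
    my $locale = My::Company::Locale->new;
    my $dt2    = DateTime->now( locale => $locale );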

I’m planning to end-of-life DateTime::Locale sometime in the future, in favor of a new distribution, Locale::CLDR.

This new distro will be designed so that it can provide all the info from the CLDR project (eventually), rather than just datetime-related pieces.

My plan is to have DateTime use Locale::CLDR directly, rather than continue maintaining DateTime::Locale.

To that end, I’m wondering how people are using DateTime::Locale. I’m not interested in hearing from people who only use it via DateTime.pm; that form of usage will continue to work transparently. You specify a locale for a DateTime.pm object and you get localized output.
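
That usage looks something like this, and it will keep working no matter what happens under the hood (the output shown is just an example):

    use DateTime;

    # Ask for a French locale and the formatting methods return localized text.
    my $dt = DateTime->now( locale => 'fr' );

    print $dt->month_name, "\n";              # e.g. "novembre"
    print $dt->strftime('%A %d %B %Y'), "\n"; # e.g. "lundi 16 novembre 2009"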

All of the information available from DateTime::Locale will be available from Locale::CLDR, although the API will be a little different.

In particular, is anyone out there using custom in-house locales at all?

That would be the biggest potential breakage point, since upgrading DateTime.pm to a version that uses Locale::CLDR will end up making your custom locales invalid.

I’m planning to support some form of custom locales in Locale::CLDR as well, of course.

None of this will happen in the very near future. I still need to get DateTime::Format::Strptime to stop using DT::Locale first, which is its own painful project ;)

Please reply in the comments or send me email.

First, here’s the tl;dr summary … Benchmarking is for losers, Profiling rulez!

I’ve noticed a couple blog entries in the Planet Perl Iron Man feed discussing which way of stripping whitespace from both ends of a string is fastest.

Both of these entries discuss examples of benchmarking. Programmers love benchmarks. After all, it’s a great chance to whip out one’s performance-penis and compare sizes, trying to come up with the fastest algorithm.

Unfortunately, this is pointless posturing. Who cares that one version of a strip-whitespace operation is three times faster than another? The important question is whether the speed of that operation matters to your program as a whole.

Until you answer that question, all the benchmarking in the world won’t help you, and that brings us to profiling.

Profiling is a lot harder than benchmarking, which may be why people talk about it less often. Profiling doesn’t compare multiple versions of the same operation; instead, it tells us where the slowest parts of our code base are.

In order to make profiling useful, we need to write code that simulates typical end user use of the code we’re profiling. Then we run that code under a profiler, and we know what’s worth optimizing.
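
For example, a driver script might look something like this. My::App, its methods, and the paths are all invented for illustration; the point is to exercise the code the way a real user would, not to poke at one function in isolation:

    # profile_me.pl - a made-up driver script
    use strict;
    use warnings;
    use My::App;    # hypothetical application code under profile

    my $app = My::App->new( config => 'profiling.conf' );

    # Simulate a typical run: process a representative chunk of input.
    $app->process_file('t/data/sample-input.txt') for 1 .. 100;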

Once we know that, we can start speeding up our code. At this point, benchmarking might be handy. If, for example, in some crazy bizarro world, our program spent a lot of its runtime trimming whitespace from strings, we could benchmark different approaches and use the fastest.
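
If we really did land in that bizarro world, Benchmark.pm makes the comparison easy enough. Here’s a minimal sketch comparing two common trimming approaches:

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    my $str = '   some text with whitespace on both ends   ';

    # Two of the usual ways to trim whitespace from both ends of a string.
    cmpthese(
        -2,    # run each sub for at least 2 CPU seconds
        {
            two_regexes => sub {
                my $s = $str;
                $s =~ s/^\s+//;
                $s =~ s/\s+$//;
                $s;
            },
            one_regex => sub {
                my $s = $str;
                $s =~ s/^\s+|\s+$//g;
                $s;
            },
        }
    );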

Of course, in the real world, trimming whitespace will never be the slowest thing your program is doing. In most cases, the slowest parts of a program are the ones doing IO: reading files, talking to a DBMS, or making network calls. If that isn’t the case, the bottleneck is probably some non-trivial algorithm operating on a lot of data in memory.

Either way, without profiling, benchmarking is just a pointless distraction.

Of course, I’d be remiss if I didn’t point out that Perl has an absolutely fantastic profiler available these days, Devel::NYTProf. It actually works (no segfaults!), and produces fantastically useful reports, so use it.
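
Using it is about as simple as profiling gets. Assuming the hypothetical profile_me.pl driver from earlier, it goes something like this:

    # Run the driver under the profiler (writes nytprof.out by default) ...
    perl -d:NYTProf profile_me.pl

    # ... then turn the raw data into HTML reports under ./nytprof/
    nytprofhtml

    # and open nytprof/index.html in a browser.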