I realized that the migrations I wrote were very buggy. Now I’ve written a test system to help me test future migrations, but the existing releases are problematic. I can create a set of schema changes to fix up a schema which has already been migrated, but the changes will have to be applied manually. Note that if you’re comfortable wiping your existing schema because you’re just playing with Silki then this is a non-issue.
There’s been a lot of discussion about the role of TPF lately, both at YAPC and on blogs. The most recent discussion is in the comments of a recent blog post by Gabor Szabo asking people to weigh in on what TPF should be doing. In the comments, Casey West says: It’s a striking sign that The Perl Foundation is expected to pay for open source contributors … Right now TPF is using money to demotivate the Perl Community!
In a comment on my entry about Dist::Zilla pros and cons, Phred says: I’m not clear on the value Dist::Zilla provides other than some versioning auto-incrementing and syntactic sugar for testing. This brings up a good question: what the heck does dzil do? Let’s walk through a dist.ini file from a real project. I’ll use the dist.ini from my Markdent distribution. This should answer the “what does it do” question quite well.
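For readers who’ve never seen one, a dist.ini is just an INI file that names the distribution and lists plugins; each plugin does one piece of the release work. This is a minimal illustrative sketch, not the actual Markdent dist.ini (the plugin choices below are placeholders):

```ini
name             = Markdent
author           = Dave Rolsky <autarch@urth.org>
license          = Artistic_2_0
copyright_holder = Dave Rolsky

; [@Basic] is a bundle of the plugins needed to gather files,
; build, test, and upload a release
[@Basic]

; Inserts a $VERSION into each module at build time
[PkgVersion]

; Scans the code and populates the prereq list automatically
[AutoPrereqs]
```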
Edit October 25, 2018: I wasn’t really correct about the immutable cons. While by default dzil acts as a giant pre-processor, there are ways to use it that minimize the differences between the code released on CPAN and the code in your repo. You can have a $VERSION in all your modules, you can have a Makefile.PL in your repo, you can have a LICENSE file. And you can do all this while still letting dzil manage these things for you.
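As a sketch of what that style of setup can look like: the plugin names below are real Dist::Zilla plugins, but this exact combination is illustrative, not necessarily what I use.

```ini
; Keep a real $VERSION in each .pm file in the repo;
; RewriteVersion reads it rather than generating one
[RewriteVersion]

; After a release, bump the $VERSION in the repo copies
[BumpVersionAfterRelease]

; Copy generated files from the build back into the repo,
; so the repo matches what ships to CPAN
[CopyFilesFromBuild]
copy = Makefile.PL
copy = LICENSE
```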
I released the first version of Text::TOC, so now we can revisit my earlier design in light of an actual implementation. From a high level, what’s released is pretty similar to what I thought I would release. Here’s what I said the high-level process looked like:

1. Process one or more documents for “interesting” nodes.
2. Assemble all the nodes into a table of contents.
3. Annotate the source documents with anchors as needed.
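Those three steps can be sketched crudely in plain Perl. This is a naive regex-based illustration, not how Text::TOC actually works; real HTML needs a real parser.

```perl
use strict;
use warnings;

my $html = '<h2>Intro</h2><p>one</p><h2>Details</h2><p>two</p>';

# Step 1: find the "interesting" nodes -- here, just <h2> headings
my @headings = $html =~ m{<h2>([^<]+)</h2>}g;

# Step 2: assemble them into a table of contents
my $i = 0;
my $toc = join "\n", map { $i++; qq{<a href="#toc$i">$_</a>} } @headings;

# Step 3: annotate the source document with matching anchors
my $n = 0;
$html =~ s{<h2>}{ $n++; qq{<a name="toc$n"></a><h2>} }ge;

print "$toc\n";
```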
A while ago, I wrote an entry on the idea of breaking problems down as a strategy for building good tools. Today, I started writing a new module, Text::TOC. The goal is to create a tool for generating a table of contents from one or more documents. I’m going to write up my initial design thoughts as a “how-to” on problem breakdown. First, a little background. I’ve already looked at some relevant modules on CPAN.
In my last entry, I proposed doing away with DateTime::Locale entirely. I’ve since realized that I will want to keep it around as a place to integrate both CLDR and glibc locale data in one unified interface. I’m still going to work on my new Locale::CLDR module, but the DateTime::Locale API will probably stick around more or less as-is. The one thing I will want to get rid of is the custom locale registration system.
I’m planning to end-of-life DateTime::Locale sometime in the future, in favor of a new distribution, Locale::CLDR. This new distro will be designed so that it can provide all the info from the CLDR project (eventually), rather than just datetime-related pieces. My plan is to have DateTime use Locale::CLDR directly, rather than continue maintaining DateTime::Locale. To that end, I’m wondering how people are using DateTime::Locale. I’m not interested in people only using it via DateTime.
First, here’s the tl;dr summary … Benchmarking is for losers, Profiling rulez! I’ve noticed a couple blog entries in the Planet Perl Iron Man feed discussing which way of stripping whitespace from both ends of a string is fastest. Both of these entries discuss examples of benchmarking. Programmers love benchmarks. After all, it’s a great chance to whip out one’s performance-penis and compare sizes, trying to come up with the fastest algorithm.
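For reference, the kind of benchmark those entries run looks roughly like this, using the core Benchmark module. The two regex variants are the usual suspects, not necessarily the exact code from those posts:

```perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

my $text = "   some text surrounded by whitespace   \n";

cmpthese(
    100_000,
    {
        # Two separate anchored substitutions
        two_passes => sub {
            my $s = $text;
            $s =~ s/^\s+//;
            $s =~ s/\s+$//;
        },
        # One substitution with alternation
        alternation => sub {
            my $s = $text;
            $s =~ s/^\s+|\s+$//g;
        },
    }
);
```

Of course, the point of this entry is that only a profiler can tell you whether this micro-difference matters in your real program at all.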
If you’ve been bitten by the testing bug, you’ve surely encountered the problem of testing a database-intensive application. The problem this presents isn’t specific to SQL databases, nor is it just a database problem. Any data-driven application can be hard to test, regardless of how that data is stored and retrieved. The problem is that in order to test your code, you need data that at least passably resembles data that the app would work with in reality.