I found a bug in Perl 6 recently. Well, really I independently discovered one that had already been reported.

Here’s how to trigger it:
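
(The original snippet isn’t reproduced here; the following is a reconstruction. Any rational literal with a large enough numerator will do.)

    # Dies at compile time instead of printing the Rat
    # (2147483648 is 2**31; the exact value is arbitrary).
    say <2147483648/1>;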

Any numerator of 2³¹ or greater causes that error. Note that Perl 6 is perfectly happy to represent rationals of that size or larger:
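
(Again a reconstruction rather than the original example: building the same value at run time, instead of as a literal, works fine.)

    my $r = Rat.new(2147483648, 1);
    say $r.^name;        # Rat
    say $r.numerator;    # 2147483648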

So the problem was clearly somewhere in the compiler.

Here’s a quick guide to how I fixed this.

First, I added a test to the Perl 6 test suite. Unlike many programming languages, the primary test suite for Perl 6 is not in the same repo as the compiler. Instead, it has its own repo, roast. Roast is The Official Perl 6 Test Suite, and any toolchain which can pass the test suite is a valid Perl 6.

In this particular case, I added a test to S32-num/rat.t. The test makes sure that a Rat value with a large numerator can round trip via the .perl method and EVAL:
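
(The snippet below is a sketch of the kind of test, not the exact test I added; the real value and description differ.)

    use Test;
    use MONKEY-SEE-NO-EVAL;    # needed to EVAL a run-time string

    my $rat = 4294967296/3;
    is EVAL($rat.perl), $rat,
        'Rat with a numerator >= 2**31 round trips via .perl and EVAL';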

Running this with the latest Rakudo caused the test to blow up with the same error as my first example. Success! Well, failure, but success at failing the way I wanted it to.

Next I had to figure out where this error was happening. This can be a bit tricky with compiler errors like this. The best way to get a clue to the problem’s location is to pass --ll-exception as a CLI flag:
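
(Shown here with the reconstructed trigger from above.)

    perl6 --ll-exception -e 'say <2147483648/1>'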

If we look at the top of the trace we see references to Perl6/Actions.moarvm:bare_rat_number. Looking in the rakudo repo, we can find a src/Perl6/Actions.nqp file that contains method bare_rat_number($/) {...}. This seemed like a pretty good guess at where the error was coming from.

Here’s the method in full before my patch:

After doing some dumping of values in the AST with code like note($.dump), I realized that the numerator could end up being passed in as either a QAST::Want or QAST::WVal object. What are these and how do they differ? Why is there a break at 2³¹? I have no clue!

However, I could see that while a QAST::Want object could be treated as an array, a QAST::WVal could not. Fortunately, both objects support a compile_time_value method. Looking at this method’s implementation in QAST::Want I could see that it was getting the first array element from the object’s internals, while QAST::WVal implemented this differently. But since they both implement the same method, why not just call it and be done with it?

Here’s the patched method:
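
(I won’t reproduce the NQP verbatim here; the sketch below only shows the shape of the change, and the exact lines are my reconstruction rather than the actual Rakudo source.)

    # Illustrative sketch only, not the verbatim Rakudo source.
    #
    # Before, the numerator node was effectively treated as an array,
    # which works for QAST::Want but blows up for QAST::WVal:
    #
    #     my $nu := $<nu>.ast[0];
    #
    # After, it simply calls the method that both node types provide:
    #
    #     my $nu := $<nu>.ast.compile_time_value;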

All the tests passed, so I think I fixed it.

Overall, this wasn’t too hard. Because much of Perl 6 is either written in Perl 6 or in NQP (a subset of Perl 6), fixing the core can be much easier than with many other languages, especially most dynamic languages which are implemented in C.

My article on Stepford for the Perl 5 advent calendar is now live. Maybe I can write an article on Perl 4 or Perl 7 for the trifecta?

Stepford is a tool we wrote at MaxMind, Inc. to help automate our database build process. It’s like make but in Perl, and instead of writing a set of rules, you write a set of step classes and it puts them all together. See the article for more details.

I wrote an article for the Perl 6 advent calendar, Perl 6 Pod, that just went live earlier this evening.

I’ll also have a Perl 5 advent calendar article coming up soon on December 16. Am I the only person to write an article for both the Perl 5 and Perl 6 advent calendars this year? I guess we’ll find out on December 25.

This post got a lot of discussion on Hacker News that you might find interesting.

I’ve been writing a fair bit of Perl 6 lately, and my main takeaway so far is that Perl 6 is fun.

Pretty much everything I love in Perl 5 is still part of Perl 6, but almost everything I hate is gone too.

Here are some of the things that I’ve been having fun with in Perl 6 …

Built-In OO and Types

I really love that I can write this in native Perl 6:
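
(The class below is a made-up example of the kind of thing I mean, not code from a real project.)

    class Config {
        has Str  $.path     is required;
        has Int  $.max-size = 10_000;
        has Bool $.verbose  = False;

        method describe (--> Str) {
            return "$.path (max size {$.max-size})";
        }
    }

    my $config = Config.new( path => '/etc/app.conf' );
    say $config.describe;    # /etc/app.conf (max size 10000)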

Of course, you can already do pretty much the same thing with Moose in Perl 5, except now I don’t have to debate Moose vs Moo vs Moops vs STOP MAKING SO DARN MANY “M” MODULES!

Roles work just as well with a simple role Foo { ... } declaration.

Multiple Dispatch

If you’ve ever written an API for parsing a text format as a stream of events in Perl 5, you’ve probably ended up with something like this:

And of course to dispatch it you write something like this:

That’s not terrible, but it’s so much more elegant with multiple dispatch in Perl 6. Here’s our listener with multiple dispatch:
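
(A sketch of the idea; the event classes and method bodies are invented just to make the example self-contained.)

    class TagStart { has Str $.name    }
    class TagEnd   { has Str $.name    }
    class TextNode { has Str $.content }

    class Listener {
        multi method handle (TagStart $e) { say "start tag: {$e.name}" }
        multi method handle (TagEnd   $e) { say "end tag: {$e.name}"   }
        multi method handle (TextNode $e) { say "text: {$e.content}"   }
    }

    my $listener = Listener.new;
    $listener.handle( TagStart.new( name => 'p' ) );          # start tag: p
    $listener.handle( TextNode.new( content => 'Hello!' ) );  # text: Hello!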

given/when and smartmatching

There was an attempt to put this in Perl 5 but it never worked out because this feature really needs a solid type system and ubiquitous OO to work properly.

Smartmatching also dovetails nicely with Perl 6’s junctions:
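
(The values below are made up, but the pattern is the point.)

    my $animal = 'cat';

    # An any() junction matches if any of its members match.
    say 'household pet' if $animal eq 'cat' | 'dog';

    # Junctions work in when clauses too, since when smartmatches
    # against the topic.
    given $animal {
        when 'cow' | 'pig' | 'chicken' { say 'farmed animal' }
        when 'cat' | 'dog'             { say 'household pet' }
        default                        { say 'no idea'       }
    }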

Easy Threading

Spinning off a few threads to do work in parallel is pretty easy. Just make a Supply and call its throttle method. Tell it the maximum number of threads to use and give it a Routine to do the work.
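
Here’s a hedged sketch of the pattern; the progress class, the worker body, and the file list are all my own inventions rather than the example from the original post:

    # A stand-in progress class, purely for illustration.
    class Progress {
        has Int $.count;
        method update (Int $i) { print "\rfinished $i of {$.count}" }
    }

    sub do-work (IO::Path $file) { sleep 0.1 }   # stand-in for the real work

    my @files = dir('.').grep(*.extension eq 'txt');

    # Only build a progress bar when attached to a terminal; otherwise
    # $prog stays an undefined Any, which has no update method.
    my $prog = Progress.new( count => @files.elems ) if $*OUT.t;

    Supply.from-list( @files.pairs ).throttle(
        4,                          # at most four files in flight at once
        -> $pair {
            my ($i, $file) = $pair.kv;
            do-work($file);
            $prog.?update($i);      # no-op unless $prog really has an update method
        },
    ).wait;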

So Many Little Things

Did you catch that $prog.?update($i) call in the example above? If $prog has the method I’m looking for, the method is called; otherwise the call does nothing. If the object wasn’t created, then $prog is an Any object, which doesn’t have an update method.

And I haven’t even had a chance to use features like grammars, built-in set operations, or a native call interface that lets you define the mapping between Perl 6 and C with some trivial Perl 6 code. If you’ve ever written XS you will appreciate just how wonderful that interface is!
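
To give a flavor of that last point, here’s roughly the smallest possible NativeCall example, binding libc’s getpid:

    use NativeCall;

    # Declare the C function's signature; with no library named,
    # NativeCall binds against the standard C library.
    sub getpid(--> int32) is native { * }

    say getpid();    # the current process id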

Also, the Perl 6 community has been great to work with, answering all my questions (dumb or not), and even improving an error message within about 10 minutes of my suggestion that it was unclear! Of course, the Perl 5 community is pretty great for the most part too, so that’s nothing all that new (although no one can patch anything in the Perl 5 core in 10 minutes ;).

For a long time, the DateTime::Locale distribution has been rather stale. It is built from the CLDR project data, which came in XML form. And not just any XML, but one of the most painful XML formats I’ve ever experienced. It’s a set of data files with complicated inheritance rules between locales (both implicit and explicit). Any data file can contain references to any other file. There are “alternate” and “variant” forms for various items. It’s complicated.

To make it worse, the format kept changing between releases and breaking my hacktastic tools to read the data. I gave up on dealing with it, thinking that I’d either need to implement a full CLDR XML reader in Perl or link to the libicu C library. The latter might still be useful, but for now there’s an alternative. At YAPC this summer, I was talking about localization with Nova Patch and they told me that there was now a JSON version of the CLDR data!

I took a look and realized this would make things much easier. The JSON data resolves all the crazy aliases and inheritance into a very simple set of files. Each locale’s file contains all of the data you need in one spot. It took me just a few days of work to build a new set of tools to read the files and generate a new DateTime::Locale distro.

I’ve also taken this opportunity to update the code in the distribution. I’ve deprecated some bits of it and sped up the load time for the main module (as well as many locales) quite a bit. While the Changes file has many changes, none of them will affect the vast majority of users. My goal for this release is to make it 100% backwards compatible in terms of the interaction between DateTime and DateTime::Locale. If your code does not use locale objects directly, then you shouldn’t need to change anything.

Of course, much of the locale data has changed, so if your code relies on a specific month or day name in a given language, or a specific format string, that can change (and always could). But the API that DateTime uses should continue to work.

There are a few test failures in the DateTime suite from the new version, but that’s solely due to the DateTime tests themselves making certain assumptions about how locales work. These failures should not be relevant to the vast majority of code.

So with all that said, I’d greatly appreciate some testing. Please install the new trial release (0.93) and test your code with it. Please report any bugs you find. I plan to release a non-trial version (along with a new DateTime to go with it) in a few weeks if no major problems are found.

In a discussion group about animal activism on Facebook, someone recently shared an article titled The Myth of the Ethical Shopper. It’s a really interesting piece about some of the problems with consumer advocacy aimed at encouraging people to buy sweatshop-free products. I highly recommend reading it.

The discussion in the Facebook group was about how this piece might relate to efforts targeting consumers on behalf of animals, but I think the discussion got off on the wrong foot. I’ll try to address that with this essay, which is much longer than is appropriate for a Facebook comment.

First of all, “consumer advocacy” is not a great term to use when discussing contemporary animal advocacy. It’s much too broad. There are a couple different types of animal advocacy that can fall under this heading, and we need to break this down.

But first let’s look at the anti-sweatshop campaigns. We can see that consumers were targeted in two ways. First, they were asked to purchase goods labeled as sweatshop-free. Second, there were also campaigns asking consumers to specifically not purchase goods from companies using sweatshops. This type of consumer boycott campaign was typically done in parallel with asking the companies being boycotted to take some specific action, such as adopting standards for worker treatment that they make suppliers enforce.

The Myth of the Ethical Shopper brings up a number of problems with both of these approaches.

First of all, the supply chains for clothing and other similar goods are quite complex. We have suppliers subcontracting to suppliers who further subcontract who buy thread from one place and cloth from yet another. This situation is constantly changing, and spans many countries and nested levels of subcontracting. It has become effectively impossible for a company like Nike (to pick one) to enforce any sort of labor standards when they don’t even have a direct relationship with much of the supply chain.

Second, there is a strong incentive for suppliers to merely give the appearance of improving standards, rather than improving them.

Third, many people will “choose” to work in a sweatshop even though it’s a terrible place to work, because the alternatives are even worse.

Fourth, a lot of the demand for cheap goods is now coming from developing countries rather than from developed countries, and there are no anti-sweatshop campaigns in these developing countries to target those consumers.

So how closely does this parallel “consumer advocacy” in the animal advocacy movement? Before we answer that, let’s talk about what we mean by “consumer advocacy”.

The campaigns that most closely parallel the anti-sweatshop campaigns are campaigns that target companies selling animal products to enforce standards for their suppliers. Consumers are asked to boycott these companies until the companies make the demanded changes. One notable difference is that there is not usually a corresponding push asking consumers to purchase so-called “humane” products. The vast majority of animal advocacy groups do not promote the consumption of animal products, period, even if they work on incremental campaigns targeting specific abuses.

So how closely do these particular campaigns parallel the anti-sweatshop campaigns? There are definitely some similarities.

There is clearly an incentive for suppliers to give the appearance of improvement while doing as little as possible. We see this with so-called “humane” and “free-range” products already. The improvements in animal well-being that these labels represent are quite minimal, far out of line with the image that producers are trying to sell.

Also as with sweatshop goods, there is a rising demand for animal products in developing countries where these sort of consumer campaigns simply do not exist yet.

But there are also differences. First of all, the supply chains for animal products are simpler. The depth of subcontractor relationships that characterizes clothing production is neither necessary nor feasible for animal products. It’s a bit more reasonable to suggest that an inspecting organization could inspect a representative sample of a producer’s animal facilities, though this would take a very large number of inspectors. This will remain true only as long as animals continue to be farmed in the same countries where the campaigns occur, and it’s possible that these campaigns could primarily serve to push production to countries with worse standards.

Of course, animals definitely do not choose to be used in these ways. In fact, they have no choice at all, from conception to death.

Nonetheless, the parallels that do exist are worth considering, and should prompt deeper questions about the effectiveness of campaigns focused on specific practices, suppliers, or sellers.

But this isn’t the only type of activism that gets lumped under the “consumer advocacy” label in the animal advocacy movement. We also have advocacy that encourages individuals to simply reduce, or ideally eliminate, their consumption of animal products. These campaigns are very different from the campaigns I just discussed.

Reducing demand for animal products will reduce the number of animals being abused by humans. This is basic economics, and the mechanism by which this reduces suffering is infinitely simpler than the one for campaigns targeting specific practices or sellers. You don’t need to tell people to boycott a company, nor do you need to talk to animal product sellers or producers at all. There is no need for inspections to ensure compliance either.

It’s worth noting that this sort of advocacy is not a boycott. We are not asking people to change their behavior in order to punish suppliers and force them to change. We’re asking them to change their lifestyle in order to eliminate the animal abusers entirely.

I don’t think The Myth of the Ethical Shopper speaks to advocacy targeted at reducing animal product consumption in any meaningful way.

It’s always worthwhile to look at other social justice movements for parallels, both in cases where those movements have succeeded and in cases where they haven’t succeeded yet, but at the same time we should be careful of finding parallels where none exist.

The organization formerly known as “autarch-code” is now called “houseabsolute”. I think some folks may not have wanted to transfer a repo to an organization named “autarch-code”. The new name is hopefully a little less “all about Dave”. I also changed the picture, though I really miss the old one, because I thought it was hilarious. I’ve saved it here on this blog for posterity.

Am I insane? No, I’m not. Clearly. This is the product of a perfectly sane mind. Trust me.

If you have a lot of distributions, you may also have a lot of .travis.yml files. When I want to update one file, I often want to update all of them. For example, I recently wanted to add Perl 5.22 to the list of Perls I test with. Doing this by hand is incredibly tedious, so I wrote a somewhat grungy script to do this for me instead. It attempts to preserve customizations present in a given Travis file while also imposing some uniformity. Here’s what it does:

  • Finds all the .travis.yml files under a given directory. I exclude anything where the remote repo doesn’t include my username, since I don’t want to do this sort of blind rewriting with shared projects or repos where I’m not the lead maintainer.
  • Ensures I’m using the right repo for Graham Knop’s fantastic travis-perl helper scripts. These scripts let you test with Perls not supported by Travis directly, including Perl 5.8, dev releases, and even blead, the latest commit in the Perl repo. These helpers used to be under a different repo, and some of my files referred to the old location.
  • If possible, uses --auto mode with these helpers, which works when I don’t need to customize the Travis install or script steps.
  • Makes sure I’m testing with the latest minor version of every Perl from 5.8.8 (special-cased because it’s more common than 5.8.9) to 5.22.0, plus “dev” (the latest dev release) and “blead” (repo HEAD). If the distro has XS, it tests with both threaded and unthreaded Perls; otherwise we can just use the default (unthreaded) build. If the distro is not already testing against 5.8.8, this won’t be added, since some of my distros are 5.10+ only.
  • Adds coverage testing with Perl 5.22 and allows blead tests to fail. There are all sorts of reasons blead might fail that have nothing to do with my code.
  • If possible, sets sudo: false in the Travis config to use Travis’s container-based infrastructure. This is generally faster to run and way faster to start builds. If I’m using containers, I take advantage of the apt addon to install aspell so Test::Spelling can do its thing.
  • Cleans up the generated YAML so the blocks are ordered in the way I like.

Feel free to take this code and customize it for your needs. At some point I may turn this into a real tool, but making it much more generic seems like more work than it’s worth at the moment.

In a discussion on #moose-dev today, ether made the following distinction:

author tests are expected to pass on every commit; release tests only need to pass just before release

I think this is a good distinction. It also means that almost every single “xt” type test you might think of should probably be an author test. The only one we came up with in #moose-dev that was obviously a release test was a test to check that Changes has content for the release.
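
Concretely, for a dzil-built distro that means aiming for a layout something like this (the file names are just examples of what various plugins generate):

    xt/
        author/                    # expected to pass on every commit
            pod-syntax.t
            pod-coverage.t
            test-version.t
        release/                   # only needs to pass just before a release
            changes_has_content.t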

I’m sending PRs to various dzil plugins to move them to author tests, with the goal of being able to safely not run release tests under Travis.