This post got a lot of discussion on Hacker News that you might find interesting.

I’ve been writing a fair bit of Perl 6 lately, and my main takeaway so far is that Perl 6 is fun.

Pretty much everything I love in Perl 5 is still part of Perl 6, but almost everything I hate is gone too.

Here are some of the things that I’ve been having fun with in Perl 6 …

Built-In OO and Types

I really love that I can write this in native Perl 6:
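Something like this, for example, with typed, required attributes (the class here is my own small illustration, not the post’s original example):

```raku
# A class with built-in typed, read-only, required attributes.
class Point {
    has Int $.x is required;
    has Int $.y is required;

    method magnitude returns Num {
        sqrt( $!x ** 2 + $!y ** 2 );
    }
}

my $point = Point.new( x => 3, y => 4 );
say $point.magnitude;    # 5
```

Passing a non-Int value for x or y fails with a type-check error at construction time, and none of this needs a single module loaded.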

Of course, you can already do pretty much the same thing with Moose in Perl 5, except now I don’t have to debate Moose vs Moo vs Moops vs STOP MAKING SO DARN MANY “M” MODULES!

Roles work just as well with a simple role Foo { ... } declaration.

Multiple Dispatch

If you’ve ever written an API for parsing a text format as a stream of events in Perl 5, you’ve probably ended up with something like this:
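For instance, a listener class with one handler method per event type (a sketch; the class and method names are my illustration):

```perl
package My::Listener;

use strict;
use warnings;

sub new { return bless {}, shift }

# One hand-written handler per event type.
sub handle_start_element {
    my ( $self, $name ) = @_;
    print "start: $name\n";
}

sub handle_end_element {
    my ( $self, $name ) = @_;
    print "end: $name\n";
}

sub handle_text {
    my ( $self, $text ) = @_;
    print "text: $text\n";
}

1;
```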

And of course to dispatch it you write something like this:
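Roughly this, for example (again a sketch, assuming a listener that provides one handle_* method per event type):

```perl
package My::Parser;

use strict;
use warnings;

sub new {
    my ( $class, %args ) = @_;
    return bless { listener => $args{listener} }, $class;
}

sub _dispatch_event {
    my ( $self, $event, @args ) = @_;

    # Turn the event type into a method name and call it, but only if
    # the listener actually implements that method.
    my $method   = 'handle_' . $event;
    my $listener = $self->{listener};
    $listener->$method(@args) if $listener->can($method);

    return;
}

1;
```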

That’s not terrible, but it’s so much more elegant with multiple dispatch in Perl 6. Here’s our listener with multiple dispatch:
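It might look like this (a sketch with illustrative event names), with one multi method per event type instead of a hand-rolled dispatcher:

```raku
class My::Listener {
    # The literal string in each signature is a value constraint; the
    # runtime picks the right candidate for us.
    multi method handle-event ( 'start-element', Str $name ) {
        say "start: $name";
    }
    multi method handle-event ( 'end-element', Str $name ) {
        say "end: $name";
    }
    multi method handle-event ( 'text', Str $text ) {
        say "text: $text";
    }
}

My::Listener.new.handle-event( 'text', 'hello' );    # prints "text: hello"
```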

given/when and smartmatching

There was an attempt to put this in Perl 5, but it never worked out because this feature really needs a solid type system and ubiquitous OO to work properly.
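In Perl 6, smartmatching a value against a type is exactly what makes given/when shine (a small sketch of my own):

```raku
# Dispatch on the type of a value with given/when. Each when clause
# smartmatches $thing against a type object.
sub describe ($thing) {
    do given $thing {
        when Int     { 'an integer' }
        when Str     { 'a string' }
        when Numeric { 'some other number' }
        default      { 'something else' }
    }
}

say describe(42);       # an integer
say describe('foo');    # a string
say describe(1.5);      # some other number
```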

Smartmatching also dovetails nicely with Perl 6’s junctions:
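For example (a sketch):

```raku
my $answer = 42;

# any() builds a junction; the smartmatch is true if the value matches
# any member of the junction.
if $answer ~~ any( 1, 42, 137 ) {
    say 'found it';
}

# The | infix builds an any() junction inline:
say so $answer == 1 | 42 | 137;    # True
```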

Easy Threading

Spinning off a few threads to do work in parallel is pretty easy. Just make a Supply and call its throttle method. Tell it the maximum number of threads to use and give it a Routine to do the work.
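Here’s a sketch of the pattern (process-file, Progress, and the file list are my stand-ins, not the original code):

```raku
# A hypothetical progress reporter, created only when wanted.
class Progress {
    has $.count is rw = 0;
    method update ($n) { $!count = $n }
}

sub process-file ($file) { $file.chars }

my @files   = <one.txt two.txt three.txt>;
my $verbose = True;
my $prog    = $verbose ?? Progress.new !! Any;

my $i = 0;
my $work = Supply.from-list(@files).throttle(
    4,                         # at most four workers at a time
    -> $file {
        process-file($file);
        $prog.?update(++$i);   # a no-op if $prog was never created
    },
);
$work.wait;    # block until every file has been handled
```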

So Many Little Things

Did you catch that $prog.?update($i) call in the method above? If $prog has the method I’m looking for, the method is called, otherwise it does nothing. If the object wasn’t created, then $prog is an Any object, which doesn’t have an update method.

And I haven’t even had a chance to use features like grammars, built-in set operations, or a native call interface that lets you define the mapping between Perl 6 and C with some trivial Perl 6 code. If you’ve ever written XS you will appreciate just how wonderful that interface is!

Also, the Perl 6 community has been great to work with, answering all my questions (dumb or not), and even improving an error message within about 10 minutes of my suggestion that it was unclear! Of course, the Perl 5 community is pretty great for the most part too, so that’s nothing all that new (although no one can patch anything in the Perl 5 core in 10 minutes ;).

For a long time, the DateTime::Locale distribution has been rather stale. It is built from the CLDR project data, which came in XML form. And not just any XML, but one of the most painful XML formats I’ve ever experienced. It’s a set of data files with complicated inheritance rules between locales (both implicit and explicit). Any data file can contain references to any other file. There are “alternate” and “variant” forms for various items. It’s complicated.

To make it worse, the format kept changing between releases and breaking my hacktastic tools to read the data. I gave up on dealing with it, thinking that I’d either need to implement a full CLDR XML reader in Perl or link to the libicu C library. The latter might still be useful, but for now there’s an alternative. At YAPC this summer, I was talking about localization with Nova Patch and they told me that there was now a JSON version of the CLDR data!

I took a look and realized this would make things much easier. The JSON data resolves all the crazy aliases and inheritance into a very simple set of files. Each locale’s file contains all of the data you need in one spot. It took me just a few days of work to build a new set of tools to read the files and generate a new DateTime::Locale distro.

I’ve also taken this opportunity to update the code in the distribution. I’ve deprecated some bits of it and sped up the load time for the main module (as well as many locales) quite a bit. While the Changes file has many changes, none of them will affect the vast majority of users. My goal for this release is to make it 100% backwards compatible in terms of the interaction between DateTime and DateTime::Locale. If your code does not use locale objects directly, then you shouldn’t need to change anything.

Of course, much of the locale data has changed, so if your code relies on a specific month or day name in a given language, or a specific format string, that can change (and always could). But the API that DateTime uses should continue to work.

There are a few test failures in the DateTime suite from the new version, but that’s solely due to the DateTime tests themselves making certain assumptions about how locales work. These failures should not be relevant to the vast majority of code.

So with all that said, I’d greatly appreciate some testing. Please install the new trial release (0.93) and test your code with it. Please report any bugs you find. I plan to release a non-trial version (along with a new DateTime to go with it) in a few weeks if no major problems are found.

In a discussion group about animal activism on Facebook, someone recently shared an article titled The Myth of the Ethical Shopper. It’s a really interesting piece about some of the problems with consumer advocacy aimed at encouraging people to buy sweatshop-free products. I highly recommend reading it.

The discussion in the Facebook group was about how this piece might relate to efforts targeting consumers on behalf of animals, but I think the discussion got off on the wrong foot. I’ll try to address that with this essay, which is much longer than is appropriate for a Facebook comment.

First of all, “consumer advocacy” is not a great term to use when discussing contemporary animal advocacy. It’s much too broad. There are a couple different types of animal advocacy that can fall under this heading, and we need to break this down.

But first let’s look at the anti-sweatshop campaigns. We can see that consumers were targeted in two ways. First, they were asked to purchase goods labeled as sweatshop-free. Second, there were also campaigns asking consumers to specifically not purchase goods from companies using sweatshops. This type of consumer boycott campaign was typically done in parallel with asking the companies being boycotted to take some specific action, such as adopting standards for worker treatment that they make suppliers enforce.

The Myth of the Ethical Shopper brings up a number of problems with both of these approaches.

First of all, the supply chains for clothing and other similar goods are quite complex. We have suppliers subcontracting to suppliers who further subcontract who buy thread from one place and cloth from yet another. This situation is constantly changing, and spans many countries and nested levels of subcontracting. It has become effectively impossible for a company like Nike (to pick one) to enforce any sort of labor standards when they don’t even have a direct relationship with much of the supply chain.

Second, there is a strong incentive for suppliers to merely give the appearance of improving standards, rather than improving them.

Third, many people will “choose” to work in a sweatshop even though it’s a terrible place to work, because the alternatives are even worse.

Fourth, a lot of the demand for cheap goods is now coming from developing countries rather than from developed countries, and there are no anti-sweatshop campaigns in these developing countries to target those consumers.

So how closely does this parallel “consumer advocacy” in the animal advocacy movement? Before we answer that, let’s talk about what we mean by “consumer advocacy”.

The campaigns that most closely parallel the anti-sweatshop campaigns are campaigns that target companies selling animal products to enforce standards for their suppliers. Consumers are asked to boycott these companies until the companies make the demanded changes. One notable difference is that there is not usually a corresponding push asking consumers to purchase so-called “humane” products. The vast majority of animal advocacy groups do not promote the consumption of animal products, period, even if they work on incremental campaigns targeting specific abuses.

So how closely do these particular campaigns parallel the anti-sweatshop campaigns? There are definitely some similarities.

There is clearly an incentive for suppliers to give the appearance of improvement while doing as little as possible. We see this with so-called “humane” and “free-range” products already. The improvements that these labels represent to animal well-being are quite minimal, way out of line with the image producers are trying to sell.

Also as with sweatshop goods, there is a rising demand for animal products in developing countries where these sorts of consumer campaigns simply do not exist yet.

But there are also differences. First of all, the supply chains for animal products are simpler. The deep chains of subcontractor relationships that characterize clothing production are neither necessary nor feasible for animal products. It’s a bit more reasonable to suggest that an inspecting organization could inspect a representative sample of a producer’s animal facilities, though this would take a very large number of inspectors. This will remain true only as long as animals continue to be farmed in the same countries as the campaigns occur in, and it’s possible that these campaigns could primarily serve to push production to countries with worse standards.

Of course, animals definitely do not choose to be used in these ways. In fact, they have no choice at all, from conception to death.

Nonetheless, the parallels that do exist are worth considering, and should prompt deeper questions about the effectiveness of campaigns focused on specific practices, suppliers, or sellers.

But this isn’t the only type of activism that gets lumped under the “consumer advocacy” label in the animal advocacy movement. We also have advocacy that encourages individuals to simply reduce, or ideally eliminate, their consumption of animal products. These campaigns are very different from the campaigns I just discussed.

Reducing demand for animal products will reduce the number of animals being abused by humans. This is basic economics, and the mechanism by which this reduces suffering is infinitely simpler than the one for campaigns targeting specific practices or sellers. You don’t need to tell people to boycott a company, nor do you need to talk to animal product sellers or producers at all. There is no need for inspections to ensure compliance either.

It’s worth noting that this sort of advocacy is not a boycott. We are not asking people to change their behavior in order to punish suppliers and force them to change. We’re asking them to change their lifestyle in order to eliminate the animal abusers entirely.

I don’t think The Myth of the Ethical Shopper speaks to advocacy targeted at reducing animal product consumption in any meaningful way.

It’s always worthwhile to look at other social justice movements for parallels, both in cases where those movements have succeeded and in cases where they haven’t succeeded yet, but at the same time we should be careful of finding parallels where none exist.

The organization formerly known as “autarch-code” is now called “houseabsolute”. I think some folks may not have wanted to transfer a repo to an organization named “autarch-code”. The new name is hopefully a little less “all about Dave”. I also changed the picture, though I really miss the old one, because I thought it was hilarious. I’ve saved it here on this blog for posterity.

Am I insane? No, I’m not. Clearly. This is the product of a perfectly sane mind. Trust me.

If you have a lot of distributions, you may also have a lot of .travis.yml files. When I want to update one file, I often want to update all of them. For example, I recently wanted to add Perl 5.22 to the list of Perls I test with. Doing this by hand is incredibly tedious, so I wrote a somewhat grungy script to do this for me instead. It attempts to preserve customizations present in a given Travis file while also imposing some uniformity. Here’s what it does:

  • Finds all the .travis.yml files under a given directory. I exclude anything where the remote repo doesn’t include my username, since I don’t want to do this sort of blind rewriting with shared projects or repos where I’m not the lead maintainer.
  • Ensures I’m using the right repo for Graham Knop’s fantastic travis-perl helper scripts. These scripts let you test with Perls not supported by Travis directly, including Perl 5.8, dev releases, and even blead, the latest commit in the Perl repo. These helpers used to be under a different repo, and some of my files referred to the old location.
  • If possible, uses --auto mode with these helpers, which works when I don’t need to customize the Travis install or script steps.
  • Makes sure I’m testing with the latest minor version of every Perl from 5.8.8 (special-cased because it’s more common than 5.8.9) to 5.22.0, plus “dev” (the latest dev release) and “blead” (repo HEAD). If the distro has XS, it tests with both threaded and unthreaded Perls; otherwise we can just use the default (unthreaded) build. If the distro is not already testing against 5.8.8, this won’t be added, since some of my distros are 5.10+ only.
  • Adds coverage testing with Perl 5.22 and allows blead tests to fail. There are all sorts of reasons blead might fail that have nothing to do with my code.
  • If possible, sets sudo: false in the Travis config to use Travis’s container-based infrastructure. This is generally faster to run and way faster to start builds. If I’m using containers, the script takes advantage of the apt addon to install aspell so Test::Spelling can do its thing.
  • Cleans up the generated YAML so the blocks are ordered the way I like.

Feel free to take this code and customize it for your needs. At some point I may turn this into a real tool, but making it much more generic seems like more work than it’s worth at the moment.
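For reference, the container-related bits that end up in each file look roughly like this (a sketch of a .travis.yml fragment, not the script’s exact output; the dictionary package name is illustrative):

```yaml
sudo: false
addons:
  apt:
    packages:
      - aspell
      - aspell-en
```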

In a discussion on #moose-dev today, ether made the following distinction:

author tests are expected to pass on every commit; release tests only need to pass just before release

I think this is a good distinction. It also means that almost every single “xt” type test you might think of should probably be an author test. The only one we came up with in #moose-dev that was obviously a release test was a test to check that Changes has content for the release.

I’m sending PRs to various dzil plugins to move them to author tests, with the goal of being able to safely not run release tests under Travis.

During my Introduction to Go class last Thursday at YAPC::NA::2015, one of the class attendees, David Adler, asked a question along the lines of “why use Go?” That’s a good question, so here is my answer.

Let’s start by first talking about why we use Perl (or Ruby, Python, PHP, JS, etc.). Why use a dynamic language? There are a lot of reasons, but the basic answer is that these languages make it easy to get a system up and running quickly. They are easy to write, include a lot of useful features (regexes, IO, core libraries, etc.), they eliminate large classes of bugs, and generally get out of your way when coding. These languages perform well enough for many tasks, and so the fact that they are not as fast or memory efficient as they could be is not a concern.

But of course, sometimes speed and memory usage are a concern. I suspect that many dynamic language users reach for C or C++ when they need to optimize something. Here’s why …

In Perl, a basic scalar value is represented by a C struct called an SV (see perlguts for gory details). A quick check with Devel::Size tells me that a scalar containing the number 1 uses 24 bytes of memory on my system. A 3-byte string uses 42 bytes of memory. In a language like C, those values can use as little as 1 and 3 bytes respectively.

This isn’t an issue when dealing with hundreds or thousands of such values. The Perl program uses 24 times as many bytes for each integer, but when you’re just dealing with 5,000 integers, this only adds up to 120KiB vs 5KiB. However, once you start dealing with millions of values (or more), this can become a problem. The program has to allocate memory, usually doing many small allocations. What’s worse is that operations on these values are slower. Integer math in Perl goes through many more steps than in C. Again, for a small number of operations this isn’t a problem, but for millions or billions of operations, the cost becomes significant.

Of course, C and C++ have their own issues, including the difficulty of managing memory, the potential security holes, the segfaults, the double frees, and lots of other fun.

Enter Go. Go gives you a statically compiled language with the potential for carefully managing memory usage while also protecting you from the memory management bugs (and security holes) that C and C++ allow for.

So why use Go? I think that Go is a compelling option for any task that you’d do in C or C++ instead of a dynamic language. Go is fast to run, relatively easy to write, and comes with a pretty good set of core libraries. It gives you many of the niceties of a dynamic language while still offering memory efficiency and high speed.

As a huge plus, Go compiles down to static binaries that are incredibly easy to deploy. This will make your sysadmins or devops folks quite happy.

Of course, Go doesn’t replace C or C++ for all tasks. It’s a garbage collected language, which means that if you need complete control over memory allocation and freeing, it won’t cut it. I don’t expect to see an OS in Go any time soon.

Also, the language itself is missing out on some features that might be appealing for some systems. The example I often use is a database server. I would much rather try to write such a thing in a language like Rust than Go. Rust seems to combine low level optimizability with some nice high level features like generics and traits. If I were writing something complex like a database server (or a browser) I think I’d want those features. But Go is great for things like web application servers, command line tools, and anything else that isn’t a huge complicated system.

(And yes, I know there are people writing database servers in Go. I’m just saying that Go probably wouldn’t be my first choice for such a tool.)

I’ll be offering my new Introduction to Go class here in Minneapolis on Saturday, May 30. The cost is just $30 and I have 15 spots available.

I’m offering this class at such a low cost because I want to get some feedback on it before I give it at YAPC::NA::2015. If this goes well, I plan to give this class in Minneapolis again, but I’ll be charging more, so now’s your chance to take the class for as cheaply as it’ll ever be offered!

Ok, that alliteration is a stretch, but it’s the best I could do.

This blog post is a public announcement to say that my tuits for CPAN-related work will be in very short supply until after YAPC. I’m basically devoting all of my FOSS programming time to creating the slides and exercises for my Introduction to Go YAPC master class. As you might imagine, creating a one day class is a lot of work. My goal is to finish a teachable draft by May 29 so I can give the class here in Minneapolis on May 30 as a test run. If you’re interested in taking the class then, stay tuned to this blog for details.

This year at YAPC I’ll be giving two master classes. Why am I doing this? I don’t know, I think I may be insane. But that aside, here’s some info about said classes.

My first class is Introduction to Moose. I’ve been giving this class for a number of years, and it’s always been well-received. The class will take place on Sunday, June 7, the day before the conference proper begins. The cost of the class is a mere $175 for a full day! The format of the class consists of alternating lecture and exercise blocks, so you’ll be writing a lot of code over the course of the day. The class is aimed at intermediate Perl programmers with a basic understanding of OO who want to learn more about Moose.

Here’s what one past student said about the class:

Great class. I especially liked your problem sets. You gave out problems you expected your class to actually solve, and you allowed class time for solving them. This should be a basic expectation for any class, but it’s amazing how often teachers don’t do this.

You can find more details about the class content on the master class page on the YAPC::NA::2015 site.

The second class is Introduction to Go. This is a new class for me, and I’m excited to offer it. This class will take place on Thursday, June 11, the day after the conference proper ends. This class is also $175. Like the Moose class, the format is alternating lecture and exercise blocks, so you’ll get hands-on experience writing Go code. This class is aimed at people who already know one programming language and want to learn Go.

You can register for these classes, as well as several other excellent master classes from other instructors, by going to the YAPC::NA::2015 purchasing page. The class size for both classes is limited to about 15 people (I don’t remember the exact limit), so register now to reserve your spot.