Have you seen my new module, Params::ValidationCompiler? It does pretty much everything that MooseX::Params::Validate and Params::Validate do, but way faster. As such, I don’t plan on using either of those modules in new code, and I’ll be converting over my old code as I get the chance. I’d suggest that you consider doing the same. In my benchmarks, the speed gains are quite significant.

As such, these two modules could use some maintenance love. Please contact me if you’re interested.

I’ve been thinking about DateTime recently and I’ve come to the conclusion that the Perl community would be much better off if there was a DateTime core team maintaining the core DateTime modules. DateTime.pm, the main module, is used by several thousand other CPAN distros, either directly or indirectly. Changes to DateTime.pm (or anything that it in turn relies on) have a huge impact on CPAN.

I’ve been maintaining DateTime.pm, DateTime::Locale, and DateTime::TimeZone as a mostly solo effort for a long time, but that’s not a good thing. The main thing I’d like from other team members is a commitment to review PRs on a regular basis. I think that having some sort of code review on changes I propose would be very helpful. Of course, if you’re willing to respond to bugs, write code, do releases, and so on, that’s even better.

Please comment on this blog post if you’re interested in this. Some things to think about include …

  • What sort of work are you comfortable doing? The work includes code review, responding to bug reports, writing code to fix bugs and/or add features, testing on platforms not supported by Travis, and doing releases.
  • How would you like to communicate about these things? There is an existing datetime@perl.org list, but I generally prefer IRC or Slack for code discussion.
  • Would you prefer to use GH issues instead of RT? (I’m somewhat leaning towards yes, but I’m okay with leaving things in RT too.)

The same request for maintenance help really applies to anything else I maintain that is widely used, including Params::Validate (which I’m no longer planning to use in new code myself) and Log::Dispatch. I’d really love to have more help maintaining all of this stuff.

If you have something to say that you’re not comfortable saying in a comment, feel free to email me.

My employer, MaxMind, is hiring for two engineering positions: a Software Engineer in Test and a Software Engineer. If you’ve always wanted to work with me, here’s your chance. If you’ve always wanted to avoid working with me, now you have the knowledge needed to achieve that goal. It’s a win-win either way!

Note that while these are remote positions, we’re pretty limited in which US states we can hire from (Massachusetts, Minnesota, Montana, North Carolina, and Oregon). All of Canada is fair game. I’m trying to figure out if we can expand the state pool somehow. If you think you’re the awesomest candidate ever, send your resume anyway. That way, if something does change, we have you on our list.

I recently released a new parameter validation module tentatively called Params::CheckCompiler (aka PCC, better name suggestions welcome) (Edit: Now renamed to Params::ValidationCompiler). Unlike Params::Validate (aka PV), this new module generates a highly optimized type checking subroutine for a given set of parameters. If you use a type system capable of generating inlined code, this can be quite fast. Note that all of the type systems supported by PCC (Moose, Type::Tiny, and Specio) allow inlining.
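To give a feel for the API, here’s a minimal sketch using the new Params::ValidationCompiler name and Type::Tiny’s Types::Standard (the parameter names are made up for illustration):

use Params::ValidationCompiler qw( validation_for );
use Types::Standard qw( Int Str );

# The checking subroutine is compiled once, up front ...
my $check = validation_for(
    params => {
        name  => { type => Str },
        count => { type => Int, default => 1 },
    },
);

sub greet {
    # ... and each call then runs the pre-compiled (and, where the type
    # system supports it, inlined) checks.
    my %args = $check->(@_);
    return "$args{name} x $args{count}";
}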

I’ve been working on a branch of DateTime that uses PCC. Parameter validation, especially for constructors, is a significant contributor to slowness in DateTime. The branch, for the curious.

I wrote a simple benchmark to compare the speed of DateTime->new with PCC vs PV:
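Roughly, it’s just the core Benchmark module timing the constructor in a tight loop; something like this sketch (the constructor arguments are made up):

#!/usr/bin/perl
use strict;
use warnings;

use Benchmark qw( timethese );
use DateTime;

# Time 100,000 constructor calls. The argument values do not matter
# much, since the point is to exercise parameter validation.
timethese(
    100_000,
    {
        constructor => sub {
            DateTime->new(
                year   => 2016,
                month  => 4,
                day    => 25,
                hour   => 12,
                minute => 30,
                second => 45,
            );
        },
    },
);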

Running it with master produces:

autarch@houseabsolute:~/projects/DateTime.pm (master $%=)$ perl -Mblib ./bench.pl
Benchmark: timing 100000 iterations of constructor...
constructor:  6 wallclock secs ( 6.11 usr +  0.00 sys =  6.11 CPU) @ 16366.61/s (n=100000)

And with the use-pcc branch:

autarch@houseabsolute:~/projects/DateTime.pm (use-pcc $%=)$ perl -I ../Specio/lib/ -I ../Params-CheckCompiler/lib/ -Mblib ./bench.pl 
Benchmark: timing 100000 iterations of constructor...
constructor:  5 wallclock secs ( 5.34 usr +  0.01 sys =  5.35 CPU) @ 18691.59/s (n=100000)

So we can see that there’s a speedup of about 14%, which is pretty good!

I figured that this speedup would be reflected in the run time of the entire test suite, so I timed the suite on both branches. But I was wrong. The use-pcc branch took about 15s to run versus 11s for master! What was going on?

After some profiling, I finally realized that while using PCC with Specio sped up run time noticeably, it also added an additional compile time hit. It’s Moose all over again, though not nearly as bad.

For further comparison, I used the yath test harness script from the Test2::Harness release and told it to preload DateTime. With preloading, the test suite runs slightly faster on the use-pcc branch, by about 4% or so.

So where does that leave things?

One thing I’m completely sure of is that if you’re using MooseX::Params::Validate (aka MXPV), then switching to Params::CheckCompiler is going to be a huge win. This was my original use case for PCC, since some profiling at work showed MXPV as a hot spot in some code paths. I have some benchmarks comparing MXPV and PCC that show PCC as roughly thirty times faster; I’ll post them here some time.

Switching from PV to PCC is less obvious. If your module is already using a type system for its constructor, then there are no extra dependencies, so the small compile time hit may be worth it.

In the case of DateTime, adding PCC alone adds a number of dependencies and Specio adds a few more to that. “Why use Specio over Type::Tiny?”, you may wonder. Well, Type::Tiny doesn’t support overloading for core types, for one thing. I noticed some DateTime tests checking that you can use an object which overloads numification in some cases. I don’t remember why I added that, but I suspect it was to address some bug or make sure that DateTime played nice with some other module. I don’t want to break that, and I don’t want to build my own parallel set of Type::Tiny types with overloading support. Plus I really don’t like the design of Type::Tiny, which emulates Moose’s design. But that’s a blog post for another day.

If you’re still reading, I’d appreciate your thoughts on this. Is the extra runtime speed worth the compile time hit and extra dependencies? I’ve been working on reducing the number of deps for Specio and PCC, but I’m not quite willing to go the zero deps route of Type::Tiny yet. That would basically mean copying in several CPAN modules to the Specio distro, which is more or less what Type::Tiny did with Eval::Closure.

I’d also have to either remove 5.8.x support from DateTime or make Specio and PCC support 5.8. The former is tempting but the latter probably isn’t too terribly hard. Patches welcome, of course ;)

If I do decide to move forward with DateTime+PCC, I’ll go slow and do some trial releases of DateTime first, as well as doing dependent module testing for DateTime so as to do my best to avoid breaking things. Please don’t panic. Flames and panic in the comment section will be deleted.

Edit: Also on the Perl subreddit.

It’s not too late to sign up for my Introduction to Moose class at YAPC::NA 2016. This year’s class will take place on Thursday, June 23. I’m excited to be doing this course again. It’s gotten great reviews from past students. Sign up today.

There are lots of other great courses. For the first time ever, I’m also going to be a student. I’m looking forward to attending Damian Conway’s Presentation Aikido course on Friday, June 24.

My Introduction to Moose class is back at YAPC::NA 2016. This year’s class will take place on Thursday, June 23. I’m excited to be doing this course again. It’s gotten great reviews from past students. Sign up today.

And of course, there are tons of other great offerings this year too, including several from the legendary Damian Conway! I already signed up for his Presentation Aikido course on Friday, June 24.

What sort of things can you learn when interviewing someone for a technical position? What questions are useful?

This is a much-discussed and sometimes hotly debated topic in the tech world. I’ve done a fair bit of interviewing for my employer over the past few years. We’ve built an excellent technical team, either because or in spite of the interviews I’ve done.

Here’s my unsubstantiated theory about unstructured interviews and what they’re good for. (My personal opinion, not my employer’s!)

First of all, I know all about the research that says that unstructured interviews don’t predict performance for technical positions. I agree 100%. This is why we give candidates some sort of technical homework before scheduling an interview. We expect this to take a few (2-3) hours at most. We review this homework before we decide whether or not to continue with the interview process. I weight it fairly heavily in the process, and I’ve rejected candidates based solely on their homework submission.

But the unstructured interview is still important. Here are some of the things I think I can learn from the interview, and some of the questions I use to figure those things out.

Does the candidate actually want this particular job? Enthusiasm matters. I’m not looking for a cheerleader, but I also don’t want someone who’d be happy with absolutely any job. One question I might ask to get a sense of this would be “What appeals to you about this position?” I can also get a sense of this based on the questions that the candidate asks me.

Is this position a good fit for this particular candidate? I want to make sure that the candidate has a clear understanding of the position, specifically their work duties, time requirements, expectations, etc. Some questions along these lines would be “What are the most important things for you in a position?” and “What do you need from the rest of the company in order to do your best work?” If someone says that the most important thing is working on mobile apps written in Haskell and we don’t do that (because no one does), then that’s a good reason not to hire them!

Can they telecommute and/or work with telecommuters effectively? Most of our team is remote, so even if a team member works at the office, they are effectively telecommuting. I just want to be sure that they either have experience with telecommuting or some idea of what this entails. If they haven’t done it before, do they have a work space that they can use? Do they have a plan for dealing with the challenges of working at home?

Can the candidate communicate effectively? Are they pleasant to talk to? Some people are not good communicators. Sometimes two people just don’t mesh well and rub each other the wrong way. Maybe I interview someone and just don’t enjoy talking to them. This doesn’t mean they’re a bad person or bad at their job, but it does mean that we shouldn’t work together.

Can they communicate effectively about technical topics? One question I’ve asked people is simply “What is OO?” There are many right answers to this question, but the real goal is to make sure that they can communicate about a technical topic clearly. If someone doesn’t know any of the terminology around OO (“class”, “instance”, “attribute/field/slot”, etc.) then it’s going to be hard to provide code review on OO code. Note that some people can write code well and still not be able to communicate about it.

Can they communicate effectively with non-technical people? For most positions I’ve hired for, our expectation is that the person being hired will be working not just with the engineering team, but also with a product manager, sales and marketing, support, etc. I want to make sure that the candidate can communicate with these people. We do a short role-playing exercise where one of the interviewers pretends to be a non-technical customer asking them to build a specific product. Then we have them ask the interviewer questions to get a sense of the product requirements, constraints, etc.

Do they care about technical stuff? I want people who actually have opinions about doing their job well. I may ask them what tools they like and dislike, what they’d change about the tools they’ve used, etc. On the flip side, I don’t want someone who’s dogmatically opinionated either. (Or at the least, no more dogmatically opinionated than I am.)

Do they ask good questions? I expect candidates to come to the interview with questions of their own. If they don’t ask any, that’s a red flag that makes me wonder if they don’t care about their work environment, work process, etc. This does not make for an engaged coworker.

There are also things I don’t look for in an interview.

Cultural fit. What is this? I have no idea. It’s way too broad and an easy excuse to simply reject people for not being enough like me.

Code-writing skill. That was already covered by the homework. I never ask specific technical questions unless I have a good reason to believe that the candidate knows about the topic. I might think that based on something specific in their cover letter or resume, or better yet based on their homework or FOSS work.

Will they work out long term? You can’t really answer this confidently from a short interview. What you can do is screen for obvious red flags that indicate that this person will not work well with the team you have, or that they will not like this position. In the latter case, I hope that this is a mutual decision. In my own job searches, I’ve had interviews where I came out knowing I didn’t want the job, and I consider those interviews quite successful!

Will any of this guarantee that I will always find the best people? No, obviously not. Hiring is a difficult thing to do. But if you consider the goals of the interview carefully, you can make the best use of that time to improve your chances of finding the right people.

I’ve been on vacation for the past week, and I decided to take a look at using Test2 to reimplement the core of Test::Class::Moose.

Test::Class::Moose (TCM) lets you write tests in the form of Moose classes. Your classes are constructed and run by the TCM test runner. For each class, we construct instances of the class and then run the test_* methods provided by each instance. We run the class itself in a subtest, as well as each method. This leads to a lot of nested subtests (I’ll tell you why this matters later). Here’s an example TCM class:
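Something like this minimal sketch (the class and method names are made up):

package TestFor::MyApp::User;

# Importing Test::Class::Moose turns this package into a Moose class
# and gives it the usual test functions (ok, is, etc.).
use Test::Class::Moose;

sub test_startup {
    my $self = shift;

    # Runs once per class, before any of the test methods.
}

sub test_login {
    my $self = shift;

    ok( 1, 'a user can log in' );
}

sub test_logout {
    my $self = shift;

    ok( 1, 'a user can log out' );
}

1;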

Currently, TCM is implemented on top of the existing Perl test stack consisting of Test::Builder, Test::More, etc.

The fundamental problem with the existing test stack is that it is not abstract enough. The test stack is all about producing TAP (the Test Anything Protocol). This is the text-based format (mostly line-oriented) that you see when you run prove -v. It’s possible to produce other types of output or capture the test output to examine it, but it’s not nearly as easy as it should be.

TAP is great for end users. It’s easy to read, and when tests fail it’s usually easy to see what happened. But it’s not so great for machines. The line-oriented protocol isn’t great for things like expressing a complex data structure, and the output format simply doesn’t allow you to express certain distinctions (diagnostics versus error messages, for example). Even worse is how the current TAP ecosystem handles subtests, which can be summarized as “it doesn’t handle subtests at all”. Here’s an example program:
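Something along these lines will do it: one real assertion inside the subtest, plus a couple of raw TAP-looking lines printed directly, bypassing Test::Builder entirely (the leading spaces just mimic subtest indentation):

#!/usr/bin/perl
use strict;
use warnings;

use Test::More;

# Flush raw prints immediately so they interleave with the test output.
$| = 1;

ok( 1, 'test 1' );

subtest 'this gets weird' => sub {
    ok( 1, 'subtest 1' );

    # These lines never go through Test::Builder, so the subtest is
    # still summarized as passing even though they look like failures.
    print "    not ok 2\n";
    print "    not ok 3\n";
};

ok( 1, 'test 3' );

done_testing();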

If we run this via prove -v we get this:

~$ prove -v ./test.t 
./test.t .. 
ok 1 - test 1 
# Subtest: this gets weird
    ok 1 - subtest 1
    not ok 2
    not ok 3
    1..1
ok 2 - this gets weird
ok 3 - test 3
1..3
ok
All tests successful.
Files=1, Tests=3,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.03 cusr  0.00 csys =  0.05 CPU)
Result: PASS

What happened there? Well, the TAP ecosystem more or less ignores the contents of a subtest. Any line starting with a space is treated as “unknown text”. What Test::Builder does is keep track of the subtest’s pass/fail status in order to print a test result at the next level up the stack summarizing the subtest. That’s the ok 2 - this gets weird line up above. Because it’s not actually parsing the contents of the subtest, it doesn’t see that the test count is wrong or that some tests have failed.

In practice, this won’t affect most code. As long as all your tests are emitted via Test::Builder you’re good to go. It does make life much harder for tools that want to actually look at the contents of subtests, in particular tools that want to emit a non-TAP format.

The core test stack tooling around concurrency is also fairly primitive. The test harness supports concurrency at the process level. It can fork off multiple test processes, track their TAP output separately, and generate a summary of the results. However, you cannot easily fork from inside a test process and emit concurrent TAP.

This concurrency issue really bit Test::Class::Moose. Unlike traditional Perl test suites, with TCM you normally run all of your tests starting from a single whatever.t file. That file contains just a few lines of code to create a TCM runner. The runner loads all of your test classes and executes them. Here’s an example:
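Such a .t file is typically just a few lines; a sketch, assuming the test classes live under t/lib:

#!/usr/bin/perl
use strict;
use warnings;

# Load every test class under t/lib, then run them all.
use Test::Class::Moose::Load 't/lib';
use Test::Class::Moose::Runner;

Test::Class::Moose::Runner->new->runtests;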

Ovid is a smart guy. He realized that once you have enough test classes, you’d really want to be able to run them concurrently. So he wrote TAP::Stream. This module lets you combine multiple streams of subtest-level TAP into a single top-level TAP stream.

This is completely and utterly insane! This is not Ovid’s fault. He was doing the best he could with the tools that existed. But it’s terribly fragile, and it’s way more work than it should be. It also made it incredibly difficult to provide feature parity between the parallel and sequential TCM test execution code. The parallel code has always been a bit broken, and there was a lot of nearly duplicated code between the two execution paths.

Enter Test2, which is Chad Granum’s (aka Exodist) project to implement a proper event-level abstraction on top of all the test infrastructure. With Test2, our fundamental layer is a stream of events, not TAP. An event is a test result, a diagnostic, a subtest, etc. Subtests are proper first-class events which can in turn contain other events.

Working at this level makes writing TCM much easier. There’s still some trickiness involved in starting a subtest in one process but executing its contents in another, but the amount of duplicated code is greatly reduced, and it’s much easier to achieve feature parity between the parallel and sequential paths.

As a huge, huge bonus, testing a tool built on top of Test2 is a pleasure instead of a chore. The sad truth about TCM is that it was never as well tested as it should have been. The tools for testing code written with Test::Builder are primitive at best, and because subtest contents are ignored at the TAP level, those tools were nearly useless for TCM.

With Test2 we can capture and examine the event stream of a test run in incredible detail. This lets me write very detailed tests for the behavior of TCM in all sorts of success and failure scenarios, which is fantastically useful. Here’s a snippet of what this looks like:
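A hypothetical helper in the same spirit (check_event_stream and its $check argument are invented names, not the actual test_events_is):

use Test2::API qw( context intercept );

# Run some test code, capture its Test2 events instead of sending them
# to the harness, and surface any exception events as diagnostics
# before comparing the stream against expectations.
sub check_event_stream {
    my ( $code, $check, $name ) = @_;

    my $events = intercept { $code->() };

    my $ctx = context();

    for my $event ( @{$events} ) {
        next unless $event->isa('Test2::Event::Exception');
        $ctx->diag( 'Exception in event stream: ' . $event->error );
    }

    # $check is whatever comparison you want to run over the raw events.
    my $ok = $check->($events);
    $ctx->ok( $ok, $name );
    $ctx->release;

    return $ok;
}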

The test_events_is sub is a helper I wrote using the Test2 tools. All it does is add some useful diagnostic output if the event stream from running TCM contains Test2::Event::Exception events. And the diagnostics from Test2 are simply beautiful:

It’s a lot to read, but it’s incredibly detailed and makes understanding why a test failed much easier than it is with the current test stack.

Chad is currently working on finishing up Test2 and making sure that it’s stable and backwards-compatible enough to replace the existing test suite stack. Once Test::More, Test::Builder, and friends are all running on top of Test2, it will make it much easier to write new test tools that integrate with this infrastructure.

The future of testing in Perl 5 is looking bright! And Perl 6 isn’t being left behind. I’ve been working on a similar project in Perl 6 with the current placeholder name of Test::Stream. This is a little easier than the Perl 5 effort since there’s no large body of test tools with which I need to ensure backwards compatibility. I want Perl 6 to have the same excellent level of test infrastructure that Perl 5 is going to be enjoying soon.

The 1.22 trial release includes some small backwards-incompatible changes in how DateTime->from_epoch handles floating point epoch values. Basically, these values are now rounded to the nearest microsecond (millionth of a second). This release also fixes a straight-up bug in the handling of negative floating point epochs, where such values were incremented by a full second.
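A small illustration of the new behavior (the epoch value is made up):

use DateTime;

# From 1.22 on, the fractional part of a floating point epoch is
# rounded to the nearest microsecond.
my $dt = DateTime->from_epoch( epoch => 1461975789.123456 );

printf "%d seconds and %d microseconds\n", $dt->epoch, $dt->microsecond;
# 1461975789 seconds and 123456 microseconds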

I’ve tested many downstream DateTime dependencies in the DateTime::* namespace. The only thing that broke was DateTime::Format::Strptime, for which I will release a backwards compatible fix shortly.

If no one tells me that DateTime 1.22 breaks their code I will release a non-trial version on or after Sunday the 28th.