Somehow, people seem to keep breaking into my Netflix account. Calling Netflix achieves little. Their go-to answer is to have me change my password and sign out of all devices. In theory, this should keep hackers out. I’ve done this a number of times to no avail. Last night I changed the email address associated with the account, as well as the password, and they’re back in tonight.

Edit: Someone on HackerNews asked how I know that the account was hacked. There are only two people in my household, my wife and I, and we each have a profile on our Netflix account. I have never shared the password with anyone. I see activity on my profile of things that neither my wife nor I watched. Netflix also now shows you the devices that have been used with your account. I see devices from unknown IPs around the world.

Let me first dismiss some other possibilities before settling on Netflix itself having a problem.

Was my email account hacked? If the account (or the server hosting it) had been hacked, the attacker would still have needed to use it to reset my Netflix password, and that reset would have changed the password out from under me, which hasn’t happened. So that’s ruled out.

Was my desktop computer from which I changed the password hacked? Possibly, but if so, these are the world’s most unambitious hackers. They haven’t bothered stealing any other account login info, including things like my Amazon info or credit cards stored in Chrome. If someone had hacked my desktop I’d have much bigger problems than someone using my Netflix account!

Edit: How do I know for sure my desktop wasn’t hacked? I haven’t done a forensic investigation, but it seems unlikely. I’m running an up-to-date Ubuntu machine and I use Chrome as my browser. I also have a reasonably sane firewall in place, fail2ban, and other security thingamabobs. It’s not impossible to break into (nothing is), but it’s not a particularly soft target.

How about the Xbox 360 we mostly use for watching Netflix? I don’t see how that’s possible without physical access to the machine. I doubt someone broke in just to hack our Xbox 360 and didn’t steal anything.

Did someone guess my Netflix password? Possible, but I use rather long passwords that would be pretty hard to brute force. If Netflix doesn’t have rate limiting in place, that’s a huge problem. That said, I don’t know how someone would know what email address is associated with my account. It’s not an address I’ve used for anything else, ever, and I changed it last night to a new, never-before-used address!

Did someone exploit a flaw in WPA2 to intercept wireless traffic from the Xbox 360, or otherwise intercept traffic between me and Netflix? If Netflix’s authentication system is entirely on SSL, I don’t see how this could possibly work.

So what possibilities does that leave? My guess is that there’s some fundamental brokenness in the authentication system that Netflix uses. Either that or put your conspiracy theory hat on and we can talk about inside men and women at Amazon and/or Netflix. Either way, I’m blaming this on Netflix, and I’m tempted to just cancel the account. Netflix could probably help improve security quite a bit by supporting 2-factor auth in order to authenticate a new device.

That all said, I’d love to hear a better theory, especially if it came with a solution.

Assuming that the failure happens more than once every few thousand test runs, here’s a handy shell snippet:

while prove -bv t/MaxMind/DB/Writer/Tree-freeze-thaw.t ; do reset; done

This will run the relevant test in a loop over and over, stopping at the first failure. The reset in between each run makes it easy to hit Ctrl-Up in the terminal and go to the beginning of the test run that failed, rather than having a monster scrollback buffer.

About a million years ago (ok, more like 6 months) a kind soul by the name of Polina Shubina reported a small bug in my Markdent module. She was even kind enough to submit a PR that fixed the issue, which was that the HTML generated for Markdown tables (via a Markdown extension) always used </th> to close table cells.

However, there was one problem: there was no test for the bug. I really hate merging a bug fix without a regression test. I know myself well enough to know that without a test the chances of me reintroducing the bug later are pretty good.

Even more oddly, I thought for sure that this was already tested. Markdent is a tool for parsing Markdown, and includes some libraries for turning that Markdown into HTML. I knew that I tested the table parsing, and I didn’t think I was quite dumb enough to hand-write some HTML where I used </th> to close all the table cells.

I was correct. This was tested, and the expected HTML in the test was correct too. So what was going on?

It turned out that this problem went way back to when I first wrote the module. Comparing two chunks of HTML and determining if they’re the same isn’t a trivial task. HTML is notoriously flexible, and a simple string comparison just won’t cut it. Minor differences in whitespace between two pieces of HTML are (mostly) ignorable, tag attribute order is irrelevant, and so on.

I looked on CPAN for a good HTML diffing module and found squat. Then I remembered the HTML Tidy tool. I could run the two pieces of HTML I wanted to compare through Tidy and then compare the result. Tidy does a good job of forcing the HTML into a repeatable format.

Unfortunately, Tidy is a little too good. It turns out that Tidy did a really good job of fixing up broken tags! It turned my </th> into </td>, so my tests passed even when they shouldn’t have. Using Tidy to test my HTML output turned out to be a really bad idea, since I wasn’t really testing the HTML my code generated.

This left me looking for an HTML diff tool again. I really couldn’t find much in the way of CLI tools on the Interwebs. CPAN has two modules which sort of work. There’s HTML::Diff, which uses regexes to parse the HTML. I didn’t even bother trying it, to be honest. (BTW, don’t blame Neil Bowers for this code; he’s just doing some light maintenance on it, and he didn’t create it.)

Then there’s Test::HTML::Differences. This uses HTML::Parser, at least. Unfortunately, it tries a little too hard to normalize HTML, and it got seriously confused by much of the HTML in the mdtest Markdown test suite.

I also tried using the W3C validator to somehow compare errors between two docs. I ended up adding some validation tests to the Markdent test suite, which is useful, but it still didn’t help me come up with a useful diff between two chunks of HTML.

I finally gave up and wrote my own tool, HTML::Differences. It turned out to be remarkably simple to get something that worked well enough to test Markdent, at least. I used HTML::TokeParser to turn the HTML into a list of events, and then normalized whitespace in text events (except when inside a <pre> tag).
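Here’s a minimal sketch of that approach. This isn’t the actual HTML::Differences code, just the shape of the idea, with a simplified event structure:

    use strict;
    use warnings;

    use HTML::TokeParser;

    # Turn a chunk of HTML into a flat list of events, collapsing
    # whitespace in text events unless we're inside a <pre> tag. Sorting
    # attribute keys means attribute order can't affect a comparison.
    sub html_events {
        my $html = shift;

        my $parser = HTML::TokeParser->new( \$html );

        my @events;
        my $in_pre = 0;
        while ( my $token = $parser->get_token ) {
            my $type = $token->[0];

            if ( $type eq 'S' ) {
                my ( $tag, $attr ) = @{$token}[ 1, 2 ];
                $in_pre++ if $tag eq 'pre';
                push @events,
                    [ 'start', $tag, map { $_ => $attr->{$_} } sort keys %{$attr} ];
            }
            elsif ( $type eq 'E' ) {
                my $tag = $token->[1];
                $in_pre-- if $tag eq 'pre';
                push @events, [ 'end', $tag ];
            }
            elsif ( $type eq 'T' ) {
                my $text = $token->[1];
                unless ($in_pre) {
                    $text =~ s/\s+/ /g;
                    next unless $text =~ /\S/;
                }
                push @events, [ 'text', $text ];
            }
        }

        return \@events;
    }

Once two pieces of HTML are reduced to event lists like this, comparing them is just a deep data structure comparison, and something like eq_or_diff from Test::Differences will give you a readable diff when they don’t match.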

Getting to this point took a while, especially since I was doing all of this in my free time. And that’s the story of why it took me six months to fix an incredibly trivial bug, and how testing HTML is trickier than I understood when I first started testing it with Markdent.

A little while back I asked people to test Params::Validate 1.14. Judging by the lack of bug reports I’m sure that many people tested it and it worked fine.

Ok, just kidding. I strongly suspect almost no one tested it and that someone will yell at me for breaking their software. But hey, I tried.

Now I’m asking folks to try out MooseX::Params::Validate 0.20. This release makes a rather major change to the exception thrown when a type constraint rejects a value. The exception is now an object, and it stringifies to something that includes the message generated by the type constraint object, just like Moose does internally.

This is a big improvement when debugging, since the generic Params::Validate message may be much less helpful than a custom message you provide. However, if you have code that traps exceptions and matches the message against a regex, that code might break.
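In other words, if you have anything like the following lurking in your code base, check it against the new release. The function name and the regex here are invented for illustration; the point is the pattern:

    use strict;
    use warnings;

    use Try::Tiny;
    use Scalar::Util qw( blessed );

    try {
        # Imagine this calls into code that validates its parameters
        # with MooseX::Params::Validate (hypothetical function).
        update_profile( age => 'not a number' );
    }
    catch {
        my $err = $_;

        # $err may now be an exception object rather than a plain string.
        # Stringify it explicitly before matching, and expect the text to
        # include the type constraint's own message.
        my $msg = blessed($err) ? "$err" : $err;

        if ( $msg =~ /did not pass/ ) {    # may no longer match the new text
            warn "validation failed: $msg";
        }
    };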

So please test this release against your code base. If the only issue you find is a change in error text, please adjust your code accordingly. Of course, if you find some other unintended bug I introduced, please let me know about it. Once again, if I don’t hear anything in the next week or so, I’ll feel free to release a non-trial version.

I’ve just released a new version of Params::Validate that allows validation callbacks to die in order to provide a custom error message or exception object. This was a long-needed feature, and will enable me to make MooseX::Params::Validate support the error messages provided by type constraint objects, which has also been long needed.
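As a quick hedged sketch of what this enables (the parameter and the messages are invented, but the callbacks API is standard Params::Validate):

    use strict;
    use warnings;

    use Params::Validate qw( validate );

    sub set_size {
        my %p = validate(
            @_,
            {
                size => {
                    callbacks => {
                        'is a positive integer' => sub {
                            my $value = shift;
                            return 1
                                if defined $value && $value =~ /^[1-9][0-9]*$/;

                            # With this release, dying here propagates a
                            # custom message (or exception object) to the
                            # caller instead of the generic callback error.
                            die q{size must be a positive integer, not '}
                                . ( $value // 'undef' ) . qq{'\n};
                        },
                    },
                },
            },
        );

        return $p{size};
    }

    set_size( size => 42 );    # returns 42
    set_size( size => -1 );    # dies with the custom message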

However, I’m a little nervous about any changes to Params::Validate, since it’s used by a rather large chunk of CPAN. It has roughly 350 direct dependents, and those include things like Log::Dispatch and DateTime, so the actual downstream reach is pretty huge. I’d rather not break some large chunk of CPAN or your in-house applications.

That all said, the behavior of calling die in a callback sub has always been undefined and undocumented, so no one should have been doing it prior to now (I say with forced optimism and the realization that someone probably is doing it anyway).

So please take a moment to install the latest trial release and test it with your code base. If your apps are using a lot of CPAN modules there’s a good chance that Params::Validate is already in your stack, even if you don’t use it directly. If you find any breakage, please report it on rt.cpan.org.

If I don’t hear about any breakage in a week or so and CPANTesters looks good, I’ll release a non-TRIAL version.

We’ve actually been trying to hire someone for a while, but there was some question about what states we can hire from.

First of all, we’re hiring a Senior Software Engineer. This involves a lot of Perl, some Go, and the possibility of C and other languages from time to time. This is mostly backend work, building web services at an ever-growing scale. We’re accepting applications for this position from all US states and Canada.

Perl experience is a plus, as is Go. However, we’re happy to consider anyone with some dynamic language experience. Perl is pretty easy to learn if you know Python or Ruby. Our code base is mostly written in Modern Perl using Moose, Plack/PSGI, DBIx::Class, and other good tools like that. We’re slowly weeding out the ancient crufty code, and we’ll be replacing our use of Apache::PageKit with Mojolicious in the future. We also have an extensive test suite built with Test::Class::Moose.

As a bonus or penalty (you pick), I’m the Software Engineering Team Lead, and we’ll be working together a lot.

The entire team is effectively remote, so we coordinate our work through HipChat and Google Hangouts. We also use Pivotal Tracker and GitHub Enterprise, and all new work is done in branches with code review before merging. All new code and bug fixes are expected to come with tests.

We’re also hiring a Frontend Software Engineer to help us build single-page apps using modern JavaScript. As the first Frontend Software Engineer at MaxMind, you’ll be in a position to set standards for frameworks, testing, and everything frontend-related. For now, this position is defined as being in the office part-time, so it’s local to Waltham, MA. That may change in the future.

Finally, we’re hiring an Interaction Designer to help make these new single-page apps user-friendly. Also, you’ll get to make our website much less meh. Please note that the Interaction Designer position is onsite, not telecommuting.

I’ve been playing with the idea of making a new version of Log::Dispatch that breaks some things.

There are a few changes I’d like to make …

First, I want to trim back the core distro to only include a few outputs. This would just be those outputs which are truly cross-platform and don’t require extra dependencies. Right now that would be the Code, File, Handle, Null, and Screen outputs. I might also borg rjbs’s Log::Dispatch::Array, since it’s so darn useful for testing.

Here’s my plan for the other outputs:

  • Syslog – release it and maintain it myself
  • File::Locked – release it once and look for another maintainer
  • ApacheLog – up for grabs
  • Email outputs – up for grabs, but maybe I’d do a single release of the Email::Sender based output and look for a new maintainer

FWIW, I no longer have my programs send email directly. I log to syslog and use a separate log monitoring system to summarize errors and email me that summary.

I’d also like to change how newline appending is done so that it has more sensible defaults. This means defaulting to appending a newline for the File and Screen outputs, but not for others like Code or Syslog.
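To make that concrete, here’s a sketch using the per-output newline flag; under the proposed defaults, the File and Screen entries below wouldn’t need to pass it explicitly:

    use strict;
    use warnings;

    use Log::Dispatch;

    # Today newline appending is opt-in. The proposal is that File and
    # Screen would act as if newline => 1 were the default, while
    # outputs like Code or Syslog would not.
    my $log = Log::Dispatch->new(
        outputs => [
            [ 'Screen', min_level => 'debug', newline => 1 ],
            [
                'File',
                min_level => 'info',
                filename  => 'app.log',
                newline   => 1,
            ],
        ],
    );

    $log->info('something happened');    # no trailing "\n" needed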

As for core API changes, I think the core ->log() and ->debug()/info()/etc() methods would stay the same, but I might want to make changes to some of the other methods.

I also plan to move to Moo internally, just to clean things up.

So given all this, what’s the best course of action? Should I just go ahead and release Log::Dispatch 3.0, along with Log::Dispatch::Syslog 3.0, etc.? Or should I actually rename the distro to Log::Dispatch3 or something like that so that the two can co-exist on CPAN? I’m leaning towards the latter right now.

Finally, if anyone has any other suggestions for improvements to Log::Dispatch I’d love to hear them.

Apparently my post on Perl 5’s overloading is deeply, deeply offensive. Here’s an email I got out of the blue today:

Perl isn’t your first language isn’t it?  You strike me as Java programmer.  Look.  Don’t do overloading.  If you need to do overloading then you are probably doing something wrong.

“If you don’t care about defensive programming, then Perl 5’s overloading is perfect, and you can stop reading now. Also, please let me know so I can avoid working on code with you, thanks.”

No.  I don’t, because we are not programming in Java where that type of mentality is needed.  So yeah, please feel free to memorize my name and lets make damn sure we never work with each other.

And people say there’s no hope for humanity!

I suspect I’m not the only person who does this.

I start writing an email because I’m angry/annoyed/outraged/indignant. I write the whole thing. I sign it. I look at it. Then I discard it.

There’s something therapeutic about this. I get all of the benefits of venting without actually escalating a conflict. I wonder if there’s a market for an email client app or plugin that helps with this?

“While you wrote this email your writing speed was 20% faster than your standard writing speed. Are you pissed off? Are you sure you want to send this?”

Clearly, I’m about to get rich!