If you’ve been bitten by the testing bug, you’ve surely encountered the problem of testing a database-intensive application. The problem this presents isn’t specific to SQL databases, nor is it just a database problem. Any data-driven application can be hard to test, regardless of how that data is stored and retrieved.

The problem is that in order to test your code, you need data that at least passably resembles data that the app would work with in reality. With a complex schema, that can be a lot of data spread out across many tables. I often find that trying to test each class in isolation becomes very difficult, since the data is not confined to one class.

For example, the app I’m working on now is a wiki. I’m trying to test the Page class, but that involves interactions with many tables. Pages have revisions, they have links to other pages, to files, and to not-yet-created pages. Pages also belong to a wiki, and are created by a user. To test page creation, I need to already have a wiki to add the page to, and a user to create the page.

There are various solutions to this problem, all of which suck in different ways.

You can try mocking out the database entirely. I’ve used DBD::Mock for this, but I’ve never been happy with it. DBD::Mock has one of the most difficult to use APIs I’ve ever encountered. Also, DBD::Mock doesn’t really solve the fundamental problem. I still have to seed all the related data for a page. I’d even go so far as to say that DBD::Mock makes things worse. Because inserts don’t actually go anywhere, I have to re-seed the mock handle for each test of a SELECT, and since a single method may make multiple SELECT calls, I have to work out in advance what each method will select and seed all the data in the right order!
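To make that concrete, here’s roughly what the seeding dance looks like; the tables and columns are invented for the example. Each assignment to mock_add_resultset queues one result set, and the code under test consumes them in the order its statements happen to run.

use DBI;

# A sketch of seeding DBD::Mock; the schema is made up for illustration.
my $dbh = DBI->connect( 'dbi:Mock:', '', '', { RaiseError => 1 } );

# This result set will be consumed by the first SELECT the code runs ...
$dbh->{mock_add_resultset} = [
    [ 'user_id', 'username' ],
    [ 42,        'some-user' ],
];

# ... and if the method makes a second SELECT, its rows must be queued
# too, in exactly the right order.
$dbh->{mock_add_resultset} = [
    [ 'wiki_id', 'title' ],
    [ 1,         'Some Wiki' ],
];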

My experience with DBD::Mock has largely been that the test code becomes so complex and fragile that maintaining it becomes a huge hassle. The test files become so full of setup and seeding that the actual tests are lost.

I wrote Fey::ORM::Mock to help deal with this, but it only goes so far. It partially solves the problem with DBD::Mock’s API, but I still have to manage the data seeding, and that is still fragile and complicated.

The other option is to just use a real DBMS in your tests. This has the advantage of actually working like the application. It also helps expose bugs in my schema definition, and lets me test triggers, foreign keys, and so on. This approach has several huge downsides, though. I have to manage (re-)creating the schema each time the tests run, and it will be much harder for others to run my tests on their systems. Also, running the tests can be rather slow.
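The schema (re-)creation step doesn’t have to be fancy. Here’s a minimal sketch, assuming a Postgres test database named myapp_test and a schema.sql file; those names are placeholders for whatever your setup uses.

use strict;
use warnings;

# Drop and recreate the test database, then load the DDL into it.
# The database name and schema file are assumptions about the setup.
system( 'dropdb', 'myapp_test' );    # ignore failure if it does not exist yet
system( 'createdb', 'myapp_test' ) == 0
    or die 'createdb failed';
system( 'psql', '--quiet', '--file', 'schema.sql', 'myapp_test' ) == 0
    or die 'loading schema.sql failed';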

For the app I’m working on I’ve decided to mostly go the real DBMS route. At least this way the tests will be very useful to me, and anyone else seriously hacking on the application. I can isolate the minimal data seeding in a helper module, and the test files themselves will be largely free of cruft. Making it easier to write tests also means that I’ll write more of them. When I was using DBD::Mock, I found myself avoiding testing simply because it was such a hassle!
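The helper module itself is just a few subroutines that insert the rows every test needs. Something along these lines, where the module name, class names, and constructor arguments are purely illustrative:

package MyWiki::Test;    # a hypothetical helper module

use strict;
use warnings;

# Insert the minimal rows that nearly every test needs: one user and
# one wiki created by that user.
sub seed_minimal_data {
    my $user = MyWiki::User->insert(
        username => 'test-user',
        email    => 'test-user@example.com',
    );

    my $wiki = MyWiki::Wiki->insert(
        title   => 'Test Wiki',
        user_id => $user->user_id(),
    );

    return ( $wiki, $user );
}

1;

Each test file then starts with a single call to seed_minimal_data() and gets on with testing the behavior it actually cares about.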

Some people might want to point out fixtures as a solution. I know about those, and that’s basically what I’m using, except that for now there’s only one fixture: a minimally populated database. And of course, fixtures still don’t fix the problems that come with the tests needing to talk to a real DBMS.

I am going to make sure that tests which don’t hit the database at all can be run without connecting to a DBMS. That way, at least a subset of the tests can be run everywhere.
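In practice, that just means checking for a configured test database at the top of each database-dependent test file and skipping otherwise. A typical Test::More pattern, where the environment variable name is only an example:

use Test::More;

# Skip the whole file unless a test database has been configured.
plan skip_all => 'These tests require a configured test database'
    unless $ENV{MYAPP_TEST_DSN};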

Are there any better solutions? I often feel like programming involves spending an inordinate amount of time solving non-essential problems. Where’s my silver bullet?

Recently, there was a question on Stack Overflow asking whether one should test that Moose generates accessors correctly.

Here’s an example class:

package Process;

use Moose;
has pid => (
    is       => 'ro',
    isa      => 'Int',
    required => 1,
);
has stdout => (
    is  => 'rw',
    isa => 'FileHandle',
);

Given that class definition, is there any value to writing tests like this?

use Test::More;
use Test::Exception;

can_ok( 'Process', 'new' );
can_ok( 'Process', 'pid' );
can_ok( 'Process', 'stdout' );
throws_ok { Process->new() } qr/.../, 'Process requires a pid';

Let’s look at why automated tests are useful.

First, they give us some assurance that the code we wrote does what we expect.

Second, tests protect us from breaking code as we change it. As we refactor, fix bugs, or add new features, we want to make sure that all the existing code continues to work.

Third, the tests can provide some hints to future readers of our code about the APIs of the code base.

So back to our original question, do we need to test Moose-generated code?

The tests seen above add absolutely nothing that isn’t already tested by Moose itself.

If the tests don’t test anything new, then they can’t be giving us any assurance about our code. Instead, they’re giving us assurance about Moose itself.

Let’s assume that Moose is itself well-tested. If it isn’t, why are you using it? There is no point in adopting a dependency on fragile code that you don’t trust. If you want to improve Moose’s reliability, the way to do that is by working on Moose itself, not by testing Moose in your application’s test suite.

Do these tests protect us from breaking code? Not really. If we change the Process class so that it no longer has the stdout attribute, the test will fail. But if we made that change, surely it was intentional. So now our tests are failing because we made an intentional change.

But what if other code in our code base expects the stdout attribute to exist? As long as that code is tested, we will find this problem quickly enough. If the stdout attribute is only ever referenced in the test up above, then what purpose does it serve?

Finally, the test above provides no guidance to future readers. The code in the Process package provides more documentation than the test code, and if the module also has POD, that will provide even more documentation. The test doesn’t show how the code is used; it just provides another way to describe what the module is, a way that’s inferior to the Moose-based declarations or POD.

However, don’t confuse the tests above with testing code that you write. For example, if you create a new type with a custom constraint and coercion, you should definitely test that type. The Moose test suite obviously doesn’t test your specific type, it just tests that new types can be created.
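For example, something like the following is code you wrote, and its constraint and coercion logic deserve tests of their own. The type name and coercion here are invented for illustration.

package MyApp::Types;

use Moose::Util::TypeConstraints;

# A custom type with its own constraint and coercion - this logic is
# ours, not Moose's.
subtype 'CommaSeparatedList',
    as 'ArrayRef[Str]',
    where { @{$_} > 0 },
    message { 'The list must contain at least one element' };

coerce 'CommaSeparatedList',
    from 'Str',
    via { [ split /\s*,\s*/, $_ ] };

1;

The corresponding test exercises the constraint and the coercion, not the fact that Moose can create types:

use Test::More;
use Moose::Util::TypeConstraints;
use MyApp::Types;

my $tc = find_type_constraint('CommaSeparatedList');

ok( $tc->check( [ 'a', 'b' ] ), 'a non-empty arrayref passes the constraint' );
ok( !$tc->check( [] ), 'an empty arrayref does not' );
is_deeply(
    $tc->coerce('a, b, c'),
    [ 'a', 'b', 'c' ],
    'a comma-separated string is coerced into an arrayref'
);

done_testing();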

So the answer is no, don’t bother with tests like the ones above. Test the new code you create, not whether Moose is doing what you asked it to do.

I’ve been seeing some talk about MooseX::Method::Signatures (MXMS) and its speed. Specifically, Ævar Arnfjörð Bjarmason said that MXMS is about 4 times slower than a regular method call. He determined this by comparing two different versions of a large program, Hailo. This is interesting, but I think a more focused benchmark might be useful.

Specifically, I’m interested in comparing MXMS to something else that does similar validation. One of the main selling points of MXMS is its excellent integration of argument type checking, so it makes no sense to compare MXMS to plain old unchecked method calls. Therefore, I made a benchmark that compares MXMS to MooseX::Params::Validate (MXPV). Both MXMS and MXPV provide argument type checking using Moose types. That should eliminate the cost of doing type checking as a variable. If you don’t care about type checking, you really don’t need MXMS (or MXPV).

The benchmark has two classes with semantically identical methods doing argument validation. One uses MXMS and the other MXPV. All method calls are wrapped in eval since a validation failure causes an exception. I also tested both success and failure cases. My experience with Params::Validate tells me that there’s a big difference in speed between success and failure, and the results bear that out.
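I won’t reproduce the whole benchmark here, but its shape is roughly the sketch below. The class names, method bodies, and arguments are simplified stand-ins rather than the actual benchmark code.

package WithMXMS;

use Moose;
use MooseX::Method::Signatures;

# Named, required, type-checked parameters via MXMS.
method foo ( Str :$name!, Int :$size! ) {
    return "$name/$size";
}

package WithMXPV;

use Moose;
use MooseX::Params::Validate;

# The semantically equivalent method using MXPV.
sub foo {
    my $self = shift;
    my ( $name, $size ) = validated_list(
        \@_,
        name => { isa => 'Str' },
        size => { isa => 'Int' },
    );

    return "$name/$size";
}

package main;

use Benchmark qw( cmpthese );

my $mxms = WithMXMS->new();
my $mxpv = WithMXPV->new();

# Every call is wrapped in eval, since a validation failure throws.
cmpthese(
    -2,
    {
        'MXMS success' => sub { eval { $mxms->foo( name => 'x', size => 1 ) } },
        'MXPV success' => sub { eval { $mxpv->foo( name => 'x', size => 1 ) } },
        'MXMS failure' => sub { eval { $mxms->foo( name => 'x', size => 'oops' ) } },
        'MXPV failure' => sub { eval { $mxpv->foo( name => 'x', size => 'oops' ) } },
    }
);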

Here’s what the benchmark came up with:

                Rate MXMS failure MXPV failure MXMS success MXPV success
MXMS failure   262/s           --         -41%         -81%         -94%
MXPV failure   448/s          71%           --         -68%         -90%
MXMS success  1393/s         431%         211%           --         -69%
MXPV success  4545/s        1634%         915%         226%           --

First, as I pointed out, there’s a big difference between success and failure. I can only assume that throwing an exception is expensive in Perl. Second, the difference between MXMS and MXPV is much greater in the success case. That also makes sense if throwing an exception is costly: in the failure case, the cost of the exception dominates for both modules and masks the difference between them.

It seems that in the success case, MXPV is about 3 times faster than MXMS. I think the success case is the most important one, since we probably don’t expect a lot of validation failures in our production code.

Benchmark code