I enjoy reading a good epic fantasy from time to time. Sure, it’s a well-worn genre, but I like a big story, and if it’s well-written, it can be fun.

I just finished re-reading Tad Williams’ Memory, Sorrow, and Thorn trilogy (for the first time since it was published 20 years ago). It was enjoyable, despite a bunch of clichéd bits.

But it got me thinking about how ridiculous many fantasy worlds are when you look a little deeper.

The first example is the Sitha (Tad Williams’ elves). In these books, the Sitha are immortal, and it’s stated that they give birth approximately every 500 years. They migrated to the continent they’re on thousands of years ago, but it doesn’t say how many. For some reason, their population is ridiculously small, but that really doesn’t make much sense, especially considering that they were the unchallenged rulers of that continent for a long time.

If we assume that 1,000 Sitha were in the first migration, and that the migration occurred 10,000 years ago, how many Sitha should there be “now”? Let’s assume that the every-500-years birth pattern holds. Let’s also assume, since they’re immortal, that the females can continue to have children indefinitely. That means that every 500 years, half of the population will give birth to a child, of whom half will be female, and so on and so forth.

In other words, every 500 years the population should increase by 50 percent. After 10,000 years (20 of those 500-year generations), the initial population of 1,000 should be over 3,000,000 (that’s 3 million)! That’s a lot of Sitha! In the books, however, they’re a dying race. Yes, there have been a bunch of wars and such, but those wars started long after the initial migration, when the population should already have been in the hundreds of thousands.
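The arithmetic is easy to check. Here’s a quick sketch of the compounding (the starting population of 1,000 and the 10,000-year span are just the assumptions above, not anything the books state):

```python
# Compound growth of the Sitha population: every 500 years, half the
# population (the females) each bear one child, a 50% increase overall.
def sitha_population(initial, years, generation_years=500, growth=1.5):
    generations = years // generation_years
    return initial * growth ** generations

# 1,000 immigrants, 10,000 years (20 generations) later:
print(round(sitha_population(1000, 10000)))  # about 3.3 million
```

Even halving the assumptions (500 immigrants, or a 1,000-year birth interval) still leaves far more than a “dying race”.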

The other goofy bit of ecology is a dragon that supposedly lived in a system of tunnels underneath a castle. The dragon is described as being very large, presumably bigger than an elephant. While there are some big spaces in the tunnel system, there’s no giant pathway into the part where the dragon is, which seems to be pretty far into the tunnel system. Maybe it was born there and grew too big to leave? I can buy that, but what does it eat?

People know about this dragon, so excluding the occasional foolhardy hero, I don’t think there’s a lot of traffic down there. There’s probably nothing bigger than mice and bats, and even they would probably avoid a large predator’s living space.

Stuff like this does kind of annoy me, because it seems like the author adopts some fantasy convention (immortal elves who are dying out) without actually figuring out how to make that make any sort of sense, other than “it’s a magic world, and I say so”.

A good example of doing better is Robin Hobb’s Elderlings trilogy of trilogies. She actually comes up with a very interesting and sane life-cycle for various fantastic creatures (I don’t want to be too specific), and even includes things like natural disasters in this fantasy ecology. It all makes sense and ties into the story very nicely.

I’m still stuck on the whole problem of the requirement that URIs for REST APIs be discoverable, not documented. It’s not so much that making them discoverable is hard, it’s that making them discoverable makes them useless for some (common) purposes.

When I last wrote about REST, I got taken to task and even called a traitor (ok, I didn’t take that very seriously ;). Aristotle Pagaltzis (and Matt Trout via IRC) told me to take a look at AtomPub.

I took a look, and it totally makes sense. It defines a bunch of document types which, along with the original Atom Syndication Format, would let you easily write a non-browser-based client for publishing to and reading from an Atom(Pub)-capable site. That’s cool, but this is for a very specific type of client. By specific I mean that the publishing tool is going to be interactive. The user navigates the Atom workspaces in the client, finds the collection they’re looking for, POSTs to it, and they have a new document on the site.

But what about a non-interactive client? I just don’t see how REST could work for this.

Let me provide a very specific example. I have this site VegGuide.org. It’s a database of veg-friendly restaurants, grocers, etc., organized in a tree of regions. At the root of the tree, we have “The World”. The children of that node are things like “North America”, “Europe”, etc. In turn, “North America” contains “Canada”, “Mexico”, and “USA”. This continues until you reach nodes which contain only entries, not other regions, like “Chicago” and “Manhattan”.

(There are also other ways to navigate this space, but none of them would be helpful for the problem I’m about to outline.)

I’d like for VegGuide to have a proper REST API, and in fact its existing URIs are all designed to work both for browsers and for clients which can do “proper” REST (and don’t need HTML, just “raw” data in some other form). I haven’t actually gotten around to making the site produce non-HTML output yet, but I could, just by looking at the Accept header a client sends.
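Serving raw data from the same URIs is mostly a matter of content negotiation on that Accept header. A minimal sketch of the dispatch (the supported media types and the function itself are my assumptions for illustration, not VegGuide’s actual code, and a real implementation would also honor q-values):

```python
# Pick a representation based on the client's Accept header.
# Checks for the most specific acceptable type first; returns None
# when nothing we serve is acceptable (i.e. respond 406).
def choose_representation(accept_header):
    accept = accept_header or "*/*"
    if "application/json" in accept:
        return "json"
    if "text/html" in accept or "*/*" in accept:
        return "html"
    return None

print(choose_representation("application/json"))  # json
print(choose_representation("text/html"))         # html
```

A browser sends an Accept header favoring text/html and gets the normal pages; a script asking for application/json gets the raw data from the very same URI.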

Let’s say that Jane Random wants to get all the entries for Chicago, maybe process them a bit, and then republish them on her site. At a high level, what Jane wants is to have a cron job fetch the entries for Chicago each night and then generate some HTML pages for her site based on that data.

How could she do this with a proper REST API? Remember, Jane is not allowed to know that http://www.vegguide.org/region/93 is Chicago’s URI. Instead, her client must go to the site root and somehow “discover” Chicago!

The site root will return a JSON document something like this:

{ "regions":
  [ { "name": "North America",
      "uri":  "http://www.vegguide.org/region/1" },
    { "name": "South America",
      "uri":  "http://www.vegguide.org/region/28" }
  ]
}

Then her client can go to the URI for North America, which will return a similar JSON document:

{ "regions":
  [ { "name": "Canada",
      "uri":  "http://www.vegguide.org/region/19" },
    { "name": "USA",
      "uri":  "http://www.vegguide.org/region/2" }
  ]
}

Her client can pick USA and so on until it finally gets to the URI for Chicago, which returns:

{ "entries":
  [ { "name": "Soul Vegetarian East",
      "uri":  "http://www.vegguide.org/entry/46",
      "rating": 4.3 },
    { "name": "Chicago Diner",
      "uri":  "http://www.vegguide.org/entry/56",
      "rating": 3.9 }
  ]
}

Now the client has the data it wants and can do its thing.

Here’s the problem. How the hell is this automated client supposed to know how to navigate through this hierarchy?

The only (non-AI) possibility I can see is that Jane must embed some sort of knowledge that she has as a human into the code. This knowledge simply isn’t available in the information that the REST documents provide.

Maybe Jane will browse the site and figure out that these regions exist, and hard-code the client to follow them. Her client could have a list of names to look for in order: “North America”, “USA”, “Illinois”, “Chicago”.

If the names changed and the client couldn’t find them in the REST documents, it could throw an error and Jane could tweak the client. A sufficiently flexible client could allow her to set this “name chain” in a config file. Or maybe the client could use regexes so that some possible changes (“USA” becomes “United States”) are accounted for ahead of time.
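That “name chain” approach might look something like this (a sketch only: fetch() is a canned lookup standing in for an HTTP GET of each URI, and the intermediate URIs for USA’s children are made up, since only Chicago’s /region/93 appears above):

```python
# Walk the region tree by matching a chain of human-chosen names.
# DOCS simulates the JSON documents the server would return for each
# URI; a real client would GET the URI and parse the response instead.
DOCS = {
    "/": {"regions": [{"name": "North America", "uri": "/region/1"}]},
    "/region/1": {"regions": [{"name": "USA", "uri": "/region/2"}]},
    "/region/2": {"regions": [{"name": "Illinois", "uri": "/region/5"}]},
    "/region/5": {"regions": [{"name": "Chicago", "uri": "/region/93"}]},
}

def fetch(uri):
    return DOCS[uri]

def discover(root_uri, name_chain):
    uri = root_uri
    for name in name_chain:
        doc = fetch(uri)
        matches = [r["uri"] for r in doc["regions"] if r["name"] == name]
        if not matches:
            # A name changed or vanished: fail loudly so Jane can
            # update her config, as described above.
            raise LookupError(f"no region named {name!r} under {uri}")
        uri = matches[0]
    return uri

print(discover("/", ["North America", "USA", "Illinois", "Chicago"]))
```

Note that the name chain itself is exactly the human knowledge the REST documents don’t carry: the code works, but only because Jane put her browsing session into it.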

Of course, if Jane is paying attention, she will quickly notice that the URIs in the JSON documents happen to match the URIs in her browser, and she’ll hardcode her client to just GET the URI for Chicago and be done with it. And since sites should have Cool URIs, this will work for the life of the site.

Maybe the answer is that I’m trying to use REST for something inherently outside the scope of REST. Maybe REST just isn’t for non-interactive clients that want to get a small part of some site’s content.

That’d be sad, because non-interactive clients which interact with just part of a site are fantastically useful, and much easier to write than full-fledged interactive clients which can interact with the entire site (the latter is commonly called a web browser!).

REST’s discoverability requirement is very much opposed to my personal concept of an API. An API is not discoverable, it’s documented.

Imagine if I released a Perl module and said, “my classes use Moose, which provides a standard metaclass API (see RFC124945). Use this metaclass API to discover the methods and attributes of each class.”

You, as an API consumer, could do this, but I doubt you’d consider this a “real” API.

So as I said before, I suspect I’ll end up writing something that’s only sort of REST-like. I will provide well-documented document types (as opposed to JSON blobs), and those document types will all include hyperlinks. However, I’m also going to document my site’s URI space so that people can write non-interactive clients.