Burning Down

2014-03-05 02:41:43 -0500

Spring is now two weeks away (still gloomy, still below 30F, yard still covered in snow, no crocuses... but Spring is still around the corner).

A few days back I actually hit "feature complete", in the sense that as far as I could tell everything I needed in order to Flip The Switch was implemented. There's still a fair list of things that Would Be Interesting and Would Be Good Improvements, but none of them truly block Being As Good As The Existing One combined with Actually Letting Five Months Of Backlog Out The Door. Even so, I had a lingering Fear Of Screwing It Up: does this really all work? Will it look OK?

Well, it's never going to look good if I'm the only one working on the visual appearance, but at a glance it's at least cleaned up a bit. The real concern is having things that could be called Broken or otherwise not as I intended. What's the standard software engineering way of increasing confidence? Well, from the outside, you'd be forgiven for thinking "delay actually shipping anything" was the Best Practice :-) but what I'm getting at is Testing. Pick a few things that I'm worried about, and implement tests for them. (And because This Is Me, we're talking fully automated tests...)

The first step was dependency tracking. Not because I'm trying (yet) to do this as an incremental build - even on the crufty slow machines with buckets of spinning rust that I use as servers, a full rebuild only takes 15 seconds, and a full build to an empty directory takes 20 - but because it let me figure out which things in the output directory were spurious leftovers from a previous build (and should trigger a clean build), and which things in the source tree were getting ignored (usually because they weren't properly attributed in the dependency graph, though it did expose some actual bugs).
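The two audits above reduce to set differences between the dependency graph's view of the world and what's actually on disk. A minimal sketch - the `expected_outputs`/`tracked_sources` sets and the function shape are my assumptions, not the actual build code:

```python
# Sketch: compare the dependency graph against the filesystem.
# "expected_outputs" and "tracked_sources" are assumed to come from
# whatever structure the build's dependency tracker maintains.
from pathlib import Path


def audit_tree(output_dir, source_dir, expected_outputs, tracked_sources):
    """Return (spurious, ignored) relative paths.

    expected_outputs: paths the graph says the build produces
    tracked_sources:  source paths that appear somewhere in the graph
    """
    on_disk_out = {p.relative_to(output_dir).as_posix()
                   for p in Path(output_dir).rglob("*") if p.is_file()}
    on_disk_src = {p.relative_to(source_dir).as_posix()
                   for p in Path(source_dir).rglob("*") if p.is_file()}

    # Output files the graph can't account for: stale leftovers from a
    # previous build, so a clean rebuild is warranted.
    spurious = on_disk_out - expected_outputs
    # Source files no graph node references: either deliberately ignored,
    # or a missing dependency edge (an actual bug).
    ignored = on_disk_src - tracked_sources
    return spurious, ignored
```

The nice property is that the same graph drives both checks, so one missing edge shows up on whichever side it breaks.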

The second step was building the link graph. I did a codes-well-with-others pass on the easily available tools: linkchecker is nicely packaged in Debian and under active development (yet already quite feature-rich), but its default was to fetch and check external links too. That's a good thing to have in general, except that it's not what I wanted for a quick local check of a site that isn't even live yet. (That default is fixed in the 9.0 release that went out this week.)

linkchecker does have some nice features, like the ability to report its output as a directly usable sitemap, which I will probably revisit once I'm on 9.0.

It only took an hour to do a trivial walk from the top-level index.html of the output tree using lxml.html, recording what paths it saw, filtering out HTML "anchors" and offsite links, and normalizing the rest to in-tree pathnames. It took only a little longer to match that up against the output side of the dependency checker, and then (by hand) to check some of the "missing" files in google... leading me to conclude that a bunch of stuff is accessible because it's included in RSS feeds, even though it's not actually linked anywhere. Enough things were reachable to convince me that the test worked and that the site was basically OK, and that more significant linking is actually a content project, not a deployment one...
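The walk itself is just a breadth-first crawl over local files. A dependency-free sketch of the idea, using the stdlib `html.parser` as a stand-in for lxml.html (the filtering rules - skip offsite links, skip bare `#anchors`, normalize relative hrefs against the linking page's directory - are my reading of the description above):

```python
# Sketch: walk <a href> links from index.html and collect the set of
# in-tree paths that are reachable. stdlib HTMLParser stands in for
# lxml.html so the example has no external dependencies.
import posixpath
from html.parser import HTMLParser
from pathlib import Path
from urllib.parse import urlsplit


class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)


def walk_links(root_dir, start="index.html"):
    """BFS from start; return the set of in-tree paths seen via links."""
    root = Path(root_dir)
    seen, queue = set(), [start]
    while queue:
        rel = queue.pop()
        if rel in seen:
            continue
        seen.add(rel)
        page = root / rel
        if page.suffix != ".html" or not page.is_file():
            continue  # reachable, but not a page we can parse further
        parser = LinkExtractor()
        parser.feed(page.read_text())
        for href in parser.hrefs:
            parts = urlsplit(href)
            if parts.scheme or parts.netloc:
                continue  # offsite link
            if not parts.path:
                continue  # pure "#anchor" on the same page
            # normalize relative to the linking page's directory
            target = posixpath.normpath(
                posixpath.join(posixpath.dirname(rel), parts.path))
            if not target.startswith(".."):  # stay inside the tree
                queue.append(target)
    return seen
```

Diffing `walk_links(output_dir)` against the dependency checker's list of produced files is then one set subtraction: anything produced but never reached is a candidate "missing" file.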

So these confidence-building steps have gone in, they've built confidence appropriately, and the only reason I haven't switched DNS over is that I hang out with enough operations people that I Accept As Truth that I shouldn't do this right before bedtime :-)

Flipping the switch tomorrow...