Yay for hatemail

So following Florian’s chastisement of my “threatening” fellow Debian developers, Charles Plessy becomes annoyed by a bug in apt-file (its default configuration expects curl, but wget is what’s installed on most systems), at which point Luk Claes then starts threatening to NMU whether the maintainer likes it or not. Naturally that’s not the correct thing to do for a report the maintainer’s addressed and said is not a bug, 0-day NMU policy or not. Naturally, pointing this out brings Charles back into the fray, to complain further. A followup to that produces this off-list reply from Charles:

On Fri, Jan 13, 2006 at 05:13:15PM +1000, Anthony Towns wrote :
> I'm not disputing whether it's a bug or not, the maintainer is. If
> you are *helping* the maintainer, then fine: do an NMU.

Dear Anthony,

I would love to, but I am not a developer. And I am amazed to see that
more energy is spent in arguing rather than solving the problem. I hope
Luke will NMU this package and close those shameful bugs.

> In my experience you almost always get a better response from people if
> you assume they've got a good reason for doing what they have been doing,
> rather than just trying to add extra punctuation to your sentences.

> Admittedly, punctuation is pretty cool...

That kind of sentence reflects your inclination for ad-hominem attacks.
They poison the -devel list.

Best,

So yay for people who aren’t developers, in the n-m queue or maintaining a package, who don’t understand Debian’s processes, yet still think it’s great to pontificate about what Debian’s processes are, go on about how developers are “arrogant experts” and who think “punctuation is pretty cool” is a “poisonous ad hominem attack”. But what I hate most is people who think they’re contributing to Debian by mailing people privately to tell them how horrible they are. Gag.

For those playing along at home, the proper process to follow in a dispute with a maintainer is to bring it up to the technical committee, not to try forcing the situation, whether that be by reopening bugs or playing bug ping-pong. It’s really not complicated. It’s even documented (5.8.3 of the Developers Reference).

UPDATE 2006/02/06:

A random response here (the poster of which then sent me a personal email the next day), and exciting new developments here.

Let’s Talk About ‘Nix

Have you ever found yourself reading slashdot (what? you don’t read slashdot? well, play along) and coming across a banner ad (what? you block ads? well, play along) for the nth time and finally just, you know, snapping?

Sadly, today it happened to me. To the tune of a ditty by Salt n’ Pepa at that.

Let’s Talk About ‘Nix

(Yo, I don’t think they’re going to aggregate this on the planet;
And why not? Everybody hates ‘doze.
I mean everybody should be codin’ perl!
C’mon, how many guys you know codin’ perl?)

[Martin Fink tells it like it is]

(Punch it, Fink!)

Come on!

Chorus:
Let’s talk about ‘nix, baby
Let’s talk about gcc,
Let’s talk about all the GNU things,
And the FUD things that we see,
Let’s talk about ‘nix…
Let’s talk about ‘nix!

Let’s talk about ‘nix for now,
To the people at home or in the crowd,
It keeps comin’ up anyhow.
Don’t decoy, avoid or make void the topic,
Coz that ain’t gunna stop it!

Now we talk about ‘nix on the web and on the audio casts,
Everyone blasts, throwin’ out LARTs,
But we’ll tell it like it is, and how it could be —
How it was, and of course how it should be!

Those who think it’s boring, use your mice,
Shut down your browser, change sites,
Or break your internet link —
Will that stop us, Fink? I doubt it!
All right then! Come on, spin!

(Chorus)

Hot to slot, make any rack’s breaker pop,
They use what they got, to fake whatever they don’t got,
Admin’s drool but then again they’re only human,
These blades are a hit, and the market is boomin’.
Java, C, Pascal, crazy Fortran,
Nothing they run is ever dawdlin’!

Their fate: eliminate wait, actin’ in haste,
Genomes, finance, nothin is too great for these,
To be tasked, they’ll even surpass,
In a blur, they purr, you infer the gist.
And believe me you, it’s as good as true:
There just ain’t a job around that these beasties can’t do.

They’ve got it all in the bag, so they oughta be glad
But they’re mad and sad and feelin’ bad
Thinkin’ ’bout the kernel they never had.
No ‘nix, just ‘doze, followed next with a crack and a patch,
That new box is pwned… (pwned, pwned, pwned)

Take it easy…

(Chorus)

On the technical committee

And thus the entrapment's completed,
With the motion so newly anointed.
Wichert, Jason, and Guy? 
   They're deleted!
Steve, Andy and I? 
   We're appointed!

Inspired by Mr Srivastava:

<Manoj> congrats vorlon, aba, aj (*snigger suckers *snigger)

Clint throws down

A friend o’ ne’er do wells,
Got flashbacks to a time in the car;
This master of izzard shells,
Was headed for dinner, huzzah!

In a car with those friends did he sit,
Seeking something to consume.
Through the streets of San Fran’ do they flit,
Hunting the sacred legume.

But out the car leapt the driver,
And sprints to a restaurant he spies,
Like something out of survivor,
This teamwork, from lord of the flies.

For a passenger took up the wheel,
Drove round the corner and carefully parked.
The ending we’re about to reveal,
It simply cannot be left unremarked.

The story continues to run,
But the details have to be missed.
Suffice to say, when the regrouping was done,
There was but one tiny twist:

They continued their searching for food,
Leaving the rendezvous quiet and still.
Surely they must be unglued,
Walking away from the Earthly Grill?

All in all it comes to one fact,
After making that claim on your blog.
Your challenge you’ll have to retract,
Coz to this lyrical prince, you’re a frog.

Private Declassification GR Results

In late November, I did up a proposal to provide a way of making some of the interesting posts hidden away on the developer-only debian-private mailing list more public — mostly on the basis that secret discussions aren’t good for Debian, and that there’s some really fascinating discussions on topics which continue to come up, that I’d like to be able to refer to directly.

In the end, that didn’t pass, although we instead ended up with a compromise position whereby we won’t release any of the old posts to that list, only the new ones; and there won’t be any practical result from the GR for another three years.

Since this wasn’t a leadership election, who voted for what is now public, so we can do a few interesting analyses. At the most basic, there were five major categories of voters:

  • 89 people who wanted as much declassification as possible (V: 123)
  • 54 people who wanted as little declassification as possible (V: 321)
  • 50 people who would accept future declassification, but not past (V: 312)
  • 38 people who preferred future declassification (V: 213)
  • 37 people who didn’t want declassification, but had no other preference (V: 221)

The 30 voters who didn’t fit into one of the other categories ranked past declassification ahead of further discussion (19:2), and closed the gap between past and future disclosure (19:8). As it turns out, IRV (the preferential voting system used in Australia) would still have produced the same result, so I think we’ve yet to come up with an example where Condorcet acts differently.

An interesting question is whether this result’s a good compromise or not; the two most common sets of votes were exactly opposite, and the next most common was in favour of the compromise — which is exactly the right case for our voting system to produce a compromise result. OTOH, that means 202 of the 298 voters (a 2:1 supermajority) didn’t get their first preference and ended up dissatisfied. I wonder if, in a voluntary organisation, it makes sense to leave most people dissatisfied like that, or to take a polarised result and leave the people who end up strongly dissatisfied to move on to other things. I also wonder how that’ll look in a few years, or even a few months: at that point a tentative compromise on this issue might seem entirely sensible and mature, and any more radical action seem absurdly headstrong.

Speaking of compromise, one thing that’s interesting is what effect the various compromises made while the proposal was being discussed actually had. There were three significant ones that got incorporated into the main proposal:

In addition, an alternative proposal to only deal with future posts was made in response to concerns expressed by John Lightsey and MJ Ray. Of those, Don Armstrong voted in favour of declassification (123), and John Lightsey voted in favour of future declassification only (312), while both Manoj and Bernhard voted against declassification (321), and MJ appears not to have voted at all.

Of the sponsors of the main proposal, all voted for declassification (123); of the four sponsors of the alternate proposal who didn’t also sponsor the main proposal, two then voted against declassification (Gregory Norris, David Garza; 321), leaving only Neil McGovern (213) and John Lightsey voting for the proposal as well as sponsoring it.

I’m not sure what all that actually means, but it does leave me wondering how much point there actually is to trying to address the concerns raised by people on the lists. Which is a bit of a let down, since that’s what I find most entertaining about Debian.

As usual, the votes came in blocks: the first hundred in the first day and a half after the initial call for votes, the next fifty over the following week, another twenty-five after the second call for votes, and another fifty in the week following up until the final call for votes, which generated an additional seventy-five or so votes. So day one and two, and thirteen and fourteen accounted for 29% of the time, but 59% of the vote. It’d be interesting to see how well (or badly) a vote worked that only lasted four days, perhaps with some sort of “postal vote” arrangement so people who are going to be away can provisionally vote while discussion is still ongoing.

Three hundred voters was actually a pretty good turnout — coming to 30.8%. In spite of being mostly over the Christmas/New Year period, that beats the turnout for most of the other elections we’ve had. The constitution had a 24% turnout, the logo vote had 28.1%, the logo swap had 21.2%, the logo license had 21.5%, the condorcet vote had 19.9%, the allow changes to the social contract vote had 26%, and the editorial changes vote had 23.5%. That leaves the only votes to have higher turnout being the non-free vote at 53.1%, the sarge release followup vote at 43.6%, and the DPL votes which ranged from 50.6% (2002) to 62.2% (2000).

dak dsa

So the final implementation detail in the embargoing scheme is providing a tool to move stuff from the embargoed and unembargoed queues into the archive. The existing tool the security team use is called “amber” (after the inimitable Amber Benson). amber’s pretty simple: it takes a DSA number, and the .changes files you’re looking at; then asks for confirmation, accepts the packages into the archive, regenerates Packages and Release files, fills out a template advisory with details from the deb and mails that off, and uploads the files to ftp-master for inclusion in the next stable release.

There are a few problems with that. One is that it doesn’t allow for rejections. Another is that it doesn’t provide the security team with the opportunity to edit advisories while the packages are being prepared. Another issue is that the entire program has to run under sudo with full archive privileges.

Enter “dak dsa” aka “newamber”. The new tool aims to do more or less the same thing, but with a little more style. First, it provides a small interactive interface, so that processing an advisory now looks like:


$ newamber DTSA-25-1 smb4k_*.changes
Create new advisory DTSA-25-1? y
Advisory: DTSA-25-1
Changes:
 smb4k_0.6.4-0.0etch1_alpha.changes
 smb4k_0.6.4-0.0etch1_hppa.changes
 smb4k_0.6.4-0.0etch1_i386.changes
 smb4k_0.6.4-0.0etch1_m68k.changes
 smb4k_0.6.4-0.0etch1_mips.changes
 smb4k_0.6.4-0.0etch1_mipsel.changes
 smb4k_0.6.4-0.0etch1_s390.changes
 smb4k_0.6.4-0.0etch1_sparc.changes
Packages:
 smb4k 0.6.4-0.0etch1 (alpha, hppa, i386, m68k, mips, mipsel, s390, source, sparc)
Approve, [E]dit advisory, Show advisory, Reject, Quit? 

Choosing edit will grab a copy of the template and run vi — the template will only be filled out when you approve the upload though, since the values might change before then. Approve will do more or less what old amber did, though instead of mailing the filled-in advisory draft it’ll just leave it on the filesystem.

Of course, running vi (well, $EDITOR) generally means you can get a shell too, so running the command with full archive privileges is a bit much — at least if you’re trying to have any sort of granularity to your security regime, which was, after all, the whole point of this exercise. So instead of running the entire command as the katie user, “dak dsa” has to escalate its own privileges, in this case using sudo and specific options, such as sudo dak dsa -A -- foo.changes to approve foo.changes. Fortunately sudo and the apt argument parser are cooperative enough to allow “dak dsa” users to invoke “dak dsa -A -- *” as katie, and thus have only the very limited access we’re trying for.
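
For the terminally curious, the shape of that self-escalation looks something like the sketch below. This is mine, not the real dak source; the group name, paths and sudoers line are made up for illustration.

# A rough sketch (not actual dak code) of the re-exec trick: the
# unprivileged "dak dsa" front-end hands just the approval step back to
# itself via sudo, so only that one sub-command ever runs as katie.
# The sudoers entry it assumes would be something along the lines of:
#   %secteam ALL = (katie) NOPASSWD: /usr/local/bin/dak dsa -A -- *
import os

def approve_as_katie(changes_files):
    # "--" ends option parsing, so the .changes filenames can't be
    # mistaken for further options by either sudo or dak.
    argv = ["sudo", "-u", "katie", "dak", "dsa", "-A", "--"] + changes_files
    os.execvp("sudo", argv)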

Obviously the above is taken from the testing-security team — it’s the same source and i386 packages, recompiled on other architectures by the security.debian.org testing autobuilders. It’s shown up a few flaws in the autobuilding for etch: (a) the amd64 autobuilder isn’t active; (b) the arm buildd can’t seem to find its chroot in between running apt-get install and apt-get remove; (c) the s390 buildd only works if the source is on ftp-master; (d) of the five m68k buildds that will take packages for security.debian.org updates to testing, only two will succeed (a400t and poseidon). There’s also the notable problem that the chroots for the functional buildds have gotten out of date and that builds break somewhat obscurely as a consequence. One of the test updates is also failing on hppa due to space restrictions. And of course, the above list is after a chunk of other problems have already been fixed.

It’s worth noting that even if the above isn’t fixed for testing now, we’ll still need etch chroots for security.debian.org when we release, so those problems have to be dealt with at some point. And that the brokenness is the result of six months’ divergence from sarge; after a year and a half when etch releases — or the three years between woody and sarge’s release — it’s probably fair to expect worse breakage.

Anyway, that’s just about it from me on this topic. Micah Anderson from the testing-security group is currently checking out the unembargoed facility, and has redone a couple of DTSAs on security.d.o. So presumably those guys will start working out whether security.d.o is something they want to make use of, and if so, working out what changes/tweaks are necessary for that. Though, unsurprisingly, I also still have to do some committing to CVS…

Changing The Security Infrastructure

One of the most exciting things about working on Debian is that since it’s developed in the open, when you want to make changes everyone sees them, so depending on what you’re working on, the risk of breaking stuff can provide a real adrenaline rush, while you’re just sitting in your chair. So what better adventure sport could there be than hacking up Debian’s security support?

One idea that comes to mind: doing it with only a minimal idea how it works at the moment.

Prior to finishing the plan I mentioned previously, then, the task was to get a decent idea of what the actual code looked like, and perhaps more importantly, get the code on security.debian.org synchronised to the code in CVS (which had only recently been synchronised with the code on ftp-master). Naturally that was a two way sync, with changes specific to security having to be kept, and changes from CVS and ftp-master having to be merged in; in the end it looked something like:

        * Merge of changes from klecker, by various people

        * amber: special casing for not passing on amd64 and oldstable updates
        * amber: security mirror triggering
        * templates/amber.advisory: updated advisory structure
        * apt.conf.buildd-security: update for sarge's release
        * apt.conf-security: update for sarge's release
        * cron.buildd-security: generalise suite support, update for sarge's release
        * cron.daily-security: update for sarge's release, add udeb support
        * vars-security: update for sarge's release
        * katie.conf-security: update for sarge's release, add amd64 support,
        update signing key

        * docs/README.names, docs/README.quotes: include the additions

From the looks of things, the security changes had actually been synced to a slightly later version of CVS than the ftp-master changes, which had subsequently been lost, months ago — hence the miscellaneous changes to the READMEs.

Along with that came the initial changes in preparing to support the new security queues, namely:

        * Changed accepted_autobuild to queue_build everywhere.
        * Add a queue table.
        * Add a "queue" field in the queue_build table (currently always 0)

        * jennifer: Restructure to make it easier to support special
        purpose queues between unchecked and accepted.

These had already been rolled out on ftp-master for a little while, and had been working fine, so it seemed fairly safe. But hey, one thing at a time lets you at least limit the damage, if there’s going to be any, which of course there isn’t.

<Joey> dak@klecker is broken.

Or maybe there is. The error turned out to be:

File "/org/security.debian.org/katie/db_access.py", line 269, in get_or_set_fingerprint_id
projectB.query("INSERT INTO fingerprint (fingerprint) VALUES ('%s')" % (fingerprint));
pg.ProgrammingError: ERROR: duplicate key violates unique constraint "fingerprint_pkey"

Which is a relief — I hadn’t touched the fingerprint table, so it’s definitely not my fault. Of course, it also makes no sense: sure we want unique fingerprints, but that’s why we have a SELECT just before the above and only do the INSERT if we don’t already have that fingerprint. And indeed there’s no such fingerprint in the table, and running the INSERT statement by hand doesn’t work either. At this point I grabbed James “I don’t have a blog” Troup as the voice of experience, to see if he had any idea. He did have an idea, namely “maybe klecker ran out of space and the database is corrupt”, and proceeded to drop the database and restore it from the most recent backup. Sadly, that didn’t change anything. Poking around further, he noticed that the constraint above doesn’t actually limit the uniqueness of the fingerprint, but rather the primary key — in this case an autoincrementing id. Turns out somewhere along the line that had reset from around 140 back to 8, and the error was because there was already a fingerprint with id 8. It further turns out, as we discovered by poking through the regular database dumps to see when it had reset, that this happened mid-November, which was coincidentally when postgresql had been upgraded. The reason it hadn’t shown up for a few weeks since then, it seems, was that all the security uploads had been signed by keys already in the table — and the other autoincrementing ids that had been reset would only come into play when we next released, or added a new component or similar.
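
For anyone wondering what the eventual repair looks like, it’s basically just bumping the sequence back up past the rows that are already there; something like the following, using the same PyGreSQL handle as the traceback above. The sequence name here is a guess for illustration, not copied from the real schema.

# Sketch only: resync an autoincrementing id's sequence with the rows
# already in the table, after the dump/restore left it pointing at 8.
# "fingerprint_id_seq" is an assumed name for the sequence backing
# fingerprint.id.
projectB.query("SELECT setval('fingerprint_id_seq', (SELECT MAX(id) FROM fingerprint))")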

I’d like to emphasise the next point: all of this means that the problem was, demonstrably and conclusively, not my fault!!!

At around this point, clever as I am, I came up with a second way to make this more exciting, namely “tell the only security team member who’s actually talking to you that this is all taking too long and he’d better answer you now…. or else!”. I tell you, if you could bottle the heart-pounding effect of

<Joey> well, if you rather want to fuck up the security infrastructure, don’t expect the security team to love you.

drug authorities the world over would be declaring you enemy #1 and adventure sports junkies would be getting your face tattooed on their nipples. This is also the point at which I recall that I’m not, actually, an adventure sport junky. Oops.

Anyway, with the stakes thus raised, and fall guy determined, and more importantly, the dak install on security practically begging to be updated to actually include the real changes, we come to:

        * katie.py: Move accept() autobuilding support into separate function 
        (queue_build), and generalise to build different queues

        * db_access.py: Add get_or_set_queue_id instead of hardcoding accepted=0

        * jennifer: Initial support for enabling embargo handling with the
        Dinstall::SecurityQueueHandling option.
        * jennifer: Shift common code into remove_from_unchecked and move_to_dir
        functions.

        * katie.conf-security: Include embargo options
        * katie.conf-security: Add Lock dir
        * init_pool.sql-security: Create disembargo table
        * init_pool.sql-security: Add constraints for disembargo table

Along with some bugfixes in those changes and some other tweaks, that leaves us with the ability to upload a package into the “OpenSecurityUploadQueue” and have it go to the unembargoed queue rather than follow the embargoed uploads (currently still going into accepted), to then be autobuilt by the same buildds building embargoed uploads, and for the builds to automatically follow the source into the unembargoed queue if and when they’re authorised and uploaded.

Of course, following all that coolness, you then find in your mailbox:

Could somebody please explain us why not all security updates were uploaded to ftp-master? It is *REALLY* annoying having to veryfy whether files were uploaded or not and having to move files around manually.

This week, the following uploads were missing: […]

Sadly, at least for my blood pressure, this is one of the things I’d touched. Still, I hadn’t touched it that much, so it did seem a bit odd. Poking through the logs on ftp-master naturally showed that the uploads had been made… on the day Joey sent the above mail, all around the same time… and thus presumably by Joey himself, manually fixing the breakage before complaining about it. Not what I’m looking for. Looking at the security logs, though, shows no evidence of the package. In December, anyway, turns out the one I was looking at was an advisory for early November. Aha! That probably means it’s not my fault. Excellent. Looking through the November logs on ftp-master shows that an upload was made — but only for the sparc build, which naturally got rejected because the source wasn’t there. Same thing for another package. Oddly, the sparc upload for at least the first package had been delayed for a week compared to the other builds, presumably some buildd problem at the time. Nothing seemed obviously different between the changes files to make one get accepted and the rest ignored, but the fact it was the last upload getting through seemed odd — maybe the other changes files were just being ignored? But apparently not — running “amber -n” on some current changes files to see what would happen indicates all the debs and whatever getting uploaded. Oh well, let’s look at the code. Oh, hrm, the changes get uploaded at the end — which makes sense since you want all the stuff to be accepted at once, so you don’t mistakenly REJECT a build before the source is uploaded. And… ah.

if not changes.has_key(upload_uri):
    changesfiles[upload_uri] = [];
changesfiles[upload_uri].append(changes_file);

changes happens to be a real variable that stores the contents of the changes file we’re looking at, and never has the appropriate key, leaving changesfiles to be cleared every time the loop runs, and hence only containing the very last entry it was meant to.

The best part? Not only was it obviously a bug from before my changes, but adding redundant trailing semicolons to python code is one of Mr Troup’s serious problems, so putting those elements together, we come up with the following equation: 1 + 1 = not my fault!!!! Oh, also, pretty easy to fix once found, which is always nice.
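
(For those playing along at home, the fix amounts to checking the right dictionary; roughly the following, keeping the variable names from the snippet above, and with the infamous semicolons omitted.)

if not changesfiles.has_key(upload_uri):
    changesfiles[upload_uri] = []
changesfiles[upload_uri].append(changes_file)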

Unfortunately we’ve run out of time to go into the minor, indeed utterly trivial, problem (if you could even call it that!) involving updating the dak install on security to new code that requires a lockdir without actually creating that lockdir, and thus causing security uploads to get stuck in the unchecked queue and random “jennifer broke” cron spam to get sent to ftp-master and the security team every five minutes once something got uploaded… And anyway, as long as it got fixed, who really cares whose fault it was?

Waiting on DSA

A brief case study in a flamewar diverted.

The release managers are currently preparing a followup to the architecture requalification, following the efforts made over the past couple of months. Four ports are currently looking shaky, namely m68k, arm, s390 and sparc. An early draft begins going into the details by saying the following:

ARM has been borderline on a lot of the criteria, which we were willing to consider waiving; but there are two points in particular which we aren’t willing to waive. First is that the porting machine for arm, debussy.debian.org, has been off-line now for a number of months, and a replacement machine has not yet been activated by DSA. […]

Even without the already existing tensions on debian-devel and elsewhere, I’d expect the RM posting the above to the developers announcement list to immediately result in hue and cry about DSA trying to kill arm and how unfair and inappropriate that was. And I can’t even say that’d be unreasonable.

OTOH, I can say it would’ve been unjustified, enough so that the above won’t actually make it out to -devel-announce. Why? Read on past the fold.

The draft, of course, was based on the arm requalification page in the wiki, which at the time said the following in regard to a porting machine for arm:

Vince Sanders (Kyllikki) maintains a machine to which developers can be given specific access. Accessibility by developers is waiting for DSA to set up, at which point it will become jennifer.debian.org. In the meantime it is jennifer.kyllikki.org port 1022. Mail vince@debian.org to ask for an account.

europa.debian.org, a netwinder(233Mhz), hosted at Xandros, adminned by Woody Suwalski <woody@xandros.com>. Ex-buildd machine. Awaiting DSA action to make it general-developer access.

elara.debian.org, a netwinder(233Mhz), hosted at Xandros, adminned by Woody Suwalski <woody@xandros.com>. Ex-buildd machine. Awaiting DSA action to make it general-developer access.

(It also listed a couple of other machines that will likely be available soon, but they’re irrelevant for this)

The “Awaiting DSA action” notes were added a few weeks ago by Wookey (who shouldn’t be confused with the Woody mentioned above), probably to record a brief discussion about elara and europa that took place on the debian-arm list last month, initiated by the aforementioned Woody (not Wookey). Here are some key extracts:

I have a feeling that nobody has been using Elara and Europa since April.

Woody, Nov 03

Most urgently,arm port is missing a developer accessible machine. Basicly it would have sarge/etch/sid chroots debian developers could log into to debug problems in their packages.

Riku Voipio, Nov 04

For europa and elara, a word from James (former buildd admin and europa+elara admin) is required before the machines can be used for anything else. I don’t like hijacking machines only because he doesn’t respond.

Joey, Nov 04

Joey is Martin Schulze, who’s a DSA member among other things. The thread resumed later in the month:

To James: another 2 weeks have passed without any sign of life from you… Elara and Europa sit idle since April…

Woody, Nov 17

Last I heard, James was busy in Ubuntu Below Zero. Not sure how long that’s going to take, though.

Wouter Verhelst, Nov 18

The arm port badly needs a developer-accessible machine for people to use when they get arm-specific bugs. Does the hosting for your machines allow them to be used for this purpose (it has been suggested that this may not be permitted)? This would mean that all Debian developers with a key in the keyring had access to the box(es). If this is a problem then Steve McIntyre has offered to host and admin the machines for this purpose.

Wookey, Nov 22

And we’re up to the point of editing the wiki, and it’s pretty clear how we got there.

At this point I suggested that the release team might want to actually talk to James to see what was actually going on:

<aj> vorlon: sounds like the arm developer machine isn’t a terribly hard problem to fix, particularly compared to binutils support?
<aj> (also, sounds like we could use some uptime graphs for the buildd machines)
<vorlon> aj: in theory; actually, kylliki suggests that pb may be doing binutils upstream, but for some reason that’s not yet documented on the wiki, whereas the porter box being down has been known for months and nothing’s happened yet…
<aj> vorlon: right, but that’s something for DSA to fix, not the arm porting community afaics? have you had a chat with elmo about it yet?
<vorlon> no
<aj> vorlon: that’d seem to be the thing to do, then?
<vorlon> Vancouver was supposed to make it all better so I didn’t have to get involved with the details of porter machines ;P
<aj> vorlon: you’re getting involved with it now by posting to d-d-a
<aba> aj: I had a chat with someone from the DSA team about the developer machine, and result is the porters need to find out what happens with the “old” machines …
<aj> vorlon: i’d go for something like “there hasn’t been a decision yet made on which machine should be activated by DSA so all developers can automatically activate it; the release team are working on resolving this, and have waived this requirement in the meantime”
<vorlon> uh
<aba> aj: basically, I don’t think the release team should work on that – and besides, I see it as an no-op to waive this requirement

Andreas Barth is aba, Steve Langasek is vorlon, I’m aj. There’s a reasonable amount of stuff trimmed from that; I mostly wanted to capture the point that the RMs both wanted not to take on that role of facilitating communication. After all, communication’s an O(N^2) problem (or O(2^N) depending on how you count), and N’s already 1000 or more for Debian.

At this point I chatted with vorlon some more via private msg, and pinged James to see if he was around. I got a response ten minutes or so later noting that (a) elara and europa had been set up as buildds a couple of days ago, and, more importantly, that (b) the local admin, ie Woody, wasn’t willing to have them have logins for 900 developers, so they couldn’t be porting machines in the first place.

We passed this on to Steve, who it turned out had already been told about europa and elara, but with the wiki saying otherwise, and it being 6am localtime for him by now, hadn’t recalled that point. He then spoke to James about Kyllikki’s “jennifer” box, was told James knew nothing about it, and finally asked Kyllikki to mail DSA about setting it up. He also asked to be Cc’ed.

There are lots of things that could’ve been done to resolve this situation, but I think two are particularly relevant. The very first thing that could’ve been done was for Woody and Wookey (and the other -arm folks) to talk more to make sure they weren’t at cross-purposes on elara and europa’s suitability as a porting machine. At no point did anyone on -arm point out explicitly that a porting machine would have 900 or so accounts, and at no point was there an explicit indication that they could be porting machines — yet that ended up as a fact on the wiki anyway.

The second way to resolve it that I want to mention is what actually happened: interested bystanders who wanted to avoid a flamewar actively talking to relevant folks, and making sure that what we think’s going on is actually what’s going on, and trying to help resolve it.

On Joey on Permissions

At the end of an interesting piece on permissions, Joey Hess concludes:

I could give many more examples of subsystems in Debian that exist at different point in the spectrum between locked down unix permissions and a wiki. There seems to be a definite pull toward moving away from unix permissions, once ways can be found to do so that are secure or that allow bad changes to be reverted (and blame properly assigned). Cases of moving in the other direction are rare (one case of this is the further locking down of the Debian archive server and BTS server after the server compromise last year).

I think Joey’s focus on “unix permissions” is a bit misleading; you could make the same argument more generally by talking about permissive and restrictive regimes. Permissive systems have the obvious risk of vandalism in various forms, whether deliberate and malicious, like adding false information to wikipedia, or well-intentioned but low quality due to simple ignorance or carelessness.

On the other hand, restrictive systems have the somewhat subtler risks that good contributions will be turned away because the restrictions are more effort than they’re worth, and that the time it takes to administer the restrictions starts detracting from the time available to actually do the work in the first place.

So obviously there’s a tradeoff. But as well as that, there are ways of easing both risks: the risk of vandalism can be overcome if the stuff isn’t relied upon too heavily and can be changed relatively easily — so problems in unstable aren’t much of an issue since they can be fixed in a day, and don’t affect people running mission-critical systems because they should be using stable or testing or similar. Likewise, managing restrictions can be distributed too: the new-maintainer process is a good example of that, in that a range of people are involved in helping people pass through the qualification round, and in that sponsorship allows people who haven’t passed through the main lot of restrictions to get their contributions in by going through a different process.

One interesting approach, to my mind, is worrying less about permissions and more about space — so that different people with different ideas on how to do things can do them independently. That’s part of the idea behind usertags and usercategories: rather than having people try to find an imperfect compromise, let them work on the same stuff in the way they actually prefer. That reduces the risk of carelessness, in that you stop having any reason to bother other people, and also reduces the problem of restrictions, in that if you don’t have permission to work in someone else’s area, you can just setup your own area and work there.

Perhaps the worst problem is if the drawbacks feed on each other: a restrictive system turns away contributions, which causes prospective contributors to get frustrated and hence careless, which then reinforces the reasons that the restrictions were put there in the first place and diminishes the chance they’ll be reconsidered. That’s a hard cycle to break, but it’s not one where anyone really wins.

I’ll leave the last word to Joey:

Anyway, the point of this is that, if you survey the parts of dealing with the project where Debian developers feel most helpless and unempowered, the parts that are over and over again the subject of heated discussions and complaints, you will find that those are the parts of the project where unix permissions still hold sway. […]

The challenge, then is to find ways to open things up to everyone, without throwing security out the window. It has to be done a piece at time, and some of the pieces that are left are the ones most resistant to this change.

Security Infrastructure Changes

Delays suck.

I’m actually skipping ahead here; I’d meant to blog about getting the dak codebase into shape for whatever changes were coming up first, but it turns out I’m not in the mood for that. Besides which, most of this entry is pre-prepared. In the last few entries we went over the background for updating our security infrastructure to make it possible to have security team members that don’t worry about vendor-sec issues (process, overview, NIv2 aka NINJA, queue-build issues), all the while skipping fairly lightly over the question of what has to actually happen. That happens to be the way I prefer to work: lots of background tinkering, then spending as little time on actual implementation as possible; it has its drawbacks, but hey, I like it.

Anyway, the second last step is working out in detail what’s going to happen. The delay in question was trying to get the security team to sign off on it, but unfortunately they’re busy enough that that looks like it’ll be an indefinite delay, so I’m going with the “well, ftpmaster maintains the dak/katie infrastructure for the security team, so in the end it’s our call on what’s best anyway” line of thought, and just plunging in. Fortunately the current and final plan should make pretty minimal changes to how the existing security team operate anyway, even assuming a massive testing security operation happening in parallel with them — so the downside risk is already pretty minimal. (And it was, after all, developed in consultation with Joey, and then shopped around a few folks who’ve been doing security work for testing and kernels.)

What is that plan, you may well ask. It’s something like this:

First, let’s restate the idea:

Concept

Allow two teams to exist, the current security team as the “vendor-sec subteam”, and the full team which can expand to also include the testing-security people and others interested and able to help out with security work but who do not have access to vendor-sec.

Then, we want to think about the processes that are involved at a global level. First for uploads that can’t be made public until some later date, and can only be worked on by the “vendor-sec” enabled security team.

  • Source upload to queue/unchecked (as now)
    • Upload is authenticated etc
    • Upload is moved to queue/embargoed
  • Upload is autobuilt
    • Logs are sent to a procmail address
    • Logs are forwarded (by procmail) onto the vendor-sec subteam
    • Logs are signed and sent back
    • Binaries are uploaded to queue/unchecked
    • Binaries are moved to queue/embargoed
  • Uploads (source + binaries) are approved from queue/embargoed by running amber
    • Uploads are moved into the archive
    • The archive is mirrored
  • The approver issues an advisory

And likewise for issues that don’t need so much secrecy:

  • Source upload to queue/unchecked-disembargo (new directory)
    • Upload is authenticated etc
    • Source+version added to unembargo table
    • Upload is moved to queue/unembargoed
  • Upload is autobuilt
    • Logs are sent to a procmail address
    • Logs are forwarded (by procmail, after checking unembargo table) onto the full team
    • Logs are signed and sent back
    • Binaries are uploaded to queue/unchecked
    • Noting that the source is in the unembargo table, binaries are moved to queue/unembargoed
  • Uploads (source + binaries) are approved from queue/unembargoed by running amber
    • Uploads are moved into the archive
    • The archive is mirrored
  • The approver issues an advisory
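
Squint at the two lists above and the only real branch point is where jennifer decides which queue an upload belongs in. Here’s a rough sketch of that decision, not the real code; the helpers and the upload object’s attributes are made up for illustration, though Dinstall::SecurityQueueHandling is the option mentioned earlier.

# Sketch: route an authenticated security upload to the right queue.
# Cnf is dak's usual apt_pkg configuration object; the table helpers
# and the upload attributes are hypothetical.
def route_security_upload(upload):
    if not Cnf.FindB("Dinstall::SecurityQueueHandling"):
        return "accepted"                       # old, single-queue behaviour
    if upload.target_queue == "unchecked-disembargo":
        add_to_disembargo_table(upload.source, upload.version)
        return "unembargoed"
    if in_disembargo_table(upload.source, upload.version):
        return "unembargoed"                    # binaries follow their source
    return "embargoed"                          # default: keep it secret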

Processes are great, but they have to be supported by proper information handling, so it’s worth looking at what the above actually means for the on-disk structures, too. This is particularly important, since filesystem level security is mostly how we wall off vendor-sec stuff from other users of the system (whether they’re also working on security issues, or doing, eg, web page maintenance or something equally random). Voila:

queue/unchecked, queue/unchecked-disembargo
    world writable
    accessible only by katie
    contents automatically cleared every 15m

queue/embargoed
    writable only by katie
    accessible by vendor-sec subteam
    amber allows packages to be ACCEPTed
    [...] allows packages to be REJECTed

queue/unembargoed
    writable only by katie
    accessible by full team
    amber allows packages to be ACCEPTed
    [...] allows packages to be REJECTed

queue/accepted
    packages only sit in accepted briefly while amber is running

And finally, to sum things up, this is what a member of the security team should expect to actually do to release an advisory:

  • Upload package to queue/unchecked (or queue/unchecked-disembargo)
  • Wait for build logs to be sent to your address
  • Check and sign those build logs
  • Check the final packages in queue/embargoed (or queue/unembargoed)
  • Run amber over the packages to accept them
  • Fill in the template advisory generated and send it to the -announce list

The differences between that process and the current one used by the security team are pretty minor, ttbomk: currently uploads sit in queue/accepted not queue/embargoed, and currently the security team ignore the template advisories amber generates, preferring to make their own — aiui, this is mostly due to delays in actually receiving the advisory template since amber mails it instead of leaving it somewhere convenient on disk.

Anyway, that’s what the plan looks like. We should see how well it survives contact with the enemy, or at least reality, remarkably soon.

(Of course, I guess “political issues” could always foul it up completely anyway, particularly going by the fact that it’s been more or less the only problem raised by the folks I’ve passed this on to… Oh well, reconstruct that bridge when we get to it, I guess)

Queue Building

The biggest difference between the hypothetical security queues and the existing byhand and new queues is that they need to be autobuilt; it’s not really feasible to try saying whether a security update is acceptable if there’s still some possibility it’s not going to build on half our architectures. Additionally, it means that there are two queues that need to be built, rather than just one.

The way accepted autobuilding works isn’t too complicated, fortunately. There’s a directory where symlinks are made to uploaded packages and sources, and a Packages file (for all architectures) is generated, as is a Sources file, all of which are made available to buildds. They’re also added to the regular unstable Packages and Sources files and passed to quinn-diff and wanna-build. In order to avoid unnecessary breakage at transitional points (such as when a package moves from accepted to unstable proper), packages are kept in the accepted autobuild directory for a day or so afterwards. There’s also the issue of including the .orig.tar.gz from the archive along with the .dsc and .diff.gz from accepted for uploads that aren’t new upstream versions.

Security accepted autobuilding adds two extra wrinkles. The first is that packages being autobuilt can’t be generally visible to other users on the security host, which includes the web server that’s supposed to get the packages to the security buildds. So instead of symlinking, copies have to be made, and care has to be taken to ensure that doesn’t accidentally leak information in the process. The second trick is that security needs autobuilding of multiple suites (oldstable and stable, and hopefully testing as well), whereas for the regular archive, it’s sufficient to autobuild unstable packages from accepted, and leave the rest until the daily pulse run. This particularly means that uploads from accepted need to be moved into separate directories depending on the target suite, so that the buildds don’t accidentally build a package for woody against the libc from sarge.

At a gross level, generalising to supporting autobuilding of multiple queues is therefore straightforward: extend the “accepted_autobuild” table to include information of what queue the files are from (which implies a name change is appropriate), extend the configuration file to allow you to specify which queues should get autobuild handling for which suites, and finally add the code to actually do this when packages are moved into and out of the appropriate queues. Hurray for simplicity!
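
In code terms, the heart of it ends up with roughly this shape. A sketch only; the table columns, helpers and directory layout are simplified guesses rather than the real schema.

# Sketch: copy (not symlink) an accepted upload's files into the
# autobuild area for a given queue and suite, and record them in the
# queue_build table so they can be expired later.
import os, shutil, stat

def queue_build(queue_id, suite_id, queue_dir, files, projectB):
    for f in files:
        target = os.path.join(queue_dir, os.path.basename(f))
        shutil.copy(f, target)                          # copy, don't symlink: no leaks
        os.chmod(target, stat.S_IRUSR | stat.S_IWUSR)   # readable by katie only
        projectB.query("INSERT INTO queue_build (queue, suite, filename, in_queue) "
                       "VALUES (%d, %d, '%s', 't')" % (queue_id, suite_id, target))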

But there’s one little bit of complexity still to bother us. We know we need different directories and Packages files for different suites, but do we need them for different queues as well? Worse, we need different chroots on each autobuilder for each suite; do we need different chroots for each suite for each queue? The latter could well be a complete showstopper.

One reason why we might need to treat the queues we’re considering separately is that they have different secrecy requirements: embargoed updates can’t be visible to anyone, even people working on the unembargoed security updates in parallel, and yet the buildd has to forward its logs on to someone to be signed. If embargoed and unembargoed packages are built in the same chroot, by the same buildd, from the same queue, how do the logs magically go to the right person?

That’s not necessarily easily answered; currently the hope is that it’ll be okay to have a single shared queue and set of buildds and chroots, and have all the logs forwarded to a single address, at which point they can be redirected to the appropriate security team members by procmail. The assumption this is based on is that there’ll already need to be some magic to do that for the builds themselves once they’re uploaded and need to be moved into either the embargoed or unembargoed queue, so the same thing should be possible with the build logs.
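
That magic is pretty mundane once you write it down; roughly a tiny filter that procmail can call to check whether a given source and version has been disembargoed. A sketch, with assumed table and column names:

# Sketch of a helper procmail could invoke on an incoming build log:
# exit 0 if the named source/version is in the disembargo table (log
# goes to the full team), non-zero otherwise (log stays with the
# vendor-sec subteam).  Table and column names are assumptions.
import sys
import pg   # PyGreSQL, as used elsewhere in dak

def is_unembargoed(source, version):
    db = pg.connect("projectb")
    q = db.query("SELECT 1 FROM disembargo WHERE package = '%s' AND version = '%s'"
                 % (source, version))
    return q.ntuples() > 0

if __name__ == "__main__":
    if is_unembargoed(sys.argv[1], sys.argv[2]):
        sys.exit(0)
    sys.exit(1)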

Pending further surprises though, that’s enough to move on to the next phase, which is generalising dak to cope with arbitrary queues.

The NIv2 Plot

So having found time to catch up satisfactorily on the implementation, time to get back into the blogging.

After working out what you actually want to do, the next step in implementing stuff, in my book, is to make sure you fully understand the context of the stuff you’re trying to change. In this case, the “unapproved” queue concept comes from some thoughts from quite a while ago that come under the heading “NIv2” for “new incoming v2”. For reference, the original “new incoming” was the switch to separate queues for new, byhand, and accepted packages, which was followed by a notable update shortly after to make accepted autobuilding possible.

That setup works, but has a few problems. One is that packages sit in accepted for quite a while (up to a full 24h), and occasionally in that time it can become inappropriate to actually add the package to the archive — for instance if it’s a build of a package that’s then removed from the archive. This results in an UNACCEPT, and requires manual intervention, since potentially it can result in weird inconsistencies, such as bug closures. Another is that files in the various queues aren’t tracked in the database, which can result in files (particularly upstream tarballs) being lost or left behind, and also requires some unpleasant special casing in the code. A final concern is that one of the bigger time sinks in the daily pulse is the apt-ftparchive scan, which is actually just doing work that’s already been done in order to allow packages in accepted to be autobuilt.

Put all those concerns together with the ones mentioned last week, and you get the impetus for NIv2. The principle of that is to put stuff in the queue into the database properly, simplify it, extend the queuing support to deal with different queues, and move stuff from accepted directly into the pool every quarter hour, though only pushing to buildds that frequently, leaving updates of Packages files and mirror pushes to still be daily, or every few hours or whatever.

This is what that bit looked like in the dak TODO:

[hard, long term] unchecked -> accepted should go into the db, not a suite, but similar. this would allow katie to get even faster, make madison more useful, decomplexify specialacceptedautobuild and generally be more sane. may even be helpful to have e.g. new in the DB, so that we avoid corner cases like the .orig.tar.gz disappearing ‘cos the package has been entirely removed but was still on stayofexecution when it entered new.

Obviously, it needed to get defined a little more clearly than that to work out how much the changes for the security queues I’m interested in depend on the other changes, and what they actually mean in detail. After some IRC discussion we came up with more or less the following summary of what NIv2 actually means:

How NIv2 should work.

q/accepted goes away; once packages are accepted, they get added straight into the pool and the database. At this point they’re added to a micro-suite for the buildds, and so apt-ftparchive can cache the package contents; and a symlink tree is made up so incoming.d.o works. The micro-suite might or mightn’t be a fully fledged suite in the database. Presumably it wouldn’t be visible in dists/ though. The micro-suite probably should contain packages uploaded to unstable for X hours, where X is 48 or 2x pulse-interval or similar.

This means UNACCEPT is impossible and apt-ftparchive takes less time in cron.daily.

q/byhand and q/new get new friends: q/sec-embargoed, q/sec-unembargoed, q/stable-updates. Rather than checking for “byhand” entries, or a lack of overrides, uploads get diverted to those queues if they’re uploaded to security or stable, and get moved out of there when a security team member or stable RM (ie, Joey) runs an amber/lisa-esque command.

This means sec-embargoed and sec-unembargoed need to be autobuilt.

This also means that proposed-updates can be a full suite, and point releases can be a matter of (a) adding proposed-updates to stable; (b) removing old versions from stable; (c) removing packages that have to be dropped; and users can put “stable + proposed-updates” in their sources.list reasonably sanely.

This also means that REJECTs from accepted (security) or the archive (p-u) aren’t needed.

It means the queue ends up looking like:

    unchecked      [uploads go here, insecure]

    holding        [secured]
    new            [waiting NEW processing]
    byhand         [waiting byhand processing]
    stable-updates [waiting stable RM approval]

    done           [.changes files for accepted packages]
    reject         [.changes, .reason and rejected stuff]

    bts_version_track [data for BTS]

with q/accepted disappearing and q/proposed-updates and q/old-proposed-updates being replaced by the stable-updates / oldstable-updates queues.

One of the nice parts about this is that it breaks down fairly neatly into three sets of changes: supporting additional queues, removing the accepted queue, and tracking queue stuff in the database. It does make those changes more or less dependent though — the new queues need to be autobuilt in the way accepted is at present; so it makes sense to add the queues before removing accepted so as not to lose the queue autobuild support.

The above actually can be simplified a little further — instead of managing a microsuite for changes since the last update, it’s possible to add accepted packages directly to the real suite; and generate the Packages files needed for autobuilding by querying packages accepted to unstable in the last 24 or 48 hours, simply by adding a field to track upload time for binaries as well as sources.
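
To make that concrete, the query feeding apt-ftparchive would be something in the vein of the sketch below. The install_date column is the hypothetical new field just mentioned, the join is simplified compared to the real projectb schema, and projectB is the usual dak database handle.

# Sketch: find binaries accepted into unstable in the last 48 hours,
# to feed the buildd-only Packages generation.  Simplified schema.
recent = projectB.query(
    "SELECT b.package, b.version, f.filename "
    "  FROM binaries b, bin_associations ba, suite su, files f "
    " WHERE ba.bin = b.id AND ba.suite = su.id AND su.suite_name = 'unstable' "
    "   AND b.file = f.id "
    "   AND b.install_date > now() - interval '48 hours'")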

In any event, this thus makes generalising the queue autobuild support the next challenge.

Afraid of your Neighbour’s Disapproval

Continuing on from the SCC stuff, which I should probably get in the habit of calling the mirror split, then…

At the same time as I’ve been trying to work out ways to fit the mirror split stuff into the AJ market concept, I’ve been pondering on and off what can be done to improve the security situation. My thinking’s probably still biased by my long-time wish for official security updates to testing, but I’m somewhat inclined to think that if we can figure out a way of integrating the testing security team and the official security team we might end up closer to a solution — at the very least that’d provide us with a decent number of people who can actively help out with security updates that aren’t secret and get a few more people familiar with the technical aspects of how security updates are released, who’ll then have less of a learning curve if they end up being added to the security team proper. And heck, even if that doesn’t work, it at least gives people running testing an easier url to remember for security updates.

Of course, integrating the teams isn’t that easy, or it would’ve already been done. So after thinking about it, I had a chat with Joey (aka Martin Schulze, aka the guy who’s issued 317 of the 340 security advisories in the past 12 months) about the problems:

<aj> so, the other thing i wonder is what you think of the testing-security-support split?
<Joey> need more input
<aj> as in separate host, funny domain name, not available on security.debian.org/, extra effort to mirror, etc
<aj> i think it sucks :)
<Joey> there are a lot of sucking things…
<aj> it’s true
<aj> so, i figure the reasons for the split are (a) you don’t want to add a bunch of people to team@ and hence vendor-sec, and (b) there’s no way to make it so they can use dak-security without being full team/vendor-sec members; and maybe (c) you don’t want them officially associated with debian security coz they’re newbies
<Joey> a) and b) are true
<Joey> However, with security.debian.org being a push mirror, it would be possible to let it be pushed from a second source into a different directory, such as /testing-security/
<aj> hrm
<Joey> However, that would require all architectures to be supported and not just some.
<Joey> all as in “all that are in testing”
<aj> ah, that’s (c) then — they don’t do enough for the official debian stamp
<Joey> ah ok

Assuming those really are the key factors (which might or might not turn out true, but they seem likely) then that’s something that can be worked on. The first point is completely out of my domain — I’ve never been on vendor-sec, and it’s not likely something that ought to be tweaked much anyway; vendor-sec’s a very closed list by design, and for most security work it’s probably a distraction (testing security as it currently operates doesn’t concern itself with vendor-sec, AIUI); the second point is purely technical; and the third point is a combination of technical issues and an issue of training, both of which will hopefully be helped a lot by integrating testing-security and security.debian.org.

So the easiest point of attack is obviously the second one, since it’s “just” a matter of expanding dak’s featureset.

The problem here is embargoed fixes — the security team needs to be able to prepare fixes for vulnerabilities revealed on vendor-sec, but then keep them secret from everyone not on vendor-sec until the publication date. Which is fine, except that dak takes an all-or-nothing approach to that sort of secrecy: security uploads go into accepted, get built, and then the security team run “amber” to publish the update. But what that means is the accepted directory and its contents can only be visible to people with vendor-sec access, which means only people with vendor-sec access can do security updates.

But we do actually have a different queuing mechanism that allows us to keep some things secret, while not hiding everything: thanks to the crypto-in-main transition, the NEW queue acts like that, more or less, only allowing ftpmaster to see the contents of NEW packages until they’ve passed license inspections and the appropriate export notifications have taken place. And actually, we’d already considered extending that sort of behaviour to add an “unapproved” queue that would prevent uploads from entering proposed-updates without the stable release manager’s prior approval.

The idea, then, would be to have a couple of queues for the security team: an embargoed queue, accessible only to those with vendor-sec access, that would store embargoed fixes ’til they’re ready to be released, and an unembargoed queue, accessible by a wider group, for preparation of fixes that don’t need to be secret. Both queues would need to be autobuilt (and the build logs for embargoed uploads would need to stay restricted, while the build logs for unembargoed uploads would not), which is an added difficulty, since only accepted can be autobuilt presently. But otherwise, it seems basically plausible, both to myself and Joey.

Conveniently, Andrew Pollock‘s been interested in throwing some money through the AJ Market to see what happens, and since the initial idea we were tossing around fell through due to his move to the US, he’s put up the dosh to make the above happen. Well, strictly the dosh to make me make the above happen, or something.

Anyway, I’m already running behind both in implementation and blogging about it, so back to the grindstone…

(Okay, perhaps it’s not quite the feature Mark Twain would most like added to dak, but it’s still nifty.)

Hacking dak

Who can resist a good rhyme? Or a bad one?

So this round of dak hacking turned out to make the AJ Market scheme another notch more confusing; hence the delay in blogging, and the teaser in my last post. The issue leading to the confusion is that the major item on the list to start hacking on was SCC, which, unlike the projects I’ve undertaken so far, is more than a day’s hacking. In fact, due to the need to give mirrors a chance to adapt to the new system between it being implemented and actually used, it’s more of a multi-week task, and doing it as a one-day-a-week project would extend that into a multi-month task afaics. I guess that’s still better than never, but obviously it’s worth looking into alternatives.

Naturally, then, the first phase of a longer project like this is threefold. We’ll call it “the three P’s”.

Planning

My theory at this point was to come up with a plan for what to do, try figuring out how much work it’d take, and then see what sort of financial arrangements might be plausible: not involving me cutting a few weeks out of my life for spare change, but without making the whole thing an unleapable chasm from what the AJ Market’s currently managing either. I figured writing it up as a semi-formal proposal made the most sense:

Summary

Implementation of the outstanding mirror split proposal for the Debian archive to allow new architectures, particularly amd64, to be included in the archive.

Benefit

In spite (or perhaps because) of its simplicity, this project has been languishing for over two years, and is not currently being worked on; so at present it’s not even possible to estimate when it would otherwise be completed.

It is most notably preventing amd64 from being integrated into the normal Debian development environment, causing derived distributions to maintain amd64 specific patches themselves.

In the longer term, reducing the constraints imposed on the archive size may allow the introduction of additional suites, such as backports or volatile, as well as additional architectures; though significant further discussion on this would be needed.

Background

Since at least mid-2003 the Debian archive has been closed to new architectures due to the already large amount of space and bandwidth required to become a Debian mirror. At present, the archive uses some 158GiB of disk and sees around 1GiB of updates per day; additional architectures are expected to require approximately an additional 10GiB each, and there are likely around half a dozen architectures that will be considered for addition once the moratorium on new architectures is rescinded (including amd64, armeb, sh variants, kfreebsd and possibly partial architectures for arch variants such as s390x and ppc64).
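(Back of the envelope, taking those figures at face value: half a dozen new architectures at roughly 10GiB each is another 60GiB or so, which would push the archive from around 158GiB to well over 200GiB, before counting any growth in the existing suites.)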

The primary work needed to fix this involves:

  • ensuring the mirror network operates correctly when a majority of mirrors are partial; this reduces the impact on bandwidth and storage capacity (a rough sketch of what per-architecture exclusion might look like follows this list)
  • optimising portions of the archive maintenance software, particularly apt-ftparchive; this reduces the load on the archive server
  • providing appropriate guidelines on the qualification criteria new architectures need to meet in order to be added to the archive; this provides a limit on future increases, allowing growth to be appropriately controlled
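As a toy illustration of that first point, a partial-mirror helper might do little more than turn the list of architectures a mirror wants to carry into rsync exclude patterns for everything else. The architecture list and patterns below are purely illustrative; real scripts would also need to deal with sources, per-suite differences and the pool layout properly.

        # Illustrative only: build rsync --exclude patterns for the
        # architectures a partial mirror does not want to carry.
        ALL_ARCHES = ["alpha", "arm", "hppa", "i386", "ia64", "m68k",
                      "mips", "mipsel", "powerpc", "s390", "sparc", "amd64"]

        def exclude_patterns(wanted):
            patterns = []
            for arch in ALL_ARCHES:
                if arch in wanted:
                    continue
                patterns.append("binary-%s/" % arch)     # dists/.../binary-<arch>/ trees
                patterns.append("installer-%s/" % arch)  # debian-installer images
                patterns.append("*_%s.deb" % arch)       # packages in the pool
                patterns.append("*_%s.udeb" % arch)
            return patterns

        # e.g. a mirror carrying only i386 and amd64:
        for pattern in exclude_patterns({"i386", "amd64"}):
            print("--exclude", pattern)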

Actual work

I expect there will be six phases to the project:

  1. cleanup of the archive as it stands, and establishing a clear categorisation of its contents to define what a partial mirror by architecture or suite should officially contain
  2. providing appropriate scripts to ensure mirror sites can easily comply with the previously defined expectations for partial mirroring
  3. devise an appropriate structure for the new mirror network, that can easily incorporate existing mirrors, and coexist with the existing structure
  4. provide information on the new structure to both mirror admins and users; assist with the transition, and resolve any problems found
  5. ensure the archive management software is appropriately optimised, and that archive inclusion criteria have been debated and established
  6. add new ports that have passed the qualification requirements to the archive

In theory, a couple of days for each of those sounds plausible, making it twelve days of actual work (with a couple of weeks’ delay in between for mirrors to have some time to adapt to the new network). On the downside, twelve days at a day a week is over three months of real time, not counting the possibility of doing other things with the one day a week, or Christmas, or the aforementioned delay for mirrors. Yick.

So much for planning.

Preparation

So the next “P” is preparation. In this case that’s finally getting around to fixing dak CVS, which has been slightly broken since May. The extent of the actual breakage was just the loss of the ChangeLog history, aiui (or at least, that was the unrecovered breakage), but the result was months of uncommitted changes on both ftp-master and security (and reportedly in Ubuntu’s dak installation too). The changelog for the first set of commits (not counting buildd changes from ftp-master, security changes, or Ubuntu changes that haven’t made it to ftp-master) looks like:

        * tiffani: new script to do patches to Packages, Sources and Contents
        files for quicker downloads.
        * ziyi: update to authenticate tiffani generated files

        * dak: new script to provide a single binary with less arbitrary names
        for access to dak functionality.

        * cindy: script implemented

        * saffron: cope with suites that don't have a Priority specified
        * heidi: use get_suite_id()
        * denise: don't hardcode stable and unstable, or limit udebs to unstable
        * denise: remove override munging for testing (now done by cindy)
        * helena: expanded help, added new, sort and age options, and fancy headers
        * jennifer: require description, add a reject for missing dsc file
        * jennifer: change lock file
        * kelly: propogation support
        * lisa: honour accepted lock, use mtime not ctime, add override type_id
        * madison: don't say "dep-retry"
        * melanie: bug fix in output (missing %)
        * natalie: cope with maintainer_override == None; add type_id for overrides
        * nina: use mtime, not ctime

        * katie.py: propogation bug fixes
        * logging.py: add debugging support, use | as the logfile separator

        * katie.conf: updated signing key (4F368D5D)
        * katie.conf: changed lockfile to dinstall.lock
        * katie.conf: added Lisa::AcceptedLockFile, Dir::Lock
        * katie.conf: added tiffani, cindy support
        * katie.conf: updated to match 3.0r6 release
        * katie.conf: updated to match sarge's release

        * apt.conf: update for sarge's release
        * apt.conf.stable: update for sarge's release
        * apt.conf: bump daily max Contents change to 25MB from 12MB

        * cron.daily: add accepted lock and invoke cindy  
        * cron.daily: add daily.lock
        * cron.daily: invoke tiffani
        * cron.daily: rebuild accepted buildd stuff
        * cron.daily: save rene-daily output on the web site
        * cron.daily: disable billie
        * cron.daily: add stats pr0n

        * cron.hourly: invoke helena

        * pseudo-packages.maintainers,.descriptions: miscellaneous updates
        * vars: add lockdir, add etch to copyoverrides
        * Makefile: add -Ipostgresql/server to CXXFLAGS

        * docs/: added README.quotes
        * docs/: added manpages for alicia, catherine, charisma, cindy, heidi,
        julia, katie, kelly, lisa, madison, melanie, natalie, rhona.

        * TODO: correct spelling of "conflicts"

Ugh. Still, that’s enough to start work.

And the final “P”? Come on, be honest with yourself, you know what it’s going to be.

Procrastination

Okay, that’s not entirely fair; the irrelevant bit of work was actually on the TODO list before SCC (mostly because it was something I could get done reasonably quickly), and in fact was this line of the above changelog:

        * dak: new script to provide a single binary with less arbitrary names
        for access to dak functionality.

All the various model/actress names have been getting more than a little confusing recently, with almost forty in the dak suite, and another two dozen or so in use elsewhere — and then there’s the fact that the whole hot babes thing is both a bit offensive, and getting a bit old. OTOH, you need something to rename them to. We ended up deciding on the “version control solution” and introducing a “dak” command that’d launch all the different little bits of functionality depending on arguments, in the same way cvs, svn, tla, bzr, darcs etc do.

The implementation’s kinda neat: we have a list of commands (like “ls”) and their descriptions (“Show which suites packages are in”), along with the Python module and function they live in (“madison”, “main()”). That lets us avoid having to change any of the other scripts immediately, and lets “dak ls foo” work the same as “madison foo”. It also means that down the track we don’t need separate modules for each subcommand, and that we can rename modules and functions without affecting the user interface.
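In case that’s too abstract, the shape of it is roughly as follows; the table entries and the dispatch loop are a sketch rather than a copy of dak’s actual code:

        # Sketch of the "dak" front-end dispatch idea; entries are illustrative.
        import sys

        # (command, description, module, function)
        SUBCOMMANDS = [
            ("ls", "Show which suites packages are in", "madison", "main"),
            ("rm", "Remove packages from suites",       "melanie", "main"),
        ]

        def main():
            if len(sys.argv) < 2:
                for cmd, desc, _, _ in SUBCOMMANDS:
                    print("  dak %-4s %s" % (cmd, desc))
                sys.exit(1)
            for cmd, desc, module, function in SUBCOMMANDS:
                if cmd == sys.argv[1]:
                    mod = __import__(module)          # import lazily, only when needed
                    sys.argv = [cmd] + sys.argv[2:]   # hand the remaining args to the subcommand
                    return getattr(mod, function)()
            print("dak: unknown command '%s'" % sys.argv[1])
            sys.exit(1)

        if __name__ == "__main__":
            main()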

Of course, it also means that none of the internal scripts have been changed to use the new names yet, leaving the new interface a bit underused, but hey, “dak ls” at least manages to be one character shorter than “madison”, so that’s a win!

To be continued…

Designing Intelligence

With the recent Kansas Board of Education decision and the results in the Dover, Pennsylvania Board of Education elections, the Intelligent Design debate seems to be all the rage. It’s not really that interesting a debate, mostly being a rerun of the standard evolution versus creationism stuff with some new catchphrases. Even as a religious debate it’s not really terribly interesting from what I can see; the idea that God created the universe, then allowed it to evolve to where it is today seems entirely satisfactory, and doesn’t even undercut the Bible as much as considerations like “why does God let evil exist?”

But to my mind, the real fun in religious debates is in accepting all the premises, and seeing where that really leads.

So let’s forget the fossil record, genetic science, knowledge about breeding animals, and whatever else that might be relevant, and think instead about Platonic ideals of God. If God is all that’s good, all that’s wise, all that’s loving and caring, and Man is made in His image, what does that mean for the question of whether we evolved from bacteria and apes or appeared fully formed, ejected from Eden? If you or I were perfectly loving, perfectly wise, knew everything and could do whatever we wanted, and what we wanted was to make beings in our own likeness, what would we do?

So God created man in his own image, in the image of God created he him; male and female created he them.

Genesis 1:27

And for reference, that’s a justifiable line of argument: if we are indeed made in God’s image, however flawed and imperfect we might be, then when we look at what is best and brightest in our souls we must see the image of God, however blurred. For what else would we see?

Well, we don’t actually have to be omnipotent to try for that; we just have to be virile and fertile, and indeed people have children every day. And at their best, parents do many things for their children: provide for their well-being until they can look after themselves, pass on knowledge and wisdom so they don’t have to face the world in ignorance and stupidity, help them grow strong and fit, provide challenges and support, and gradually introduce them to the problems of the real world, until eventually acknowledging them as equals, adults themselves whose opinions and worth are equal to our own.

If anything, that in fact understates things. As adults your children shouldn’t be your equals; they should be better: wiser, kinder and more able. They should benefit not only from your knowledge, but build on it; it’s not for nothing that we expect each Olympics to involve records being broken, or every generation’s average IQ to be higher than that of the previous. At their best, debates like that over Kyoto don’t question whether the highest priority is to provide a better world for the next generation, but rather whether that’s best achieved by preserving the environment as it is, or by developing the economy so that we have the knowledge, the ability and the resources to better protect and recover it in the future.

For he established a testimony in Jacob, and appointed a law in Israel, which he commanded our fathers, that they should make them known to their children:

That the generation to come might know them, even the children which should be born; who should arise and declare them to their children:

That they might set their hope in God, and not forget the works of God, but keep his commandments:

And might not be as their fathers, a stubborn and rebellious generation; a generation that set not their heart aright, and whose spirit was not stedfast with God.

Psalm 78, 5-8

And that’s the point at which arguments for strict creationism seem to fall apart, to me. Certainly, creating a person from the void would be a grand achievement, but loving parents share the glory of their children’s achievements, and if creating mankind were a grand achievement for God, would it not be a grander achievement for a monkey? And for a loving parent, would it not be more loving and more joyful to watch your child make that triumph? Would your child’s flawed work not inspire greater wonder and pride, even than the more perfect form you might have created yourself? And as a grandparent, would your joy not grow further as you watched your children’s children grow and mature and achieve new heights?

If one were to imagine a bacterium made in God’s image, would you really expect it and its children to achieve anything less than what you see around you?

Which is more awe inspiring, more Godly? Creating the world, or creating a single cell that can create the world?

Another parable put he forth unto them, saying, The kingdom of heaven is like to a grain of mustard seed, which a man took, and sowed in his field:

Which indeed is the least of all seeds: but when it is grown, it is the greatest among herbs, and becometh a tree, so that the birds of the air come and lodge in the branches thereof.

Matthew 13:31-32