For Pete’s Sake

I was going to blog something, what was it? Oh yes, some musings on Mr Costello. He gave an interview with Ray Martin on 60 Minutes on Sunday, which didn’t really say anything new — Ray tried to make it sound like there was some chance this was all some devious plot and Costello was really leaving himself open to become leader of the Liberals for the next election — and it is politics, so who really knows — but for my money, it’s going to take a miracle for him to do anything but spend another half year or so on the backbenches, then leave parliamentary life for the private sector.

Probably the most telling exchange was concerning former Treasurer and PM, Paul Keating:

RAY MARTIN: We come back to the spineless, the gutless, that have been thrown at you – why didn’t you do a Keating?

PETER COSTELLO: Well, first of all, I wouldn’t want to be compared in anything with Paul Keating,

RAY MARTIN: He had a go.

PETER COSTELLO: Well, I think he also illustrates that someone can get consumed by bitterness.

RAY MARTIN: He got the job.

PETER COSTELLO: Keating, Keating was a very lucky man.

Apart from partisan differences, I don’t think there’s a criticism you could make of Costello’s electability that didn’t apply equally or more so to Keating in ’92; yet Keating got the top job, and Costello never got the chance. And while Costello may never have had the numbers in the party room, well, Keating didn’t either, until he’d challenged, lost, and dropped off to the back benches for a while, letting the rest of the party stew in their own juices.

As far as I can see, had Costello done that (well) prior to last year’s election, the Liberal caucus would’ve been skittish enough to have pushed Howard into retiring, and whether we’d have Costello as PM would’ve just been a question of how much people liked Kevin ’07 compared to Fightback! in ’93.

But that’s not the way it went, and the only reason I can see is that Costello put loyalty (presumably to the party, rather than the PM) over personal ambition. Maybe that counts as gutless. Heck, it might even ultimately count as selfish — if you view what happened as meaning the party lost out (not to mention John Howard’s legacy), simply because Costello didn’t want people calling him disloyal. But either way, it seems counter-intuitive that we seem to end up rewarding personal ambition over loyalty.

It’s similar when you look at other PMs, as far as I can see: Howard doesn’t come off as spectacularly loyal (except to his wife), but was certainly ambitious; Hawke similarly seemed to break faith with Keating, and depending on your point of view, broke with Labor traditions somewhat in privatisation, co-operating with business, and reducing strikes.

And maybe that’s how you get disloyalty as a good thing for a politician — you’re not elected to represent your party, after all, but a whole bunch of Australians, and if being loyal to your party is one of your priorities, maybe that’s evidence enough that you’re going to stick to your party’s line even when it’d be better for the country to take a bold step. And on the other hand, if you’re not loyal to your party (which will almost certainly have done a lot for you by the time you’re running for Prime Minister in Australia), your loyalty in general is probably going to be pretty questionable.

Anyway, to finish off, some random links to an extract from Costello’s memoirs, and Paul Sheehan’s take on matters.

Wow

So, either a black US president, or a female VP. Nifty. And not just because the latter nomination won me $4950 odd inkles.

Faith

Faith is an interesting concept to try to disentangle from religion.

faith
    n 1: a strong belief in a supernatural power or powers that
         control human destiny; "he lost his faith but not his
         morality" [syn: {religion}, {faith}, {religious belief}]
      2: complete confidence in a person or plan etc; "he cherished
         the faith of a good woman"; "the doctor-patient relationship
         is based on trust" [syn: {faith}, {trust}]
      3: an institution to express belief in a divine power; "he was
         raised in the Baptist religion"; "a member of his own faith
         contradicted him" [syn: {religion}, {faith}, {organized
         religion}]
      4: loyalty or allegiance to a cause or a person; "keep the
         faith"; "they broke faith with their investors"

Without religion and supernatural powers, you immediately lose the first and third definitions, and if you’re just left with “complete confidence” and “loyalty”, it seems like you’re missing out on a lot.

Faith seems, to me, to be somewhere in between belief and certainty — you can believe things without having faith in them, but you can’t have faith in something you don’t believe; and if you’re already certain, then you don’t need faith. Faith is obviously something you can lose, and it seems to be something you need if you’re going to be religious.

One of the ways in which you seem to need faith for religion is in unprovable beliefs, like “everything happens for a reason”, because no matter how many times you can find the silver lining to a cloud, there’s always a chance you won’t be able to find one the next time, and even if there really is one, it’s only through faith that you can hold onto that belief.

Of course, the only time you want to do that is if the belief is actually true (in spite of the contradictory evidence that caused you to doubt in the first place), or if holding the belief is at least helpful. In the cases where it’s not, well, that’s what the term “misplaced faith” is for.

I think, then, that leaves faith as something like “the ability to act on (or in accordance with) your beliefs, despite having doubts about whether they’re valid”.

Which from one perspective seems really scary (if you’ve got doubts, shouldn’t you resolve them first?), and from another seems simply appropriately cautious and anti-fanatical (better to act being aware of doubts, than to feign certainty, or to give in to inaction).

On Chaos

I’ve been a fan of chaos theory and emergent order for a long time now — the idea that simple rules repeatedly applied build complex and creative results is just beautiful to me, and seeing the same principles apparently apply in different areas — from evolutionary theory to the Wisdom of Crowds — for equally astounding results is outright awe inspiring. Ultimately, in various ways, it hits all my buttons — complicated maths, relevance to lots of different things, egalitarianism and making cool things happen; and as a result of all that, these days it’s a fundamental part of my belief set.

As should be the case with all good beliefs, that’s been challenged lately in a variety of ways. Debian, for example, has had a real mean streak for a while that just doesn’t mesh with my sense of the sort of good result that should spontaneously appear from a bunch of generous people working together. Likewise, Wikipedia lately seems to be suffering increasingly from something like censorship by bureaucracy rather than the wild free-for-all that somehow produces something better than the best experts. You could pretty easily call those errors of taste — maybe they’re good results, and I just happen not to like them — but I don’t think it’s a big call to think otherwise, and it’s certainly not a big call for me to prefer results I happen to like. And, of course, you could just call it an over-generalisation: maybe random anarchy is good to a point, but only to a point.

Ubuntu is a fascinating argument for that, both because it’s had obvious results in a short amount of time and because it shares a lot in common with Debian. Some things in common: both Debian and Ubuntu are trying to release a top notch, multi-platform, free, Linux based operating system; both are based on the deb package format, dpkg and apt; both had a large number of key developers in common over 2005-2006; both are based on almost exactly the same source. The key differences, then, seem to me to be these: Ubuntu is younger; Ubuntu spends money; Ubuntu relies on proprietary software; and Ubuntu has a self-appointed benevolent dictator for life.

From what I can see, one of those has to be responsible for the differences between Ubuntu and Debian. There might be other reasons in between them — perhaps it’s Ubuntu’s code of conduct or marketing nous or some particular feature set; but as far as I can see Debian could have done any or all of those things — the question is why it didn’t (or why it hasn’t). And the only ways Debian differs from Ubuntu that it can’t change, that I can think of, are the ones above: it’s an older project, maybe that stops it from being as fun; it’s dedicated to free software so maybe that stops it from building a site as slick as Launchpad; it’s reluctant to spend money; and it’s somewhere between a democracy and an anarchy, but there certainly isn’t anyone who could be called dictator for life, benevolent or otherwise.

If it’s just the age of the projects that’s the issue, that’s not very interesting. There have been lots of newcomers through the years that have grabbed attention from Debian folks then levelled off (or died): Progeny, Storm Linux, Gentoo, Lindows and others, and if it’s just that one of them’s finally come along that’s going to level off higher than Debian, that’s no big deal. But that doesn’t seem all there is to it, to me: it seems like there’s a much bigger difference than that can explain. And likewise for Launchpad; I’ve no doubt there are some benefits from keeping it proprietary, but if it were freed tomorrow or two years ago, I don’t think it would change the state of the distros substantively. Maybe I’m wrong, of course.

Maybe it’s money, in which case, well, that’s relatively easy to fix. There’s plenty of money in the world, and getting it into the right hands might not be a trivial problem, but it’s at least tractable. But on the other hand, I tried that in Debian, and it turned out a lot less tractable than I expected — so maybe even if it is a cause, it’s not a root cause, but another symptom, just like differences in marketing.

Which would leave it down to differences in leadership, and the conclusion that at least in this circumstance, democracy or anarchy is doomed to failure, when put up against a sufficiently clever and resourced dictatorship.

To be fair, that’s possibly true anyway — at worst, you could just keep adding resources to the potential dictator (and thus people they can hire, or things they can buy), or taking them away from your democracy and your dictator’s going to win eventually. But in this case, Ubuntu started with (well) under US$500M (the Ubuntu Foundation was founded with $10M), so it seems fair to say that it had no more than 100 to 1000 times Debian’s monetary resources (there’s about US$100,000 in Debian’s bank account most of the time, and DebConf tends to cost about US$50,000 each year). And conversely, Ubuntu started with only a handful of people, while at the same time Debian had hundreds of developers, and thousands of contributors.

So that leaves me with a quandary: the simplest explanation of Ubuntu’s success seems to come down to being a well-run benevolent dictatorship, but the thing that most excites me about the whole free software deal is the spontaneous emergence of order and quality from undirected contributions.

To quote Ian Murdock from a while ago:

Bravo, AJ. And keep at it. It takes a steely sort of person to stick their necks out like this, but strong leadership is absolutely essential to the success of any project, open source or otherwise. After all, success doesn’t just happen at random.

Where does your leadership go, when your entire aim is exactly the sort of success that does just happen at random?

Of course, that’s a simple explanation, but it’s not the only one possible, whatever Occam’s razor might have to say about it. One other is that it’s not that Ubuntu has a dictator that’s made it such a phenomenon, but that underneath that, it has a community that is actually more effective and vibrant than Debian’s. That’s a somewhat awkward claim to make though, because you can’t simply look at it today, but you need to go back right to the beginning, and say that even when it first came out of hiding, with only dozens of contributors compared to Debian’s hundreds or thousands, it had a better community — one that better empowered people at every level to participate and influence the project’s direction — than Debian did, and that that’s continued since.

That’s not without some plausibility though; Debian’s had plenty of problems accepting new developers, and even when there aren’t any problems at all, it doesn’t approach the evangelical nature of Ubuntu — whose very name, meaning “humanity to others”, implies outreach, and whose first goal is about converting others. The accessibility of the website, or the online blueprints and specifications and the travelling development conferences and sprints, also seem like they probably provide for better community input and interaction than anything equivalent Debian manages.

Assuming that’s true, a lot of those attributes are fairly simple results of having a new project that doesn’t have inbuilt resistance to those ideas, and a dictator who can make decisions even if there is resistance, and who generally manages to make good ones. There are some consequent limitations you’d expect, though; for instance that nobody will contribute as much to Ubuntu as Mark does/has. Fortunately for Ubuntu, in both dollar terms, and time/insight terms, that’s a pretty high benchmark.

The downside is that the form that conclusion takes is a good one for justifying all sorts of stupid, extremist positions in the presence of contradictory evidence. “What? It didn’t work? You just have to try harder!”

Oh, a PS. Google Trends is probably a better measure of people new to a topic, than total interest; though it’s obviously imperfect in lots of ways at that too. If you assume nobody ever gets bored of your topic, that works as the derivative of actual interest, but that’s not a great assumption either. Anyhoo, it’s also kinda interesting to see just how closely Fedora and Debian are tracking.

Nosce te ipsum

(Alternative title: I aten’t dead)

My name’s Anthony and, like a lot of other people, I suffer from depression. It’s not something I like to talk about — my normal philosophy is just to filter out that part of my life and just talk and think about the interesting and fun parts of life. The downside is that when it gets particularly bad, everything gets filtered out and I more or less just vanish.

Sometimes you see depression and similar things linked to creativity, pointing at various artists or thinkers throughout history, and saying “maybe without the suffering, they would never have achieved so much”; or occasionally you get the more pessimistic view: maybe smart people are depressed because the world’s cruel and that’s just the rational response. Ignorance isn’t just bliss, but in this life it’s the only way to be happy at all.

I’m not really in a position to critique those arguments — I tend to find them both fundamentally vile, to the point that any arguments I might make against them might just as easily be illogical justifications of pre-existing bias as anything sound. Which isn’t to say I haven’t been attracted to them — being able to say “this is just how anyone creative ends up, it’s not my fault” is a lot better than “not only do I suck, but I suck even more because I can’t deal with how much I suck”, and the idea that there’s a harmless way out — that you just have to have a different lifestyle and this will all get easier — can be pleasant too.

These days at least, I tend to look at it more like a hamstring injury; it happens, medicine might help you get better, and the only relationship it has to who you are is that you might be unlucky enough to be more easily injured than some people, and the harder you push yourself, the more likely you are to have some sort of injury. The difference, for me, is that with that view it becomes a risk that you can reduce, or some damage that you can probably eventually heal from, rather than the part of your personality that allows you to be who you are.

Getting medical help isn’t easy, or at least wasn’t for me. The first time I tried was in my last year of uni. I’d stopped being able to talk to anyone, I couldn’t see the point of anything I was doing, I’d not do assignments, or do them, then just not hand them in (which was… confusing). At one time at least, I couldn’t even make it more than a few steps out the door trying to go to class out of sheer existential terror — the blue sky and sunlight was just too much for me to take. At some point it was all more than I could handle and I went to see my local doctor; who ran me through the depression questionnaire of the time, diagnosed me with mild depression, and gave me a prescription for, I think, Effexor XR, which is an SNRI anti-depressant. Possibly at the same time, or maybe a little later, I also got a prescription for diazepam to help with the panic attacks I was having (“Do you have panic attacks?” “N… wait. Does randomly falling to the floor at the top of some stairs because life terrifies you count? I guess so then…”). I looked up both the drugs on the internet when I got home, finding out all the scary stories about increased suicide risk and other side effects from the anti-depressants, and finding out that diazepam is more commonly known as valium. Finding out I was the sort of person who took valium and anti-depressants was unpleasant in and of itself.

At some point, after a few months I guess, I gave up on the anti-depressants. I’m not entirely sure why; I still had some pills left, so it wasn’t just out of fear of going back to the pharmacy or the doctor to get more; I think it was just that I hated trying to use drugs to, as I saw it, modify my personality, and hated the implied admission that I couldn’t cope on my own. I stopped pretty much cold turkey, which as I understand it is one of the worst things you can do, but oddly I don’t think I ended up having any effects from the drug at all, either taking it or not.

That, and the original comment that it was just a “mild” depression, left me not really sure if it was even depression as compared to me just somehow being even lamer than I’d thought — how can it be depression if anti-depressants have no effect? And if I’m this hopeless — dropping out of uni, not talking to anyone for months or more — with just a mild depression or none at all, that makes me pretty pathetic, because there are plenty of folks out there with more major depressions or generally worse circumstances who deal with their lives better than I was doing with mine.

Of course, even if you’re letting your life slip through your fingers, you can’t just do nothing; least of all having just been used to being busy from doing double load semesters at uni. For me, that meant distracting myself by hacking on Debian stuff — which had the benefits that it was somewhat social without having to actually talk to anyone, that it was probably a useful contribution to the world, and that it was technically difficult. Lying in bed, afraid of the sunlight, not talking to friends, on drugs that didn’t seem to do anything, with a laptop and needing some way to distract myself from how horrible I was or some way to prove I wasn’t completely useless, is the other side of how testing and ifupdown came to actually be, right at the very start.

Since then, I gradually got to the point where I could deal with people again, and while I still regret it, not finishing my second degree or honours at uni became a fait accompli, and not something I had to keep stressing myself over. Which isn’t to say there aren’t a million other things to stress myself over, and my life since then has been more or less defined by struggling with that. Anything at all useful or clever you can ever do won’t be perfect, and any particular flaw could’ve been avoided if you weren’t a horrible person. Anyone who ever says anything nice, is only doing so because they’re just naturally kind and don’t really understand why their kindness is misplaced in this case, or is really only trying to manipulate you, or maybe both. On the other hand, anyone who says something mean or tells you how stupid you are, well, they might be right, but on the other hand, it’s hard to be really bothered because ultimately, they don’t know the half of it.

So I pretty much orbited there for a while, just taking a valium every now and then when I’d get too scared, and putting up with the bad times, and trying to make the good times worth the effort. Eventually that got too hard, and I went to the doctor to see about trying anti-depressants again. It was a different doctor in a different area this time, one that I was a lot more comfortable with (which is to say, not terribly comfortable), and the prescription this time was for Lovan, which, again when I looked it up at home, turned out to be an SSRI called fluoxetine, also known as prozac.

On the upside, it actually had an effect — after about a month or two of taking it I had what would probably have to be called a manic episode over about a weekend. Which sounds bad from the outside, and possibly is, but when you’re on the inside a couple of days of slightly crazy bliss and loving absolutely everything after years of just finding different reasons for being down is a really pleasant change. Fortunately, it was just a one off, and my mood settled back to having regular ups and downs, but without going anywhere near as far down, and weathering stresses with a bit more personal equanimity. It also meant that I really had been depressed, so even if I was coping badly, at least there really was something bad to cope with in the first place.

Anyway, blahblah, moving on, etc, getting my prescription renewed last year, the doctor suggested that having been on prozac for a while now, I should be trying to wean myself off it, and basically give my body a chance to handle naturally what the prozac was assisting. I tried that, and it worked fairly well — over a period of some months I dropped down to about a third of the dosage with no particular ill effects. With that having worked okay, rather than getting a renewal when that prescription ran out, I just stopped. There were a few withdrawal effects this time — some overly vivid dreams, and a weird feeling of dizziness that wasn’t quite dizziness every now and then, but that was about it.

Except that that wasn’t it for all that long. A bunch of stresses had been adding up over the past year or two, and by January I was more or less aching for some sort of a break, and not really being sure I was actually capable of anything really interesting anyway. The hackfest and some of the general chatting around lca was a real boost on the latter score, but every time I found chances to just do any of the less stressful things I thought I wanted, for various reasons, I found myself not taking them. I don’t really know where things go from there; at its simplest I found myself getting frustrated and confused, quit most of my roles in Debian, went back on Lovan, stopped talking to people, stopped doing more or less anything, then wrote a blog post. I think, maybe, I know what I’m going to choose for myself next, and I’m kinda looking forward to it, but I don’t think I’m quite ready to commit to a whole bunch of future stress yet either.

Anyway, like I said, it’s something I prefer to filter out, and there’s more than a little filtered out from the above too. So I guess in conclusion, for anyone wondering what I’ve been up to lately, the answer’s “nothing much”.

Inflation

There’s been a few major issues lately where inflation and central banks have been key elements: Zimbabwe’s collapse under Mugabe, the recent Australian Federal election, and the US sub-prime mortgage crisis. What I find fascinating is that it all seems to be treated as an absolute black art by almost everyone, in spite of economics supposedly being reasonably well understood these days.

It’s particularly weird, because at a fundamental level, inflation is absolutely trivial to deal with: if you don’t want inflation, stop printing more money; if you want to print more money, you’ll get inflation. Tin-pot dictators like Mugabe can’t manage that fairly simple recipe because it’s the only way they can ensure they have more money (and thus control) than their subjects — since they don’t create anything of value themselves no one gives them money by choice, and since they run their country into the ground, tax revenues don’t work well enough, leaving printing more money as the only way to replace whatever they’ve just wasted.

In Australia the situation’s under better control: the Reserve Bank controls the money supply, and the bank’s major “pet project” is to have “low and stable inflation over the medium term”. Which it’s been doing pretty well — since 1991, consumer-price inflation has averaged 2.5%, with a maximum of 6.1% — compare that to the period between 1978 and 1990, where the minimum inflation was 2.6%, the average was 8.3% and the maximum 12.5%. Go back earlier, and you get higher figures.

Yet despite that, there’s lots of fear and propaganda going on about how we’re somehow facing an inflation crisis, and, depending on your political leanings, that it’s either all the previous government’s fault and the new government will fix things; or how only the nous of the previous government kept it at bay, and the new government is going to make it even worse. But that’s all a load of nonsense because that’s just plain not what inflation is: it’s nothing more than a result of how much money’s floating around in the economy, and that’s under the control of the reserve bank. The government can move money from one person’s pocket to another, or transfer costs from some products to others, but that’s all it can do.

Inflation’s used as the boogeyman from the demand perspective too; like the story in the paper today, “Unions in rises push, wage talks spark inflation fears”. The theory being that once Alice gets a higher salary, her boss Bob will charge his customer Carol more to pay for it, then Carol will demand higher wages, and thus everyone will end up getting the same salary increase and paying the same extra costs and it’ll all be for nought: thus, inflation. But it doesn’t actually work that way — whoever gets in first gets an advantage, because they get the higher salary before everyone else starts charging more; but more importantly, at some point there just isn’t any more money and you can’t give Zack the higher salary he needs. And that means you don’t end up with inflation, you just end up with some people getting higher salaries, and other people having to go without. Maybe that means the rich getting richer and the poor eating dog food, maybe it means the workers standing up for their rights while others don’t spend as much on SMS bills voting on reality tv shows. But either way, it’s just a normal part of life, not a massive inflationary crisis.

Though apparently that’s completely beyond most of the folks in the press or government; which I guess is why it’s a damn good thing we’ve got an independent reserve bank managing the currency, and seeming to do a decent job of it.

What’s been more confusing to me is the different behaviours between the US central bank and the Australian one — there’s been talk of a “global credit crunch”, which should mean that nobody’s willing to lend money, and thus that supply and demand would dictate that you should be able to raise your interest rates — if no one else is lending money, then borrowers have to come to you no matter what you charge. And that’s the way things are going in Australia, and more or less the point of reserve banks — to be the lender of last resort.

The US, on the other hand, has dropped its rates — from 5.25% up until September 2007, to 2.00% at the end of last month. That’s saying “please, sir, can you borrow some more?” at a time when, supposedly, too many people have borrowed too much, and almost no one’s willing to lend more money. The Fed, however, is apparently happy to send good money after bad.

That in turn seems guaranteed to cause inflationary effects — of the sort where Alice gets more money without doing more work and until everyone else catches up, lives like a queen, until eventually everyone is back where they were, and the US dollar’s simply worth less than it used to be, or the Fed draws a line in the sand and stops pumping money into the economy, and Zack ends up paying the piper.

In a sense, that’s pretty much how “sub-prime” mortgages look: the Fed loans banks a lot of money at low rates, the banks have already given all the reliable borrowers as much as they can afford to repay so in order to make a profit have to give it to ever less-reliable borrowers, and that also means that if your borrower starts looking like defaulting there’s probably another bank that’ll refinance them. And everyone’s happy until suddenly Zack’s bank can’t find someone to refinance the loan again, and it all collapses.

I went to the library the other day and read most of Greenspan’s Bubbles, by William Fleckenstein. It basically documents a few decades’ worth of incompetence from Alan Greenspan, and lays at his feet both the dot-com bubble and the sub-prime crisis, with the conclusion that the Fed has become too willing to bail out Wall Street when it screws up, rather than letting the market punish firms when they make bad judgements; without which you lose the selection pressure for good decisions, and you’re setting yourself up for quite a fall… And given the Fed still seems to be bailing out…

On freedom

One of the freedoms I value is the freedom to choose what you spend your time on and who you spend it with. And while I’ve spent a lot of time arguing that people in key roles in Debian still have those freedoms (hey, 2.1(1), don’t you know), reality these days seems to be otherwise. But hey, solving that quandary just requires a mail to DSA.

To folks on the core teams I’ve been involved with: it’s been a pleasure and an honour working with you; if not always, at least mostly. Best of luck, and I hope y’all accept patches.

Jigdo Downloads

Last month we had a brief discussion on debian-devel about what images would be good to have for lenny — we’re apparently up to about 30 CDs or 4 DVDs per architecture, which over 12 architectures adds up to about 430GB in total. That’s a lot, given it’s only one release, and meanwhile the entire Debian archive is only 324GB.

The obvious way to avoid that is to make use of jigdo — which lets you recreate an iso from a small template and the existing Debian mirror network. I’ve personally never used jigdo much, half because I don’t usually use isos anyway, but also because the few times I have tried jigdo it always seemed really unnecessarily slow. So the other day I tried writing my own jigdo download tool focussed on making sure it was as fast as possible.

The official jigdo download tool, ttbomk, is jigdo-lite — you give it a .jigdo file, and the url of a local mirror. It then downloads the first ten files using wget, and once they’re all downloaded, it calls jigdo-file to get them merged into the output image. This gets repeated until all the files have been downloaded.

By doing the download in sequence like this, you miss out on using your full network connection in two ways: once during the connection setup latency when starting to download the next package, and again while jigdo-lite stops downloading to run jigdo-file. And if you’ve got a fast download link, but a slower CPU or disk, you can also find yourself constrained in that you’re maxing those out while running jigdo-file, but leaving them more or less idle while downloading.

To avoid this, you want to do multiple things at once: most importantly, to be writing data to the image at the same time as you’re downloading more data. With jigdodl (the name I’ve given to my little program), I went a little bit overboard, and made it not only do that, but also manage four downloads and the decompression of the raw data from the template. That’s partly due to not being entirely sure what needed to be done to get a speedy jigdo program, and partly because the communicate module I’d just written to deal with this sort of parallelism made that somewhat natural.
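
jigdodl itself gets its parallelism from the select()-based communicate module (described in a separate post on this page), but the core idea is simply to keep the network and the disk busy at the same time. Purely as an illustration of that overlap, and not jigdodl’s actual code, here’s a thread-based sketch; the part list and mirror urls are hypothetical placeholders:


# Illustration only: one thread downloads parts while the main thread writes
# whatever has already arrived into the image, so network and disk stay busy
# at the same time. Data that comes from the template itself is ignored here,
# and "parts" is a hypothetical list of (offset, url) pairs.
import threading, urllib2, Queue

def downloader(parts, results):
    for offset, url in parts:
        data = urllib2.urlopen(url).read()
        results.put((offset, data))
    results.put(None)                    # signal that we're done

def assemble(image_path, parts):
    results = Queue.Queue(maxsize=8)     # bounded, so we don't buffer the world
    t = threading.Thread(target=downloader, args=(parts, results))
    t.start()
    out = open(image_path, "wb")
    while True:
        item = results.get()
        if item is None:
            break
        offset, data = item
        out.seek(offset)
        out.write(data)                  # disk work overlaps the next download
    out.close()
    t.join()

Run a few of those downloader threads, and feed the decompressed template data through the same queue, and you end up with something shaped like the pipeline described above.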

In the end, it works: from wireless over ADSL to my ISP’s Debian mirror, I get the following output:

Jigsaw download:
  Filename: debian-40r3-amd64-CD-1.iso
  Length:   675477504
  MD5sum:   d3924cdaceeb6a3706a6e2136e5cfab2
Total: 679 s; d/l: 586 MB at 883 kB/s; dump: 57 MB at 57 MB/s          

Finished!

which is only slightly short of maxing out my downstream bandwidth, taking a total of about 11m20s. Running jigdodl with a closer mirror works pretty well too, though evidently some of my more recent changes weren’t so great, because I’ve gone from 9153 kB/s on a 100 Mbps link down to 7131 kB/s or lower. The CPU usage also seems a bit high, hovering at between five and ten percent at 900 kB/s.

For comparison, running jigdo-lite on the same file took 17m41s, which is about 566 kB/s, with the overhead being about 6m20s. What that means is if I doubled my bandwidth to about 20Mbps, jigdodl would halve its time for the download to about 5m50s, while jigdo-lite would still have about the same non-download overhead, and thus take 12m10s, which is still 69% of its original time. Going from 10Mbps ADSL speed to 100Mbps LAN gets jigdodl down to 1m31s (13% of the time, with optimal being 10%), while jigdo-lite would be expected to still be about 7m51s (43% of its original time).
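
Rounding aside, those projections are just the obvious arithmetic: treat jigdodl’s time as essentially all download, treat the gap between the two tools as fixed non-download overhead, and scale the download part with the bandwidth. A quick sketch of that, using the figures quoted above:


# Back-of-envelope for the projections above; the inputs are the measured
# times quoted in the text, and the overhead split is an approximation.
jigdodl_total = 679.0                 # seconds, essentially all download
jigdo_lite_total = 17 * 60 + 41       # 17m41s
overhead = jigdo_lite_total - jigdodl_total   # roughly 6m20s of non-download work

for speedup in (2, 10):               # doubled bandwidth, and 100Mbps LAN
    dl = jigdodl_total / speedup
    print "%2dx bandwidth: jigdodl ~%ds, jigdo-lite ~%ds" % (speedup, dl, dl + overhead)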

I suspect the next thing to do is to rewrite the downloading code to use python-curl instead of running curl, and thus download multiple files over a single connection, and to tweak the code so that it writes the file in order, rather than updating whichever parts are ready first.

Anyway, debs are available for anyone who wants to try it out, along with source in the new git source package format.

A New DPL…

In a couple of days, DPL-elect Steve McIntyre takes over as DPL, after being elected by around four hundred of his peers… Because I can’t help myself, I thought I might poke at election numbers and see if anything interesting fell out.

First the basics: I get the same results as the official ones when recounting the vote. Using first-past-the-post, Steve wins with 147 first preference votes against Raphael’s 124, Marc’s 90 and NOTA’s 19 (with votes that specify a tie for first dropped). Using instant-runoff / single transferable vote, the winner is also Steve, with NOTA eliminated first and Marc collecting 5 votes, Steve 4 and Raphael 2, followed by Marc getting eliminated with Steve collecting 50 votes, against Raphael’s 26.

So, as usual, different voting systems would have given the same result, presuming people voted in basically the same way.

NOTA really didn’t fare well at all in this election, with a majority of voters ranking it beneath all candidates (268 of 401, about 67%). For comparison, only 18 voters ranked all candidates beneath NOTA, with 9 of those voters then ranking all candidates equally. (In 2007, 312 of 482 voters (about 65%) ranked some candidate below NOTA, though that drops to 225 voters (47%) if you ignore voters that just left some candidates unranked. Only 98 voters (20%) voted every candidate above NOTA.)

With NOTA excluded from consideration, things simplify considerably, with only 13 possible different votes remaining. Those come in four categories: ranking everyone equal (17 votes, 9 below NOTA as mentioned above, and 8 above NOTA), ranking one candidate below the others (13 votes total, 7 ranking Raphael last, 3 each for Steve and Marc), ranking one candidate above the others (66 votes; 30 ranking Steve first, 18 each ranking Raphael and Marc first), and the remainder with full preferences between the candidates:

     70 V: 213
     63 V: 123
     56 V: 132
     52 V: 231
     38 V: 312
     26 V: 321

The most interesting aspect of that I can see is that of the people who ranked Raphael first, there was a 1.8:1 split in preferring Steve to Marc, and for those who preferred Marc first, there was a 2:1 split preferring Steve to Raphael. For those who preferred Steve, there was only a 1.1:1 split favouring Raphael over Marc.
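
If you want to check that arithmetic, a small sketch follows; the one assumption (made here so the tally above can be reused directly) is that the digits in each vote string are the ranks given to Steve, Raphael and Marc in that order, which is the ordering that reproduces the ratios just quoted:


# For each candidate ranked first, count which of the other two was preferred.
# Assumption: digit order in the vote strings is (Steve, Raphael, Marc).
tally = {"213": 70, "123": 63, "132": 56, "231": 52, "312": 38, "321": 26}
names = ["Steve", "Raphael", "Marc"]

for first in range(3):
    others = [i for i in range(3) if i != first]
    counts = dict((i, 0) for i in others)
    for vote, n in tally.items():
        ranks = [int(c) for c in vote]
        if ranks[first] == 1:
            preferred = min(others, key=lambda i: ranks[i])
            counts[preferred] += n
    a, b = others
    print "%s first: %s %d, %s %d" % (names[first], names[a], counts[a], names[b], counts[b])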

I think it’s fair to infer from that that not only was Steve the preferred candidate overall, but that he’s considered a good compromise candidate for supporters of both the alternative candidates (though if all the people who ended up supporting Steve hadn’t been voting, Raphael would have won by something like 26 votes (129:103) with a 1.25:1 majority; if they had been voting, but Steve hadn’t been a candidate, Raphael’s margin would’ve increased absolutely to 33 votes (192:159) but decreased in ratio to a 1.2:1 majority).

Select and Python Generators

One of the loveliest things about Unix is the select() function (or its replacement, poll()), and the way it lets a single thread handle a host of concurrent tasks efficiently by just using file descriptors as work queues.

Unfortunately, it can be a nuisance to use — you end up having to structure your program as a state machine around the select() invocation, rather than the actual procedure you want to have happen. You can avoid that by not using select() and instead just having a separate thread/process for every task you want to do — but that creates a bunch of tedious overhead for the OS (and admin) to worry about.

But magically making state machines is what Python’s generators are all about; so for my little pet project that involves forking a bunch of subprocesses to do the interesting computational work my python program wants done, I thought I’d see if I could use that to make my code more obvious.

What I want to achieve is to have a bunch of subprocesses accepting some setup data, then a bunch of two byte ids, terminated by two bytes of 0xFF, and for each of the two byte inputs to output a line of text giving the calculation result. For the time being at least, I want the IO to be asynchronous: so I’ll give it as many inputs as I can, rather than waiting for the result before sending the next input.

So basically, I want to write something like:


def send_inputs(f, s, n):
        f.write(s) # write setup data
        for i in xrange(n):
                f.write(struct.pack("!H", i))
        f.write(struct.pack("!H", 0xFFFF))

def read_output(f):
        for line in f:
                if is_interesting(line):
                        print line

Except of course, that doesn’t work directly because writing some data or reading a line can block, and when it does, I want it to be doing something else (reading instead of writing or vice-versa, or paying attention to another process).

Generators are the way to do that in Python, with the “yield” keyword passing control flow and some information back somewhere else, so adopting the theory that: (a) I’ll only resume from a “yield” when it’s okay to write some more data, (b) if I “yield None” there’s probably no point coming back to me unless you’ve got some more data for me to read, and (c) I’ll provide a single parameter which is an iterator that will give me input when it’s available and None when it’s not, I can code the above as:


def send_inputs(_):
        # s, n declared in enclosing scope
        yield s
        for i in xrange(n):
                yield struct.pack("!H", i)
        yield struct.pack("!H", 0xFFFF)

def read_output(f):
        for line in f:
                if line is None: yield None; continue
                if is_interesting(line):
                        print line

There’s a few complications there. For one, I could be yielding more data than can actually be written, so I might want to buffer there to avoid blocking. (I haven’t bothered; just as I haven’t worried about “print” possibly blocking) Likewise, I might only receive part of a line, or I might receive more than one line at once, and afaics a buffer there is unavoidable. If I were doing fixed size reads (instead of line at a time), that might be different.

So far, the above seems pretty pleasant to me — those functions describe what I want to have happen in a nice procedural manner (almost as if they had a thread all to themselves) with the only extra bit the “None, None, continue” line, which I’m willing to accept in order not to use threads.

Making that actually function does need a little grunging around, but happily we can hide that away in a module — so my API looks like:


p = subprocess.Popen(["./helper"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds=True)
comm = communicate.Communication()
comm.add(send_inputs, p.stdin, None)
comm.add(read_output, None, p.stdout, communicate.ByLine())
comm.communicate()

The comm.add() function takes a generator function, an output fd (ie, the subprocess’s stdin), an input fd (the subprocess’s output), and an (optional) iterator. The generator gets created when communication starts, with the iterator passed as the argument. The iterator needs to have an “add” function (which gets given the bytes received), a “waiting” function, which returns True or False depending on whether it can provide any more input for the generator, and a “finish” function that gets called once EOF is hit on the input. (Actually, it doesn’t strictly need to be an iterator, though it’s convenient for the generator if it is)
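
An iterator fitting that description doesn’t need to be anything fancy. Here’s a minimal sketch of something along the lines of communicate.ByLine() (the real module’s details may differ): it buffers the bytes it’s given, hands back complete lines when it has them, returns None while it’s still waiting, and stops once it’s been finish()ed and drained, which is what the read_output generator above expects:


# Minimal sketch of a line-based iterator for the protocol described above;
# not necessarily what communicate.ByLine() actually does.
class ByLine(object):
    def __init__(self):
        self.buf = ""
        self.eof = False
    def add(self, bytes):        # given whatever was just read from the fd
        self.buf += bytes
    def finish(self):            # called once EOF is hit on the input
        self.eof = True
    def waiting(self):           # True while we can't give the generator anything
        return "\n" not in self.buf and not self.eof
    def __iter__(self):
        return self
    def next(self):              # a complete line, None while waiting, then stop
        if "\n" in self.buf:
            line, self.buf = self.buf.split("\n", 1)
            return line + "\n"
        if self.eof:
            if self.buf:
                line, self.buf = self.buf, ""
                return line
            raise StopIteration
        return None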

The generator functions once “executed” return an object with a next() method that’ll run the function you defined until the next “yield” (in which case next() will return the value yielded), or a “return” is hit (in which case the StopIteration exception is raised).

So what we then want to do to have this all work is this: (a) do a select() on all the files we’ve been given; (b) for the ones we can read from, read them and add() to the corresponding iterators; (c) for the generators that don’t have an output file, or whose output file we can write to, invoke next() until either: they raise StopIteration, they yield a value for us to output, or they yield None and their iterator reports that it’s waiting. Add in some code to ensure that reads from the file descriptors don’t block, and you get:


def communicate(self):
    readable, writable = [], []
    for g,o,i,iter in self.coroutines:
        if i is not None:
            fcntl.fcntl(i, fcntl.F_SETFL, 
                        fcntl.fcntl(i, fcntl.F_GETFL) | os.O_NONBLOCK)
            readable.append(i)
        if o is not None:
            writable.append(o)
    
    while readable != [] or writable != []:
        read, write, exc = select.select(readable, writable, [])
        for g,o,i,iter in self.coroutines:
            if i in read:
                x = i.read()
                if x == "": # eof
                    iter.finish()
                    readable.remove(i)
                else:
                    iter.add(x)

            if o is None or o in write:
                x = None
                try:
                    while x is None and not iter.waiting():
                        x = g.next()
                    if x is not None:
                        o.write(x)
                except StopIteration:
                    if o is not None:
                        writable.remove(o)
    return

You can break it by: (a) yielding more than you can write without blocking (it’ll block rather than buffer, and you might get a deadlock), (b) yielding a value from a generator that doesn’t have a file associated with it (None.write(x) won’t work), (c) having generators that don’t actually yield, and (d) probably some other ways. And it would’ve been nice if I could have somehow moved the “yield None” into the iterator so that it was implicit in the “for line in f”, rather than explicit.

But even so, I quite like it.

Dak Extensions

One of the challenges in maintaining the Debian archive kit (dak) is dealing with Debian-specific requirements: fundamentally because there are a lot of them, and they can get quite hairy — yet at the same time, you want to keep them as separate as possible both so dak can be used elsewhere, and just so you can keep your head around what’s going on. You can always add in hooks, but that tends to make the code even harder to understand, and it doesn’t really achieve much if you hadn’t already added the hook.

However, dak’s coded in python, and python being an interpreted language with lots of support for introspection, that more or less means there are already hooks in place just about everywhere. For example, if you don’t like the way some function in some other module/class works, you can always change it (other_module.function = my_better_function).

Thus, with some care and a bit of behind the scenes kludging, you can have python load a module from a file specified in dak.conf that can both override functions/variables in existing modules, and be called directly from other modules where you’ve already decided a configurable hook would be a good idea.

So, at the moment, as a pretty simple example there’s an init() hook invoked from the main dak.py script, which simply says if userext.init is not None: userext.init(cmdname).

But more nifty is the ability to replace functions, simply by writing something like:

# Replace process_unchecked.py's check_signed_by_key
@replace_dak_function("process-unchecked", "check_signed_by_key")
def check_signed_by_key(old_chk_key):
    changes = dak_module.changes
    reject = dak_module.reject
    ...
    old_chk_key()


That’s made possible mostly by the magic of python decorators — the little @-sign basically passes the new check_signed_by_key function to replace_dak_function (or, more accurately, to the function replace_dak_function(...) returns), which does the dirty work replacing the function in the real module. To be just a little bit cleverer, it doesn’t replace it with the function we define, but with its own function which simply invokes our function with an additional argument on top of whatever the caller supplied, so we can invoke the original function if we choose (the old_chk_key parameter — the original function takes no arguments, so our function only takes one).
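
For illustration, the decorator itself can be quite small. This is only a sketch of its shape rather than dak’s actual code, and get_module() here is a stand-in for however dak really maps a command name like “process-unchecked” onto its module:


# Sketch of a replace_dak_function-style decorator; dak's real code differs.
import functools

def get_module(name):
    # stand-in for dak's own lookup of the module behind a command name
    return __import__(name.replace("-", "_"))

def replace_dak_function(module_name, function_name):
    def decorator(new_func):
        module = get_module(module_name)
        old_func = getattr(module, function_name)
        @functools.wraps(new_func)
        def wrapper(*args, **kwargs):
            # hand the original function to the replacement as an extra argument
            return new_func(old_func, *args, **kwargs)
        setattr(module, function_name, wrapper)
        return wrapper
    return decorator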

Right now, we don’t do much interesting with it; but that should change once Ganneff’s little patch is finished, which should be RSN…

Hopefully, this might start making it easier to keep dak maintained in a way that’s useful for non-Debian installs — particularly if we can get to the point where hacking it for Debian generally just implies changing configuration and extension stuff — then we can treat updating all the real scripts as a regular software upgrade, just like it is outside Debian.

The second half…

Continuing from where we left off

The lower bound for me becoming a DD was 8th Feb ’98 when I applied; for comparison, the upper bound as best I can make out was 23rd Feb, when I would have received this mail through the debian-private list:

Resent-Date: 23 Feb 1998 18:18:57 -0000
From: Martin Schulze <joey@kuolema.Infodrom.North.DE>
To: Debian Private <debian-private@lists.debian.org>
Subject: New accepted maintainers

Hi folks,

I wish you a pleasant beginning of the week.  Here are the first good
news of the week (probably).

This is the weekly progress report about new-maintainers.  These people
have been accepted as new maintainer for Debian GNU/Linux within the
last week.

[...]

Anthony Towns <ajt@debian.org>

    Anthony is going to package the personal proxy from
    distributed.net - we don't have the source... He may adopt the
    transproxy package, too.

Regards,

        Joey

I never did adopt transproxy — apparently Adam Heath started fixing bugs in it a few days later anyway, and it was later taken over by Bernd Eckenfels (ifconfig upstream!) who’s maintained it ever since. Obviously I did do other things instead, which brings us back to where we left off…

  • Crypto in main: It’s getting hard to remember the days when strong crypto software was classed as munitions and even getting a web browser to do SSL properly was a legal challenge; but for a long while that’s how it was, and as a result we had to keep all our security tools hosted separately to our main, US-based archive. The laws for that changed in early 2000, introducing a new exemption to the prohibition for software whose source code was available royalty-free (I think they called it “public source”). Unfortunately there were still drawbacks: it wouldn’t apply to some stuff in non-free, it wasn’t clear how far you had to go with the source code (is libssl’s source enough, or do you need the source for the apps that use ssl too?), and the biggest problem was you had to notify the export bureau and the NSA for everything you wanted to distribute.

    And all of that’s couched in the sort of confusing language you’d expect to find in, well, export regulations. So we ended up getting nowhere on merging crypto into the main archive for a long while because we couldn’t be confident of what we actually needed to do. We’d more or less already missed the opportunity to do the merge for the potato (2.2) release because it was frozen more or less exactly when the new regulations came out, and according to my mail archives Ben Collins (DPL at the time) got a team organised to review the legal issues by mid-May 2001.

    By July 2001, though, we still weren’t really getting anywhere; the lawyer we were talking to was looking at the ENC exception (which was for closed source stuff, and requires an active review by the BXA), instead of the TSU exception (which just requires notification). While it wasn’t terrible legal advice — following the ENC exception had a lower risk of misunderstandings on our part leading to problems later — it wasn’t useful in achieving our aims, ie, getting crypto stuff into main with a minimal level of ongoing effort, as well as a minimal level of risk.

    So moving onto August, we’d moved on from the lawyer we’d been using for this and other issues, to talking to a lawyer who specialised in export issues (and whose time was being covered by HP). That actually got us somewhere, with the end results being a matter of public record — we got the advice we wanted by November, we got a new incoming system so we could make sure we didn’t publish anything we hadn’t advised the NSA and BXA about, and most of all, we got cryptographic software in main for woody’s release. ssh at standard priority, apt by default checking archive signatures before downloading code to run as root, iceweasel coping with https:// urls out of the box all suddenly became possible, rather than artificially blocked due to pointless legal constraints. Yay!

    The way we notified BXA/NSA was somewhat interesting — we ended up notifying once for each package, when it passed through NEW: which is easy to automate, and limits the amount of software we block from review by other developers as best we can. OTOH, it also means a lot of notifications: the initial notification was a huge box of paper that got faxed off (there should be pictures somewhere on the net, but I don’t know where), and the vague hope that the ongoing reporting would not only comply with the regs, but manage to be something of a denial-of-service by compliance. As it turns out it apparently got noticed, with the following mail arriving in September 2005:

    From: Encryption <Crypt@bis.doc.gov>
    Date: Fri, 23 Sep 2005 12:51:02 -0400
    Subject: Addition to Debian Source Code updates
    
    Dear Mr. Collins,
    
    Your diligent reporting of updates to the Debian source code to the Bureau
    of Industry and Security has been an excellent example of perspicacity.
    However, you no longer need to do so per the following update to the EAR:
    
    [...]
    

    (The mails have always been sent with Ben’s name attached)

    That mail was followed by another suggesting a phone conversation; unfortunately nobody seemed to notice the mail, at least until a year later (to the day!) when Joerg pointed out he was getting bounces from the BXA email address we’d been using… (Since then, based on the above mail, we’ve been recording notifications, but not passing them on; I sent a reply to the mails above indicating that, but didn’t receive a response. There’s been about 5000 notifications since then)

  • Security and the architecture explosion: with crypto-in-main out of the way, along with lots of RC bug squashing, woody was looking good to come out in April or May, but unfortunately that all fell apart because of a lack of automation for handling security updates.

    Taking a step back, woody managed to introduce five new architectures compared to the previous release — HP’s pets, hppa and ia64; the IBM mainframe arch, s390; and the SGI/embedded mips and mipsel chips. That came just short of doubling our supported architectures (which had been i386, m68k, alpha, arm, sparc and powerpc), and was working really impressively well: testing was watching that they kept in sync, and the buildd network was keeping them all up to date with the latest source uploads, and the toolchain maintainers were making sure new kernels and compilers didn’t cause massive breakages. It was great fun to be a part of, and I’m not sure how much is just blurry memories, but from here, it sure feels like it was a golden age on that score…

    Anyway, all the extra architectures freaked the security team out, given they were already having to worry about ssh’ing to half a dozen machines, manually doing builds, and fixing them when they broke; to deal with that there’d been plans afoot for “rbuilder” to get rolled out, which never actually happened. All of which meant it escalated from being the security team’s problem, to a problem for everyone, particularly the release manager… The result of which meant knuckling down and coming up with a new security build infrastructure, which involved doing a dak install for the security archive, then building on the changes just rolled out for crypto-in-main to allow for packages to be autobuilt before actually being included in the archive, then updating official buildds for all the various architectures to target stable-security and testing-security and prioritise them and upload to the right place.

    But as stressful and last minute as that was, at least it worked, and woody made it out the door on July 20th 2002 — and it’s a good thing it worked, given woody ended up getting security support until June 30th 2006 — the usual year after its successor is released — for four years’ support. (The same rule applied to sarge has resulted in two years and ten months’ security support; eighteen-month release cycles in general should result in two years and six months of support)

    Since then, with security managed by dak, there’s been a few more bits I’ve been involved in. The most interesting of which (to my mind) was splitting the security upload queue from being completely secret and restricted to folks who are allowed to access updates prior to public announcement, into separate secret and non-secret queues. That was December 2005, and since then we’ve gone from having Joey Schulze basically being the sole person doing security updates to two separate teams, with ten new members. It’s not without its problems — particularly for packages that get (mis)classified as NEW, or that get lost on their way for inclusion in the next stable point update, but it’s nevertheless lightyears from where we were in April 2002, or even November 2005.

  • Playing other people for suckers: with woody released, the whole “redo the release process” thing was essentially done — it had been implemented, it had worked all the way to getting a release out, and in a fair few respects had improved things. Instead of the eight months of hand-approved uploads for the potato freeze, things had gone in automatically, albeit with gradually stricter constraints, right up until April; and even when the security problems delayed things, the manual updates only lasted a bit under three months. So at that point I was pretty happy, and looking forward to making things work a bit better and basically passing the buck to someone else.

    The first bit of buck passing that worked exceptionally well was jumping on Colin Watson, who’d been offering some debbugs patches on IRC, and after a delightfully brief discussion with some of the other debbugs folks, getting him added to the group and able to make changes directly on the site. His advogato post the next day was:

    Yesterday I was invited onto the group that administers the Debian bug tracking system. I can’t decide whether to be intimidated or excited. There’s a great deal to do: my first couple of major projects look set to be integrating some of the existing QA interfaces to release-critical bug tracking, and developing a tool to help us edit spam out of bug histories.

    There’s a lot to be said for grabbing people at the point where they’re still intimidated by what they’re joining; sure there’s a definite risk that they’ll break something you’ll have to clean up, but that’s pretty easy to deal with, and well worth all the extra excitement and energy that comes with a little bit of nerves.

    I did a talk on debbugs a few years later, at the 2005 DebConf, at which point we managed to entice Pascal Hakim and Don Armstrong into signing up, if I’m remembering right. At the same time Colin rolled out his implementation of version tracking support for debbugs (which I’d specced up a while ago, and helped design, but cannily avoided actually coding). Shortly after, Pasc put together bug subscriptions, and these days Don’s pretty much the primary maintainer of debbugs. How cool is that?

    Passing on release managership was a bit less fortuitous though; it’s a big job, and it’s not something you can offer “patches” for, generally. So after a bit of cogitating on just how to do it, I went with an open recruitment call in March 2003, and then a bunch of practical assignments to get the volunteers used to what RM work actually entailed and to see whether they actually liked it. The recruitment call was mostly focussed on all the negatives: half to avoid people volunteering to be RM for the power (to decide which release policy wins, to tell maintainers what to do, to drop packages, or just to decide what name/number the next release gets) rather than out of a desire to actually do the work to get the next release out and keep developers happy; and half on the theory that if you make a job look hard and technical, but not terribly risky, it starts looking interesting to the right kind of people.

    The assignments were generally two-parters — the first part introducing some aspect of release management the volunteers mightn’t be familiar with, and the second a bunch of problems that needed resolving and would exercise those skills. The first set was RC bug fixing, followed by working out why packages weren’t getting into testing, followed by dealing with packages with mutual dependencies and removing packages. At some point that evolved into the testing scripts accepting “hints” from the release team as to which packages to try in combination, or which packages to force out of testing (there’s an example of what a hint looks like at the end of this item).

    That turned out pretty successful, with Steve Langasek, Colin Watson and Joey Hess becoming official release assistants in August, and Colin and Steve replacing me as RM in mid-2004. It was also pretty satisfying to see Andi Barth do some more release assistant recruiting in 2005 following much the same model, and indeed using much of my mail verbatim. You can’t really ask for better than that, I figure.
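
    As for the hints themselves, they ended up as plain text files the release team edits: one line saying “try these together”, another saying “drop this one”. From memory the syntax looks something like the following, with package names and versions entirely made up, so treat it as flavour rather than reference:

    easy libfoo/1.2-3 foo-frontend/0.9-1
    remove brokenpkg/2.0-4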

Geez, this was meant to be briefer…

Been a while…

So, sometime over the past few weeks I clocked up ten years as a Debian developer:

From: Anthony Towns <aj@humbug.org.au>
Subject: Wannabe maintainer.
Date: Sun, 8 Feb 1998 18:35:28 +1000 (EST)
To: new-maintainer@debian.org

Hello world,

I'd like to become a debian maintainer.

I'd like an account on master, and for it to be subscribed to the
debian-private list.

My preferred login on master would have been aj, but as that's taken
ajt or atowns would be great.

I've run a debian system at home for half a year, and a system at work
for about two months. I've run Linux for two and a half years at home,
two years at work. I've been active in my local linux users' group for
just over a year. I've written a few programs, and am part way through
packaging the distributed.net personal proxy for Debian (pending
approval for non-free distribution from distributed.net).

I've read the Debian Social Contract.

My PGP public key is attached, and also available as
<http://azure.humbug.org.au/~aj/aj_key.asc>.

If there's anything more you need to know, please email me.

Thanks in advance.

Cheers,
aj

-- 
Anthony Towns <aj@humbug.org.au> <http://azure.humbug.org.au/~aj/>
I don't speak for anyone save myself. PGP encrypted mail preferred.

On Netscape GPLing their browser: ``How can you trust a browser that
ANYONE can hack? For the secure choice, choose Microsoft.''
        -- <oryx@pobox.com> in a comment on slashdot.org

Apparently that also means I’ve clocked up ten and a half years as a Debian user; I think my previous two years of Linux (mid-95 to mid-97) were split between Slackware and Red Hat, though I couldn’t say for sure at this point.

There’s already been a few other grand ten-year reviews, such as Joey Hess’s twenty-part serial, or LWN’s week-by-week review, or ONLamp’s interview with Bruce Perens, Eric Raymond and Michael Tiemann on ten years of “open source”. I don’t think I’m going to try matching that sort of depth though, so here are some of my highlights (after the break).

  • Starting small: my first package was distributed-net-pproxy, which would claim a bunch of work units that could then be distributed over the local LAN. Useful if you’re in Australia in the mid-90s and being charged by the minute for Internet access. distributed.net didn’t allow general distribution, so I asked for (and received) explicit permission to include it in Debian. (distributed.net had a ten-year anniversary last year too)

  • Diving straight in: I got sucked into the mailing lists pretty quickly, and within a couple of months was up to my neck trying to redesign the way we did releases; oddly enough that didn’t turn out quite so easy, but at least it ended up with some concrete proposals within a couple of months, pretty much synchronously with the hamm (2.0) release going out (and, I guess, about a year after I’d started using Debian…). At around the same time was, I think, my first recorded comment about trade-offs regarding freeness…

  • cruft: My first bit of Debian-specific code was cruft (that’s its name, not its value…), which worked okay as a prototype, but got snagged up in debian-policy trying to get packages to make a note of the files they’ll put on the system without telling dpkg (eg, /etc/passwd, /var/cache/apt/archives/*.deb, etc). That pretty much sapped my interest in it, and cruft stagnated until Marcin Owsiany picked it up in 2005.
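
    The basic idea is simple enough that a toy version fits in a few lines: compare what’s actually on disk with the file lists dpkg keeps. This is just a sketch of the concept, not cruft’s actual code (which also has to cope with exactly those files packages create behind dpkg’s back):

    # Toy version of the cruft idea: report files under /usr that no
    # installed package claims ownership of. Sketch only.
    import glob
    import os

    # dpkg records each package's files in /var/lib/dpkg/info/<pkg>.list
    owned = set()
    for listfile in glob.glob('/var/lib/dpkg/info/*.list'):
        with open(listfile) as f:
            owned.update(line.rstrip('\n') for line in f)

    # anything on disk that dpkg doesn't know about is potential cruft
    for root, dirs, files in os.walk('/usr'):
        for name in files:
            path = os.path.join(root, name)
            if path not in owned:
                print(path)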

  • Played for a sucker, part one: a few months later I was messing around doing QA stuff and fixing bugs, and as part of that did an NMU of netbase (which at the time directly contained all sorts of important tools like ping and inetd). That went something like:

    From: Anthony Towns
    Date: Sat, Nov 21, 1998
    There are a few bugs accumulating against the netbase package which
    you're maintaining. I was wondering if you'd mind if I made an NMU
    to fix some of them for the upcoming slink release?
    
    From: Peter Tobias
    Date: Sat, Nov 21, 1998
    No, please go ahead ... I'm quite busy right now and I would really
    appreciate any help. Please let me know if you need additional information
    about the package.
    
    From: Anthony Towns
    Date: Sun, Dec 6, 1998
    There's an NMU sitting in Incoming now. It fixes a few bugs, viz:
    [...]
    
    From: Peter Tobias
    Date: Fri, Dec 25, 1998
    due to my current job I don't have much time to work on my debian
    packages. In order to have more time for my other debian packages
    I would like to give away the netbase package. Are you interested
    in maintaining this package?
    
    From: Anthony Towns
    Date: Fri, Dec 25, 1998
    Ummm. Sure. I guess. (or, iow, *Eeeeeeeeeeeeek*!!!)
  • ifupdown: As part of maintaining netbase I tried to figure out a way of fixing bugs like 31745 and 39118. Basically, Debian used to do networking by generating a script on install that you could modify by hand, and that was it — so when the commands needed changed between 2.0 and 2.2 (no more “route add -net”), you couldn’t make that upgrade “just work”. Red Hat handled it with “ifup” and “ifdown” commands that would look at a whole bunch of shell-format variables in /etc, which worked but wasn’t elegant, so I came up with something I thought was actually pleasant. Its fundamental attitude is to be a parser — the networking commands are specified as compile-time configuration, the description of your network is your runtime configuration, and ifupdown just puts those together without trying to actually understand what’s going on. In some ways that works really well — it keeps all the knowledge separate, so when you’re writing an /etc/network/interfaces you’re not worried about how DHCP works; in others it breaks down — in particular, if bringing up an interface fails part way through, is it up, or down, or something else entirely?
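
    For the unfamiliar, the runtime half is /etc/network/interfaces: it describes what your network should look like, and says nothing about which commands make it so. Something along these lines (addresses obviously made up):

    # bring these up at boot
    auto lo eth0 eth1

    iface lo inet loopback
    iface eth0 inet dhcp

    iface eth1 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1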

  • bugs.debian.org: I’m not entirely sure why, but in August ’99 I decided to start running a script to stop old bugs from being permanently deleted:

    From: Anthony Towns
    Subject: BTS and old bugs
    Date: Tue, 24 Aug 1999 22:33:16 +1000
    To: debian-private@lists.debian.org
    
    ObPrivate: Erm. I'm not sure. It is only even vaguely relevant to
    developers.
    
    Since bugs #9705 and #36727 don't seem like being fixed any time soon
    and Darren hasn't managed to convert the BTS to using debbugs.deb yet,
    I've made a little script to stop us from continuing to lose bug
    reports, and am running it in my crontab on master.
    
    ~ajt/debian-bugs/archive/ contains hardlinked copies of the bugs in
    ~iwj/debian-bugs/spool/db (except split into sub-directories). When
    the bugs get expired from the BTS, the hardlink in ~ajt remains, so
    the file doesn't get lost forever.
    
    In the week or so I've been running it, some 500 odd bugs expired [0].
    

    The next month I sent Darren Benham a first version of bugreport.cgi, and at some point around then must’ve sent off a pkgreport.cgi too; by the month after (October) I’d evidently been added to the debbugs group, because I was merging my archived bugs into the official debbugs directories.
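
    The script itself was nothing fancy; the whole trick is that a hardlink keeps the file’s contents around even after the BTS expires and unlinks the original. Something in this spirit (a sketch rather than the real thing, and the sub-directory scheme here is just for illustration):

    # Sketch of the bug-archiving idea: hardlink each file out of the live
    # spool into an archive tree, so expiry in the spool can't lose the data.
    import os

    SPOOL = os.path.expanduser('~iwj/debian-bugs/spool/db')
    ARCHIVE = os.path.expanduser('~ajt/debian-bugs/archive')

    for name in os.listdir(SPOOL):
        src = os.path.join(SPOOL, name)
        if not os.path.isfile(src):
            continue
        # split into sub-directories, here by the last two digits of the bug number
        stem = os.path.splitext(name)[0]
        subdir = os.path.join(ARCHIVE, stem[-2:])
        os.makedirs(subdir, exist_ok=True)
        dst = os.path.join(subdir, name)
        if not os.path.exists(dst):
            os.link(src, dst)   # same inode, no extra copy of the data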

  • testing: So a year and a bit later there had been some more discussions about making major changes to our release process, and since I’d done some more algorithms subjects at university by this point, I dove in a bit more seriously. For me, the main challenge seemed to be keeping dependencies consistent, and it turns out validating dependencies and conflicts is an NP-complete problem, so solving that reasonably efficiently seemed like a good first step. By October ’99 I’d come up with a first-pass solution, which I think was still in Perl at the time. A couple more rounds of playing with consistency-checking stuff ensued, resulting in some regularly updated lists being generated by December, and by March 2000 or so that had graduated to a simulation of testing as we know and love it today.
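
    To give a flavour of where the NP-completeness comes from: once dependencies can list alternatives and packages can conflict, working out whether something is installable means searching through the combinations, which can blow up exponentially in the worst case. A toy backtracking checker over a made-up package set (nothing to do with the real testing scripts):

    # Toy installability check: each package has dependency clauses (lists of
    # acceptable alternatives) and conflicts. Entirely made-up data.

    PACKAGES = {
        'webapp':  {'depends': [['httpd-a', 'httpd-b'], ['libdb']], 'conflicts': []},
        'httpd-a': {'depends': [], 'conflicts': ['libdb']},
        'httpd-b': {'depends': [], 'conflicts': []},
        'libdb':   {'depends': [], 'conflicts': []},
    }

    def add(pkg, installed):
        """Add pkg and its dependencies to the set, or return None if impossible."""
        if pkg in installed:
            return installed
        meta = PACKAGES[pkg]
        if any(c in installed for c in meta['conflicts']) or \
           any(pkg in PACKAGES[c]['conflicts'] for c in installed):
            return None
        return solve(meta['depends'], installed | {pkg})

    def solve(clauses, installed):
        """Satisfy each dependency clause, backtracking over the alternatives."""
        if not clauses:
            return installed
        for alt in clauses[0]:
            attempt = add(alt, installed)
            if attempt is not None:
                result = solve(clauses[1:], attempt)
                if result is not None:
                    return result
        return None

    # picks httpd-b, since httpd-a conflicts with libdb
    print(sorted(add('webapp', frozenset())))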

  • Played for a sucker, part two: so after a few months of maintaining a testing suite while potato (Debian 2.2) was getting finalised for release, I figured it’d be useful to get broader release experience, so I asked Richard Braakman if I could help out with the last bits of the potato release. At which point he said “sure”, gave me James Troup’s email address for accepting/rejecting packages and so on, then buzzed off to a mobile-phone-related junket in the US and lay low until the release was actually out… A cunning plan, for sure. Happily, it didn’t take too long — I think I started around June, and potato finished baking for release at LinuxWorld Expo in August, though it did involve about 2000 lines of archive changes and stuff mailed to James over the period, along with half a dozen mails to -devel-announce.

  • Junket: In the middle of that was my first-ever DebConf — or technically the Debian portion of the 2000 LSM/RMLL in Bordeaux, aka DebConf 0. It was awesome — met heaps of cool people, got to practice my high school French, and enjoy some lovely red wine while gossiping about free software. Also heard RMS sing the free software song live. But the wine was great!

  • katie/dak: The biggest problem with the “simulation of testing” mentioned above was that it was held together with paperclips and sticky-tape; or, less metaphorically, shell one-liners and hardlinks. Since packages would originally get uploaded to unstable, then move into testing, then get deleted from unstable, you’d end up at least doubling the bandwidth required to mirror Debian, because each package would get sent to mirrors once for unstable and once for testing. Which is fine for a simulation, but for deployment, well, we needed package pools (sketched at the end of this item), which in turn required a rewrite of dinstall.

    James did all the initial work, apart from some of the schema design and the scarier SQL, and by November had it deployed for the non-US archive, and got it onto ftp-master in December. At which point the time was ripe for hooking up testing into the archive proper, which is about when the testing scripts got named “britney”, so as to fit in with all the silly names in the da-katie suite. I’m pretty sure the rationale was that while I did the final coordination of the potato release (archive changes, cd images, announcement, etc), I spent from about 3am to 10am listening to a handful of Britney Spears songs on repeat. The phrase “scarred for life” might come to mind.

    Since that point, there have been lots of evolutionary changes to both dak and testing (including a rewrite of the Perl parts of testing into Python), but it’s all built fairly naturally from there.
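
    For reference, the thing that makes pools work is that a source package’s files live in one directory that every suite’s index can point at, so mirrors only ever carry a single copy no matter how many suites include it. The path convention is roughly this (a sketch from memory, not dak’s actual code):

    # Rough sketch of the pool path convention; illustrative only.

    def pool_path(source, component='main'):
        """Directory a source package's files live under in the archive."""
        # lib* sources get a four-character prefix so the l/ directory
        # doesn't end up with half the archive in it
        prefix = source[:4] if source.startswith('lib') else source[:1]
        return 'pool/%s/%s/%s/' % (component, prefix, source)

    print(pool_path('hello'))    # pool/main/h/hello/
    print(pool_path('libfoo'))   # pool/main/libf/libfoo/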

  • debootstrap: I’m pretty sure my motivation for writing debootstrap was a mix of just wanting to avoid having to worry about out-of-date base tarballs for the purposes of keeping testing in sync, and wondering if it was actually possible to write a script to bootstrap a Debian system from the sort of minimally functional environment the installer has to put up with. In the end, it turned out pretty cool — it was used by the final boot-floppies, it’s part of d-i, it makes tools like pbuilder possible, I think it helps embedded folks heaps, it’s spawned some imitators, and it’s generally been a pretty cool exercise in hackery.
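
    If you’ve not used it, the invocation is about as minimal as the environments it targets: a suite, a target directory, and a mirror (the URL here is just an example):

    debootstrap sarge /mnt/target http://ftp.debian.org/debian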

  • Freedom and purity: About six months earlier, there was a big flamewar and an attempted vote on whether to remove non-free from the archive. I’d written a rebuttal and counter-proposal, but it all ended up collapsing in a heap, due to disputes about the constitution. For some reason I feel like this mail was more interesting than all the previous ones combined…

    That ended up needing a whole bunch of changes to the constitution: one to make it clear under what circumstances the social contract can be changed, and another on what happens when one option on a vote requires a supermajority while another doesn’t; working that out ended up taking from October 2000 until October 2003. It was actually kind of interesting: trying to analyse a good voting system that works in the real world has all sorts of interesting maths to it, and there’s a whole bunch of election-methods folks who have studied it in some depth. And since it’s elections, you can have fun with all sorts of scenarios of corruption — like, what if the secretary’s trying to change the results? In practice, the difference between good and bad election methods seems to turn out to be negligible, but why wouldn’t you have the best one you possibly can? (There’s a toy sketch of the pairwise-counting idea at the end of this item.)

    A couple of months after those changes were done, dropping non-free got reproposed, and I did another counter-proposal; this time they were actually voted on, with the counter-proposal winning, by a little under a two-to-one margin. Which means we get to get rid of non-free by making it completely superfluous, rather than just near enough.
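
    For the curious, the method Debian ended up with is in the Condorcet family, and the basic building block is just counting, for every pair of options, how many voters ranked one above the other. A toy version with made-up ballots (and nothing like the real vote-counting code):

    # Toy pairwise tally, the building block of Condorcet-style vote counting.
    # Ballots and options are made up.
    from itertools import permutations

    OPTIONS = ['A', 'B', 'FD']           # FD: "further discussion"
    BALLOTS = [                          # each ballot ranks the options, best first
        ['A', 'B', 'FD'],
        ['A', 'B', 'FD'],
        ['B', 'A', 'FD'],
        ['B', 'FD', 'A'],
    ]

    # beats[x][y] = number of voters ranking x above y
    beats = {x: {y: 0 for y in OPTIONS if y != x} for x in OPTIONS}
    for ballot in BALLOTS:
        for x, y in permutations(OPTIONS, 2):
            if ballot.index(x) < ballot.index(y):
                beats[x][y] += 1

    for x, y in permutations(OPTIONS, 2):
        print('%s over %s on %d of %d ballots' % (x, y, beats[x][y], len(BALLOTS)))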

Hrm, this is going on longer than I’d hoped. Oh well, to be continued!

Exclusion and Debian

Oh yay, another argument about sexism. I thought we were over this. Aigars writes:

Trying to restrict what words people can or can not use (by labeling them sexist, racist or obscene) is the bread and butter of modern day media censorship. It is censorship and not “just political correctness”. While I would not want people trying to limit contributions to Debian only to “smart and educated white people” (racism) or “logically thinking males” (sexism), going the other way and excluding people from Debian because their remarks or way of thinking might offend someone is just offensive to me.

One: it’s not censorship for Debian to limit discussion of various things on Debian channels. It’s censorship when you prevent discussion of something anywhere.

Two: if you think excluding people is bad, then supporting jerks whose misogyny repulses people isn’t compatible with that.

Three: doing something in someone else’s name that isn’t supported by them is wrong. If you’re in a channel called “debian-something”, don’t act in ways that don’t match Debian’s goals. If you want to be free to go against the principles of the DFSG by, eg, discriminating against people or groups, make up your own name for a channel.

The Leaf of Trust

Wow. Pretty.