Amazon’s S3

I’ve thought S3 was pretty awesome sounding ever since it launched early last year. Sadly I haven’t managed to get it set up yet — it’s refusing to accept my credit card (which I’ve used to buy stuff from Amazon before), so I just get a NotSignedUp error and am told “Your account is not signed up for the S3 service.” The web services help folks have responded, but haven’t managed to work out wtf’s going on yet.

One of the things I’d like to do with it is (the pretty obvious and staid) remote backup thing — though what I’m actually hoping for is something like the backup described in Greg Egan’s Distress: your laptop automatically backs itself up to the web somewhere; then when your laptop dies or gets stolen, you get a random new one, authorise it with your passcode or thumbprint or whatever, and it automatically recovers itself from the backups you’ve got on the web.

Oddly, though, none of the command line tools seem to quite do backup “right” (by my definition) — there are some that encrypt, and some that do an rsync equivalent, but none I’ve seen that compress. All three seem like requirements to me, and given S3’s support for metadata, pretty easy to actually handle.
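Something like the following is what I have in mind: a minimal Python sketch (not any existing tool’s interface) that compresses a file’s contents and records the original checksum and size as S3 user-defined metadata (the x-amz-meta-* headers). Encryption, say by piping through gpg, would slot in between compression and upload.

```python
import bz2
import hashlib
import time

def prepare_backup(path, data):
    """Compress a file's contents and build the user metadata an
    S3-backed backup tool could store alongside the object.
    Sketch only: a real tool would also encrypt (e.g. via gpg)
    before uploading, and do the actual S3 PUT."""
    compressed = bz2.compress(data)
    meta = {
        "x-amz-meta-orig-path": path,
        "x-amz-meta-orig-size": str(len(data)),
        "x-amz-meta-orig-md5": hashlib.md5(data).hexdigest(),
        "x-amz-meta-backup-time": str(int(time.time())),
        "x-amz-meta-encoding": "bzip2",
    }
    return compressed, meta

def restore(compressed):
    """Reverse the compression step on download."""
    return bz2.decompress(compressed)
```

With the metadata travelling alongside the object, a restore tool can verify the checksum and put the file back in the right place without any separate index.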

It’d be even nicer if there were some other services with similar APIs (and cost structures!) so you weren’t tying yourself to Amazon quite so much, but until Google decide to release G-Drive or some other distributed thing happens, there’s not much to be done. Of course, while Amazon won’t even let me in, it’s all moot anyway…

Men in Skirts

Well, made it to Edinburgh, via a forty hour stopover in Singapore. Picked up a tie and kilt in the Debian tartan. Sadly Edinburgh in midsummer isn’t as warm as Singapore in summer, or even as Brisbane in winter, so there’s good odds my knees are going to be shaking as well as visible all too soon. Oh well, there are reports that there might be scattered sunshine by Monday — fingers crossed.

The dak BOF’s been delayed until tomorrow (Saturday 3pm, Edinburgh localtime) so Andi and Steve can sit in. Bit weird having something that technical on Debian day, I guess, but hey, whatever. It’s scheduled for a 2hr session, but I’m hoping it’ll be pretty informal and hacky rather than slides and lecturing or whatever. I’m hoping people will arrive with some ideas on what cool things dak ought to be handling, and we can spend the time actually making dak support as many of those things as possible. Some of the ideas I like are in the BOF proposal. It might also be worth having a look at the video (or audio) of Robert Collins’ talk (“…Release always?”) at the Debian miniconf at linux.conf.au earlier this year, particularly the ideas about setting up separate GNOME or KDE or libstdc++ staging areas for getting major updates ready for unstable rather than mixing them all up in experimental. Anyway, feel free to plan on arriving or leaving anytime during the BOF: unstructured is just another word for freedom!

Yesterday I had a quick chat with Frans about getting debootstrap officially incorporated into the d-i subversion repo, so that it’s officially team maintained, and there’s a convenient central place for hacking on it. Given the recent discussion (while I was flying to debconf, in fact — how inconsiderate!) hopefully that’ll mean we’ll get a couple of cleanups there too.

Oh, also got informed that Eben Moglen will be giving a free lecture in Edinburgh on Tuesday after debconf (the 26th) courtesy of the Scottish Society for Computers and Law. Sounds interesting if you’re into the whole theory of law and the effects of modern technology thereon:

In this lecture, Professor Moglen considers how private legislation is replacing public law as the organising intellectual structure for software and the technology industries, with far-reaching social consequences and theoretical implications.

Not sure if he’ll arrive in time to drop by during DebConf proper or not.

Torrenting the Debian Archive

Continuing the theme from my previous post — the first and fundamental thing any distro needs to have, and thus the first and fundamental thing to think about disintermediating, is some way of distributing software. That might be burning CDs and sending them out to resellers or OEMs, or having your stuff available for download, whether by putting it up on your own server, hosting it via a dedicated high-reliability provider like akamai, or maintaining a network of mirror sites and keeping them up to date.

And, of course, anyone who wants to distribute their own software has to do the exact same thing — which either means doing it themselves, and not being able to take advantage of the scalability the distributor has already made work, or going through the distribution and dealing with the problems that entails, as well as the benefits.

In this case, disintermediation and decentralisation are pretty much one and the same thing, and decentralising content distribution is already a well understood problem: that’s what peer-to-peer is all about, and peer-to-peer distribution of, well, distributions is already very successful — at least when it comes to CD (and DVD) images. That means just about anyone can create a CD image and distribute it to the world in exactly the same way a major organisation like Debian or Red Hat would: upload a torrent file, run a seed, and let the magic of BitTorrent do its thing. Scalability is no longer something you need a distribution organisation to manage; instead it’s built directly into the technology itself.

Unfortunately BitTorrent is designed for large static images — not something you update on a daily basis. And for better or worse, my preferred distribution (Debian testing, currently lenny) does update that frequently.

Fortunately, though, BitTorrent’s an open protocol and has dozens of open source implementations — so last year I was able to propose it as part of the Google Summer of Code, and was lucky enough to get a few responses and a student to work on it. It didn’t get much further than that unfortunately — the student lost internet access not long into the programme (and for most of its duration), and that was pretty much that. So when this year’s SoC rolled around, I didn’t really expect much, and didn’t put the idea up for consideration again, but lo and behold someone came through and asked if it was still possible and if there was any more information, and when I forwarded on some of the mails from the previous year we ended up with a second chance at the project.

So far it’s looking pretty good — we’ve had a lot more success at keeping in touch, and thanks to the extended schedule for the SoC this year we’ve been able to do a much better job of keeping on top of what’s going on. So much so, in fact, that there’s a first (alpha) release out before the SoC is officially due to start! Wonderful stuff!

What it does at the moment is allow you to take a Packages file (the stuff “apt-get update” downloads, which includes descriptions of all the packages that are available, how they inter-depend, and so forth), and from that information create a usable torrent that can then be used to obtain, share and distribute the packages themselves.
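For reference, the Packages file is just a series of RFC822-style stanzas separated by blank lines, so pulling out the fields a torrent generator would need (Package, Version, Filename, Size, the checksums) only takes a few lines. A simplified sketch in Python, ignoring gzip compression and some of the rarer syntax:

```python
def parse_packages(text):
    """Parse the stanzas of a Debian Packages file into dicts.
    Handles plain 'Field: value' lines plus continuation lines
    (those starting with whitespace, e.g. long descriptions)."""
    packages, stanza, last = [], {}, None
    for line in text.splitlines():
        if not line.strip():
            # blank line: end of the current stanza
            if stanza:
                packages.append(stanza)
            stanza, last = {}, None
        elif line[0] in " \t" and last:
            # continuation of the previous field
            stanza[last] += "\n" + line.strip()
        elif ":" in line:
            field, _, value = line.partition(":")
            last = field.strip()
            stanza[last] = value.strip()
    if stanza:
        packages.append(stanza)
    return packages
```

From the resulting list of dicts, the Filename/Size/checksum fields are everything needed to lay the debs out as the files of a torrent.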

There are two crucial steps in that: the first is allowing the torrent to work without requiring huge amounts of extra information to be distributed (which would introduce scalability problems of its own, instead of solving them), and the second is that the pieces that make up the torrent are selected in a way that matches the actual packages, so that when you upload a single new package, you are in fact only making a minor change to the torrent, rather than having it (on average) completely redefine half of the torrent (and again introduce scalability problems rather than solve them).
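The second point can be made concrete. Standard BitTorrent hashes fixed-size pieces across the concatenation of all the files, so inserting one new .deb shifts every later piece boundary and changes its hash; restarting the piece stream at each package keeps an archive update local to the package that actually changed. A sketch of the idea (an illustration, not DebTorrent’s actual code):

```python
import hashlib

PIECE_SIZE = 512 * 1024  # 512 KiB, an arbitrary choice for the sketch

def per_package_pieces(packages):
    """Compute piece hashes so that every package starts on a fresh
    piece boundary. `packages` maps filename -> file contents.
    Because no piece spans two packages, replacing one .deb leaves
    every other package's piece list (and hashes) untouched."""
    pieces = {}
    for name, data in packages.items():
        pieces[name] = [
            hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
            for i in range(0, len(data), PIECE_SIZE)
        ]
    return pieces
```

Compare the piece lists before and after swapping one package in: only that package’s entries change, which is exactly the property that keeps daily archive updates cheap.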

There’s more information on the DebTorrent wiki page and Cameron’s blog if you’re interested.

Anyway, it’s just an alpha release at the moment, which means while there’s code that does something, it’s not actually all that useful yet. The current plan is to next add some code to make it automatically prioritise packages based on what you’re actually using — so that rather than downloading all the debs it sees, it’ll only download the ones you’ve got installed, or maybe only newer versions of the ones you’ve got installed, which should get us pretty close to the point where it’s actually useful for something.
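That prioritisation step could look something like this sketch: given the archive’s package stanzas and a map of what’s installed locally, only pick packages that are installed, and optionally only those whose archive version differs. (Illustrative only; a real tool would compare versions using dpkg’s ordering rules rather than plain inequality.)

```python
def wanted_packages(available, installed, newer_only=True):
    """Pick which packages a client should bother fetching.
    `available` is a list of stanza dicts as parsed from a Packages
    file; `installed` maps package name to the installed version.
    With newer_only, skip packages whose version already matches."""
    picks = []
    for pkg in available:
        name = pkg.get("Package")
        if name not in installed:
            continue  # not installed locally: don't fetch it at all
        if newer_only and pkg.get("Version") == installed[name]:
            continue  # already up to date
        picks.append(name)
    return picks
```

Everything not in the picks list can simply be left unrequested in the torrent, which is what makes partial participation in an archive-sized swarm feasible.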

The end result, of course, is to build a tool that you can point at a Debian archive, run it on a machine connected to the Internet, and you won’t have to do anything more to have a reliable, scalable and reasonably efficient means of allowing your users to distribute and update their systems. In this case, scalable means that if you end up with as many users as Debian or Ubuntu, your users will have a comparable experience, as if you’d arranged for a similarly comprehensive mirror network, without actually having to do the leg work.

And heck, presuming that works, it doesn’t even matter if no one else actually does that — it’s worth it even if it saves Debian or Ubuntu the effort of keeping track of a mirror network by hand.

There are interesting possibilities at smaller scales too, of course. :)

Big Spending Budgets

Every time I’ve heard the budget mentioned over the past week or so, it’s prefaced by the words “big spending”, but I’m lost as to how that could actually be the case — it’s in surplus, and taxes are to be reduced a bit, so doesn’t that mean any other possible budget would be a bigger spending budget? Or is an “average” budget meant to include huge tax cuts while maintaining a surplus, and thus everything else is comparatively wasteful?

Or is it just that most of the people writing news stories about the budget don’t have anything useful to say, so they have to resort to a Pavlovian response of adding “big-spending” whenever an election is nigh to fill in time?

My take? It’s a boring budget — tax rates greater than 30% are getting less and less relevant, along the lines of John Humphreys’ Reform 30/30 proposal, though in bite-size pieces. Education is getting a few more market-driven policies with some more full fee courses and some more incentive-based pay for teachers and schools that do well, though that’s more of a nudge than anything.

The higher education endowment fund could be pretty impressive, I guess; though I’m a bit lost as to how it should be seen as an “investment”, or, at the very least, how you’d even guess at an appropriate value to put on the expected return (which I’m assuming is expected to be collected through increased tax revenue coming from a higher GDP), so as to compare it against other possible investments of the Future Fund. Seems like a much more sensible thing for the government to be investing in than broadband, at least. That’s assuming you look at it from the perspective of that money having been “expected” to go to government superannuation entitlements, and thus “taken” from the Future Fund, rather than an expansion of the Future Fund to address other political goals, which I guess is the view that Costello holds, and at least according to one report does so with some justification.

Actually, I guess it’s not that hard to expect that to be a major success: it’s relying on universities to spend it sensibly, which seems a safe bet, and as an endowment fund should provide about $300 million to be invested every year forever, along with encouragement for significant private support simply by making the government funds available as matching, tax-deductible contributions, or similar.

There’s apparently a significant increase in military spending with no particular strategic change I can see, some pretty straightforward nods to the environment, and a bunch of stuff to help families that followed the child-rearing advice of “one for the mother, one for the father, and one for the country” from earlier budgets.

A conversation with jdub

A couple of months ago I was sitting in a north Sydney pub cradling a beer with the ineluctable jdub, the decidedly unparsimonious rusty and (if I’m accurately remembering who’d left and who hadn’t by this point) the irremediable mrd. As you might imagine, these sort of conditions are a perfect breeding ground for a particular type of discussion, and with Jeff’s departure from Canonical not that long beforehand accompanied by the then ongoing DPL elections, talk turned to the future of Debian, and in particular Jeff’s views on that future.

Obviously more than a few weeks have passed since then, so please imagine this as a dream sequence with a stylish TV fade-in accompanied by appropriate fine print caveats about paraphrasing and the perfidy of memory.

I think it’s fair to sum Jeff’s main point about the future of distros as one of “disintermediation” — that is removing, or at least minimising, the barriers between the authors and users of a piece of software, and in general getting closer and quicker collaboration between everyone involved and interested in developing and maintaining that software. Which means things like trying to remove the line between the Debian bug tracker and upstream bug trackers entirely, so that Debian users end up talking directly with the upstream maintainers, rather than having a Debian maintainer relaying information between the two. It also means making the path for updated packages to get to users quicker and smoother — so that as soon as a developer commits a new version of the software to version control, the next time users of that software “apt-get update” they see it, download it, and start using it. Or at least, they do if they’ve chosen to follow the bleeding edge.

Jeff had a bunch more ideas in that vein — like not bothering with a central archive, but having users collect the packages they’re interested in from upstream sites all over the place, standardising, reducing or removing the “control” information for packages so that creating the correct packages for 80% of free software (anything that uses ./configure in the usual way, for example) was a complete no-brainer, and perhaps viewing distributions not so much as the gatekeepers and central players of the open source world, but more as systems for providing resources, assistance and opportunities to software authors who deal with their users directly — and in so doing have more of an opportunity for their users to actively participate in the project’s development, and have more feedback (and control) of how their project is perceived by its users.
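As a rough illustration of the “no-brainer” end of that spectrum, here’s a hypothetical helper that emits the bare-minimum DEBIAN/control a fully automated packaging tool might generate for a standard ./configure-style project. The field names are real dpkg fields, but the defaults and the tool itself are invented for the example:

```python
def minimal_control(name, version, arch="all",
                    maintainer="auto-packager <auto@example.org>"):
    """Emit a minimal DEBIAN/control file for a package whose name
    and version a hypothetical tool has inferred (say, from the
    upstream tarball name). Sketch only: real packages usually need
    Depends lines, a proper Section/Priority, and so on."""
    return (
        f"Package: {name}\n"
        f"Version: {version}\n"
        f"Architecture: {arch}\n"
        f"Maintainer: {maintainer}\n"
        f"Description: {name} (automatically packaged)\n"
    )
```

The interesting question is how much of the remaining 20% of control information (dependencies especially) could be inferred rather than hand-written.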

I’m not sure if I’m doing it justice — it’s a pretty radical notion, and as an upstream-type hacker, Rusty was pretty enthralled by it. As a distro-type hacker, on the other hand, I’d have to say I was daunted: getting rid of the middleman here means getting rid of Debian, which I must admit I have something of an attachment for… and yet, in principle at least, I can’t see anything but good sense in the idea — getting users and developers interacting more closely is important to improving the free software community, and while distros have historically been a force for making free software more accessible to people, it’s entirely possible we can now make it even more accessible by, in some sense, doing away with distros. And if that’s the case, then we definitely should, because it’ll benefit both users and developers of free software, which is what we’re all about.

On the other hand, distributions (at least currently) aren’t just about collecting and compiling software, which could be automated; they’re also about integrating it, standardising interactions between software, and in some cases acting as a buffer between users who want to use a bit of software and developers who for one reason or another don’t actually want to deal with those users (perhaps due to lack of a shared language, or political or philosophical disagreements).

I think it’s also pretty interesting to view Gentoo through this light; that is, not so much as a distro that makes you compile everything from source, but rather as a distro that gives its users a system based on exactly what the developers produce, with the addition of only a comparatively tiny amount of Gentoo-derived content. Maybe it’s just me, but Gentoo sounds a lot more interesting to me in that light.

So is that a foretaste of the future of binary free software distributions too? There’s a lot of things that would stand in the way of Debian managing that, but so far all I can think of is a thousand problems that should probably be solved anyway, and not even one that can’t.

I think, btw, that jdub had been reading Accelerando (also available as a Creative Commons licensed ebook) somewhat concurrently with thinking about this stuff, so perhaps it should be called “Distributions 2.0” or something. Accelerando’s a crazy-awesome near-future/singularity book by Charles Stross, who happens to live in the city that will host the forthcoming DebConf 7…

Open Source Developers Conference

As Arjen’s already mentioned, Brisbane has been selected as the lucky host city for OSDC 2007. Since then, we’ve confirmed the venue and dates as the 26th-29th of November at the Royal on the Park, and we’ve had one local company signing up as a sponsor within 24 hours of sending the sponsor pack out — so like Arjen said, if you’re interested in supporting or being associated with the conference, please don’t delay getting in touch!

As this is the first year it’s run outside of Melbourne, we’re still in the process of working with the past organisers to figure out how to handle the website and such, but at last count we were expecting to get a website up and the CFP out on the first of April or so; we’re hoping to have a (very!) preliminary schedule up and registrations open in June or July.

I’ve got to say, I’m really excited about being involved in this — I haven’t really been paying as much attention as I’d like to the Brisbane open source scene, and that’s meant I’ve pretty much been seeing the same faces at HUMBUG and linux.conf.au — and while that’s great, getting to meet some of the Brisbane PHP folks is awesome, and seeing it serve as the impetus for Stephen Thorne and NetBox Blue to start up a Brisbane Python User Group is pretty sweet too — and if we end up with something like a local chapter of the OSDClub that’ll be beyond cool!

So thanks to the Melbourne OSDC team, and in particular Jacinta and Scott for your faith in us and your support, and I’m sure I speak for all the Brisbane volunteers in saying we can’t wait to blow your socks off in November :)

Five Things

Supposedly, card number five in the Tarot is the Hierophant, described as “someone who interprets secret knowledge” and representing concepts such as “conformity” and “group identification”.

Not that any of that is related to this “five things you don’t know about me” meme, for which I’ve apparently been tagged by both Pia and Tony. And since I wouldn’t want to be accused of being either too cool or lacking in vanity, here’s some from me.

  • I used to be a regular bit-part actor in school plays and musicals; I was in the school choir for four years in high school too. I tried getting back into it once during uni, but didn’t really have the time or interest, so I’m just left with going to the occasional show and watching these days. Of course, when they’re as spectacular as Woman in White that’s not much of a problem…

  • For quite a while, my favourite food was lamb’s brains, crumbed and fried with a white sauce. I’m quite fond of offal (also known as “variety meats”, apparently) in general, really — I’m not really persuaded by the “it’s morally wrong to eat animals” thing, but I figure if you’re going to anyway, it makes sense to not be wasteful about it. Kidneys are pretty awesome too; though liver seems to get boring before you’ve finished eating it. I’ve never tried haggis; but I suspect that’ll change before the next DebConf is over…

  • Prior to getting a 486sx33 that eventually ran Linux, I used to be an Amiga geek. As an Aussie Amigoid, that meant reading the Australian Commodore and Amiga Review (edited by Andy Farrell), and eventually, when I became really elite, subscribing to Megadisc, a monthly magazine on floppy disk, which included useful programs, reviews, letters, and all sorts of interesting things. My first published C program was included on one of the issues — it was a teensy little app for people with hard disks that would see if you had a Megadisc in the drive, then open up a dialog box so you could choose a directory to copy the disk to, and not have to worry about seek times and such. Unfortunately I was new enough at the whole “C” thing that I wasn’t sure how to make a new directory myself, and decided to just expect the user to use the “New Dir” button in the standard Amiga dialog box for that. Which would have been fine if I hadn’t also gone to the trouble of helpfully emptying the selected directory so the user wouldn’t end up with two Megadiscs in the same place. Put the two together, and evidently you end up with Megadisc’s editor, Tim Strachan, calling you at 9am the morning after publication and telling you how someone had tried it out and it had started deleting all the files on the hard drive. Oops. With the Amiga dying out and the rise of the internet, it looks like Tim’s now more into alternative health products than computers.

  • On the Friday before flying down to Canberra to be Rusty’s wingman for a meeting with the Attorney-General’s department I had my first motorbike accident — not making a turn on a wet road at night just near home, hitting the asphalt and the gutter. I came out of it with some torn jeans and a grazed knee, a banged hand that needed icing, a nice bruise on my side, and a bike that ended up taking aaaages to get fixed and ridable again. Happily it’s gotten another couple of thousand k’s on the odometer since then.

  • In the couple of days before I head to Sydney for the 2007 edition of the fantabulous linux.conf.au I’ll be moving from my unit in Toowong to a house in Highgate Hill. Clinton and I will be sharing the upstairs, while the downstairs will be turned into a granny-flat for my parents for when they decide to stay in the city. The theory is it’ll be an investment over the longer term, with the idea being to try building some townhouses in the rather large backyard and sub-dividing. Should be fun!

Let’s see, I’ll tag: vocalist extraordinaire James, companionable carnivore Pat, sometime C hacker David, fellow motorcyclist Sez, and future housemate Clinton.

Can open source methodology make a movie?

Sometimes doing a Google News search for debian turns up some fascinating little gems. Today’s was this article:

Einfeldt says that the project also plans to sell copies of the film, or at least one of the versions of the film. Taking a cue from the Debian project, Einfeldt says that there will be several versions, starting with the edit codenamed “Buzz.” He says that there will be another version codenamed “Rex” that will be sold on DVD and through other avenues like Lulu.com that cater to self-distribution.

Some interesting quotes in the article from Postgresql hacker and SPI treasurer Josh Berkus, along with a link to the Digital Tipping Point wiki.

More DWN Bits

Following Joey’s lead, here’s some DWN-style comments on some of the stuff I’ve been involved in or heard of over the past week…

A future for m68k has been planned on the release list, after m68k was officially dropped as a release architecture in September. The conclusion of the discussion seems to be that we’ll move the existing m68k binaries from etch into a new “testing-m68k” suite that will be primarily managed by m68k porters Wouter Verhelst and Michael Schmitz, and aim to track the real testing as closely as can be managed. In addition the m68k porters will make installable snapshots from this, aiming for something as close as possible to the etch release on other architectures.

A new trademark policy for Debian is finally in development, inspired by the Mozilla folks rightly pointing out that, contrary to what we recommend for Firefox, our own logos aren’t DFSG-free. Branden Robinson has started a wiki page to develop the policy. The current proposal is to retain two trademark policies — an open use policy for the swirl logo, that can be used by anyone to refer to Debian, with the logo released under an MIT-style copyright license, and left as an unregistered trademark; and an official use license for the bottle-and-swirl logo, with the logo being a registered trademark, but still licensed under a DFSG-free copyright license. The hope is that we can come up with at least one example, and hopefully more, of how to have an effective trademark without getting in the way of people who want to build on your work down the line.

Keynote address at OpenFest. Though obviously too modest to blog about this himself, Branden Robinson is currently off in Bulgaria, headlining the fourth annual OpenFest, speaking on the topics of Debian Democracy and the Debian Package Management System.

New Policy Team. After a few days of controversy following the withdrawal of the policy team delegation, a new policy team has formed consisting of Manoj Srivastava, Russ Allbery, Junichi Uekawa, Andreas Barth and Margarita Manterola.

Point release of sarge, 3.1r4. A minor update to Debian stable was released on the 28th October, incorporating a number of previously released security updates. Updated sarge CD images have not been prepared at this time and may not be created until 3.1r5 is released, which is expected in another two months, or simultaneously with the etch release.

Debian miniconf at linux.conf.au 2007. While it may technically not be supposed to be announced yet, there’s now a website for the Debian miniconf at linux.conf.au 2007, to be held in Sydney on January 15th and 16th (with the rest of the conference continuing until the 20th). This year derived distributions are being explicitly encouraged to participate, so competition is likely to be high, and it’s probably a good idea to get your talk ideas sorted out pretty quickly if you want them to be considered!

Google Ate My Brane

After visiting Google for the Summer of Code Summit the other week, I thought I might actually try out some of the web services they’ve come up with, rather than just sticking with search and maps, and see if they did anything for me. To my surprise — as a certified hater of webapps generally — a couple did.

Writely, the web-based word processor, was kind-of interesting, but in the end didn’t work for me. The potential killer feature for me would’ve been SubEthaEdit- or Gobby-like interactive collaboration, which seems like something Google ought to be able to do with their whacky AJAX techniques. Unfortunately, it seems to just be some sort of automated merge-on-commit, which does nothing for me.

What I’d really like, as far as online document editing goes, is actually to be able to do Gobby-like editing of (moinmoin) wikis, rather than having to deal with advisory locking. I poked a bit further at that, and I suspect it ought to be possible to hack something up by using a tool like editmoin to edit wiki pages with an editor rather than a web browser, and using gobby to do the editing, via a sobby server hosted on the same site as the wiki. It ought to be possible to automate all that complexity using an application/gobbymoin mime type; but I didn’t get anywhere because sobby seems to require IPv6 support. Oh well, maybe some other time.

I’ve played with GMail and Google Talk before, with minimal impact. GMail is kind-of nice, but I like to be able to read my mail offline, so whatever. It is useful as a backup email address if my regular one goes down though. Google Talk doesn’t seem to handle voice/video under Linux, so it’s just a Jabber server. Which is fine, since I hadn’t ever actually gotten any of these whizbang IM things set up. What’s less brilliant is that Gaim is a bit of a pain when it loses connectivity, which happens every time I suspend my laptop, which is every time I stop using it. But I need GMail in order to even try some of the interesting Google services these days, so whatever.

Google Calendar isn’t really something I expected much of. Sure, it’s a calendar app, but I’ve never gotten much use out of appointment diaries or planners or whatever anyway. Having it be web-based actually changes that a bit though, since it makes it trivial to publish to other people, and that even makes a calendar a little bit useful for me too. Having it be able to send reminder SMSes is also neat; at least now that I’ve worked out how to default that behaviour to off… Oddly, though, I’ve found I’m getting more value out of it in listing things I’ve done rather than things I’ve got coming up. I guess it’s nicer to have a list of things you’ve actually done, rather than a list of things you should have done (but often didn’t), or a list of things you’ve got to do…

But the real winner is definitely Google Reader, even if it’s still in Google Labs rather than even being “beta”. While I’ve tried some aggregators in the past, none have remotely grabbed me, and I’ve been tending to just remember the URLs for the blogs and webcomics I like, and type them in when I’m feeling bored. That has the benefit that it limits the number of each I read, but the drawback that I waste time typing URLs and waiting for pages to load even when there haven’t been any updates. The keyboard interface to Reader is pretty pleasant, with the only drawback I’ve found being a slight lag in loading entries at the start of the day. Having it be in my web browser is perfect, since I generally want to follow a few links from blog posts anyway. It’s also made it easy enough that I’ve added a few feeds from real newspapers (or news channels), which is probably a good thing as far as balancing my take on what’s going on in the world.

There’s a couple of downsides. One is that a lot of webcomics don’t have RSS feeds, or, if they do, don’t seem to include the actual comic, just a link to it. I don’t think there’s much of a reason for that — there are a few blogs I read that include ads in their RSS, so that doesn’t seem difficult to handle, and I can’t see any other potential objections. Also annoying is that posts that get aggregated on multiple planets (such as Planet Debian and Planet Linux Australia) show up multiple times, though admittedly I pretty much expected that. Probably the major downside is that it’s so easy to read stuff that I keep adding feeds to it, though…

Blogging Like Nobody’s Watching

While I do blog under the title “indolence log” for a reason, I’ve been a bit unhappy with how little I’ve managed to blog since April — barely managing one post a month. That drop-off coincided pretty sharply with getting elected DPL, and my best guess at the reason is that I’ve associated blogging with getting aggregated by Planet Debian, and the implication that I need to be careful about anything I might say. And that’s pretty much in conflict with how I prefer to blog, so I’ve been thinking over the last few days about whether I should deaggregate myself from Planet Debian to help avoid that tendency. So far it seems like just remembering what my blog’s about is enough, but we’ll see.

Of course, it’s hard to blog like no one’s watching when you do a quick post about todo lists, and get half a dozen replies in your inbox and elsewhere. Anyway, I stumbled upon gtodo (a simple gtk based todo app) after posting, probably not for the first time, and found it actually had all the fields I wanted, and nothing else. I’ve tweaked the source a little (to include the weekday name in the date field, and only include the comments in the tooltips), and so far it looks like exactly what I specced out. We’ll see if that works for me over the next few weeks…

Todo Lists

A while ago I read Steve Yegge’s rant about Agile development, though I’ve forgotten who linked to it. The thing that struck me as interesting was the bit about “work queues”:

With a priority queue, you have a dumping-ground for any and all ideas (and bugs) that people suggest as the project unfolds. No engineer is ever idle, unless the queue is empty, which by definition means the project has launched. Tasks can be suspended and resumed simply by putting them back in the queue with appropriate notes or documentation. You always know how much work is left, and if you like, you can make time estimates based on the remaining tasks. You can examine closed work items to infer anything from bug regression rates to (if you like) individual productivity. You can see which tasks are often passed over, which can help you discover root causes of pain in the organization. A work queue is completely transparent, so there is minimal risk of accidental duplication of work.

Sadly, googling for “work queue” doesn’t come up with any sort of todo list stuff, but rather a multiprocessing scheduling tool which is cool, but not immediately relevant for me. As far as I can see, whatever was actually being talked about either isn’t public, or is more of a concept than an actual tool.
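The work-queue idea from the quote is simple enough to sketch, at least as I read it: a single priority queue that tasks get pushed into as they come up, and popped from whenever someone’s free, with suspended tasks pushed back in with notes attached. This is just my own toy illustration of the concept, not any tool Yegge was actually describing:

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal priorities stay FIFO

def add_task(queue, priority, description, notes=""):
    # lower number = more urgent
    heapq.heappush(queue, (priority, next(_counter), description, notes))

def next_task(queue):
    # an empty queue "by definition means the project has launched"
    if not queue:
        return None
    priority, _, description, notes = heapq.heappop(queue)
    return description, notes

queue = []
add_task(queue, 2, "write docs")
add_task(queue, 1, "fix crash on startup")
add_task(queue, 1, "triage new bugs")

print(next_task(queue)[0])  # prints "fix crash on startup"
```

The nice property is exactly the one the quote points at: everything lands in one place, nothing is ever “assigned and forgotten”, and the queue’s length is a direct measure of work remaining.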

The only Google Todo thing I could find was an applet thing for the personalised google homepage, which just lets you make todo items and set them as high/medium/low priority. And while that might be all that Steve Yegge was talking about, it doesn’t really feel terribly inspiring to me.

I guess what I’d really like is to have todo items get assigned to a project (so that I can ignore all the todos for projects I don’t want to worry about atm), and also to be able to give them a deadline (so I can treat them with a bit more urgency when necessary) and a priority (so that I can easily spot things I’m willing to defer or ignore completely when I find I don’t have time to do everything I’d like).
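That spec — project, deadline, priority — is small enough to mock up in a few lines. This is a hypothetical sketch of the filtering and ordering I have in mind (the field names and the `agenda` helper are my own invention, not from any existing app):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Todo:
    text: str
    project: str
    priority: int = 2              # 1 = high, 3 = low
    deadline: Optional[date] = None

def agenda(todos, project=None, max_priority=3):
    """Todos filtered by project and priority, with deadlined items
    sorted ahead of undated ones, then by priority."""
    picked = [t for t in todos
              if (project is None or t.project == project)
              and t.priority <= max_priority]
    return sorted(picked,
                  key=lambda t: (t.deadline or date.max, t.priority))

todos = [
    Todo("write dak BOF notes", "debian", priority=1),
    Todo("try S3 backups again", "backups", deadline=date(2007, 7, 1)),
    Todo("tweak gtodo tooltips", "misc", priority=3),
]
```

With that, `agenda(todos, project="debian")` ignores everything I don’t want to worry about right now, and `agenda(todos, max_priority=2)` drops the stuff I’m happy to defer.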

I suspect I’ll probably just stick with writing notes in vi to keep track of things, though, same as I have been for years.

Vote Early, Vote Often

A couple of comments on the ongoing votes.

The DFSG/firmware issue is a complicated one. For the votes that we’ve currently got open, I’m voting for further discussion in favour of the DFSG#2 clarification — not because I disagree with requiring source code for all works in principle, but because I think we should be making sure we can make Debian work with full source for everything first, before issuing position statements about it; and I’m voting for “release etch even with kernel firmware issues” above further discussion and “special exception to DFSG#2 for firmware” below further discussion, because I don’t think we can handle the broader issue before etch, and I don’t think it’s a good idea to try to tie the exception to the non-existence of technical measures directly. I’m not really sure that’s a good enough reason to vote that option below further discussion, so I might change my vote on that yet.

There have been quite a few other proposals on the topic, including one from me that didn’t get sufficient seconds to be voted on, another from Frans Pop that was withdrawn due to procedural issues, a couple more from Sven Luther, and a new proposal from Sven and supported by the kernel team that’s a further refinement on the “release etch even with firmware issues” resolution currently being voted on.

I personally think we should spend some time after etch thinking a bit more deeply about this stuff. I think we should insist on source for everything, but that also means we need to have a clear explanation of why it’s good — even for firmware and font files and music and artwork — and it means we’re going to need to make sure we have a reasonable way of distributing it, and it means we’re going to have to make sure that we have a good way of distributing stuff that doesn’t meet our standards but that users still need or want; whether that’s drivers they need to do installation or get good graphics performance, documentation for their software, or whatever else. There are a lot of real improvements we could make there — both in making the core of Debian more free and more useful, and making it easier for users who want to make compromises to choose what they want to compromise on and what they don’t want to compromise on. I really hope that once etch is done and dusted quite a few of those sorts of improvements will get done, both in technical improvements in Debian, and in good advocacy from Debian and other groups towards people who aren’t already making things as free as they potentially could be.

On the recall issue, I would have preferred to vote “re-affirm”, then “recall”, then “further discussion”, to say “I don’t think this creates a conflict of interest that can’t be handled, but I’ve no objection if other people think it does”. But since that isn’t what the ballot(s) turned out to be, I’ve voted “re-affirm” above further discussion on that ballot, and “recall” below further discussion on the other ballot.

I’ve voted the “wish success” option above “don’t endorse/support” option for two reasons — first, because the “wish success” resolution actually refers to “projects funding Debian or helping towards the release of Etch” in general, while the “don’t endorse/support” proposal specifically talks about projects I’m involved in (including non-Dunc-Tank projects) which seems kind of personal. There’s also the fact that I’d rather see more success and mutual support in the Debian community, even for projects I don’t personally like, than less. I originally voted the “don’t endorse/support” option below further discussion for those reasons, but then decided that that was silly — just as I would have been happy to vote for the recall above further discussion, it’s not really that big a deal either way, and fundamentally I think both options are essentially the same anyway: that any potential conflict of interest can be dealt with, and Debian and Dunc-Tank are fundamentally different projects. I was probably influenced in that a fair bit by the “not endorse/support” option being proposed and seconded mostly by people who actively oppose the idea, including Josselin Mouette, Samuel Hocevar, Pierre Habouzit and Aurélien Jarno.

But in the end, the outcome’s fine any which way — some people will continue disagreeing with the concept, others will agree with it, and everyone can keep contributing to Debian in whatever way they think’s best whatever the outcome. And like I said when running for DPL this year, while you are a lot more visible as DPL, it’s not actually that necessary to be DPL to get things done in Debian.

Dunc-Tank announced

After a lot of discussion on -private, and a fair bit more discussion off list, a bunch of Debian developers, including myself, have launched dunc-tank.org. The idea is that we think getting etch out on time is important enough to be worth paying the release managers to work on it full time, so in the free software spirit, we’ve gone ahead and done something about it. We’ve already put up some of our own money, as have some other developers, and if you’d like to join us, there are more details at the website. We’re currently hoping that SPI will agree to accept funds on our behalf, and thus provide a good level of accountability. The board meeting later today will hopefully shed some light on whether that’s feasible.

There’s also some information in press release form, and some interesting background detail for people who don’t follow the Debian release process as closely as some of us do.

The first article on the topic’s already been published, with one inaccuracy — this is not a Debian project, and is being specifically handled outside of Debian to both ensure that any conflict of interest that might occur can be decided by Debian in Debian’s favour, and to allow other groups that have different ideas about what priorities are important to encourage contributions to those areas.

A question that has been raised is whether the organisation can be sufficiently “outside” of Debian when the DPL is intimately involved. I don’t have the answer to that — in my opinion it can be, but whether this one is will be up to Debian to decide.

Debian news for the day

It’s time for LinuxWorld SFO, which means it’s time for lots of interesting announcements. A major one today is from Hewlett-Packard, announcing that they’re ready to support Debian GNU/Linux officially on their Proliant and BladeSystem servers, and as a side note that their revenues from sales of Linux servers have now hit six billion dollars a year over the past eight years.

An interesting note from the IDG article above is IBM’s response:

“IBM works well with Debian in the Linux community and will, and does, support the Debian distribution for our customers,” the company said in a prepared statement. “It’s not a standard offering, but we do it under special bid.”

It’s interesting that vendors are seeing enough interest in Debian from their customers to be specifically supporting it, whether on a specially negotiated basis as IBM does, or now on a standard basis for entire product ranges as HP is. Personally, I find it really pleasing that companies are doing support for Debian not because we’ve negotiated with them, but because customers are asking for it. I’m also pretty pleased that HP’s commitment to Debian support doesn’t come at a cost to the support they’re offering for their other “tier-1” distributions: Red Hat and Novell’s Suse. To me, that’s what free software’s all about, friendly competition that’s focussed on making people’s computers work, not exclusive deals and promotional rights.

Debian’s also been mentioned in a ZDNet story about Movidis MIPS servers, which have a nifty 16-core processor and seem to be aimed at video streaming and other tasks that are a mixture of storage and processor intensive. Interestingly, while the story mentions the use of a mildly customised version of Debian as their OS basis, there doesn’t seem to be similar mentions on either the Movidis or Cavium Networks sites, let alone any juicy technical details.

Also interesting is this mention in a story about Zimbra groupware:

ZCS supports Microsoft’s Outlook messaging software and runs on the two leading flavors of Linux — Red Hat and Suse — and Apple Computer Inc.’s Mac OS X. Although ZCS also runs on Windows on the client side, Zimbra isn’t seeing interest from its customers in supporting Microsoft’s operating systems on the server side, Dietzen said. The start-up has a list of other operating systems it plans to support, notably the Debian distribution of Linux and Sun Microsystems Inc.’s Solaris flavor of Unix, he added.

Looks like the first efforts at Debian packaging are happening on the Zimbra forums already.

And then, of course, there’s the announcements from last week, such as the Creative Commons 3.0 discussion draft, that’s trying to solve some of the conflicts between the expectations we have of free “software” licenses, and that others have of free “content” licenses. A lot of work’s been put into that already, and hopefully it’ll pay off for everyone. Or the recent betrayal of the Debian spirit by Linspire, encapsulated in this ComputerWorld headline: Linspire releases Freespire 1.0 early. Ah well, traditions have to end sometime I guess. Or there was OSDL’s announcement that they’ve signed up Debian derivative Xandros for their desktop working group, with the little side note dropped in that IDC estimates desktop Linux will be worth ten billion dollars in annual revenue beginning the year after next. Or there was OpenVZ’s announcement at being included in unstable recently (it’s also available in etch, and they’ve announced today a build for RHEL 4).

Of course, I’ve also neglected to mention some much bigger Debian news, which is that as of Friday, etch beta3 is out, which is an excellent sign that we’re on track for release, even ignoring all of the nifty new features it includes.

And heck, Debian’s birthday is still a couple of days off.

In the meantime, I’m going to go to sleep, so I can make it to Novell’s launch event for Suse Linux Enterprise 10 in Brisbane tomorrow, and see what they’re up to.