LCA Sponsors

An article by Sam Varghese appeared on ITwire today, entitled linux.conf.au: What is Novell doing here?:

A GNU/Linux system does not normally load modules that are not released under an approved licence. So why should Australia’s national Linux conference take on board a sponsor who engages in practices that are at odds with the community?

What am I talking about? A company which should not be in the picture has poked its nose in as a sponsor. Novell, which indicated the level of its commitment to FOSS by signing a deal with Microsoft in November 2006, will be one of the supporting sponsors for the conference.

Novell was also a minor sponsor of the 2007 conference, and Sam wrote an article in January expressing similar thoughts, which included this quote from Bruce Perens:

“I’d rather they hadn’t accepted a Novell sponsorship. It wasn’t very clueful of them, given Novell’s recent collaboration with Microsoft in spreading fear and doubt about Linux and software patents,” Perens said.

Ultimately, I think that’s a mistaken view. Linux.conf.au is what it is thanks to the contributions of four groups:

the organisers
who create the conference, get a venue, organise a schedule of events, help the speakers and attendees to get there, and generally make it easy for everyone to just get immersed in cool Linux stuff
the speakers
who provide the core of the schedule, the reason for attendees to go, and a core depth of awesome technical knowledge and ideas
the attendees
who fill in the organisational/content gaps that the organisers and speakers miss, who make for fascinating corridor and dinner conversations, who make side events like the miniconfs, the hackfest or open day interesting, and who pay the rego fees that let the conference happen
the sponsors
who provide a chunk of money to fill out the conference budget, letting us commit to venues and events earlier (when we might otherwise have to wait to see how many people come), and letting us do extra things that registration fees alone wouldn’t cover

Obviously sometimes you have to exclude people from participating, but that’s mostly only if they’re actually causing trouble for the event. For sponsors, that pretty much means trying to interfere in the conference itself, or not paying on time. Otherwise, if you’re contributing to the conference, and not causing problems, you certainly should be recognised for that, as far as I can see.

For me, the same thing would apply if Microsoft was offering to sponsor the conference — if they’re willing to contribute, and not cause problems, I’m all for it. If they happen to not be doing anything constructive in Linux-space anywhere else, well, it seems perfectly fine to me to start contributing by helping make linux.conf.au awesome.

In Microsoft’s case that would be hard, because all the people going “oh my gosh, Microsoft, Linux! Wolves, sheeps! Hell, snow!” along with possible mixed messages from Microsoft and our long-term major sponsors HP and IBM about the future of Linux and whatnot could really distract us from all the cool technical stuff the conference is fundamentally about. I don’t think there’s anything Microsoft could offer to justify that much disruption, but having more of the world’s software companies involved in free software would probably be worth a bit of hassle, if the disruption could be minimised.

Ultimately, I guess my disagreement comes down to these couple of comments from Sam’s article:

Asked whether it was right that Novell should be allowed to be a sponsor for a conference such as this – which, in my view, is a privilege – […]

[…] Novell, obviously, is hoping that, as public memory is woefully short, it will be able to wriggle its way back into the community. Providing such leeway is, in my opinion, a big mistake.

In my opinion, the ability to contribute to open source isn’t a privilege, it’s something that should be open to everyone, including people who’ve made mistakes in the past: and that’s precisely what the “free” in free software is all about.

OTOH, if you want to see who’s been participating most in the Linux world lately, you’re much better off looking at the list of speakers than sponsors. Novell (or at least SuSE) folks giving talks in the main conference this year seem to include John Johansen and Nick Piggin. Interestingly, the count of HP folks seems a bit low this year, with only two that I can see, which leaves them not merely equalling Novell/SuSE, but beaten by both Intel and Catalyst. Tsk! I guess we’ll have to wait and see if that changes when we can see the list of attendees’ companies in the booklet this year…

Baby Got Bloat

With the whole incipient git obsession I’ve been cleaning out some of my scratch dirs. In one, last touched in mid-2006, I found:


Oh. My. God.
Becky, look at that bloat!
It's so big...
It looks like one of those Microsoft products...
Who understands those Microsoft guys anyway?
They only code that crap because they're paid by the line...
I mean the bloat...
It's just so slow...
I can't believe it's so laggy...
It's just bloated...
I mean, gross...
Look, that just ain't a Hack.

I like big apps and I cannot lie.
You other bruthas can't deny,
That when some perl comes by, not a symbol to waste
Like line-noise, cut and paste --
You're bewitched;
But now my context's switched,
Coz I notice that glest's got glitz.
Oh BABY! I wanna apt-get ya,
Coz you got pictures,
Those hackers tried to warn me,
But the bling you got
/Make me so horny/
Oooo, app fantastic,
You say you wanna fill up my drive?
Well, use me, use me, coz you ain't that average GUI.

I've seen them typing,
To hell with reciting,
I point, and click, and never miss a single trick.

I'm tired of tech websites,
Sayin' command lines are the thing.
Ask the average power user what makes them tick --
You gotta point and click.

So hackers! (Yeah!) Hackers! (Yeah!)
Has your UI got the G? (Hell Yeah!)
Well click it (click it), click it (click it), and use that healthy glitz,
Baby got bloat.

(vi code with a KDE UI...)

And before you ask, no, I don’t know what I was drinking…

User configuration

Inspired mostly by Joey’s nonchalant way of dealing with the death of his laptop…

This seems less of a disaster than other times a laptop’s disk has died on me. When did it start to become routine? […] My mr and etckeeper setup made it easy to check everything back out from revision control. […]

…I’ve been looking at getting all my stuff version controlled too. I’ve just gotten round to checking all my dotfiles into git, and it crossed my mind that it’d be nice if I could just set an environment variable to tell apps to create their random new dot-files directly in my “.etc-garbage” repo. I figured using “$USER_ETC/foo” instead of “$HOME/.foo” would be pretty easy, and might be a fun release goal that other Debian folks might be interested in, so I did a quick google to see if something similar had already been suggested.
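In app terms the change is tiny: instead of hardcoding “$HOME/.foo”, something like the following sketch, where $USER_ETC and the helper name are both hypothetical, just illustrating the idea above.

```shell
# Sketch: where an app should create its config dir -- prefer the
# (hypothetical) $USER_ETC variable over the traditional dotfile.
app_config_dir() {
    if [ -n "$USER_ETC" ]; then
        printf '%s\n' "$USER_ETC/$1"
    else
        printf '%s\n' "$HOME/.$1"
    fi
}
```

So “app_config_dir foo” gives “$HOME/.foo” normally, and “$USER_ETC/foo” once the variable’s set, which is all the backwards compatibility story there is to it.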

The first thing I stumbled upon was a mail from the PLD Linux folks, who apparently were using $HOME_ETC at one time — which sounded pretty good, though it doesn’t seem to have gotten anywhere. That thread included a pointer to the system that has gotten somewhere, which is the XDG spec.

It’s actually pretty good, if you don’t mind it being ugly as all hell.

They define three classes of directory — configuration stuff, non-essential/cached data, and other data. That more or less matches the /etc, /var/cache and /var/lib directories for the system-wide equivalents, though if the “other data” is stuff that can be distributed by the OS vendor it might go in /usr/lib or /usr/share (or the /usr/local/ equivalents) too.

Which is all well and good. Where it gets ugly is the naming.

For the “/etc” configuration stuff, we have the environment variable $XDG_CONFIG_HOME, which defaults to ~/.config, and has a backup path defined by $XDG_CONFIG_DIRS, which defaults to /etc/xdg.

For the “/var/lib” other data stuff, we have the environment variable $XDG_DATA_HOME, which defaults to ~/.local/share, and has a backup path defined by $XDG_DATA_DIRS, which defaults to /usr/local/share:/usr/share. (Though if you’re using gdm, it’ll get set for you to also include /usr/share/gdm.)

And for the “/var/cache” stuff, we have the environment variable $XDG_CACHE_HOME, which defaults to ~/.cache.
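Put together, the config-file lookup an XDG-aware app ends up doing works out to something like this sketch (just the search order described above with the defaults filled in, not any particular implementation):

```shell
# Find a config file per the XDG basedir spec: try $XDG_CONFIG_HOME
# (default ~/.config) first, then each dir in $XDG_CONFIG_DIRS
# (default /etc/xdg), in order; print the first match.
xdg_config_find() {
    _home="${XDG_CONFIG_HOME:-$HOME/.config}"
    _dirs="${XDG_CONFIG_DIRS:-/etc/xdg}"
    for _d in "$_home" $(printf '%s' "$_dirs" | tr ':' ' '); do
        if [ -e "$_d/$1" ]; then
            printf '%s\n' "$_d/$1"
            return 0
        fi
    done
    return 1
}
```

(Directories with spaces in their names will confuse the word-splitting above, but that’s about par for a sketch.)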

That seems to me like exactly the right idea, with way too much crap on it. If you simplify it obsessively — using existing names and dropping the desktop-centrism — you end up with:

Put configuration files in $HOME_ETC/foo or $HOME/.foo. For shared/fallback configuration, search $PATH_ETC if it’s set, or just /etc if it’s not.

Put data files in $HOME_LIB/foo or $HOME/.foo. For shared data, search $PATH_LIB if it’s set, or look through /var/lib, /usr/local/{lib,share} and /usr/{lib,share} if it’s not.

Put caches in $HOME_CACHE/foo or $HOME/.foo. For shared caches, search $PATH_CACHE if it’s set, or just look in /var/cache if it’s not.

That seems much simpler to me to the point of being self-explanatory, and much more in keeping with traditional Unix style. It’s also backwards compatible if you use both old and new versions of a program with the same home directory (or you happen to like dotfiles). And having the XDG variables set based on the above seems pretty easy too.
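To make that concrete, the mapping is about five lines of shell (variable names per the proposal above, so entirely hypothetical), falling back to the spec’s defaults when the simple names are unset:

```shell
# Derive the XDG basedir variables from the proposed HOME_*/PATH_* ones,
# using the XDG spec's documented defaults as the fallbacks.
export XDG_CONFIG_HOME="${HOME_ETC:-$HOME/.config}"
export XDG_CONFIG_DIRS="${PATH_ETC:-/etc/xdg}"
export XDG_DATA_HOME="${HOME_LIB:-$HOME/.local/share}"
export XDG_DATA_DIRS="${PATH_LIB:-/usr/local/share:/usr/share}"
export XDG_CACHE_HOME="${HOME_CACHE:-$HOME/.cache}"
```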

I wonder what other people think — does {HOME,PATH}_{ETC,LIB,CACHE} seem sensible, or is XDG_{CONFIG,DATA,CACHE}_{HOME,DIRS} already entrenched enough that it’s best just to accept what’s fated?

tempus fugit

I blogged a fair bit about darcs some time ago, but since then I’ve not been able to get comfortable with the patch algebra’s approach to dealing with conflicting merges — I think mostly because it doesn’t provide a way for the user to instruct darcs on how to recover from a conflict and continue on. I’ve had a look at bzr since then, but it just feels slow, to the point where I tend to rsync things around instead of using it properly, and it just generally hasn’t felt comfortable.

On the other hand, a whole bunch of other folks I respect have been a bit more decisive than I have on this, and from where I sit, there’s been a notable trend:

Keith Packard, Oct 2006
Repository formats matter, Tyrannical SCM selection
Ted Ts’o, Mar 2007
Git and hg
Joey Hess, Oct 2007
Git transitions, etckeeper, git archive as distro package format

Of course, Rusty swings the other way, as do the OpenSolaris guys. The OpenSolaris conclusions seem mostly out of date if you’re able to use git 1.5, and I haven’t learnt quilt, so I don’t miss its mode of operation the way Rusty does. And as far as the basics go, Carl Worth did an interesting exercise in translating an introduction to Mercurial into the equivalent for git, so that looks okay for git too.

Risky advancements

The latest warning from Dresden Codak‘s Aaron Diaz:

Do we really want to live in a society populated by geriatric 27-year-olds? In living so long and spending so much time `thinking’ do we not also run the risk of becoming a cold, passionless race incapable of experiencing our two emotions (fear and not fear)?

Also interesting is a talk by Vernor Vinge from back in February to the Long Now Institute, titled “What if the Singularity Does Not Happen?”, for which slides are available along with an audio recording.

Asus eeePC

Okay, so any excuse to bring out the Laphroaig is fine by me, but the cute little eeePC is better than most. That it’s cute and popular is all very well, but what really makes my day is that this is the first device I’ve seen that both doesn’t hide the fact it’s running Linux and is available in mainstream stores in Australia. A random review from the Sydney Morning Herald:

You also don’t get Windows. Asus has adopted the free Linux operating system that’s been slowly yet steadily growing in popularity over the past decade. This keeps the cost down and makes better use of its relatively modest hardware, which would creak under the weight of Windows. It does mean not being able to use your favourite Windows software but, fortunately, the Eee PC comes with dozens of programs, including the familiar Firefox web browser and Skype for online phone calls.

That came out on the day the eeePC was released in Australia. Three days later, they did a followup:

Taiwan computer maker Asus might have underestimated the local demand for its diminutive Eee PC, as the $499 laptop is now virtually sold out in Australia.

From that, there’s a brief take on who’s actually buying them:

“We’ve had customers coming in buying two or three units for the family – the mix of customer has been probably novices more than the tech types,” he said.

It’s really good to see that Linux is getting credit in pretty much all those stories, no matter how mainstream. Xandros is pretty rarely mentioned, and I don’t think I’ve seen Debian mentioned yet. But even without the credit, it’s still pretty cool that with the eeePC and Dell’s trysting with Ubuntu, the real in-roads to pre-installed consumer Linux these days are building on Debian. I’d always expected that Red Hat or SuSE or someone with more commercial muscle would get through that door first (they’ve certainly had box sets more readily available!), but apparently elitist, freedom obsessed, techno-geekery actually works better, somehow. At least when there’s a company to put a smiley face in front of it all :)

The other interesting thing about choosing Xandros is that it presumably means the eeePC is covered by Microsoft’s patent protection scheme — which means no baseless threats against eeePC users from Microsoft, but also that Microsoft’s probably getting a cut of whatever Xandros receives from each eeePC sale. Whether you think that’s a problem or not, the result is probably going to be that it’s the thin edge of the wedge: it’s removed Microsoft’s ability to block a Linux preinstall on consumer hardware, which means we can see that regular people like Linux systems enough that they sell out in days. And that means Linux systems are a fact of life, and if Microsoft try to stop them by making it difficult for you to sell Windows systems, well, that’ll just make it difficult to sell Windows systems — which is another win for Linux. And once you get to the point where the argument for paying off Microsoft doesn’t rely on keeping your OEM deal for Windows, but is just a matter of whether their patents are actually valid…

Oh, also nice from the promoting-free-software angle: the first thing you see if you visit Asus’s eeePC site to check out the gizmo is the news item:

2007.11.27 ASUSTek is committed to meet the requirements of the GNU General…

Add the fact that an eeePC running a Debian-derivative has access to all the software in the Debian repositories for no cost, and that the new government’s planned rebates and investments in IT probably mean parents can get close to a full refund, and the choice comes down to this: spend nothing and get a popular, reasonably functional and very portable laptop with all the software you could ever want, that just happens to be running Linux; or pay an extra few hundred or thousand dollars for a Windows laptop, and then probably pirate whatever software you end up needing.

In any event, to my mind, that makes 2007 the year of the Linux on the desktop: everything from here is just a simple matter of quantity. What’s next?

The Liberals in Limbo — how low can they go?

The latest squabbling incompetence:

Gold Coast MP Ray Stevens says leadership contender Tim Nicholls should withdraw his challenge and get behind Mark McArdle.

Mr Stevens says if the matter is not resolved that way, he says his legal advice suggests the only option left will be to draw names out of a hat.

“That is unacceptable to the people of Queensland [and] it’s unacceptable to me,” he said.

“If Mr Nicholls were to win that particular toss of the coin or ballot or whatever he would be forever known as toss-up Tim.”

Why the hell the Queensland Nationals are putting up with this incompetence from their supposed coalition partners beats me.

Against the 59 Labor MPs in Queensland, there are 17 Nationals MPs, and 8 Liberal MPs. Since Labor’s clever “Just vote [1]” campaign in 2004 stopped the Nationals and Liberals from running candidates in the same seat, that means that the Liberals won 16.3% of the seats they contested, while the Nationals won 42.5% (Labor won 66%).

The Qld Liberals are dead and getting worse every day, the Nationals are the only real opposition in Qld, and they should start acting like it — by trying to win a majority at the next election in their own right (rather than not even putting up candidates in over 50% of seats), telling people to let their preferences be heard and not “Just vote [1]”, and if the Liberals can’t get their act together, withdrawing from the state coalition entirely so they aren’t dragged down with them.

“You’re an embarrassment to Queensland. Get on with the job, get a leader and start doing the things you’re elected to do.”

What she said.

UPDATE 2007/12/06:

Heh.

Earlier today, Nationals leader Jeff Seeney described the debacle as damaging to the Coalition’s credibility.

[…]

Mr McArdle won when Mr Nicholls withdrew his challenge to take on the deputy’s position instead.

The compromise was reached after Mr Seeney threatened to tear up the Coalition agreement.

Hark!

What I want for christmas:

Managing Debian Installs

For a while I’ve been trying to find some easy way to keep a few machines I admin behaving the way I want them to with minimal effort. They don’t really need much maintenance — but I would like something to help keep them all in sync. Basically, something like FAI, but much, much simpler — ideally something that takes five minutes to understand, and another five minutes to deploy; leaving the more complicated and powerful tools for when they’re actually needed.

I figured what I really want was just a simple way to make a meta-package — one that doesn’t really provide any functionality, just tells apt/dpkg what I want installed (via Depends), and what I don’t want installed (via Conflicts) and adds any extra configuration stuff or local scripts that I decide I want.

But doing that with a real Debian package is harder than I’m really comfortable with — I don’t want to have to worry about potential lintian errors, or rules files and debhelper commands, or writing a Makefile to get my files installed or whatever, I want something more trivial than that. Looking for meta-package creators, the only one I spotted that I thought looked likely was cdd-dev, described as “Custom Debian Distributions common files for developing meta packages”. Unfortunately it seems to just provide templates, which makes things quicker, but no less complex.

Fortunately equivs (“Circumvent Debian package dependencies”) is actually used for metapackages these days, according to its maintainer on IRC and its long description:

This package provides a tool to create Debian packages that only contain dependency information.

One use for this is to create a metapackage: a package whose sole purpose is to declare dependencies and conflicts on other packages so that these will be automatically installed, upgraded, or removed.

Another use is to circumvent dependency checking. […]

That turned out to work much better than I remembered (from whenever I last tried it — back in ’99 I guess?), with the only drawback being that I couldn’t add files easily. But that’s just a matter of creating a patch to equivs, which I then won’t have to worry about again. So having done that, I can now create a metapackage to do whatever I want by creating a file like:

Suite: client
Section: misc
Priority: standard

Package: ajs-client-stuff
Version: 20071114.1
Maintainer: Anthony Towns <aj@erisian.com.au>
Description: Metapackage for aj's client computers
 Depends on necessary packages, etc.

File: /etc/apt/sources.list.d/client.list
 deb http://mirror.localnet/debian etch main contrib non-free
 deb http://mirror.localnet/debian etch-proposed-updates main contrib non-free
 .
 deb http://security.debian.org/ etch/updates main contrib non-free

File: postinst
 #!/bin/sh -e
 .
 apt-key add - <<EOF
 [output of gpg --armour --export $KEY]
 EOF
 .
 ##DEBHELPER##

debhelper kindly takes care of getting the permissions right for me, and equivs will generate a full source package if I tell it to, which I can just upload to mini-dinstall and have a regular Debian repository just by writing a text file and running equivs-build. And my metapackage can add dependencies, conflicts, apt sources, cronjobs, scripts, configuration files, documentation, or whatever I happen to want — which means I can make it automatically update itself, and thus install any dependencies or remove any conflicts, which in turn means that modifying the config on all the machines is just a matter of updating the metapackage. And new installs are (hopefully) just a matter of doing a standard install and then adding the metapackage. Perfect.
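The resulting workflow is about as short as it gets — roughly the following sketch, assuming the control file above is saved as ajs-client-stuff.ctl, and with “local” standing in for a hypothetical dput target pointing at the mini-dinstall repository:

```shell
# Sketch of the build-and-upload step (assumes equivs is installed;
# file and target names are just from the example above).
build_metapackage() {
    equivs-build --full "$1" &&          # --full builds a proper source package
    dput local "${1%.ctl}"_*.changes     # "local" = hypothetical repo target
}
# then on each machine: apt-get update && apt-get install ajs-client-stuff
```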

…even if it is really little more than a reinvention of rpm’s .spec files. :)

Some fun…

Something I’ve been meaning to play with for a while, inspired by a slashdot post the other day:

Hopefully I’ll be able to speed up the calculations enough to have it work on more than just .1% of the data in reasonable time (the above took three hours of CPU time to generate, sadly), at which point we might be getting somewhere.

Props to cairo and gifsicle for the viewable data dump.

Multiple Repositories — Simultaneously!

If, like me, you’ve been following development of Joey’s nifty new multi-repository tool and busily registering all your git and bzr and cvs and whatnot repos, you might have noticed a tantalising TODO item that’s recently appeared in the git repo:

* Ability to run commands in paralell? (-j n)

  If done right, this could make an update of a lot of repos faster. If
  done wrong, it could suck mightily. ;-)

Well, sucking mightily just means you need to prototype it first, so here’s a little add-on to mr(1) that runs multiple invocations of mr(1) simultaneously, naturally enough called mrs(1). Consideration of what that implies about superior multitasking is left as an exercise to the interested reader.

The implementation is slightly interesting: it’s a fairly simple perl script that first uses “mr list” to get a list of repositories to work with, then simply uses perl’s “open” function to run mr on each of those directories with the output piped to a filehandle. At that point, things get slightly complicated, since we want to keep them all running no matter what’s going on, so we have a select() loop that collects all the output into one buffer per command, which we put together later, and print out. And just for kicks, if the output’s longer than 20 lines, we pipe it through less after trimming out any ugly ^M nonsense we might have had thanks to progress updates or similar.
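If you only want the parallelism, and don’t mind output arriving in one lump at the end, plain shell gets surprisingly close — a rough sketch (this is not how mrs(1) is implemented, and the repo-list handling is glossed over):

```shell
# Run "mr update" in every repo dir given, concurrently; capture each
# job's output in its own file so it can't interleave, print it at the end.
par_update() {
    tmp=$(mktemp -d)
    i=0
    for d in "$@"; do
        i=$((i + 1))
        ( cd "$d" && mr update ) > "$tmp/$i.log" 2>&1 &
    done
    wait                    # let them all finish, whatever happens
    cat "$tmp"/*.log
    rm -rf "$tmp"
}
# e.g.: par_update $(mr list ...)   -- parsing mr's list output is left as an exercise
```

What that can’t easily do is mrs’s select() trick of streaming each job’s output as it finishes, or the automatic piping through less — hence the real script.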

I like it, anyway. And happily, while “mr update” takes about fifty seconds for me, “mrs update” takes about ten. Fun!

(Joey: btw, it’s “parallel” :)

GPLv3 and Debian

So with the GPLv3 and LGPLv3 finally out (hurray!) it’s time to actually pay attention to the practical consequences of an upgrade to the world’s premier copyleft licenses.

One of the things I hadn’t noticed, though I should have, is that the LGPLv3 is not compatible with the GPLv2 — that is, GPLv2 programs that used LGPLv2 libraries cannot make use of new versions of those libraries released under the LGPLv3 without also updating their license to GPLv3 (or “GPLv2 or above”, or LGPLv2 or LGPLv3 or something similar).

Fortunately, one of the major LGPLv2 libraries out there — glibc — is reportedly going to remain available under the LGPL v2.1, but for other libraries it could become a problem, and it’s something GPLv2 authors will need to pay attention to.

Also fortunately, the opposite case — GPLv3 apps using LGPLv2 libraries — isn’t a problem, because the LGPLv2 explicitly allows redistribution and modification under the terms of GPLv2 or later, so you can take an LGPLv2 library and incorporate it in a GPLv3 application and just release the whole thing under the GPLv3.

Unfortunately, the LGPLv3 doesn’t have a similar clause, so when the GPLv4 comes out, if it ever does, we could easily find the GPLv4 apps can’t use LGPLv3 libraries, and GPLv3 apps can’t use LGPLv4 libraries.

Really these are side-effects of a fundamental change: up until now, almost everything we’ve distributed — and certainly all the low level stuff: the kernel, the C library, the compiler, the shell and basic tools — has been under a GPLv2 compatible license. Some of them have been more flexible than that, due to being under a BSD or MIT license, or being under the LGPL or having a “GPLv2 or later” clause, but we’ve been essentially able to ignore all that and just treat everything as GPLv2. There are a few exceptions that we have to deal with — the occasional app that’s not GPL compatible doesn’t cause much of a problem, because things that depend on it don’t usually create copyleft issues, which just leaves checking that it doesn’t use any GPL-only libs, such as gnutls or libreadline. Openssl is a nuisance, in that it’s a popular, GPL-incompatible library, but it’s also reasonably easy to deal with on a case by case basis.

But with the GPLv3, that changes: as GPLv3 apps and LGPLv3 libraries get uploaded, the assumption that more or less everything is GPLv2 compatible goes out the window, and while some things are GPLv2 only (such as the kernel!) we also can’t assume everything is GPLv3 compatible.

Fortunately, both the GPLv2 and GPLv3 have what is known as the “system library” exception, which allows you to not bother distributing the source to your C compiler or C library or kernel with your app, because everyone’s assumed to have one of those already. Unfortunately, the GPLv2 version of that exception includes the phrase “unless that component itself accompanies the executable”, so Debian has always cautiously assumed the exception doesn’t apply to us, because everything we distribute in main “accompanies” everything else in main — kernel, C library, C compiler, etc. Equally unfortunately, while the GPLv3 avoids that problem, it ends up with a clause that’s not terribly comprehensible, despite a couple of rewrites during the drafting process.

What’s all that mean?

The big thing is that it means Debian’s going to have to put a bit more attention into the system library exception, both for GPLv2 and GPLv3 apps, because our current policy of trying to avoid needing it is probably not going to be sustainable. I’m not sure how in depth we’ll need to go for that; we’ve already contacted the FSF licensing folks for an explanation of the GPLv2 clause in relation to the CDDL, but it’s possible the FSF might prefer to be ambiguous in responding so as to keep their options open for defending or enforcing the GPL later, in which case we might need to get legal advice of our own as to what we should do.

The second thing is that in reviewing our understanding of the system library exception, we might be in a position to have a Debian GNU/OpenSolaris port, making use of the exception for the CDDL-licensed OpenSolaris kernel and libc, as Nexenta do. That seems to me to be almost certainly possible for GPLv3 (or “GPLv2 or above”) applications, and probably possible for GPLv2 apps as well.

If so, that’ll let us expand Debian to supporting four kernels in the near future: Linux, Hurd, FreeBSD and OpenSolaris, which will be pretty fun, in my book.

The downside is that if we find an interpretation of the system library exception that lets us mix and match licenses comfortably, that same interpretation might let proprietary software vendors mix and match GPLed software with non-free software, rather than just free software with different copyleft licenses — and thus defeat the entire point of copyleft. What’s worse is that by following Debian’s lead, they’d have some moral authority in doing that, beyond just having a legal argument as to why it’s okay.

And that’s the rub.

(My thoughts on the license texts as posted to -legal are here, here, here and here.)

Some Random Notes from DebConf

debootstrap’s now team maintained under the debian-installer subversion repository; uploaders are Joey Hess, Frans Pop, Junichi Uekawa and myself. Rumours are Colin Watson might be joining in too. There’s a few changes committed, but an upload hasn’t been made yet — at least last I checked. Frans is applying pressure to bump the version to 1.0, as he seems to think it’s ready for production use.

ifupdown’s had a maintainer upload, albeit to experimental — consider it a response to the competitive threat of madduck’s netconf. Changes are pretty minimal — it now uses iproute instead of ifconfig, there’s a “-o” option for ifup that lets you say things like “ifup -o wireless_essid=DebConf7 eth1” instead of having to edit /etc/network/interfaces. There’s a couple of backwards-incompatible changes unfortunately, so some other development’s needed before it will be suitable for unstable.

In other red-letter news, the bzr archive for dak is up to date compared to what’s in use on ftp-master. Take advantage while you can! dak continues to stand for “debian-archive-kit”, not “debian-archive-kilt” however.

Chatting about the CDDL

Just prior to the “State of the Coffee Cup” talk on Sunday, Tom Marble sent a quick invitation to Steve Langasek, Sam Hocevar and myself to have a chat with Simon Phipps about the CDDL and some of the concerns that have been raised on the debian-legal list lately — particularly about choice of venue. Unfortunately in the ten or fifteen minutes between the mail and the meeting, we couldn’t find Sam, but we managed to grab Don Armstrong to join in instead.

It’s a couple of days later, and I’m going from memory, not notes, so this is paraphrasing. Simon’s primary concern was one of predictability: Debian accepted the Mozilla Public License back when it was released (and prior to Mozilla and other software tending to get dual licensed under the GPLv2), and the CDDL was based on the MPL and contains almost exactly the same terms. Simon’s comment was that if you’ve accepted that then, reviewing new licenses from scratch and coming up with different results really makes it hard to work with you, and worse, makes it look like you’re basing your conclusions more on who proposed the license than on what the license actually says.

Don and Steve seemed to mostly appreciate that point, but still thought that choice of venue was harmful and unnecessary, and that it’d be nice if Sun — as pretty much the primary organisation currently behind any of these licenses — would take some steps to reduce that harm: limiting choice of venue to when both parties in a dispute are multinational, for example. Simon’s concern there was that updating the CDDL would require a lot of effort internally — getting buy-in from all the groups in Sun using the CDDL, getting the wording changes approved by the appropriate legal advisors, and convincing the lawyers of Sun’s commercial partners that the changes weren’t going to be a problem later — and all that when a new version of the GPL is about to be released, and it might be better to work on getting the relevant software available under that, or even the GPLv2. On the other hand, Simon thought that making (binding) statements about Sun’s interpretation of the license seemed to (eventually) work reasonably well for the DLJ, so might be something that can be done again for this issue. And while that only counts for Sun, we can fairly easily ask if other authors of CDDLed stuff agree with Sun’s interpretation for their software, and only worry about possible problems if it turns out they don’t.

That’s pretty much where we ended up, I think, with Simon suggesting that we work out with Tom exactly what needs to be clarified, and then pass it back up to Simon to be signed off on. And hopefully that’s what’ll happen over the next little while. :)

(The somewhat more interesting question of whether we can combine CDDL’ed OpenSolaris (in particular Solaris libc) with GPLed userspace tools such as dpkg to produce a Debian OpenSolaris distro is still up in the air — Sam’s currently waiting on a response from the FSF legal team on one of the questions that concerns Debian, aiui; but they’re currently flat out working on GPLv3 as you might imagine. Oh well — in the meantime, there’s always Nexenta!)

Dunc-tank Report Ideas

Okay, so with the “Bits from the DPL” talk that Sam and I shared now out of the way, that hopefully means finishing off the whole dunc-tank thing won’t imply any more conflicts of interest, potential or otherwise. And since we’ve now had the post-etch release team report, there shouldn’t be much to wait around for.

I’d been kinda hoping to be able to come to debconf with a pre-prepared report of similar quality to debconf6’s final report (or debconf5’s, for that matter); but while I was going back through the discussion threads, it just got harder and harder to find a way of writing the report that didn’t seem like it’d just inspire the same arguments again. I think that’s avoidable: I know some of the people who were against the way dunc-tank ended up have indicated there are other approaches they might support, and it ought to be possible to summarise the dunc-tank experience in a way that gives us a better idea of what to do in the future, rather than replaying the disagreements we’ve already had. Anyway, my best guess is to try crafting the report via blogging, or put it on a wiki, or something.

Anyway, there are three things that I thought we could usefully have in a report that wouldn’t devolve into the same old arguments (after all, if we at least have new arguments, that’s some sort of progress!).

First of all is trying to get a real understanding of what happened with etch. Getting a release out is complicated, with lots of potential blockers beyond just “release management” work: d-i, kernel, toolchain, CDs, builds, bug tracking, whatever. Presumably dunc-tank only ever had a chance of removing release management as a blocker, and since we can’t redo the etch release while changing parameters like dunc-tank or Vancouver, having as clear an understanding as possible of which blockers happened when, and what other influences might have been important to the release process, seems like a good way of working out which influences were and weren’t important, so we can have a better idea what to do in future. I’m figuring a reasonable structure might be something like:

  • Release blockers between July 2006 and April 2007
  • Analysis of bug discovery/resolution rates between July 2006 and April 2007
  • Comparison of major statistics for etch with previous releases (freeze length, etc)
  • Comparison of etch with Ubuntu’s edgy release (October 2006, roughly the time dunc-tank started), and feisty release (April 2007, roughly the same time as etch’s release)
  • Summary of other release timeliness projects: Vancouver proposal, release assistants, BTS version tracking and changes to RC bug monitoring, transition monitoring through the entire release cycle, binNMU changes, improvements in experimental support, BSP marathons
  • Summary of release quality projects: DFSG-free content, LSB support, security support for testing, QA meetings, piuparts runs, whole archive rebuild testing
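For the bug discovery/resolution analysis, the kind of calculation involved is straightforward once the data is in hand. Here’s a minimal sketch, assuming a hypothetical list of bug open/close dates (actually extracting these from the Debian BTS is the real work, which this doesn’t attempt):

```python
from collections import Counter
from datetime import date

# Hypothetical bug records as (opened, closed) pairs; closed is None for
# bugs still open. Real data would come from a BTS export.
bugs = [
    (date(2006, 7, 3), date(2006, 8, 10)),
    (date(2006, 9, 15), date(2007, 1, 20)),
    (date(2006, 9, 28), None),  # still open
    (date(2007, 2, 2), date(2007, 2, 14)),
]

def monthly_rates(bugs):
    """Count bugs discovered and resolved per (year, month)."""
    discovered, resolved = Counter(), Counter()
    for opened, closed in bugs:
        discovered[(opened.year, opened.month)] += 1
        if closed is not None:
            resolved[(closed.year, closed.month)] += 1
    return discovered, resolved

discovered, resolved = monthly_rates(bugs)
```

Plotting the two series against each other over the July 2006 to April 2007 window, with the release blockers overlaid, would be one way of seeing whether any change in the rates lines up with dunc-tank or with other events.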

The theory is that the above should provide the information to make a good judgement of what (if any) effect dunc-tank (or the other projects) had on the release, without implying a preconceived conclusion of whether dunc-tank’s “good” or “bad” or whatever.

As well as looking at the release stuff in some depth, it seems pretty sensible to report on how dunc-tank worked financially. Anyone doing a similar sort of “let’s raise funds, then spend them on free software” project is going to have similar issues in getting publicity, collecting funds, and actually spending them, no matter how they end up deciding the harder questions of who or what to fund. There’s not a lot to say there, and I don’t think there’s much that’s controversial either, but there are a few problems, and approaches to solving them, that are worth reporting on.

I think that’s about as far as you can get by cleverly avoiding the controversy, though. And presuming we rule out the idea of picking a position on some of the controversial questions and advocating for it (“dunc-tank was good, paying release managers was awesome, we should do it again for everyone!”), the best idea I’ve had so far is to just try summarising some of the controversial questions that remain open, which Debian, or dunc-tank, or whoever can then address however we like: by GRs, or further debate, or more experiments, or by leaving it to other people to do for us, or whatever. So laying out the questions, some of the arguments for various answers, and a summary of what happened with dunc-tank seems like a plausible way to go. The questions I’ve got:

  • Is paid work compatible with volunteer work?
    • How the release managers and assistants reacted to doing volunteer work while someone was paid to do similar work
    • Other developers — some donated; others resigned or reduced work
    • Dunc-bank and QA initiatives
  • How much distance should there be between Debian and whoever’s funding development?
    • Involvement of DPL and high-profile developers
    • Involvement of SPI
    • Statements from press reports about the relationship
    • Votes to support/endorse/recommend against
    • Donors being confused as to where to donate funds
  • How should the work or developers to be funded be chosen?
    • Potential for favouritism
    • Potential for misalignment with Debian’s goals
    • Choosing based on least controversy or most benefit
  • How should paid “volunteer” work be priced?
    • Covering living expenses?
    • Comparable rate to similar proprietary/commercial development?
    • “Market” rates?
    • Full-time versus part-time work
    • Contracts versus prizes or bounties or rewards

No promises as to whether they’re the best questions, the best sub-points, or that we’re going to be able to come to an answer on any of them. :)

Anyway, that’s my theory so far; now I need to grab some lunch before the next lot of talks — feel free to mail me, or talk to me in person, or blog responses, or whatever…