Promoting Free Markets

Perhaps the most beautiful facet of capitalism is the way even its nominal opponents are forced into enhancing its effectiveness. Following in the footsteps of Mike Moore’s schlockumentary-producing corporate empire, it’s my pleasure to introduce the first issue of Blender’s Consumer Reports: Centrecom sucks. (And don’t forget to read issue two, Centrecom sucks: I really mean it and watch out for the forthcoming follow-ups, with the working titles: You know what sucks? Centrecom, and Nothing sucks like an Electrolux? What about Centrecom?)

YADFW

Unsurprisingly, Ubuntu’s release has generated some discussion in Debian. Odds on it won’t create much else. Anyway, Scott (a Canonical employee and dpkg hacker, among other things) writes:

Release, release, my kingdom for a release!

[…]

I think he’s missed something major there, and that something major is the reason I think Debian finds releasing difficult.

Testing.

Clearly, this requires a response.

Continuing on…

His point would be valid if Debian Developers ran the distribution that gets released, but they don’t. Most Debian Developers run unstable but the distribution that gets released is testing. This means that it doesn’t really matter to them whether there is a release or not, it doesn’t affect the system they run on their machines.

[…]

Testing was created to provide an almost-releasable version of Debian at all times, instead it’s separated most developers from the release so they’ve lost interest in it.

First, let’s deal with the obvious mistakes. It’s easy to claim that no one’s interested in a release, but it’s far from true — plenty of people want a release as soon as possible; the issue that’s hard is working out what should go in the release, and fixing the remaining problems. No doubt you can find a decent number of folks who don’t care about releasing, but it’s far easier to find people who do want stable updated. Looking at the Canonical payroll would probably be a good start, eg. That Scott himself is willing to offer an entire kingdom for a release should have been some indication that the desire, at least, is out there.

In a sense, that isn’t really an avoidable issue anyway. Scott apparently joined the project in late 2001, so he might have missed the way we used to do things when releasing, which was to rename the “unstable” distribution to “frozen”, create a new “unstable”, and then try to make “frozen” releasable. That had the same problem Scott was complaining about — that people might just keep running unstable, and ignore the release issues that need fixing. But hey, that’s the way things are: Debian’s a volunteer project, which means if people want to ignore particular problems then that’s their right. Sure, you can try to force the situation, but you don’t have any leverage: the only thing you can do is prevent people from helping you in ways other than the one you particularly want.

And let’s have a quick look at the question of just how hard testing does make it to release. Of the 2225 packages in warty/main, 989 are “ubuntu”-specific updates, 625 match the version in testing, a further 544 are older than the version currently in testing, and 9 are from unstable (5 of which have had further updates). Of the remaining 58 packages, 52 are Ubuntu-specific packages, and the others are apt (whose warty version has an NMU number, rather than an Ubuntu NMU number) and gnome-terminal (which seems to have been pulled from experimental).

(Side note: of the 989 updated packages, only 294 are still “newer” than Debian; 613 of them have a higher version in testing than warty, and 88 have a higher version in unstable. That’s not terribly meaningful, though, since warty specific patches haven’t necessarily been included in the updated Debian packages; and conversely some of the Debian changes may have been backported into the Warty packages.)

So, it seems roughly reasonable to say that testing does about 80% (625+544+613) of the work for you, and of the remaining 20% of the work, Debian only does around 4% (9+88) anyway. Of course, that’s only counting the easy bit: Debian includes over 14,000 packages these days.

Once we start looking at the 11556 packages in Warty’s universe, we find another 679 “ubuntu” updates, 5889 packages matching the testing version, a further 4193 that are older than the version in testing, an additional 280 packages that have been dropped from testing, and 515 packages pulled from unstable (247 of which match the current unstable version). In which case, testing appears to get you about 90% of the way there, while Debian could conceivably only get you an additional 4.5%.

The other interesting question is how much better (or worse) testing does than unstable as a basis for constructing warty. For warty/main, there are 51 packages where warty matches testing, but not unstable, versus 5 for unstable, but not testing; for universe the numbers are 415 and 247 respectively. These are pretty much dwarfed by the number of packages in warty that are older than the corresponding packages in testing though, and I’m not sure how you could meaningfully factor that in. It’s also true that it’s harder to “downgrade” in Debian than it is to upgrade; so starting off your distro with a buggy “2.0” and going back to “1.5” is usually harder than starting off with a boring “1.0” and moving up to “1.5”. I’m not sure how you’d factor that in, either.
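
For anyone who wants to poke at these numbers themselves, the tallies above just come from comparing Packages files. Here’s a rough sketch of the sort of thing involved, not the script I actually ran: the filenames are placeholders, and dpkg does the version comparisons. Point it at unstable’s Packages file instead to get the unstable comparisons.

    #!/usr/bin/env python3
    # Rough sketch: tally warty/main against testing by classifying each
    # warty version, using dpkg's own version comparison.  Filenames are
    # placeholders; fetch the Packages files however you like.
    import subprocess

    def parse_packages(path):
        """Return a {package: version} dict from a Debian Packages file."""
        pkgs, name = {}, None
        with open(path) as f:
            for line in f:
                if line.startswith("Package:"):
                    name = line.split(None, 1)[1].strip()
                elif line.startswith("Version:") and name:
                    pkgs[name] = line.split(None, 1)[1].strip()
        return pkgs

    def older(a, b):
        """True if version a is strictly older than version b."""
        return subprocess.call(["dpkg", "--compare-versions", a, "lt", b]) == 0

    warty = parse_packages("warty_main_Packages")
    testing = parse_packages("testing_main_Packages")

    counts = {"ubuntu-specific": 0, "match testing": 0, "older than testing": 0,
              "newer than testing": 0, "not in testing": 0}
    for pkg, ver in warty.items():
        if "ubuntu" in ver:
            counts["ubuntu-specific"] += 1
        elif pkg not in testing:
            counts["not in testing"] += 1
        elif ver == testing[pkg]:
            counts["match testing"] += 1
        elif older(ver, testing[pkg]):
            counts["older than testing"] += 1
        else:
            counts["newer than testing"] += 1

    for label, n in counts.items():
        print(label, n)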

It’s interesting that testing gets 80% and 90% of the way there in spite of the fact that testing has 11 architectures, and warty only has three. It’s possible Canonical aren’t taking full advantage of this yet; that is, pulling packages from unstable when they’d be acceptable for testing apart from architecture-specific problems. It’s also possible that the gains got mostly wiped out during the four-month warty freeze (June 28th to October 20th).

Now, you can blame testing for Debian’s problems all you like, but personally, I don’t buy it.

My opinion on what’s behind Debian’s release problems is as mentioned above: we’re hobbled by the difficulty of deciding what we want to release, and the task of getting people to actually make the improvements we choose to insist on.

Major examples of the former in the past year have been the non-free and amd64 issues, but there are dozens of less obvious questions that are raised and resolved every week beyond those.

Solutions to the latter come in three forms: find other people, provide better incentives for doing the work, or provide fewer disincentives for doing the work. There’s also the possibility of providing disincentives for not doing the work, but as above, I don’t think that tends to be particularly effective. You can compare the Ubuntu environment with Debian on these issues: there are few new people working on it compared to Debian (notably jdub), there are plenty of additional rewards for doing the work (money and kudos), and a reasonable chunk of disincentives have been actively removed (no non-Debian day job to worry about, flamewar-free lists, effective decision making, and a single boss to worry about pleasing, rather than a thousand developers ready to sign a GR proposal, or whine on their blogs).

In my opinion, Debian’s particularly rife with disincentives to contributing. As a trivial example, discussing the release process always results in people coming out of the woodwork to complain about testing. I can assure you that “That thing you spent a couple of years thinking about and building? Yeah, it fucking sucks.” isn’t high on my personal list of interesting conversations. Sure, testing’s far from perfect; but the biggest imperfection has always been the lack of ongoing security support, and just try getting past the resistance to that. And what’s the point of devoting yourself to solving minor issues, while the big ones just sit there, year after year?

Anyway, an equally good example for Scott’s point would be “woody plus backports” — its existence, too, stops folks from bothering to run the forthcoming release and resolving the bugs therein. Is it a huge problem for releasing that must be stopped as well?

For reference, I don’t run unstable on any machines; I have a machine running testing, a machine running warty (dist-upgraded from testing), and a few machines running stable with a handful of miscellaneous backports. And it seems like good odds that my new iBook will end up running Mac OS X most of the time when it arrives, and that I’ll need to buy a copy of Microsoft’s VirtualPC if I want to run Linux on it, without giving up wireless connectivity. Yay.

UPDATE 2004/10/23:

Actually, maybe it is possible to get some analysis of whether testing or unstable would make a better base for warty. Of the packages in warty/universe, 513 are older than what’s in testing while there’s a newer version in unstable, and 415 match the version in testing, though there’s a newer version in unstable. Conversely, there are 262 packages newer than what’s in testing (though older than unstable), and 247 packages that are newer than what’s in testing and match what’s in unstable. So that’s 927 packages where warty and testing agree on avoiding unstable, and 509 packages where testing’s being more reticent than warty about getting newer software. Which is a 9:5 split in favour of working from testing rather than just using unstable (or 65% to 35%).

As above, the real problem is that getting the 65% of packages reverted is a harder job than updating the 35% of packages. If you’re clever (eg, if you know exactly which packages make up the 65% from the very start), that’s a trivial problem. If you’re not (eg, you only realise your updated gcc has made your distro incompatible with everyone else, and you need to rebuild everything), it can become ludicrously hard. It’s hard to say where between those two extremes Ubuntu would lie.

(Note that the old Debian release process didn’t do reversions — we fixed the bugs, no matter how long it took. Reverting a package will tend to cause apt or dselect not to update it, which screws your users. Reverting via an epoch is irritating, and can break dependencies, especially when shlibs are involved ((>= 1.03) matches 1:1.02, which it shouldn’t if 1:1.02 was meant to be the same as 1.02). Having to fix bugs, rather than just switch to a non-buggy version of the package makes things even harder, of course.)

Unfortunately, all that could be just noise compared to the 3677 packages that are the same in testing and unstable, but for which there’s an older version in warty. That’s not counting ubuntu specific updates (assuming their versioning scheme is consistent, anyway), so should be purely attributable to the “freeze, hack, release” model they’ve adopted.

OTOH, that’s around a third of warty/universe that testing’s arguably done a “better” job of managing than a bunch of paid hackers. Or, at around 544 of 2225 packages, around 24% of warty/main. (Versus 44% Ubuntu-specific, 28% handled equally well, 3% handled worse, and 1% rounded into non-existence.)

Though I think that’s still not really very enlightening. It might help to know what proportion of packages were pulled from unstable “early” during warty’s development, but which testing later included anyway. Analysing which of the ubuntu-specific changes have made it into testing, unstable and experimental would also be interesting.

Also, I should note that as much as I’d like it to be otherwise, I absolutely don’t recommend the use of testing for multiuser machines, or machines that offer any services over a network where you don’t have complete trust in everyone who has access to that network. You can do it, but you do so at your own risk, in the face of published security problems that not infrequently remain unfixed for extended periods. It’s possible to fix that situation, but the comments above about disincentives are particularly applicable here.

Darcs and Repositories

I think it’s reasonable to consider two sorts of “repository” when dealing with darcs — public repositories that are used to reflect a particular line of development, and private working directories that are used to actually do development. Unfortunately there’s some overlap here, pretty much taking the form of “copying your working directory around”.

The difference between the two main classes is nice and clear: for working directories you want as much control over what happens as you can get; and for public repositories you want consistency and accessibility. Which means working directories need to be local, public repositories can be remote; public repositories need to be consistent and append-only, while working directories can be “unpulled”, “unrecorded” and “reverted” as often as you like.

Now, darcs already handles working directories fine; but it’s arguably a bit too flexible as far as public repositories are concerned. We’ll just ignore the “in between” case and, presuming that one or the other extreme will be good enough in practice, work on adding some better support for public repositories.

(As an aside, I’m writing this entry concurrently with designing the actual code, kind-of a weird amalgam of blogging and literate programming. I wonder how it’ll work out.)

For public repositories we fundamentally want to be able to store patches easily: so we need to be able to do darcs push, and we want it to be reasonably space efficient. We also want to be able to get patches easily; hence we need darcs pull support, and we want it web accessible. We want to be able to deal with different projects, with different branches of a project, and different versions within a branch.

In order to have something that works with darcs directly, we only need to make available the patches for each tree we’ve got checked in, and the “inventory”, which tells us which patches the tree incorporates, and in what order they should be applied. Different trees in the same project will tend to have the same patches, with one caveat: when they’re applied in different orders minor details will differ, particularly line numbers. Patches have a unique name, based on the date they were created, the log message and some other things, which doesn’t change when the patch is merged into a new tree, even if the actual contents of the patch are munged somewhat.

The way we’ll do this, then, is to support many branches per project, and share patches between branches. When a single patch takes a different form between two branches, we’ll represent that as the original patch, and a diff from the original to the variant — so rather than duplicating the entire patch, we’ll only note the line number where it’s to be applied, eg. That should be capable and efficient enough that I needn’t worry about wasting space gratuitously. We’ll just store each branch’s inventory as a separate file.
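
To make that concrete, here’s a rough sketch of the storage idea, with a made-up layout (one canonical file per patch, plus a per-branch delta when a branch’s copy differs); it’s illustrative only, not what the real code will necessarily look like:

    # Illustrative only: one canonical copy per patch, plus a small
    # per-branch delta where a branch's copy of the same patch differs.
    import difflib, os

    def store_patch(repo, branch, name, contents):
        canon = os.path.join(repo, "patches", name)
        if not os.path.exists(canon):
            # First time we've seen this patch: store it verbatim.
            with open(canon, "w") as f:
                f.write(contents)
            return
        with open(canon) as f:
            canonical = f.read()
        if contents != canonical:
            # Same named patch, merged into a different tree: store only
            # the (usually tiny) difference from the canonical copy.
            delta = difflib.unified_diff(canonical.splitlines(True),
                                         contents.splitlines(True),
                                         fromfile=name, tofile=name)
            with open(canon + "." + branch, "w") as f:
                f.writelines(delta)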

How about naming? I think for the moment it’s best to say “freeform filename”, so say anything matching [a-zA-Z0-9+_.,-]+, and adopt the arch style of using foo--bar to indicate “bar” is a sub-branch of “foo”. Then I can say mainline--1.0 and have separate branches per major version, or debian--1.0.1 to record Debian changes, or debian-nmu--1.0.1-3 to record the NMUs to the 1.0.1-3 Debian package. If using slashes or something turns out to be convenient, it can be hacked in later. For now, KISS.
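
A trivial sketch of that convention, just to pin it down (the function name is mine, not part of darcs or the repo scripts):

    import re

    BRANCH_RE = re.compile(r"^[a-zA-Z0-9+_.,-]+$")

    def branch_lineage(name):
        """Split an arch-style name like "debian-nmu--1.0.1-3" into
        ["debian-nmu", "1.0.1-3"], i.e. a sub-branch under its parent."""
        if not BRANCH_RE.match(name):
            raise ValueError("bad branch name: %r" % name)
        return name.split("--")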

Finally, access. We want, perhaps, three forms of access: management, commits, and retrievals. In reverse order, retrievals we want to do over anonymous http, so we’ll need a CGI script to adapt our storage into precisely what darcs expects; for commits we’ll need to write some scripts at which to point DARCS_SSH and DARCS_SCP so we can get control over the process, and we’ll need to write a darcs-repo script to sit on the server that actually manages the repository and tries to avoid storing too much data.

In order to have the ssh hooks called, we need our repository name to be of the form [a-z]+:.*; and it seems like the most forwards-compatible thing to do is make it be darcs-repo:server/project/branch. Using URI syntax and putting a double-slash before the server name would unfortunately make darcs try to use curl instead, and wouldn’t get us anywhere. But hey, this is a “near enough” project. We’re also not going to worry about how you’d specify a username — .ssh/config will do for the time being.

The darcs-repo script needs two modes — a get mode that can give us a branch inventory or a patch from a branch, and an apply mode that will actually commit to a branch. The former’s simple — it’s just a matter of unmunging our stored patches. The latter’s trickier: we need to be able to parse darcs’ “apply” format, and we need to make sure that the patch actually applies to our repository. Fortunately, we can do this by just comparing the presented context to what we’ve actually got in the inventory — if they’re not a perfect match, rather than performing a merge like darcs would, we can just error out; that’ll even get passed back to the user, so it’s all good.
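
Here’s a sketch of that check; note that parse_bundle below uses a made-up stand-in format, since parsing darcs’ real apply format is the fiddly bit and isn’t attempted here:

    # Sketch of the context check for "darcs-repo apply": refuse anything
    # that isn't a perfect match against the branch inventory, instead of
    # merging.
    import sys

    def parse_bundle(text):
        # Stand-in format, purely illustrative: context patch names, a
        # blank line, then the new patches themselves.
        context, _, patches = text.partition("\n\n")
        return context.splitlines(), patches

    def apply_bundle(bundle_text, inventory_path):
        context, new_patches = parse_bundle(bundle_text)
        with open(inventory_path) as f:
            inventory = [line.strip() for line in f if line.strip()]
        if context != inventory:
            # Not a perfect match: error out and let darcs report it,
            # rather than attempting a merge the way darcs itself would.
            sys.exit("context mismatch: patch does not apply to this branch")
        return new_patches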

The CGI script can then basically just call darcs-repo get based on the URL it’s given. Easy.
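
Something along these lines, say; the path components and content-type are whatever you decide darcs-repo get should accept, not anything darcs itself mandates:

    #!/usr/bin/env python3
    # Minimal sketch of the retrieval CGI: hand the path components
    # straight to "darcs-repo get" and stream whatever it produces back.
    import os, subprocess, sys

    parts = os.environ.get("PATH_INFO", "").strip("/").split("/")
    sys.stdout.write("Content-Type: application/octet-stream\r\n\r\n")
    sys.stdout.flush()
    subprocess.call(["darcs-repo", "get"] + parts, stdout=sys.stdout)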

darcs-ssh and darcs-scp are also easy — they just need to catch invocations of ssh darcs-repo .. and convert them to ssh host darcs-repo apply project branch, and convert invocations of scp darcs-repo:host/project/branch dest to ssh host darcs-repo get project branch >dest.
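
As a sketch of the scp side (not the actual script linked below, and glossing over exactly which arguments darcs passes):

    #!/usr/bin/env python3
    # Sketch of a DARCS_SCP wrapper: rewrite
    #   scp darcs-repo:host/project/branch[/file] dest
    # into
    #   ssh host darcs-repo get project branch [file] > dest
    import subprocess, sys

    def main(argv):
        src, dest = argv[-2], argv[-1]
        if not src.startswith("darcs-repo:"):
            sys.exit("not a darcs-repo: source")
        host, *rest = src[len("darcs-repo:"):].split("/")
        with open(dest, "wb") as out:
            return subprocess.call(["ssh", host, "darcs-repo", "get"] + rest,
                                   stdout=out)

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1:]))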

So, all that said, here’s an implementation of the above.

Bugs? Yes, there are some — darcs’ patch format isn’t parsed properly, so if you find yourself setting a preference to be “}”, you might have problems. --apply-as isn’t supported. Error messages also aren’t very nice. The CGI might also be a bit slow. Tags or checkpoints or contexts could break things.

Missing features? Creating new projects and branches has to be done manually (with a mkdir and a touch inventory-branch respectively). It’d probably also be nice to have tarballs automatically made from checked-in code. Otherwise, it seems like a good first pass at the idea.

Neat trick learnt? If you want to apply a patch without actually writing the new file to the filesystem anywhere, one way is to use diff --ed style patches, and feed them to red - origfile, followed by the command 1,$p to print the entire resulting file to stdout.
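
Wrapped up in a few lines, that looks something like the following; I’ve appended an explicit Q so ed quits without being asked to save anything:

    # Apply a "diff --ed" style patch and capture the result on stdout,
    # without the patched file ever hitting the filesystem.  The trailing Q
    # makes ed quit unconditionally instead of prompting to save.
    import subprocess

    def apply_ed_patch(ed_patch, origfile):
        script = ed_patch + b"1,$p\nQ\n"
        return subprocess.run(["red", "-", origfile], input=script,
                              stdout=subprocess.PIPE, check=True).stdout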

Carnival of the Capitalists

It’s a cracker Carnival this week, with a Catallarchy post on the link between competing replicators and gay amateur gang rape porn, a Truck and Barter post on how government makes us sick, and a Layman’s Logic post on home made speed cameras, for sale online.

That Liberal Media

You know, that double-entendre just keeps getting better. Anyway, a couple of days before the election, lefty blogger Robert Corr noted a Crikey mailout claiming the Age was forced to take a pro-Howard line in its editorial by their Editor-in-chief, supposedly on the basis that “backing Latham wasn’t in the commercial interests of the company.” Clearly then, the media aren’t liberal at all, right?

Many staffers at 250 Spencer Street are disgusted, and rightly so. Three years ago, Fairfax took the line that Howard was a liar and a xenophobe who was whipping the public into a fear frenzy over national security. Little has changed since then, (except our Editor-in-Chief), who – it is said – was alone in his decision to support the Coalition this time around.

Afghanistan Elections Marred By Peace

From Indian news site, NDTV: Boycott call dropped in Afghanistan

So it wasn’t bomb threats by the Taliban, but allegations of mass irregularities that threatened to derail the three-year march towards democracy.

Uh, moving your worries from “bomb threats” to “mass irregularities” is marching towards democracy. Is it really that unreasonable to expect the first democratic elections in Afghanistan to be portrayed as primarily a positive development? This is a country whose history for the whole of my life has consisted entirely of bloody coups, mass arrests, tortures, mass killings, Soviet invasions, secret police, human rights violations, puppet governments, civil war with 40,000 dead, guerilla warfare, the establishment of an Islamic state by a fundamentalist militia, the capital being reduced to rubble, oppression of women, more human rights violations, mass graves, thousands of civilians massacred, more torture and murder of civilians, destruction of historical statues and sites, and, oh, don’t forget UN sanctions.

Look, keep your goddamn bias — Latham and Kerry are fine chaps, vote for them, and support them all you like — but if you’re going to do a story entitled, say, Counting begins in Afghan election, how about showing a little courtesy by spending more than a sentence on the counting, and not just using it as an excuse to promote the posturing of the also-ran candidates?

At least FOX News has a story on the construction in Afghanistan without trying to make the improvements seem like a bad thing. On the other hand, their story on the elections themselves takes the same negative line as everywhere else, Associated Press reprint that it is. And what an utterly ridiculous way to conclude: Islamic poet Abdul Latif Padran, another minor candidate, said: “Today was a very black day. Today was the occupation of Afghanistan by America through elections.”

The Professor summarises the good news.

UPDATE 2004/10/12:

Gag. Via Tech Central Station:

KABUL – It was a regrettably typical comment from an American reporter in this part of the world. “At least it’s news,” he said of the Afghan election scuffle over the weekend. “Otherwise, this is just a success story.”

Bioweapons Labs, redux

Not long after I linked to the Yahoo story about confirmed bioweapons labs in Iraq last year, it disappeared. Let’s see if the same thing happens to this World Net Daily story that even includes pictures. These are almost certainly the trucks that the Duelfer report is talking about when it says:

[The Iraq Survey Group] thoroughly examined two trailers captured in 2003, suspected of being mobile [bioweapons] agent production units, and investigated the associated evidence. ISG judges that its Iraqi makers almost certainly designed and built the equipment exclusively for the generation of hydrogen. It is impractical to use the equipment for the production and weaponization of [bioweapons] agent. ISG judges that it cannot therefore be part of any [bioweapons] program.

The short summary of the Duelfer report is actually quite readable, and reasonably brief. It’s also a much more thorough and two-sided summary of the background than, eg, the Sydney Morning Herald’s take:

A report published last week by the CIA’s chief weapons investigator in Iraq, Charles Duelfer, concluded that Saddam Hussein destroyed his stockpiles of chemical and biological weapons in the early 1990s and never tried to rebuild them. But a little-noticed section of the 960-page report warns that the danger of a “devastating” attack with unconventional weapons has grown since the US-led invasion and occupation of Iraq last year.

Bikini Babes for Bush

David writes, under the heading Supermodels for Kerry:

Sure, I’m superficial and shallow — but you know you’re tempted too.

Don’t get me wrong, I’d accept a bumper sticker from Rebecca Romijn too, but it’s not like being superficial and shallow is an either/or choice compared with being a right-wing death beast. As Australian-American Gabrielle Reilly says: Senator John Kerry, Don’t Burn Someone Else’s Limited Edition Bra!

Election Results

First, the scores: Betting markets: +1; Pundits: 0; Polls: -1.

While geeking out on partial tallies, Michael noticed the extremely high informal count in his electorate, and noted that “we’re none too bright here in Rankin”. If you consider informal voting a measure of dumbness, it turns out Rankin’s the dumbest electorate in Queensland (at least as of close of counting on election night). Not the dumbest in the country though — there are eleven with an even higher informal count, all in NSW. Ha! Unfortunately for Queensland’s claims to be the smart state, Victoria has the top four electorates as far as voting formally goes.

Apart from inter-state rivalry, there’s another correlation to be found too. See if you can spot it:

Reid, 11.41% informal: ALP with 62.3%
Greenway, 11.18% informal: ALP with 50.9%
Blaxland, 10.45% informal: ALP with 62.8%
Chifley, 10.32% informal: ALP with 62.9%
Prospect, 9.15% informal: ALP with 56.7%

Fowler, 9.13% informal: ALP with 71.5%
Watson, 9.04% informal: ALP with 65.6%
Werriwa, 8.07% informal: ALP with 59.4% (Mark Latham’s seat)
Parramatta, 8.02% informal: ALP with 50.4%
Kingsford Smith, 7.94% informal: ALP with 58.7% (Peter Garrett’s seat)

Lindsay, 7.68% informal: LIB with 54.0%
Rankin, 7.60% informal: ALP with 52.4%
Dobell, 7.48% informal: LIB with 55.6%
Banks, 7.33% informal: ALP with 51.2%
Port Adelaide, 7.23% informal: ALP with 62.4%

Oxley, 7.21% informal: ALP with 59.4%
Barton, 7.19% informal: ALP with 57.3%
Macarthur, 6.92% informal: LIB with 59.6%
Lowe, 6.89% informal: ALP with 53.4%
Fadden, 6.87% informal: LIB with 65.5%

Australia’s dumbest electorates overwhelmingly return Labor representatives. Coincidence?

(For reference, the 16 ALP seats above represent a little over a quarter of the seats the ALP are looking at winning, the Liberal seats make up about 5% of the coalition’s haul. For a little perspective, the PM’s seat of Bennelong had 6.20% informal, and went with 57.71%; Kim Beazley’s seat of Brand had 5.49% informal and went with 60.05%. Bendigo’s got the best informal rate so far on 2.83%, and is returning Steve Gibbons, the sitting Labor member.)

October 9th Excitement

As the election day dawns, you can just feel the excitement in the air:

They seem genuinely excited. Almost everyone does. In the markets, people are actually talking about the vote. Some are driving around with pictures of candidates in their car windows. Posters of every hue cover the walls of central Kabul.

Meanwhile in Australia, it’s apathy central. You can only get so excited about keeping interest rates down, apparently.

On election day, stay in bed and vote [1] indolence!

(Staying in bed on election day may violate electoral laws; Indolence Log offers this advice with no faith whatsoever, and accepts no responsibility or liability for any actions taken in response to this advice or anything else anyone may or may not do, now or at any time in the past or future, in reality, fiction, fantasy, romance, thriller, or any other section of your local book or video store. Sections of music stores, hardware stores and supermarkets are excluded too.)

Oil Prices

When I reinvented this segment of my blog, I said it was meant to cover economics as well as politics, but I haven’t really been following through on that too well. Since someone thinks I may perhaps have a speck in my eye labelled “not blogging enough”, I figured I’d mix two metaphors with one stone.

One of the pre-requisites for being an economic right-winger is having faith in the market, and if you don’t have it, you probably don’t know what such a thing even means. Fortunately, an example showed up on the ABC news today. It went something like this:

Fuel price prompts power rethink in outback

The owner of a remote fuel station in the Gulf country of the Northern Territory says people in remote areas should consider installing alternative sources of power to combat a hike in oil prices.

[…]

[Paul Zlozkowski] says he has replaced a diesel generator with a solar powered system on his cattle station and has significantly reduced fuel consumption.

“Well here at Murranji it’s about a litre a day I’m using now, even with a small plant here I would have been using at least a litre an hour so that’s an incredible saving.”

The principle’s pretty simple: a decrease in supply, or an increase in demand, causes the price of something to go up, and people start looking seriously at alternatives. Once it goes up enough to make the alternatives cheaper, they switch to them.

In this case, the price went from probably around $19 a day (24 litres per day at 80c a litre), to around $38 (at $1.60 per litre). Dropping down to one litre a day at those prices is a saving of almost $37 per day, or around $13,500 per year. At the old prices, it’d be a saving of only $6700. A $20,000 solar generator, with running costs of $10 a day, that achieved those results would pay for itself in two years and sixteen days at fuel prices of $1.60 a litre; it’d take six and a half years at 80c per litre. Those numbers can get worse too, if you factor in the risk that the solar generator might need to be replaced within a few years, or that the running costs might blow out, or that it won’t actually be as effective as you expect.

(For reference, solazone seems to have an $18,000 system that’ll provide 100% of the power in a 3-4 bedroom “energy-efficient” home. Add back the $4,000 solar power rebate for $22,000, and subtract GST for the $20,000 above. The $10/day of maintenance costs were pulled out of my butt. YMMV.)
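
If you want to fiddle with the assumptions yourself, the arithmetic’s easy enough to spell out:

    # The back-of-the-envelope figures above, spelled out: 24 L/day before,
    # 1 L/day after, a $20,000 system, and $10/day assumed running costs.
    def payback_years(fuel_price, capital=20000.0, running=10.0,
                      litres_before=24.0, litres_after=1.0):
        daily_saving = (litres_before - litres_after) * fuel_price - running
        return capital / daily_saving / 365

    print(round(payback_years(1.60), 2))   # about 2.0 years at $1.60/L
    print(round(payback_years(0.80), 2))   # about 6.5 years at 80c/L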

All of which is to say that price fluctuations are an effective way of encouraging people to use alternative technologies. Everyone knows this — that’s why there are hefty tariffs on cigarettes: to encourage people to relax by other methods.

The real trick is in working out what people should prefer, whatever that means. When should you prefer solar over oil, or vice-versa? Do they both produce enough energy? How much space and other resources do they take up? Can the energy be stored easily and without too much waste? How sustainable is the resource? How about pollution? Does it clutter up the scenery like a windfarm?

But magically, most of those things are already factored into the price. If it takes more equipment to produce the energy you need, it’ll cost more in parts and labour. If the energy isn’t naturally storable, you’ll have to generate more of it (more expensive) and have expensive batteries to store it instead of a 20 gallon tank (also more expensive). If the resource isn’t sustainable, people will start buying it and storing it so that they can charge a premium and make themselves even richer when you can’t pump it from the ground anymore, thus increasing demand, and again increasing the cost. Even the scenery gets counted, as people are generally, and increasingly, willing to pay a premium for aesthetics. (Pollution is trickier to manage, though it can be pretty easily forced into the calculations by adding a rebate or a levy, as has been done with solar power in .au)

As a big believer in decentralisation and emergent order, I really like that sort of system — having lots of people acting independently to produce valuable information without even knowing how they’re doing it is impressively cool — certainly much more so than having a bunch of folks try to tell everyone how things “should” be. And even better, markets seem to be more effective in practice as well as being cooler in theory. Whether in working out what direction a business should take, or predicting the outcome of an election, they seem remarkably effective, both in being far more willing to make confident predictions on what will happen, and in reacting quickly to changes.

A Foreign Correspondent in Baghdad

I’ve got no idea what’s going on in Iraq; but my impression is that things will suddenly appear orders of magnitude better than they do now once the US and Iraqi elections are out of the way — the former’s a problem because people are inclined to emphasise the bad news to defeat Bush, and the latter’s a problem because the terrorists causing the problems want to avoid them happening. The real question is whether “orders of magnitude better” will actually equate to something good, or something bad. For future reference, here’s what people are saying now:

[…] Iraq remains a disaster. If under Saddam it was a ‘potential’ threat, under the Americans it has been transformed to ‘imminent and active threat,’ a foreign policy failure bound to haunt the United States for decades to come. [If Iraq is an imminent threat now, by the end of January the form of that threat should become clear; eg Iraq being a hostile nation, terrorists launching attacks against America from Iraq, etc — aj]

Iraqis like to call this mess ‘the situation.’ When asked ‘how are things?’ they reply: ‘the situation is very bad.’

What they mean by situation is this: the Iraqi government doesn’t control most Iraqi cities, there are several car bombs going off each day around the country killing and injuring scores of innocent people, the country’s roads are becoming impassable and littered by hundreds of landmines and explosive devices aimed to kill American soldiers, there are assassinations, kidnappings and beheadings. The situation, basically, means a raging barbaric guerilla war. In four days, 110 people died and over 300 got injured in Baghdad alone. The numbers are so shocking that the ministry of health — which was attempting an exercise of public transparency by releasing the numbers — has now stopped disclosing them. [How many cities the Iraqi government “controls” should be clear when the elections are held — aj]

Insurgents now attack Americans 87 times a day.

[…]

America’s last hope for a quick exit? The Iraqi police and National Guard units we are spending billions of dollars to train. The cops are being murdered by the dozens every day — over 700 to date — and the insurgents are infiltrating their ranks. […] [It’s unlikely that Bush is looking for a quick exit, that’s Kerry’s schtick. A rate of “dozens every day” adds up to around 3000 by the end of January, which ought to be verifiable — aj]

As for reconstruction: firstly it’s so unsafe for foreigners to operate that almost all projects have come to a halt. After two years, of the $18 billion Congress appropriated for Iraq reconstruction only about $1 billion or so has been spent and a chunk has now been reallocated for improving security, a sign of just how bad things are going here. [The $18.4 billion was appropriated in Nov 2003, less than one year ago, not two years. Similarly, the “spent” money doesn’t seem to include the cost of contracts that have already been signed, but not yet completed or paid. Anyway, if all projects have stopped, there shouldn’t be many entries in Chrenkoff’s Good News from Iraq round ups over the next few months. — aj]

[…]

Then I went to see an Iraqi scholar this week to talk to him about elections here. He has been trying to educate the public on the importance of voting. He said, “President Bush wanted to turn Iraq into a democracy that would be an example for the Middle East. Forget about democracy, forget about being a model for the region, we have to salvage Iraq before all is lost.”

One could argue that Iraq is already lost beyond salvation. For those of us on the ground it’s hard to imagine what if anything could salvage it from its violent downward spiral. The genie of terrorism, chaos and mayhem has been unleashed onto this country as a result of American mistakes and it can’t be put back into a bottle. [So, if the downward spiral continues for a few months from something that’s already beyond salvation, by January we should see some decent riots and ideally some government buildings stormed. If things are really this bad, we should either see the US military restarting “major combat operations” before the elections, and if not we should certainly expect the majority of Iraq to be unable to hold elections. Comparing the turn out to participation rates in the November American election should be instructive: if there’s anything wrong, the turn out should be significantly less. — aj]

The Iraqi government is talking about having elections in three months while half of the country remains a ‘no go zone’-out of the hands of the government and the Americans and out of reach of journalists. In the other half, the disenchanted population is too terrified to show up at polling stations. The Sunnis have already said they’d boycott elections, leaving the stage open for polarized government of Kurds and Shiites that will not be deemed as legitimate and will most certainly lead to civil war.

[…]

Some other notes. What’s the point of having reporters in Iraq that’re scared to leave their homes? What sort of crazy person thinks visiting Iraq is risk-free, or that a good way to manage your personal risk is to visit troublespots, then stay indoors as much as possible?

And naturally, the groups that’re cooperating with Al Qaeda, kidnapping and beheading reporters, killing hundreds of police and civilians, and preventing free and fair elections have to be described as “insurgents”, even in emails. After all, what if they’re successful, and end up appointing a dictator we need to cosy up to later? Note also the purported cooperation between Baathists and Al Qaeda — this is the sort of cooperation we were told was impossible a while ago, Saddam being too secular for Bin Laden to ever have anything to do with. Unless the Baathists have undergone a conversion to fundamentalist Islam, apparently “enemy of my enemy” alliances aren’t so out of the question.

Yeesh.

What else is there? Oh yes: 87 attacks on Americans a day. Australia and Iraq have about the same population, and if Australia was occupied I could imagine being involved in some sort of violent resistance. I’d imagine it might take as many as, say, twenty people to organise an attack (a couple of people to actually do the shooting, others to provide lodgings, cover, planning, whatever), and maybe five or six days to regroup and relocate before the next attack; that works out to about 120 people to organise one attack every day, and multiplying by 87 gives around 10k people involved in armed resistance, including passive involvement. From a population of 20M, that’s one person in two thousand, or 0.05% of the population, which doesn’t seem like particularly many — I’d certainly expect more Americans or Australians to resist an armed occupation than that, but then, we’re already democracies, so whatever we’d end up with would be worse. Perhaps Iraqis aren’t that efficient, or the American response is so effective that a week isn’t long enough to regroup; but “most Iraqis are willing to give it a go, while a small group of terrorists cause as much chaos as they can” sounds like a better explanation to me.

In closing:

I heard an educated Iraqi say today that if Saddam Hussein were allowed to run for elections he would get the majority of the vote.

Every time Iraq’s had elections in recent years, Saddam Hussein did get the majority of the vote.

(BTW, aren’t most Iraqis educated? 56% of Iraqi men are literate according to the CIA, at the very least.)

Iran

The Oz and American elections are getting pretty boring. At least Iran seems to be keeping things interesting (via Jonah Goldberg, via Instapundit).

Hacking with darcs

Continuing the darcs theme, it does seem to be fairly pleasant to actually use. Having darcs record go through and prompt you for each change (which you can avoid by saying -a) makes for interesting habits — I’m finding I’m much more inclined to commit once per feature addition, and when I happen to fix a bug while implementing a feature I’m actually feeling encouraged to commit the two changes separately. For similar reasons it seems like a good match for refactoring, which encourages you to make a sequence of small, independent, and trivially correct changes as you hack.

The ability to just copy the _darcs/ directory to another code tree is pleasant — it really does make it feel like the code you’re working on and the repository you’re working against are separate, independent things which seems sensible and appropriate; and the ability to make an unpacked source tarball suddenly be version controlled, whether it’s had additional modifications or not, is definitely a feature.

On the downside, darcs and nvi don’t cooperate — apparently you have to specifically tell nvi its IO is coming from /dev/tty for it to not die. Oh well. vim, emacs and nano work; and the only reason to use an editor at all is for long log messages, which I’ve only actually wanted once so far — all my other changes have been granular enough to be properly described with a single line. Interestingly, Martin’s librsync darcs hacking seems to be similar, with only about 4% of his changes having more than just the single line description.

I haven’t bothered with patch dependencies yet, which is arguably buggy on my behalf. Not sure how much I should care about that — presumably it’ll become obvious with more use.

Hacking at darcs

After a little more looking at darcs, I think I’m willing to live with its flaws. I don’t think I mind the lack of a nice repository for long-term storage — I haven’t managed to grow to like any of the others I’ve seen (cvs, tla, subversion, aegis), anyway. Tarballs will do in the meantime, and not having to worry about a heavy-weight repository when I don’t want to is cool.

Not having support for metadata (timestamps, permissions, or ownership) does still concern me though, so I decided to have a poke at darcs’ internals to see if that can be fixed. That happens to mean I need to learn Haskell (which I’ve been meaning to do since 1997, admittedly), so maybe when Andrae continues his programming theory blogging I’ll actually be able to follow what he’s talking about. Scary.

Anyway, Haskell’s a nice language to express darcs in; pattern matching definitely pays off, and monads do seem to keep the code reasonably clear. It’s still pretty complicated: reading three thousand lines of code implementing something you don’t understand, in a language you don’t know, with an extended form of a grammar you’ve mostly forgotten anyway, doesn’t make for a walk in the park. In any case, I think I grok it enough to think a fix for the metadata issue is possible, and David Roundy (the darcs author) seems to largely agree. Cool. Going from possible to patched isn’t trivial though.

In the meantime, and given I’ve decided not to use darcs as a primary/permanent/public storage format (yay tarballs!), it seems like now’s a good time to check various things into darcs and see what happens. For regular programming it does seem like timestamps shouldn’t matter, and while not having execute bits might be annoying, I can certainly live without everything else.