Facial growth update

So I believe I mentioned I’d adopted a “live-and-let-live” approach to my chin and cheeks recently. The results so far:

beard

Definitely up to the point where it needs a bit of a trim, but apparently it’s not as easy to get a beard trimmer as it is to get razors. Who knew?

Out of shot, and helping to hold up the door frame, is a now-empty bottle of James Squire Golden Ale, which was surprisingly pleasant.

Illusive Time

An article in New Scientist from earlier in the year summarises some interesting quantum physics papers, though now that I look, I see they’re actually a little dated: one from 2006 and the other from 1994, with Carlo Rovelli listed as the last author of both, and quoted in the article. The problem both papers are trying to address is reconciling the relativistic properties of time with quantum theory: thanks to Einstein, we’ve already established that time behaves variably in accelerating reference frames, but quantum theory assumes time just works sensibly. That’s fine, and a good approximation for the places we actually use quantum theory, but we know definitively that it’s wrong.

As the article explains, getting a conceptual handle on relativity and quantum indeterminism together is hard for a variety of reasons, and the usual interpretation of quantum events as “collapsing a probability wave” makes it pretty much impossible: you’re forced to choose a temporal reference frame before you start the experiment, which means you can’t get results about time from the experiment, which was the whole point… The more recent paper addresses that by rejigging the quantum equations to reduce multi-phase experiments into a single phase, with somewhat more complicated indeterminacy. So rather than saying “at this point we know the horizontal polarisation and have collapsed the probability wave, then we randomise that again by recollapsing the probability wave of the same photon and finding the vertical polarisation”, we say “we measure a system laid out in such-n-such a manner that ends up in one state if two measurements of polarisation result in horizontally polarised and vertically polarised”. The system’s layout can then, in theory, be varied so that time goes forwards, backwards, becomes a physical dimension, or something else, without completely breaking the maths. Which is neat, because if you come up with a theory on how time and the quantum maths interact, you can feasibly plug in the numbers, get a result that you can actually test, and either verify or refute it.

The more interesting part for me is the earlier paper, which posits a framework for what time “is”, and why it looks like it does. I’m not familiar with the maths it uses so can’t check its accuracy, and it’s not really formal; it’s more a sketch of a theory that you could maybe paint over later with a real theory. But hey. The basic idea seems to be that if you take a “space” with “stuff” in it that relates to itself in a certain sort of consistent, logical way, then essentially the randomness of the system implies a way of looking at the stuff in the space that ends up making one direction look “earlier” and the other “later”, with the way things are “later” being something you can roughly predict from the way things were “earlier”.

The theory ends up being that time is an emergent property of the universe’s thermodynamics. And it then predicts that when you accelerate (by orbiting a black hole, or just shooting off into space in general) you’re changing which parts of the universe’s thermodynamics can actually affect you, and you’ll get a different view of “time”. The calculations of thermodynamic changes seem to match up with Hawking radiation and the Unruh effect, though as far as I can gather, that’s mostly because they’re the same calculations, derived from similar assumptions.
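
For concreteness, and this is me adding the standard formula rather than anything from the article: the Unruh temperature, the effective temperature an observer with proper acceleration $a$ attributes to the vacuum, is

    T = \frac{\hbar a}{2 \pi c k_B}

and Hawking’s black-hole temperature is the same expression with the horizon’s surface gravity in place of $a$, which fits the “same calculations, similar assumptions” impression above.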

Presuming the two ideas actually hold up, can be filled in and put together, the maths ends up tractable, it doesn’t contradict experimental evidence, and it’s not all a pointless tautology, it seems like it provides an interesting perspective on “time” that actually lets you make useful comments. It rules out, eg, going back in time and killing your grandparents: you can’t just end up in another universe as a result, because there’s only one “space” with “stuff” in it. If the universe was created, then the universe’s creator doesn’t exist in our “time”, but looks at the entire history of the universe as a single thing, and any changes that creator desires simply change the entire universe, past, present and future simultaneously. If the creator is interested, they really can hear every single prayer, just as a graphic artist can inspect every single pixel. But on the other hand, it’s just as easy for the creator to prevent there being a need for prayer in the first place as to address it in the future. As far as multiple universes go, it’s possible the universe is being iterated through every degree of freedom it has just to see what happens, but if so, each universe exists from big bang to heat death/big crunch in its entirety, as independent as frames in a movie.

Weird, anyway. But interesting.

Catholic guilt, and the economics thereof

Disclaimer: I’m not Catholic, so this is even more speculative than usual. It’s also probably taking “devil’s advocacy” a little too literally, but hey, faith is there to be tested, right?

So I wonder how much Catholic Guilt is an economic phenomenon, as opposed to a purely religious or moral one. Here’s the theory. The core aspect of the Catholic church is that it has a near-monopoly in a fairly large market: the service of communicating with God (indeed, the Reformation was largely about breaking that monopoly by open sourcing the underlying good). I’m not sure it’s the case, but that may in fact be a monopoly unequalled in other religions, current or historical: polytheistic religions presumably have less of a monopoly, Protestant churches explicitly disclaim one, and Judaism and Islam at least appear to have much more distributed approaches too. In any event, as best I can figure, there are two main reasons to talk to God: to obtain inspiration or confidence on how to act in future, and to gain forgiveness for how you acted in the past.

Economic arguments are fundamentally the same as evolutionary ones: that people and organisations that act in ways that get rewarded will be strengthened, and thus people will ultimately act “rationally”, that is, in ways that gain them the most profit. Whether that happens because they’re clever and do that straight away, or because the ones who are lucky enough to latch on to smarter ways are the only ones that survive long enough to write history books doesn’t really matter. Another way to put that is that one way or another you have to monetise your natural advantages, whether you’re a person, business or church.

I’m not sure if monetising inspiration is something that happens in Catholicism; maybe they’ve left that market niche underexploited, and hence ripe for the picking by more evangelical churches. Either way, monetising forgiveness is quite obvious, whether it be directly by charging for indulgences, or more indirectly via the principle of salvation by good works. And, equally naturally, some measure of the proceeds would be retained by the church to further its good works, ie, as profit. And obviously, over the centuries there’s been quite a bit of profit to be had.

However, once you’ve got a successful strategy for taking your natural advantages and large market and turning that into a revenue stream, evolutionary pressures will continue to force you to take as much advantage of that as possible: because even if you don’t, someone else will, so in a hundred years, they’ll be the ones in a position to expand, and in two hundred years, they’ll be the ones writing the history and you’ll be the footnote.

And what’s the best way to expand the market and revenues in forgiveness? Increasing the supply of things to be forgiven for, and increasing the opprobrium with which they’re met. That is to say, promoting guilt. You can, no doubt, take that too far: despite their devotion, it’s unlikely the Flagellants rewarded the church as much as people who work six days a week and only worry about feeling guilty about their indiscretions on Sundays. That should, as far as I can see, lead to optimisation pressures focussed on maximising the congregation’s guilt, but not to the point where it renders them unproductive citizens and thus reduces tithing revenue.

If you can tie that guilt to strong innate urges (lust, anger, envy, say) that probably makes things easier, and if you can create price discrimination by making more wealthy people feel more guilty, and thus willing to pay more for forgiveness (by talking about rich men and eyes of needles, say), you might really have something.

So here are the not particularly testable conclusions from that. Given the religious tenets of monotheism and the sacred magisterium, both corruption involving indulgences and a culture of guilt should be expected. By comparison, a church based on the Priesthood of all believers should be expected not to sustain a culture of guilt; that is, if it started with the same culture, as many Protestant churches did during the Reformation, it should become less moralistic over time, or, if it started independently, it should maintain on average a lower level of guilt than the Catholic church, despite otherwise common religious principles.

In addition, the rise of competing churches, and thus alternative paths to forgiveness, should decrease the value of the Catholic church’s monopoly, and thus decrease the premiums it can collect (as tithes, presumably), and potentially decrease the level of guilt it can inspire in its parishioners. Thus, you should expect Catholic churches to be more tolerant in areas with a more religiously diverse population, and more strict in areas with high percentages of self-identifying Catholics. And, presuming the above argument is valid, you should expect that almost entirely independently of the morality or scriptural basis for such behaviour, due solely to economic and evolutionary forces, and time.

Toying with IPv6

For no good reason I’ve got IPv6 working on my laptop now. It’s not directly via Internode, unfortunately, because for some reason it wouldn’t seem to authenticate. But Aarnet’s broker service seems to work easily enough, and is almost as good. My setup involved installing the tspc package, setting userid, passwd, server, if_prefix and a couple of other settings in tspc.conf, and crossing my fingers. And as a result, I’ve now got a personal, routable, public /64 and I can see the bouncing logo at ipv6.google.com. Beyond that, it’s not really very interesting. There are two things I’d like to do with it, but at the moment I haven’t quite figured them out.
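
For the curious, the relevant chunk of tspc.conf looked something like this; the values are placeholders, and the last couple of option names are from memory rather than copied from my config, so double-check against the package’s sample file:

    userid=myaccount
    passwd=sekrit
    server=broker.aarnet.net.au   # Aarnet's TSP broker, iirc
    if_prefix=wlan0               # interface the routed /64 gets assigned to
    host_type=router              # ask for a prefix, not just a single address
    prefixlen=64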

The first is to use it for tunneling. It would be quite nice to be able to say “set up my encrypted tunnel” and then be able to ssh/vnc directly to machines behind a firewall. Unfortunately IPv6 doesn’t buy much there: it just means I don’t have to worry about my private subnets clashing; I still have to have firewalls and tunnels and setup and teardown. And worse, I don’t actually run IPv6 on the other network, so I’d really rather have what I see as a VNC connection to an IPv6 host end up making an IPv4 connection. Which I think is going to involve having a dozen IPv6 addresses on the private subnet’s IPv6 router/firewall, and using 6tunnel to redirect an IPv6 VNC connection on those addresses to an IPv4 VNC connection to one of the actual computers. And I’m not really sure that’s going to be worth the effort, but hey.
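
Something like the following is what I have in mind for the 6tunnel part; untested, and going from the manpage rather than experience:

    # accept IPv6 VNC connections on one of the router's addresses and
    # relay them to a machine's private IPv4 address; -6 listens on an
    # IPv6 socket, and the IPv4 target keeps the outgoing leg on v4
    6tunnel -6 -l 2001:db8:1234::101 5900 192.168.1.10 5900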

The second is a bit pie in the sky too. I think it’d be interesting to have an IPv6-only wireless network at home: rather than handing out private, NATed IPv4 addresses, just hand out public IPv6 addresses. That’s no fun if you can’t do anything useful, of course, but at least in theory you ought to be able to do something about it. Having a web proxy with an IPv4 address might get you most of the way, and perhaps you could use ssh’s ProxyCommand option to get ssh to IPv4 hosts working too. I had thought there were supposed to be enough IPv6 addresses to make the entire IPv4 Internet addressable by IPv6 hosts, but apparently all the interoperability mechanisms keep getting deprecated, because a requirement for using IPv6 is apparently thinking NAT is evil.
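
The ssh side at least looks straightforward, assuming a dual-stack box somewhere to bounce through (the hostnames here are made up):

    # ~/.ssh/config on the v6-only laptop: reach IPv4-only machines by
    # bouncing through a dual-stack gateway that has netcat installed
    Host *.v4only.example.com
        ProxyCommand ssh gateway.example.com nc %h %p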

So yeah, bouncing google logos is pretty much it for the moment.

Pensions for millionaires

A recent ABC story quotes the Brotherhood of St Laurence’s recommendation that the family home be counted amongst net assets when calculating pension entitlements, so that retirees who have valuable homes would take out a reverse mortgage, eg, rather than getting a pension. I somehow suspect we’re going to see a lot of ideas along those lines over the next twenty years as the baby-boom becomes the retiree-boom. Here’s roughly how that one works out:

  • work all your life to pay off your mortgage
  • retire
  • find out there’s been a housing boom and your home is now worth over a million dollars
  • find out that means you don’t get a pension
  • take out a reverse mortgage, so that you get paid a pension-equivalent of $24,500 pa
  • hope you don’t get taxed on that, and that home loan rates stay at about 6% for the rest of your life
  • put up with losing other benefits that are tied to your pension
  • live for another twenty years, getting a total of about $490,000 in pension payments
  • leave your estate to your kids, who get to find out that your fake pension has eaten up $943,000 of home equity (rough arithmetic below), and if they want to keep the home, they get to start with an almost million-dollar mortgage to pay off
  • or, more likely, end up with whichever bank was generous enough to pay you your pension, getting a half-million dollar discount when buying the rest of your house from your estate
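
Those figures check out, near enough, on the assumptions I’d guess were used: the pension-equivalent drawn monthly, with the loan balance compounding at a nominal 6% pa. A quick sketch:

    # rough sanity-check of the reverse-mortgage numbers above
    monthly_draw = 24500 / 12   # $24,500 pa pension-equivalent, drawn monthly
    rate = 0.06 / 12            # nominal 6% pa, compounding monthly
    months = 20 * 12            # live another twenty years
    balance = 0.0
    for _ in range(months):
        balance = balance * (1 + rate) + monthly_draw
    print(f"pension drawn:        ${monthly_draw * months:,.0f}")  # ~$490,000
    print(f"home equity consumed: ${balance:,.0f}")                # ~$943,000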

I wonder if that’s really the social contract folks thought they were signing up to when they were paying taxes and their mortgages over the past forty odd years.

And you know, somehow I don’t think the recent drops in superfund balances are going to be the last thing to screw over retirees.

WoBloMo

Okay, so it’s not writing a novel, but it could still be fun. Via David Pennock, who is apparently of the view that if something’s worth doing, it’s worth registering the domain and turning it into a worldwide phenomenon. And hey, why not?

What’s a synonym for “meme” that starts with “a”?

An Alliterative Adventure

By Anthony

An age ago, Andrew was amusing himself annoying army ants. His aunt, Alice, an accomplished author, was attired in aubergine accoutrements from Abercrombie and Fitch (awkwardly altered by an anterior artichoke accident). Angsty and anxious, she attempted to ameliorate her aches with an aspirin, so as to be able to absorb her awareness in the arid aspect of Alice Springs, arena for the approaching article she was attempting to assemble, and already awaited by her adoring aficionados. ARGH! Her absentmindedness had accosted her again, her appointment to attend Anaconda had been abolished from her attention! The apple juice was abandoned, the ABBA anthems aborted, as the aardvark to Albert St was about to accelerate! Not actually an animal, the “aardvark” was the appellation for her archetypical Audi, and accordingly, the AM was hardly adjusted to acquire Air Supply’s “All out of Love”, as she arrived.

1. What is your name: Anthony
2. A four Letter Word: army
3. A boy’s Name: Andrew
4. A girl’s Name: Alice
5. An occupation: author
6. A color: aubergine
7. Something you wear: Abercrombie and Fitch
8. A food: artichokes
9. Something found in the bathroom: aspirin
10. A place: Alice Springs
11. A reason for being late: absentmindedness
12. Something you shout: ARGH
13. A movie title: Anaconda
14. Something you drink: apple juice
15. A musical group: ABBA
16. An animal: aardvark
17. A street name: Albert St
18. A type of car: Audi
19. A song title: All Out of Love
20. A verb: arrive

(Via Jason on Facebook)

Facial Dignity, huh?

Bdale blogs about his new-found sensitivity to breezes, which reminds me of two things.

One, much like Arjen, I feel inclined towards some show of solidarity for Bdale’s follicles, hence:

bearded

Second, and also coming somewhat indirectly under the heading of “facial dignity”, it strikes me as a good time to recall one of my favourite slogans: “linux.conf.au: it’s the people you meet”.

crazy keithp

For some reason I don’t have a lot of pictures of the beautiful scenery around Hobart this year…

Recessions and Free Software

I’m still not convinced I’ve got much of an idea what’s going on with the economy and global finance and all that, but one thing that seems apparent is that to at least some extent we should be expecting a widespread extended recession.

Naturally, that’s inspired various folks to explain how, in one way or another, that’s a good thing for open source (much like you could probably expect news that the economy was absolutely fine all along to be equally heralded as good news for open source). I certainly don’t think it’s all good news: it’s definitely been tougher for the lca organisers this year, for instance, with more speakers than usual having to pull out, somewhat fewer attendees than hoped for, and corporate sponsorship not as readily available as it has been in the past. So I wonder how that’s all going to add up?

A recession is a sustained decrease in GDP, so at its simplest, you get a recession when, collectively, the average person and business is spending less money over the course of a year (and there aren’t sufficient new people or businesses to make up the difference). Taking that as a given might mean good things for open source adoption, in so far as the purchase price is cheaper, so people and companies that can switch to free software are able to cut their costs without losing out on features (or businesses that have already chosen free software can keep their costs low and their profits reasonable or high, and watch their competitors either catch up or disappear).

Ultimately, if everyone were to successfully switch to free software overnight — and the new software took no time to install, did everything they were used to, and was so intuitive and easy to learn that it didn’t require retraining or any lost productivity, and every single computer user was better off — that would result in a measurable recession simply because less money was changing hands. In that sense, there’s nothing wrong with a recession: it can just be part of the economy restructuring to take account of more efficient and effective ways of doing things. And in so far as open source software is effective and efficient, the fact of a recession likely will result in companies looking seriously at open source as a cost-cutting measure.

On the other hand, it doesn’t necessarily mean good things for open source support or training businesses, unless they’re either already, uh, high-value propositions (ie, cheap), or their costs are more than made up for by savings on licensing. That’s probably a mixed bag for open source companies, but hey, web documentation, source code, conferences and IRC work for me, don’t they work for everyone? And I suspect there’re plenty of savings to be made in many IT budgets before the RHELs of the world start filling up CFOs’ pie-charts.

The Fedoras, Debians, Ubuntus, CentOSs, OpenSUSEs and OpenSolarises of the world, otoh, probably get something of a boost, as far as I can glean. In so far as they’re no-cost, non-budget items, they may get additional users, and hence additional contributions from those users (such as bug reports, extra helpers on forums, useful patches, more advocates, or simply more relevance).

Of course, companies cutting budgets is going to hit salaries too, and if it hasn’t already hit significant numbers of people who are paid to work on free software, it will soon enough. If there’re any open source hackers getting paid big bucks to do visionary work that isn’t going to make any obvious practical difference to anyone in the foreseeable future, that probably isn’t going to last so well, much like Eazel didn’t, though of course its code lives on. I can’t actually think of any projects falling into this category these days though; maybe they’re just all webapps this time round.

But in so far as free software development is done in people’s spare time, having more people with more spare time is likely to result in a net win for free software development anyway; whether that arises because people are just working less overtime, are laid off entirely and are filling in time (and their resume) until they’re working again, or they’re staying in university a little longer because a PhD suddenly doesn’t compare so badly to private sector salaries and stock options.

Donations and sponsorships and other freely provided resources are a bit more difficult though. Getting tens or hundreds of thousands of dollars to help run conferences like lca and debconf, or to fund development programs like GSoC, or to keep things like the Linux Foundation or the SFLC running will be more challenging. Challenging doesn’t mean impossible, or even necessarily hard, but it’ll probably mean things will look more local and unconferencey than like international trade shows, and non-profits will have to make more effective use of smaller budgets; those aren’t necessarily bad things though. I don’t think there’ll be much of an issue with in-kind donations: bandwidth and hardware are insanely cheap for all reasonable purposes these days. At worst I guess it’ll mean some mirror sites scale back or shut down, that distros will need to use P2P systems like bittorrent more often, and that more people might have to actually deal with monthly bandwidth/usage limits… But none of that seems likely to have much of an impact on open source development or adoption as far as I can see.

I wouldn’t call any of that a huge win; more a lot of small benefits, with a few difficulties. Mostly that’s due to free software being free as in beer, fun to do as a hobby, and not very expensive to work on; so in so far as all that applies to proprietary software development, it’ll be better off too. So free/cheap proprietary webapps that can be used as a cost-cutting measure will probably also cope fairly well with economic recession, while expensive proprietary software had better be able to justify its price tag in triplicate if it wants to stick around: these are the sorts of times people get fired even when they buy IBM.

I wish I knew what it all meant for startups versus established businesses, or Australia versus the US or Europe, but oh well. Interesting times, either way.

Oh dear

So I’ve applied for membership of SAGE-AU, and started using Drupal to build a website. Both seem entirely satisfactory, but I wonder how much more of this I can take before I’ll have to hand over my Real Programmer’s license…

Tuz in Tas

At linux.conf.au. Meant to blog yesterday before it started. Didn’t. In a talk. Trying to blog a sentence every time Pia takes a breath. May not have been clever plan. Hobart’s pretty nice so far. Networking worked in my room first go. The view out my window from Saturday:

lca-window

And the last two overs of the cricket yesterday were good.

My plans for the week, in theory, are in the wiki. I’m hoping to run/go to two BOFs: a startup one, and an accounting software one. Haven’t found the BOF board to schedule them yet. Pictures of the conference mascot/plushy are available at other blogs. He’s cute. Peace out.

Linux Australia Financial Status

So the pre-AGM drafts of LA‘s financial status are up, in particular the 2007/08 financial year summary and the (hopefully) easier to follow bottom line figures (also, the 2006/07 financial year summary, since I couldn’t find it online elsewhere). I’m pretty happy with them, both in what they tell us about the health of the organisation and linux.conf.au, and in the level of detail that’s available (not too little, but hopefully not so much as to be overwhelming either). I think it’s at about the level I’d expect of a professionally run organisation (which is, of course, also what I expect of volunteer-run free software organisations). There aren’t any pie-charts, though.

There are still a few problems with the treasury in general:

  • we still have problems getting bills paid and reimbursements organised properly (though I think that’s improving),
  • we don’t have good budgeting processes yet (though I don’t think it’s possible to have a good budget until you’re confident in your understanding of your current and historical financials),
  • we don’t have very good document handling (the treasury’s made up of an unfun mix of papers, emails and other electronic stuff none of which is really well indexed); and
  • we don’t yet have good transitional processes (for when we get a new treasurer, or in how to improve our systems incrementally instead of always replacing them entirely).

With some misgivings (the other council members have had to pick up a fair bit of slack on my behalf this year, Terry and Steve in particular), I’ve accepted the nomination to continue as treasurer into 2009, with the primary hope of getting the first and last bullet points above improved, and maybe also getting some budgeting happening.

Unfortunately, I don’t have much idea on how to improve our document handling to something a bit saner, let alone something I’d actually consider good. Moinmoin is okay, I guess, and the Drupal CMS on the website proper is at least pretty, but I don’t think either would really cope with dumping all our receipts, invoices, and other paperwork into them. Using Confluence (which I’ve been reviewing lately for other purposes, and am growing to quite like) might be an effective approach, but while it does run on Linux, and is used by various free software projects, it’s not free software (everyone who buys it gets source access, everyone who doesn’t, doesn’t; there’s a cnet interview for anyone who cares), and I’m not sure that’s where we want to go, inasmuch as we’re a national FOSS organisation… Oh well, hopefully whoever’s elected secretary will solve that problem somehow.

Anyway, thought I might blog that for anyone who’s interested but isn’t already following linux-aus. Comments are open for anyone with anything to add. :)

Unix 2.0

I quite like the Web 2.0 revolution, both from the interactive, crowd-sourcing, social aspect, and the technical aspect. It’s the technical side of things that I’ve found particularly interesting lately, especially after reading about some of the possibilities of Google Chrome and Google Gears and apps like AjaxTerm or ChipTune. There’s a whole lot of development thought and effort going into moving applications onto the web, and it’s probably pure hubris to try to guess how it will all end up, but hey, why would that stop a blogger?

I’ve always quite liked the Unix philosophy (UHH ownership notwithstanding), and there are some really interesting parallels between it and aspects of the AJAX revolution.

One of the most elegant features of Unix’s X Window System is the separation between the X server and X clients, which allows your applications, as X clients, to easily be pointed at varying X servers, so you can have them display on a different computer, or a virtual desktop, without major challenges. On top of the basic X system, there are a number of extensions servers can support to provide more powerful or faster access to the display/hardware, and there are a number of toolkits, such as gtk or qt, that make it easier to do standard things with X (like create menus and toolbars). And being a standard protocol, that means you can run an X server on your Windows or OS X machine, and display programs running on a remote Unix box just as if they were running locally.
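
In practice that’s as simple as (with “farhost” and “mydesktop” standing in for whatever machines you like):

    # run xterm on farhost, but have it draw on the local X server,
    # tunnelled and authorised by ssh
    ssh -X farhost xterm

    # or, trusting the local network instead, point the client at an
    # X server directly
    DISPLAY=mydesktop:0 xterm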

The major downside with this, though, is that X servers are largely “dumb”: they’ll provide standard extensions which help some things, but otherwise it’s up to the client to do all the work. If you’re running the server and client on the same machine, that doesn’t matter. But if the client’s an ocean away, that can introduce a lot of lag between simple UI actions and responses, which makes for a bad user experience.

And Web 2.0 apps work kinda similarly. Rather than an X server, you have a browser; rather than an X client, you have a website; rather than extensions, you have plugins; and rather than toolkits you have Javascript libraries. And it doesn’t matter whether you’re running Windows or OS X or Unix, or which of those the website’s using — it’s all compatible.

But there are two additional features Web 2.0 has over X: the first is that you can specify your layout with a powerful (and, with CSS, themeable) language, in the form of HTML; and the second is you can actually push smarts to the client machine without requiring the user to install plugins/extensions, by making use of Javascript and dynamic HTML.

I think that benefit alone makes a good case for considering the Unix desktop of the future to be web-based: in theory at least you can duplicate the current features of X, while at the same time gaining some quite interesting benefits. (And since I’ve already nominated 2007 as the year of the Linux desktop, I’ve got no reason to worry about discarding current technology as obsolete :)

But what does that mean? I think the aforementioned AjaxTerm provides a good example of what existing Unix apps could be like if converted to web apps. Compare it to the equivalent standard Unix application, xterm. Normally, you would invoke the command (eg from an existing shell, or by double clicking an icon); it would find the $DISPLAY setting, connect to the X server it references, and start sending data to open a window and display the output of the commands you’re running in the terminal.

Now suppose you want the same experience with AjaxTerm — that is, you’re logged in somewhere, and you want a new terminal window to appear, and you either type “ajaxterm” at a prompt, or double click the “ajaxterm” icon. In order for that to work:

  1. Your program needs to start providing content via a URL that’s accessible to the client.
  2. Your program needs to contact the client’s browser, and tell it to open a new window, pointing at that url.
  3. Your program needs to supply CSS and JavaScript files necessary to provide the correct user experience.

None of those seem terribly difficult things to do, at least in theory. If you’re already in a browser, and doing the equivalent of clicking a link, they’re actually pretty easy. But doing it with the same features as xterm does create some challenges, particularly in avoiding either needing a daemon running prior to the invocation of ajaxterm, or having the connection between the program and the display be public (ie, DISPLAY=:0, versus http://example.com/rootshell).
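
Here’s a minimal sketch of those steps in Python, with the “don’t be public” part handled by an unguessable URL; everything here is illustrative, not AjaxTerm’s actual interface:

    import secrets, webbrowser
    from http.server import HTTPServer, BaseHTTPRequestHandler

    token = secrets.token_urlsafe(16)      # stands in for X-style authorisation

    class App(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != f"/{token}":   # wrong token: pretend we're not here
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>app UI, CSS and JS go here</body></html>")

    server = HTTPServer(("127.0.0.1", 0), App)            # 1. content at a URL
    port = server.server_address[1]
    webbrowser.open(f"http://127.0.0.1:{port}/{token}")   # 2. open a window on it
    server.serve_forever()                                # 3. serve the HTML/CSS/JS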

The next question is whether the technology is advanced enough to actually provide a good user experience, but with demos such as the aforementioned ChipTune and the ubiquitous Google Maps, that seems easy to answer in the affirmative.

How hard it is to make good web apps is the next question; there are certainly plenty of horrible web applications out there to demonstrate it’s not trivially easy. There are a few challenges:

  1. Designing good HTML and CSS based interfaces
  2. Writing basic user interface code
  3. Coding in JavaScript
  4. Communicating with the web server
  5. Avoiding web security issues (XSS, etc)
  6. Avoiding limitations imposed by web sandboxes

I’ve not had much luck finding good ways to design nice HTML+CSS sites, though to be fair I have spent years deliberately trying to avoid acquiring that skillset. And for webapp development, there’s the added difficulty that most of the Javascript libraries are fairly new, so they’re still in development themselves, and hoping for a Glade for your preferred Javascript Gtk (script.aculo.us, YUI, etc) seems a bit forlorn.

That Javascript is essentially a specialist web language naturally doesn’t help those of us who aren’t already specialist web programmers, but there seem to be a few effective ways to avoid coding everything in Javascript (Google Web Toolkit and Pyjamas compile Java and Python respectively to Javascript, and from what I gather Rails and Pylons and similar will let you create a whole bunch of standard UI elements without having to touch Javascript directly).

Communicating with the web server, and dealing with security issues, is either easy (you’re not doing anything fancy, and the browser keeps you and your users safe), or can potentially be hard (if you’re trying to work around browser limitations or set up collaborative sites). But ultimately, whatever you’re trying to do is either no harder than it would be any other way (eg, designing multiuser programs), or just needs some help from a plugin, and Google Gears seems to be doing a particularly good job covering the bases there.

So congratulations, we just argued fairly convincingly that it’s possible to write applications for the web — who would have thought? The real question is whether we can make them “Unixy”, or for that matter “open sourcey”.

One nice aspect of Unix applications (and particularly open source ones) is that they’re not really tied down to a particular machine. If you’ve got one machine with an app installed, you can generally quite happily copy it to another machine and, as long as you’ve got the necessary libraries installed, just run it. And heck, this is what vendors and distributors expect, and if you want an app, the normal way to go about using it is to install your own copy locally and then use it.

Webapps are generally the exact opposite of that: not only are they generally tied into other applications, and require configuration changes and a daemon (apache, tomcat, etc) running before you can use them, but they’re often not distributed at all, and only run by the company that developed them. If you want to use the app, you send your data to them, they store it, and you just get to see what they let you see. There are two reasons for that: one is just business (if you control the app, you control the users, and maybe you can make money that way); the other is that serving a webapp is actually hard: you need to set up and configure a web server, work out a security policy to prevent it being accessed by random people, provide storage for the data, and manage updates and such. One of those, at least, is solvable.

But having a forced separation between your screen and your data is useful too — if your data’s not actually on your laptop, it’s not as big a problem if your laptop gets stolen, or broken. And when you have to explicitly contact a server to write anything at all (browsers not providing local storage for webapps, generally) you get that feature for free. If you could make it an option where exactly your data gets put — so that the webapp that is Gmail, eg, could be told to access and store your emails on google’s servers, or on your own computer, or even on Amazon’s S3; you’d have a really powerful system, that suddenly feels not only a lot more free than current webapps, but also gives users a lot more freedom than current open source desktop apps.

The other downside of webapps is that they run on browsers. Which, compared to regular Unix, is lame: you don’t get multitasking, protected memory, process control, ease of debugging — heck, half the time you don’t even get to avoid your windows being adorned with various bits of spyware. In theory, Chrome fixes all that, though sadly it’s still Windows only. (OTOH, there’s a Mozilla hacking tutorial at lca this year, so maybe that’ll help Firefox pick up some of the features)

If you could assume a Chrome-like design from your browser, you could then take that a little further, and corral your webapps under different user-ids, so that you could ensure that a malicious Facebook application running under one user-id (aj-fun, say) can’t exploit a vulnerability in your Javascript interpreter to access your netbanking passwords, cookies, or so forth, stored under another user-id (aj-serious, eg). Most Unix systems could handle that easily (a single-user Unix desktop system can generally cope with anywhere from 20,000 to 4 billion user-ids, with only a few dozen already allocated to the system), and if the browser could separate different windows/sites into different processes like Chrome claims to, and provides a hook for privilege changes via sudo, we’d have a pretty good step towards securing both the web generally, and users’ desktops as well.
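
As a very rough sketch of the idea with stock tools (the user-ids, profile names and URLs here are all made up, and you’d also need to grant the extra uids access to your X display, via xauth or similar):

    # one uid per trust domain; -no-remote stops firefox handing the URL
    # to an already-running instance, -P picks a per-domain profile
    sudo -u aj-fun     -H firefox -no-remote -P fun     http://apps.facebook.example/
    sudo -u aj-serious -H firefox -no-remote -P serious https://netbank.example/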

Additionally, you’d want to make the browser a little more invisible too — moving it more along the lines of a window manager than an application in its own right, at least when running webapps. This is close to what Chrome does with its Application Windows but might go a little beyond that.

Add all that up, and what do you get?

First, you change the system architecture: to run apps, your Unix desktop now has to offer:

  • a kernel and X drivers to deal with the raw hardware
  • a window manager and/or browser that supplies appropriate chrome for switching between tabs/pages
  • a rendering engine that will handle HTML and CSS (probably in multiple layers, that mostly already exist)
  • a Javascript VM/interpreter
  • any plugins necessary for apps to do more than base W3C standards allow (eg, Gears, Flash)

And the system with the apps you want to actually use needs:

  • a way of serving URLs to clients (eg, http)
  • appropriate background daemons to support apps (apache, tomcat)

Development changes from just picking a language (C, python, etc) and a toolkit (Gtk, wxWidgets, etc) to needing:

  • a language for the application (C, python, Java, etc)
  • a language for the UI (Javascript, or Java+GWT, Python+Pyjamas, Rails, etc) and possibly a toolkit (if it’s not already implied by the language)
  • a support framework/infrastructure for serving URLs (tomcat, Pylons internal server, etc)
  • a protocol for communicating data between the application and the UI (the “X” in AJAX, basically)
  • an HTML- and CSS-based design for your user interface

Unfortunately, the latter list is currently way too complicated. If you could successfully simplify it just a little, so that you just had to choose one language (and a toolkit), with good standards for the frameworks/infrastructure, you could start writing apps for the web just as easily as you write apps for the Unix desktop, with exactly the same user experience for people on Unix desktops, but the added benefit that it actually works just as well for people on OS X or Windows, and just as well for people in a different country.

And since Unix desktop hackers are already halfway used to this, with the separation of X servers and X clients, it’s a relatively small step to a real brave new world.

So that’s my theory. If you want it summed up more pithily, I’m now claiming not only that 2007 was the year of the Linux desktop, but that if we’re lucky, 2009 or 2010 will be the year the Linux desktop is completely superseded by the web desktop. :)

Netbook bleg

So, having just last night been trying to convince myself that I could deal with the underpowered CPU and (more importantly) limited battery life of the HP Mini-Note 2133 (Joxer seems happy with his), along comes CES and the announcement of the HP Mini-Note 2140, with an Atom CPU, and reportedly a resolution of 1366 by 768.

Want.

Anyone reading know of any way to get one by/at linux.conf.au at the end of the month? Apple had a bunch of iBooks and PowerBooks to loan out (and I think optionally purchase) at linux.conf.au 2004; maybe that’d be something a certain Emperor Penguin Sponsor would consider this year… hint hint! :)

Babysitting bankers

It’s interesting how much high-falutin economics seems to come down to analogies to baby-sitting.

Quite a racket

The most recent example (via Russ Nelson a few days ago) is an article by Paul Krugman (actually from 1998) extrapolating from a baby-sitters’ co-op to the Asian monetary crisis. Basically, the idea is that a bunch of parents get together and decide to make coupons for baby-sitting, so when you babysit for someone else for a night, you get a coupon, and when you want a night out without the kids, you give a coupon to someone else. Unfortunately, this goes pear-shaped because parents decide they want to save up coupons in case they want a few nights out in a row, eventually leading to a babysitting recession: people would like to babysit and others would like to go out, but they’ve decided not to, in order to save their coupons.

Krugman reports attempts at “legislative” solutions — requiring each couple in the co-op to go out twice a week and thus ensuring some circulation, but basically advocates the “economic” (or more precisely “monetary policy”) solution of just issuing more coupons to everyone. Voila, their saving is done for them, and they can go ahead and spend, and the baby-sitting economy functions again. (Although, there’s also the telling comment “Eventually, of course, the co-op issued too much scrip, leading to different problems …“)
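
To see how sharp that gridlock is, here’s a toy version of the co-op (my construction, not Krugman’s): couples only spend a coupon on a night out if doing so won’t take their stash below the buffer they’re trying to hoard.

    import random
    random.seed(1)

    def nights_out(coupons, hoard_target, rounds=1000):
        # each round, any couple holding more coupons than its hoard
        # target goes out, paying one coupon to a randomly chosen sitter
        trades = 0
        n = len(coupons)
        for _ in range(rounds):
            for spender in [i for i in range(n) if coupons[i] > hoard_target]:
                sitter = random.choice([i for i in range(n) if i != spender])
                coupons[spender] -= 1
                coupons[sitter] += 1
                trades += 1
        return trades

    # twenty couples with two coupons each, all wanting a three-coupon
    # buffer before they'll spend: nobody ever goes out
    print(nights_out([2] * 20, hoard_target=3))   # 0

    # "issue more scrip": two extra coupons each, and the co-op unfreezes
    print(nights_out([4] * 20, hoard_target=3))   # lots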

That’s pretty close to the remedy proposed for today’s worries — “It’s a credit crisis: banks won’t give out money because they want to keep lots themselves, but they can’t get any themselves because other banks won’t give it out. So let’s just give them more money so they can forget about saving and get on with it!” with concerns about inflation or balanced budgets relegated to an off-hand comment.

But at least in the baby-sitting case, a monetary solution isn’t the only one available. You could also simply “float” the baby-sitting coupons — instead of one coupon always being worth one hour of baby-sitting, no matter if it’s a stay-at-home Monday or an out-on-the-town Saturday, let the couples negotiate rates. If it so happens that everyone wants to save coupons, then that might mean that a couple who wants to go out will only offer one coupon for three hours’ baby-sitting, and another couple will be happy to accept that, because gosh, their supply is running really low. And, of course, if that discounting continues and sets a new benchmark price (one coupon for an entire evening out), you have baby-sitting deflation, and everyone’s coupons increase in value by 200%, and your problem’s solved.

That also seems like it addresses the real problem case Krugman identifies in his sidebar: namely wanting to be able to vary your monetary policy seasonally. An hour of baby-sitting in winter, when nobody wants to go out as much, isn’t worth an hour of baby-sitting in summer, when everyone wants to go out. And baby-sitting on Fridays and Saturdays anytime is probably more valuable than baby-sitting on Sunday through Thursday, too. And then, of course, there’s the prospect of baby-sitting Calvin…

(If you really trust everyone involved, then you can give people credit and there’s no great need for savings. Instead of a coupon they’ve saved up, they can just write an IOU, and as long as they fulfill it later, everything’s good. That takes away the self-regulating aspect though: you have to check that someone isn’t just writing up hundreds of IOUs and never making good on them. An interesting system for IOUs is Yahoo’s yootles project, which sadly seems to be bit-rotting a bit, but allows automated tracking of IOUs, including dealing with credit, making boring historical debts disappear, and some really fancy calculations of the best way to divide leftover ravioli.)

Changing prices is where economic rubber hits the road, though: high prices make the ACCC investigate petrol companies for collusion, low prices make people complain that Walmart doesn’t care about its employees.

This game

Economists and politicians tend to consider wage levels in particular to be “inelastic”, in that people will refuse to accept lower face-value wages even where it would be warranted (your work is less valuable than it used to be: heck, a robot can do it, and not that many people are interested anyway…). Even worse, it’s possible that that’s not actually true: some people do accept wage cuts. But the general expectation seems to be that wages will either get slowly eaten away by inflation (which is why economists like to have a lower limit on inflation as well as an upper one), or disappear entirely when people get fired or companies go bust.

And in the baby-sitting example it’s pretty simple to see why people don’t like price changes: “I baby-sat your brat last month for three nights, now you’re telling me that you’ll only baby-sit my lovely angel for one? Screw you!” But if the price has changed over that month, that’s actually the fair result: perhaps everyone was wanting to stay at home last month anyway, and they could’ve gotten baby-sitting from five other couples; and this week there’s a concert on that everyone wants to go to, and the only couples that are staying at home are completely booked out for their baby-sitting. Or maybe they are being completely arbitrary in raising the price, in which case “screw you” is completely the right response, because there are other baby-sitters available who aren’t being unreasonable, and you haven’t been robbed, because your coupons will work for them. And the result’s still fair: you get baby-sitting at a fair rate from someone else, and they lose the chance to do baby-sitting for you in future, because you’ve stopped trusting them.

That point hits worst when you extend the delay to extremes, which is what you get when you look at saving for retirement or going on a pension. Ultimately, you’re hoping that the value the next generation assigns to your savings, or even to the generic value of your worth as a person, is enough to do what it takes to keep you fed and healthy and happy, without you having to do another damn thing in return. Hopefully, that’s pretty easy: in twenty or forty years, there’s a good chance that we’ll have invented more stuff, and feeding people, doctoring them, and entertaining them will be even easier than today. But there are two fundamental difficulties: one is that people generally start taking new advances for granted, and upgrade their expectations accordingly; the other is that in a bunch of countries, there’ll simply be fewer people actively doing work in future: ie, the mass of babies produced in “the baby-boom” hits retirement age, and has to hope their generation’s 2.01 kids can support them just as well as their parents’ generation’s 3.5 kids supported them. Or perhaps they’ll have to hope that China and India can provide enough new immigrants to keep Western economies ticking over, maintaining the value of their savings, and funding their pensions, and that those immigrants won’t be more interested in supporting their own parents back home.

Actually, having a look at recent US Presidents is interesting on that score. Pre-baby-boomers Ford, Carter, Reagan, and GHW Bush (born 1913, 1924, 1911 and 1924) had four, four, five, and six children respectively; baby-boomers Clinton, GW Bush and Obama (born 1946, 1946, and 1961), by contrast, have one, two and two, respectively. According to Wikipedia, the baby-boom is actually split into the baby-boomers proper (Clinton and GW Bush) and “Generation Jones” (maybe Obama, depending on when you draw the line), but the baby-boom included both anyway.

In any event, it’s going to be interesting to see how financially and demographically sound things turn out to be over the next ten or twenty years, because I somehow suspect it’s going to be Gen-X and Gen-Y that ultimately get to decide just how long and pleasant the baby-boomers’ retirement works out to be.

The baby sitter flag