Exponential Growth
On Wednesday the 25th, I was thinking about project growth. The day before I’d posed a question to the debian-vote list:
Over the next twelve months, what single development/activity/project is going to improve Debian’s value the most? By how much? How will you be involved?
There have only been a couple of replies so far, the first of which was from Russ Allbery, who took issue with the way I’d chosen to focus on growth. That led to some interesting thoughts; or at least, I find them interesting.
The examples I gave talked about how much each would improve Debian from users’ perspective in percentage terms — this would make Debian three times as good for one out of every ten of our users, eg. That, in turn, implies an exponential rate of growth: if you can consistently improve at a given percentage over a given timeframe, whether that’s 100% a month or 1% a year, then given enough time you’ll be doing better than anything that can’t keep growing exponentially. On the other hand, if you can’t maintain exponential growth, you won’t be able to maintain any given percentage either — linear growth, eg, will give you percentages that drop rapidly: 100%, then 50%, then 33%, then 25%, 20%, 17%, 14%, 12.5%, 11%, 10%, etc.
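To make that concrete, here’s a quick sketch in Python; the starting values and the 50% growth rate are arbitrary, picked only to reproduce the sequence of percentages above:

    # Compare year-on-year percentage improvement under linear growth
    # (add a fixed amount each year) and exponential growth (multiply
    # by a fixed factor each year). Starting values are arbitrary.
    linear, exponential = 1.0, 1.0
    for year in range(1, 11):
        new_linear = linear + 1.0            # +1 unit per year
        new_exponential = exponential * 1.5  # +50% per year
        print(f"year {year}: linear +{new_linear / linear - 1:.0%}, "
              f"exponential +{new_exponential / exponential - 1:.0%}")
        linear, exponential = new_linear, new_exponential

Linear growth keeps adding the same amount, so its percentage improvement decays towards zero; exponential growth reports the same 50% every year.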
The interesting thing is how that looks from a user’s perspective. If a user’s already got a working system — running Debian, or Ubuntu, or anything else — what’s the incentive to either upgrade or switch to another distribution? One is that ongoing support might disappear, so you can’t get security support, or Oracle will stop answering your calls because your OS is too old and they just can’t be bothered anymore. That’s a mostly negative approach though: you’re not expecting any benefit, so you just want to minimise the pain of upgrading. New features, behaviour changes, all that stuff is a cost, because you’re just looking to keep doing what you were doing before. The other reason to upgrade is exactly the opposite: that there are new features, or new ways of doing things that are a real benefit to you personally. Perhaps it’s a bunch of small things — a little less power usage, a few fewer errors here and there, fewer obnoxious popups, a faster boot, fewer typos around the place, some fixed bugs, some more documentation — that just add up to a more pleasant experience. Perhaps it’s one or two big things — you can replace your last Windows box, eg. Perhaps it’s something that pretty much only matters to you — you changed your name to an unpronounceable symbol, and finally your preferred font includes a Unicode glyph that you can use as your real name in your email program.
But it seems to me that if you’re going to provide a new version of your software and you want users to be happy about upgrading, then there are two things to focus on: making the changeover completely unnoticeable, and making the upgrade give an appreciable benefit to a significant number of users. And if you’re going for the latter path, then that does require a percentage improvement: for x% of your users, their experience using the new software has to be y% more pleasant than using the old software. (And if you decide to go exclusively for the former path, there’s an easy solution: don’t change the software at all — you can’t notice changes that aren’t there.)
And note that that’s actually the right answer, too: if you aren’t improving your users’ lives, you shouldn’t be releasing new versions. Upgrades come at a cost, even if they’re nominally free: some things break, you need to learn new things, and you often get forced to upgrade other things too. If you’re only making things 0.0001% better, it’s probably better to delay the upgrade until there’s a bunch of improvements that can be combined to actually make the cost worthwhile.
Getting back to the original question, it seems to me like it’s also fair to focus first on the things that are going to provide the biggest benefit; though obviously there’s plenty of room for debate over whether three small things are better than one big thing, or how to compare a short term benefit with a longer term benefit. But ultimately, I think it’s fair to say that when you multiply together all the improvements your contributors are working on, scaled by the proportion of users they affect and how much they affect those users, you want to end up with something similar to Moore’s law: that is, your project becoming twice as “useful” every eighteen months. Maybe it’s a different time frame, maybe it’s three years or five years, but if it’s not in the ballpark, then you’re basically not doing your users any favours.
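As a rough way of seeing what that multiplication looks like, here’s a sketch in Python: treat each improvement as making things some amount better for some fraction of users, so that its contribution to overall usefulness is roughly a factor of (1 + fraction × gain). Every number below is invented, purely for illustration:

    # Each improvement: (fraction of users affected, relative gain for them).
    # All figures here are made up for the sake of the example.
    improvements = [
        (0.10, 2.00),  # one in ten users, three times as good (+200%)
        (0.50, 0.10),  # half the users, 10% better
        (1.00, 0.05),  # all users, 5% better
    ]

    overall = 1.0
    for fraction, gain in improvements:
        overall *= 1 + fraction * gain

    print(f"overall usefulness multiplier: {overall:.2f}x")  # about 1.32x
    # Moore's-law pace (2x every 18 months) over a single year is
    # 2 ** (12 / 18), or roughly 1.59x.
    print(f"Moore's-law pace over a year: {2 ** (12 / 18):.2f}x")

So that particular made-up mix of improvements falls somewhat short of a Moore’s-law pace.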
So how do you get there? There are a few ways of getting that sort of growth. You can keep your userbase constant, and improve your quality. That has the benefit that you’ll likely get more users as well, so you’ll do better than you expect, but eventually you’ll hit a wall because your users will be near enough to completely happy that you just can’t make things that much better. Alternatively, you can maintain your quality and expand your userbase; which largely means finding new things to do, rather than doing the current things better. For open source development, that has the benefit that it can increase your contributor base too — if you have one contributor for every twenty users, and your userbase increases by 100% every eighteen months, then so does your contributor count. And if each of those contributors is focussed on individually linear growth — improving the system enough that twenty new users will adopt it, for instance — that will rebound back into sustainable exponential growth.
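Here’s a toy simulation of that feedback loop, using the one-in-twenty ratio from above plus the assumption that each contributor’s individually linear effort attracts twenty new users per cycle; both numbers are illustrative, not measurements:

    # Contributors scale with users, and each contributor's linear effort
    # (attracting ~20 new users per cycle) compounds into exponential
    # growth for the project as a whole.
    users = 1000
    for cycle in range(1, 6):
        contributors = users // 20   # one contributor per twenty users
        users += contributors * 20   # each brings in twenty new users
        print(f"cycle {cycle}: {contributors} contributors, {users} users")

With those numbers the userbase doubles every cycle, even though each individual contributor only ever does a fixed, linear amount of work.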
Now, there are limits to that sustainability: eventually you run out of people (or just match the rate at which the population is growing, anyway), or hit the absolute limits of a quality experience. But that just means one of three things: your project should expand into other areas that still usefully contribute to humanity at a significant level, people should stop spending much time on your project and work on other projects that usefully contribute to people’s well-being, or the human race has pretty much hit the absolute limits of its potential. Those seem like big calls to me, and at least in the areas that interest me, there still seems to be plenty of potential for big improvements.
So at the moment there are three responses to my original mail: from Russ, from Raphael Hertzog, and from (DPL candidate) Stefano Zacchiroli. And they’ve all pretty much avoided the “By how much?” part of the question. If it’s really fair to expect Debian to improve significantly (by 10%, 100%, 300%, whatever) over the course of a year — and as I’ve argued above, I think it is — then not estimating how much benefit particular efforts will actually produce seems both a bad way to establish the project’s priorities and somewhat disconnected from the philosophy of measuring performance and results that we expect from scientific and engineering endeavours.