I should’ve known better. I wrote a post a few days ago detailing my frustration with Linux, and suggested (admittedly in very indelicate terms) that the global effort to develop Linux into an alternative to general-use desktop OSes such as Windows and OS X was a waste of resources. I have absolutely no idea how 400 people (most of them apparently angry Linux fans, extrapolating from the comments) managed to find their way to the article within hours of my posting it. I think they must have a phone tree or something. Nonetheless, I should’ve been more diplomatic. So, as penance, I will here attempt to write a more reasonable post better explaining my skepticism of desktop Linux, and will even try to offer some constructive suggestions. I’m sure this post will get no comments, in keeping with the universal rule of the Internet that the amount of attention a post receives is inversely proportional to the thought that went into it.
Before starting, let’s just stipulate something supported by the facts of the marketplace: desktop Linux has been a miserable failure in the OS market. If you’ve consumed so much of the purple Kool-Aid prose of the desktop Linux community that you can’t accept that, you might as well quit reading now. Every year for the past decade or so has been proclaimed “The Year Linux Takes Off.” Except it’s actually going down in market share at this point.
As I pointed out (perhaps a bit rudely) in my first post, this isn’t just a bad performance, it’s a tragic waste of energy. Can you imagine the good that could’ve been done for the world if these legions of programmers hadn’t spent over a decade applying their expertise (often for free) to a failure like desktop Linux? For one, they could’ve made a lot of money with that time and donated it to their favorite charities, assuming they were as hellbent on not making money as they appear to have been. And two, it might have been nice to see what useful things they would’ve produced had they done something somebody was actually willing to pay for, as opposed to trying to ram desktop Linux down the collective throat of the world. You know, sometimes the evil capitalistic market does useful things, like keeping people from wasting their time.
Open Source community projects put innovation and scale at odds. If an Open Source project is to be large, it must rely on the input of a huge distributed network of individuals and businesses. How can a coherent vision arise for the project in such a situation? The vacuum left by having no centralized vision is usually filled by the safe and bland decision to just copy existing work. Thus, most large-scale Open Source efforts are aimed at offering an open alternative to something, like Office or Windows, because no vision is required, just a common model to follow. This is not to say that innovation is not found in the Open Source community, but it is usually on the smaller scale of single applications, like Emacs or WordPress, that can grow from the initial seed of a small group’s efforts. The Linux kernel is a thing of beauty, and is actually a small, self-contained project. But the larger distribution of a desktop OS is another matter, and here we find mostly derivative efforts.
An OS is only as good as the software written for it. One of the great things about Open Source is the tremendous power of being able to take an existing project and spawn off a new one that fixes a few things you didn’t like. While this is fine for an application, it’s problematic for a piece of infrastructure software expected to serve as a reliable, standard substrate for other software. Any Linux application requiring low-level access to the OS has to be produced in numerous versions to match all the possible distros and their various revisions. See OpenAFS for an example of how ridiculously messy this can get. For apps, do you support GNOME or KDE or both, or just go, as many do, for the lowest common denominator? And supporting hardware-accelerated 3D graphics or video is the very definition of a moving target. There are multiple competing sound systems, none of which is nearly as clean or simple as what’s available on Windows or the Mac. The result is usually a substandard product relative to what can be done on a more standardized operating system. Compare the Linux version of Google Earth or Skype to the Windows version of the same to see what I’m talking about. (That is, if you can even get them working at all with your graphics and sound configuration.)
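To make the sound-system mess concrete, here’s a minimal sketch (my own illustration, with a hypothetical function name, not taken from any real app) of what even trivial audio support looks like on Linux: the same “open the sound device” step needs two entirely separate code paths for OSS and ALSA, chosen at build time. And this doesn’t even touch PulseAudio, aRts, or ESD.

```c
/* Hypothetical sketch: one trivial "open the audio device" operation,
 * two completely different code paths depending on the sound system.
 * Build with -DUSE_ALSA and -lasound for ALSA, or plain for old-style OSS. */
#ifdef USE_ALSA
#include <alsa/asoundlib.h>

int open_audio(void)
{
    snd_pcm_t *pcm;
    /* ALSA path: behavior depends on which libasound the distro ships */
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return -1;
    snd_pcm_close(pcm);
    return 0;
}
#else
#include <fcntl.h>
#include <unistd.h>

int open_audio(void)
{
    /* OSS path: /dev/dsp may not even exist on an ALSA-only system */
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0)
        return -1;
    close(fd);
    return 0;
}
#endif

int main(void)
{
    return open_audio() == 0 ? 0 : 1;
}
```

Now multiply that by every distro’s choice of default, plus the sound daemons KDE and GNOME layer on top, and you start to see why a commercial developer might throw up their hands. On Windows or the Mac there is one blessed audio API to target.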
Pointing the finger at developers doesn’t solve the problem. To begin with, for the reasons I’m explaining in this essay, some of the poor quality of Linux software is the fault of the Linux community diluting its effective market share with too many competing APIs. Even without this aggravating factor, developers just can’t justify spending significant time and money maintaining a branch of their software for an OS that has less than 3% market share. Because of this, the Linux version of commercial software is often of much lower quality than the Windows or Mac version. A prime example is the ubiquitous and practically required Flash Player, which consistently crashes Firefox. Is it Adobe’s fault? Maybe. But when this happens to somebody, do you think they know to blame Adobe or Firefox or just Linux? Does it even matter? It’s just one more reason for them to switch back to Windows or the Mac. And for the record, why should Adobe bother to make a good version of Flash Player for a platform with no stability and few users?
The solution to all of this is easy to state, but hard to enforce. (There are downsides to Freedom.) Somehow the fractious desktop Linux community must balance the ability to innovate freely with the adoption of, and adherence to, standards. Given its low market share, Linux has to be BETTER than Windows as a development target, not just as good or worse. However, one of the problems with Linux seems to be a certain arrogance on the part of its developers, who consider applications as serving Linux, and not the other way around. An OS is only as good as the programs written for it, and perhaps the worst thing about Linux is that it hinders the development of applications by constantly presenting a moving target, and by requiring developers to spend too much time programming and testing for so many variants.
It’s downright laughable that an OS with single-digit market share would further dilute that share by maintaining two competing desktops. Yeah, I know KDE and GNOME are supposedly cooperating these days. But (a) it’s probably too late, (b) the cooperation is far from seamless, and (c) even if you disagree that it dilutes effective market share, it still dilutes the development effort to maintain two desktop code bases. For God’s sake, somebody kill KDE and make GNOME suck less! Yeah, I know that’s never going to happen. That’s why the title of this essay is what it is.
For all Microsoft gets wrong, it understands one crucial thing: developers are the primary customer of an operating system. Microsoft may advertise to consumers, but it knows that at the end of the day it is developers whom it serves. The Linux community just doesn’t get this.
Unfortunately, I don’t have much hope of desktop Linux ever becoming sufficiently standardized. If the focus shifts to making Linux friendly to applications and their developers, the distributions must become so standardized as to be effectively consolidated, and the desktop frameworks so static as to negate much of their Open Source character. For Linux to become more developer-friendly, it would have to essentially become a normal operating system with a really weird economic model.
OS development actually requires physical resources. The FOSS movement is based on the idea that information is cheap to distribute, and thus developing it is a tremendous leverage of human capital. People write some software, and the world perpetually benefits at little marginal cost. That works beautifully for applications, but OS development, especially desktop OS development, requires tremendous continuous resources to do correctly. For one, you need tons of machines of different configurations and vintages on which to test, which costs money. And you need a large team of people to run all those tests and maintain the machines. Any respectable software company dedicates a tremendous amount of its budget to QA, fixing stupid little bugs that happen to come up on obscure hardware configurations. Linux just can’t achieve the quality control of a commercial OS. That’s probably why, when I “upgraded” from Gutsy to Hardy, my machine no longer completes a restart on its own. Maybe this will get fixed when the planets align and somebody with the same motherboard as mine, who also knows how the hell to debug this, runs into the same problem. But I’m starting to get weary of this, and apparently I’m not alone, judging by the declining desktop Linux market share.