I should’ve known better. I wrote a post a few days ago detailing my frustration with Linux, and suggested (admittedly in very indelicate terms) that the global effort to develop Linux into an alternative to general-use desktop OSes such as Windows and OS X was a waste of resources. I have absolutely no idea how 400 people (most of them apparently angry Linux fans, judging by the comments) managed to find their way to the article within hours of my posting it. I think they must have a phone tree or something. Nonetheless, I should’ve been more diplomatic. So, as penance, I will here attempt to write a more reasonable post better explaining my skepticism of desktop Linux, and will even try to offer some constructive suggestions. I’m sure this post will get no comments, in keeping with the universal rule of the Internet that the amount of attention a post receives is inversely proportional to the thought that went into it.
Before starting, let’s just stipulate something supported by the facts of the marketplace: desktop Linux has been a miserable failure in the OS market. If you’ve consumed so much of the purple Kool-Aid prose of the desktop Linux community that you can’t accept that, you might as well quit reading now. Every year for the past decade or so has been proclaimed “The Year Linux Takes Off.” Yet at this point its market share is actually going down.
As I pointed out (perhaps a bit rudely) in my first post, this isn’t just a bad performance, it’s a tragic waste of energy. Can you imagine the good that could’ve been done for the world if these legions of programmers hadn’t spent over a decade applying their expertise (often for free) to a failure like desktop Linux? For one, they could’ve made a lot of money with that time and donated it to their favorite charities, assuming they were as hell-bent on not making money as they appear to have been. And two, it might have been nice to see what useful things they would’ve produced had they done something somebody was actually willing to pay for, as opposed to trying to ram desktop Linux down the collective throat of the world. You know, sometimes the evil capitalistic market does useful things, like keeping people from wasting their time.
Open Source community projects put innovation and scale at odds. If an Open Source project is to be large, it must rely on the input of a huge distributed network of individuals and businesses. How can a coherent vision arise for the project in such a situation? The vacuum left by having no centralized vision is usually filled by the safe and bland decision to just copy existing work. Thus, most large-scale Open Source efforts are aimed at offering an open alternative to something, like Office or Windows, because no vision is required, just a common model to follow. This is not to say that innovation is not found in the Open Source community, but it is usually on the smaller scale of single applications, like Emacs or WordPress, that can grow from the initial seed of a small group’s efforts. The Linux kernel is a thing of beauty, and is actually a small, self-contained project. But the larger distribution of a desktop OS is another matter, and here we find mostly derivative efforts.
An OS is only as good as the software written for it. One of the great things about Open Source is the tremendous power of being able to take an existing project and spawn off a new one that fixes a few things you didn’t like. While this is fine for an application, it’s problematic for a piece of infrastructure software expected to serve as a reliable, standard substrate for other software. Any Linux application requiring low-level access to the OS will have to be produced in numerous versions to match all the possible distros and their various revisions. See OpenAFS for an example of how ridiculously messy this can get. For apps, do you support GNOME or KDE or both, or just go, as many do, for the lowest common denominator? And supporting hardware-accelerated 3D graphics or video is the very definition of a moving target. There are multiple competing sound systems, none of which is nearly as clean or simple as what’s available on Windows or the Mac. The result is usually a substandard product relative to what can be done on a more standardized operating system. Compare the Linux version of Google Earth or Skype to the Windows version of the same to see what I’m talking about. (That is, if you can even get them working at all with your graphics and sound configuration.)
Pointing the finger at developers doesn’t solve the problem. To begin with, for the reasons I’m explaining in this essay, some of the poor quality of Linux software is the fault of the Linux community diluting its effective market share with too many competing APIs. Even without this aggravating factor, developers just can’t justify spending significant time and money maintaining a branch of their software for an OS that has less than 3% market share. Because of this, the Linux version of commercial software is often of much lower quality than the Windows or Mac version. A prime example of this is the ubiquitous and required Flash Player. It consistently crashes Firefox. Is it Adobe’s fault? Maybe. But when this happens to somebody, do you think they know whether to blame Adobe or Firefox or just Linux? Does it even matter? It’s just one more reason for them to switch back to Windows or the Mac. And for the record, why should Adobe bother to make a good version of Flash Player for a platform with no stability and few users?
The solution to all of this is easy to state but hard to enforce. (There are downsides to Freedom.) Somehow the fractious desktop Linux community must balance the ability to innovate freely with the adoption of, and adherence to, standards. Given its low market share, Linux has to be BETTER than Windows as a development target, not just as good or worse. However, one of the problems with Linux seems to be a certain arrogance on the part of its developers, who consider applications as serving Linux, and not the other way around. An OS is only as good as the programs written for it, and perhaps the worst thing about Linux is that it hinders the development of applications by constantly presenting a moving target, and by requiring developers to spend too much time programming and testing for so many variants.
It’s downright laughable that an OS with single-digit market share would further dilute that share by having two competing desktops. Yeah, I know KDE and GNOME are supposedly cooperating these days. But (a) it’s probably too late, (b) the cooperation is far from complete, and (c) even if you disagree that it dilutes effective market share, it still dilutes the development effort to maintain two desktop code bases. For God’s sake, somebody kill KDE and make GNOME suck less! Yeah, I know that’s never going to happen. That’s why the title of this essay is what it is.
For all MS gets wrong, they do understand one crucial thing: developers are the primary customers of an operating system. They may advertise to consumers, but they know that at the end of the day it is developers whom they serve. The Linux community just doesn’t get this.
Unfortunately, I don’t have much hope of desktop Linux ever becoming sufficiently standardized. If the focus shifts to making Linux friendly to applications and their developers, the distributions must become so standardized as to be effectively consolidated, and the desktop frameworks so static as to negate much of their Open Source character. For Linux to become more developer-friendly, it would have to essentially become a normal operating system with a really weird economic model.
OS development actually requires physical resources. The FOSS movement is based on the idea that information is cheap to distribute, and thus it provides tremendous leverage of the human capital used to develop it. People write some software, and the world perpetually benefits at little marginal cost. That works beautifully for applications, but OS development, especially desktop OS development, requires tremendous continuous resources to do correctly. For one, you need tons of machines of different configurations and vintages on which to test, which costs money. And you need a large team of people to run all those tests and maintain the machines. Any respectable software company dedicates a tremendous amount of its budget to QA, fixing stupid little bugs that happen to come up on obscure hardware configurations. Linux just can’t achieve the quality control of a commercial OS. And that’s probably why, when I “upgraded” from Gutsy to Hardy, my machine no longer completes a restart on its own. Maybe this will get fixed when the planets align and somebody with the same motherboard as mine who also knows how the hell to debug this runs into the same problem, but I’m starting to get weary of this, and apparently I’m not alone, judging by the declining desktop Linux market share.
The lack of control of open source software makes Linux vulnerable to the weakest link. One of the biggest criticisms of my previous post was that I shouldn’t judge Linux by Ubuntu. Fine. But if we are to take the measure of each distribution as a product unto itself, then so must we measure their market shares as such. It is unfair to speak of the “adoption of desktop Linux” in sweeping terms, and then, whenever something goes wrong with one distro, to conveniently speak of that distribution as not representative.
If one desktop Linux distribution gets bad press, it taints the entire community. This may seem unfair, but it’s the bed one makes when one decides to avoid writing the majority of one’s own software. You can’t expect people to spend effort figuring out whether something that goes wrong with Red Hat is really due to something specific to its implementation or something endemic to Linux.
Operating systems gain little advantage from the free-as-in-beer aspect of Linux. An operating system is, in some ways, one of the best bargains in software. The ubiquity of the need for OS software makes it highly efficient to develop, with the overhead costs negligible on a per-unit basis. For their scope and breadth, paying about $100 for Windows or OS X is quite a bargain, and the cost is easily justified if either saves you even a minute of time each day versus a free alternative. Thus, demand in the OS market is naturally highly inelastic with respect to price; being free is a negligible advantage for a product which you use so often and which has such an impact on your life.
So, does Linux save people time, or does it just offer the illusion of saving money? In the context of individual users, I’d argue it offers no advantage for most people to switch. Again, the biggest determinant is the difference in the quality of applications. In my experience, applications written for Windows and Mac OS X outperform those written for Linux, both in terms of robustness and usability. Maybe this is due to differing levels of effort aimed at each platform, or maybe it’s due to inherent difficulties in Linux as a development platform, as mentioned in the previous section. Either way, money is not the issue for anybody with any serious use for their computer.
The most compelling proof of the above is the fact that while desktop Linux has never seen widespread adoption (and may even be moving backwards), Linux as a server platform has been a resounding success. Applications such as Apache and PHP are fundamentally good old-fashioned UNIX applications, and do not suffer from the death of a thousand APIs or any of the other mistakes made by the desktop Linux community. Furthermore, this is an arena where the scales of operation make the free-as-in-beer aspect of Linux actually worthwhile, as opposed to a fool’s coupon. The market is certainly backing this up. In 2003, they were calling for Linux to reach 20% of the desktop market by 2008, up from 3%. It’s now 2008, and Linux is still less than 3% of the desktop market (but it’s serving more web pages than any other OS).
Community Open Source projects are most successful, and most needed, when they address a niche software need, where overhead development costs drive prices up. A good example of this is Adobe Illustrator. Inkscape and similar programs don’t have to be nearly as good as Illustrator to make sense for average users, given the tremendous cost of Adobe products and the relative infrequency of their use by the average person. But just as a professional artist would be foolish to “save” money by going with Inkscape, a professional of any type is foolish to switch to Linux purely for the cost savings. One hour on the phone to tech support will erase the advantage. It is only because Windows is so horribly engineered that people can even begin to make the case that Linux is comparable in usability to Windows. Which brings me to my next point…
Linux won’t always be able to rely on the dominant OS being terrible. Offering something comparable in usability to Windows is nothing to be proud of, and simply sucking less than Microsoft may be easy, but it’s not a sustainable business model. Eventually, either Microsoft will get its act together, or far more likely, Apple will finally achieve market share critical mass and developers will flock to it. Apple has been making great strides, and in my opinion, has been improving OS X at a much faster rate than Linux has been improving. As they grow their market share, I expect this to accelerate. At some point, the Linux community will have to offer something more than ideas largely taken from Windows and OS X, and provide motivation to switch other than animosity for Microsoft.
The desktop Linux community lacks the ability to undergo significant rewrites. When Apple decided to use OpenStep as the basis for OS X, they saw fit to make a lot of changes to its Unix underpinnings, including a complete rewrite of the graphics model. Vista involves a complete retooling of the Windows driver model, allowing things like graphics driver installations without a restart. Apple and MS have the resources and organization to handle such labor intensive clean slate approaches. In my opinion, the Linux community just doesn’t have the ability to handle projects of such scale, at least not within a time frame even remotely competitive with Apple or MS.
When you’ve got labor divided as finely as it is in the Linux community, the general modus operandi tends to be patching and mods. However, there is only so much that can be done with X11. I think people are so impressed with what has been done with the 20-year-old graphics system that they forget to be embarrassed that you can’t even make trivial changes to the graphics configuration without requiring a restart of the X server. If you want proof of this, look no further than the 50+ pages of discussion on the Ubuntu forums on how to get dual monitors working.
The truth is that Linux is impressive, but it’s not as recent an accomplishment as is often claimed. The majority of what is usually called Linux was already there in the form of GNU and X11 (hence the FSF’s hopeless insistence that everybody call it GNU/Linux), which have been under development in one way or another for around 20 years. But having been given a quick head start by usurping the GNU project (the fair perils of Open Source, which must cause RMS no end of internal conflict), progress will not be so quick in the future. While MS and Apple were able to start from scratch on their graphics systems, Linux struggles to get compositing graphics to work reliably. This is completely understandable. I’m frankly amazed at what they’ve been able to do with extensions to X. But there is no extra credit given for effort, and Linux is falling further and further behind on the desktop as the task of grafting modern features onto an aging architecture gets harder and harder. Worse, but predictably, what people do work on is the fun stuff like compiz, and not the boring, unsexy infrastructure.
• • •
My suggestions. If you’ve read this far, you might as well hear my harebrained solutions to the problems outlined above. Most of them deal with making Linux a better platform for developers.
- Take a hint from Apple and don’t try to work on every possible system configuration. Agree as a community on a small list of supported hardware. Maybe, if you standardize on something, a single hardware vendor might be induced to give a damn and write decent drivers. I’d suggest that the Linux community adopt Nvidia cards and completely forget ATI.
- Focus more on high quality development tools and standardize the multimedia APIs. Be ruthless. If OSS-supporting apps have to die, let them die! Pick one goddam printing interface. If stuff breaks or has to be rewritten, so be it. Never forget that applications determine the strength of an OS more than anything else. Worry more about developer convenience than user convenience. When was the last time you heard somebody say they used Windows because they liked it? They use Windows because applications X, Y, and Z work better on it, or are only available on it.
- Get all distros to agree on a standard release schedule such that all distros use the same kernel and GNOME/KDE release. Err on the side of longer release times and stable ABIs.
- Standardize on one package management scheme. This will allow developers to distribute software more easily, instead of having to maintain 7 different downloads on their distribution page.
- If all else fails: focus on the core Unix parts (user-space GNU tools and kernel) that Linux gets right and let somebody develop a closed-source desktop on top of it. Yes, you heard me right. If you change the license of the Linux kernel and use the GNU userland tools as add-ins, I think it might be possible. I’d love to hear people’s opinions on this, especially the legal issues, but my feeling is this is the only way you’re going to get a decent desktop out of Linux. It will take a single company with a profit motive to get Linux to work on the desktop, just as it took Apple to make BSD/OpenStep work for the masses. Control by a single company would address the stable-platform issue, as well as get done some of the nastier but needed tasks that I mentioned earlier.