Note: If you are a member of the Orthodox Church of Linux and you suffer from high blood pressure, you might want to consult a physician before reading this. In fact, you may just want to skip to my follow up article, which presents my criticisms of Linux in a much more explanatory form.
I’m a sucker for a good story, and that Linux certainly is: millions of programmers working out of the sheer goodness of their hearts on a project to benefit humanity by providing a free operating system. Never mind that commercial operating systems only cost about $100 anyway, and represent less than 10% of the cost of a new computer. Microsoft just makes us all so angry that if we have to spend billions of person-hours so that we can all save $100 every few years, so be it. Time well spent.
So, it’s with heady optimism and hope for the future that once a year I anxiously download and install the latest consumer desktop incarnation of Linux, my eyes watering with the promise of life without Microsoft. For the past six years, I have installed Linux at some point during the year with the hope of never having to go back. And for the past six years I have used Linux for a week or so, only to inevitably capitulate after tiring of all the little things that go wrong and that require hours of searching the web for the right incantation to type in /etc/screwme.conf. While every year it gets a little more reliable, I am always guiltily relieved to finally get back to Windows, where there are no xorg.conf files to get corrupted or fstab files to edit.
This year, I decided to try Ubuntu 7.10. Given the hype, I had very high hopes. It installed without a hitch, and came up working fine for the most part. Just a small problem with the screen resolution being off and my second monitor not being recognized. I thought, “That should be easy to take care of. This could be the year!”
The problem with Linux is that for all the pretty shine put on it with mods to X11 and desktop frameworks like GNOME, nobody has the resources to wipe the slate clean and scrap what needs to be scrapped. Thus, even though it’s 2008, we’re still stuck with the X Window System, which was designed in the 1980s, well before anybody had any idea we’d be doing desktop compositing on GPUs, let alone across multiple monitors. Despite the pretty GUIs for configuring it, the screen settings are applied by having the GUI parse and then edit a text file that was meant to be written by humans, in essentially the same format used twenty years ago when X11 was written. Not surprisingly, that file tends to get corrupted, and that’s exactly what happened to me this year. After I used the GUI to turn on my second monitor and correct the resolution, things went to hell when I restarted, and they just kept going downhill. The second monitor would not work, and I could not get direct rendering to turn on, leaving me with sluggish graphics performance. Worse, when the graphics card driver was upgraded automatically, things finally got so bad that all Ubuntu could do was operate in VGA mode. Looking online, this happens to a lot of people, and everybody has a different opinion on what to do about it. Mine is to simply say the hell with this shit.
Now, if you ever dare complain about this kind of thing, inevitably some pasty-faced, ponytailed druid in an O’Reilly t-shirt will crawl out of his mother’s basement and patronizingly scold you: “It works for me! You just have to edit the X11 mod section of /etc/X11/xorg.conf and install the xgl package if you’re going to be using this on an ATI card with the proprietary drivers.” What Linux supporters (I among them*) often fail to grasp, however, is that most people see computers as a means, not an end. Just because a system problem has a solution doesn’t mean that the obscurity of that solution shouldn’t be counted against the merits of that system. In general, people do not enjoy computers as objects of inherent interest, and that goes even for many scientists and engineers, most of whom have better things to learn than obscure Unix configuration file formats and workarounds.
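For the uninitiated, here is a rough sketch of the sort of incantation being prescribed. The section names, identifiers, and resolutions below are made up for illustration; the real stanzas vary by graphics driver and hardware, which is precisely the problem:

```
# Hypothetical fragment of /etc/X11/xorg.conf (identifiers and
# modes are examples only; actual values depend on your hardware).
Section "Monitor"
    Identifier  "Monitor1"
    Option      "RightOf" "Monitor0"   # position the second display
EndSection

Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Modes    "1680x1050" "1280x1024"
    EndSubSection
EndSection
```

Get one identifier wrong, or let a GUI tool and a driver upgrade disagree about what belongs in this file, and you are back to VGA mode.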
So, this year’s experiment ended in disappointment even quicker than usual. Maybe I’ll move to a biennial schedule. I’m sure many people will say that I was too impatient. And I was impatient, but certainly not too much so if I am to consider my time worth even $25 an hour. I hope that those who develop desktop Linux enjoy the process, because it is hard to justify otherwise. The amount of time that has been spent developing something that offers negligible operational advantage over existing options (Mac OS X or Windows) cannot possibly represent a good investment for society. Linux will always have a place for hobbyists and as a Unix replacement, but it will never be good enough as a desktop. The economies of scale are just too great for Microsoft and Apple. Since their volumes are so high, their operating systems are actually incredibly affordable given the impact they can have on a person’s productivity. Ironically, nobody should have a better grasp of the tremendous efficiencies of proprietary operating system development than those who labor pointlessly to provide a free alternative to it. In other words, anything that can be done for free probably isn’t worth doing for free, because somebody else can do it for cheap. The world already has two mediocre consumer operating systems that cause people no end of grief. Do we really need to spend the time making a third?
*I’m referring to Linux as a UNIX replacement (i.e., for servers and engineering workstations), not desktop Linux aimed at the general public.