Category Archives: Technology

How to quit your job with an iPhone

In ten easy steps (plus an optional eleventh):

  1. Purchase an ‘R’-rated movie from the iTunes Store.
  2. Pause movie during particularly noisy sex scene, preferably one involving farm animals.
  3. Turn off phone.
  4. Go to important meeting with your entire staff, including boss.
  5. Turn off ringer using mute switch and put phone to sleep.
  6. Think you’re safe.
  7. Innocently pull the headphones out of the jack, momentarily shorting the badly designed mechanism in the jack that senses the remote’s play switch.
  8. Remain blissfully unaware that you’ve just set a chain reaction in motion which will destroy your career.
  9. Sit in stunned horror a few seconds later as sounds of impure love emanate loudly from your shirt pocket in the middle of your boss’s presentation, despite the silent switch being on.
  10. Run back to office in tears to polish up resume.
  11. (Optional.) Write Steve Jobs an angry letter about how good design includes more than just polished metal.

RIP American Broadcast Television, 1939–2009

The end of broadcast television was always going to happen eventually, but thanks to the FCC and the vagaries of a little-known modulation format known as 8-VSB, it may happen before the end of this decade. In February 2009, analog television signals will be shut down across America. The signals, which have been on the air in some form or another for the past 70 years and which brought the whole country together to watch fuzzy images of the moon landings, the Nixon “Checkers” speech, and the first shuttle launch, will suddenly stop. In their place will be digital high-definition broadcasts. If you can receive them.

Despite the hype about digital, there are some very nice things about analog, most notably its graceful failure in the presence of noise. If interference degrades your signal, or a bus drives by your house and makes the received power fluctuate, an analog picture will just get a bit fuzzy, or maybe you’ll see a ghost image. Either way, you’ll still comprehend what’s going on, and it will be a minor annoyance. In a sense, the beauty of analog transmission is that it leverages the massive ability of the human brain to decipher noisy inputs, making the world’s best signal processor (you!) part of the transmission system. Digital television, however, is all or nothing. Either you get a perfect signal, or you get a black screen. And I predict that it will be the latter for a surprising number of people.

The modulation format chosen for our brave new world of digital television was decided not so much by engineers as by committees of bureaucrats. Their choice, 8-VSB, has two big positives going for it. One, it is power efficient, allowing transmitters to use less energy. And two, perhaps the biggest reason it was chosen, it relies on patents owned by Zenith, an American company. The downside, and the reason the Europeans don’t use it, is that 8-VSB is very susceptible to something called multipath interference (MPI). Simply put, MPI happens when there are things—like mountains or buildings, or even buses—off of which the signal can reflect en route to your TV. 8-VSB is especially bad in changing environments, such as when people walk around a room or a plane flies over your house.

The origin of this sensitivity is the tremendous amount of data that must be squeezed into the 6 MHz channel spacing used by the existing analog channels. The full data rate of a digital TV signal is almost 20 Mb/s, an astounding amount of data to send over the air, and even more astounding when you consider it has to fit in the same 6 MHz channels used since the early days of television. To accomplish this feat, the information is spread across the full 6 MHz with essentially no frequency diversity; to be power efficient, 8-VSB uses only a single “carrier.” However, when multiple transmission paths interfere, certain parts of the spectrum (certain frequencies) will destructively interfere, causing a loss of signal at those frequencies. This frequency-dependent attenuation has to be removed by equalization before the whole signal can be reliably deciphered. As you can imagine, doing this while the required equalization is constantly changing is very difficult; in practice, the screen goes black while the receiver tries to figure out what to do. The Europeans, to their credit, use a modulation scheme called COFDM that is akin to having thousands of independent radio stations, each with a tiny bandwidth, transmitting simultaneously, each carrying a simpler piece of the more complicated signal. This is much less susceptible to interference, since each narrow subchannel only has to be equalized relative to itself, and very little frequency-dependent distortion can occur across such a small bandwidth. (There was actually quite a bit of “drama” over our choice of schemes, as partly detailed in this article.)
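To make the trouble concrete, here is a minimal Python sketch (not from the original post; the echo delay and strength are made-up illustrative values) showing how a single reflected copy of the signal carves periodic nulls across the 6 MHz channel, along with the spectral efficiency implied by the numbers above.

```python
# Minimal sketch of why multipath hurts a single-carrier signal like 8-VSB:
# a delayed copy of the signal creates deep, regularly spaced nulls across
# the 6 MHz channel. The echo delay and gain below are made-up values.
import numpy as np

CHANNEL_BW_HZ = 6e6          # ATSC channel spacing
PAYLOAD_RATE_BPS = 19.39e6   # approximate ATSC payload data rate

# Hypothetical two-path channel: direct path plus one echo.
echo_delay_s = 2e-6          # e.g. a reflection off a building a few hundred meters away
echo_gain = 0.6              # echo at 60% of the direct path's amplitude

f = np.linspace(0, CHANNEL_BW_HZ, 1000)                       # frequencies within the channel
H = 1 + echo_gain * np.exp(-2j * np.pi * f * echo_delay_s)    # two-path frequency response
attenuation_db = 20 * np.log10(np.abs(H))

print(f"Spectral efficiency: {PAYLOAD_RATE_BPS / CHANNEL_BW_HZ:.1f} bits/s/Hz")
print(f"Deepest fade in channel: {attenuation_db.min():.1f} dB")
print(f"Nulls spaced every {1 / echo_delay_s / 1e6:.1f} MHz")
```

Even this modest echo fades parts of the channel by several dB, and the null pattern shifts whenever the reflector (or you, or the bus) moves, which is exactly what the equalizer has to chase in real time.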

My skepticism about the switch to digital comes from forced experience with it, as I’m not allowed to have cable where I live. Most Boston stations have been broadcasting in digital for some time (in addition to their analog signals), and I can attest that it’s very difficult to get good reception where we are in Cambridge, due to all the buildings. We are a scant 5–7 miles from the transmitters, very close by most standards, and yet our digital signals are flaky and will go out completely if a bus drives by or a person walks past our window. We’ve become conditioned to expect the picture to go black whenever we hear a bus approach. And this is with a top-of-the-line antenna and a TV with a fifth-generation receiver chip.

The reason I’m writing about all of this is not to complain for the sake of complaining. That is something I would never do! Nor do I think it would be the end of the world if we can’t all watch Knitting with the Stars or Survivor XXIV: Staten Island. I decided to write about this because the switch to digital could be such a disaster that it causes an interesting shift in our nation’s media consumption. It could also be a lucrative shift if you invest in companies that stand to benefit, like Comcast or DirecTV.

In terms of social impact, the (few) people who were on the edge regarding TV, who maybe kept an old analog TV around for news or the occasional football game, will probably give up broadcast TV entirely and switch to Internet-based media. However, tens of millions of others will be driven to cable or satellite TV, many simply out of confusion over what the switch means. (A majority of people surveyed thought they would have to buy a new TV once the switch occurs.) A high proportion of people still using over-the-air (OTA) TV are poor—predominantly living in urban apartments, the worst possible situation for digital TV reception—and will no longer be able to receive any television. The result is that they will become even less connected to mainstream American media and news, for good or bad.

In terms of business, I think this will finally drive a stake through the local affiliate system, as the need for a local broadcast facility was one of the reasons affiliates existed to begin with. Along with them will go the curious phenomenon known as the local news broadcast. (So at least one good thing will come of it.) National television networks will still exist, but unencumbered by contracts with local affiliates, distribution will become entirely on demand through cable and satellite, and increasingly through the Internet. I would guess Apple will play a major role, offering a box (à la Apple TV) that allows on-demand access to programs with targeted (and non-optional) advertising. (Microsoft will try to do the same, offering a set-top box based on Windows, and will fail miserably.) The move to on-demand programming will bring the final demise of the network news departments. Nobody will be willing to sit through a full news broadcast on demand when there is always a sitcom to be had. We will realize, too late, that the imposed schedule of broadcast television was the last bit of social discipline we had, forcing us to at least sit through the news before we got Seinfeld at 8.

All because somebody in the government decided vestigial sideband modulation was the way to go.

In defense of Google’s Street View, and thoughts on Internet privacy

Quick Summary: Google’s Street View is simply a representation of reality on a specific day; Google has not highlighted any aspect of the dataset, and the dataset is comprehensive. Given the mapping between reality and the dataset that is inherent in something like Street View, your privacy on the day your photo was taken and your privacy in the dataset are commensurate, because your relative anonymity is the same in each. Arguments pointing out that certain people and websites can highlight compromising pictures miss the point; they are like blaming camera manufacturers for the actions of paparazzi. If a company decides to single out a certain picture of somebody on Street View on its website, that company is the party violating privacy, not Google. Google is producing an unbiased representation of reality; just as in physical reality, it is the choices and actions of others that decide whether or not privacy is violated.

Recently, Google has been driving around various metropolitan areas (including Boston) in a fleet of funky-looking cars adorned with eight cameras mounted on their roofs (see below), profligately photographing everything within view of the street every few feet and linking the resulting panoramic shots to their respective locations in Google Maps. Their eventual goal is to have virtually every building on every street in every major city photographed, such that you can click on a street and see a picture of the surroundings from that location. You’d have to be Mr. and Mrs. Boring not to think that’s cool.

Car used by Google to obtain panoramic Street View data.

Right now, the resolution is sufficient to find that bar from which you stumbled home one night but whose name eludes you, or to get a decent idea of whether or not the Lake View Retirement Home really has one. As it grows more complete, it will be a profoundly powerful dataset, and will doubtless result in all manner of unforeseen applications. This will be especially true if Google actually uses higher-resolution pictures. Do you want to see when a favorite business is open, but they don’t have a website? You could, in theory, check out the hours posted on the front of their store with sufficiently high-resolution imagery. If you’re wondering about the legal parking hours on the streets near a restaurant you’re planning to visit, you could read the parking signs across town from your computer.

Unfortunately, reactionary privacy concerns have plagued the service since its inception, and if the service survives at all, it will likely be limited to low-resolution pictures. Some of the criticism has predictably come from people who have been photographed doing things they shouldn’t, but much of the ire has come from people who simply think that a picture taken of them while they were in public shouldn’t be allowed online. And I have to admit, it gave me pause when I found our own car parked in our usual spot:

Our car, as found on Google’s Street View.

However, upon further reflection, I realized that it is unreasonable to object to this as a privacy violation, for reasons that are especially clear in this particular case. Quite literally, there is going to be a nearly one-to-one correspondence between the Google Street View dataset and the real world. Thus, while your image might be available to everybody, so are millions of other images. One might expect that the proportion of people interested in the Google picture of the specific place you were the day the Google car spotted you is very roughly the same as the proportion interested in that specific spot in real life at any given moment. Thus, for exactly the same reason you only saw a few people on the street with you at the moment the picture was taken, it’s likely only a few people are interested, at any one time, in that picture.
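To put rough numbers on that intuition, here is a toy back-of-the-envelope calculation; the dataset size and audience figures are invented purely for illustration.

```python
# Back-of-the-envelope version of the argument above, with entirely made-up
# numbers: if attention to Street View is spread roughly the way attention
# to real streets is, the expected audience for the one panorama containing
# you stays tiny.
panoramas_in_dataset = 50_000_000   # hypothetical number of Street View panoramas
viewers_at_any_moment = 200_000     # hypothetical concurrent Street View users

expected_viewers_of_yours = viewers_at_any_moment / panoramas_in_dataset
print(f"Expected simultaneous viewers of your panorama: {expected_viewers_of_yours:.4f}")
# ~0.004 -- roughly the same obscurity you enjoyed standing on that street.
```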


A tax tip for people with online investments

If you use E*TRADE or Ameritrade, be warned that they don’t provide correct cost basis information to online tax software such as TurboTax. I tell you this having just spent four hours fixing the useless data they provided. Despite being useless for this purpose, both brokerages are listed in TurboTax as providing investment tax information, so you might be tempted to import the data from them. If you do, you’ll be amused to find that you may owe more in taxes than you made that year, because the cost basis for each trade will be entered as zero and each sale will be counted as a short-term capital gain.
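Here’s a toy illustration (hypothetical trades, not real brokerage data) of why a zero cost basis is so poisonous: the entire sale proceeds of every trade get reported as a short-term gain, even on positions you sold at a loss.

```python
# Hypothetical trades: (actual cost basis, sale proceeds) in dollars.
trades = [
    (10_000, 10_500),
    (25_000, 24_000),   # actually sold at a loss
    (8_000, 8_200),
]

real_gain = sum(proceeds - basis for basis, proceeds in trades)
imported_gain = sum(proceeds - 0 for _basis, proceeds in trades)  # basis imported as zero

print(f"Actual net gain:              ${real_gain:,}")      # -300
print(f"Gain as imported (basis = 0): ${imported_gain:,}")  # 42,700
```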

If you make a lot of trades and use a brokerage that doesn’t provide cost basis information, your best bet is to download a TXF file from your brokerage website and import that into the desktop version of TurboTax. (For some reason, the online version doesn’t accept this kind of file.) You can still import the 1099-DIV and 1099-INT data from E*TRADE or Ameritrade, but make sure to disable the importing of brokerage sale (1099-B) data. Otherwise, you’re better off just entering each trade manually from your online history.

And in case you’re looking for an alternative brokerage, I can heartily recommend Fidelity. They are apparently nearly unique in managing the highly elusive technological feat of exporting correct cost basis information online.

Why Linux is failing on the desktop

I should’ve known better. I wrote a post a few days ago detailing my frustration with Linux, and suggested (admittedly in very indelicate terms) that the global effort to develop Linux into an alternative to general-use desktop OSes such as Windows and OS X was a waste of resources. I have absolutely no idea how 400 people (most of them apparently angry Linux fans, extrapolating from the comments) managed to find their way to the article within hours of my posting it. I think they must have a phone tree or something. Nonetheless, I should’ve been more diplomatic. So, as penance, I will here attempt to write a more reasonable post better explaining my skepticism of desktop Linux, and will even try to offer some constructive suggestions. I’m sure this post will get no comments, in keeping with the universal rule of the Internet that the amount of attention a post receives is inversely proportional to the thought that went into it.

Before starting, let’s just stipulate something supported by the facts of the marketplace: desktop Linux has been a miserable failure in the OS market. If you’ve consumed so much of the purple Kool-Aid prose of the desktop Linux community that you can’t accept that, you might as well quit reading now. Every year for the past decade or so has been declared “The Year Linux Takes Off.” Except it’s actually going down in market share at this point.

As I pointed out (perhaps a bit rudely) in my first post, this isn’t just a bad performance, it’s a tragic waste of energy. Can you imagine the good that could’ve been done for the world if these legions of programmers hadn’t spent over a decade applying their expertise (often for free) to a failure like desktop Linux? For one, they could’ve made a lot of money with that time and donated it to their favorite charities, assuming they were as hell-bent on not making money as they appear to have been. And two, it might have been nice to see what useful things they would’ve produced had they worked on something somebody was actually willing to pay for, as opposed to trying to ram desktop Linux down the collective throat of the world. You know, sometimes the evil capitalist market does useful things, like keeping people from wasting their time.

Open Source community projects put innovation and scale at odds. If an Open Source project is to be large, it must rely on the input of a huge distributed network of individuals and businesses. How can a coherent vision arise for the project in such a situation? The vacuum left by having no centralized vision is usually filled by the safe and bland decision to just copy existing work. Thus, most large-scale Open Source efforts are aimed at offering an open alternative to something, like Office or Windows, because no vision is required, just a common model to follow. This is not to say that innovation is not found in the Open Source community, but it is usually on the smaller scale of single applications, like Emacs or WordPress, that can grow from the initial seed of a small group’s efforts. The Linux kernel is a thing of beauty, and is actually a small, self-contained project. But the larger distribution of a desktop OS is another matter, and here we find mostly derivative efforts.

An OS is only as good as the software written for it. One of the great things about Open Source is the tremendous power in being able to take an existing project and spawn off a new one that fixes a few things you didn’t like. While this is fine for an application, it’s problematic for a piece of infrastructure software expected to serve as a reliable, standard substrate for other software. Any Linux application requiring low-level access to the OS has to be produced in numerous versions to match all the possible distros and their various revisions. See OpenAFS for an example of how ridiculously messy this can get. For apps, do you support GNOME or KDE, or both, or just go, as many do, for the lowest common denominator? Supporting hardware-accelerated 3D graphics or video is the very definition of a moving target. There are multiple competing sound systems, none of which is nearly as clean or simple as what’s available on Windows or the Mac. The result is usually a substandard product relative to what can be done on a more standardized operating system. Compare the Linux version of Google Earth or Skype to the Windows version of the same to see what I’m talking about. (That is, if you can even get them working at all with your graphics and sound configuration.)
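As a rough illustration of what that fragmentation means in practice, here is a minimal, hypothetical sketch of the kind of runtime probing a cross-distro application of that era might do just to pick a sound backend. The detection heuristics are deliberately simplified; a real application would need far more care (and would still get it wrong on somebody’s distro).

```python
# Simplified, hypothetical probe for which of the era's competing Linux
# sound systems is present -- a decision Windows and Mac apps largely
# never had to make.
import os
import shutil

def detect_sound_backend() -> str:
    if shutil.which("pactl"):       # PulseAudio tools present
        return "PulseAudio"
    if shutil.which("artsd"):       # KDE's aRts sound server
        return "aRts"
    if shutil.which("esd"):         # GNOME's Enlightened Sound Daemon
        return "ESD"
    if os.path.exists("/dev/snd"):  # raw ALSA device nodes
        return "ALSA"
    if os.path.exists("/dev/dsp"):  # legacy OSS device node
        return "OSS"
    return "none found"

print(f"Would target: {detect_sound_backend()}")
```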

Pointing the finger at developers doesn’t solve the problem. To begin with, for the reasons I’m explaining in this essay, some of the poor quality of Linux software is the fault of the Linux community diluting its effective market share with too many competing APIs. But even without this aggravating factor, developers just can’t justify spending significant time and money maintaining a branch of their software for an OS that has less than 3% market share. Because of this, the Linux version of commercial software is often of much lower quality than the Windows or Mac version. A prime example is the ubiquitous and practically required Flash Player. It consistently crashes Firefox. Is it Adobe’s fault? Maybe. But when this happens to somebody, do you think they know whether to blame Adobe, Firefox, or just Linux? Does it even matter? It’s just one more reason for them to switch back to Windows or the Mac. And for the record, why should Adobe bother to make a good version of Flash Player for a platform with no stability and few users?

The solution to all of this is easy to state but hard to enforce. (There are downsides to Freedom.) Somehow the fractious desktop Linux community must balance the ability to innovate freely with the adoption of, and adherence to, standards. Given its low market share, Linux has to be BETTER than Windows as a development target, not just as good or worse. However, one of the problems with Linux seems to be a certain arrogance on the part of its developers, who consider applications as serving Linux, and not the other way around. An OS is only as good as the programs written for it, and perhaps the worst thing about Linux is that it hinders the development of applications by constantly presenting a moving target and requiring developers to spend too much time programming and testing for so many variants.

It’s downright laughable that an OS with single-digit market share would further dilute that share by having two competing desktops. Yes, I know KDE and GNOME are supposedly cooperating these days. But (a) it’s probably too late, (b) the cooperation isn’t perfect, and (c) even if you disagree that it dilutes effective market share, it still dilutes the development effort to maintain two desktop code bases. For God’s sake, somebody kill KDE and make GNOME suck less! Yeah, I know that’s never going to happen. That’s why the title of this essay is what it is.

Microsoft, for all it gets wrong, understands one crucial thing: developers are the primary customer of an operating system. They may advertise to consumers, but they know that at the end of the day it is developers whom they serve. The Linux community just doesn’t get this.

Unfortunately, I don’t have much hope of desktop Linux ever becoming sufficiently standardized. For Linux to become friendly to applications and their developers, the distributions would have to become so standardized as to be effectively consolidated, and the desktop frameworks so static as to negate much of their Open Source character. In other words, to become more developer friendly, Linux would essentially have to become a normal operating system with a really weird economic model.

OS development actually requires physical resources. The FOSS movement is based on the idea that information is cheap to distribute, so the human capital spent writing software is leveraged tremendously: people write some software, and the world perpetually benefits at little marginal cost. That works beautifully for applications, but OS development, especially desktop OS development, requires tremendous continuous resources to do correctly. For one, you need tons of machines of different configurations and vintages on which to test, which costs money. And you need a large team of people to run all those tests and maintain the machines. Any respectable software company dedicates a tremendous portion of its budget to QA, fixing stupid little bugs that happen to come up on obscure hardware configurations. Linux just can’t achieve the quality control of a commercial OS. And that’s probably why, when I “upgraded” from Gutsy to Hardy, my machine no longer completes a restart on its own. Maybe this will get fixed when the planets align and somebody with the same motherboard as mine, who also knows how the hell to debug this, runs into the same problem, but I’m starting to get weary of it, and apparently I’m not alone, judging by the declining desktop Linux market share.


The results of my annual desktop Linux survey are in: It still sucks!

Note: If you are a member of the Orthodox Church of Linux and you suffer from high blood pressure, you might want to consult a physician before reading this. In fact, you may just want to skip to my follow-up article, which presents my criticisms of Linux in a much more explanatory form.

I’m a sucker for a good story, and Linux certainly is one: millions of programmers working out of the sheer goodness of their hearts on a project to benefit humanity by providing a free operating system. Never mind that commercial operating systems only cost about $100 anyway and represent less than 10% of the cost of a new computer. Microsoft just makes us all so angry that if we have to spend billions of person-hours so that we can all save $100 every few years, so be it. Time well spent.

So, it’s with heady optimism and hope for the future that once a year I anxiously download and install the latest consumer desktop incarnation of Linux, my eyes watering with the promise of life without Microsoft. For the past six years, I have installed Linux at some point during the year with the hope of never having to go back. And for the past six years I have used Linux for a week or so, only to inevitably capitulate after tiring of all the little things that go wrong and which require hours searching on the web for the right incantation to type in /etc/screwme.conf. While every year it gets a little bit more reliable, I am always guiltily relieved to finally get back to Windows, where there are no xorg.conf files to get corrupted or fstab files to edit.

This year, I decided to try Ubuntu 7.10. Given the hype, I had very high hopes. It installed without a hitch, and came up working fine for the most part. Just a small problem with the screen resolution being off and my second monitor not being recognized. I thought, “That should be easy to take care of. This could be the year!”


Vintage technology: 757 flight deck

A picture of the flight deck of a 757 we got to play with (on the ground) after a class I took on cockpit automation. (Click on the picture for a larger version.) The 757 was developed in the late 70s, and its first delivery, in 1982, went to Eastern Airlines. (Remember them?)

I’m not certain, but I believe this was the last Boeing flight deck to use CRT displays, as opposed to LCDs. The reason this is worth mentioning is that it’s one of those examples of technology going backwards. In these CRT displays, the symbols (e.g., the engine arcs in the middle top display) are drawn not as rows and columns of dots, but the way a person might draw them. For example, a circle is made by scanning the electron beam in a circle, creating a gorgeous, bright, perfect circle. Each letter is written by tracing the beam along the outline of the letter, as if writing it longhand. No pixels! It takes a rather large computer to handle all of this, located deep in the belly of the airplane. Despite being “antiquated” technology, the displays are utterly striking and unlike anything you see today. LCDs may be cheaper, but there’s something about CRTs, especially vector-based ones, that is a pity to see go.
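To make the distinction concrete, here is a small, purely illustrative Python sketch (nothing to do with Boeing’s actual symbol generator) contrasting the continuous X/Y deflection samples a vector CRT would use to trace a circle with the stair-stepped bitmap a raster display lights up.

```python
# Vector-style drawing: the X/Y deflection signals trace the shape itself,
# so a circle is a continuous curve rather than a grid of on/off dots.
# Purely illustrative; not Boeing's actual symbol-generator logic.
import numpy as np

def circle_beam_path(radius: float, n_points: int = 360):
    """X/Y deflection samples that sweep the beam once around a circle."""
    t = np.linspace(0, 2 * np.pi, n_points)
    return radius * np.cos(t), radius * np.sin(t)

def circle_raster(radius: float, grid: int = 32):
    """The same circle as a raster bitmap: on/off dots on a fixed grid."""
    y, x = np.mgrid[-grid:grid, -grid:grid]
    ring = np.abs(np.hypot(x, y) - radius)
    return ring < 0.7   # boolean bitmap; the edges are inevitably stair-stepped

x, y = circle_beam_path(radius=20)
bitmap = circle_raster(radius=20)
print(f"Vector trace: {len(x)} smooth deflection samples")
print(f"Raster: {bitmap.sum()} discrete pixels lit on a {bitmap.shape[0]}x{bitmap.shape[1]} grid")
```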

Similarly, in some ways nothing can replace the efficacy of analog gauges. On the 757 there are still “steam gauges” showing speed and altitude to the left and right of the attitude indicator CRT (the blue-and-brown ball). They are easy to read, sharp, and wholly independent. I found that when flying the simulator, I tended to use them over the more central digital speed and altitude “tapes” on either side of the attitude indicator. Integrated LCD panels are cheaper, yes, but I don’t think anybody will ever make one that improves upon the immediate readability of a steam-gauge instrument. You can see the approximate angle and rate of change of a dial out of the corner of your eye, but reading a digital number requires eye movement and a bit of mental processing, especially if the numbers are changing rapidly or you’re in turbulence. Two steps forward…