The Big Switch

The Big Switch, by Nicholas Carr, is published by W. W. Norton (January 2008). Quotes and page numbers are from an advance copy.

Unlike most technology commentators, Nicholas Carr knows that if you want to predict what’s happening next, you’ve got to follow the money. And he does so very well, which makes this book (and his weblog) recommended reading for anyone interested in where technology is taking us.


Google is everywhere in The Big Switch and the reason is simple: cost.

No corporate computing system, not even the ones operated by very large businesses, can match the efficiency, speed and flexibility of Google’s system. One analyst [Martin Reynolds of the Gartner Group: see here] estimates that Google can carry out a computing task for one tenth of what it would cost a typical company.

That means that if you are a company with a computing task that Google already handles, you can save a lot of money: you can now outsource your CPU cycles just as you previously outsourced other tasks. And that means that the computing landscape will get shaken up, not in a matter of months, but over the next decade or so. It’s amazing how quickly we get used to a landscape, and many of us are now so accustomed to PCs and the basic layout of corporate computing systems that they seem almost natural. But Carr warns us that this is going to change and, as if to confirm his claims, last week Sun Microsystems, supplier of many of the computers that make up corporate data centres, announced that by 2015 it won’t have a single data centre. Information Technology is not sacrosanct.

Google’s cost advantage comes partly from a built-in inefficiency of corporate computing: capacity underutilization. Many applications demand their own servers, and those servers must be able to handle the peak load that the application will experience even if that peak load happens only rarely. As a result most corporate computers, most of the time, do nothing except consume electricity and produce heat. This inefficiency was unavoidable until recently, but now high-speed Internet availability makes it possible for companies that have the resources (Google and a few others) to build warehouses full of servers that look like power stations (see Google’s The Dalles centre in Oregon, below, with two football-stadium-sized buildings full of perhaps 60,000 servers). And then they can supply CPU cycles over the Internet just like electrical utilities supply electricity. The demand on Google’s CPU cycles is smoothed out, being balanced among many consumers in different timezones with different needs, and that only helps their efficiency. It’s what Carr calls utility computing.
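
To make the underutilization point concrete, here is a toy calculation in Python (the numbers are invented, not taken from the book) comparing servers provisioned per application for their individual peaks with a shared pool whose demand is smoothed across time zones:

```python
# Toy illustration (invented numbers, not figures from the book): why sizing
# each application's server for its own peak load wastes capacity, and why
# pooling demand across time zones helps.

HOURS = range(24)

# Hourly load for three hypothetical applications, each peaking at a
# different time of day (think different time zones or usage patterns).
app_loads = [
    [80 if h in (8, 9, 10) else 10 for h in HOURS],    # morning peak
    [80 if h in (13, 14, 15) else 10 for h in HOURS],  # afternoon peak
    [80 if h in (20, 21, 22) else 10 for h in HOURS],  # evening peak
]

# Dedicated servers: each application gets capacity sized to its own peak.
dedicated_capacity = sum(max(load) for load in app_loads)

# Shared pool: capacity sized to the peak of the combined load.
combined = [sum(load[h] for load in app_loads) for h in HOURS]
pooled_capacity = max(combined)

average_demand = sum(combined) / len(combined)
print(f"dedicated capacity: {dedicated_capacity}")    # 240
print(f"pooled capacity:    {pooled_capacity}")       # 100
print(f"dedicated utilization: {average_demand / dedicated_capacity:.0%}")  # ~23%
print(f"pooled utilization:    {average_demand / pooled_capacity:.0%}")     # ~56%
```

In this made-up case the shared pool needs less than half the hardware and runs at more than twice the utilization, which is the economic argument in miniature.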

The first half of The Big Switch is given over to convincing us that utility computing is the wave of the future, and the second half of the book explores the implications – many of them disturbing – of this switch. The good news is that both are thought-provoking and open up a lot of questions. The less good news is that to cover all this ground Carr has to skim and, at 250 pages, this short book can’t delve very deeply into any of the questions.

The argument for the switch to utility computing is made by drawing an analogy between the history of electricity supply and the history of computing. A century ago factories generated their own electricity, and many of the fears that people now have about outsourcing computing were faced by the nascent electrical industry. Would companies trust their lifeblood to an external source? Could they get the guarantees they needed to go over to this new model? History shows that they did, and quickly. Will computing go the same way? Carr says yes.

I broadly agree (in the long run), but there are reasons to think the Big Switch may be less than complete.

First, there is an intermediate solution that companies are already adopting: "virtualization software" that lets them run many "virtual computers" on a smaller number of real computers, making better use of the real CPU cycles. The technology is booming and the savings are huge. And while virtualization may still be more expensive than the full-fledged utility model, to the extent that many software packages are customized for individual companies (not so much word processors as the ERP systems that many companies run their business on), a utility model may not be applicable. Whether the benefits of custom applications will win out over the cheapness of commodity software is open to debate.

Second, while electricity is a relatively simple thing, computing is complex. Carr only scratches the surface of what forms utility computing will take. Will companies and consumers pay bills for complete applications (the salesforce.com model), for storage (Amazon’s S3, for example), for virtual computers on which they can install their own applications (now also starting to be offered by Amazon), or something else?

And third, from the consumer side, the flip side of the unused-CPU-cycles argument is that we may be prepared to pay for a computer that runs the latest games, and then the marginal cost of using those spare cycles is very low. In fact, they are already starting to be used by central applications (SETI@home and others), rented back or donated back to others. It is clear that the utility model is gaining ground in the consumer space (how much of our time do we now spend in the browser?), but the reasons are, I think, different from the electricity analogy Carr pursues.
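
To make the "pay for storage" option concrete, here is a minimal sketch of renting storage from Amazon’s S3 using the boto3 client library; the bucket and file names are hypothetical, and credentials are assumed to be set up separately:

```python
# Minimal sketch of "storage as a utility": rent disk space from Amazon S3
# instead of running your own file servers. The bucket and file names are
# hypothetical, and AWS credentials are assumed to be configured elsewhere.
import boto3

s3 = boto3.client("s3")

# Store a document in the utility's warehouse of disks...
with open("q1-report.pdf", "rb") as f:
    s3.put_object(Bucket="example-company-archive",
                  Key="reports/q1-report.pdf",
                  Body=f)

# ...and fetch it back on demand, paying only for the storage and transfer used.
response = s3.get_object(Bucket="example-company-archive", Key="reports/q1-report.pdf")
data = response["Body"].read()
```

The "virtual computers" option works on the same pay-as-you-go principle: Amazon’s EC2 lets you start and stop rented machines rather than buying your own.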

But this is splitting hairs, because the broad trend Carr identifies is surely largely correct, even if it takes a decade or two for the switch to be made. We’ll still have a lot of computing cycles happening everywhere, but it may turn out to be true that the world really does need only five computers. And if the complexity of computing as a utility limits the expansion of the megacomputers, well, there are other limits to growth (national ones, legislative ones) that electrical utilities have always faced and that don’t constrain Google, Amazon and our other suppliers.

So that’s part one. Part two is a welcome counterweight to the techno-utopian fluff put out by some prominent commentators. Carr covers the centralization of control (how mass participation ends up with just a few people getting loads o’ cash), what he calls "the great unbundling" – the change from buying whole albums and newspapers to viewing individual stories, and how that may affect production – and the darker side of the web: spam, identity theft, loss of privacy, and so on. Of these, the best chapter is on unbundling, because it’s just not clear to me how or why that phenomenon will work itself out. The argument is that (to take newspapers as an example) when advertising is sold by the click and content cannot be sold for its own sake, newspapers lose the ability to fund such expensive endeavours as investigative reporting and foreign news desks. Cross-subsidization of different parts of a newspaper, Carr argues, is the only way that quality content has managed to keep being produced, and once that model goes, then so does the quality.

There are a lot of possible futures here, and I wish Carr’s book were twice as long so he could explore some of them more seriously. In the end, the book is a survey more than a deep investigation, but it is a survey that asks the right questions, and we could use more books like this to chart, and perhaps help alter, the course of technological change and its many social impacts.


5 Comments

  1. While he may have an economic point, there is also human behavior.
    When you look at the pre-PC days, you had underground computing all over the place, where engineers would literally run simulations on HP and TI calculators to avoid dealing with IT.
    When PCs hit the scene, you had people engaging in all sorts of subterfuge to get them in past IT, such as invoicing them as printers or Wang-type word processors.
    This is a social, not an economic, imperative, and it is driven by two things:

    • The fact that IT departments suck, because if they do a perfect job, they are invisible, and have no power.
    • People want control over what they do on their own desk.
  2. Is this technology available now? I have the occasional need to run large-scale statistical programs, and I have a 30k server that I use probably 2% of the time. I’d love to just be able to upload my data, code, and apps (Matlab mostly) and run it.
    Is there somebody who does this?

  3. Matthew – It is possible that for some applications utility computing will provide a way of working around IT departments (no capital expenses, no machines to manage). And as you suggest, anything that can help people work around their own IT departments has a bright future.
    CalDem – The closest I know of for that kind of problem may be Amazon’s EC2 initiative. See http://www.amazon.com/gp/browse.html?node=201590011.

  4. “The fact that IT departments suck, because if they do a perfect job, they are invisible, and have no power.
    People want control over what they do on their own desk.”
    Posted by: Matthew G. Saroff
    I was told that when the first PCs came into the national labs, the IT guys ridiculed them as ‘milli-crays’ (Cray supercomputers being the big hot sexy iron of the time). The IT guys couldn’t figure out why anybody would want them. I realized that this was because the IT guys didn’t have to go through the BS process of getting Cray time.

  5. I absolutely agree that there are a lot of gains to be made by helping people work around their own IT departments. I can see utility computing being driven by that for departmental/non-crucial apps, in the same way PC adoption was driven earlier.
