Quantum Computing: Gripes from a Quantum Fuddy-Duddy

Waterloo is an interesting place to live these days for an ex-quantum-mechanic, mainly because of all that techno-geek BlackBerry money that is being splashed around. When I bike to work I go past the Perimeter Institute for Theoretical Physics at the beginning of my ride, and then go within a stone’s throw of the Institute for Quantum Computing at the end, both of which are making waves. With all these brainy young things pushing the boundaries of what we know and don’t know, it’s fun to watch.

The main media event recently has been the publication of the book “The Trouble with Physics” by the Perimeter Institute’s Lee Smolin, which is a criticism of String Theory and its 30-year failure to prove itself as something more than a promising candidate for a theory of everything. Smolin’s book has been widely reviewed, often in conjunction with Peter Woit’s “Not Even Wrong”, which argues much the same thing. The title “Not Even Wrong” was a devastating putdown coined by Wolfgang Pauli of another physicist’s work – the implication being it was so mistaken that you couldn’t even show why and how it was incorrect. My own work was in the relatively mundane world of molecules rather than cosmological elementary particles; that is, it was quantum, but on our side of the Heisenberg Uncertainty Principle, not the other side. Both books argue that String Theory has become so removed from experiment that it has ceased to be science, and has become prone to building elaborate edifices on an insufficiently sound physical basis.

While PI has made the biggest splash locally, the Institute for Quantum Computing is a rising star too. And Quantum Computing is something that I feel I can get more of a handle on than String Theory, so I’ve been reading a bit about it. And what I see either exposes me for a quantum fuddy-duddy or suggests that Quantum Computing (QC from here on) could do with listening to the critiques of Smolin and applying them to itself.

So here is what’s wrong with quantum computing from what I can see. (Disclaimer: these opinions are based on a number of popular articles, my attempts to follow Scott Aaronson’s excellent weblog Shtetl-Optimized, and the bit I have read of Jozef Gruska’s 1999 text “Quantum Computing”, which I actually got out of the library last week. These opinions are worth exactly what you paid for them.)

Reading a QC book is an odd thing for a regular physics type. Where are the Hamiltonians? Few and far between, it seems (“Hamiltonian” is mentioned in only a literal handful of places — five — in Gruska). Everything is about the action of unitary operators on states. For anyone reading this who is not a physicist, here’s what that means. In physics, when things interact, that interaction is described by a Hamiltonian. If you’re trying to solve a problem, your first step is to construct a Hamiltonian operator that describes the interaction you are studying, and then solve the equations that come from that. Unitary operators, on the other hand, describe a change in a quantum system from one state to another, which may sound like the same thing but isn’t. In regular computers, circuits consist of many gates that carry out elementary operations (AND, OR, and so on). If you had a quantum “gate” then a unitary operator describes the change from input to output (that’s the
change in state), and ignores the way in which the system gets from the one end of the gate to the other. The Hamiltonian would describe the actual physical operation that happens as the light or electron or nuclear spin goes from the input to the output.
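
To make the distinction concrete, here is a minimal numerical sketch (my own illustration with made-up parameters, not something from Gruska): a NOT gate on a single qubit written first as a bare unitary matrix, the way a QC text would give it, and then as Schrödinger evolution exp(-iHt) under a Hamiltonian that stands in for whatever physical process actually does the flipping.

```python
import numpy as np
from scipy.linalg import expm

ket0 = np.array([1.0, 0.0], dtype=complex)   # the qubit state |0>

# Circuit-model view: a NOT gate is just the unitary matrix X.
# Nothing is said about how the flip physically happens.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)
print("unitary view:     X|0> =", X @ ket0)           # -> |1>

# Physicist's view: the same flip as evolution under a Hamiltonian
# (a spin driven along x, in units where hbar = 1), run for the time
# that makes exp(-iHt) equal to X up to an overall phase (a "pi pulse").
omega = 1.0                     # drive strength, arbitrary units
H = 0.5 * omega * X             # H = (omega / 2) * sigma_x
t = np.pi / omega
U = expm(-1j * H * t)
print("Hamiltonian view: exp(-iHt)|0> =", U @ ket0)   # -> -i|1>, same state up to phase
```

The unitary X is all the circuit model keeps; the Hamiltonian and the pulse time are exactly the implementation details that get bracketed off.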

That still might not be very clear, so here’s the end result: the theory of QC is built in a manner that has become deliberately divorced from the real world. The “implementation” or the actual building of a computer is consciously separated from the theory of the logic and algorithms that you would build on top of this computer. There is a division of labour between theory and experiment that is present right down to the way the theory is constructed, and that is a bit of a problem because it means that every single thing that the theorists say is conditional. All the
theorems they prove need a big asterisk next to them saying “as long as someone can make a computer”. As a footnote, Richard Feynman, who knew what he was doing, did some early explorations in Quantum Computing and did approach it by building Hamiltonians and putting together Schrödinger equations – just what you’d expect a physicist to do.

You can see why the discipline has gone in this manner. Regular old classical computing, after all, followed a similar path. It started off with mathematicians (von Neumann and Turing) who set out a logical architecture of computers and algorithms, and this work was followed (loosely speaking, and as best I understand) by the engineers once the transistor was invented. And it makes sense that those interested in exploring algorithms can do so without needing to know all about welding – this stuff is complicated enough as it is. But we should keep in mind that there is always the possibility that the success of regular computing will not be repeated. The proof of the pudding and all that.

The second thing that bothers me about QC is related to this division of labour. Everywhere you look they are talking about “entanglement” and all the things that go along with it: Bell’s inequalities, EPR experiments and so on. In Gruska’s book Everett (of the “many-worlds interpretation”) is mentioned as many times as Hamiltonian. Again, for a quantum fuddy-duddy this raises red flags. The weirdness of the quantum world is seductive – enough that my first-year lecturer (the late Peter Dawber) felt he had to warn us “this is interesting, but it’s interpretation. My advice is learn how to calculate and solve problems, and don’t get stuck in the philosophical quicksand”. Good advice that has been repeated by many a lecturer, I’m sure. And yet here are these QC-ers diving headlong into entangled states, and spending more time on them than on things with actual Hamiltonians. Looking in Gruska’s index again, entanglement merits 42 mentions. Perhaps this is mainly a rhetorical point, but I do think it is worth making because entanglement is built into the culture of quantum computing.

Entanglement is connected to what happens when you prepare a multi-particle quantum state and then let the particles become separated. So you get paragraphs like this:

Prepare a system with two particles, each of which can have two values of spin, and send them off in opposite directions. Then measure one of the particles to find its spin. This measurement then immediately fixes the spin of the other particle, even though it’s a long way away. The state of the two particles is entangled.

But you could also write this paragraph this way.

Prepare a system with two particles, each of which can have two values of spin, and send them off in opposite directions. Then measure the state of the system by checking one particle. This measurement tells you the spin of both particles.

The difference is that the second phrasing avoids mention of entanglement and focuses on the fact that this is a single quantum system we are talking about: however far apart the two particles end up being, they still comprise a single quantum system, and that means no messing from the classical world. The two paragraphs refer to the same operations and mathematics, but the second avoids extraneous weirdness.
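
For what it’s worth, here is a small simulation of the second phrasing (my own sketch, nothing more): one joint state for the one system, and “measuring one particle” is just reading off part of a measurement of the system.

```python
import numpy as np

rng = np.random.default_rng(0)

# One quantum system: two spin-1/2 particles prepared in the single joint
# state (|00> + |11>) / sqrt(2), in the basis {|00>, |01>, |10>, |11>}.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Probabilities of the four joint outcomes when both spins are measured
# along the same axis.
probs = np.abs(psi) ** 2

# Each run draws ONE outcome for the ONE system; there is no separate
# "measurement of particle 1" followed by a mysterious update of particle 2.
runs = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print(runs)   # only "00" and "11" appear: reading spin 1 tells you spin 2
```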

The two most mature methods of actually preparing quantum computers are NMR spectroscopy and ion traps. Ion traps deal with fine control of isolated quantum systems (as required for “entanglement”) and involve very exotic and incredibly precise experimental apparatus. This is what you would expect if you are dealing with single systems: the expense of dealing with them grows as the size of the separation grows. NMR deals with many systems (coffee cups, for example – the link is to a PDF file) and uses the fancy techniques of pulsed magnetic resonance to give these systems a variety of kicks. It’s recently been extended to 12 “qubits” (individual spins), which is the biggest quantum computer to date.

But here is the odd thing: this most successful technique is not based on the single systems that are needed for entanglement. In fact, these same multi-pulse NMR experiments have been carried out for some years and the word “entanglement” never raised its head so far as I know until the Quantum Computation people got interested. In NMR you are dealing with qubits that are not
spatially separated (they are nuclei on the same molecule) and you are not dealing with a single quantum system described by a single state vector (you are dealing with a thermodynamic ensemble of quantum systems described by a density matrix). Myself, I could never follow the theory of multi-pulse NMR, but I’m pretty sure there was no mention of many-world interpretations in it.
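
As a rough illustration of the distinction (mine, not Gruska’s, with a typical proton frequency plugged in as an assumption), a single spin gets a state vector while the NMR sample gets a thermal density matrix that sits extremely close to the maximally mixed state:

```python
import numpy as np

# A single spin-1/2 system is described by a state vector, e.g. spin-up:
ket_up = np.array([1.0, 0.0])

# A room-temperature NMR sample is an ensemble of ~10^20 such spins,
# described instead by a thermal density matrix rho = exp(-H / kT) / Z.
h = 6.626e-34          # Planck constant, J s
k = 1.381e-23          # Boltzmann constant, J / K
nu = 500e6             # assumed proton Larmor frequency, Hz (~11.7 T magnet)
T = 300.0              # temperature, K

E = np.array([-0.5, 0.5]) * h * nu        # Zeeman energies of the two spin states
w = np.exp(-E / (k * T))
rho = np.diag(w / w.sum())                # thermal (diagonal) density matrix
print(rho)
# The two populations differ by only ~4e-5: NMR "quantum computing" works
# with that tiny deviation from the maximally mixed state, not with a
# single, isolated, entangled state vector.
```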

So QC should realise that the consequences of “entanglement” are limited to inherently exotic systems with which you are unlikely to be able to build a real computer. It’s useful in PR material to highlight the weirdness of the quantum world, but when talking science you should follow Occam, who said “avoid talking weirdness whenever possible”. For example, when QC people talk about “maximally entangled states” or “Bell states” they simply mean a state in which you’ve measured spin along one axis when you’re going to measure spin along another axis later. This can be talked about without reference to Bell or entanglement.

This thing with entanglement shows up in all kinds of popular articles by the practitioners of QC. As one example, here is a popular piece by some of the theorists (Steane and van Dam) who
have demonstrated that you can use entanglement to enhance communication – that you can exploit the entanglement of a quantum state together with regular messages among observers at different places to get more efficient communications. The article reads in a fun enough way (it’s about participants at a game show who each carry their own little qubit into a separate cubicle and then pass messages). But it’s not possible. You can’t carry a qubit around because in order to exploit entanglement you have to have a single quantum system. This kind of writing comes from thinking about measurement according to the first way I wrote the paragraph above (measure the particle) as opposed to the second (measure the system).

Perhaps I’m being overly picky about this because it is, after all, just a popular article (although it works in some concepts you would need undergraduate physics to understand before the end). But I can imagine QCers saying, “well, it’s possible in principle”. But like a lot of “in principle” arguments I don’t buy it. I think Daniel Dennett dealt with this kind of argument in his brilliant “Consciousness Explained” when discussing the idea of a “brain in a vat” (aka the Matrix) where philosophers argue that you could “in principle” recreate the sensations of the world by stimulating the right portions of the brain. Dennett basically calls their bluff and accuses them of not thinking through the magnitude of the problem, and takes some time to spell out just how hugely implausible it is. Now it’s not a proof, but I think the same kind of thing applies to these popularizations of entanglement. You can “in principle” have widely separated parts of a single quantum state outside a hugely expensive laboratory only if you don’t think too hard about what the endeavour entails. In a sense, we’re back to the use of unitary operators by the theorists so they don’t
have to think about implementations.

Well, this has been more rambling than I expected, so here’s a summary of what I see as the main points.

  • The split between abstract theory and physical implementation in the structure of quantum computing is a dangerous game. It means that everything that quantum computing theory says needs to be taken with a big pinch of salt until realistic quantum computers are demonstrated. The widespread use of the rhetoric of entanglement and other ideas that focus on the non-intuitive parts of quantum mechanics exacerbates the problem by pulling QC theory further away from actual implementations.
  • The fact that the biggest quantum computers to date are NMR based demonstrates how little entanglement adds to the actual theory of QC. And the fact that the best alternative is the inherently exotic approach of ion traps is disheartening.

I hope I’m wrong. There are a lot of smart people working on quantum computing who I’m sure have thought through these issues more than I have, and in some ways they do look like they are making progress (see here). But here are two predictions that will show whether I’m right or wrong in a few years. One is that what constitutes a major advance will be redefined. The participants in a field are always enthusiastic about the major advances that are happening, but if we see major experimental advances that are phrased in terms like “enhance the understanding of what is necessary for quantum computation” rather than “actually compute something” then watch out. Second, the goals (PDF) set out by some people in the field will not be achieved.

Well, that’s my Canadian Thanksgiving ramble. Now I’m going to plant some tulips, which, with a bit of luck, will appear simultaneously, as if by magic, in a coherent fashion next spring.

Update: A recent post at Shtetl-Optimized discusses a paper that has some of the same criticisms as my post here, except done properly: “Is Fault-Tolerant Quantum Computation Really Possible?” by M. I. Dyakonov. Fuddy-duddies unite!

No Attack On Iran

It is important that Seymour Hersh exposes the rumblings from various parts of the US government about a potential attack on Iran, but on this occasion I’ve felt for some time that it’s not going to happen. It’s not that Bush, Cheney, Rumsfeld and so on wouldn’t do such a thing – personally I think they’d do it in a heartbeat if it gained them a few points in the polls – but that they can’t even if they want to. It’s getting close to the end of Bush’s second term, he doesn’t have the personal clout any more, and Iraq is such a complete and utter catastrophe that the response to any further military adventurism would, I think, be swift and damning. Now this wouldn’t reassure me if I was sitting in Tehran, but that’s how it looks from here.

And now someone with some actual knowledge says the same thing. The Yorkshire Ranter is someone who seems to know his military logistics stuff, and also comes from God’s Own County, so he can hardly be wrong, and he argues that the US just doesn’t have the needed stuff in the area to carry out any attack on Iran.

His recent posts have been excellent – I especially like his unified theory of stupidity on terrorism, where he starts off with this:

I’m beginning to think that it’s possible to
discern so many similarities between really stupid opinions on
terrorism that we can call it a theory. Specifically, if you’re talking about state sponsorship, you’re probably wrong, unless overwhelming evidence contradicts this.
As far as I can tell, the modern version of this theory originated in
the late 1970s or early 1980s. It had been about – Shakespeare has a
character in Richard II allege that "all the troubles in our
lands/have in false Bolingbroke their first head and spring" – but the
strong form seems to have originated then.

Key features are that
1) terrorist or guerrilla activity is never the work of the people who
appear to carry it out, 2) instead it is the work of a Sponsor, 3) that
only action against the Sponsor will be effective, 4) even if there is
no obvious sign of the Sponsor’s hand, this only demonstrates their
malign skill, and 5) there is evidence, but it is too secret to
produce. In the strong form, it is argued that all nonconventional military activity is the work of the same Sponsor.

and his Recidivist with alert populations, where he says this:

try out the following quote from one Robert Mocny, director of the USVISIT program at DHS:

"We cannot allow to impediment our progress the privacy rights of known criminals."

The law is what I say it is, and you’re either with us, or you’re with the terrorists. Perhaps literally
with them, in the cells. Joseph Sensibaugh, manager of biometric
interoperability for the FBI, meanwhile opines that "It helps the
Department of Homeland Security determine who’s a good guy and who’s a bad guy,"
targeting "suspected terrorists" and "remaining recidivist with alert
populations". Not to mention the president of Bolivia and a dead
bluesman, apparently.

Why does it specifically have to be
illiterate authoritarianism, by the way? What does that last phrase
actually mean, anyone? Anyway. Enquiring minds want to know more. What
was this "pilot project"? Whose records were given to the DHS? Will
they be told? What are the safeguards? Where are the guarantees?

Good questions, Alex.

Scholarships: Enough With the Leadership Thing

For family reasons I’ve been looking at university scholarships. There
are those that you get if you have a certain average, and then there
are others that you have to apply for and which usually involve a
mixture of scholarship and "other stuff". And that "other stuff" is
almost always defined as "leadership". Like the Lo Family scholarship
at the University of Toronto
(http://www.adm.utoronto.ca/awd/scholarships.htm#UTscholars):

"Awarded to students who are active as leaders, are respected and considered to be well-rounded citizens in their
school and community…"

Or at Queen’s University, the D & R Sobey Atlantic Scholarship  requires
"Academic excellence, proven leadership and involvement in school or
community activities."
(http://www.queensu.ca/registrar/awards/apply/apply-scholar.html). You
get the idea. Of course there are exceptions (like the lovely John
Macara (Barrister) award at U of T: "Preference given to
applicants who can establish that they are the blood kin of the late
Mrs. Jean Glasgow, the donor of this award.") but most of the time it’s
all about the leadership.

Now
I have nothing against leaders — all successful groups need someone to
take credit for their accomplishments — but this focus on
leadership to the exclusion of all else is a crock. Apart from
being ill-defined, it does a pretty good job of saying to those 17 and
18 year olds out there that there’s only one kind of admirable person
in the world, and that’s those who join lots of things, play sports
(preferably as captain or quarterback), and Get Involved. Do we really
want a world full of the parentally-pushed, self-important,
power-hungry egotists who fill "leadership" positions as teenagers?

So,
free of charge and in search of a better future for all of us, here is
a list of scholarships I’d like to see adopted by universities:

The Wordsworth Scholarship: awarded
to students who have shown that they deeply appreciate the world around
them and pursue independent expression of their thoughts, regardless of
peer pressure.
Documentation required:
– Tear-stained copy of a letter of rejection by a former girlfriend/boyfriend
– A notebook filled with juvenile poems
– Letter banning you from school spirit club
The entrance exam will require you to sit still, in complete silence, for 30 minutes.

The Paddy Clarke Initiative Award: awarded to students with demonstrated ability to take responsibility for their own lives.
Qualifications:
– Must have lived in a two-parent home for less than half their childhood.
– Must have moved out of home at least once during their teenage years. Preference given to those who have lived in a squat.
– Preference will be given to those convicted of shop-lifting, as long as the theft was for a demonstrably useful object.

The Larkin Scholarship: awarded to middle-class students with regular parents, who have never travelled abroad. Qualifications:
– Must own and wear bicycle clips regularly
– Attendance at church or other religious institution preferred. Belief not necessary.

The Perec Prize: awarded to students who have demonstrated precocity in the realm of obscure puzzles and word games.
Qualifications:
– An essay is required which must include all of the following:
    – a list of at least 24 related items
    – a meal that is all one colour
    – a mathematical theorem
The essay must have no hypothesis or conclusion, but must include at least three words that contain all the vowels in reverse order.

The Bookworm Scholarship: awarded
to students who have demonstrated intellectual curiosity by exploring
the world they live in through the medium of books.
Documentation required:
– Dog-eared copies of at least three major works of literature.
– Must have read at least two books banned by major school boards.

The ability to articulate your ideas clearly demonstrates that you
really don’t understand the complexity of the world and will exclude
you from this scholarship.

YouTube is a phenomenon

Regular readers may remember that I posted on YouTube a little video of some caterpillars in my front yard a few months ago. I went back there a few days ago and took a look at how many times it had been viewed.

OVER 160,000.

This is obviously a much bigger audience than I will get for anything else I do in my entire life. I’m sure there is a moral in here somewhere, but I have no idea what it is.

Software Featuritis, or Why Checklists are Bad

Here is a common-sense approach to software development, which I’ll call the checklist approach:

  1. Propose a new feature.
  2. Ask if the feature is useful.
  3. If the feature is useful, implement it.

This essay shows why the checklist approach fails: why adding useful new features to a software product can make the overall product less useful to end users. This is a phenomenon I like to call featuritis.

At its simplest, a software product is a set of n independent features (i = 1,…, n). Each feature has a utility to the customer of u_i. The overall utility of the software product to the customer is therefore

U = Sum( u_i )

This tells you to keep adding features to the software to increase its utility, end of story. Not very interesting.

But software is not quite so simple as this. Each feature has not only a utility, but also a cost — c_i. This is the cost to the end user—not the developer—of using or deciding not to use the feature. The cost may include the time taken to find out about a feature, to decide how to use it and whether it is the right feature to use (among the options available), the difficulty of exploring and evaluating the feature, and so on.

With this cost added in, the overall utility of the software product to the end user is

U = Sum( u_i – c_i )

Let’s keep things really simple, and assume that all features have the same utility (u). We can’t do this for the cost though: the cost of using a feature inevitably depends on the total number of features in the product. This may be because it takes longer to find information about a particular feature in the documentation or user interface, or because it is more difficult to decide if this is the right feature to use among those that are present, or a host of other reasons (more on this below). So the cost of finding out about a feature is (c * n).

The overall utility of the software is then

U = Sum( u – c * n )

or, as all the terms are equal,

U = n ( u – c * n )

Which looks like this:

[Figure: the utility curve U(n) = n ( u – c * n ), an inverted parabola that rises to a maximum at n = u / 2c and falls back to zero at n = u / c]

The utility function increases up to a maximum at n = ( u / 2 c ), and then decreases: that is, beyond a certain point, adding new features actually makes the software less useful. In other words, software is vulnerable to featuritis: it can become so complex that its complexity makes it useless. It may contain useful features, but finding these needles in the haystack of the product is frustratingly difficult. Experience tells us that this is a reasonable, if not surprising, result.

What is more, adding features that are useful in and of themselves can still introduce featuritis. For a feature to be useful, its utility (u) must be greater than the cost of finding out about it (c * n). So, using this model, features will be added to the software until the following condition is met:

n = u / c.

which is where the line crosses the x axis. The maximum utility of the software occurs at half this value (n = u / 2 c): if we continue to add features until the last feature is only just useful, the overall utility of the software will be U = 0. If we stopped adding features at (u / 2 c), the utility would be U = ( u² / 4 c ). Even a useful feature degrades the usability of other product features, by making them harder to use (increasing the cost of using them).
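
Here is a minimal numerical sketch of the toy model (the values of u and c are made up purely for illustration):

```python
import numpy as np

u, c = 10.0, 0.5              # per-feature utility and cost coefficient (made-up values)

def total_utility(n):
    """U(n) = n * (u - c * n): n identical features, each costing c * n to use."""
    return n * (u - c * n)

n_best = u / (2 * c)          # utility peaks here
n_zero = u / c                # and is back to zero here
print(f"peak at n = {n_best:.0f}, U = {total_utility(n_best):.0f}")   # n = 10, U = 50 = u^2 / 4c
print(f"U back to zero at n = {n_zero:.0f}")                          # n = 20

# Each feature between n = 10 and n = 20 is still individually "useful"
# (u > c * n), yet adding it lowers the total:
print(f"14 -> 15 features: U goes from {total_utility(14):.0f} to {total_utility(15):.0f}")  # 42 -> 38
```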

It follows that checklist driven software development will lead to poor software. Checklists are simply lists of useful features, without any consideration of the costs they introduce to the customer. A longer checklist is often assumed to be intrinsically better than a short checklist, but we have just seen that this may not be so.

What is it about this very simple model that produces these results? The key assumption is that features have independent utilities, but that the costs of using features are not independent. In economists’ jargon, features exert a negative externality on each other. Is there a reason to think this might be true?

The independence of feature utility is simply a matter of defining a feature
properly. For example, if a very common scenario requires two "features" (A and B) then the utility of feature A depends very much on whether feature B is present or not. By itself, the utility of feature A may be very low — you can’t do much with it by itself. Once feature B is present, though, feature A becomes very useful. They are not independent features. The problem here is that "features" A and B are incomplete, and so are really part of a single properly-defined "feature". The model requires that we define features in terms of tasks that customers can carry out. The model does not tell us to implement 5/6 of a feature and then not implement the last sixth because of complexity worries.

The second assumption is that the cost of using a feature is dependent on other product features. The idea of a cost of using a feature is a very general one, and is not only "how long does it take to locate this feature in the documentation?". It may also reflect the confusion and uncertainty introduced by a plethora of choices. For example, if there are multiple ways of carrying out a task, the customer must decide which way is the best. If there are other features that appear to be related, the customer must investigate those (and discard them) before deciding on a course of action. If there are prerequisite features that must be understood, the customer must learn these also. They must spend time evaluating the alternatives. It seems reasonable to assume that the cost of wisely using an appropriate feature does depend on the overall complexity (number of features) in the product. The job of user interface designers and documentation teams is, at least in part, to minimize this destructive interference between product features. Good UI and documentation can help postpone the point at which featuritis sets in, but can’t hold it back for ever.

Of course, while simplified models like this might help us watch out for certain kinds of trap, they can’t help us decide which specific features to include, and which to discard. Also, there is not much to be gained from trying to pinpoint the particular u and c associated with features or products. Despite the equations, it is a qualitative model and cannot be easily quantified in a useful manner. Finally, while it is certainly true that u_i and c_i are far from constant in any product, there is probably not a lot to be gained by trying to refine their representation. As soon as models like this are made more complex, the result is a very open-ended and conditional prediction. Building fine software products can’t be reduced to equations.

But I do think the central ideas are broadly correct: complexity does influence cost and utility in different ways, software does tend to become overly complex, and—most importantly—asking "is this a useful feature?" is not the right way to develop good products.

Lawyerbots Again

Yappa Ding Ding wrote a lively post about automating intellectual jobs in response to my gripes about lawyerbots. Yappa is more optimistic than I was about the impact of this automation. As she says:

Wouldn’t it be cool if we all had our own lawyerbot to protect our
interests and automatically communicate with other lawyerbots. Just put
my libel case winnings in my bank account, please!

…Similarly, we can and probably will automate vast chunks of what is
done by lawyers, doctors, politicians, bureaucrats, engineers, computer
scientists, and so on. This could lead to a reduction in prices, just
as manufactured goods are much cheaper than they used to be. That could
be important. For example, now, if you are charged with a crime and
have enough money that you are ineligible for government-paid legal
assistance, you will likely go broke defending yourself. Ditto if you
get involved in a contested divorce settlement. Even handling a real
estate transaction costs hundreds or thousands of dollars in legal
fees, when most of the work is rote. It would be a social revolution if
the cost of getting legal advice became more reasonable.

So was I being unnecessarily grumpy or just necessarily grumpy?

Well, probably a bit of both. The picture that Yappa paints is definitely cool. Right now justice is, as someone once said, open to everyone in the same way as the Ritz Hotel, so making legal help cheaper is going to help a lot of people. For example, JS showed me this thing called EULAlyzer, which “reads” those pesky end-user licence agreements and, well, let’s let it speak for itself…

EULAlyzer can analyze license agreements in seconds, and provide a
detailed listing of potentially interesting words and phrases. Discover
if the software you’re about to install displays pop-up ads, transmits
personally identifiable information, uses unique identifiers to track
you, or much much more.

This is a Good Thing, and I’m sure there are lots of other ways that software could lower efforts and costs, especially routine document checks, like real estate transactions and so on. Why pay lawyer rates for a person to read through these routine documents when software could go through them and flag any unusual text? Great.

But the down side remains. There are two parts to most legal exchanges – one side prepares a document and delivers it; the other side reads it and responds. While low cost makes it easier to do the reading and responding, it makes the preparation easier as well. And more than anything, it makes the delivery easy – and this is where the problem really lies: there is a prospect of spamlike legal threats, warnings, and so on, because senders can take advantage of the cheap production of documents and then multiply that cheapness (if that’s the right phrase) to scatter them widely.

What we really want, of course, is the one without the other. E-mail without the spam; downloads without viruses; networks without trojans. Not a lot of chance of that, but perhaps we can minimize the downside of some of the new developments if we decide that spam legal threats are invalid.

Economists and Sociologists on Organs


Kieran Healy, Last Best Gifts, University of Chicago Press, 2006.

Gary Becker and Julio Elias, Introducing Incentives in the Market for Live and Cadaveric Organ Donation, (working paper).


The
distinguished theoretical chemist John Murrell wrote that physicists like to solve simple models exactly, while
chemists like to solve detailed models approximately. There are
benefits to both approaches. Take the study of solid state electronic
and magnetic properties, where the early work of physicists was to
compute, using very sophisticated techniques, the properties of highly
simplified models such as a free-electron gas with a uniform background
of positive charge. Chemists, on the other hand, were busy studying
complex materials with intricate structures, but didn’t have the theory
to do more than classify the kind of observations they made. As a
result, physicists gave theories for superconductivity and other exotic
phenomena, but because their theories had so little information about
the specifics of molecular structure they couldn’t predict the kind of
material where superconductivity would be found. The discovery of
high-temperature superconductors in the 1980’s had physicists as well
as chemists scratching their heads. The empiricism of chemistry (where
theorists form about 10% of the population) compared to physics (where
the number is, I think, more like 30%) forces theoretical chemists to
deal with the messy and specific problems that their discipline’s vast
body of observation and experiment demands.

Economists are to
sociologists as physicists are to chemists. Whenever you see a
discussion of scientific method and rigour in the social sciences, the
comparison is always made to physics. This is not surprising, because
physics remains the archetypal science, but it’s a bad thing because
the social sciences — like chemistry — have to stay grounded in the
empirical side of what they study. The details of any particular
problem are so often crucial to the outcome that you can’t use "the
light touch of the physicist" and expect to come out with predictions.
It’s all very well to talk in the abstract of markets, but when reality
hits it’s the details that often matter. Sociologists spend their lives
looking at those details; like chemists, they tease out conclusions from
the structure and dynamics of particular circumstances and events. And
like chemists, they get less respect than their more formal, more
model-driven, and less empirical siblings.

These different
proclivities are on show when sociologist Kieran Healy looks at blood
and organ donation in his new book Last Best Gifts and when economists
Gary Becker and Julio Elias look at organ donations in their widely
discussed working paper, and my ex-chemist self is glad to say that
Healy comes out looking better.

The basic problem is simple.
Advances in surgery and in immuno-suppressing drugs have made organ
transplants safer and cheaper since the 1980’s. The number of organ
donors (whether live or dead) has grown, but not nearly enough to keep
up with the demand, and the result is a growing shortage of organs.
People who could be saved are dying while waiting for "donor" organs.
What can be done?

Becker takes, as anyone who has read any of
his writings would expect, a forthright and straightforward approach.
Shortages mean that supply and demand don’t match. As he says in a blog essay  on this issue:

To an economist, the major reason for the imbalance between demand
and supply of organs is that the United States and practically all
other countries forbid the purchase and sale of organs. This means that
under present laws, people give their organs to be used after they die,
or with kidneys and livers also while they are alive, only out of
altruism and similar motives. In fact, practically all transplants of
kidneys and livers with live donors are from one family member to
another member. With live liver transplants, only a portion of the
liver of a donor is used, and this grows over time in the donee, while
the remaining portion regenerates over time in the donor.

If laws were changed so that organs could be purchased and sold,
some people would give not out of altruism, but for the financial gain.
The result would be an increased supply of organs. In a free market,
the prices of organs for transplants would settle at the levels that
would eliminate the excess demand for each type of organ.

You
make supply match demand by introducing a market — paying people for
their organs (eg, kidneys) or for the organs of their family members
after death — because that’s what markets do. So he and Elias do some
rough calculations on what would be needed ($32,000 per liver, while
the current cost of a liver transplant is $175,000), and say "let’s do
it". Becker is under no illusion that this proposal will be adopted
soon, but believes that there is a real chance that it may be taken
increasingly seriously over the coming years, and serious discussions
are increasingly (so I’m told) including markets as possibilities, as
in a recent issue of Kidney International.

There
are no details in Becker and Elias’s sketch of the form that the market
would take. Of course, Becker is aware of many of the arguments against markets
(many coming from Richard Titmuss’s influential 1971 book The Gift Relationship)
and returns to them in his later blog posting to address them: that
"commodification" of body parts is immoral; that payment may drive out
altruistic donations and so not yield the bumper crop he predicts; that
organs may be removed mainly from the poor and installed mainly in the
rich; that organs may be removed forcefully from people (as Falun Gong
supporters are claiming is happening in China now) in order to be sold;
that people may regret an impulsive and irreversible decision to sell;
that payment may lead to people lying about their medical health and so
lead to infected organs entering the system. He doesn’t so much argue
these issues as dismiss them. Tellingly, he uses the passive voice: the
quality of blood "can be maintained at a high level", the source of
organs "could be determined in most cases without great difficulty";
the number of impulsive donors "could be sharply reduced by having a
month or longer cooling off waiting period". It isn’t quite clear who
has the incentive to maintain all these standards, or carry out these checks, or how
much it would cost to persuade someone to do so. And yet in markets for experience goods
(and organs, surely, are experience goods of a visceral kind)
information issues are at the heart of the problem. The cavalier
brushing aside of issues of trust, fair dealing, and asymmetric
information makes Becker’s case unconvincing to this reader.

Becker also commits what Tyler Cowen of Marginal Revolution calls the libertarian vice:
assuming that the quality of government is fixed. In this case, that
means assuming that if altruism isn’t working now, then it won’t work
in the future. "If altruism were sufficiently powerful, the supply of
organs would be large enough to satisfy demand, and there would be no
need to change the present system. But this is not the case…"

Kieran Healy spends most of Last Best Gifts
exploring the very things that Becker skates over so casually, and argues
that they are the heart of the matter. Altruism is not a fixed
quantity, but depends crucially on "the cultural contexts and
organizational mechanisms that provide people with reasons and
opportunities to give" (p2). Those unspecified actors who are the passive
voices of Becker’s arguments are, Healy argues, the key to success or
failure when it comes to blood and organ donation. For example, organ
donation from the newly dead is, in practice, dependent on approval
from the relatives and the rate of approval depends in turn, it turns
out, on who asks them. If the person who is helping them come to terms
with the death is the one who asks them to approve donation of the
organs, they are more likely to refuse than if somebody else (even from the
same organization) asks them. Establishing protocols and practices,
managing logistics, establishing trust — all these matter. It is,
Healy is arguing, not useful to talk about the issue of organ transplants without addressing the specifics of questions such as whether relatives of dead organ donors have the right to meet the recipient (as just one example), because these are
the kind of decisions that can have big consequences.

The book is
academically written, with all the costs and benefits that implies. It
was originally a PhD thesis, so unsurprisingly it favours logic and
cautious language over passion, footnotes everything, and tends to wrap
conclusions in qualifiers. The empirical middle chapters in particular
(3 and 4) are a bit dry. But the book is an important contribution; it identifies a whole range of
important issues when it comes to organ donation, and may move the
debate away from the market/altruism dichotomy to a more nuanced and realistic discussion
over coping with large-scale enterprises. That would be valuable. I definitely recommend it if you
want to learn about the issues involved, and want some ideas for
healthy ways forward.

(A minor quibble before moving on. He discusses the US and a bunch of European countries, but not Canada. This is common in American/European comparisons — The Economist seems prone to it — but it is still irritating to this Anglo-Canadian reader.)

Healy looks at both the blood system and
the organ system, and his analysis of the failure of the blood system
in the US and in Europe to handle HIV and Hepatitis C infection makes
his reality-based approach uncomfortable reading for market enthusiasts
and market sceptics alike. In some cases, the market-based blood plasma
system in the US did better than the donation-based mainstream blood
system in responding to the potential for infection, even as both sets
of organizations had the same information at the same time. That they
too failed at the hurdle of throwing out blood plasma that was already
taken is no comfort. In fact, one of the more interesting conclusions
that Healy seems to come to is that there may be dependencies among the
different parts of the system (the collection agency, the donors, the
recipients, the hospitals) that are more important than the presence or
absence of payment in the determination of success or failure. The
issue may be more one of industrialization of the system than the
commodification of organs. This makes sense, once he points it out,
because any market-based approach is going to involve local monopsonies
and a few big players in any given region. There’s not going to be a
whole lot of competition going on among agencies, offering higher and
lower prices for kidneys, and so the outcome will depend on the
detailed interactions that take place. And if the shortage is to be
tackled (there were 50,000 people on the waiting list for kidneys in
the USA in 2000) then this is going to be a large-scale, industrial
effort whether or not payment is made to "donors". Money will
be involved because big organizations – private industry or not – have
lots of money flowing through them. People will make or break their
careers based on what decisions are to be made, and as the
HIV/hepatitis case proved, a public or not-for-profit agency is no
proof against tragedy.

Much of the discussion is handled in
terms of literature on the "gift relationship". It’s not something I
know much about (the literature, not the relationship, although I am a
bit cheap), but it seems to handle what sociology is best at. There are
layers of meaning that influence our decisions and attitudes to issues
such as organ transplants. Healy reminds us that life insurance was
once a controversial industry, with its connotations of payment for
death, that had to overcome cultural resistance and find ways to stake
a position that is both morally acceptable and profitable. The life
insurance industry was, of course, one of the last havens of
large-scale mutual co-operative organizations, and in some countries
the movement away from that model to a shareholder model is one that
has not yet played out. But I digress.

The nature of the gift
relationship is subtle. If payment is made to a funeral home rather
than directly to family members, then does that mean the organs of a
just-deceased loved one have been donated (with an acknowledgement made
in honour of the gesture) or have the family been paid? If a live
kidney donor is compensated for their time and discomfort, but not for
their kidney, have they been paid? There are more subtleties in the
book, both involving payment and not. It seems that we all agree that
organ donation is good in the abstract, but not so much when it comes
to the crunch. Attempts to narrow this gap between abstract approval
and on-the-spot reluctance may be seen as delicate and considerate
diplomacy, or may be seen as cynical manipulation (Healy compares parts
of the process to the con-man’s efforts to "cool the mark", meaning to
make the object of a scam accept his/her position as loser in a
resigned manner rather than in anger, so that they don’t report it to
the authorities).

My guess is that the systems developed in
different countries will involve some forms of payment but will steer
clear of the obvious payment that Becker appears to advocate. Any
agency, public or private, will have to be monitored (oops – there’s
that passive voice), and that monitoring will cost money. Issues like
deciding on how to determine death (and debates are going on about this
now, of course) are vulnerable to all kinds of incentives I’d rather
not think about; but someone has to, and it shouldn’t be someone with a
direct monetary stake in the outcome.

Healy convinced me that the big
issue is not the economists’ issue — of markets versus altruism —
but is the sociologists’ issue of coping with complex incentives in
large-scale industrial organizations, and that alone was worth the price of the book. Recommended.