Simulations and Mechanisms

I've learned two lessons in the last couple of days.

First, if you want to get some attention for a blog post, call it something eschatological like "Online Monoculture and the End of the Niche". If I had called it "Simulation of a 48-product market under simplistic assumptions", somehow I don't think I would be writing a follow-up. I don't like this lesson much. But I don't feel too guilty: if I were really trolling for traffic I could have called it "Learning from the Big Penis Book" [see Music Machinery for why].

Second, no matter how hard you try to be clear, many people don't get what you are trying to say. So maybe it's not their fault. For examples, see some of the comments here and here and even a bit here and on the original. The main complaint is that picking two example runs from a simplistic simulation of a small system with a small and fixed number of customers and products doesn't simulate the entire Internet. Where is the statistical sampling, the exploration of the sensitivity to parameters, the validation of the recommendation model? And on and on.

These folks don't get why people do simple models of complex things.

The goal of simulations is not always to reproduce reality as closely as possible. In fact, building a finely-tuned, elaborate model of a particular phenomenon actually gets in the way of finding generalizations, commonalities, and trends, because all that case-specific detail ties the model to one system and hides what it shares with others.

For example (and I'm not comparing my little blog post to any of these people's work), in chemistry, Roald Hoffmann got a Nobel Prize and may be the most influential theorist of his generation because he chose to use a highly simplified model of electronic structure (the extended Huckel model). It is well known that the extended Huckel model fails to include the most elementary features needed to reproduce a chemical bond. Yet Hoffmann was able to use this simple model to identify and explain huge numbers of trends among chemical structures precisely because it leaves out so many complicating factors. Later work using more sophisticated models, such as ab initio computations and density functional methods, lets you do much more accurate studies of individual molecules, but it's a lot harder to extract a comprehensible picture of the broad factors at work.

Or in economics, think of Paul Krugman's description of an economy with two products (hot dogs and buns). Silly, but justifiably so. In fact, read that piece for a lovely explanation of why such a thought experiment is worthwhile.

Or elsewhere in social sciences, think of Thomas Schelling's explorations of selection and sorting in Micromotives and Macrobehavior, or of Robert Axelrod's brilliantly overreaching The Evolution of Cooperation, which built a whole set of theories on a single two-choice game and influenced a generation of political scientists in the process. All these efforts work precisely because they look at simple and even unrealistic models. That's the only way you can capture mechanisms: general causes that lead to particular outcomes. More precise models would not improve these works; they would just obscure the insights.
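
To make "a single two-choice game" concrete, here is a toy iterated prisoner's dilemma round-robin in Python. It is only a sketch: the payoff numbers and the three strategies are the standard textbook ones, chosen for brevity, not a reconstruction of Axelrod's actual tournament entries.

```python
from itertools import combinations

# Standard prisoner's dilemma payoffs: (my points, their points) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

strategies = [always_defect, always_cooperate, tit_for_tat]
totals = {s.__name__: 0 for s in strategies}
for a, b in combinations(strategies, 2):
    score_a, score_b = play(a, b)
    totals[a.__name__] += score_a
    totals[b.__name__] += score_b

# Raw round-robin totals; the interesting question is how reciprocity fares
# as the mix of strategies in the pool changes.
print(totals)
```

That's the whole apparatus: two moves, a payoff table, and a handful of strategies, and yet it is enough to ask serious questions about when cooperation can emerge.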

That said, there are valid questions. Under some circumstances, aggregating large numbers of opinions into a single recommendation can give this odd combination of broader individual horizons and a narrower overall culture. Are there demonstrable cases of the monopoly populism model out there in the wild (aside from the big penis book)? Is this a common phenomenon or an uninteresting curiosity? Well, I don't know. I do think so, obviously; otherwise I would not have written the post. But it's a hunch, a hypothesis, a suggestion that I find intriguing and may or may not try to follow up. Hey, it's a blog post, not an academic paper.
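
For anyone who wants the mechanism spelled out, here is a minimal sketch in Python of the kind of aggregation effect I mean. It is not the 48-product simulation from the original post; every parameter and name below is an illustrative assumption. Each consumer stumbles across a few products on their own and, when a recommender is switched on, also sees whatever is currently most popular.

```python
import random
from collections import Counter

def simulate(n_consumers=1000, n_products=48, awareness=3, n_recommended=0, seed=1):
    """One pass through the market: consumers arrive in sequence and each buys one product."""
    rng = random.Random(seed)
    purchases = Counter()        # running popularity counts, visible to later consumers
    horizon_sizes = []           # how many products each consumer actually looked at
    for _ in range(n_consumers):
        taste = [rng.random() for _ in range(n_products)]      # private, idiosyncratic preferences
        own_finds = rng.sample(range(n_products), awareness)   # limited individual search
        recommended = [p for p, _ in purchases.most_common(n_recommended)]
        considered = set(own_finds) | set(recommended)
        horizon_sizes.append(len(considered))
        best = max(considered, key=lambda p: taste[p])         # buy the favourite among those seen
        purchases[best] += 1
    return {
        "avg products considered": sum(horizon_sizes) / n_consumers,
        "distinct products bought": len(purchases),
        "share going to the #1 hit": round(purchases.most_common(1)[0][1] / n_consumers, 2),
    }

# No recommender, then a recommender that shows everyone the current top five.
print(simulate(n_recommended=0))
print(simulate(n_recommended=5))
```

In this toy setup, switching on the recommendation widens each individual's horizon (more products considered per consumer) while concentrating a larger share of all purchases on the same few hits, which is exactly the odd combination described above.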


5 Comments

  1. Tangentially, I thought that SXSW presentation was great (apart from that damn tail) right up to the positive recommendations. (*All* music? Nobody wants to listen to *all* music, or has time to.) I felt towards the end they were basically trying to reinvent John Peel. Might it be easier (or if not easier, more productive) to focus on the conditions which made Peel’s programme possible?

  2. Jason Hinsperger

    “It is well known that the extended Huckel model fails to include the most elementary features needed to reproduce a chemical bond.”
    Seriously? Who do you hang out with where such things are well known?

  3. Fair enough – I was showing off that I used to know some theoretical chemistry once upon a time. But really, talk to any theoretical chemist you know and they’ll say the same thing. Honest.

  4. Repeating myself here, but Krugman has another delightful presentation of the role of models like this: http://web.mit.edu/krugman/www/dishpan.html.
    Introspectively … since reading that essay years ago I a) now use the phrase “dishpan model” as if everybody knows what it means, and b) consider it kind of unforgivable if you fail to construct at least one such model about the question at hand.
    The Slate article you link to above is delightful in its emphasis on the playful nature of these models and the tendency for far too many people to go all pompous as soon as they capture one.

  5. I believe there are so many misinformed notions of conceptual thought on this planet that it is truly hard for one person to determine what, and who, should be the valid source on any subject.
    There is an interesting excerpt from http://www.brooklynrevue.com/2009/03/24/the-110th-book-revue/
    It discusses Noam Chomsky's work and how it relates to the article above.
