So You Think You Can Tweet

It's confession time.

I try to put on a countercultural, intellectual persona here, but the truth is that I'm a fan of reality TV – some of it, anyway. In particular, despite having two left feet and being completely the wrong demographic, I adore So You Think You Can Dance (the US version; the judges ruin the Canadian version — take it up in the comments if you want to fight that one). Season 8 finished yesterday, with the brilliant Melanie Moore winning over the equally brilliant Sasha Mallory, and I'll miss my weekly fix for the rest of the summer. Here's two minutes of why I love it.

[embedded video]

There. Don't you feel better?

I know I'm getting sold a packaged bill of goods by a mega corp, but it works because even the best efforts of Gatorade and other sponsors, and the sometimes high cheese factor, cannot smother the heart of the show, which is that a lot of very talented young folks work really hard, put their heart and soul into what they do, and produce some great routines. Plus, Cat Deeley is adorable.

Anyway, one thing many people do after watching shows like this is to see what others are saying about the performances on forums and elsewhere. And I do the same. It's not just me either – there is a trend for TV shows and the Internet to be jointly marketed (Jeff Probst tweets along with Survivor episodes as they broadcast), and as computers move out of the study (if you have one) and into the living room (hello iPad), it's only going to get bigger. A lot bigger.

Jenny Davis at Cyborgology tells us that the biggest Twitter volumes ever recorded came at the end of the Women's World Cup final last month. Glee fans are apparently big on using social media while watching, according to ReadWriteWeb. Yahoo! claims! that! over! 80% of! TV! watchers! use a mobile device while watching. The Beeb chimes in with the same kind of numbers. For the TV companies, a big benefit is that social media makes audiences watch the show when it airs, rather than time-shifting it to skip the ads and watch at a more convenient time.

It's time to get to my point, which is that the standard story of how we have moved from the old world of mass media TV to the fragmented new worlds of the Internet is misleading. It's quite possible that the Internet will end up complementing mass media, rather than competing with it.

I'll post more about this mistake later, but I'll spare you that for now and instead give you what's probably the best-known routine for any SYTYCD fan, the Tim Burton-esque Ramalama. Enjoy.

[embedded video]

A Riot to Tweet?

If there is one thing that could make me look with favour on the idea of "banning suspected rioters from social media", it's reading Jeff Jarvis lecture the UK government about free speech, pointing to the US constitution and continually promoting his new book as he does so.

Jarvis' argument combines slanted rhetorical questions with banal platitudes. He asks "Who is to say what communication and content should be banned from whom on what platform? On my BlackBerry? My computer? My telephone? My street corner?" To which Mr. Cameron would probably say "Me and the police. Weren't you listening?"

And then there are these:

  • When anyone's speech is not free, no one's speech is free.
  • Censorship is not the path to civility. Only speech is.
  • Restricting speech cannot be done except in the context of free speech.
  • A tool used for good can be used for bad.
  • When debating public identity, one must decide what a public is.

To which I say: "If you have something to say, just say it."

In short, Jarvis knows nothing about the riots but knows that any interference with Twitter is a Bad Thing. And we should read his new book; the issues it tackles are full of, in a word that has gone viral among the digerati, "nuance".

Jeff Jarvis is backed up by Mathew Ingram, here and here. Ingram doesn't know anything about the riots either, but is equally sure that the wrong thing to do is to interfere with Twitter, because that "represents nothing less than an attack on the entire concept of freedom of speech, and that has some frightening consequences for any democracy." He does, thankfully, avoid telling us to read Jeff Jarvis's book.

Ingram claims that the mainstream media has covered the use of social media "hysterically", linking hilariously to The Sun as proof. Listen, Mathew: The Sun could cover a Queen's Park question time hysterically.

While Jarvis stays away from saying anything at all about the riots, Ingram seems convinced that social media played a role in them because, well, because. He has no specific evidence, but will that stop our critic of hysterical coverage? It will not. "While they may not cause revolutions, there's no question that these kinds of mobile, real-time networks and technologies can help to fuel them when they occur", he writes, and then quotes Googler Jared Cohen's claim that "social media tools may not be a trigger for such events, but they can clearly act as 'an accelerant'". And in case you wondered, he goes on: "It seems totalitarian states like Egypt and Libya aren't the only ones struggling with the impact of social media and the desire to muzzle services like Twitter and Facebook."

The message from both comes across loud and clear: the riots are a secondary story; what really matters is the ability of Twitter and Facebook to carry on their businesses. It's a silly message.

Linking to Dystopia: Wikileaks, Google, and Amazon

  • [WikiLeaks claims credit for the Egyptian uprisings] The frustrating thing is that WikiLeaks is an important and worthwhile development (and yes, I have donated money to it). But there is a self-centred streak to its publicity that doesn't help, especially when it's as far off the mark as this. (via Jodi Dean)
  • [Making the world's information available, until we get bored] Google's massive resources and short attention span are becoming a problem. In 2008 Google announced, in its usual self-congratulatory tone, a project to digitize the world's newspaper archives as part of its mission to make information accessible to everyone.* In May, Google shut down the initiative, with less fanfare, choosing instead to focus its efforts on a new payment system.** According to Richard Salvucci,* Google has done this before, buying the Paper of Record archive and then shutting it down without explanation.*
  • [Everyone Knows You're a Dog] El Reg is reporting that YouTube is moving towards a "real name" policy, which would seriously screw anyone wanting to upload videos of repressive actions. This looks to be part of the Google+ move to respond to Facebook: Google needs real names to better track your activity and hence to sell you to advertisers. The scope for unidentified and pseudonymous use of the web as described by Google here is about to get smaller.
  • [Amazon joins the Tea Party?] Amazon has long been the beneficiary of US sales tax laws written before online commerce, which charge sales tax only on purchases in states where companies have a physical presence. This means that in most states, buying from Amazon is cheaper than buying from a physical bookstore. You could argue that it's too much to expect Amazon to volunteer to pay taxes, but it's now actively opposing California's recent modernization of its sales tax code, complaining with a straight face that tracking tax information is too difficult. If only there was some kind of souped-up calculator-type machine that could help with that (see the sketch below).*
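To belabor the joke, here is a toy sketch of that calculator-type machine. Everything in it is invented for illustration – the rates are hypothetical, and real sales tax also varies by county and city – but the point stands: this is not a hard computation for a company with Amazon's engineering resources.

    # Hypothetical per-state sales tax rates -- illustrative numbers only,
    # not actual rates.
    STATE_TAX_RATES = {"CA": 0.0725, "NY": 0.04, "TX": 0.0625}

    def sales_tax(subtotal, state):
        """Return the sales tax owed on an order shipped to `state`."""
        return round(subtotal * STATE_TAX_RATES.get(state, 0.0), 2)

    print(sales_tax(100.00, "CA"))  # 7.25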

Links: Carr, Facebook, Ravelry, Amazon

  • At The Economist, Nicholas Carr and Jay Rosen are debating "This house believes that the internet is making journalism better, not worse". *  No prizes for guessing who I agree with: opening statements are factual from Carr, wishful thinking from Rosen. Also mentioned at Nicholas Carr's Rough Type blog.*
  • Also on Rough Type, Nicholas Carr is adopting Clay Shirky's asterisk-style* linking, at least for one post. I'm doing the same but can't be bothered to get rid of the underlining.* 
  • Israel uses Facebook to blacklist pro-Palestinian protesters.* The window in which Facebook was a space where the younger generation could meet outside the view of officialdom* is now closed.
  • Ravelry, the social network for knitters, discussed at Slate.* Quotation: "The company that runs it has just four employees, one of whom is responsible for programming the entire operation. It has never taken any venture capital money and has no plans to go public. Despite these apparent shortcomings, the site's members absolutely adore it." I'd say "Because of these characteristics" rather than "Despite these apparent shortcomings".
  • Amazon wants to buy UK online bookstore The Book Depository.** The OFT is looking into it.* Others are not happy.*

An Uncertain World III: Everything is Obvious, by Duncan J. Watts

Before leaving for a holiday (it was lovely; thanks for asking) I was going through a trio of books on the topical topic of “prediction is difficult, especially of the future”. I decreed that Dan Gardner’s Future Babble was limited, but otherwise OK, and then deemed Tim Harford’s Adapt a failed attempt to justify free-market thinking in the aftermath of its biggest failure in decades.

So now it’s on to the final book of the trio, and my favourite by some distance. Australian Duncan Watts is a physicist-turned-sociologist who now works! at! Yahoo! (I called him American a few posts ago. Thanks to Kevin Horgan for correcting me.) You may know Watts from such hits as Six Degrees, his explication of network science; his Music Lab experiments with Matthew Salganik and Peter Dodds (pdf), which showed that social influence can overwhelm any special quality of particular songs in separating hits from misses; and his arguments against Malcolm Gladwell’s The Tipping Point. Each of these reappears in his latest book, Everything is Obvious (Once You Know the Answer) – henceforth EIO (home page).

(Attention conservation notice: for some reason this took for ever to write and I still don’t like it, but I said I’d post it so I’m damn well going to, and then I can move on.)

If I were writing for a real publication I’d try to be dispassionate and objective, but the fact is that EIO’s goals are ones that I am hugely sympathetic to, and so I found myself cheering Watts along rather than reading him critically. I try not to do this, but between you and me and the wall we all do it to some extent. Certainly the reviewers of Adapt could have used a little less enthusiasm and a little more critical thinking before putting their praise in print. Still, I do think that EIO is the book that Adapt and Future Babble were trying to be.

According to its subtitle, EIO is about “how common sense fails us”, but this is misleading. “Common sense” has become shorthand for anti-intellectual, conservative approaches to government and social problems based on down-home practical experience, as in the Common Sense Revolution, but Watts has a more ambitious target than that. The “common sense” he is after is the need (among intellectuals and laypeople alike) to impose patterns on current events and on history, and our belief that once we see these patterns we will have a better grasp of the future.

On the one hand, the achievements of common sense are easily overlooked, but remarkable. Our socially-honed and culturally-specific intuitions are astonishingly good at guiding us through the maze of implicit rules and norms that keeps society functioning. Common sense has proven subtle enough to derail whole avenues of artificial intelligence research.

On the other hand, the strengths of common sense in navigating the particulars of travelling the subway or getting by in the workplace are weaknesses when it comes to understanding anything other than everyday life. Common sense is “not so much a worldview as a grab bag of logically inconsistent, often contradictory beliefs, each of which seems right at the time but carries no guarantee of being right at any other time” (17). When it comes to understanding society, we appeal to it at our peril.

Watts’s most obvious target is the kind of thinking that is commonplace in business, in politics, and in punditry – analogies, stories, and overextended but half-baked theories. But as a sociologist, Watts also has in his sights the economists’ belief that sociology is plagued by woolly thinking, and that economists can do sociologists’ work more rigorously and with more insight than sociologists themselves because economists understand incentives. He quotes from Freakonomics: “The typical economist believes that the world has not yet invented a problem that he [sic] cannot fix if given a free hand to design the proper incentive scheme” (51).

The usual modern take against homo economicus theories of society is that we are not rational beings. But the problem is not just that “our mental model of individual behavior is systematically flawed”, although Watts does run through failures of framing, priming and other foibles of the Predictably Irrational kind. The problem I’ve always had with such psychological explanations of our failures is that, once we are made aware of our systematic flaws, we can surely work on overcoming them. Would that solve the problem?

Well, no. And it’s not just that we stumble into fallacies of composition either, although he does show why intuitions and theories based on “representative individuals” whose “actions stand in for the actions and interactions of the many” are both tempting and doomed to fail. The focus on “special individuals” as the sparks that light the fires of viral success, Malcolm Gladwell’s “Law of the Few”, also comes in for debunking, including some convincing experiments based on Twitter cascades. And expertise as a whole is not much use: Chapter 6 covers much of what’s in Future Babble. But the smartest and most self-confident will still see themselves, plausibly, as able to surmount these barriers, cutting through the deadwood to find real causes, real incentives, and real mechanisms.

Watts continues, though. It is in his treatment of successes and failures that he makes his strongest arguments. He convincingly shows many suggested explanations of success, from the Mona Lisa to Facebook, to be exercises in circular thinking, where “Harry Potter was successful because it had exactly the attributes of Harry Potter, and not something else” (60). It is tempting to think that there is something special about the Mona Lisa that causes it to be the most famous painting in the world, some quality that it has that no other painting has. But there isn’t.

In the Music Lab experiments, different “worlds” of listeners were given the same set of songs and, in some cases, information on the listening history of others in their world. The experiment showed that songs that were hits in some worlds were flops in others, and that this variation came about specifically because of the information about others’ listening histories. Not only is it a mistake to look for Gladwell-like “special people”, it’s also a mistake to look for special qualities in hit songs as the “cause” of their success. As the fictional Mark Zuckerberg says to the Winklevoss twins in The Social Network, “if you had invented Facebook, you would have invented Facebook”. There was no crucial idea that separated Facebook from the pack; it pulled away because social networks are governed by cumulative advantage, and that’s how cumulative advantage works.
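Since the mechanism carries the whole argument, here is a minimal sketch of cumulative advantage at work. It is my own toy simplification, not the actual Music Lab protocol – the song count, listener count, and weighting rule are all invented – but it shows how identical songs end up as hits in some “worlds” and flops in others once listeners can see each other’s choices.

    import random

    def run_world(n_songs=8, n_listeners=500, social=True, seed=None):
        """One toy 'world': each listener downloads one song. With social
        influence on, choices are weighted by current download counts."""
        rng = random.Random(seed)
        downloads = [1] * n_songs  # every song starts with one download
        for _ in range(n_listeners):
            if social:
                # Cumulative advantage: popular songs attract more listeners.
                pick = rng.choices(range(n_songs), weights=downloads)[0]
            else:
                # Independent listeners: every song is equally likely.
                pick = rng.randrange(n_songs)
            downloads[pick] += 1
        return downloads

    # The songs are identical in every world, yet each world crowns its own hit.
    for world in range(3):
        counts = run_world(seed=world)
        print("world %d: hit = song %d, counts = %s"
              % (world, counts.index(max(counts)), counts))

Run it a few times: which song wins varies from world to world, and the winner’s margin is far larger than anything the (nonexistent) differences between the songs could explain.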

And he doesn’t stop there. We’ve perhaps read before that “history is only run once”, causing us to mistakenly “perceive what actually happened as having been inevitable” (112), but Watts goes further in explaining why we misunderstand the causes of events: events themselves are identifiable only in hindsight. Was “the US financial crisis of 2008” a blip in the recent history of capitalism, or just the first act of “the international financial crisis of 2008 to 2015”, complete with a break-up of the euro zone and who knows what else? We can only know in hindsight.

In summary, Watts is saying not only that the “right lever” is difficult to find when it comes to understanding society, but that in many social phenomena there never was a lever at all; “what appear to us to be causal explanations are in fact just stories — descriptions of what happened that tell us little, if anything, about the mechanisms at work” (27).

There’s a lot here, and by two-thirds of the way through the book I was happily convinced that I would never again need to reach for a business strategy or populist social science book. It’s a thorough and convincing debunking of attempts to understand the chaotic path of social progress.

The difficulty, of course, is that once the Emperor’s nudity has been pointed out, the man still needs a set of clothes. What to put in the place of mistaken theories? Part II is an attempt to address this question, and it’s less successful than Part I. There are three main chapters (Chapter 7 belongs, to my mind, in Part I rather than Part II). Chapter 8 covers much the same ground as Adapt, with some of the same suggestions: experiment, measure and react, rely on prediction as little as possible. Condensed into one chapter, this bothered me less than it did in Adapt, but it’s still the weakest chapter in the book. It’s a smattering of ideas, some stronger than others, still searching for that magic recipe which he has just convinced us doesn’t exist.

Chapter 9 is far more interesting though. If success and failure are governed largely by luck, how does that affect our views of society? Watts looks at theories of justice in the light of what Part I has told us, and comes out firmly in favour of an egalitarian, or at least Rawlsian, point of view. It’s just one chapter, and Watts would (I would guess) be the first to admit it’s just an introduction, but it’s a bold step for a book on popular social science to take, and a valuable one.

It’s enough that I forgive Watts his final chapter, which is really a set of reflections on physics and sociology. It somehow forgets to mention that, of all the natural sciences to base social theories on, physics is surely the least plausible, especially when you have just been convinced that the specifics of particular situations are crucial. It’s easy to forget how little predictive success physics has. Sure, it does a good job with the tides, but even the best physics theories of complex phenomena (the BCS theory of superconductivity, say, or the Kondo model) have little success when it comes to prediction. BCS did not guide or anticipate the discovery of high-temperature superconductors, for example.

After reading these three books I am left with several unanswered questions about how to proceed once we know that large parts of the future are hidden from view. We still need to make decisions, including big and irreversible ones, in the face of uncertainty, but can we still put heart and soul into new initiatives when we know that their outcome is, at least in part, out of our control? How do you motivate teams without misrepresenting the role of luck? How do we approach questions of reward and punishment when the consequences of our actions are unforeseeable? I’m sure there’s plenty written on these questions, and I’ll be looking for it with a new appreciation of their importance. Any suggestions on places to start?

Postponement

I had hoped to have a review of Everything is Obvious up by now, but, well, life and all that. And now I'll be offline for about ten days. Perhaps my batteries will be recharged when I get back. It's a good job nobody is paying me for this or I'd be fired.