Nina Power is pissed off that this week's student protests are always compared to ye grande olde protests of 1968.
My generation was pissed off about that thirty years ago.
There were some lively comments on my previous post, which was really only half-finished, so I’m going to write the other half here. But first, let’s get a few things clear.
I like transparency in government. I think it’s great that people campaign for a more open government, especially here in Canada. I am impressed with the kinds of things some people manage to achieve by scraping through government data.
Got that? OK. Now here is what’s wrong with Government 2.0, one point at a time.
Information is not always democratizing
In comments on the previous post, Kevin Donovan and Fazal Majid both pointed to this article by Michael Gurstein, who asks “who is in a position to make ‘effective use’ of this newly available data?” and answers himself:
‘open data’ empowers those with access to the basic infrastructure and the background knowledge and skills to make use of the data for specific ends. Given in fact, that these above mentioned resources are more likely to be found among those who already overall have access to and the resources for making effective use of digitally available information one could suggest that a primary impact of “open data” may be to further empower and enrich the already empowered and the well provided for rather than those most in need of the benefits of such new developments
It’s worth reading the whole article, and the example of land ownership record digitization in Bangalore, which “allowed the well to do to take the information provided and use that as the basis for instructions to land surveyors and lawyers and others to challenge titles, exploit gaps in title, take advantage of mistakes in documentation, identify opportunities and targets for bribery, among others. They were able to directly translate their enhanced access to the information along with their already available access to capital and professional skills into unequal contests around land titles, court actions, offers of purchase and so on for self-benefit and to further marginalize those already marginalized”
In a recent talk, Kentaro Toyama asks his audience (at the 8 minute mark), “You and a poor rural farmer are each given a single e-mail account and asked to raise as much money as you can for a charity of your choice. Who would be able to raise more money?” The answer, of course, is “you” (like me, probably urban and Western), because we know wealthy people, we have lots of friends with e-mail accounts, we are literate and have lots of experience writing e-mails, and so on. The technology is identical for each, but the result is different because of the context in which it is being used. The Internet does not democratize. Instead, technology amplifies people’s ability to get things done, and the more ability you start with, the more you get.
How would this work with Open Government? Consider the “Open311” services (http://open311.org), which let citizens report non-emergency issues to local government. The archetypal case is potholes that need fixing. If these services come into widespread use, they will push cities to skew services towards the neighbourhoods that report most frequently (else why implement the service?), which will be the smartphone-owning, home-owning, better-off neighbourhoods. The better off get better services.
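For the curious, here is a minimal Python sketch of what filing such a report could look like, in the style of the Open311 GeoReport v2 interface. The endpoint URL, the service code, and the API key are placeholders of my own invention, not any real city’s deployment.

```python
# A hedged sketch of an Open311-style pothole report; the endpoint,
# service code, and API key below are illustrative placeholders.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://city.example.org/open311/v2/requests.json"  # hypothetical

def report_pothole(lat, lon, description, api_key):
    """POST a pothole report in the style of GeoReport v2."""
    payload = urllib.parse.urlencode({
        "api_key": api_key,          # issued by the city (placeholder here)
        "service_code": "POTHOLE",   # service codes vary from city to city
        "lat": lat,
        "long": lon,
        "description": description,
    }).encode()
    with urllib.request.urlopen(ENDPOINT, data=payload) as resp:
        return json.load(resp)       # typically echoes a service_request_id

# Example (against a real deployment):
# report_pothole(45.42, -75.69, "Deep pothole, northbound lane", "MY_KEY")
```

A few lines of code for anyone with a smartphone, a data plan, and the habit of reporting things; invisible to everyone else. That asymmetry is exactly the skew I am worried about.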
Information is not always the problem
The pothole reporting case is an example of this too. Round the corner from my house is a very rough patch of street that has been that way for a couple of years. The city knows it’s in bad shape. Why isn’t it being fixed? Because there are other, busier stretches of road that are higher priority. Information is not the problem; resources are the problem. There is no group of city workers sitting round waiting for a call about a pothole so they can go out and fix it.
In a different vein, it’s worth reading danah boyd’s talk about the unintended consequences of transparency when it is not accompanied by interpretation. Raw data such as the registered sex offender list certainly counts as “open government”, but she retells an exposé from The Economist which began
with the story of Wendy Whitaker who was arrested in 1996 at the age of 17 for having oral sex with a classmate three weeks shy of his 16th birthday. She was convicted of sodomy against a minor, ended up in jail for a year and is now listed on the registry. "She sees people whispering, and parents pulling their children indoors when she walks by." Not only did she have to pay the price for her teenage indiscretions by going to jail, but she's forced to deal with them day-in and day-out for the rest of her life. Because of the registry. I wish that I could say that Wendy's story was rare, but I hear stories like this over and over again. People's lives ruined because of the registry.
Seeing the problem as one of information can lead us down the wrong path. As boyd says:
In focusing on the first step – transparency or access – it’s easy to forget the bigger picture. Internet access does not automagically create an informed citizenry. Likewise, transparent data doesn’t make an informed citizenry. Transparency is only the first step. And when we treat transparency as an end in itself, we can create all sorts of unintended consequences.
Transparency is an arms race
The Open Government initiatives are getting lots of information out into the public that wasn’t effectively public before. That’s good. But now that the information about, say, voting behaviour and donations is open, there is a clear incentive for donors and donation recipients to muddy the waters. Intermediaries will appear. Money will be exchanged not as donations but in some other form. I’m sure there are endless ways of working the system, now that it needs to be worked. Systems respond to changes, and in this case a move to transparency will be met by a move to obfuscation.
Privacy is the other side of the coin
There is one essay in the Open Government collection (by Jeff Jonas and Jim Harper) that addresses privacy concerns. Once data is targeted to be made public, it becomes important that improper or inappropriate data not be bandied about. So they recommend such strategies as limiting backups (!), destroying old data, “minimal disclosure in transfer between projects” (which goes counter to the Gov 2.0 direction) and more. It’s almost like the arms race, except instead of mischievously hiding data that could harm yourself, IT professionals are dutifully hiding and destroying data that could harm others.
Data anonymization and aggregation are often cited as ways of handling these issues, but recent developments in “re-identification” have shown that such efforts may be doomed. The identification of individuals within the Netflix Prize data set (paper) led to the cancellation of the second Netflix contest, and has cast a chill on crowdsourcing contests. Legal scholar Paul Ohm’s recent long but highly readable paper on Broken Promises of Privacy suggests that we need to re-assess the basis for many of our data privacy laws, because information previously thought to be anonymous must now be treated as potentially privacy-damaging.
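To see why anonymization is so fragile, here is a toy Python illustration using entirely invented data. It has none of the statistical sophistication of the actual Netflix de-anonymization (which tolerated noisy, partial matches), but the principle is the same: a handful of attribute-date pairs acts as a fingerprint that can be joined against public data.

```python
# Toy re-identification by linkage; all data below is invented.
anonymized = [  # names stripped, user ids hashed
    ("a1f3", "Movie X", 5, "2005-03-02"),
    ("a1f3", "Movie Y", 1, "2005-03-09"),
    ("b7c2", "Movie X", 3, "2005-04-11"),
]

public_reviews = [  # e.g. reviews posted under real names on another site
    ("Jane Doe", "Movie X", 5, "2005-03-02"),
    ("Jane Doe", "Movie Y", 1, "2005-03-09"),
]

def reidentify(anon, public):
    """Map hashed ids to candidate names via (movie, rating, date) joins."""
    candidates = {}
    for uid, movie, rating, date in anon:
        for name, m, r, d in public:
            if (movie, rating, date) == (m, r, d):
                candidates.setdefault(uid, set()).add(name)
    return candidates

print(reidentify(anonymized, public_reviews))  # {'a1f3': {'Jane Doe'}}
```

No column in the “anonymized” table is a name, yet the join recovers one. That is the heart of Ohm’s argument: quasi-identifiers are everywhere.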
Bill Schrier, CTO for the City of Seattle, addresses privacy in an essay in the Open Government collection. As just one example, many elected officials maintain lists of email addresses for use in contacting constituents, and these lists are a common target for public disclosure requests. It is difficult to prevent misuse of these lists for spam emails and other inappropriate uses. Open Government is good, but receiving penis- or breast-enlargement emails as a result of emailing your councillor? Not so much. Other examples he has come across include a chilling effect on grievances because complaint investigations are public records, and the city being forced to provide a complete list of full legal names and dates of birth to a local radio station, “two of the three pieces of information needed to steal employee identities”. Schrier is an open government advocate, but making this data public is “not a trivial or inexpensive task” if it is to be done with care. As he asks, “most of [the problems] can be overcome. But do we really want to make it that easy for everyone to obtain and use that data?”
Money flows to Silicon Valley
I’m not quite sure how to articulate this final point, but as I haven’t seen it elsewhere I’ll give it a go. Let’s take an example. Google and the City of San Francisco developed a format, the General Transit Feed Specification, which “defines a common format for public transportation schedules and associated geographic information.” Using this, cities can push their transit schedules to Google for publication on Google Maps.
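Concretely, a GTFS feed is nothing exotic: a zip archive of comma-separated text files (stops.txt, routes.txt, stop_times.txt, and so on), which is part of why it is cheap for cities to publish and easy for programmers to consume. Here is a minimal Python sketch of reading stop locations, with a placeholder filename:

```python
# Read (stop_name, lat, lon) triples from a GTFS feed, which is a zip
# archive containing CSV files; stops.txt is one of the required files.
import csv
import io
import zipfile

def load_stops(feed_path):
    """Return a list of (stop_name, lat, lon) from a GTFS feed."""
    with zipfile.ZipFile(feed_path) as feed:
        with feed.open("stops.txt") as raw:
            reader = csv.DictReader(io.TextIOWrapper(raw, "utf-8-sig"))
            return [(row["stop_name"],
                     float(row["stop_lat"]),
                     float(row["stop_lon"]))
                    for row in reader]

# Example (with a feed downloaded from a transit agency):
# print(load_stops("transit_feed.zip")[:3])
```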
Useful, for sure. I love to be able to plan trips and Google Maps is a convenient way to do it. But there is a downside, which is that a purely local transaction, of me planning a bus trip in my home city, now involves sending money (via advertising) to California. The winner-take-all nature of the Web means that an increasing number of apparently local exchanges are done via California, such as personal ads via Craigslist. Is that a good thing for the local economy? How would we feel if it weren’t Google but a Chinese company that got the information and the money along with it?
So am I opposed to Open Government? No, but I’m far from convinced that the digital approach to the problem will actually lead to an increase in openness.
– why open data needs a non-commercial-use license, and the lessons of microcredit.
“Government 2.0” is the initiative to make government data open to the public using Web-based technologies. Leading light Tim O’Reilly describes it as “government as platform”. The idea is that open data – provided in such a way that programmers can write software to read it, analyze it, and transform it – increases transparency and promotes innovation.
Government 2.0 is on a roll. It got a big boost in the US from Barack Obama’s early memo on transparency and open government, and the setting up of the data.gov website. In the UK there is data.gov.uk and David Cameron’s “Big Society” initiative. Even Canada’s notoriously secretive government is consulting about it, and many cities have opened up data feeds. And there are other initiatives around the world. Sounds great? Well, yes and no.
2.0 Agendas
The rhetoric of Government 2.0 draws heavily from efforts by private citizens and non-profit groups to make government more accountable. It has a civil liberties flavour, with talk of citizen engagement and of citizens’ rights (“giving citizens access to data that is theirs”), participatory society, collaborative democracy, transparency, and so on. Take the new collection of essays Open Government. The examples of Government 2.0 initiatives almost all deal with citizen access to the US government’s inner workings: campaign contributions, lobbying data, congressional votes, legislative proceedings, federal government contracts and spending, court proceedings. It describes efforts by non-commercial groups such as opensecrets.org, maplight.org, followthemoney.org, govtrack.us and so on to use this data to enforce greater accountability. This is all well and good – although proponents should recognize that this access to information is one step in an arms race, and that those who want to hide information will now look for ways to do so.
But the “Open Government” project has a second agenda: it demands that data be made open not only to citizens, but also to private companies. The recent Gov 2.0 Summit and Gov 2.0 Expo, also organized by O’Reilly Media, are big Washington events sponsored by major technology companies. It is clear that there is money to be made in Government 2.0 – money that is mentioned very sparingly in the Open Government book.
Participation or privatization?
Should we care if private companies get our data for free? After all, maps and weather have already been made available, and innovative commercial applications such as Google Maps have made it useful to the public. Many Government 2.0 enthusiasts see commercial opportunities complementing non-commercial activities. Tim O’Reilly writes, “The whole point of government as a platform is to encourage the private sector to build applications that government didn’t consider or doesn’t have the resources to create. Open data is a powerful way to enable the private sector to do just that.”
There’s another name for outsourcing government services to private industry, and that’s privatization. Talking of “a dramatic redistribution of power from elites in Whitehall to the man and woman on the street”, as David Cameron did, is a lot like the Margaret Thatcher line that selling off nationalized industries was restoring them to the people. As Ed Miliband says, “for all the talk of a big society, what is actually on the way is cuts and the abandonment of community projects across Britain.” O’Reilly comes very close to endorsing the Big Society program (he praises it in a recent tweet) and says that “In some sense, government hitting the wall on deficits is a good thing, if it forces a real reboot,” which is not only callous, but uncomfortably close to Naomi Klein’s “Shock Doctrine” thesis. You have to wonder: is this what the civil libertarians who have been pulled into the Gov 2.0 effort really want? Some of them need to think a little harder before dancing to Tim O’Reilly’s tune.
The problems with profit
Many of the heavy hitters in Silicon Valley believe in “social entrepreneurship”, convinced that they represent an enlightened, humane, and smart capitalism. But money has a way of corrupting. The story of microcredit has lessons that Government 2.0 promoters could learn from.
Muhammad Yunus won the Nobel Peace Prize for founding the non-profit Grameen Bank, which extended “microcredit” loans to groups of poor people, with very encouraging results. When Pierre Omidyar, the eBay billionaire, got involved in microfinance he decided that the “non-profit” part of the story was a limitation. As the New Yorker’s Connie Bruck reported a few years ago: “Yunus is now seen by Omidyar and many others as the archetypal founder, too wedded to his original vision. In recent years, younger and nimbler players have been taking microfinance—their preferred term—toward the idea of building a fully commercial, profit-making sector. This conflict, between pure do-gooders and profit-minded do-gooders, has come to define the current debate in the microfinance world.”
Unfortunately it has not turned out so well. Yunus’s worry that profit-oriented companies were “pushing microfinance in the loansharking direction” has been borne out. This year’s IPO by SKS Microfinance, a for-profit company active in India, made tens of millions for some of its board members, as well as for board members of Seattle-based nonprofit Unitus, which has invested in SKS. Meanwhile, the loans from these commercial microfinanciers have become “death traps”, and after seventeen SKS clients in the Indian state of Andhra Pradesh committed suicide, the clamour for answers about the role of profit has become louder. There are real differences between profit-seeking and not-for-profit groups, and they must be kept in mind.
A step forward
There are other issues with Government 2.0 that I may come back to later, but let’s wrap this up. Glossing over the difference between companies and citizens obscures key issues at the heart of Government 2.0 and risks corrupting the whole enterprise. Fortunately, there is a solution. When government puts its data into the open, it does so with a license attached to it. In most cases this license permits both non-commercial and commercial use of the data, so long as the source is acknowledged. It’s time to consider licensing the data for non-commercial use only, and having a separate license for commercial users. Perhaps permitting some limited commercial use for free, but charging for more extensive use, would be a first step to protecting our data from the temptations of profit.
Update: Part two is here.
Macrowikinomics opens with the rescue of a young Haitian girl after the earthquake of January 2010. Some of her rescuers were far away in the USA: as soon as the earthquake struck, a group of American volunteers put together a web site using Ushahidi, the Kenyan-created “crisis mapping” software, and together with expatriate Haitians started to turn text messages and tweets into points on a map that could be shared with aid workers. In doing so, they “found themselves center stage in an urgent effort to save lives during one of the largest relief operations in history” [5], and helped to save the seven-year-old.
Macrowikinomics presents Ushahidi as a “new paradigm for humanitarian efforts” that
“turns much of the conventional wisdom upside down. Rather than sit idly by waiting for help, victims supply on-the-ground data using cell phones or whatever communication channels are open to them. Rather than simply donate money, a self-organized network of volunteers triages this data, translating and authenticating text messages and plotting incidents on interactive mapping displays that help aid workers target their response” [6].
The “Mass-collaboration” approach is a contrast to “the old crisis management paradigm” in which “big institutions and aid workers parachute into a crisis, assess the situation, and dispense aid with the limited information they have” [6].
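To picture the volunteer triage step, here is a toy Python sketch. The two-entry gazetteer is invented, and this resembles Ushahidi’s actual pipeline only in outline; in reality the translation, geolocation, and authentication were done by human volunteers, which is precisely the point I take up below.

```python
# A toy sketch of triaging an incoming SMS into a map-ready incident.
# The gazetteer entries are illustrative, not real Ushahidi data.
GAZETTEER = {  # place name -> (lat, lon)
    "carrefour": (18.53, -72.41),
    "petionville": (18.51, -72.29),
}

def triage(sms_text, place_hint):
    """Combine a raw SMS with a volunteer's location guess."""
    coords = GAZETTEER.get(place_hint.strip().lower())
    if coords is None:
        return None  # cannot geolocate; escalate to a human volunteer
    lat, lon = coords
    return {
        "text": sms_text,
        "lat": lat,
        "lon": lon,
        "verified": False,  # authentication is a separate human step
    }

print(triage("Family trapped, need water", "Carrefour"))
```

Even in this cartoon version, the hard parts (the place hint, the verification) are human judgments; the code is the easy ten percent.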
It is admirable that volunteers would put so much effort into helping people in crisis, and it is good that this software enabled them to make a contribution. But is Ushahidi “a new paradigm” for urgent relief operations? Does it really change idle victims and passive donors into active participants? And how much of a contribution did it really make to dealing with the Haiti earthquake? I want to spend a whole post on these questions because the story sets the tone for the whole book.
Is Information the Bottleneck?
In a fine talk posted on YouTube just today, IT researcher Kentaro Toyama talks of the Five Myths of Information Technologies for International Development. One of these myths is that “Information is the Bottleneck”.
Toyama asks his audience why they are not richer, more educated, and more compassionate than they are, given that they would like to be and that the information needed to become so is available to them (e.g., MIT’s OpenCourseWare). The answer he gives is that information is not the bottleneck to achieving any of these goals. It’s a theme I have touched on before in the context of digital activism and which will come up again in these posts, and it applies to crisis response too.
In crisis situations, and in aid operations more generally, information may not be the main problem. As Toyama says on his blog, the Ushahidi effort is two distinct things: (1) the technology platform and (2) the individuals who built and use it, and who are dedicated to helping.
“Much of the excess hype around Ushahidi comes from people who think that (1) is the secret sauce, and that it offers a new hope for development. But, actually, it’s (2) that makes Ushahidi great, and it’s not particularly new. … for aid purposes, even (1) and (2) only go so far, because (3) is missing. And, what’s (3)? (3) is human/institutional intent and capacity on the ground. As wonderful as Ushahidi (1)+(2) is, it makes no difference if there isn’t (3), a force on the ground that can actually respond meaningfully to the noisy information (1)+(2) produces. In the case of Haiti, response teams were already overwhelmed. Additional information, per se, was only adding to the unread mail.”
And here is my favourite book on international aid, Elizabeth Pisani’s The Wisdom of Whores, on information as it affected the AIDS effort:
“We were collecting more and more really good information, and then not acting on it. Two things were getting in the way – ideology and money. In the AIDS industry, we have too much of both.” [11]
Ushahidi in Haiti
I can find only a few public attempts to evaluate the Ushahidi effort in Haiti. One is a 16-page report published in September by the United States Institute of Peace, an organization that donated to the effort. It describes some of the contributions that the Ushahidi effort made, but presents little in the way of evaluation or analysis. A set of thoughtful reflections by Jaro Valuchi is posted at pakreport.org. A reflection by audiencescapes is doubtful of the impact of Ushahidi and of crowdsourcing:
Despite the enthusiasm surrounding mobile communications, they have real limitations in an environment such as Haiti where most households lack access to the electrical grid and the mobile subscription rate is less than 45 percent. Even Thomson Reuters, which launched a service that allowed survivors of Haiti's earthquake to receive critical information by text message directly to their phones, free of charge, only had 24,000 people register.
This is a recognized limitation even by the resource’s proponents. Where the tool draws its strength is in its ability to provide “real-time” information. Ushahidi’s answer to critics of crowdsourcing’s lack of verifiability is SwiftRiver, a new open source software platform that acts as a verifying filter that sifts through information through the multiplicity of channels that feeds crowdsourcing. However, the platform has only just recently been made available to the public, so it too is somewhat untested.
The question of how useful Ushahidi was during the crisis in Haiti for the most part remains unanswered.
Another is a pair of articles by Paul Currion, an IT specialist who runs a consultancy for humanitarian operations. He studied the roughly 3,000 messages that came in to the Ushahidi effort over the 4636 shortcode that was set up for the purpose, and his efforts resulted in frustration:
In the end, I was reduced to bouncing around the Ushahidi map, zooming in and out on individual reports – not something I would have time to do if I was actually in the field. Harsh as it sounds, my conclusion was that the data that crowdsourcing of this type is capable of collecting in a large-scale disaster response is operationally useless…
Disaster response on the scale of the Haiti earthquake or the Pakistan floods is not simply a question of aggregating individual experiences. Anecdotes about children being pulled from rubble by Search and Rescue teams [such as that told on the first page of Macrowikinomics – ed] are heart-warming and may help raise money for aid agencies but such stories are relatively incidental when the humanitarian need is clean water for 1 million people living in that rubble. Crowdsourced information – that is, information voluntarily submitted in an open call to the public – will not ever provide the sort of detail that aid agencies need to procure and supply essential services to entire populations.
That doesn't mean that crowdsourcing is useless: based on the evidence from Haiti, Ushahidi did contribute to Search and Rescue (SAR). The reason for that is because SAR requires the receipt of a specific request for a specific service at a specific location to be delivered by a specific provider – the opposite of crowdsourcing. SAR is far from being a core component of most humanitarian responses, and benefits from a chain of command that makes responding much simpler. Since that same chain of command does not exist in the wider humanitarian community, ensuring any response to an individual 4636 message is almost impossible.
Currion asks “could crowdsourcing add value to humanitarian efforts?” and answers himself:
Perhaps it could. However, the problem is that nobody who is promoting crowdsourcing currently has presented convincing arguments for that added value. To the extent that it's a crowdsourcing tool, Ushahidi is not useful; to the extent that it's useful, Ushahidi is not a crowdsourcing tool.
The article provoked some useful discussion (a critical response from Robert Munro, who was part of the text message processing effort, is here) and a follow-up from Currion, in which he concludes:
there’s a lot of grandiose yet vague promises that crowdsourcing will revolutionise humanitarian response, and I think we need more than vague promises. Misinformed reporting plays a role in my frustration, but nobody seems to be interested in correcting that misinformation – and when people persist in claiming that their tool will revolutionise the sector based on no evidence, I get suspicious.
When I was writing the article, I could only judge whether crowdsourcing added value based on the evidence that was available to me – just like everybody else. If presented with new evidence (perhaps an expanded dataset or actual testimonials), I’m prepared to change my opinions – but nobody has presented any such evidence, and we just get repeated anecdotes about how the director of FEMA really liked the Ushahidi map. In particular I asked whether anybody had a clear use case scenario, but none has been forthcoming.
Those on the inside also recognize that Ushahidi is a complementary tool to other efforts rather than “a new paradigm”. Chris Blow, “one of the longest serving community members”, makes the point that “Systems like Ushahidi have turned enormous communication barriers into a trivial installation and training process. But there is a whole other 90% of real work”.
In other words, Ushahidi and other software projects are one more way that people with a particular set of skills can contribute to disaster relief, but they are no more (and no less) valuable than many other efforts that don’t get attention from high-profile media and books. In a theme that will be repeated throughout this review, the closer you look at examples of Internet-based collaboration, the more they look like a new medium for realizing old (and often admirable) commitments. There is no paradigmatic shift separating Oxfam and Ushahidi.
In several places in his recent book Cognitive Surplus, Clay Shirky also highlights Ushahidi’s impact on reporting electoral violence, and again makes the comparison between the information Ushahidi provides and that of the mainstream media, suggesting a fault line between old ways of doing things and the promises of the new technology. But the comparison is again misleading. Mainstream media has never been the way that those on the ground get their information about politically charged events. Information about human rights abuses in Central America during the 1980s spread throughout North America via networks of solidarity groups, small independent publications, and courageous individuals who went to record what was happening and came back with their reports. Is our collective understanding of violence in Congo today better than that of El Salvador 30 years ago? Not noticeably. The dividing line between activists and mainstream media is not erased by the Internet world.
Turf wars
One of the benefits of self-organized mass collaboration, according to Macrowikinomics, is that crowdsourcing avoids the inter-organizational turf wars that plague old-style aid institutions. The authors are damning of the old ways of doing things, claiming that the top-down approach leads to “poor decision making, redundancy, and confusion, and often to wasted money and wasted opportunities. To make matters worse, the end recipients of disaster relief are almost always treated as helpless victims and passive consumers of other people’s charity.” [6] Harsh words, and ones that many grass-roots aid organizations involved in Haiti may argue with.
But does the digital paradigm avoid these problems? The answer is no.
Ushahidi plotted incidents and reports on maps of Haiti, but getting realistic maps of post-earthquake Haiti was far from simple. Ushahidi relied on the work of volunteers from OpenStreetMap (OSM), a remarkable effort to build “a free editable map of the whole world” using volunteer contributions. OpenStreetMap was not the only effort to map Haiti during the crisis; other teams were using Google MapMaker. It would have been better if the two efforts could have been combined to avoid wasteful duplication, but that was not possible, at least in the crucial early days, despite good intentions. The problem was licensing. As a Google representative explained on a forum:
We have previously sponsored OSM's efforts and explored with them how to work together. A sticking point in these past discussions has been OSM's share-alike license clause, according to which any data we combine with their data set must be shared back to OSM. Since we routinely combine proprietary 3rd party and user-contributed data sets on Google Maps in order to create the world's richest base map, we cannot share these combined data sets back to OSM.
Efforts by the Open Geospatial Consortium helped to resolve some of the duplication, but blogger Matt Ball concluded that “While all these efforts were helpful, clearly more work needs to be done for greater coordination and easier portability of data between different platforms and different creators and users of the data.” So turf wars, in the form of data licenses, complicated the digital activists’ efforts just as they complicate the work of other aid groups.
Conclusion
The lesson here is not that Ushahidi is a failure or a waste of time. But the evidence so far shows that it is a valuable but small contribution to the daunting task of disaster relief. New technology can help humanitarian operations, but it needs to be seen as a complement to existing work, not “a new paradigm for humanitarian efforts”. The difference between digital “self-organization” and the aging institutions criticized throughout Macrowikinomics is not so big – both are plagued by issues such as legal contracts, confusion, contradictory commitments, and even rivalries. People are people, whether they interact digitally or face to face. In the end, it’s the people on the ground who matter the most, and governments, large NGOs and experienced staff – for all their clumsiness and bureaucracy – will continue to be central to crisis relief efforts.
The Ushahidi effort in Haiti points to one of the central problems with Macrowikinomics: if information is not the problem, then information is not the solution. Viewing the world through a technology-centric, information-centric lens leads us down the wrong path as we try to diagnose the ills of society and to build a better world.
This is the first of a several-part series of posts on the new book Macrowikinomics, by Don Tapscott and Anthony D. Williams. This post is a broad statement of what I think of the book: subsequent posts will look at particular case studies. Numbers in [square brackets] are page numbers in the book.
The Internet is a new terrain on which old conflicts of class, gender, wealth and power are being played out, and it’s not clear which contestants this new battleground favours.
That’s not how Don Tapscott and Anthony D. Williams see things. In Macrowikinomics, their follow-up to the hugely successful Wikinomics, they portray the Internet itself as a revolutionary force for change, carrying us to a radically different future. To them, society has a new set of fault lines, and they are technological rather than political or economic. They divide the failing, decaying institutions of a bygone age (musty, industrial, closed, and hierarchical) from the blooming organic forms of the digital world (dynamic, self-organized, collaborative, open, and democratic). It’s get on board or be left behind. In this way the book is at right angles to reality. I don’t completely oppose what they say; I just think the fault lines that matter run in a different direction.
Tapscott and Williams value things that matter to many people, myself included. They value voluntary collaboration and sharing, openness and integrity in politics and business, and democracy in the sense of people having a say in the shape of their society. They seek ways to extend civic engagement, promote volunteer action, and encourage “global governance from the ground up” [302]. Their attitudes to egalitarianism are probably different from mine, but in terms of what constitutes a fair society we have some things in common. So it is unfortunate that I disagree with them deeply. Yet disagree I do, and these posts must be a largely negative commentary on their book.
You can see the root of my disagreement, plain as day, on the book jacket, where six of the seven blurbs are from CEOs of major companies. Appealing to these elite voices contradicts the message of the book, which is that we need to abandon hierarchy and move to a more democratic “age of networked intelligence” [24]. The contradiction continues inside, where the book adopts a consistently populist tone, arguing against “the cult of the policy expert” [270], with sentences like this sprinkled liberally throughout:
The closed, hierarchical, and static regulatory structures of today must give way to new processes that embody values of openness, empowerment, inclusiveness, and knowledge sharing. [294]
Then a few pages later they enthusiastically claim that “the Global Agenda Partnership shows a way forward” [307] for global governance. What is this Partnership? It is an initiative of the organization that hosts the Davos summits, the exclusive, by-invitation-only annual gathering of the world’s richest and most influential. If the Davos attendees are not elite, I don’t know who is.
I suspect that Tapscott and Williams do not see this juxtaposition of billionaires and populism as contradictory. After all, if the crisis is one of vision, inspiration, and having the courage to take the digital plunge then anyone can lead that transformation, be they CEO or student or social worker. The new revolutionaries and the defenders of the old order are distinguished by attitude and aptitude, not by class and gender. Some are prepared to bet on the new technologies of participation and some have closed minds, remaining tied to outdated business models.
I can’t agree. Macrowikinomics sees the world through the lens of technological determinism, and it is a distorting lens. It sees the technology of the Internet as unleashing change on society, but is blind to the ways that society has changed the Internet and the ways in which, as the digital world has gone mainstream, it has increasingly taken on the characteristics of mainstream culture.
On one side, Macrowikinomics exaggerates the political and economic possibilities of digital collaboration as well as the discontinuity between today’s digital culture and the activities of previous generations. On the other side, it ignores the unsavoury possibilities that seem to accompany each and every inspiring initiative on the Internet (every technology has its spam) and inspirational initiatives for change that take place away from the digital world. Most importantly, it does not register the corrosive effect of money (and particularly large amounts of money) on the social production and voluntary networked activity that they are so taken with.
But do these distortions matter? Macrowikinomics is primarily a call to action. The program of the book is to relay stories about inspirational Internet-based initiatives, often based on conversations with the leaders of those initiatives. You will find few nay-sayers in these pages, and most often we must take the cast of characters at their own evaluation. The description of the Huffington Post, for example, is taken almost entirely from statements by Huffington herself and by admirers. I don’t mind that: who would want every book to be a cautious, balanced, analytical tract? The authors have an explicit agenda, and they are committed to making their case for a shift to digital collaboration. And so long as they are calling for people to use the possibilities of the Internet for good, isn’t that what matters?
Unfortunately no, because the unrealistically sunny pictures of digital culture painted in Macrowikinomics will lead idealistic people to misdirect their talents and energy. The book promotes many forms of digital activity as contributions to positive social change when the reality may be that they are making little difference, or even doing the opposite, perhaps transferring money from local communities to the pockets of Silicon Valley billionaires. A corrective set of lenses is needed, and that's what I'll try to provide in these posts.
There is a danger that I’ll come across as an anti-Internet malcontent. But digital curmudgeons such as Andrew Keen tend to agree with many digital utopians that the Internet is forcing a change in the balance of power in society: the difference is that they are repelled by the change rather than entranced by it. Myself, I disagree that the balance has shifted very much at all. We live an increasing portion of our lives digitally, but the struggles we face remain the same and the major sources of conflict in society are unchanged. There are many inspiring digital initiatives and experiments carried out by admirable people, but it’s the people who are inspiring, and the technology is more often than not secondary. The Internet simply happens to be the natural terrain for today’s activists, because that’s where the people are.
I am a big fan of CBC's Writers and Company, and Eleanor Wachtel's interviewing in particular.
Her two-parter with John le Carré is brilliant, with le Carré's reflections on his father, on the deep state, on the writer's "chip of ice in the heart", on Tony Blair ("We've had a Prime Minister who to my mind committed the biggest crime any Prime Minister – any Leader – can commit: that is, to take a country to war under false pretences.")
Le Carré has this to say about the information society:
"The dissemination of information on a vast scale is not the same as the dissemination of the truth. Thus we still have an extraordinary percentage of the American people who believe that Saddam Hussein was responsible for the twin towers; that the war against Iraq was a war to avert a threat to the United States. I am appalled by the extent to which the increase in communication adds to the power of manipulation by politicians."