Alone Together II: The Unburdening

[After my first Alone Together post (link), about how much I like Sherry Turkle's use of closely-observed stories, the prolific Rob Horning, now of The New Inquiry among other things, wrote a companion piece (link) on Authenticity, which this post follows. Other things you may want to read about this book include Tom Stafford's Why Sherry Turkle is So Wrong and Mr. Teacup's review.]

Slippery words: caring and conversation

When I started reading Alone Together, I didn't expect to end up wondering what a conversation is, but that's what happened, so that's what you get here. Spoiler: my wondering wandered in a circle, first agreeing with Turkle, then disagreeing a little, then a lot, until I ended up largely agreeing with her again.

What do you think about a seventy-two-year-old woman, Miriam, finding comfort in telling stories to Paro, a furry machine designed to care for the elderly and infirm?

"Care for?" Turkle writes, "Paro took care of Miriam's desire to tell her story – it made a space for that story to be told – but it did not care about her or her story" {106}. It's not just the word cares that Turkle objects to, but also the word conversation. "To say that Miriam was having a conversation with Paro, as these people [Paro's designers] do, is to forget what it is to have a conversation" {107}.

Turkle worries about these inauthentic conversations partly because we are so easily seduced by the appearance of caring. We read caring into the actions of robots at the first opportunity, even when we know better. When a robot called Domo simply touches its designer, Aaron Edsinger, he says that "there is a part of me that is trying to say, well, Domo cares." Turkle concludes: "We can interact with robots in full knowledge of their limitations, comforted nonetheless by what must be an unrequited love." {133}

Rob Horning writes that Turkle has always been concerned with the way that "Users begin to transfer programming metaphors to their interactions with people and psychological metaphors to the behavior of machines." She believes that authenticity is important (real caring, real conversations with real people) and that technology is replacing the authentic with the fake. As she tells the stories—the robot ones anyway, the network ones less so—I can't help but agree with Turkle. Caring robots may be seductive, but in the end their companionship is a deception and should be avoided.

Authenticity: if you can fake that etc

There's a catch in Turkle's argument, which many others have noticed: many "real" conversations are not authentic either. Horning takes this point to the extreme: "Nobody can ever show you their 'real' self." Turkle's "concern with authenticity is an expression of nostalgia", and authenticity is "a pressing personal issue now for many not because it has been suddenly lost" but because it is being lost in new and different ways thanks to digital networks. Authenticity itself is not what it used to be.

I kind of agree with Horning. Even without taking it to that extreme, we clearly spend much of our day in inauthentic interactions, robots or no robots. When the cashier says "Have a nice day", is it the voice of the cashier or the company whose policy they are following? Small-talk and passing greetings are more the enacting of social scripts than authentic exchanges of emotions or views. The scripted responses of call-centre employees just waiting to be replaced by a cheaper technology are obviously inauthentic. The coffee machine at work even tells me to "enjoy your beverage" and I don't really mind that. Where does inauthenticity become a problem?

Even Turkle's own profession by training (psychoanalyst) has always seemed to me to have a big dose of inauthenticity to it. As a man of a certain age from the north of England, the whole "talking about feelings" thing is foreign to me, and as a leftist the idea that markets alienate people from their work is easy to identify with. So I remember being shocked when I first came across people going to therapists in the 1980s. If you do need to talk to someone about your problems, I thought, you should at least do it with friends or family. Going to a counsellor is a bit like going to a prostitute, I thought: paying for something you should get out of affection. You may talk with them, but it's not a real conversation. It's not authentic.

To be fair, Turkle has heard all this before and acknowledges that "We assign caring roles to people who may not care at all." When a nurse at a hospital takes our hand during an operation, does it matter if the gesture is rote? {133} The market has moved us closer to what she calls "the robotic moment", when certain kinds of interaction are ready to be automated. We know it's not a real person at the other end, so why not just replace them with a machine anyway?

So if a nurse's hand is OK, but a robot's is problematic, what about when people receive comfort over networks? Is this inauthenticity any worse than that of the market? The market brings new forms of inauthentic interaction into our lives all the time: people who use the language of caring but who are just doing a job, from personal trainers and massage therapists all the way to automated voice systems. Or how about conversations while drunk (are we really ourselves?), or talking to someone who uses antidepressants to get through the day (is that really them speaking?). Inauthenticity is everywhere.

What's a conversation?

So I started to disagree with Turkle about authenticity and its importance. Next up, the idea of a "real" conversation. A later chapter of Alone Together, about the online confessional site PostSecret (link), made me reconsider my initial agreement with Turkle on this. Maybe Miriam, telling her story to Paro, is not looking for what we think of as a conversation anyway, so much as an opportunity for "unburdening", and unburdening has its own dynamics.

An example. Sixteen months after physicist Richard Feynman's first wife Arline died of cancer, Feynman, a convinced atheist and one of the most consistent denouncers of fuzzy thinking on record, nevertheless wrote a moving love letter addressed to her (link). "I thought there was no sense to writing. But now I know my darling wife that it is right to do what I have delayed in doing, and what I have done so much in the past. I want to tell you I love you."

The letter served an important purpose for Feynman. His daughter Michelle notes that it was well worn: Feynman had re-read it often. There were things he needed to say, and even though Arline was not there to hear them he needed to address these things to her. Feynman's letter is a way to speak out loud (yet in private) to an audience that is important (yet not present). It's sort of a conversation, but also not, and that's the way that unburdening often works.

Some forms of unburdening have become formalized and surrounded by ritual. Take the Catholic confession: it takes place in a private space, behind a curtain; there is a barrier between the penitent and the priest that serves to emphasize the distance between the two; the solemn ritual surrounding confession serves to emphasize a commitment to privacy, so that the penitent can speak out loud to an audience that is barely there, knowing that what he or she says will go no further.

Psychoanalysis has taken on many of the same characteristics as confession. The quiet office formalizes the encounter. The guarantees of privacy are emphasized. The patient lies on a couch and the therapist sits out of sight so that the patient is as close to alone as possible. Again, in order to speak things that really matter, we seek an audience of almost zero.

Even in less formal settings, unburdening is most commonly carried out by expressing intimate thoughts out loud (on paper, on a screen, or by voice) but to an audience that does not know us intimately. How many people have told secrets to strangers they will never meet again? People talk to their pets. Sometimes unburdening has no audience at all; for centuries, private diaries have been receptacles for sorrows and a place to work out problems and dreams. These are the most private of public gestures: secrets we whisper aloud and then lock up, needing to speak them but not wanting them to be heard.

The posting of confessions to PostSecret is another form of unburdening, of speaking without full conversation. In this case anonymity hides the identity of the penitent, and the role of the priest is taken by the readers and commenters on the site. The site has its own rituals; you don't e-mail your secrets, you must mail a physical postcard with your secret (as it passes through the mail system, it is visible – another anonymous disclosure).

Turkle sees many acts of unburdening as somehow less than satisfactory. "On the face of it, there are crucial differences between talking to human readers on a confessional site and to a machine that can have no idea of what a confession is. That the two contexts provoke similar reactions points to their similarities. Confessing to a website and talking to a robot deemed 'therapeutic' both emphasize getting something 'out'. Each act makes the same claim: bad feelings become less toxic when released. Each takes as its promise the notion that you can deal with feelings without dealing directly with a person. In each, something that is less than conversation begins to seem like conversation. Venting feelings comes to feel like sharing them" {231}.

It seems to me that she is understating the benefits that unburdening can provide, and missing the long history of practices that have grown up around the act for good reason. There is more to unburdening than "bad feelings become less toxic when released", and more to it than simple venting; it's just that what we seek is not always conversation. We should pay attention to gestures like Feynman's.

  • Aside: Unburdening and anonymity

    There is much talk about speech, anonymity and privacy on the Internet, but sometimes our ideas of speech and what it is for are too narrow, and this steers the debate off the rails. Perhaps it's the information-centric view of dialogue as information exchange that blinds some people to the other purposes of speech, but acts such as unburdening need to be thought of differently.

    There is an idea (see Eric Schmidt's famous "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place", or Douglas Rushkoff in his recent "Program or be Programmed") that so long as we are honest with what we say, privacy is not that big a worry. But it's not only stupid mistakes or criminal/offensive acts that we don't want out there in public, associated with our names. Unburdening is just one kind of speech that needs to be private for completely legitimate reasons, and that we need to be able to enter into knowing that its consequences are limited.

What is conversation for?

Perhaps, I wondered, unburdening is one form of conversation that doesn't need a full participant on the other end. The point is sometimes to hear ourselves speak. A friend of mine at work had a teaching assistant in a science lab class who said "you can ask me any question you want, but first you have to ask the bear". The bear was a teddy bear sitting on a filing cabinet. It was surprising, my friend says, how many students asked the bear their question, and then realised they didn't need to ask the teaching assistant.

So then, why does it matter if it's a robot, a dog, a diary or a teddy bear on a filing cabinet if what we want is to just use speech to work out some problem or explore some emotion? Turkle writes "How can I talk about sibling rivalry to something that never had a mother?" {19} But a diary doesn't have a mother, and a dog doesn't understand what you are talking about, and people have used diaries and dogs to sort out sibling rivalry problems for centuries.

Back to where I started

So there I was, disagreeing with Turkle about authenticity and about conversation. Yet the more I think about it, the more I am coming round to agreeing with her (on the robots anyway), for a mixture of reasons.

One is what I think of as the "postmodern mistake". Postmodernism takes things we think of as certain and shakes them up, showing us that foundations we thought were solid are in fact wobbly. But sometimes it leaps from "everything is uncertain" to "everything is equally uncertain" and that just ain't so. Is there a well-defined border between India and Pakistan? No, but we can still talk unambiguously about "India" and "Pakistan" despite that. A little fuzziness doesn't mean we have to abandon everything.

So conversations with checkout workers are inauthentic, and conversations with robots are inauthentic, but that doesn't make them equally inauthentic. The limits of a robot are orders of magnitude more constrained than even the most robotic of phone-call workers. In "Everything Is Obvious", Duncan Watts explains how Artificial Intelligence has proved more challenging than its proponents believed because, in attempting to design intelligent machines, they underestimated how hard it is to work out what is relevant in any given situation. The fact that we are so ready to read meaning into robot actions will lead interactions rapidly into unexpected and problematic areas.

Why am I so pessimistic that the interactions will be "problematic"? Because the robots we may see soon will be commercial products, and that means they will have an agenda. Rob Horning has the same worry about networks; in his final paragraph he writes "The problem is that we believe that we construct this social-media identity autonomously and that it is therefore our responsibility, our fault if it’s limited. The social-media companies have largely succeeded in persuading users of their platforms’ neutrality. What we fail to see is that these new identities are no less contingent and dictated to us than the ones circumscribed by tradition; only now the constraints are imposed by for-profit companies in explicit service of gain." In the same way that the algorithms of social networking sites have a straightforward agenda (keep us on the site, to maximize advertising exposure), so robots will have a straightforward agenda (cost effectiveness, keeping old people occupied, and so on) that will not be obvious to us as we interact with them.

The other problem I see, which applies to networks and to robots, is their uniformity. People have their faults, but different people have different faults, and so humanity as a whole is not dragged down any single rabbit hole. The introduction of global communications media that have very specific (and limited) formats (the text message, the tweet) cannot but have a homogenizing effect on our interactions.

But these conclusions are not so important, and I could be talked out of them. What's more important about the book is the unexpected avenues it leads you down. I'd rather have a book that prompts a week or two of thought than one that gives me the answers I'm looking for. I'd rather a book with the foibles of a human conversation than the reassuring, stress-free interactions of a robot, and Alone Together was such a book for me.



  1. from Weizenbaum’s paper in the ACM, 1966:
    “This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world. If, for example, one were to tell a psychiatrist “I went for a long boat ride” and he responded “Tell me about boats”, one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation. It is important to note that this assumption is one made by the speaker.”
    (notice the DARPA funding in the background: the military has always longed for robots, far better than the meat-and-potatoes machines they have to work with now).
    wandering the byways of Eliza searches finds also the Digital Antiquarian who posits rather than simple assumptions on the part of the speaker, instead that old friend, the willing suspension of disbelief: thus a kind of solitary performance art, an avatar of poetic faith, if considered optimistically; more likely gamification, given the usual suspects.
    And, from an AI programmer,
    “Just imagine the potential impact in consumer brand loyalty that a well-designed assistant like Siri could impart, should users willfully engage in the illusion of a human-like assistant and even actively maintaining this self-deception.”
    I prefer “willingly” to “willfully” though..
    even when the agenda is explicit as in the gamification link, we consent to be governed.
    it seems we have a constant hunger for meaning: and will make it if we cannot find it.
    (I’d like to write a more coherent response, but it may never happen)

  2. also, to expand the human conversation, Lance reads Vonnegut by the light of an iPhone:
    “This is the theme of “EPICAC.” It’s the theme running throughout Vonnegut’s life’s work. What makes us us? What makes us alive? And Vonnegut’s tentative answer is other people thinking of us as alive.
    EPICAC’s heart breaks when he, now an it again, realizes that the woman he loves doesn’t think of him as alive.
    A moral of the story is that our lives have only as much meaning as other people are willing to grant them.
    Our human-ness depends on us thinking of each other as human.”
    The corollary of course: if we are willing to grant meaning to robots..

  3. Doug – Thanks for these. With Siri and its like we seem to be becoming more and more comfortable conversing with robotic voices. So far we stay away from human appearances — too creepy — but I do wonder how long that will last.
    I really like the Vonnegut idea. Have you read "Sum: Forty Tales from the Afterlives", a short story collection by David Eagleman? There’s one where we stay in limbo after death until the last time someone says our name, at which point we die for good. People whose name gets applied to bridges or streets stay in limbo for ages…

  4. sorry for slothful response time..
    I’m familiar with the idea though not the story..
    in the Orthodox church, when you say the name of the dead, it is usually followed with ‘may their memory be eternal’. If we live as long as we are remembered, then a memorious God grants us that immortality.
    In the frail structures of the internet (still built on coal, barges and train lines) a weblog is, as Justin says, both mask and gravestone. If the robots that crawl the page are reading, do we live?
