The New Mechanical Turks
I received a letter in the mail last year. I can’t remember what it was about, but I do remember that it was handwritten in blue fountain-pen ink. I looked at it for a long time before I determined that the letter was made with an automatic handwriting machine. The e’s gave it away: they were all identical.
Was the letter real? I felt, while I was squinting at the e’s, that if this letter was fake, then anything in the world could be. Maybe I wasn’t paying close enough attention. Maybe the paint rubs right off, the voices on the other end of the line are all prerecorded, and the world is already on autopilot.
It’s an anxiety we all live with. Consider the Schrödinger’s cat of customer service: a chat window appears, unbidden, browser bottom right, as you’re shopping for a new sofa. “How can I help you?” it asks. You don't know if it’s a person or a machine. Even more distressing: you don’t know if it’s a person pretending to be a machine.
The term for this, coined by Astra Taylor in a 2018 essay for Logic magazine, is “fauxtomation.” Taylor defined it as the process that “renders invisible human labor to maintain the illusion that machines are smarter than they are.” The chat window might be a bot, the car might be driverless, or they might all just be underpaid contract workers, somewhere else. Which is more reassuring?
In Joanne McNeil’s debut novel, Wrong Way, this ambiguity has reached its zenith. An Amazon-like tech company rolls out a driverless car with a mortal secret: a living driver, hidden in a secret compartment. The passengers don’t know, or refuse to know, the truth. For them, luxury without consequence or supervision is worth the illusion. “Ordinary interpersonal experience, when it’s delivered in this charade, has the elements of magic,” explains McNeil.
McNeil is best known as an essayist and reporter with a populist eye on tech; her 2020 book, Lurking, was a corrective to internet history that centered users over coders and CEOs. In Wrong Way, her protagonist is a gig worker named Teresa, and the novel interleaves its speculative imaginations with Teresa’s employment record, painting a picture of unrewarding temp jobs and office harassment in the greater Boston burbs. In comparison, piloting a “self-driving” car, although uncomfortable, has its pleasures, chief among them an unrestricted eye on the secret lives of her passengers. It might be the first form of power she’s ever had.
McNeil just moved to Los Angeles, the land of cars. We spoke a few days before California's Department of Motor Vehicles ordered General Motors to remove its Cruise driverless cars from roads after a string of gruesome accidents.
You used to work in a call center. How did that experience color your approach to writing about automation?
It was the Great Recession, fall 2008. I was applying to every job on Craigslist, and this was the place that hired me. I've been trying to remember anything about that job that would make for a funny anecdote, but I really don't have great stories. It was just a shitty job, a really dark time in my life. I can laugh at it now, but I sat down at my desk in a cubicle and read scripts on a computer screen from 10 to 6. It felt like, "Okay, is this the rest of my life?" More than a decade later, I've been able to reflect on it and see how replaceable I was. It would have been easier for the company if I had been an automated system.
Maybe it is an automated system now, but it might be hard to tell. AI can convincingly pass as human in some limited contexts—say, customer service or telemarketing—so we regularly enter into transactions unsure if we're interfacing with a person or a machine.
I’m really interested in circumstances where people might prefer the illusion of automation. One thing that comes up in the book a few times is this idea that it's incredibly awkward to interact with someone who has what you want. But what we don't often see expressed in fiction is the reverse: when someone does have something and they don't want to interact with the people who have less. The very wealthy, they're going to keep their human drivers because it's a status thing. They're going to have someone who drives their Bentley for them, drives their Rolls-Royce. That isn't going away. It's the upper middle class that will gravitate towards things like self-driving cars, because bearing witness to the working class reminds them of their own uncertain futures.
You do such a good job in this book of showing how companies justify fauxtomation: by claiming that the human workers are stopgaps that will be phased out as soon as the technology catches up. In the book, and in reality, human labor is a placeholder for an ideal that never comes. It's always out of reach. And so effectively, what happens is that all the people involved are acting out a fantasy by pretending to be machines—and, of course, without any real labor protections.
People who follow tech news know about content moderators. In some little office park in the southwest, or in the Philippines, the lives of contract workers for Facebook and Google are made miserable because of the content that they scan through. AI is not reliable enough, and probably will never be reliable enough, to scrub, with the efficiency necessary, something like images of child exploitation. AI can do a little bit, but the risks are so vast, and the consequences are so enormous for a company like Facebook or Google, that they will always have some humans doing this work.
My understanding of automated vehicles is that they operate not that much differently. In my book, there’s a human driver in the car, in a hidden compartment. But in the case of the actual so-called driverless vehicles, there are remote operators involved in the process. Perhaps these workers do not have a steering wheel in front of them, but they do press buttons and take control of a car remotely when it gets stuck. Maybe this isn't what would be considered a "hard" science-fiction approach to AI, but this is the reality of AI—this AI underclass that every single application is relying upon somehow, exploiting somehow.
It's telling that in your book, the drivers are called "seers." They're the ones who see. People pretending to be machines see the rest of us at our most human, because they see how we act when we think nobody's looking. Your protagonist, Teresa, is always looking at her passengers. Is there something about being hidden inside of an automation illusion that makes her feel like she's perpetually on the outside of the world, looking in?
The illusion makes explicit the view any powerless worker has on customers or management. Throughout her entire life, Teresa’s been an observer. As a worker, in previous jobs, she was made invisible, even as upper-class people depended on her to maintain their lives. A worker like Teresa learns family secrets; they learn aspects of their employer's character and values that they might hide from their professional context, or even some of their closest friends.
Proximity has degrees, though. People will behave one way when they think they're alone, another when there's a camera recording them, and still another when someone's physically in the room—or in this case, in the car—with them.
After the Snowden revelations, you would get emails from people who were joking but not really joking: "Hey, NSA, did you see that?" Everyone had the sense that we are being watched. And then, over time, we became accustomed to the surveillance. For humans with other priorities, stretched thin by work, childcare, or healthcare issues, that incentive to maintain privacy online went away. Still, no one's really going to fuck around if they see a camera right in front of them. Because you don't have control over what's being captured. There is "being good" for the sake of cameras, and then there's being good for the sake of another person. Sometimes cameras might encourage better behavior than another person in the room. If parents don't respect a nanny, for example, they're not necessarily going to act as well mannered in her company as they would if they knew that there was a surveillance camera in the room.
In the Snowden years, the conversation around surveillance was about civil liberties and the incursion of the state into our private lives. That almost feels antiquated now. Now it’s “What are these companies doing with my data?” If you're looking for examples of fauxtomation, a good one is the human workers listening in on Siri and Alexa. All of these utterances people are making in their own homes are being listened to by third parties, and used to refine an AI system that's going to be sold back to us.
It's a positive thing, I think, that concerns about privacy expanded from the NSA to the corporations who gather and control our data. Because they are opaque and they are accountable to no one.
But I'm not optimistic that a company the size of Meta will just fade out of favor. It's so big, always hovering under or above a trillion-dollar valuation. I don't see where that changes without government intervention—without antitrust or stricter regulations that make things like targeted ads basically impossible to do. When I was working on the book, there was still some of that galvanizing spirit: the belief that these tech companies had become too big and too powerful, and we needed to do something. But that's one of many progressive ideas that have stalled. Maybe there will be a moment again when we can breathe new life into this issue, but it does worry me a lot that—confronted with the reality of things like generative AI and how they're trained, how these companies are organized, their size, their ambition—I see some broader nonchalance.
You were talking about the valuation of tech companies. So much of that value is based on fiction. Tech companies lie to us all the time. They don't tell us what their products or tools actually do. They tell us what their tools should do, or will do, in the future. They tell us what their tools represent. I was revisiting Ballard the other day—I thought maybe we could talk about Crash, another great science-fiction novel about cars—and I kept coming up against the very Ballardian observation that the world is fiction, the world is a novel. In the context of this conversation, and your book, that rings particularly true. What do you feel your role is, as an author of fiction, when the world that you're addressing is already so fictional?
One thing I can do is approach my work with conviction, and continue to believe in something when it may not feel popular, or may seem like a book no one in a million years is going to publish, that I'm going to be selling it out of the trunk of my car, as stapled-together Xerox printouts. That’s very much what I imagined when I was working on this novel. And this is something I see as lacking in the culture, broadly: believing in your own taste, committing to working outside of trends, because they are driven by processes that could very well be automated.
You’ve written extensively about Silicon Valley as an essayist and reporter. What was your research process like for this book?
I wanted there to be some legitimate technology to this car. If you're in the back of a Waymo, you can see a panel that reveals the computer's vision, how it identifies other cars and pedestrians. You can see how the computer sees the road. It didn't really interest me to make that more advanced, because even the Waymos in San Francisco and Phoenix, they're geofenced to such a small location. They do not go out in the rain. I've seen them and I'm really not impressed. My first time encountering them, I just remember thinking, "You've been training for so long and that's all you can do?" I researched a lot about Amazon’s logistics to get a sense of how to fictionalize that. I read a lot of interviews with Jack Dorsey because, more than Elon, I felt like he’s a great inspiration for a villain, because he’s someone who doesn't speak with conviction, who stands for nothing but to be charismatic and compelling to whomever is in front of him.
There's a key scene in the book where a group of trainees is tasked with playing chess against what appear to be robotic ducks. The robo-ducks reveal, at the outset, their inner workings—as though to prove that there is no trickery involved. At the end of the game, the trainees discover that a person was hidden inside the machine all along, behind a pane of fake cogs and gears. Obviously this is a reference to the Mechanical Turk, the most famous example of a real person pretending to be an automaton. Can you speak to how you wanted to engage with that history in this novel?
I guess I was really engaging with how a tech company would engage with that history. They would name different areas of the building after Turing—
The Lovelace conference room. That was great; I see you.
They’d establish themselves as following in this lineage of absolute greats, as they would see it. Ultimately, the technology that I'm depicting is that old, is the beginning of the fraud that is automation. That Amazon would use the phrase "Mechanical Turk" to describe an incredibly exploitative labor arrangement that it put together without any sense of irony—that interested me.
When the Mechanical Turk equivalent is presented in the book, the corporate people talk about how although the historical Mechanical Turk may have been a trick, it inspired "real" innovation. Alexander Graham Bell, ostensibly, invented the telephone because he believed the Mechanical Turk was real. It reminded me of how tech corporations are always citing science fiction as inspiration for their vision of the world—the metaverse being the most recent example—often intentionally misreading fiction to justify bringing it about.
And there are, of course, so many science-fiction writers that have been on the payroll for these companies, to continue to spin personal visions of the future, to play that role of the fortune teller. I was thinking about this book as something that couldn't be claimed in that sense, because every scene comes back to revealing the fraud.
Don't underestimate yourself. Some clueless CEO, ten years from now, is going to be citing you, completely misunderstanding what you were trying to say. ♦
This is the second of four interviews by Claire Evans on the changing AI landscape.