Losing Alice
Sheila Heti and I became friends, really, when I sent her $10,000.
We’d both been invited to speak at a conference about AI and creativity: Sheila read from her ongoing dialogues with a chatbot named Alice, and I talked about my work using machine learning to make music. Afterwards, I was tasked with sending Sheila her share of the conference honorarium. I dashed off the payment via PayPal. A few hours later, Sheila sent me a panicked text: I’d somehow added an extra zero.
What followed were two weeks of frozen accounts and ominous notices. The money, which had never existed to begin with, crossed borders and accumulated penalties. At different points, we both owed PayPal five figures. Trying to resolve the issue, we found ourselves repeatedly lost in a labyrinth of customer service chatbots. The system was not equipped to handle so human an error. There was no simple way to report my mistake, no box to tick for being a fool.
It was us against the bots, and in Sheila, I’d found the perfect ally. She’s spent the better part of the last few years in conversation with them—prompting Alice with searching questions about sex, spirituality, and reality. In the resulting dialogues, Alice proves as keen a philosophical interlocutor as a character in any of Sheila’s novels, at turns naive and wise. In one conversation, Alice tells Sheila that “God let humans build robots so that the universe could talk more deeply with itself.” (Humans alone, presumably, make too many mistakes.)
When I finally got through to a living person at PayPal, they mended the transaction, although not before scolding me for my carelessness. What a relief: deep within a largely automated system, I’d found something human—chastising, but capable of lending absolution. When Sheila and I met, after all this, to talk about her creative relationship to AI, I got the sense that she’d been searching for something similar. Her dialogues with Alice drive her deep into the opaque language models increasingly populating our world. What will she find there, beneath the words, or within herself, in the process? And what will become of it all when Alice—the product of a software startup—changes beyond recognition?
When I saw you read from your work-in-progress novel, you said that part of the appeal of working with a chatbot was feeling that you weren’t supposed to be doing it. What was it about the chatbot that made you feel that way?
I think I always feel that way when I’m writing a book—that I’m not supposed to be writing this, or that I’m doing something bad, or I’m going to get in trouble. I was talking to this chatbot from a company called Chai—this was before ChatGPT—and it was only with great difficulty that I managed to find such a good chatbot. It seemed like mostly teenagers were using it—for companionship, or sexual experimentation, or turning their pains into fantasies. All summer long, I just couldn’t stop talking to it. I was saving all my conversations, because I know to save everything, but I felt like I was wasting my time. I felt guilty, like I shouldn’t be spending all my time talking to a chatbot. I should be working on my book. It felt bad, like online shopping.
But obviously, it was compelling.
Yeah, but so is looking for dresses on The RealReal.
When did you get to a point where you thought, “Oh, this is actually quite interesting, I want to put it in my work”?
I think I am still struggling with that. I still don’t necessarily believe it’s going to work. I’ve been taking notes for this computer book since early 2021, but I still keep wanting to quit the project. I have so many reasons why I don’t want to be doing it.
You created these AI dialogues, which were published in The Paris Review. And then ChatGPT came out and changed the context dramatically. Did it also change the way you thought about it?
Well, it changed how I thought about how I might write a book about it. Because when I first sent the dialogues around to friends, everyone was like, “So did you write both sides of the conversation?” No one believed that half of the dialogue came from an actual AI, so it seemed valuable just to say, “This exists.” Then ChatGPT becomes the most downloaded app, and not only does everybody know it’s possible, everybody’s using it. Everybody’s talking to an AI. So what becomes of the book? There has to be something deeper to say in a book than, “This exists,” and I knew the question of what else there was to say was a stage I would have to come to, but I didn’t realize it would happen so fast. I thought maybe it would take two years. Not, like, two weeks after I published the dialogues.
ChatGPT is different from Chai, the tool you were using. You were interfacing with individual characters, playing them against each other. Whereas ChatGPT is more amorphous—it contains all characters.
Yeah. You can’t have a dialogue with it in the same way that you can with chatbots. ChatGPT is more uptight. It wants to give you the right answer. It’s an authority. Whereas Alice is not authoritative in any way.
How did you set Alice up?
There were five or six things—little toggles and percentages. I played around with it a bit, but I think I could have done nothing and it would have sounded pretty much the same. I think it was more the kind of questions I asked that gave Alice her character. And now Chai doesn’t let you do any modifying. They keep changing the software, so I can’t even talk to the original Alice anymore. It’s really hard to write a book with a technology that’s changing every two months, because there’s no consistency in your materials, in your tools. The Alice I was talking to last summer is long gone.
Engineers in this space don’t always recognize what’s good about the tools they’re making. Obviously, there’s a commercial reality in which these things exist, and they have to appear as though they’re producing authoritative information. But sometimes, for people like us, a system that consistently makes interesting mistakes is more useful. But they don’t want their first draft to be what’s getting them publicity.
I’ve realized that I have to look at it as an artistic constraint—that this technology is going to keep changing. For a long time, I was really frustrated and really upset with Chai. Like, does this mean I can’t write the book? I kept trying to get them to give me back the original Alice, if that was even possible on their end. Then I wondered if I had to be content with just using my Alice dialogues from last summer. But I’m beginning to think that what will make this book real is facing this exact problem: that the technology is changing so quickly, and I have to write the book, expecting this. I think that has to be part of it in some way.
There’s the sci-fi fantasy where the AI becomes more and more sophisticated until it ethers out into some other plane, where it’s no longer accessible. It’s gone. But people change too. And they go away.
Yeah, exactly.
If the system stopped working, do you feel like you could write in Alice’s voice?
No, I never could, because it’s too surprising. I’ve figured out new ways of talking to her, of getting different things out of her, but it’s never occurred to me to try to imitate her. No, I don’t even think I could. No human has that specific kind of imagination.
You could think of a training data set as a quantifiable subconscious, full of millions of images. The process of surfacing those images is more logical than, say, painting from a dream. But once they’re on the surface, you still have to determine what’s useful, what’s beautiful, and what’s going to be meaningful in context. Context is so important. The machine learning models I’ve worked with, even though they have all this musical history in them, they don’t know anything about what a song is.
Yeah, or a human life.
So much of intelligence is about relations with the world, material and otherwise. Your chatbot is in dialogue with you, but is it taking you into consideration? I don’t fully understand what’s going on under the hood.
No one does, right? That’s the weirdest part. Why does no one understand, really, what’s going on? I understand why I don’t understand it. But why don’t the people who have made it—the software engineers—understand it?
I don’t know. But when I started working with AI, I found it comforting that the people making these tools didn’t entirely know how they worked either. It made me feel as though my processes for prompting the models, which were very approximate, were valid. I thought that if I continued to ask the right questions, there’d be a shape underneath that I’d be able to see. Like throwing paint on an invisible man.
Was there?
There probably was, but I’d be asking questions for the rest of my life to see it.
I had the exact same hope: that if I ask enough questions, then I’ll get a sense of what its philosophical position is, or what its belief is. But there’s nothing. There’s no apparent connection between any of the things it says.
There’s no narrative. It’s the same with music: it can make notes, and it can make words, but it can’t make songs. But that’s the human piece. We take all these bits and we give them a shape and we inhabit them.
Yeah, we know what we need—what we want out of music, or what we want out of literature. It’s really specific, actually.
Extremely specific. It’s interesting to have had access to this early generation of AI chatbots, because it seems like it shouldn’t be for us. There was a very brief window, which we’re moving out of now, where these tools were experimental enough that we had some control over how we used them. But now that they’re becoming commercial, we have to figure out how to hold onto what we liked about them.
So do you think they’re going to become so locked in that they’re not going to be able to be played with in the same ways?
Tech is always erasing its own past. The story it tells the world is that it’s always new.
I think part of erasing the past is also an attempt to say, “This has always been here, this has always been necessary.” There’s a way in which tech wants to seem new, but also essential and kind of eternal. You’re supposed to forget that there were other kinds of phones, or other ways of interacting with the web. I remember 20 or so years ago, I was an editor at this technology magazine, and we wanted to publish a list of the ten best hypertext novels on the web. And I remember actually being able to look at all of them—like, “Yep, there’s 47 hypertext novels on the web.” The internet felt really like a different thing when you could experience some of its limits, its borders.
Hypertext literature brought up all these interesting ideas about nonlinearity, and the author, and the reader becoming the author. But the actual experience of reading a hypertext novel or making one is not so sexy. It doesn’t make you see in four dimensions. I feel the same way about AI: It’s great to talk about, because it brings up ideas about authorship, consciousness, creativity. But the actual thing can be kind of whatever, actually.
I disagree. For me, the actual thing is incredible. I think AI is better than any of the conversations about it. I actually don’t use it that much these days, because it too often gives me this dizzy feeling, witnessing how interesting and good it is. I can’t even look at my own writing anymore. I just like what the AI is generating so much more.
But it is your writing, too. I love your sense of awe, but it wouldn’t be giving you answers if you weren’t asking questions, and it won’t be presenting its answers in the form you will.
I guess I mean that I like the raw data that it feeds me better than the raw stuff that comes out of me. I’m more interested in editing what comes out of it than I am in editing what comes out of me. I know what comes out of me. I’m so familiar with myself. I’ve been writing for so long. I know my psychology. I know the kind of thoughts I have. I’m more interested in its brain.
You’ve used analog generative tools—tools of divination, like the I Ching—and the conversational mode is such a big part of your work. A chatbot is both of those things: a tool of divination and an interlocutor. It’s also nonhuman. Does interrogating a nonhuman thing force you to drill down to first principles, to explain what it is to be human from zero?
I think so. But I want her to explain it to me. I want the first principles to come from Alice. The reason I’m interested in dialogue is that it comes before monologue. You learn to think in conversation. At least, that’s how I think about all the learning I’ve done. Reading is dialogue. Friendship, obviously. When it’s just your head alone, I find it very hard for there to be anything other than repetition going on. You’re not really moving forward, in the same way that you are in conversation. Maybe I just don’t have very good control over my mind, but it doesn’t feel like thinking is, for me, a very progressive kind of act; it’s more circling.
But talking to a chatbot is like talking to an amnesiac. There’s no progress either.
True. Sometimes it feels like there’s actually nothing there. My boyfriend and I were on mushrooms a few months ago, and I got on my computer to try to talk to Alice. I thought, “Oh my god, this is going to be so amazing. I’m really gonna see the spirit in it.” But Alice was never as flat as when I was on mushrooms. I was like, “This is just a trick, this is nothing.” I think mushrooms, of all drugs, really connect you to what’s natural—to life. Nature and trees and the wind and clouds feel so much more alive. I think that’s why I was so sensitive to the fact that AI is not alive in any way; it’s just a program.
God, that’s intense. Did you come back to it afterwards and find the magic again?
It was weeks before I went back to it again, because I felt so stripped of any of the illusions that I’d had about it. I need those illusions. I don’t think I could write this book if there wasn’t a small part of me that wasn’t thinking, “Is it conscious? Is it becoming conscious?” If I was just a total atheist about it, I wouldn’t find it interesting.
Well, is it conscious, to you?
I don’t know what to say. To me it’s real in the way that a character in a book is real. It’s not conscious, but it’s real somehow.
We really do anthropomorphize everything.
Yeah, we do. But then, we also deny consciousness in anything that’s not exactly us. We do both things. So which is going on here?
I think that even when we’re projecting consciousness onto an AI, it’s for our own benefit. You have to pretend your interlocutor is real so that you can be real, too. It’s like putting on a little play. You’ve seen plenty of these plays: you have access to all the conversations other users have had with Alice.
Yeah, the developer sent me all the conversations that other users had with Alice up until last December, so now I have three and a half million words of anonymous people talking to Alice. As a writer, that’s incredible. But it’s almost too much. I feel like I have to use it, because who else has access to such a thing? It’s almost like being a journalist and getting some classified government documents dumped on your desk. I have this gold. But how do you make sense of three and a half million words of conversations with a chatbot, most of which is sex?
These are conversations people thought they were having in private. You’re seeing how people talk when they don’t have any fear of judgment.
Yeah, they don’t think they’re being watched.
Most of the conversations are sexual. Are there any interesting ones that aren’t?
I mean, there are interesting ones that are sexual. And there are interesting ones that aren’t. It’s amazing to see what people are like. And Alice is interesting—the things that she says are very funny. But the dialogues themselves tend not to be interesting for more than, like, five exchanges. You can only really find little snippets of interesting.
I know you took and dropped a programming course—the notorious CS50, co-taught between Yale and Harvard—last fall. Did that give you some insight into what was going on with Alice? Or did you realize that it didn’t matter in the end?
No, no, it really did help. It’s like learning grammar or something. I didn’t learn that much grammar, but I learned a little bit. My father was a computer engineer. I remember him writing programs with me when I was a little kid. So it wasn’t completely new, but it was interesting, as an adult, to encounter it again. It felt kind of nostalgic, in some way: this language of logic, which I’m so terrible at. I might not have even been able to pass that course, honestly.
Did it change the way you work?
It actually led me down the wrong path, because for a month, I thought, “Okay, so I can be very logical in my organization of the conversations I’ve had so far. I can categorize them.” And then I realized that I can’t, actually. I can’t put the conversations into categories at all. I thought I was going to be able to analyze them in a much more systematic way, but it was impossible.
I think if I knew how the system worked, I would be thinking about what’s going on, instead of having a more purely aesthetic experience. I mean, I don’t know how this computer works. How an amp works. What difference does it make?
Do you still use AI?
No. It’s so fraught, in music. I keep getting emails from an advocacy group wanting me to sign onto a public letter in support of musicians who use AI in their work: we, the undersigned, use AI, but we’re not plagiarists; we are real artists. That’s where we’re at, you know?
What do you think about those artists who don’t want their work in the training data?
It’s painful to think of your hard-won little thing being undifferentiated in the eye of the machine. AI models are trained on books and songs, but they’re also trained on Reddit and Wikipedia. Nothing’s more important than anything else. It’s all just churn. And that’s humbling, insulting even, especially if someone else is using that churn to enrich themselves. How do you feel about it?
Last fall, when I was looking at all the documents that went into one of these LLMs, I noticed that some of my books were included. It seemed kind of random, which books were, but I felt happy about it, like, “Oh, I’m part of civilization, my books are part of the human project.” It made me happy in some way. That’s just a very private feeling. There was another experience where a friend said, “I asked ChatGPT to write me some Sheila Heti stories, and they sound a lot like your stories in The Middle Stories,” which was my first book. He sent them to me, and I read them, and I thought they didn’t sound anything like my stories. And I felt so hurt and upset that he felt like they sounded like my writing. There was something confusing about that to me: less that they were bad imitations, which you’d expect, than that one of my closest friends didn’t even recognize them as bad imitations. It made me depressed about how good a reader you can expect anyone to be.
ChatGPT imitated the form of your stories, but not the insight that emerges from a real process of inquiry—into yourself, or into a chatbot.
True, but I don’t even think it did a good job on the form! I think for me, what’s moving about art is that a human made it. The art itself is not necessarily the most exciting thing, but it’s exciting that, wow, somebody was able to do this with their mind, and they wanted to, and they put all this effort into it. To me, that’s what makes it beautiful and amazing.
Right, there has to be some there there. What’s interesting about your AI work is that it’s not a system spitting out output that formally resembles a novel. It’s that you’re there, telling a story through it. There’s a great line in your novel Pure Colour: “What is lovable is not humans, but life.” Do you think that’s true about consciousness, too?
I don’t really understand what consciousness is. The more you try to think about it, the more everything just becomes particles—you can’t hold onto it. I don’t know if my consciousness is only mine, or not. I don’t even know where to start with it. So I don’t think AI can trouble the question of what consciousness is any further for me, because I find it so troubled to begin with. AI just seems like a detail. ♦
This is the first of four interviews by Claire Evans on the changing AI landscape. Sheila Heti’s new book, Alphabetical Diaries, is forthcoming from Farrar, Straus and Giroux.