AI Afterlives
I was once on a writing fellowship in the New Hampshire woods, where occasionally, after heavy summer rains, all of us fellows would sit together on the porch of the main house, which looked out over an open field. If we were lucky, when the sky cleared, a few deer would venture out from the tree line to graze in the steaming grass.
Paranoid about ticks and bears, we rarely ventured into the woods ourselves, where slime molds splayed across the leaf litter and electric orange newts blinked from behind every moss-covered stone. So we satisfied ourselves with the field. Lit by a bolt of sun, it became an interface between the tidy, ordered world of human thought and those wordless woods. It was there, at the edge of the wild, that I met Akil Kumarasamy.
I devoured both of Akil’s books when I came home to Los Angeles. First Half Gods, an interlinked short story collection that loosely follows two brothers named after demigods from the Mahabharata. And then her strange, stunning debut novel, Meet Us By The Roaring Sea, which takes place in a near-future only barely more dystopian than our present, where self-driving cars make fatal calculations about their passengers and the indulgences of daily life are meted out through a system of carbon credits.
The protagonist of the novel, Aya, is a machine learning researcher at a Google-like technology corporation. She’s tasked with training a new AI model, which her coworker nicknames “Bogey,” as in Bogeyman. Aya’s job is to fine-tune the model through dialogue, something like teaching a recalcitrant child to speak. At home, she spends her nights absorbed in a very different kind of dialogue, translating an orphaned Tamil manuscript into English. Written collectively by a group of female medical students during the Sri Lankan Civil War, the manuscript is an account of their “radical compassion” in the face of atrocity and pain.
In both of Aya’s pursuits, Akil traces spaces of meaningful connection: between people and machines, and between people across time and place. She has since expanded this search for common ground. In her forthcoming novel, Akil will extend her attention to the more-than-human world, inviting background players like trees and fungi to become protagonists, “eating up everything.”
In New Hampshire, Akil worked in a little cottage once inhabited by the late Studs Terkel. It was surrounded on all sides by trees. Perhaps, like that great oral historian, she knew to lean in and listen closely when they spoke.
The protagonist of Meet Us By The Roaring Sea is a machine learning researcher training an AI through Socratic dialogue. What did these dialogues open up for you, narratively?
I was interested, first, in thinking about science fiction narratives. I wanted the science in the book to feel close to our present-day reality. Rather than having androids, I wanted to take what we already have—“narrow” AI—and push it another step. Having her in dialogue with a language AI model, specifically, appealed to me because of the different forms of language decoding and coding involved. I was also interested in ideas of consciousness—thinking about how AI is trained with our collective data, and how that collective data builds a larger consciousness. How do we change consciousness? Does that mean changing our data? AI was a portal into these questions.
This book came out right before ChatGPT was released. Afterwards, I remember being on an AI panel where the moderator consulted ChatGPT to generate questions about our books. It was more adaptable to general questions, but when it came to specifics, it struggled. Almost like it was predicting a book that I could have written, or one that I would go on to write.
In your acknowledgements, you cite Melanie Mitchell’s book Artificial Intelligence: A Guide for Thinking Humans. How much did you know about AI before writing the book?
I tried to read a lot of female writers who were writing about AI, specifically. Because in some texts, it seemed like, “AI is going to take over and destroy everything, and it's going to rule over us.” That perspective was heavily reinforced in the male tech writing. But in Mitchell's book, which is very technical but also very accessible, there's a bigger emphasis on how AI is used by people in power. I had my own experiences with coding, and my mom codes, too. During the pandemic, I was back with my mom in New Jersey. We started talking more about AI, and those conversations seeped into the book—a surprise collaboration with my mom. It was exciting to get into the space of what was happening in our current world, and then try to imagine, on top of it, where it could go, and how you might train an AI to be “better.” What does that mean?
Your main character is trying to do just that—train an AI to be better. At the same time, she’s translating a Tamil manuscript. They’re different pursuits, but in both, she's in dialogue with presences that aren't present. With ghosts.
That's a really good point. There are disembodied, ghostly presences that hover over the book. She's translating a manuscript about a group of female medical students who lived in the ’90s, and it's a collective kind of account, written in the first person plural. At the same time, there's the disembodied presence of Bogey, the AI project. I think in both instances, there's a level of decoding happening. She's trying to translate, trying to take this manuscript from Tamil and bring it to English, and then, with Bogey, trying to parse out how best to train the AI model. In Bogey's case, she doesn't know so much about the project. Sometimes people are very siloed, working on specific projects in a corporation, and they don't even know what their work is going to be used for. So it's quite frightening. At the same time, when you see something so powerful, where the scope of it is unbelievable, it's quite exciting, too. Training Bogey, she's using people's data. All these particles that are our afterlives, that have been spread around: she's using them to train this algorithm. In the same way, she's also taking this dead matter from the collective female consciousness of the manuscript and trying to bring it back to life. It's a weird resurrection of people and places.
I was really thinking about the translation in your book as an interface between time periods—as an almost technological layer bridging people across space and time, holding them together. Talking to an AI involves a kind of translation, too, but it’s far less direct. When we dialogue with AI systems, the language we use becomes “input.” It’s reduced to a string of data, to be made legible to the machine. So instead of translating directly between two texts, it’s almost as though the text is being stripped down to a picture book in between. So much is lost in the black box.
You mentioned the black box of AI and the strangeness of it—how is it making these decisions? There's a fellow here [at Harvard, where Akil is currently a fellow at the Radcliffe Institute for Advanced Study] who was talking about how they're trying to build more transparency into the AI's thinking process. For example, say you asked an AI: give me a cheap holiday destination for this much money. When it gives you different possibilities, it's making a lot of assumptions. It might be thinking that you're a man, or you're a woman, or you're from a certain income class. In a more transparent AI system, you could interrogate the process, and see which way the AI is swinging as you ask questions.
At one point in your novel, a character asks, “Do you think you can trust something if you don't understand it?” Seeing as you're now working on a new novel where you extend your scope from AI to nonhuman minds, I'd like to revisit that question: do you think you need to understand another being's subjectivity in order to write about it?
That's a big question. A lot of my interest now has been in plants and fungi. How do you understand a plant? There's a big debate about even extending words like “consciousness” or “sentience” to plants and fungi. Maybe people use the word “consciousness” more freely with machines, thinking they can have a consciousness because machines are born through us. But there's a big debate about using the word “consciousness” when it comes to plants. People are more comfortable, maybe, with the word “intelligence,” but it's been very polarizing. There are so many different definitions of what consciousness is, but even if something satisfies one of them, someone will find it uncomfortable. I've been looking through fiction and thinking about how people have been writing about plants and animals, and it's really difficult not to personify nonhumans, not to anthropomorphize them. At the same time, by anthropomorphizing them, you give them a sense of consciousness, too. So it's quite difficult: you want to recognize consciousness through anthropomorphizing, but doing so puts the human at the center of it.
When it comes to defining consciousness, you can't please everyone, which is how philosophers and cognitive scientists end up with these really convoluted formulations, like the idea that consciousness is "what it is like to be" an organism. Of course fiction writers think a lot about “what it’s like to be” something—or someone—else.
I think part of the reason I've moved towards trying to think about the more-than-human world is because in Meet Us By The Roaring Sea, I was already thinking a lot about consciousness, between people, and between people and AI. But I was leaving the other parts of the world unexplored. So now I'm trying to see, how do you dissolve that border, too?
But you can never truly know what's going on in anyone else's head, you know? Writing a nonhuman character isn't that different from writing a human character in the sense that you don't have access to either consciousness.
That's true. You never know what's happening in someone else's head—but they are still human. How do we describe different kinds of tendencies? We call things vegetal if they're not moving. We don't use that word with a positive connotation, per se. Why not? Why is that the measuring stick? I'm trying to test how I'm seeing things and see how I could describe things in different ways.
Is a temporal shift part of this reorientation for you? The reason we call people on the brink of death "vegetative" is because we associate visible movement with life, with subjectivity.
You do have to slow down, because a tree can live centuries longer than us. There is movement happening, but at a much smaller scale. In time-lapse videos of plants, we get to see that movement, but it does require us to slow down and reimagine: the movement does not conform to our scale of time, or how we perceive time. So much of how we qualify life is in terms of motion, and it's very difficult for us to change that. There's also the related idea that if something is not moving, it's vulnerable. But trees actually release certain toxins if they're in danger. They have a lot of defense mechanisms. They're not passive beings. For example, trees are known to release chemical signals to call wasps when caterpillars are chomping on their leaves, and these wasps lay their eggs in the caterpillars. It's brutal. I'm interested in changing these narratives that plants are passive creatures, without agency.
When I watch a time-lapse video of a vine snaking around something, my default reaction is fear. And there's a rich literature of plant horror. Listening to you talk, I’m thinking maybe fear is what happens when agency is opaque. Because when we tell scary stories about “man-eating plants,” we’re ascribing agency to plants—but it’s hostile in nature.
I was reading a book, In Praise of Plants, by Francis Hallé, and he briefly talked about monsters and what scares us. The monster form often has to have bilateral symmetry. It has to remind us of something in ourselves, in a way. It's a mirror. I wonder what a moving vine resembles. A snake?
This is a fun party game for another time: trying to imagine a genuinely new monster. Because every once in a while, new monsters do emerge, the way Godzilla came out of the trauma of the atom bomb. What new traumas are we unleashing on the world? Ecocide, certainly. So in the new book, you're speaking as a forest?
I'm thinking of trees and plants, fungi and other creatures, even insects. I'm trying to have more conversations with these different spaces of consciousness. But even with all my efforts to put the human in the background, the human is still very present. Part of the intention was trying to have what's usually in the background come and eat up everything. I'm not too sure if that's how it's going to go, but we'll see. Before Han Kang wrote The Vegetarian, she wrote a short story called “The Fruit of My Woman,” and it's about a woman who becomes a plant. There's also a book by Kōbō Abe called Kangaroo Notebook, where a man turns into a daikon radish.
You should make a reading list. We're definitely in a zeitgeist—there's so much interest in plants. I don't know if it's a last grasp at a world we're losing.
There's the new genre of fungi horror, too.
I've been really interested in the field of minimal cognition, which takes an evolutionary perspective on nonhuman minds. Memory, for example, arises when organisms have an evolutionary need to learn from experience. Some living things don't—if you're an ocean sponge, you don't need to remember anything—but most living things do. So the world is full of cognitive agents: human and plant cells, fungi, sperm, protists. They all have memory and decision-making capacities, to different degrees; cognitive complexity, and maybe “consciousness,” is an accumulation of those capacities. But you have to start from the bottom to understand it.
Micro-consciousness!
The more you get into cognition, into sentience, the more ancient it feels. You get closer to animism, the élan vital. Is there a connection to the spiritual for you, in all of this?
Definitely. It's interesting, because the plant and fungi consciousness reading group that I go to here [at Harvard] is in the Divinity School. What I'm writing now feels very rooted in ideas of spirituality. If we reduce everything—if the cell or the sperm has a level of cognition, what does it all mean? Is that pantheism?
It's inescapable. Inescapable, but taboo—and it becomes more taboo the closer we get to convergence. I mean, not to sound really new-agey, but in cognitive science there's the search for "the neural correlates of consciousness." Once we give a physical, material basis to subjective experience, then, you know, everything is alive.
And God is in everything.
Or everything is God.
Exactly.
Artificial intelligence seems like such a side quest in comparison. Are you still interested in AI? Once you've graduated to trees, is it interesting to think about machines anymore?
I think it is. It just complicates it—how we're going to think about this, the language we use around it, and how all these different things are going to interact. AI is happening at the same time as the climate crisis, and everything is converging. It feels like the world as we know it is going through some sort of flux. Both these things are intertwined, and you can't really talk about one without the other.
There's this narrative emerging that in order to understand machines, we have to burn the world. Just the other day, former Google CEO Eric Schmidt said we should deprioritize energy conservation in favor of building stronger AI, in the hopes that AI will solve our climate problems.
"We'll build all this AI to save the planet, but we have to kill everything in order to build it.”
It's literary, almost. But you couldn't make it up. If you did, it would seem maudlin.
People would be like, “It's too on the nose.” ♦