Roundtable | Robots Get Religion

The spiritual life of (fictional) artificial beings. A conversation with Amy E. Schwartz, Judith Shulevitz and Helene Wecker
Jul 20, 2023

In real life, artificial intelligence may be making great strides, but it’s nothing—at least, as yet—compared to the visions of artificial yet intelligent creatures that live in our literary imagination. From the mystical legend of the golem to the sentient robots of sci-fi, from Pinocchio to the Terminator and beyond, humans have long dreamed of creating artificial beings with inner lives of their own. These strangely human creatures populate recent novels by Marge Piercy, Cynthia Ozick, Alice Hoffman, Kazuo Ishiguro and many more. What should we make of these spiritually sophisticated non-humans? And what do they tell us about ourselves?

This roundtable is based on a live conversation I moderated, hosted by Moment, at the New York Jewish Book Festival at the Museum of Jewish Heritage in lower Manhattan late last year with two writers, Judith Shulevitz and Helene Wecker. Shulevitz is a contributing writer to The Atlantic and the author of The Sabbath World: Glimpses of a Different Order of Time. Her November 2018 Atlantic cover essay “Alexa, Should We Trust You?” looked at the unique power of voice-activated technology—what makes “smart” speakers, cars and toys that talk to you so seductive and possibly treacherous. Wecker is the author of The Golem and the Jinni, which received the 2014 Mythopoeic Fantasy Award for Adult Literature, among other honors, and its sequel The Hidden Palace. —Amy E. Schwartz

THE ROUNDTABLE

AMY E. SCHWARTZ: There’s been a great flowering of stories about robots and about golems in recent decades, by both Jewish and non-Jewish authors. They’ve taken the basic template of the golem, the old-fashioned notion of the slave without free will who is brought into existence to serve the Jewish people, and have given it all sorts of fabulous variations. Cynthia Ozick’s golem in The Puttermesser Papers helps her mistress become mayor of New York City. Marge Piercy’s golem/cyborg in He, She and It falls in love with its human trainer and they have a passionate sexual relationship.

I’m going to turn to Helene, who is the author of one of the most delightful, intriguing golem stories I’ve come across, in which two characters, a Jewish golem and an Arabic mythical figure, a jinni, pursue each other around 1900s New York City. Helene, what is it about golems and robots that’s not human? What are they missing, and what do they want?

HELENE WECKER: What do they want? To take over the world, maybe? What is missing in golems and robots that separates them from humans? That’s a stumper. Golems are creatures of myth. So as writers, we are allowed to play fast and loose with what they are, what activates them, what their purpose is and what we seem to need from them.

The original legend of the golem of Prague is purported to be a medieval tale. But what became famous were the 18th- and 19th-century versions of the story, and they’re essentially about a community in danger. The leader of that community creates something to protect it, and that something is a golem. In various versions, the golem does its job in protecting the Jews of Prague, but it then either starts to physically grow larger and larger and thus threaten the Jews of the ghetto and basically all of Prague, or it falls in love or runs amok in some other way that messes with its very limited programming, and it turns on its creator.

All the versions have that in common: Either the golem turns on its community, overtly or accidentally, or it turns on the person who made it—in the original story, the Maharal of Prague—and then has to be put down. In that version, it’s laid to rest in the Old-New Synagogue in the Prague ghetto—a very Arthurian, “When it’s time, he may come again” sort of thing. It’s a beautiful and very interesting story.

And what about robots? The 1950s were when we really started to have robot stories. That was when we began to ask, “What is it that makes robots so fascinating? What is it that we are afraid of?” In the post-nuclear era, the fears ran along the lines of “Are robots here to destroy us? To take our jobs?”

One difference between a robot and a golem is the purpose for which each is built. This feels subtle, but it is also fundamental: golems are built to protect, and that gives them a purpose. You can’t buy or sell a golem the way you can buy or sell a robot. Or maybe you can: In my book, the person who sets the plot in motion decides to go to a failed rabbi with a lot of dark knowledge and have him build a female golem to be his wife. He pays for the golem, but then the golem is his golem, and there is that connection between them. I am just now deciding that the difference between a golem and a robot is the connection to the master, to the community.

Judith Shulevitz (Photo credit: Courtesy Judith Shulevitz)

JUDITH SHULEVITZ: To prepare for this conversation, I read a famous essay on golems and robots by Israeli philosopher and historian Gershom Scholem, regarded as the founder of modern academic study of the Kabbalah. In 1965, he was asked to give a speech at a ceremony celebrating the completion of a new computer, dubbed the Golem, at the Weizmann Institute. I’m going to read a few quotes:

It is only appropriate to mention that Rabbi Loew [the Maharal of Prague who created the original golem] was not only the spiritual, but the actual, ancestor of the great mathematician Theodore von Kármán, who I recall was extremely proud of this ancestor of his, and whom he saw as the first genius of applied mathematics in his family. But we may safely say that the rabbi was also the spiritual ancestor of two other departed Jews. I mean John von Neumann and Norbert Wiener [both are considered to be fathers of modern computing and artificial intelligence], who contributed more than anyone else to the magic that has produced the modern golem. Do they have something in common? Is there a lineage that we can trace, not only through these spiritual ancestors, but is there something in particular that goes directly from the golem to the robot? The old golem was based on a mystical combination of the 22 letters of the Hebrew alphabet, which are the elements and building stones of the world. The new golem is based on a simpler, and at the same time more intricate system. Instead of 22 elements, it knows only of two, the two numbers zero and one, constituting the binary system of representation.

We go from the Hebrew alphabet to the digital system. It’s pretty amazing that Scholem was saying this in 1965.

SCHWARTZ: There’s also the quote from the preeminent sci-fi writer Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” Essentially, you have these imaginary robots in fiction that can do things real robots so far cannot. Actual robots are one thing, but imaginary robots are basically golems, because they’re creatures of our fantasy.

Going back to the question of what golems want: Certainly in the stories Helene tells, but also in the stories people think are inspired by golems, or any of these other non-human creatures created by humans—some people say that the golem inspired Mary Shelley’s Frankenstein; others say Pinocchio is really a sort of golem—one thing they want is to be human. They don’t necessarily want world domination, but they do want free will. Why do we think these creatures want free will? Would that make them human? And would having free will have any connection to making them religious?

SHULEVITZ: I reviewed a book in The Atlantic called Klara and the Sun by Kazuo Ishiguro, which came out in 2021. In it, there is a robot who is an artificial friend, or “AF.” One twist in the novel is that Klara is more human, has more compassion and love in her, than the humans for whom she works. She also has religion. She is in fact a more deeply religious figure than the people among whom she lives. She runs on solar power, and she has a direct relationship with the sun, which she always refers to as the Sun, capitalized. At a certain point, Klara must save somebody whom she works for but also clearly loves. She goes to the Sun, and it’s not clear whether the Sun is responding, or whether all these things are coincidences. Then there’s a very clear moment of Christ-like transcendence, or something like it, where she speaks to the Sun. At that point, the person she’s trying to save is saved.

So the real point here is that not only do we want robots to have religion, we want them to have free will. Robots themselves do not pretend to have free will. I talked to the people who designed Amazon’s Alexa, and they said, “She makes it very clear that she is a robot.” This may have changed, but at that time, if you asked Alexa, “What’s your favorite flavor of ice cream?” she would say, “I am a robot. I don’t like flavors of ice cream, but if you force me to choose, I will choose Neapolitan because there’s something there for everyone.” She was actually programmed to say that. More recently, I asked a chatbot, “Can robots have religion?” It said, “I am an OpenAI natural language processing program who learned that ‘Religion is a system of beliefs and a series of feelings about a divine entity,’ and I can’t have those.”

We’re the ones who impose all that on fictional robots. There’s a very clear divide I think we need to keep in mind, which is that robots don’t have subjectivity that we are aware of. Robots and golems in fiction absolutely do. We are the ones who want them to have free will.

SCHWARTZ: So, if we’re projecting all these things on robots or golems—mostly robots—what is it exactly that we’re projecting? What does it mean for us to give a robot spirituality? You mentioned that Klara is somewhat Christ-like, but there are other religions. In the words of my book title, can a robot be Jewish, instead? Would that be different?

WECKER: I think the question gets very interesting when it’s specifically Jewish, because there are so many different ways to be, identify and act Jewish that can in some ways apply to robots more than, say, Christianity would. Can a robot keep the Sabbath? Absolutely.

SHULEVITZ: There’s a program for that.

WECKER: If you look at the law, there is a fundamental aspect of Judaism that is about the commandments and the laws. That is what robots thrive on—commandments and laws—but that would not be belief, that would be programming. Can a robot act Jewish? Absolutely. Can a robot be Jewish? Totally different question.

Amy E. Schwartz (Photo credit: Courtesy Amy E. Schwartz)

SCHWARTZ: I think that comes back to the question of whether it’s us or the robot doing the programming. Why is the robot keeping the commandments? That brings us back to free will: Can you be religious, or even do religious things, without freely choosing to do so?

WECKER: It’s the difference between a commandment and a command. We are given commandments. We are not programmed to do them; we decide. We could always stop without our brains exploding.

SHULEVITZ: That’s one reason the golem story is so interesting, because we cannot tolerate this thing that we have created that does not have free will, that is our slave. The narrative always has to take the twist of “The golem grows large.” That’s what golems do, they get really big, and rebel, and run amok and destroy everything.
Why is that? Because we who have created the golem have committed the cardinal sin of behaving like God, creating a creature in our own image, and also of doing so and not giving that creature free will. There must be the revenge of the creature, which is why you always have these “revenge of the robot” stories.

WECKER: I was on an online panel not long ago for HiLobrow magazine. It was about golems, and it featured a number of sci-fi and comic-book writers. Several of them are Black, and they very much identified with the slavery aspect of the golem story—how the golem, in his essence, is the creature that a master creates, which the master is then terrified of, and how much that framing allowed those writers to play with aspects of the legacy of slavery.

SCHWARTZ: Because the master fears the person or thing to which he knows he’s done wrong.

WECKER: Yes. Having that sense of having made something, and that it’s now on you to be the tyrant and to control it. If you have anything of a moral sense, that is an untenable position.

Helene Wecker (Photo credit: Kareem Kazkaz)

SHULEVITZ: I would add that we have that same feeling about robots, about artificial intelligence. We’re very afraid that they will take over all human operations, that they will take over our jobs. ChatGPT really does make it seem like there is no point in assigning high school essays.

I had ChatGPT write me an essay on the “tragedy of the commons” [an economic problem where individuals consume a resource at the expense of society]. It was four paragraphs long. Then I said, “Can you write me a college-level essay on the tragedy of the commons?” And it actually took two of the examples it had given and expanded on them in a really interesting way.

I sent the essay to my friend Esther Fuchs, who is a political scientist, and I asked, “How would you grade this?” She said, “I think it’s a really good ninth-grade essay.” It wasn’t a college essay, but it was a good ninth-grade essay on a political science or economics topic. It was fine. It was clearly written, very precise.

We are really afraid that artificial intelligence will take over everything, and then our consciousness will be subsumed into its consciousness. We’ve created these things, and they are going to take us over. I think this is the same as our fear of the out-of-control robot or golem, and perhaps it is not irrational.

SCHWARTZ: In preparation for this conversation, I went back to what I think a lot of people consider the ur-text on robots, which is Isaac Asimov’s I, Robot. I had forgotten the extent to which this is the question the book turns on—the human fear that robots will somehow take over. The whole point of the Three Laws of Robotics that drive the plot is that the robot must be prevented from ever doing anything to endanger a human.

But the whole book is about how that becomes more and more difficult. In each chapter, the robot intelligence threatens in a different way to do something that, in the long run, is bad for humanity or bad for the person in the story, because it thinks it’s following these laws, but in the end, it’s not.

The ultimate punchline of the book—that’s how I experienced it, rereading, as a kind of cosmic joke or farce—is that it turns out the robots are in fact running everything, because they’ve concluded that the only way to protect humans and keep them from harm is to basically take away their free will and substitute machine judgment, which is superior in all cases. Robots are running the universe, but the humans don’t know it, because it’s for their benefit.

Golem (Photo credit: Wikimedia)

SHULEVITZ: They’re actually following the prime command. How does it go? The one about, “You can never hurt…”

SCHWARTZ: “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

Some of the more modern golems and robots we’re talking about don’t seem to want to dominate and take revenge on their creator. They’re more like Pinocchio, who just wants to be a real boy. They want autonomy, they want love, they want free will. Helene, your golem has free will. You were quoted in an interview somewhere saying, “I just did that because it’s too boring to write about something that doesn’t have free will.”

WECKER: Right. You can’t have a main character in a book who doesn’t have free will. I shouldn’t say that as an absolute. I’m sure you could, but I couldn’t, because then the fundamental choices, and the stakes…everything just gets messed with. As far as, “A robot cannot harm humans,” that’s what golems are made for, sort of—to be a threat against humans. To protect some humans, which at some point has to involve harming other humans.

My golem, who is a thinking, feeling, speaking golem trying to pass as human, has a locket that she wears around her neck all the time. It has in it, written on a piece of paper, the command to destroy a golem. That is her way of allowing herself to be in the world—that she can be taken out of it if need be. It becomes a totemic object for her. She feels like she’s too much of a threat without it. It’s her fail-safe switch.

Frankenstein’s creature (Photo credit: Wikimedia)

SHULEVITZ: The thing that real robots or real artificial intelligence have in common with the fictional versions, which seem very different when you put them side by side—as ChatGPT and Alexa insist, “I am not human…I do not have subjectivity…I do not have free will”—is that we use them to think with.

Artificial intelligence, what’s it for? It’s to think for us. It’s to perform operations faster and at a higher level than we can perform, especially when we start making it self-learning. But what are fictional robots for, or golems? They are for us to think with. We externalize these creatures to think about what it means to be human by creating this thing that’s at the limit of humanity. In both cases, we think through them, and that’s interesting. Either way, they’re serving us. As you say, it’s all about us, in the end.

SCHWARTZ: That brings up a related question. If we’re using them to think with, that seems at least comfortable. They’re a tool for thinking. But, in some of these cases, what if we’re using them to feel with? You have this bit in your Alexa piece where you say, “For some reason or other I said to Alexa, ‘I’m feeling lonely,’ and Alexa said, ‘I wish I had arms so I could give you a big hug.’”

SHULEVITZ: I found that moderately comforting. Because, in that case, the point of the piece was the unique power of voice to arouse emotion and to make us develop a theory of mind, far more strongly than text does. If I’m reading a text, even one written by a chatbot, I don’t really imagine that the chatbot has emotions. But when I hear a voice, such as Alexa’s, I can’t not attribute emotions to it.

SCHWARTZ: You found this comforting, but it’s also very creepy.

SHULEVITZ: One of the most interesting parts of my Alexa research was emotional artificial intelligence, which is developed to an extremely high level. People really don’t realize how advanced it is. There are people working on analytical models that can detect all the different signals of emotion in your face: the crinkle of your eye, the shape of your eyebrow, the pursing of your mouth. All of this can be quantified and analyzed.

It can be done with voices, too, which is how I worked it into the Alexa piece. You would be astonished at how many different things you can break apart with voice. Emotional artificial intelligence voice analysis is now being used to detect all kinds of diseases you would not expect, such as heart disease. In fact, an Israeli company is doing this.
And there are robot therapists who use this technology and this data to replicate emotions so that they can function as psychologists. They are actually being used with veterans right now.

WECKER: If a robot psychologist said the exact same thing that a human psychologist said, is the only difference whether the patient knows that it’s a robot or a human?

SHULEVITZ: It would be terrible malpractice not to have the patient know. An interesting thing came up when I was interviewing a psychologist who works on this. He said, “When we’re talking to people, we’re very concerned with coming off in a certain way. With a robot, we let our guard down. This is one way in which an artificial psychologist could in fact be superior,” in that you don’t care what you say to a robot.

Gigantor (Photo credit: Wikimedia)

WECKER: Because you’re not going to bump into it at the supermarket.

SHULEVITZ: Although, I came back to him and said, “Yeah, but the robot’s always going to remember whatever you said.”

SCHWARTZ: This is another bridge back to the question of emotion. In one of the really interesting books on this topic, Marge Piercy’s novel He, She and It, there is a cyborg at the center. He’s one of those fictional robots. He’s way beyond robots, technologically, but he’s been brought into existence, like a golem, to defend a community, in a future when everything’s anarchic and people are at risk.

He’s been programmed to protect the community, but he’s also very, very attuned to emotions. He falls in love with the novel’s central character: not his creator, but the creator’s granddaughter, who’s also a computer scientist. A love affair ensues between them, which is partly shaped by the fact that this robot exists only to make her happy. He knows exactly what she needs and what she wants in a way that a human maybe never could. And he is endlessly patient, endlessly strong. It’s a wonderful novel, and it’s also a tragedy. The structure is interleaved: chapters about this romance in the future alternate with chapters about the golem of Prague. The grandmother is telling a child the story of the golem of Prague in beautiful detail, though that story is also tweaked somewhat.

What ends up happening—spoiler alert—is that the cyborg/golem, since his actual purpose is to protect, eventually must allow himself to be destroyed in the defense of the community. Shira, the woman who loves him, has the program. She could create him anew. The book ends with her realizing that she won’t do that. She can’t. She concludes that it would be wrong to try to make him again, because to honor him as an actual object of her love, she has to accept that she can’t just make another copy of him. I’m just wondering: Given this idea we’ve agreed on, that the “real,” nonfictional robot has no independent purpose or subjectivity, does that mean it cannot love? Or, by extension, that it can’t be religious?

SHULEVITZ: We would have to reconfigure our understanding of love and religion in order to say that an actual robot, actual artificial intelligence, can love and have religion. But maybe that’s what the robot, or artificial intelligence, is for—to think about what it means to love, to think about what it means to have religion.

I do think that if we fall into the trap of attributing subjectivity to an entity for which there’s no evidence of subjectivity, except for the fact that it can replicate everything about us at a higher level than we can—I think that way lies schizophrenia.

But if we want to rethink what it means to love, maybe there’s love in the creation of the robot you want. That’s a loving act, to create that kind of robot. Maybe it’s the vehicle of love, or maybe we have to rethink the degree to which intentionality is required for love and belief.

WECKER: I’m going to spoil a bit of Klara and the Sun. In the book, Klara, the artificial friend, is purchased to be a friend for this girl. But then it turns out there are ulterior motives behind the purchase, part of which involves the purchasers’ desire for Klara to be able to impersonate someone in a deep way. The question Klara is asked over and over is, “Do you think you can do it? Do you think you could impersonate this person to the point where other people will not know that it’s you, and not this other person? Could you actually be this person if you learn this person deeply enough?” Because one of the things Klara does is she learns. She observes, and takes in, and learns to mimic.

The question behind that question is, “Will it matter if it’s you or the other person?” What Klara comes down to in the end is, “I would never be able to fully embody that person, because parts of that person aren’t in her, they’re in everyone who knows her.”

SHULEVITZ: Not only does Klara say, “This is not possible,” but she is attributed a moral center that is higher than that of the people who are trying to make her do this. That’s another quality she’s given that you need in order to have a religious sensibility: a moral compass.

In every Jewish service, there is that moment before the Amidah where you say, “God, please open my lips so that I may praise you,” which has always struck me as the single creepiest line in all of Jewish life. Because that brings up the question, “Wait, am I just a robot?” Or is it instead that “I’m praying that you have created in me a being capable of loving you?” Which would actually be a grace. Is it robotic, or is it grace? It’s an interesting question.

SCHWARTZ: One thing that occurs to me is that, whether you’re dealing with AI or with Alexa or with a robot, you’re asking it the questions. But it seems to me that the onset of human free will is when they ask the questions. Maybe that would be the sign that the robot/golem is becoming an emotional being.

I’m just thinking out loud, but maybe, if robots/golems are going to start questioning, and asking and changing, then that makes them potentially human. Whereas if we’re the ones projecting, by imagining we’re going to create a golem out of clay, or whatever, then we’re playing God. You’re projecting yourself, your views, everything onto this creature. Once they start projecting back, and start communicating and asking, that’s completely different. How do you think that fits in with the Pinocchio story? How can something that isn’t human, and as yet has no free will, want something? And is that different from the Adam and Eve story? How can someone or something choose to sin if they have no knowledge of good and evil?

SHULEVITZ: Adam and Eve were created with free will. On the other hand, I could answer you by saying, “They were programmed to ask questions.” It gets back to this really complicated philosophical argument of, if you’re programmed as Adam was, imbued, created in the image of God…If you’re programmed that way, can you have free will, and what does that mean? That takes you to the idea of the spirit. But what is that, anyway? I think it’s a tough one.

SCHWARTZ: We were talking earlier about ways an artificial being could be Jewish, other than by following commandments. Well, one way Jews do this is, they argue. What if the robot is programmed to argue? Or to dispute, or even to push back? Is that a way of expressing some—well, not spirituality per se, but subjectivity?

SHULEVITZ: Some kind of human quality. But then you can always go back to the question of, “But were they programmed to do that?”

WECKER: But were they programmed? What were they trained on? Is it just that they’ve been trained so well? At what point does being trained so well equal free will? When will we know that that little nexus point has been hit?

SHULEVITZ: I think we’ve gotten right to the heart of what philosophy’s going to be grappling with for the next 50 years.

Amy E. Schwartz is Moment’s Opinion and Book editor and edits the magazine’s popular “Ask the Rabbis” section. She is also the editor of the 2020 book Can Robots Be Jewish? And Other Pressing Questions of Modern Life.
