Think you can't fall in love with AI? Think again.

March 7, 2023

UM-Dearborn Professor of Marketing Aaron Ahuvia says that humans have been falling in love with objects for centuries, but this time, thanks to ChatGPT, there could be real cause for concern.

The main character in the movie Her (2013) fell for Samantha, an AI assistant. Image courtesy Warner Bros.

As chatbots like Replika and Bing enter the market, following the release of ChatGPT late last year, questions are arising about humans’ capacity to form deep relationships, and even fall in love, with AI-enabled objects. This is already a hot topic in pop culture: the movie M3GAN, which revolves around a young girl’s overly close relationship with an empathetic but murdery android doll, has become a surprise hit. And accounts of Bing’s strange chatbot conversations have left some readers laughing and others quite unsettled.

Professor Aaron Ahuvia presents his brand love research

UM-Dearborn Professor of Marketing Aaron Ahuvia says that humans have been falling in love with objects for centuries, but this time, there could be real cause for concern. A consumer psychologist and the author of The Things We Love (Little, Brown Spark), Ahuvia contends that the same psychology that marketers use to gain our loyalty can offer useful insight into our AI-enabled future.

Question: You are an expert on brand love. What is brand love and what can it teach us about our future relationships with chatbots?
Aaron Ahuvia: Brand love is marketing jargon for situations where people love products and brands. Chatbots, like products and brands, aren’t human, so the underlying psychology of love for these things is the same. Our brain has an unconscious sorting mechanism that distinguishes between people and things, and reserves love for people. But sometimes this sorting mechanism gets fooled into classifying a thing as a person (a stuffed animal is a good example), which results in the phenomenon called anthropomorphism. When this happens, it becomes possible to love that thing. Chatbots are already so human-like, and poised to become more so, which really ups the odds of this human-object confusion.

Q: So, chatbots are like teddy bears on steroids?
AA: One of the very consistent research findings is that objects like teddy bears, or chatbots, create a conflict in the person who interacts with them, because their conscious mind knows it’s not a person but their unconscious is treating it as if it were human. If you’ve ever had trouble getting rid of an old possession you don’t use anymore, you’ve felt this conflict in action. Your conscious mind knows it isn’t useful to you anymore, but your unconscious mind has created an emotional attachment to the object, which makes it hard to part with.

When you deal with a chatbot, you’re going to have the same kind of conflict. That’s important because a lot of people think, “Oh, consciously, I know that’s not a person. Therefore I’m not going to behave toward it in emotional or irrational ways.” But that’s like saying, “Consciously, I know how alcohol affects my behavior. Therefore I can drink until I’m drunk, and it won’t affect me.” It doesn’t work that way. It’s going to affect your behavior, even if consciously, you know what’s going on.

Q: Do you see this risk increasing as chatbots evolve?
AA: In the future, chatbots are going to not only have much better factual intelligence, they’re going to have a lot of emotional intelligence. So when you say to a chatbot, “Oh, I had a hard day at work,” it’s going to know exactly the right thing to say back to make you feel better. And that’s going to be emotionally very gratifying. In our relationships with other people, when they tell us they’ve had a hard day at work, sometimes we have the energy to be really caring and responsive. But sometimes we had a hard day at work, too, or we’re in a hurry, so we don’t respond in the best possible way. These chatbots are always going to be focused 100% on your needs. They’re not going to have any needs of their own, and they’re going to be very good at meeting yours. And it’s going to be very easy to develop emotional attachments to these things.

Q: Still, I know better than to fall in love with a chatbot . . . don’t I?
AA: What concerns me is that we’ve all heard about incidents where a tribe of people that has never been exposed to a certain virus gets exposed for the first time, and that virus just runs rampant because the people have never built up an immunity to it. We just experienced this with COVID. Your brain is in a similar situation. It evolved over hundreds of millions of years, and there were never any objects you had to interact with that talked like a person but weren’t a person. We have no defenses against that.

What we see in human behavior over and over again is that we know at some level that doing challenging, difficult things is rewarding and makes our lives better. But, very often, we choose the easy things just because they’re easy. Think about all the times you’ve chosen junk food over healthy food because it tastes good and it’s available and it’s convenient. Or all the times you’ve watched a kind of vapid movie instead of a more serious film on Netflix because you’re tired and are attracted to mindless entertainment. I think there is a real potential that we will have the same conflict in our social relationships, where our relationships with people are better, but these relationships with chatbots are easier.

Q: That sounds scary. Are there any upsides?
AA: We have an epidemic of loneliness, and we are not taking nearly as seriously as we should the costs it imposes on people’s happiness and well-being, as well as on their physical health. If AI could actually help solve that problem, it would be genuinely helpful to a lot of people.

Interview and story by Kristin Palm