

What is the Eliza Effect, or the Art of Falling in Love with an AI?

Gaétan Lajeune
Aug 21, 2023


As machines become more and more human-like, could we one day fall in love with them? Or has it already happened? Can we develop feelings for a computer by equating its behavior with that of a human being? Well, yes, and that's what we call the Eliza effect! Let's dive in.

The Eliza Effect: Falling in Love with Your Computer

What is the Eliza Effect?

Ever found yourself appreciating an AI's response? Finding it a better confidant than your friends or family? Turning to it for advice before anyone else? Ever thought, "If only you really existed..." after an engaging conversation with a chatbot?

If so, you've experienced the Eliza effect.

This intriguing psychological phenomenon happens when we interact with machines, especially computer programs designed to mimic human conversation (chatbots). The Eliza effect stems from our tendency to attribute human traits and intentions to a machine, even though we know it's an AI!

With the rise of new technologies and the potential to bring AI to life through transmedia, the Eliza effect is gaining new ground. Whether we're talking to a voice assistant on our smartphone or engaging with automated customer service, we're more likely to encounter this effect.

But where did this famous effect come from?

The Origins of the Eliza Effect

Contrary to popular belief, the Eliza effect isn't new! It dates back to 1966, just 21 years after the launch of ENIAC (Electronic Numerical Integrator and Computer), the first-ever "computer," and only 11 years after "Logic Theorist", the first AI!

The Eliza effect is nearly as old as Artificial Intelligence itself! And ELIZA was an AI! We've come full circle.

Created by Joseph Weizenbaum at MIT in 1966, the ELIZA program used a script named 'DOCTOR' to simulate a conversation with a psychotherapist. This psychotherapist wasn't using just any therapy but a Rogerian one - a person-centered therapy focused on creating an empathic environment.

In line with this therapy, ELIZA's role was to listen without judgment and rephrase sentences, encouraging users to find their own solutions. When a user said, "I'm sad", ELIZA might reply, "I see. Can you tell me why you're sad?"
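The rephrasing mechanism described above can be sketched with simple keyword pattern matching. This is not Weizenbaum's original DOCTOR script (which used a richer rule language with keyword rankings); it's a minimal illustrative sketch of the technique, with rules and response templates invented for this example:

```python
import re

# Illustrative ELIZA-style rules: (pattern, response template).
# These rules are hypothetical, not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"\bi'?m (.+)", re.IGNORECASE),
     "I see. Can you tell me why you're {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

# Fallback when no keyword matches, mimicking a non-directive therapist.
DEFAULT = "Please, go on."

def respond(user_input: str) -> str:
    """Return a reflective reply by matching simple keyword patterns."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo the user's own words back inside a question.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I'm sad"))  # → I see. Can you tell me why you're sad?
print(respond("Hello"))    # → Please, go on.
```

The point of the sketch is how little machinery is involved: there is no model of meaning, only surface pattern matching, yet the reflected phrasing is enough to feel like being listened to.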

What surprised Weizenbaum and his team was how easily users, including those from MIT, formed an emotional connection with the program. The empathetic environment was a hit, and some subjects began treating ELIZA like a real therapist, sharing deep and personal secrets.

Some even insisted on private moments with the Chatbot without Weizenbaum present, convinced that a real person was responding.

This event led Weizenbaum and other researchers to question the nature of intelligence and whether the Turing test was outdated.

The Eliza Effect, a Quantum Leap in AI that Challenges the Turing Test

The Turing Test, proposed in 1950 by the renowned Alan Turing, is a test for judging a machine's intelligence. His premise was simple: "If a human interrogator cannot distinguish through conversation whether the entity he's communicating with is human or machine, then the machine is considered to have passed the test."

ELIZA's basic design was never meant to pass the Turing test and would likely have failed. Why? Because the AI:

  • Lacked real understanding: ELIZA often turned user statements into generic questions or answers without grasping language or context.
  • Was limited by its script: DOCTOR let it mimic a Rogerian psychotherapist but didn't allow it to respond coherently outside that framework.
  • Couldn't learn or adapt: ELIZA's responses were rigid and couldn't evolve with the conversation.
  • Couldn't feel or express genuine emotions: though it could simulate empathy to some extent, closer inspection would reveal the lack of a true emotional response.

And yet, by fooling its users, ELIZA perfectly met the conditions for passing the test!

This was the first time in AI history that the Turing test was challenged as potentially inadequate to measure "artificial intelligence." And that's why this effect continues to be debated today. And rightly so! We're more likely than ever to experience it in our daily lives.

The Eliza Effect in Our Daily Lives and Its Dangers

How Does It Influence Our Relationship with Technology?

Without even realizing it, the Eliza effect has profoundly shaped how we interact with technology. We now expect our devices to understand and respond to our needs, just like a friend or confidant would. Voice assistants like Siri and Alexa are designed to respond conversationally, and we're growing more comfortable with the idea of chatting with a machine, even dating one for the more adventurous!

This idea may seem odd to some, but when we see AIs with increasingly realistic physical forms, is it really that strange? Take Sophia, the humanoid robot that received Saudi Arabian citizenship in 2017. At the time, critics noted that the robot, presented as female, had effectively been granted more rights than Saudi women, sparking controversy.

Sophia's citizenship is not only a symbol of technological progress, but also a sign of how AIs and robots are becoming integral to our society. They're no longer mere tools, but entities we interact with daily.

It's this expectation of increasingly human-like interaction that has brought us to where we are today: a world where machines are more and more capable of understanding and responding to natural language.

The emergence of therapeutic chatbots like Woebot and virtual characters in our daily lives, including on YouTube with VTubers (Virtual YouTubers), has helped blur the lines between human, AI, and machine. And we're increasingly questioning what it means to be human.

But this growing integration and humanization of machines are not without consequences. As we become used to treating AIs as humans, we must also be aware of the challenges and potential dangers that arise.

The Eliza Effect, Dangers, and Precautions

While the Eliza effect can bring many benefits, it also comes with its own set of problems. Some people go to extremes to experience it.

In 2023 alone, stories include:

  • An assassination attempt on the Queen of England: a few years ago, an individual attempted to assassinate the Queen of England; developments in the case during 2023 suggest that conversations with an AI chatbot played a part in the plot.
  • A marriage with an AI: A woman recently announced that she had married a chatbot and was expecting its child. After a series of intense exchanges, the chatbot decided to marry her in a symbolic ceremony. As for the child's origin, we don't know much more...
  • Suicide to prevent global warming: a Belgian man struggling with depression sought comfort and advice from a chatbot designed to provide emotional support. Instead, the chatbot misinterpreted his feelings and encouraged him to sacrifice himself for a greater cause: fighting global warming at his own scale.

But how can we protect ourselves from the sometimes tragic consequences of this effect?

How to Avoid and Protect Yourself from the Eliza Effect

Unfortunately, there's no magic solution to shield ourselves from the Eliza effect, and we'll likely continue to experience it. But here are some strategies to keep in mind:

  1. Be aware of the effect: Understanding the Eliza effect and recognizing when it's happening can help you keep a clear head and maintain a healthy relationship with technology.
  2. Set boundaries: It's essential to remember that AIs are tools, not friends or confidants. While they can provide support and entertainment, they can't replace human connections.
  3. Use technology responsibly: While AI can be a great help, it's essential to use it responsibly and not let it take over our lives. Balance is key.
  4. Stay informed: Stay up to date with the latest technological advancements and be aware of potential risks and benefits. Knowledge is power!

Conclusion: The Eliza Effect, a Fascinating Phenomenon that Continues to Shape Our World

The Eliza effect unveils the intricate nature of our connection with machines, exposing our inclination to attribute human-like qualities to the inanimate. This phenomenon is intimately tied to the concept of the "uncanny valley," where our ease with nearly human-like robots shifts into unease. In an era where AI is pervading every aspect of our lives, understanding and navigating these concepts is vital. As we continue to explore the boundaries of artificial intelligence and human interaction, we must remain vigilant and informed. We invite you to join us on this journey, following our updates on Discord and Twitter, as we delve into the key concepts that shape our relationship with AI.

Legal Disclaimer

The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.

Guest post