The Eliza Effect is often associated with ELIZA, the first symbolic AI chatbot, developed in 1966 by Joseph Weizenbaum. The program was designed to simulate a psychotherapist: the user typed their thoughts and feelings into a computer, and the system responded with therapist-style questions built by rearranging the beginning or end of the person's own sentence. For example, an exchange might run:
ELIZA: “How are you feeling?”
Human: “I’m depressed much of the time.”
(The program repeats the word “depressed” at the end of the hard-coded template “I am sorry to hear you are [word].”)
ELIZA: “I am sorry to hear you are depressed.”
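A minimal sketch in Python of this kind of keyword-and-template matching (an illustration only, not Weizenbaum's original script, which used a richer set of decomposition rules):

import re

# Illustrative rules: each pattern captures a fragment of the user's sentence
# and echoes it back inside a canned, therapist-style template.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "I am sorry to hear you are {0}."),
    (re.compile(r"\bi'm (.+)", re.IGNORECASE), "I am sorry to hear you are {0}."),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
]
DEFAULT = "Please tell me more."

def respond(user_input: str) -> str:
    """Return a reply chosen purely by pattern matching, with no understanding."""
    text = user_input.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return DEFAULT

print(respond("I'm depressed much of the time."))
# -> I am sorry to hear you are depressed much of the time.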
Despite the simplicity of Eliza's responses, which rest on pattern matching rather than understanding, users can feel as though they are interacting with a thinking, empathetic entity. The system, however, has no genuine emotion or comprehension.
The Eliza Effect highlights a significant risk in human-machine interaction: a person may mistake a machine's pattern-matching for genuine understanding, with potential consequences for human relationships. As New Scientist writes, “Some even wonder if the rise of so-called empathic AI might change the way we conceive of empathy and interact with one another.” New Scientist gives the example of “grief tech”, in which people talk to avatars of deceased loved ones and their recovery from grief becomes delayed or chronic because they remain stuck in the denial phase.
Sources
https://en.wikipedia.org/wiki/ELIZA_effect
New Scientist Essential Guide No. 23, “The AI Revolution”, article “Feigning Feelings”.