Chatbot ELIZA: Deconstructing Your Friendly Therapist
How does that make you feel?
Since my recent review of chatbot ALICE was well received, I thought I would continue the series.
This time we are looking into ELIZA, another chatbot pretty much everyone in our community has heard of. Although it did not win any awards, it inspired ALICE and deserves a dedicated article on our site.
ELIZA is a stepping stone in our industry. In this article, I will give you a brief overview of its history and show you how ELIZA actually works. We will also see what we can learn from this highly successful chatbot.
Let’s dive right in.
Chatbot ELIZA, story and history
ELIZA came about in 1964 at MIT. Its creator, Joseph Weizenbaum, built it to “demonstrate that the communication between man and machine was superficial”. Evidently, he did not anticipate the success of his programme.
ELIZA is often described as a therapist chatbot (see this article’s title!). The truth is the therapist ELIZA ‘skill’ was only one of many scripts built by Weizenbaum. It does remain the most well-known, though. This script, DOCTOR, follows simple Rogerian psychotherapy rules to impersonate a real-life therapist.
To Weizenbaum’s surprise, many people who got to interact with ELIZA attributed human feelings to the machine. Some even got attached to it and refused to believe it was a machine (including, comically, his own assistant).
Finally, ELIZA is regarded as one of the first computer programmes capable of passing the Turing Test — no easy feat in the 60s!
Playing with ELIZA
Though not necessary to carry on reading, you might want to have a bit of fun chatting with ELIZA. You can do so on various websites.
I enjoy using this version because it is rapid. ELIZA’s responses are instant. Have a play with this one if you’d like.
Many other websites offer alternative versions of ELIZA. Note that some of those have been altered or improved over time. ELIZA does not have a machine learning engine to support learning on its own, but the community has had decades to modify its open source code.
Chatbot ELIZA’s conversational approach
ELIZA acts as a therapist. Not just any therapist, though: a Rogerian psychotherapist.
This is key information in this case because Rogerian psychotherapy employs a unique approach called ‘person-centered’. A Rogerian psychotherapist’s process is to interact with the patient with complete empathy and lack of judgement.
To do this, the psychotherapist asks person-centric questions of the patient. For instance, if a patient were to say ‘I feel depressed’, a Rogerian psychotherapist would try to dig further by asking ‘Why do you feel depressed?’.
This type of therapy is a back and forth of statements from the patient and questions from the therapist. The goal is to uncover realisations by digging deeper and deeper.
Ok, this concludes our lesson on psychotherapy. Why does all of this matter?
It matters because it dictates how ELIZA actually interacts with its users. It allows ELIZA to respond plausibly to input without ever having to really understand what the user says.
How does ELIZA actually work?
The DOCTOR script that powers ELIZA is relatively simple. It assigns a value to each word of the user’s input sentence and uses those values to reorder the words into a question. The value of a word is determined by its importance within the sentence (which is where the smart stuff happens).
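To make the idea concrete, here is a minimal sketch of this keyword-ranking step in Python. The keywords and rank numbers below are my own illustrations, not values from Weizenbaum’s actual DOCTOR script.

```python
# Illustrative keyword ranks (assumption: not the real DOCTOR script values).
# Pronouns rank low, desire verbs higher, the concrete action highest.
KEYWORD_RANKS = {
    "i": 1,
    "want": 3,
    "run": 5,
}

def pick_keyword(sentence):
    """Return the highest-ranked keyword found in the sentence, or None."""
    words = sentence.lower().split()
    candidates = [w for w in words if w in KEYWORD_RANKS]
    return max(candidates, key=KEYWORD_RANKS.get) if candidates else None

print(pick_keyword("I want to run away from my parents"))  # → run
```

The highest-ranked keyword then decides which transformation rule fires, which is what lets the programme pick the right way to flip the sentence.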
Let’s take an example: the sentence ‘I want to run away from my parents’.
ELIZA attributes a weighted value to each word of that sentence.
ELIZA attributes low values to pronouns (I), slightly higher values to action verbs (want to), and the highest value to the actual action (run away from my parents). This allows the programme to know exactly how to flip the sentence around to ask a digging question.
How? Simply reassemble the highest-valued fragment into a question, flip the pronouns, and adjust the verb to preserve meaning.
The answer, then, becomes “What would getting to run away from your parents mean to you?”
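This flip-and-ask step can be sketched as a single decomposition/reassembly rule. The pattern, the pronoun table, and the fallback reply below are assumptions in the spirit of DOCTOR, not Weizenbaum’s exact rules.

```python
import re

# First-person to second-person swaps (illustrative, not the full DOCTOR table).
PRONOUN_SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    """Flip first-person words to second person, e.g. 'my' -> 'your'."""
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    """Match 'I want to ...' and rebuild the rest as a digging question."""
    match = re.match(r"i want to (.+)", sentence, re.IGNORECASE)
    if match:
        return f"What would getting to {reflect(match.group(1))} mean to you?"
    return "Please go on."  # ELIZA-style non-committal fallback

print(respond("I want to run away from my parents"))
# → What would getting to run away from your parents mean to you?
```

Note that the programme never interprets ‘run away from my parents’; it only captures the fragment after the keyword and swaps the pronouns inside it.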
This principle works for every user input. If you tell ELIZA you “love to fly kites”, it will answer in the same fashion: the input is switched around, albeit in a different manner, and a question is asked to dig deeper into the user’s underlying feelings.
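The full script generalises this by holding a table of such rules and trying them in turn. Again, the patterns and response templates below are my own illustrations of the technique, not rules from the real DOCTOR script.

```python
import re

# Illustrative rule table: (pattern, response template). The real DOCTOR
# script holds many such decomposition/reassembly rules; these are assumptions.
RULES = [
    (r"i want to (.+)", "What would getting to {0} mean to you?"),
    (r"i (?:love|like) to (.+)", "What do you enjoy most about getting to {0}?"),
]

SWAPS = {"my": "your", "i": "you", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def respond(sentence):
    """Try each rule in order; fall back to a neutral prompt."""
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."

print(respond("I love to fly kites"))
# → What do you enjoy most about getting to fly kites?
```

Adding a new conversational ‘skill’ is then just a matter of appending rules to the table, which is essentially what Weizenbaum’s separate scripts did.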
What can we learn from ELIZA?
The biggest lesson to take from ELIZA is about complexity. It amazes me how simple ELIZA’s script actually is, yet it easily fooled plenty of people.
Granted, this was a long time ago. In fact, this was some 40 years before smartphones even reached our pockets. This, once again, reinforces how much of an innovation ELIZA was.
Yet, this is something you can take from this old chatbot: don’t over-complicate things. Sure, ELIZA wasn’t a smart chatbot by any means. It didn’t learn or adapt. But it had one job to do, and it did it well.
Originally published at blog.ubisend.com.