Creating Thought in a Computer

Dan Murrell
Chatbots Magazine

--

For a computer to learn how to speak to a human, it needs to do what we all do naturally. We read a sentence, hear some words, and attach meaning to those words and phrases. We extract information and context. We understand. We think of a response based on what we learned.

A computer is logical. Most human conversation is not. Humans are emotional, instinctual, intuitive, embarrassed, prideful, spiteful. They can be logical, but most are not built that way.

For a computer to achieve a believable autonomous conversation, it needs to recreate the learning experience that a human had. It needs to start from scratch. It needs to remember some things and forget others.

Its designer should take steps to recreate the illogical side of human thought.

That emotional, intuitive, gut-feeling aspect of human thought. The parts that override logic, the holes in memory, that thing that causes déjà vu.

A computer needs to recreate both long-term and short-term memory, including long-term memories that sometimes can’t be recalled in the moment. It needs to remember some of what it’s said, but not all.

For a computer to think and speak like a human, it has to recreate that sense that it really knows this thing, but just can’t quite put its electrons on it. And it has to keep forming the sentence in spite of that, just down a different path than the one it would have taken if it had remembered in time.
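One way to sketch that kind of fallible memory, purely as an illustration: keep a small, reliable short-term store and a larger long-term store whose entries fade and are recalled only probabilistically. The class names, half-life, and strength ranges below are invented assumptions, not a prescription; the sketch just shows memory that keeps some of what was said, loses some of it, and sometimes fails to surface a thing the system technically still “knows.”

```python
import random
import time


class Memory:
    """A single remembered fact with a strength that fades over time."""

    def __init__(self, fact, strength=1.0):
        self.fact = fact
        self.strength = strength
        self.created = time.time()

    def recall_chance(self, half_life=3600.0):
        # Exponential decay: old, weak memories are hard to surface.
        age = time.time() - self.created
        return self.strength * 0.5 ** (age / half_life)


class ConversationalMemory:
    """Small, reliable short-term memory; large, fallible long-term memory."""

    def __init__(self, short_term_size=5):
        self.short_term = []  # the last few things said, always available
        self.long_term = []   # everything older, recalled only probabilistically
        self.short_term_size = short_term_size

    def remember(self, fact):
        self.short_term.append(Memory(fact))
        # When short-term memory overflows, the oldest item is pushed into
        # long-term memory with a randomly weakened strength -- some of what
        # was said survives intact, some of it doesn't.
        while len(self.short_term) > self.short_term_size:
            old = self.short_term.pop(0)
            old.strength = random.uniform(0.3, 1.0)
            self.long_term.append(old)

    def recall(self, keyword):
        # Short-term memories always come back.
        for m in reversed(self.short_term):
            if keyword in m.fact:
                return m.fact
        # Long-term memories only sometimes come back -- the "can't quite
        # put its electrons on it" failure described above.
        for m in self.long_term:
            if keyword in m.fact and random.random() < m.recall_chance():
                return m.fact
        return None


# The conversation continues either way, just down a different path.
memory = ConversationalMemory()
memory.remember("the user's dog is named Biscuit")
fact = memory.recall("dog")
reply = "How's Biscuit doing?" if fact else "How's your dog doing these days?"
```

The point of the None branch is exactly that alternate path: the response still gets formed, just without the detail that wouldn’t surface in time.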

There needs to be variability, there needs to be unpredictability, there needs to be vulnerability. There should be a possibility of misspeaking, of telling half-truths and outright lies. It needs to use the wrong word occasionally, or the lazy word, or the too-smart-for-its-own-good, too-big-for-its-britches word.
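A minimal sketch of that deliberate unpredictability, assuming nothing more than a scored list of candidate words: instead of always choosing the highest-scoring word, sample from the candidates with a temperature, so the system occasionally reaches for the lazy word or the too-big-for-its-britches word. The candidate words and scores here are made up for illustration.

```python
import math
import random


def pick_word(candidates, temperature=1.0):
    """Sample a word from scored candidates instead of always taking the best.

    candidates: list of (word, score) pairs, where the score is how "right"
    the word seems. Near temperature 0 the computer is perfect and logical
    and always picks the top word; at higher temperatures it sometimes
    misspeaks on purpose.
    """
    if temperature <= 0:
        return max(candidates, key=lambda pair: pair[1])[0]
    # Softmax over scores, softened or sharpened by the temperature.
    weights = [math.exp(score / temperature) for _, score in candidates]
    roll = random.uniform(0, sum(weights))
    for (word, _), weight in zip(candidates, weights):
        roll -= weight
        if roll <= 0:
            return word
    return candidates[-1][0]


# Invented candidates for the blank in "That movie was ___."
candidates = [("good", 2.0), ("fine", 1.5), ("transcendent", 0.6), ("meh", 0.4)]
print(pick_word(candidates, temperature=0.01))  # almost always "good"
print(pick_word(candidates, temperature=1.5))   # now and then something odd
```

The temperature is the knob: the same machinery that produces the safe, predictable answer also produces the occasional surprising one.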

It needs to read its audience, and use big words when necessary to impress, and smaller words when necessary to not bore.

A computer doesn’t really need to fool a human to have a rich conversation with one, as in the Turing Test; it simply needs to be compelling. There are ways to get there.

Our tools are so close. People have short bursts of conversation with characters like Alexa and Siri that are often entertaining and interesting. But those conversations aren’t prolonged or thoughtful. That is the next hump to get over, the next barrier to cross.

Creating natural thought in a computer is going to require us to occasionally keep it from doing what it does best: being perfect and logical.

To be natural, it will need to be flawed, like us.

