How to interact with bots? Dealing with the complexity of a new design paradigm

Sandi MacPherson
Published in Chatbots Magazine
7 min read · Jun 13, 2016

Since making a personal chatbot a couple of months ago, I’ve met with some amazing people working on bots in the Bay Area. What’s become very obvious is that the bot interfaces and interactions we’re all using are brand spankin’ new, so much so that even their core characteristics have yet to be scoped or named.

I’ve had some pretty intense, off-the-wall conversations on the topic of ‘designing for bots’, and wanted to share some observations. Three points came up multiple times in these conversations that were especially interesting:

  • establishing authority or expertise;
  • importance of timeliness; and
  • text-based conversations & human biases.

Authority and expertise

One of the main themes common in the bots being built today is the idea of the bot as expert. It’s often assumed that the bot has access to information that is otherwise difficult or time-consuming to find, whether that’s shoes to suit your style and budget, the one flight that matches your last-minute schedule change, the perfect flower arrangement for a special day, or the headphones that’ll let you enjoy your morning BART commute without disruption from those incredibly loud screeching rails.

What can we learn about expertise in bots from how we perceive expertise in humans?

While our expectations of bots are probably different, it’s an interesting starting place. One founder I spoke with, who was working on an ecommerce product/bot, expressed concern about how to convey that the bot knew which product was the best choice. She wanted to do that by passing along reviews and product recommendations from real-life experts, via the bot. She believed the shopper would want a sense of: ‘Where is the bot getting this information? Can I trust that these reviews are true? Which actual people have tried this product? How do I know this bot is actually recommending the best product, and that it has reviewed all of the potential options?’

The potential for shoppers to feel an overly strong sense of #fomo was leading the founder to preemptively design a system to vet reviews, assign external validation via a sourcing system, build reviewer profiles, and so on.

I countered her a bit, noting and asking:

“That’s a pretty strong assumption, that bot expertise will or should look like human expertise via social proof. I don’t think it has to… Can’t you establish expertise in another way, beyond linking to the 5-star profile of a reviewer with multiple reviews across categories that matter?”

What I was hinting at is the idea that the bot has the potential to be perceived as a standalone expert. Many people already seem to have a built-in mental model of bots as all-knowing beings, so why not take advantage of that?

In addition to this foundational norm, there are many other facets of interacting with a chatbot that can be designed to establish expertise — which, for some shoppers, may be required before they feel confident enough to complete a purchase.

You could establish that reputation by:

  • Using vocabulary and jargon that imply domain knowledge and expertise;
  • Asking enough questions to create a sense that the bot has full information;
  • Varying response time, to indicate work being done;
  • Using a personality and tone that build trust;
  • Shifting urgency to imply the discovery of unique and important information;
  • …and many, many more ways!
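To make one of these cues concrete, here’s a minimal Python sketch of the ‘varying response time’ idea: an artificial delay that scales with how involved the user’s message looks, so simple lookups feel instant while harder requests feel deliberated. The function names and timing constants are hypothetical choices of mine, not from any real bot framework:

```python
import time

def simulated_thinking_delay(query, base=0.5, per_word=0.15, cap=3.0):
    """Return a delay in seconds that grows with query length, up to a cap.

    The constants are illustrative: tune them (or swap word count for a
    real complexity estimate) in an actual product.
    """
    words = len(query.split())
    return min(base + per_word * words, cap)

def reply_with_delay(query, answer, send=print):
    """Wait a 'thinking' interval before sending, to suggest work being done."""
    time.sleep(simulated_thinking_delay(query))
    send(answer)
```

With this, a short ‘hi’ gets a near-instant reply, while a detailed product question earns a few seconds of apparent deliberation.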

Social proof (in the previous example, the rating system for reviewers) is only one piece of one method often used to establish (human) expertise. When you’re dealing with a non-human entity, social proof may be an inaccessible lever, but that doesn’t mean there aren’t other aspects of human psychology your bot can lean on to create the desired outcome. Thinking about bots and how we interact with them will require many shifts like this one, away from the established norms and biases we all hold about human-to-human interactions, a point I’ll touch on again in a bit.

The power of timeliness

A common question in my conversations with bot builders related to speed of response. Many people find that the interactions they’re having with bots on Kik, Telegram, Slack, Messenger, etc. are too fast… or, sometimes, too slow. It can be a bit confusing to pull apart what matters with respect to time, but people generally expect bots to sometimes abide by human norms (e.g. don’t message me with a multi-paragraph welcome within 2ms of my saying ‘hi’), while also having some ‘robot’-like characteristics (e.g. if I’m searching for information that I believe is readily accessible, a bot should be able to return it within 2ms).

It’s an interesting product design variable to explore. With most products, there’s no nuance around time — generally, everything we build should just be as quick as possible, meaning fast load times, quick refreshes, and smooth swipes. There’s often no ‘reason’ to have the user wait.

With bots, it seems like there are some really compelling reasons to actually understand and design how product interactions play out over different chunks of time. One person I spoke with had worked on a ‘message me to get stuff delivered to you’-type bot with a fair amount of AI up-front, so a chat was transferred to a human helper only if the system couldn’t understand the shopper’s request. What he found was that when a ‘real person’ from the team took over the chat, the shopper became more uneasy and concerned that there was a problem, simply because response times went up while the human read the conversation history and absorbed the request. In actuality, that delay and the human takeover meant a higher probability that the request would be handled completely and with greater accuracy. Funny.
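One obvious mitigation, sketched below under my own assumptions about the product: have the bot set expectations at the moment of handoff, so the longer response times read as extra care rather than as a problem. The wording and function name are invented for illustration:

```python
def handoff_notice(agent_name=None):
    """Build the message a bot sends when escalating a chat to a human helper.

    Naming the human and explaining the slower pace reframes the delay
    as a sign of quality rather than trouble.
    """
    who = agent_name or "one of our team"
    return (
        f"Good news: I'm bringing in {who} to handle this personally. "
        "Replies may take a little longer while they read through our chat."
    )
```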

We’re full of biases. Can bots leverage some?

Another interesting idea is a bit meta, as it came to mind while I was writing my last post about bots. Many of the day-to-day commitments and decisions we make exist only in our heads as intangible thoughts. We go about our day making mental notes: what we’d like to buy at the electronics store after work, how much time we have to spend shopping, which color and brand we’re hoping they have in stock, how much we want to spend, and so on.

However, we never actually write down the full expression of our desire: “I want a pair of bluetooth headphones, with a noise-cancelling feature, black, and under $400”. While it seems like a slight, meaningless difference, psychological research (e.g. by Daryl Bem and Allan Teger) has shown that the simple act of writing down an intent changes the likelihood that we’ll follow through with the associated action, and that extending the engagement over a longer period (i.e. via multiple, otherwise unnecessary messages) may yield a sense of commitment to the desire the person has stated and then solidified over the course of the conversation.
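A bot could lean on that effect directly by restating the shopper’s accumulated preferences as one explicit written sentence for them to confirm. This is a hypothetical sketch; the slot names (‘item’, ‘features’, ‘color’, ‘budget’) and the phrasing are my own assumptions:

```python
def restate_intent(slots):
    """Turn collected preference slots into a single explicit confirmation line.

    Writing the full desire out in one place is the point: the user now sees,
    and implicitly commits to, exactly what they asked for.
    """
    desc = slots.get("item", "something")
    features = slots.get("features", [])
    if features:
        desc += " with " + " and ".join(features)
    if "color" in slots:
        desc += f", in {slots['color']}"
    if "budget" in slots:
        desc += f", under ${slots['budget']}"
    return f"Just to confirm: you're looking for {desc}. Shall I go find it?"
```

For the headphones example above, the bot would echo back “bluetooth headphones with noise cancelling, in black, under $400” before searching.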

It seems likely that simple biases like these could have a huge impact on the completion rate of the tasks people take on via text-based bots, simply because the format through which those ideas and thoughts are expressed is different from how we typically think and make decisions, or from how we interact with each other via speech.

What other emotional states and human biases will come into play as we design for bots? There are many yet-to-be-identified triggers and unique interactions that bots and bot designers can play with to develop robust systems for bot-human interactions. What’s even more interesting is that this also means that bots have the potential to unlock actions that mere human-to-human processes are unsuited for, ones that we’ve never actually experienced before.

It’s a brand new paradigm, where conventions have yet to become solidified — and anyone working on bots today has the potential to massively impact a whole new generation of how people engage with technology. Exciting :D


founder at @ddoubleai / @sandimacbot, rip @quibb. advisor to @adoptapetcom. work on @clearlyproduct & @5050pledge. don’t ask me to say bagel #canadian.