Replacing Surveys With Chatbot Conversations
Chatbot Wisdom From Our Beta-Release
So, it’s 2017 and you’re in need of feedback or opinions on something. What do you do?
Well, if you need many opinions, you go with an online survey.
If you need more detailed responses, go out and interview people. Simple as that.
Gathering detailed feedback from a large crowd isn’t easy. Essentially, you’re stuck manually calling people to get any useful information at all.
Aside from the fact that they are mind-numbingly boring to fill out, surveys suffer from only scratching the surface of what you’re trying to find out. The good part is, of course, that you can send them out to as many email addresses as you can get your hands on.
So, what would be a 2017-solution to this problem?
CHATBOTS - To the rescue!
Think about it. An artificial interviewer that’s able to identify interesting topics and follow up on them in the same manner a human interviewer would. But scaled up. A lot.
Chatbots have the potential of becoming the perfect compromise between a survey and an interview.
But, as with all AI/ML applications, data is absolutely key to achieving anything even remotely close to mimicking human behavior. And even with a very large dataset, it’s still only possible to get acceptable results in a reeeeeeally narrow area. That’s basically how far humanity has come on the long road towards exploring the deep abyss that is AI.
So, back to your need of collecting feedback. If we are to achieve the incredible feat of making a chatbot smart enough to carry out a conversation that’s not completely brain-dead, we need to start small. Really small.
That’s how we were thinking when designing our chatbot Hubert.
The thought is that Hubert, in time, will replace the feedback surveys teachers use to adapt and improve their teaching.
But even the relatively narrow area of student feedback was too large to tackle at once; we needed to think even smaller.
That’s when we remembered that teachers often perform formative evaluations during an ongoing course to catch improvement opportunities early. You could describe them as a scrum stand-up, but for teachers, so to speak…
Hey, wait a minute, why don’t we just build a scrum stand-up bot instead? What? Yeah!
A quick Google session later, we were fully committed to staying with formative evaluations.
… and the list goes on. But hey, if stand-up bots work well, there’s nothing to say that chatbots wouldn’t work equally well with formative evaluations.
So after speaking to 15 or 20 teachers about our idea, we sat down in our Stockholm office and built what is now the world’s first formative course evaluation chatbot. Take that, Silicon Valley!
The beta version of Hubert has now been out for a couple of weeks and has so far attracted a couple of hundred users.
So, what do they think? A great advantage of chatbots is that you can ask users for instant feedback. In our case: feedback on giving feedback. Feedback inception, sort of.
For every formative evaluation, Hubert asks each student what they thought about the conversation. Comments have varied wildly, from users describing Hubert as a bad version of Clippy to some pretty positive ones.
Here’s our all-time favorite:
When it comes to more constructive feedback, the most common recurring theme is that Hubert is too corny and formal when asking his questions. And we agree: Hubert needs more chill. A big part of making evaluations better is dropping the rigid and monotonous way of asking questions.
Hubert’s lack of conversational finesse is a bit harder to fix in a jiffy; we still need more data to be able to respond in a more human way. Users actually seem to accept Hubert’s lack of real intelligence, but when he starts to repeat himself, people go crazy. Nothing, and I do mean nothing, is more annoying than talking to a bot stuck on repeat.
“I’m just a simple bot, please just say ‘Yes’ or ‘No’”
For example, when students enter their individual link to the evaluation, they’re asked whether or not they’re ready to start.
Students have a habit of saying ‘Yes, I’m ready’ in some very strange and inventive ways. When Hubert doesn’t even understand that they’d like to move on to the next step, it sets the tone for the whole conversation and leaves a bad first impression.
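A rough illustration of the problem: until a bot can truly comprehend context, a common stopgap is simple keyword matching over the user’s reply. The sketch below is purely hypothetical (the `detect_ready`, `AFFIRMATIVE`, and `NEGATIVE` names are ours, not Hubert’s actual code) and shows why inventive phrasings slip through: anything outside the keyword lists falls into “unknown”.

```python
import re

# Hypothetical keyword lists; a real bot would need far broader coverage.
AFFIRMATIVE = {"yes", "yeah", "yep", "yup", "sure", "ok", "okay", "ready", "absolutely"}
NEGATIVE = {"no", "nope", "nah", "not"}

def detect_ready(message: str) -> str:
    """Classify a free-text reply as 'yes', 'no', or 'unknown'."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & NEGATIVE:
        return "no"       # check negatives first, so "no, not yet" isn't a yes
    if words & AFFIRMATIVE:
        return "yes"
    return "unknown"      # strange, inventive phrasings end up here
```

A reply like “Yes, I’m ready!” matches fine, but creative answers with none of the listed keywords land in the “unknown” bucket, which is exactly where the bad first impression starts.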
Adding to the frustration is the repeating ‘I-Have-No-Idea-What-You’re-On-About’ message.
With more data, we can get Hubert to actually comprehend the context and answer accordingly, but in the meantime, we quickly learned that this problem is most easily solved by teaching Hubert some new lingo. Saying the same thing with different words really adds a lot towards making the conversation feel more human. A simple “Wait, what? Yes?” is still a poor replacement for Hubert actually comprehending, but at least it makes the user a bit more intrigued to continue and explore what else he can say.
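The “new lingo” trick above can be sketched in a few lines. This is a minimal illustration, not Hubert’s implementation: the idea is simply to draw the fallback message from a pool of phrasings while never repeating the one just used, so the bot doesn’t sound stuck on repeat.

```python
import random

# Hypothetical pool of fallback phrasings; the exact wording is made up here.
FALLBACKS = [
    "Wait, what? Yes?",
    "Hmm, I didn't quite catch that.",
    "Sorry, I'm just a simple bot. Could you rephrase that?",
]

def make_fallback_picker(phrases):
    """Return a picker that never emits the same phrase twice in a row."""
    last = None

    def pick():
        nonlocal last
        # Exclude the previously used phrase before choosing at random.
        choice = random.choice([p for p in phrases if p != last])
        last = choice
        return choice

    return pick
```

Even this tiny bit of variation goes a long way: the user still hits the “I don’t understand” wall, but it no longer feels like talking to a broken record.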
Right now, we’re in full swing working on a significantly larger vocabulary for Hubert, and we see increasingly better conversations every day. It’s like watching our very own code-baby grow up. Simply amazing.
These are the kinds of issues we’re constantly working on, and even though Hubert still isn’t perfect, we’re improving him rapidly to make sure our users notice and trust our efforts to make him better. After all, we have grand plans for Hubert and will continue to work hard to push them through.
This autumn, we have a few really exciting collaborations coming up, so stay tuned for more reports and announcements.
We’re sharing these insights to help other chatbot developers move forward with their awesome products. Since chatbots are still largely unexplored in many areas, building one is all about testing, getting user feedback, and improving.
We hope you’ve learned something useful from this post.
Oh, and please try Hubert and tell every teacher you know to try him, or create your own account and start sending evaluations.
But please, don’t teach him anything stupid, and bear in mind we’re still in beta.