How Secure Are Chatbots?
Chatbots are the hottest new digital technology, and it’s not hard to see why. By providing a user-friendly conversational experience across multiple messaging channels, chatbots open up opportunities for brands to engage more directly and frequently with their customers in ways that feel more genuine and personal. As a result, chatbots are poised to disrupt several industries, including retail banking.
The rising popularity of messaging platforms and conversational experiences has led to rapidly increasing rates of chatbot development, deployment, and adoption in recent months. Consumers are already using chatbots on a range of messaging platforms such as Facebook Messenger, Slack, Kik, SMS, and WhatsApp, and companies and developers alike are continuing to roll out chatbots to capitalize on the opportunity.
Recently, there’s been a wave of press expressing concern for chatbot security, labeling the technology with bold claims like “the next big cybercrime target.” While it’s true that security is important for any technology, much of the concern around chatbot security is excessive alarmism. Chatbots may be the newest technology from a user experience perspective, but they operate on standard, secure Internet protocols that have been in place for decades.
Internet Security 101
Internet security is complex, but at a very basic level, it involves making sure communication between two parties cannot be intercepted, altered, forged, or read by unauthorized third parties.
Since the earliest days of the Internet, security features have been implemented to ensure the safe transfer of data between parties. HTTPS is the standard web protocol for securing online communication. This protocol facilitates secure communication by transferring data over Hypertext Transfer Protocol (HTTP) through a connection encrypted by Transport Layer Security (TLS) or Secure Sockets Layer (SSL). The end result is that HTTPS protects the privacy and integrity of data exchanged between parties.
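To make "encrypted by TLS" a bit more concrete, here's a minimal sketch using Python's standard `ssl` module. The defaults it shows are the client-side safeguards every HTTPS request relies on; this is an illustration of the protocol's guarantees, not a full client.

```python
import ssl

# Python's default SSL context enables the safeguards that make HTTPS
# trustworthy: the server must present a valid certificate, and the
# certificate's hostname must match the site being contacted.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates are verified
print(context.check_hostname)                    # hostnames are checked
```

Any connection wrapped in this context is encrypted in transit, which is what protects data from being read or altered by third parties along the way.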
There are two types of security concerns:
- Threats: Ways that a system can be compromised.
- Vulnerabilities: Weaknesses in a system that are not protected by safeguards and can therefore be exploited.
For example, a bank robber can be considered a security threat. His attempts to steal money from the bank represent a way that the bank’s security can be compromised. If the bank does not take steps to protect itself from this threat (such as securing assets in a locked vault), it has security vulnerabilities that could allow a robbery to occur. While the threat of robbery always exists, the safeguarding of assets in a secure location allows the bank to mitigate the threat and prevent possible damages.
With every web technology, security threats are a fact of life; all systems have weaknesses. However, with careful safeguarding of attack vectors, security vulnerabilities can be mitigated. As always, it is up to the creators of these technologies to follow best practices for security, such as those outlined by the Open Web Application Security Project (OWASP).
How Chatbots Work
Chatbots send and receive messages through existing voice and text messaging platforms, like Facebook Messenger, Slack, Siri, SMS, and Amazon's Alexa. When a user sends a message to a chatbot (or vice versa), the request goes to the parent platform, which verifies the sender's identity before fulfilling the data request, a step required for personalized experiences. This interface allows users to get information from the brands they engage with via the messaging platforms they already use.
Much of the concern around chatbot security is centered around the applications of chatbots in the financial services sector, where communication between institutions and their clients must not only maintain the highest levels of privacy and security but also ensure compliance with industry regulations.
Banks and other financial institutions already do this through secure messaging services, which transmit client data over HTTPS. HTTPS, combined with authentication carried in HTTP metadata (such as credentials sent in request headers), has enabled financial providers like Stripe to handle sensitive information securely for years. Banking chatbots use these same techniques to secure the transmission of user data.
Same Technology, New Experiences
The bottom line is that the technology powering chatbots isn’t new; it’s just personified through artificial intelligence. New experiences, platforms, and devices are redefining how users engage with brands, but they still transmit data via secure HTTPS protocols.
The voice-activated experience
Within the past few years, voice-activated virtual assistants have become popular. Apple’s Siri, Microsoft’s Cortana, Google Assistant, and Amazon’s Alexa are widely available to the masses and can carry out an ever-increasing number of tasks. With the rising popularity of voice-activated devices for the home (such as the Amazon Echo and Google Home), voice-activated chatbots will only become more prevalent and sophisticated.
The text-based experience
When a voice-activated device isn’t available, or you simply prefer to text rather than talk, there’s the text-based chat experience. You can send a text message to a chatbot across a range of messaging platforms, from mobile messaging apps like Facebook Messenger to plain old SMS messaging. There are now over 11,000 chatbots on Facebook Messenger alone, and that number is only expected to increase over the next year.
These new experiences are raising questions about the privacy and security of user data. However, we've seen this same concern play out before with the transition from websites to mobile apps. As apps became the dominant mobile technology, computing moved out of stationary, physically secured buildings and into people's pockets, creating a host of new security concerns. In response, an entire industry grew up around mobile security, and that already robust industry will continue to address new and existing security concerns as chatbots rise in popularity.
How Chatbots Are Secured
Chatbots use two basic processes to ensure security: authentication (the process of verifying a user’s identity) and authorization (the process of granting a user permission to execute a given task).
Requests for data through chatbots are verified using authentication tokens, which allow users to verify their identity without repeatedly entering their login credentials. For example, say a user wants to order an Uber ride through Facebook Messenger. While logged into Messenger, the user would send a ride request through the Messenger app. After verifying the user's identity, the app generates a secure authentication token, which is relayed to Uber along with the ride request. The token allows Uber to verify the identity of the user, and the request is processed.
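The token relay described above can be sketched as a toy model. All names here are hypothetical stand-ins (this is not any platform's real API): the "platform" verifies the user and issues a token, and the downstream "service" fulfills requests only after asking the platform to confirm who the token belongs to.

```python
import secrets

class MessagingPlatform:
    """Stands in for Messenger: verifies users and issues tokens."""
    def __init__(self):
        self._tokens = {}  # token -> user_id

    def log_in(self, user_id):
        # In reality this would happen only after checking credentials.
        token = secrets.token_urlsafe(16)
        self._tokens[token] = user_id
        return token

    def who_is(self, token):
        # The downstream service asks the platform to confirm identity.
        return self._tokens.get(token)

class RideService:
    """Stands in for Uber: fulfills requests only for verified users."""
    def __init__(self, platform):
        self.platform = platform

    def request_ride(self, token, destination):
        user = self.platform.who_is(token)
        if user is None:
            return "rejected: unknown token"
        return f"ride to {destination} booked for {user}"
```

A valid token gets the ride booked; a forged one is rejected without the service ever seeing the user's password, which is the point of token-based authentication.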
Chatbots are deployed across many platforms, each of which has its own internal security responsibilities. Securing financial chatbots rests with these platforms just as much as with the bot developers.
Chatbots can be secured using many of the same security strategies used for other mobile technologies:
- User identity authentication: A user’s identity is verified with secure login credentials, such as a username and password. These credentials are exchanged for a secure authenticated token that is used to continually verify the identity of the user.
- Authentication timeouts: Authenticated tokens can be revoked either by the user or automatically by the platform after a given amount of time.
- Two-factor authentication: A user is required to verify their identity through two separate channels (e.g., once by email, then again by text message).
- Biometric authentication: A user is required to verify their identity using a unique physical marker, such as a fingerprint or retina scan (e.g., Apple’s Touch ID).
- End-to-end encryption: The entire conversation is encrypted so that only the two parties involved in the conversation can read it. Facebook Messenger recently implemented this capability with their Secret Messages feature, and we hope to see support for bot integration soon.
- Self-destructing messages: When potentially sensitive information is transmitted, the message containing this information is destroyed after a given amount of time. Our banking chatbot Abe does this on Slack.
Concern for security is an important part of the digital age, and each new innovation comes with security concerns. However, the technologies making up most chatbots have been used to safely transmit data for many years.
Chatbot developers are keenly aware of the need for privacy and security, particularly those building chatbots for the financial industry. When these built-in security measures are combined with basic user security precautions, chatbots provide both strong security and ease of use. This allows enterprises to offer their clients the best customer experience using the latest security measures.
It’s easy to become overly concerned about security with new technologies, but in the case of chatbots, much of that concern is misplaced. Chatbots are built on the same secure Internet infrastructure and integrated platforms as websites and apps; they just provide different user experiences.
This post originally appeared on Abe.