credits: recast.ai (Jasmine Anteunis)

How Chatbots Will Redefine the Future of App Privacy

Hamza Harkous
Apr 24, 2016 · 7 min read


The race for the next major chatbot platform has started. Microsoft and Facebook, among others, are competing to be the go-to place for bot developers. Their pitch is that any business, big or small, will be able to easily design an intuitive user experience via their bot platforms. They can also reach around one billion users who already have Facebook Messenger or Skype, without requiring them to install a single new app. That is why many analysts expect bots to make a lot of traditional apps obsolete.

As Jan Dawson further points out, what these early platforms have in common is that they come from players who previously failed to take hold of the mobile arena. That arena is where Apple and Google excelled with iOS and Android. Microsoft’s Windows Phone, on the other hand, is effectively on hold, while Facebook’s earlier attempts (remember the Facebook Phone and Facebook Home?) never built momentum. Hence, owning the bot ecosystem is the next hope for Facebook and Microsoft, not only for direct monetization but also for hunting the biggest of all fish: Your Personal Data.

Facebook and Microsoft are after the biggest of all fish: Your Personal Data.

A Single Medium for All Transactions

What is missing from the bots’ narrative is that, when bot platforms take off, we will be entering a new era from a privacy perspective. We will be completing a new shift in data control from users’ hands into the hands of Facebook, Google, and Microsoft, and that is, in my opinion, the next big privacy concern.

The idea is that everything you do will be under Facebook’s eyes. Forget about Facebook’s Like button tracking your visits to other sites; bot platforms are on a whole new level. Instead of ordering a ride from Uber and buying clothes from Spring in separate apps, you will be doing both inside Messenger. The whole transaction is logged by Facebook, which mediates the communication between you and every other company. Given the diversity of the bots’ use cases, it will no longer be an absurdity for a messaging app to eventually get access to every permission out there.
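To make this mediation concrete, here is a minimal Python sketch of the developer’s side of such a conversation. The webhook shape is illustrative rather than Facebook’s exact format; the point is that the message text, tied to a platform-assigned user ID, reaches the developer only after passing through (and being visible to) the platform’s servers.

```python
# A minimal sketch of a bot developer's webhook, loosely modeled on how
# messaging platforms delivered bot messages in 2016. Field names are
# illustrative, not the exact Messenger format.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json()
    for event in payload.get("messaging", []):   # illustrative structure
        user_id = event["sender"]["id"]          # platform-scoped user ID
        text = event["message"]["text"]          # full message content
        # By the time this code runs, the platform has already seen (and
        # can log) both the user ID and the plaintext of the message.
        handle_order(user_id, text)
    return "ok"

def handle_order(user_id, text):
    print(f"user {user_id} asked: {text}")

if __name__ == "__main__":
    app.run(port=5000)
```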

credits: recast.ai (Jasmine Anteunis)

The only way to mitigate this issue is for Facebook to introduce a private channel (via end-to-end encryption) between the user and the bot developers. That way, Facebook would only know which bots you are talking to, but not the content of your conversations (i.e. your ride destination or shopping cart items). The first attempt at such a private channel comes from Wire, whose upcoming API offers end-to-end encryption between the user and the bot developer. However, a step like this is not likely to be taken by the major platforms, as it goes against their business models in the first place. It would also cut them off from the gold mine of conversational data between bots and humans, which might be the key to training the true AI-based bots of the future. Moreover, these platforms cannot easily monitor the spamming or phishing behavior of misbehaving bots without having access to the conversation itself. Hence, to say the least, the majority of bot platforms were not designed with privacy in mind.
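For the curious, here is a minimal sketch of what such an end-to-end encrypted channel could look like, using the PyNaCl library. Key distribution and identity verification are omitted, and this is not a description of Wire’s actual implementation; the point is that a platform relaying only ciphertext learns who is talking to which bot, but not what is being said.

```python
# A minimal sketch of an end-to-end encrypted user-to-bot channel (PyNaCl).
from nacl.public import PrivateKey, Box

# Each side generates its own keypair; only public keys are exchanged.
user_key = PrivateKey.generate()
bot_key = PrivateKey.generate()

# The user encrypts directly to the bot developer's public key.
user_box = Box(user_key, bot_key.public_key)
ciphertext = user_box.encrypt(b"Ride to the airport at 6am")

# The platform forwards `ciphertext` without being able to read it.

# The bot developer decrypts with its private key and the user's public key.
bot_box = Box(bot_key, user_key.public_key)
plaintext = bot_box.decrypt(ciphertext)
print(plaintext.decode())   # "Ride to the airport at 6am"
```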

Securing bot communications is against Facebook’s business model in the first place.

The Ultimate Data Brokers

Based on this, Facebook and other bot platforms have the potential to become the ultimate data brokers of all time. Traditional data-brokering companies, such as Acxiom and Epsilon, invest a lot in gathering data from multiple sources: supermarket purchases, bank transactions, health records, etc. They also work on unifying and aligning such data to map it to specific people before selling it to marketers or even governmental agencies. Their work is one of the least heard-of multi-billion dollar industries.

These companies, however, have always been hindered by the absence of a unique identifier for each person. They have to use a lot of heuristics to match, for example, your health record’s identifier with your supermarket loyalty-card information. Facebook, which has partnered with these data brokers before for its advertising business, might never have this alignment problem with bots. All your conversations and transactions are tied to your Facebook ID or mobile number. It is therefore well positioned to take over the reins of data brokering.
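To illustrate the difference, here is a toy sketch contrasting broker-style fuzzy record linkage with a platform-style join on a single identifier. The data and the matching heuristic are invented for illustration; real brokers use far more elaborate techniques.

```python
# Broker problem: link records about the same person without a shared ID.
from difflib import SequenceMatcher

health_records = [{"name": "Jonathan Smith", "zip": "10001", "condition": "asthma"}]
loyalty_cards  = [{"name": "Jon Smith",      "zip": "10001", "purchases": ["inhaler"]}]

def fuzzy_match(a, b):
    # Heuristic: same ZIP code plus approximate name similarity.
    return a["zip"] == b["zip"] and \
           SequenceMatcher(None, a["name"], b["name"]).ratio() > 0.7

broker_joined = [(h, c) for h in health_records for c in loyalty_cards
                 if fuzzy_match(h, c)]            # probabilistic, error-prone

# Platform advantage: every transaction already carries the same user ID.
rides     = [{"user_id": "fb:1234", "destination": "clinic"}]
purchases = [{"user_id": "fb:1234", "item": "inhaler"}]
platform_joined = [(r, p) for r in rides for p in purchases
                   if r["user_id"] == p["user_id"]]   # exact, trivial join
```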

Facebook is well positioned to take over the reins of data brokering.

The Future of Permissions

Another gray area in the future of bot platforms is the concept of permissions. On mobile platforms, users are asked to grant specific permissions (e.g. contacts, location, body sensors, billing, etc.) before an app gets access to their data. Until now, however, this has not been fully addressed in the case of bots.

Take the example of location access. Today, whenever a chatbot needs your location, it has to explicitly request it every time, unless it is the same location you requested before. Even then, it only gets your current location; it cannot track you over time. Imagine if navigation apps like Google Maps had to issue this kind of request continuously; they would quickly become unusable. The same applies to other types of permissions where seamless access is needed. Fitness-related chatbots are one such use case: if they are to behave like real apps with a conversational interface, they will need always-on access to body sensors, which is not currently possible.

As chatbot platforms grow, they will have to handle the same questions that Android and iOS struggled with for years: How do you ensure that users give informed consent to permissions? And how do you guarantee that apps are only granted the permissions they need?
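As a thought experiment, a bot permission model could borrow from Android’s declared-permissions approach: each bot declares what it needs up front, and every access is checked against explicit, per-user grants. The sketch below is purely hypothetical; no bot platform offered anything like it at the time, and all names are illustrative.

```python
# A hypothetical bot permission model: declared permissions + explicit consent.
from dataclasses import dataclass
from enum import Enum, auto


class Permission(Enum):
    CURRENT_LOCATION = auto()      # one-off location, per request
    BACKGROUND_LOCATION = auto()   # continuous tracking
    BODY_SENSORS = auto()          # always-on fitness data
    PAYMENT = auto()


@dataclass
class BotManifest:
    name: str
    requested: set[Permission]     # declared up front, shown to the user


class ConsentStore:
    """Per-user record of which permissions were explicitly granted."""
    def __init__(self):
        self._grants: dict[tuple[str, str], set[Permission]] = {}

    def grant(self, user_id: str, bot: BotManifest, perm: Permission):
        if perm not in bot.requested:
            raise ValueError(f"{bot.name} never declared {perm.name}")
        self._grants.setdefault((user_id, bot.name), set()).add(perm)

    def allowed(self, user_id: str, bot: BotManifest, perm: Permission) -> bool:
        return perm in self._grants.get((user_id, bot.name), set())


# Usage: a fitness bot declares what it needs; access stays denied until the
# user explicitly consents to each permission.
fitness_bot = BotManifest("FitBot", {Permission.BODY_SENSORS,
                                     Permission.CURRENT_LOCATION})
consent = ConsentStore()
print(consent.allowed("user-42", fitness_bot, Permission.BODY_SENSORS))  # False
consent.grant("user-42", fitness_bot, Permission.BODY_SENSORS)
print(consent.allowed("user-42", fitness_bot, Permission.BODY_SENSORS))  # True
```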

Should users trust Facebook to take these decisions on their behalf? Alarmingly, with its “Customer Matching” service offered to bot developers, Facebook is not giving the right signs in that direction. It is basically allowing companies that have users’ mobile numbers to initiate contact with them. It is legitimate to ask, then, whether any user information will eventually be transferred to these companies before users express interest in their services. In the end, a lot of choices that make sense from a business point of view sit in the privacy-nightmare zone for users.

Facebook will have to handle the same issues that Android and iOS struggled with for years.

Cloud-based AI Engines

Furthermore, the success of conversational bots in the long run hugely depends on advances in natural language processing and user intent understanding. Such advances were never a key ingredient for a shopping or flower-delivery company in traditional touch-based app interfaces. Given that in-house AI is still a long shot for most companies, they will be relying more and more on cloud-based APIs for intelligent processing of users’ input. The big players here are, again, Facebook with wit.ai, Microsoft with the Bot Framework, IBM with Watson, and, recently, Google with api.ai.
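As a rough idea of what this looks like in practice, the sketch below sends a user utterance to a cloud NLU endpoint for intent extraction, loosely modeled on wit.ai’s HTTP API of that period; treat the endpoint, parameters, and response fields as illustrative rather than exact. Note what leaves the developer’s hands: the raw user utterance itself.

```python
# Handing user input to a cloud NLU service for intent extraction (sketch).
import requests

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"   # hypothetical credential

def extract_intent(utterance: str) -> dict:
    resp = requests.get(
        "https://api.wit.ai/message",                     # illustrative endpoint
        params={"q": utterance, "v": "20160424"},         # raw text goes to the cloud
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()   # detected intents/entities, per the service's schema

result = extract_intent("Book me a ride to the airport tomorrow at 6am")
print(result)
```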

Accordingly, this is yet another avenue where data is going to be more and more centralized, raising further privacy concerns. It is also part of the push to have these AI engines power bots outside Messenger or Skype. Data exchanged with bots via pure messaging applications like Telegram, Slack, or Kik might end up powering Facebook’s intelligence (and potentially advertising) engine. Simply put, the bot AI engines might be the Trojan horses that let their owners mediate bot conversations on any platform.

credits: recast.ai (Jasmine Anteunis)

In sum, users will be paying for the UI convenience with their own data. Facebook and the other bot platform owners still have a lot to figure out when it comes to striking a balance among monetization, user privacy, and contextual awareness. In any case, these platforms represent a clear manifestation of the post-privacy world, where users’ behavioral data is in Facebook’s hands from the onset, and users are not even consulted when their data is shared. To be realistic, the question is not whether researchers can do anything to stop them; it is rather how they can minimize the loss!

If you are interested in the related discussion on privacy in the next generation of AI-rich messaging apps, check out my follow-up article on Encryption, AI, and the Myth of Incompatibility.


Enjoyed the article? Click the ❤ below to recommend it to other interested readers!
