Employees and customers often enter sensitive information during chatbot sessions, but you can minimize chatbot security and privacy risks.
Are chatbots your next big data vulnerability?
Yes, chatbots, those little add-ons to Slack and other messaging apps that answer basic HR questions, conduct company-wide polls, or get information from customers before connecting them to a person, pose a security risk.
Because of the way we buy bots, Rob May, CEO of chatbot vendor Talla, says the IT industry is heading toward a data security crisis. “In the early days of SaaS [software as a service],” he explains, software “was sold as, ‘Hey, marketing department, guess what? IT doesn’t have to sign off, you just need a web browser,’ and IT thought that was fine until one day your whole company was SaaS.”
Suddenly, critical operations were managed by platforms bought without any user or data management best practices in place. To head off similar data vulnerability from chatbots, May recommends streamlining bot purchasing and implementation now.
Unfortunately, employees might already be using chatbots to share salary information, health insurance details, and similar data. So what steps can IT take now to keep that data safe? How do you stop this vulnerability before it starts? What other questions should you be asking?
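One concrete step along these lines, sketched below as an illustration rather than anything the article prescribes, is redacting obviously sensitive patterns from chatbot transcripts before they are logged or forwarded to a vendor. The patterns and function names here are hypothetical examples and deliberately minimal; a real deployment would need far broader coverage.

```python
import re

# Illustrative, non-exhaustive patterns for data employees might type
# into a chatbot (assumed examples, not from the article).
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(message: str) -> str:
    """Replace each matched pattern with a labeled placeholder
    before the message is stored or sent to a third party."""
    for label, pattern in REDACTION_PATTERNS.items():
        message = pattern.sub(f"[REDACTED {label}]", message)
    return message

print(redact("My SSN is 123-45-6789, reach me at jo@example.com"))
```

Redaction at the logging layer is only one mitigation; it does not replace vetting how a chatbot vendor stores and accesses session data in the first place.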