Chatbots have been getting a lot of attention for making inappropriate comments, but now it turns out they can also be used to steal your data.
How Chatbots Can Be 'Hacked' by Corpus Poisoning
Researchers have found a way to make Bing’s artificial intelligence (AI) chatbot ask for user information. The new method could be a useful tool for hackers amid growing concerns that chatbots could be used for malicious purposes, including scams.
“You could use AI chatbots to make your message sound more believable,” Murat Kantarcioglu, a computer science professor at the University of Texas at Dallas, told Lifewire in an email interview. “Eventually, fake text messages could be almost as good as real text messages.”
Getting hacked by a chatbot might be easier than you think. A new Cornell study found that AI chatbots can be manipulated through text embedded in web pages. The idea is that a hacker can insert a hint into the text in a small font that activates when someone asks the chatbot a question.