BRUSSELS (Reuters) – European Union police force Europol warned on Monday that the artificial intelligence-powered chatbot ChatGPT could be misused in phishing attempts, disinformation and cybercrime, adding to concerns ranging from the legal to the ethical.
Since its launch last year, Microsoft-backed OpenAI’s ChatGPT has set off a tech frenzy, prompting competitors to launch similar products and companies to incorporate it or similar technologies into their apps and products.
“As the capabilities of large language models (LLMs) such as ChatGPT are being actively improved, the potential exploitation of these types of AI systems by criminals provides a grim outlook,” Europol said while presenting its first technology report starting with the chatbot.
It singled out the harmful use of ChatGPT in three areas of crime.
“ChatGPT’s ability to craft highly realistic text makes it a useful tool for phishing purposes,” said Europol.
With its ability to reproduce language patterns and impersonate the speech style of specific individuals or groups, the EU law enforcement agency said, the chatbot could be used by criminals to target victims.
The agency said ChatGPT’s ability to produce authentic-sounding text at speed and scale also makes it an ideal tool for propaganda and disinformation.
“It allows users to create and publish messages that reflect a specific narrative with relatively little effort.”
Europol said that criminals with little technical knowledge could turn to ChatGPT to produce malicious code.
Reporting by Foo Yun Chee; Editing by Angus MacSwan