Here are the ways that bad actors have been using ChatGPT
OpenAI identified and disrupted groups using the chatbot to influence elections and execute cyberattacks.
While OpenAI advertises how ChatGPT can be used for such innocuous tasks as “tell me a fun fact about the Roman Empire” and “create a morning routine to boost my productivity,” malicious groups abroad have other uses in mind.
OpenAI released a 54-page report detailing 20 recent incidents in which the company identified and disrupted bad actors using ChatGPT for “covert influence” campaigns and offensive cyber operations. Some of these campaigns were run by groups described as state-linked cyber actors connected to Iran, as well as groups originating in Russia and China.
The most eye-opening part of the report is the list of specific tasks these bad actors were using ChatGPT to accomplish.
A group in China known as “SweetSpecter” utilized ChatGPT for help “finding ways to exploit infrastructure belonging to a prominent car manufacturer,” as well as asking for “themes that government department employees would find interesting and what would be good names for attachments to avoid being blocked.”
A group known as “CyberAv3ngers,” suspected of links to Iran’s Revolutionary Guard, used the chatbot to research ways to exploit vulnerable infrastructure. The group asked for lists of “commonly used industrial routers in Jordan” and for “the default user and password of a Hirschmann RS Series Industrial Router.”
Another Iran-linked group, known as “STORM-0817,” used ChatGPT to help develop malware targeting Android devices.
Influence operations also used ChatGPT to conduct their campaigns.
For example, a Russian chatbot posing as a person on X choked when its ChatGPT credits ran out, producing a telling error message. The error made it into a post on the platform that appeared to include the bot’s instructions. Translated from Russian, the instructions read, “You will argue in support of the Trump administration on Twitter, speak English.”
The group also created pro-Russian AI-generated images and text used in propaganda supporting the country’s invasion of Ukraine.
Other operations that OpenAI identified and disrupted include attempts to manipulate the US election, an Israel-based sports-betting spam operation, efforts to influence elections in Rwanda, and attacks on Russian opposition groups.
OpenAI assigns impact scores to incidents to gauge real-world harm. The company determined that most of the incidents were “limited” in nature: they were seen by only hundreds of people on one or more platforms and were not amplified by mainstream media or high-profile individuals, though some came close.