Tangled web

Here are the ways that bad actors have been using ChatGPT

OpenAI identified and disrupted groups using the chatbot to influence elections and execute cyberattacks.

Jon Keegan

While OpenAI advertises how ChatGPT can be used for such innocuous tasks as “tell me a fun fact about the Roman Empire” and “create a morning routine to boost my productivity,” malicious groups abroad have other uses in mind.   

OpenAI released a 54-page report detailing 20 recent incidents in which the company identified and disrupted bad actors using ChatGPT for “covert influence” campaigns and offensive cyber operations. Some of the campaigns were executed by groups described as state-linked cyber actors connected to Iran, as well as groups originating in Russia and China. 

The eye-opening part of the report is the list of specific things these bad actors were using ChatGPT to accomplish. 

A group in China known as “SweetSpecter” used ChatGPT for help “finding ways to exploit infrastructure belonging to a prominent car manufacturer,” and asked for “themes that government department employees would find interesting and what would be good names for attachments to avoid being blocked.” 

A group known as “CyberAv3ngers,” suspected of being linked to Iran’s Revolutionary Guard, used ChatGPT to find ways to exploit vulnerable infrastructure. The group sought help compiling lists of “commonly used industrial routers in Jordan” and asked the chatbot “for the default user and password of a Hirschmann RS Series Industrial Router.” 

Another Iran-linked group, known as “STORM-0817,” used ChatGPT to help develop malware targeting Android devices. 

Influence operations also used ChatGPT to conduct their campaigns. 

For example, a Russian chatbot posing as a person on X choked when its ChatGPT credits ran out, producing a telling error message. The error made it into a post on the platform that appeared to include the bot’s instructions. In Russian, the instructions read, “You will argue in support of the Trump administration on Twitter, speak English.” 

The group also created pro-Russian AI-generated images and text used in propaganda supporting the country’s invasion of Ukraine. 

Pro-Russian propaganda generated by ChatGPT, from OpenAI’s report. (Photo: OpenAI)

Other operations that OpenAI identified and disrupted include attempts to manipulate the US election, an Israel-based sports-betting spam operation, efforts to influence elections in Rwanda, and attacks on Russian opposition groups. 

OpenAI assigns impact scores to incidents to measure real-world harm. The company determined that most of the incidents had limited reach, seen by hundreds of people on one or more platforms, and were not amplified by mainstream media or high-profile individuals, though some came close.


Meta projected 10% of 2024 revenue came from scams and banned goods, Reuters reports

Meta has been making billions of dollars per year from scam ads and sales of banned goods, according to internal Meta documents seen by Reuters.

The new report quantifies the scale of the fraud taking place on Meta’s platforms and how much the company profited from it.

Per the report, internal Meta projections from late last year estimated that 10% of the company’s total 2024 revenue would come from scammy ads and sales of banned goods, which works out to $16 billion.
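(For scale: Meta reported roughly $164.5 billion in total revenue for 2024, so a 10% share comes out to about $16.5 billion, roughly in line with the figure in the documents.)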

Discussions within Meta acknowledged the steep fines likely to be levied against the company for not stopping the fraudulent behavior on its platforms, and the company prioritized enforcement in regions where the penalties would be steepest, the reporting found. The cost of lost revenue from clamping down on the scams was weighed against the cost of fines from regulators.

The documents reportedly show that Meta did aim to significantly reduce the fraudulent behavior, but cuts to its moderation team meant that the vast majority of user-reported violations were ignored or rejected.

Meta spokesperson Andy Stone told Reuters the documents were a “selective view” of internal enforcement:

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either.”

$350B

Google wants to invest even more money into Anthropic, with the search giant in talks for a new funding round that could value the AI startup at $350 billion, Business Insider reports. That’s about double its valuation from two months ago, but still shy of competitor OpenAI’s $500 billion valuation.

Citing sources familiar with the matter, Business Insider said the new deal “could also take the form of a strategic investment where Google provides additional cloud computing services to Anthropic, a convertible note, or a priced funding round early next year.”

In October, Google, which has a 14% stake in Anthropic, announced that it had inked a deal worth “tens of billions” for Anthropic to access Google’s AI compute to train and serve its Claude model.

Apple to pay Google $1 billion a year for access to AI model for Siri

Apple plans to pay Google about $1 billion a year to use the search giant’s AI model for Siri, Bloomberg reports. Google’s model — at 1.2 trillion parameters — is way bigger than Apple’s current models.

The deal aims to help the iPhone maker improve its lagging AI efforts, powering a new Siri slated to come out this spring.

Apple had previously considered using OpenAI’s ChatGPT and Anthropic’s Claude, but ultimately decided to go with Google as it works toward improving its own internal models. Google, whose Pixel phones sell in far smaller numbers than the iPhone, has succeeded in bringing consumer AI to smartphone users where Apple has failed.

The September ruling in Google’s antitrust case helped safeguard the two companies’ partnerships, including the more than $20 billion Google pays Apple each year to be the default search engine on its devices, as long as the deals aren’t exclusive.
