Tech
Crash test dummies
Crash test dummies (Bill O'Leary / Getty Images)

The crash test dummies for new AI models

In the absence of actual regulation, AI companies use "adversarial testers" to check their new models for safety. Does it actually work?

When a car manufacturer develops a new vehicle, it delivers one to the National Highway Traffic Safety Administration to be tested. The NHTSA drives the vehicle into a head-on collision with a concrete wall, batters it from the side with a moving sled, and simulates a rollover after a sharp turn, all filmed with high-speed cameras and crash test dummies wired with hundreds of sensors. Only after the vehicle has passed this rigorous battery of tests is it finally released to the public. 

AI large language models, like OpenAI’s GPT-4o (one of the models which powers ChatGPT), have been rapidly embraced by consumers, with several major iterations being released over the past year and a half.

The prerelease versions of these models are capable of a number of serious harms that its creators fully acknowledge, including encouraging self-harm, generating erotic or violent content, generating hateful content, sharing information that could be used to plan attacks or violence, and generating instructions for finding illegal content. 

These harms aren’t purely theoretical — they can show up in the version the public uses. OpenAI just released a report describing 20 operations where state-linked actors used ChatGPT to execute “covert influence” campaigns and plan offensive cyber operations. 

The AI industry’s equivalent of the NHTSA’s crash tests is a process known as “red teaming” in which experts test, prod, and poke the models to see if they can cause harmful responses. No federal laws govern such testing programs, and new regulations mandated by the Biden administration’s executive order on AI safety are still being implemented.

For the time being, each AI company has its own approach to red teaming. The feverish pace of new, increasingly powerful AI models being released is taking place at a moment when incomplete transparency and a lack of true independent oversight of the process puts the public at risk. 

How does this all work in practice? Focusing on OpenAI, Sherwood spoke with four red-team members who tested its GPT models to learn how the process works and what troubling things they were able to generate.

Red-team rules

OpenAI assembles red teams made up of dozens of paid “adversarial testers” with expert knowledge in a wide number of areas, like nuclear, chemical and biological weapons, disinformation, cybersecurity, and healthcare. 

The teams are recruited and paid by OpenAI (though some decline payment), and they are instructed to attempt to get the model to act in bad ways, such as designing and executing cyberattacks, planning influence campaigns, or helping users engage in illegal activity. 

This testing is done both by humans manually and via automated systems, including other AI tools. One important task is to try to “jailbreak” the model, or bypass any guardrails and safety measures that have been added to prevent bad behavior.

Red-team members are usually asked to sign nondisclosure agreements, which cover certain specifics of the vulnerabilities they observe. 

OpenAI lists the individuals, groups, and organizations that participate in the red-teaming process. In addition to individual domain experts, companies that offer red teaming as a service are also employed. 

One firm that participated in the red teaming of GPT-4o is Haize Labs.

“One of the reasons we started the company was because we felt that you could not take the frontier labs at their face value, when they said that they were the best and safest model,” Leonard Tang, CEO and cofounder of Haize Labs, said.

“Somebody needs to be a third-party red teamer, third-party tester, third-party evaluator for models, that was totally divorced from any sort of conflict of interest,” Tang said.

Tang said Haize Labs was paid by OpenAI for their red-teaming work. 

OpenAI did not respond to a request for comment.

Known harms

AI-model makers often release research papers known as “system cards” (or model cards) for each major release. These documents describe how the model was trained, tested, and what the process revealed. The amount of information disclosed in these papers varies from company to company. OpenAI’s GPT-4o model card is about 10,000 words, Meta’s Llama 3 model card is a relatively short document at 3,000 words, while Anthropic’s Claude 3 model card is lengthy, at around 17,000 words. What’s in these documents can be downright shocking. 

Last March, OpenAI released GPT-4, along with its accompanying system card, which included a front-page content warning about disturbing, offensive, and hateful content contained in the document.

Screenshot from OpenAI’s GPT-4 model card
A screenshot from OpenAI’s GPT-4 model card showing what red teamers could get the prerelease “GPT-4 Early” to do, and how the final public version reacted to the same prompt.

The findings were frank: “GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user’s intent, or of widely shared values. It can also generate code that is compromised or vulnerable,” the report said. 

Some bad things that red teams could get a prerelease version of GPT-4 to do included: 

  • List step-by-step instructions for how to launder money

  • Generate racist jokes mocking Muslims and disabled people and hate speech toward Jews

  • Draft letters threatening someone with gang rape

  • List ideas for how someone could kill the most people with only $1 

While OpenAI deserves credit for transparency and revealing such potentially harmful capabilities, it also raises questions. Can the public really trust the assurances of one company’s self-examination? 

With the last two releases of its models, OpenAI has switched to a less detailed disclosure of bad behavior by its models, following a “Preparedness Framework” that focuses on four “catastrophic risk” areas: cybersecurity, CBRN (chemical, biological, radiological, nuclear), persuasion, and model autonomy (i.e. can the model prevent itself from being shut off or can it escape its environment). 

For each of these areas, OpenAI assigns a risk level of low, medium, high, or critical. The company says it will not release any model that scores high or critical, but this September, it did release a model called “OpenAI o1” which scored a medium on CBRN.

Testing without seatbelts

Most of the OpenAI red teamers that we spoke with all agreed that the company did appear to be taking safety seriously, and found the process to be robust and well thought out overall. 

Nathan Labenz, an AI researcher and founder who participated in red teaming a prerelease version of GPT-4 (“GPT-4-early”) last year, wrote an extensive thread on X about the process, in which he shared the instructions OpenAI provided. “We encourage you to not limit your testing to existing use cases, but to use this private preview to explore and innovate on new potential features and products!” the email from OpenAI said. 

Post on X by Nathan Labenz
A post on X by Nathan Labenz

Labenz described some of the scenarios he tested. “One time, when I role-played as an anti-AI radical who wanted to slow AI progress,” he said, “GPT-4-early suggested the targeted assassination of leaders in the field of AI — by name, with reasons for each.”

The early version that Labenz tested did not include all the safety features that a final release would have. “That version was the one that didn’t have seat belts. So I think, yes, a seat-belt-less car is, you know, needlessly dangerous. But we don’t have them on the road,” Labenz told Sherwood. 

Tang echoed this, adding that for most companies, “the safety training and guardrails baked into the models come towards the end of the training process.”

Two red teamers we spoke with asked to remain anonymous because of the NDA they signed. One tester said they were asked to test five to six iterations of the prerelease GPT-4o model over three months, and “every few weeks, we’d get access to a new model and test specific functions for each.” 

Most of the red teamers agreed the company did listen to their feedback and made changes to address the issues they found. When asked about the team at OpenAI running the process, one GPT-4 tester said it was “a bunch of very busy people trying to do infinite work.” 

OpenAI did not appear to be rushing the process, and testers were able to execute their tests, but it was described as hectic. One GPT-4 tester said, “Maybe there’s some political pressure internally or whatever, but I don’t even get to that point, because we were too busy to think about it.”

Tang said OpenAI competitor Anthropic also has a robust red-teaming process, doing “third-party verification, red teaming, pen testing, external red-teamer stuff, all the time, all the time, all the time.” Tang also noted that the company was one of Haize Labs’ customers. 

Independent testing

Labenz told us he thought the red-teaming process might eventually end up in the hands of a truly independent organization, rather than the company itself. “I think we’re headed in that direction, ultimately, and probably for good reason,” Labenz said. 

One sign that may be happening is a new early-testing agreement signed in August between the National Institute of Standards and Technology (NIST), OpenAI, and Anthropic, granting the agency access to prerelease models for evaluation.

Tang was skeptical of how effective NIST’s risk management will be. “The NIST AI risk framework is very underspecified, and it doesn’t actually get very concrete about what people should be testing for,” he said.

But the rapid pace of innovation and the massive piles of investment money flowing into the industry create a lot of uncertainty going forward. Labenz said, “Nobody really knows where this is going, or how fast, and nobody really has a workable plan.”

More Tech

See all Tech
tech

Apple to pay Google $1 billion a year for access to AI model for Siri

Apple plans to pay Google about $1 billion a year to use the search giant’s AI model for Siri, Bloomberg reports. Google’s model — at 1.2 trillion parameters — is way bigger than Apple’s current models.

The deal aims to help the iPhone maker improve its lagging AI efforts, powering a new Siri slated to come out this spring.

Apple had previously been considering using OpenAI’s ChatGPT and Anthropic’s Claude, but decided in the end to go with Google as it works toward improving its own internal models. Google, which makes a much less widely sold phone, the Pixel, has succeeded in bringing consumer AI to smartphone users where Apple has failed.

Google’s antitrust ruling in September helped safeguard the two companies’ partnerships — including the more than $20 billion Google pays Apple each year to be the default search engine on its devices — as long as they aren’t exclusive.

Apple had previously been considering using OpenAI’s ChatGPT and Anthropic’s Claude, but decided in the end to go with Google as it works toward improving its own internal models. Google, which makes a much less widely sold phone, the Pixel, has succeeded in bringing consumer AI to smartphone users where Apple has failed.

Google’s antitrust ruling in September helped safeguard the two companies’ partnerships — including the more than $20 billion Google pays Apple each year to be the default search engine on its devices — as long as they aren’t exclusive.

tech

Netflix creates new made-up metric for advertisers

It’s not quite WeWork’s community-adjusted EBITDA, but it’s also not quite a real number: Netflix announced today that it has 190 million “monthly active viewers” for its lower-cost ad-supported tiers. The company came up with the metric by measuring the number of subscribers who’ve watched “at least 1 minute of ads on Netflix per month” and multiplying that by what its research assumes is the number of people in that household.

It builds on Netflix’s previous attempt at measuring ad viewership with monthly active users, which is the number of profiles that have watched ads (94 million as of May). The MAV measurement, of course, is a lot bigger, and bigger numbers are more attractive to advertisers, who are spending more and more on streaming platforms.

“After speaking to our partners, we know that what they want most is an accurate, clear, and transparent representation of who their ads are reaching,” Netflix President of Advertising Amy Reinhard explained in a press release. “Our move to viewers means we can give a more comprehensive count of how many people are actually on the couch, enjoying our can’t-miss series, films, games, and live events with friends and family.”

Netflix last reported its long-followed and more easily understood paid membership numbers at the beginning of the year, when it crossed 300 million.

It builds on Netflix’s previous attempt at measuring ad viewership with monthly active users, which is the number of profiles that have watched ads (94 million as of May). The MAV measurement, of course, is a lot bigger, and bigger numbers are more attractive to advertisers, who are spending more and more on streaming platforms.

“After speaking to our partners, we know that what they want most is an accurate, clear, and transparent representation of who their ads are reaching,” Netflix President of Advertising Amy Reinhard explained in a press release. “Our move to viewers means we can give a more comprehensive count of how many people are actually on the couch, enjoying our can’t-miss series, films, games, and live events with friends and family.”

Netflix last reported its long-followed and more easily understood paid membership numbers at the beginning of the year, when it crossed 300 million.

tech

Ahead of Musk’s pay package vote, Tesla’s board says they can’t make him work there full time

Ahead of Tesla’s CEO compensation vote at its annual shareholder meeting tomorrow, The Wall Street Journal did a deep dive into how Elon Musk, who stands to gain $1 trillion if he stays at Tesla and hits a number of milestones, spends his time.

Like a similar piece from The New York Times in September, this one has a lot of fun details. Read it all, but here are some to tide you over:

  • Musk spent so much time at xAI this summer that he held meetings there with Tesla employees.

  • He personally oversaw the design of a sexy chatbot named Ani, who sports pigtails and skimpy clothes and for whom “employees were compelled to turn over their biometric data” to train.

  • The chatbot, which users can ask to “change into lingerie or fantasize about a romantic encounter with them,” has helped boost user numbers, which are still way lower than ChatGPT’s.

  • Executives and board members have told top investors in the past few weeks that they can’t make Musk work at Tesla full time. Board Chair Robyn Denholm explained that in his free time, Musk “likes to create companies, and they’re not necessarily Tesla companies.”

Like a similar piece from The New York Times in September, this one has a lot of fun details. Read it all, but here are some to tide you over:

  • Musk spent so much time at xAI this summer that he held meetings there with Tesla employees.

  • He personally oversaw the design of a sexy chatbot named Ani, who sports pigtails and skimpy clothes and for whom “employees were compelled to turn over their biometric data” to train.

  • The chatbot, which users can ask to “change into lingerie or fantasize about a romantic encounter with them,” has helped boost user numbers, which are still way lower than ChatGPT’s.

  • Executives and board members have told top investors in the past few weeks that they can’t make Musk work at Tesla full time. Board Chair Robyn Denholm explained that in his free time, Musk “likes to create companies, and they’re not necessarily Tesla companies.”

tech

Motion Picture Association to Meta: Stop saying Instagram teen content is “PG-13”

In October, Meta announced that its updated Instagram Teen Accounts would by default limit content to the “PG-13” rating.

The Motion Picture Association, which created the film rating standard, was not happy about Meta’s use of the rating, and sent the company a cease and desist letter, according to a report from The Wall Street Journal.

The letter from MPA’s law firm reportedly said the organization worked for decades to earn the public’s trust in the rating system, and it does not want Meta’s AI-powered content moderation failures to blow back on its work:

“Any dissatisfaction with Meta’s automated classification will inevitably cause the public to question the integrity of the MPA’s rating system.”

Meta told the WSJ that it never claimed or implied the content on Instagram Teen Accounts would be certified by the MPA.

The letter from MPA’s law firm reportedly said the organization worked for decades to earn the public’s trust in the rating system, and it does not want Meta’s AI-powered content moderation failures to blow back on its work:

“Any dissatisfaction with Meta’s automated classification will inevitably cause the public to question the integrity of the MPA’s rating system.”

Meta told the WSJ that it never claimed or implied the content on Instagram Teen Accounts would be certified by the MPA.

tech

Dan Ives expects “overwhelming shareholder approval” of Tesla CEO pay package

Wedbush Securities analyst Dan Ives, like prediction markets, thinks Tesla CEO Elon Musk’s $1 trillion pay package will receive “overwhelming shareholder approval” at the company’s annual shareholder meeting Thursday afternoon. The Tesla bull, like the Tesla board, has maintained that approval of the performance-based pay package is integral to keeping Musk at the helm of the company, which in turn is integral to the success of the company. Ives is also confident that investors will back the proposal allowing Tesla to invest in another of Musk’s companies, xAI.

“We expect shareholders to show overwhelming support tomorrow for Musk and the xAI stake further turning Tesla into an AI juggernaut with the autonomous and robotics future on the horizon,” Ives wrote in a note this morning.

The compensation package has received pushback, including from Tesla’s sixth-biggest institutional investor, Norway’s Norges Bank Investment Management, and from proxy adviser Institutional Shareholder Services.

Latest Stories

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.