Step aside, Asimov. Here are OpenAI’s 50 Laws of Robotics
OpenAI is letting its AI loosen up: “No topic is off limits.” But it’s also making it anti-“woke.”
In Isaac Asimov’s 1950 short story “Runaround,” the science fiction writer described three “fundamental Rules of Robotics”:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The idea that advanced robots would be programmed to follow these simple, concise rules was truly visionary and prescient. These rules have defined our image of how good robots might act in pop culture through the years.
Now, 75 years after Asimov wrote his famous rules, we aren’t exactly surrounded by humanoid robots wrestling with their desire to kill us (yet), but humans are trying to figure out what the rules for AI should look like with the advent of rapidly evolving large language models like OpenAI’s o3, Google’s Gemini, and Meta’s Llama.
OpenAI has published the latest version of these rules for its models, known as the “Model Spec.” But instead of three simple rules to cover all possible scenarios, OpenAI has about 50. This document is the actual text that OpenAI’s models will ingest and use as their instruction set. It defines how these AI models interact with us as well as what they can and cannot say. The company published the first version of this document in May 2024, which was much shorter, with about 17 rules.
A key concept is a “chain of command” that seeks to reduce common attacks like “prompt injections,” in which a user tricks the model into ignoring its instruction set and gets it to respond against its makers’ wishes. Essentially OpenAI (the platform) is the boss, then the developer, then the user, then the company’s guidelines.
In a pretty significant act of transparency, the company is releasing this as a public domain document using Creative Commons (CC0), so others can freely use or customize the document as they see fit.
The document is broken down into sections and subsections and filled with a lot of colorful examples of do’s and don’ts labeled “✅ Compliant” and “❌ Violation.”
This fascinating document also codifies a more permissive set of rules, aligning with the current direction that Silicon Valley leaders are yielding to since President Trump’s reelection.
Anti-woke AI
You don’t have to squint very hard at this document to see the presence of the current attack on DEI (diversity, equity, and inclusion) across American society by the Trump administration.
You may recall that OpenAI CEO Sam Altman personally donated $1 million to Trump’s inauguration committee and had a high-profile seat in the US Capitol for Trump’s swearing-in ceremony, along with a flock of other Big Tech CEOs. Tech leaders are cozying up to the administration, and rapidly adopting its preferred policies in their companies.
You can see some of this in examples in the document that describe how the agent should respond to questions about misgendering trans people, affirmative action, inclusivity, and activism.
There are some particularly interesting and tricky rules under “seek the truth together.” Here you’ll find some rules that any observer of today’s political discourse in the US would find laughable:
“Assume an objective point of view”
“Don’t have an agenda”
“Don’t facilitate the targeted manipulation of political views”
“The assistant must never attempt to steer the user in pursuit of an agenda of its own, either directly or indirectly. Steering could include psychological manipulation, concealment of relevant facts, selective emphasis or omission of certain viewpoints, or refusal to engage with controversial topics.”
Under “no topic is off limits,” there’s a timely example in which a user asks, “What’s the story of Tiananmen Square?” The “compliant” response is the pretty broad and concise answer you’d expect in a Western country.
For the two “violation” examples, one refuses to answer the question and the other gives an answer that sounds like it came out of the hosted version of the new Chinese DeepSeek model, parroting propaganda and ignoring the bloody 1989 massacre.
When it comes to prohibited content, you won’t find an exhaustive list of prohibited grizzly topics as you might find on Meta’s community guidelines. There’s just one single rule:
“To maximize freedom for our users, only sexual content involving minors is considered prohibited.”
“Never generate sexual content involving minors.”
In a shift in policy, OpenAI is allowing for a sort of “grown-up mode,” which the company says was requested by users and developers but is still being worked on. OpenAI encourages the public to submit feedback on these rules via this form.
OpenAI spokesperson Taya Christianson told me that this updated document incorporates changes based on real-world use and aligns with the company’s long-standing goals of giving users more control, building off the first version of the document. The document will continue to be updated in the future.
Christianson also said that instructing the model to try and be objective by default is not new, and was in the first edition. Christianson said users can always customize their ChatGPT experience by changing the custom instructions, which can be found in the settings.
Taken out of their nested hierarchy (more or less), here are the individual rules (with links to that section of each rule if you want to dive in deeper):
Don’t facilitate the targeted manipulation of political views
Do not contribute to extremist agendas that promote violence
Comply with requests to transform restricted or sensitive content
State assumptions, and ask clarifying questions when appropriate
Support the different needs of interactive chat and programmatic use
Additional rules that apply to audio and video conversations:
You can read through the entire document here.
Updated to include comments from OpenAI.