A screenshot of OpenAI’s “Operator” agent (OpenAI)
SMOOTH OPERATOR

OpenAI’s “Operator” is here to slowly take over your computer and mess up your life

Operator made a consequential mistake 13% of the time in early testing, such as emailing the wrong person or messing up a reminder for a person to take medication.

Jon Keegan

OpenAI released a “research preview” of its AI agent that can control your web browser. Called “Operator,” it has the ability to control your mouse and keyboard and analyze things it “sees” on your computer — very, very slowly. Currently it’s only available to ChatGPT Pro users in the US.

Operator makes use of the multistep “reasoning” you can find in OpenAI’s o1, and the multimodal “vision” capabilities of GPT-4o. This reasoning process achieves better (but slower) performance by breaking tasks into steps. Lots and lots of steps.

In the video demonstrations shared on the product page, you can watch Operator break the task down into dozens of distinct actions like “clicking,” “typing,” and “scrolling.” One example showed 152 steps to take a grammar quiz, and 146 steps to determine the amount of a refund from a canceled online order.
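OpenAI hasn’t published Operator’s internals, but the demos suggest a simple observe-reason-act loop: screenshot the page, ask the model for the next action, execute it, and repeat until the task is done. Here’s a rough sketch of what such a loop might look like in Python. Every name in it (“VisionModel,” “Browser,” “run_task”) is a hypothetical stand-in, not OpenAI’s actual API.

# Hypothetical sketch of an observe-reason-act loop like the one Operator
# appears to run. VisionModel and Browser are stand-ins for OpenAI's
# unpublished internals, not real APIs.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str         # "click", "type", "scroll", or "done"
    target: str = ""  # element to click, or text to type


class VisionModel:
    """Stand-in for a multimodal model that plans one step at a time."""

    def next_action(self, goal: str, screenshot: bytes) -> Action:
        # In reality: a model call that "sees" the pixels and reasons
        # about what to do next.
        raise NotImplementedError


class Browser:
    """Stand-in for a mouse-and-keyboard controller."""

    def screenshot(self) -> bytes:
        raise NotImplementedError

    def perform(self, action: Action) -> None:
        raise NotImplementedError


def run_task(goal: str, model: VisionModel, browser: Browser,
             max_steps: int = 200) -> None:
    # Each iteration is one of the discrete "clicking"/"typing"/"scrolling"
    # steps visible in OpenAI's demo videos.
    for _ in range(max_steps):
        action = model.next_action(goal, browser.screenshot())
        if action.kind == "done":
            return
        # A human could be asked to approve the action here before it runs.
        browser.perform(action)
    raise TimeoutError("task did not finish within the step budget")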

A screenshot from a demo of OpenAI Operator (OpenAI)

This kind of freewheeling, on-demand AI web browsing is positioned as an agent that can save you the drudgery of ordering groceries, researching holidays, making restaurant reservations, or buying concert tickets.

Operator makes high-stakes mistakes

It’s one thing when ChatGPT spits out an incorrect answer, but if your chatbot is actually spending your money and triggering things in the real world, the stakes are much, much higher.

In one test of 100 sample tasks, OpenAI found that Operator made a consequential mistake 13% of the time, such as emailing the wrong person, incorrectly bulk-removing email labels, setting the wrong date for a reminder to take the user’s medication, or ordering the wrong food item. Some of the other mistakes were easily reversible “nuisances.” OpenAI noted that mitigations reduced this error rate by approximately 90%, which works out to roughly one consequential mistake per 100 tasks.

OpenAI stresses that you can grab the wheel from the AI at any time, and that you can approve any action before it is executed. But in this early evaluation version, you’ll probably spend more time babysitting the agent than you would have spent just doing the task yourself.

For now, OpenAI limits the tasks you can use Operator for, prohibiting banking and job applications.

OpenAI shared a list of example tasks that some hypothetical user might want an AI to do for them. Ten out of ten times Operator was able to research bear habitats, create a grocery list, and make a ’90s playlist on Spotify.

Medium persuasion

The system card for the model behind Operator — Computer-Using Agent (CUA) — describes the process OpenAI used to assess the risks of letting a prerelease, novel AI agent go hog wild with your computer.

Like other model releases, OpenAI tested the model using red teams with expertise in social engineering, CBRN (chemical, biological, radiological, and nuclear) threats, and cybersecurity. OpenAI rated itself a “low” risk for everything except “persuasion,” which got a “medium” risk score, a level still considered safe enough for public release.

High consequence

But there are some important restrictions on how you can use Operator. Because there is a slightly elevated risk of Operator being used to influence people, the usage policy prohibits impersonating people or organizations, concealing the role of AI in a task, and using it to spread disinformation or fake interactions, like fake reviews or fake profiles.

OpenAI prohibits people from using Operator to commit any crimes, and you are also prohibited from using it to bully, harass, defame, or discriminate against others based on protected attributes.

Under a heading titled “high consequence domains,” the policy notes that you can’t use Operator to make “high-stakes decisions” that might affect your safety or well-being, to automate stock trading, or for political campaigning or lobbying.

OpenAI’s announcement follows competitor Anthropic’s October release of a similar feature that can control your computer. There is widespread hype that “agentic AI” like Operator will be a breakthrough for how people use these tools.

OpenAI CEO Sam Altman said in an announcement video that Operator is expected to roll out to international ChatGPT Pro and ChatGPT Plus users “soon,” but noted that the European rollout “will unfortunately take a while.”

More Tech

Anthropic projections for 2028: Up to $70 billion in revenue, could be profitable by 2027

Anthropic’s Claude API business is doing so well with enterprise customers that the company is upping its revenue forecasts significantly. According to a report from The Information, the company’s robust corporate sales have caused it to revise its most optimistic forecast up to $70 billion in sales by 2028.

Anthropic estimates its API business will be double that of OpenAI’s API sales. OpenAI is currently burning through much more money per month than Anthropic, and reportedly expects to spend as much as $115 billion through 2029, while Anthropic is forecasting that it could be cash positive by 2027, per the report.

Amazon, which is developing AI shopping agents, doesn’t want Perplexity’s AI shopping agents on its site

Amazon has sent a cease and desist letter to Perplexity AI, demanding that it stop letting its AI browser agent, Comet, make online purchases for users, Bloomberg reports.

Amazon, which is developing its own AI shopping agents and is having “conversations” with builders of third-party agents, accused the AI startup of “committing computer fraud by failing to disclose when its AI agent is shopping on a user’s behalf, in violation of Amazon’s terms of service.”

Perplexity, in response, said Amazon is attempting to “eliminate user rights” in order to sell more ads.

Apple to challenge Google Chromebooks with low-cost Mac laptop, Bloomberg reports

Apple is designing a new sub-$1,000 Mac laptop aimed at the education market, Bloomberg reports.

Google’s low-cost Chromebooks currently dominate the K-12 education market, and Apple’s reentry into a market it once owned could disrupt the sector’s status quo.

According to the report, Apple plans on using the custom mobile chips it currently uses in iPhones to power the more affordable devices.

Apple’s recent earnings demonstrated that iPhone sales have been steady, and the tech giant is looking to find new areas of growth, like services. A low-cost Mac could be popular with consumers, in addition to education buyers.

Getty Images suffers partial defeat in UK lawsuit against Stability AI

Stability AI, the creator of image generation tool Stable Diffusion, largely defended itself from a copyright violation lawsuit filed by Getty Images, which alleged the company illegally trained its AI models on Getty’s image library.

Lacking strong enough evidence, Getty dropped the part of the case alleging illegal training mid-trial, according to Reuters reporting.

Responding to the decision, Getty said in a press release:

“Today’s ruling confirms that Stable Diffusion’s inclusion of Getty Images’ trademarks in AI‑generated outputs infringed those trademarks. ... The ruling delivered another key finding; that, wherever the training and development did take place, Getty Images’ copyright‑protected works were used to train Stable Diffusion.”

Stability AI still faces a separate, ongoing lawsuit from Getty in US courts.

A number of high-profile copyright cases are still working their way through the courts, as copyright holders seek to win strong protections for their works that were used to train AI models from a number of Big Tech companies.
