[Image: Robot controlling a computer (CSA Images/Getty Images)]

Anthropic’s new Claude AI can control your computer, and sometimes it just does whatever it wants to

The company is defending its choice to release the tool to the public before fully understanding how it could be misused.

10/22/24 3:28PM

Today generative-AI company Anthropic released an upgraded version of its Claude 3.5 Sonnet model, alongside a new model, Claude 3.5 Haiku.

The surprising new feature of Sonnet is the ability to control your computer — taking and reading screenshots, moving your mouse, clicking buttons on web pages, and typing text. The company is rolling this out as a “public beta” release and admits it is experimental and “at times cumbersome and error-prone,” according to the post announcing the new release.

In a blog post discussing the reasons for developing the feature and what safeguards the company is putting in place, Anthropic said:

“A vast amount of modern work happens via computers. Enabling AIs to interact directly with computer software in the same way people do will unlock a huge range of applications that simply aren’t possible for the current generation of AI assistants.”

Last week Anthropic’s CEO and cofounder Dario Amodei published a 14,000-word optimistic manifesto on how powerful AI might solve many of the world’s problems by rapidly accelerating scientific discovery, eliminating most diseases, and enabling world peace.

Automated computer control is hardly new, but the way Sonnet implements it is novel. A common example today is a programmer writing code that drives a web browser to scrape content. Sonnet, by contrast, requires no code: the user opens app windows or web pages and writes instructions for what the AI agent should do, and the agent analyzes the screen, figures out which elements to interact with, and carries out those instructions.
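Under the hood, agents like this typically run an observe-decide-act loop: capture a screenshot, ask the model for the next action, execute it, and repeat until the model says it is finished. Here is a minimal sketch of that idea in Python; all of the names and action formats below are illustrative assumptions, not Anthropic's actual API:

```python
# Conceptual sketch of a screenshot-driven agent loop. Every class and
# method name here is hypothetical, invented for illustration only.

def run_agent(instruction, model, desktop, max_steps=20):
    """Feed screenshots to the model and execute the actions it proposes."""
    history = []
    for _ in range(max_steps):
        screenshot = desktop.capture()                           # observe
        action = model.decide(instruction, screenshot, history)  # decide
        if action["type"] == "done":                             # model finished
            break
        desktop.execute(action)                                  # act (click, type, ...)
        history.append(action)
    return history

class FakeDesktop:
    """Stand-in desktop: records executed actions instead of moving a real mouse."""
    def __init__(self):
        self.executed = []
    def capture(self):
        return b"screenshot-bytes"
    def execute(self, action):
        self.executed.append(action)

class ScriptedModel:
    """Stand-in model that replays a fixed plan, then reports it is done."""
    def __init__(self, plan):
        self.plan = list(plan)
    def decide(self, instruction, screenshot, history):
        return self.plan.pop(0) if self.plan else {"type": "done"}
```

The key difference from scripted scraping is that the loop's logic lives in the model's `decide` step, which interprets pixels at run time, rather than in hand-written selectors.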

If letting an experimental AI agent loose on an internet-connected computer sounds dangerous, Anthropic kind of agrees with you. The company said, “For safety reasons we did not allow the model to access the internet during training,” but the beta version allows the agent to access the internet.

Anthropic recently updated its “Responsible Scaling Policy,” which lays out specific risk thresholds and determines how its tools are tested and released. Under this framework, Anthropic gave the upgraded Sonnet a self-assigned grade of “AI Safety Level 2,” a level it describes as showing “early signs of dangerous capabilities” but being safe enough to release to the public.

The company is defending its choice to release such a tool to the public before fully understanding how it could be misused, saying it would rather discover what kinds of bad things might happen at this stage than when the model has more dangerous capabilities. “We can begin grappling with any safety issues before the stakes are too high, rather than adding computer use capabilities for the first time into a model with much more serious risks,” the company wrote.

The potential for the misuse of consumer-focused AI tools like Claude is not merely hypothetical. Recently OpenAI released a list of 20 incidents in which state-connected bad actors had used ChatGPT to plan cyberattacks, probe vulnerable infrastructure, and design influence campaigns. And with the US presidential election just two weeks away, Anthropic is aware of the potential for abuse.

“Given the upcoming US elections, we’re on high alert for attempted misuses that could be perceived as undermining public trust in electoral processes,” the company wrote. In the GitHub repository with demo code, the company cautions users that Claude’s computer-use feature “poses unique risks that are distinct from standard API features or chat interfaces. These risks are heightened when using computer use to interact with the internet.” Anthropic also warned, “In some circumstances, Claude will follow commands found in content even if it conflicts with the user’s instructions.”

To protect against any election-related meddling via the use of Sonnet’s new capabilities, Anthropic said they have “put in place measures to monitor when Claude is asked to engage in election-related activity, as well as systems for nudging Claude away from activities like generating and posting content on social media, registering web domains, or interacting with government websites.”

Anthropic said it will not use any screenshots observed while the tool is in use for future model training. But the new technology still appears to surprise its own creators. Anthropic said that at one point in testing, Claude accidentally stopped a screen recording, losing all the footage. In a post on X, Anthropic shared a clip of Claude’s unexpected behavior, writing, “Later, Claude took a break from our coding demo and began to peruse photos of Yellowstone National Park.”

More Tech

Jon Keegan
9/11/25

OpenAI and Microsoft reach agreement that moves OpenAI closer to for-profit status

In a joint statement, OpenAI and Microsoft announced a “non-binding memorandum of understanding” for their renegotiated $13 billion partnership, which was a source of recent tension between the two companies.

Settling the agreement is a requirement to clear the way for OpenAI to convert to a for-profit public benefit corporation, which it must do before a year-end deadline to secure a $20 billion investment from SoftBank.

OpenAI also announced that the controlling nonprofit arm would hold an equity stake in the PBC valued at $100 billion, which would make it “one of the most well-resourced philanthropic organizations in the world.”

The statement read:

“This recapitalization would also enable us to raise the capital required to accomplish our mission — and ensure that as OpenAI’s PBC grows, so will the nonprofit’s resources, allowing us to bring it to historic levels of community impact.”

Rani Molla
9/11/25

BofA doesn’t expect Tesla’s ride-share service to have an impact on Uber or Lyft this year

Analysts at Bank of America Global Research compared Tesla’s new Bay Area ride-sharing service with its rivals and found that, for now, it’s not much competition for Uber and Lyft. “Tesla scale in SF is still small, and we don’t expect impact on Uber/Lyft financial performance in 25,” they wrote.

Tesla is operating an unknown number of cars with drivers using supervised full self-driving in the Bay Area, and roughly 30 autonomous robotaxis in Austin. The company has allowed the public to download its Robotaxi app and join a waitlist, but it hasn’t said how many people have been let in off that waitlist.

While the analysts found that Tesla ride-shares are cheaper than traditional ride-share services like Uber and Lyft, the wait times are a lot longer (nine-minute wait times on average, when cars were available at all) and the process has more friction. They also said the “nature of [a] Tesla FSD ‘driver’ is slightly more aggressive than a Waymo,” the Google-owned company that’s currently operating 800 vehicles in the Bay Area.

APPLE INTELLIGENCE

Apple AI was MIA at iPhone event

A year and a half into a bungled rollout of AI into Apple’s products, Apple Intelligence was barely mentioned at the “Awe Dropping” event.

Jon Keegan, 9/10/25
Jon Keegan
9/10/25

Oracle’s massive sales backlog is thanks to a $300 billion deal with OpenAI, WSJ reports

OpenAI has signed a massive deal to purchase $300 billion worth of cloud computing capacity from Oracle, according to a report from The Wall Street Journal.

The report notes that the five-year deal would be one of the largest cloud computing contracts ever signed, requiring 4.5 gigawatts of capacity.

The news is prompting shares to pare some of their massive gains, presumably because of concerns about counterparty and concentration risk.

Yesterday, Oracle shares skyrocketed as much as 30% in after-hours trading after the company forecast that it expects its cloud infrastructure business to see revenues climb to $144 billion by 2030.

Oracle shares were up as much as 43% on Wednesday.

It’s the second example in under a week of how much OpenAI’s cash burn and fundraising efforts are playing a starring role in the AI boom: the Financial Times reported that OpenAI is also the major new Broadcom customer that has placed $10 billion in orders.


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.