Robot controlling a computer (CSA Images/Getty Images)

Anthropic’s new Claude AI can control your computer, and sometimes it just does whatever it wants to

The company is defending its choice to release the tool to the public before fully understanding how it could be misused.

Today generative-AI company Anthropic released an upgraded version of its Claude 3.5 Sonnet model, alongside a new model, Claude 3.5 Haiku.

The surprising new feature of Sonnet is the ability to control your computer — taking and reading screenshots, moving your mouse, clicking on buttons in web pages, and typing text. The company is rolling the feature out as a "public beta" and admits in its announcement post that it is experimental and "at times cumbersome and error-prone."

In a blog post discussing the reasons for developing the feature and what safeguards the company is putting in place, Anthropic said:

“A vast amount of modern work happens via computers. Enabling AIs to interact directly with computer software in the same way people do will unlock a huge range of applications that simply aren’t possible for the current generation of AI assistants.”

Last week Anthropic’s CEO and cofounder Dario Amodei published a 14,000-word optimistic manifesto on how powerful AI might solve many of the world’s problems by rapidly accelerating scientific discovery, eliminating most diseases, and enabling world peace.

The ability for computers to control themselves is hardly new, but the way Sonnet implements it is novel. A common example of automated computer control today is a programmer writing code that drives a web browser to scrape content. Sonnet requires no code: the user opens the relevant app windows or web pages and writes instructions for what the AI agent should do, and the agent analyzes the screen, figures out which elements to interact with, and carries out those instructions.
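Under the hood, this works as a tool-use loop over Anthropic's API. Below is a minimal sketch of a single turn using the Python SDK's computer-use beta; it is illustrative rather than Anthropic's own reference code, and the display size and the instruction text are assumptions. A real client would also have to execute each action the model requests on the actual machine and send the result back.

```python
# Minimal sketch of one turn of the computer-use loop (illustrative values).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # the beta "computer use" tool
        "name": "computer",
        "display_width_px": 1280,      # assumed screen size
        "display_height_px": 800,
    }],
    messages=[{
        "role": "user",
        "content": "Open the browser and look up tomorrow's weather.",
    }],
    betas=["computer-use-2024-10-22"],
)

# The model replies with tool_use blocks describing actions (screenshot,
# mouse_move, left_click, type, etc.) for the host machine to carry out.
for block in response.content:
    if block.type == "tool_use":
        print(block.input)  # e.g. {"action": "screenshot"}
```

From there the agent loops: it takes a screenshot, reasons about what it sees, issues the next click or keystroke, and repeats until the instruction is satisfied.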

If turning an experimental AI agent loose on an internet-connected computer sounds dangerous, Anthropic kind of agrees with you. The company said, "For safety reasons we did not allow the model to access the internet during training," but the beta version does let the agent access the internet.

Anthropic recently updated its "Responsible Scaling Policy," which lays out specific risk thresholds that determine how its tools are tested and released. Under that framework, Anthropic gave the upgraded Sonnet a self-assigned grade of "AI Safety Level 2," which it describes as showing "early signs of dangerous capabilities" but being safe enough to release to the public.

The company is defending its choice to release such a tool to the public before fully understanding how it could be misused, saying it would rather find out what kinds of bad things might happen at this stage than when the model has more dangerous capabilities. "We can begin grappling with any safety issues before the stakes are too high, rather than adding computer use capabilities for the first time into a model with much more serious risks," the company wrote.

The potential for misuse of consumer-focused AI tools like Claude is not merely hypothetical. OpenAI recently released a list of 20 incidents in which state-connected bad actors had used ChatGPT to plan cyberattacks, probe vulnerable infrastructure, and design influence campaigns. And with the US presidential election just two weeks away, Anthropic is aware of the potential for abuse.

"Given the upcoming US elections, we're on high alert for attempted misuses that could be perceived as undermining public trust in electoral processes," the company wrote. In the GitHub repository with demo code, the company cautions users that Claude's computer-use feature "poses unique risks that are distinct from standard API features or chat interfaces. These risks are heightened when using computer use to interact with the internet." Anthropic also warned, "In some circumstances, Claude will follow commands found in content even if it conflicts with the user's instructions."

To protect against any election-related meddling via the use of Sonnet’s new capabilities, Anthropic said they have “put in place measures to monitor when Claude is asked to engage in election-related activity, as well as systems for nudging Claude away from activities like generating and posting content on social media, registering web domains, or interacting with government websites.”

Anthropic said it will not use any screenshots captured while the tool is in use for future model training. But the new technology still appears to surprise its own creators with "amusing" behavior. Anthropic said that at one point in testing, Claude stopped the screen recording, losing all the footage. In a post on X, Anthropic shared footage of Claude's unexpected behavior, writing, "Later, Claude took a break from our coding demo and began to peruse photos of Yellowstone National Park."

More Tech

Apple to pay Google $1 billion a year for access to AI model for Siri

Apple plans to pay Google about $1 billion a year to use the search giant’s AI model for Siri, Bloomberg reports. Google’s model — at 1.2 trillion parameters — is way bigger than Apple’s current models.

The deal aims to help the iPhone maker improve its lagging AI efforts, powering a new Siri slated to come out this spring.

Apple had previously considered using OpenAI's ChatGPT and Anthropic's Claude, but ultimately decided to go with Google while it works on improving its own internal models. Google, whose Pixel phones sell in far smaller numbers than the iPhone, has succeeded in bringing consumer AI to smartphone users where Apple has failed.

The September ruling in Google's antitrust case helped safeguard the two companies' partnerships — including the more than $20 billion Google pays Apple each year to be the default search engine on its devices — as long as they aren't exclusive.

Netflix creates new made-up metric for advertisers

It's not quite WeWork's community-adjusted EBITDA, but it's also not quite a real number: Netflix announced today that it has 190 million "monthly active viewers" for its lower-cost advertising tiers. The company came up with MAVs by measuring the number of subscribers who've watched "at least 1 minute of ads on Netflix per month" and multiplying that by what its research assumes is the number of people in each of those households.

The metric builds on Netflix's previous attempt at measuring ad viewership, monthly active users, which is the number of profiles that have watched ads (94 million as of May). The MAV measurement, of course, is a lot bigger, and bigger numbers are more attractive to advertisers, who are spending more and more on streaming platforms.
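Strip away the branding and the arithmetic is a single multiplication. The sketch below is purely illustrative: it uses the 94 million ad-watching profiles cited above as the base and a made-up household multiplier of two, since Netflix hasn't disclosed the figure its research actually assumes.

```python
# Illustrative-only sketch of the "monthly active viewers" (MAV) arithmetic:
# accounts that watched ads, scaled up by an assumed number of people per household.
ad_watching_profiles = 94_000_000    # profiles that watched >= 1 minute of ads (May MAU figure)
assumed_people_per_household = 2.0   # hypothetical multiplier; Netflix hasn't published the real one

monthly_active_viewers = ad_watching_profiles * assumed_people_per_household
print(f"{monthly_active_viewers:,.0f}")  # 188,000,000 with these made-up inputs
```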

“After speaking to our partners, we know that what they want most is an accurate, clear, and transparent representation of who their ads are reaching,” Netflix President of Advertising Amy Reinhard explained in a press release. “Our move to viewers means we can give a more comprehensive count of how many people are actually on the couch, enjoying our can’t-miss series, films, games and live events with friends and family.”

Netflix last reported its long-followed and more easily understood paid membership numbers at the beginning of the year, when it crossed 300 million.

Ahead of Musk’s pay package vote, Tesla’s board says they can’t make him work there full time

Ahead of Tesla’s CEO compensation vote at its annual shareholder meeting tomorrow, The Wall Street Journal did a deep dive into how Elon Musk, who stands to gain $1 trillion if he stays at Tesla and hits a number of milestones, spends his time.

Like a similar piece from The New York Times in September, this one has a lot of fun details. Read it all, but here are some to tide you over:

  • Musk spent so much time at xAI this summer that he held meetings there with Tesla employees.

  • He personally oversaw the design of a sexy chatbot named Ani, who sports pigtails and skimpy clothes, and for whose training "employees were compelled to turn over their biometric data."

  • The chatbot, which users can ask to “change into lingerie or fantasize about a romantic encounter with them,” has helped boost user numbers, which are still way lower than ChatGPT’s.

  • Executives and board members have told top investors in the past few weeks that they can’t make Musk work at Tesla full time. Board Chair Robyn Denholm explained that in his free time, Musk “likes to create companies, and they’re not necessarily Tesla companies.”

Motion Picture Association to Meta: Stop saying Instagram teen content is “PG-13”

In October, Meta announced that its updated Instagram Teen Accounts would by default limit content to the “PG-13” rating.

The Motion Picture Association, which created the film rating standard, was not happy about Meta’s use of the rating, and sent the company a cease and desist letter, according to a report from The Wall Street Journal.

The letter from MPA's law firm reportedly said the organization has worked for decades to earn the public's trust in its rating system and does not want Meta's AI-powered content-moderation failures to blow back on that work:

“Any dissatisfaction with Meta’s automated classification will inevitably cause the public to question the integrity of the MPA’s rating system.”

Meta told the WSJ that it never claimed or implied the content on Instagram Teen Accounts would be certified by the MPA.

Dan Ives expects “overwhelming shareholder approval” of Tesla CEO pay package

Wedbush Securities analyst Dan Ives, like prediction markets, thinks Tesla CEO Elon Musk's $1 trillion pay package will receive "overwhelming shareholder approval" at the company's annual shareholder meeting Thursday afternoon. The Tesla bull, like the Tesla board, has maintained that approval of the performance-based pay package is integral to keeping Musk at the helm, which in turn is integral to the company's success. Ives is also confident that investors will back the proposal allowing Tesla to invest in another of Musk's companies, xAI.

“We expect shareholders to show overwhelming support tomorrow for Musk and the xAI stake further turning Tesla into an AI juggernaut with the autonomous and robotics future on the horizon,” Ives wrote in a note this morning.

The compensation package has received pushback, including from Tesla’s sixth-biggest institutional investor, Norway’s Norges Bank Investment Management, and from proxy adviser Institutional Shareholder Services.
