Jon Keegan
3/28/25

Anthropic researchers crack open the black box of how LLMs “think”

OK, first off — LLMs don’t think. They are clever systems that use probabilistic methods to parse language by mapping tokens to underlying concepts via weighted connections. Got it?

But exactly how a model goes from the user’s prompt to “reasoning” a solution is the subject of great speculation.

Models are trained, not programmed, and there are definitely weird things happening inside these tools that humans didn’t build. As the industry struggles with AI safety and hallucinations, understanding this process is key to developing trustworthy technology.

Researchers at the AI startup Anthropic have devised a way to perform a “circuit trace” that allows them to dissect the pathways that a model chooses between concepts on its journey to devising an answer to the prompt it was given. Their paper sheds new light on this mysterious process, much like a real-time fMRI brain scan can show which parts of the human brain “light up” in response to different stimuli.

Some of the interesting findings:

  • Language appears to be independent from concepts — it’s trivial for the model to parse a query in one language and answer in another. The French “petit” and English “small” map to the same concept.

  • When “reasoning,” sometimes the model is just bullshitting you. Researchers found that sometimes the “chain of thought” that an end user sees does not actually reflect the processes at work inside the model.

  • Models have created novel ways to solve math problems. Watching exactly how the model solved simple math problems showed some weird techniques that humans have definitely never learned in school.

Anthropic made a helpful video that describes the research clearly.

Anthropic is working hard to catch up to industry leader OpenAI as it seeks to grow revenues to cover the expensive computing resources needed to offer its services. Amazon has invested $8 billion in the company, and Anthropic’s Claude model will be used to power parts of the AI-enhanced Alexa.


More Tech


BofA doesn’t expect Tesla’s ride-share service to have an impact on Uber or Lyft this year

Analysts at Bank of America Global Research compared Tesla’s new Bay Area ride-sharing service with its rivals and found that, for now, it’s not much competition for Uber and Lyft. “Tesla scale in SF is still small, and we don’t expect impact on Uber/Lyft financial performance in ’25,” they wrote.

Tesla is operating an unknown number of cars with drivers using supervised full self-driving in the Bay Area, and roughly 30 autonomous robotaxis in Austin. The company has allowed the public to download its Robotaxi app and join a waitlist, but it hasn’t said how many people have been let in off that waitlist.

While the analysts found that Tesla ride-shares are cheaper than traditional ride-share services like Uber and Lyft, the wait times are a lot longer (nine-minute wait times on average, when cars were available at all) and the process has more friction. They also said the “nature of [a] Tesla FSD ‘driver’ is slightly more aggressive than a Waymo,” the Google-owned company that’s currently operating 800 vehicles in the Bay Area.

APPLE INTELLIGENCE

Apple AI was MIA at iPhone event

A year and a half into a bungled rollout of AI into Apple’s products, Apple Intelligence was barely mentioned at the “Awe Dropping” event.

Jon Keegan
9/10/25

Oracle’s massive sales backlog is thanks to a $300 billion deal with OpenAI, WSJ reports

OpenAI has signed a massive deal to purchase $300 billion worth of cloud computing capacity from Oracle, according to a report from The Wall Street Journal.

The report notes that the five-year deal would be one of the largest cloud computing contracts ever signed, requiring 4.5 gigawatts of capacity.

The news is prompting Oracle shares to pare some of their massive gains, presumably because of concerns about counterparty and concentration risk.

Yesterday, Oracle shares skyrocketed as much as 30% in after-hours trading after the company forecast that it expects its cloud infrastructure business to see revenues climb to $144 billion by 2030.

Oracle shares were up as much as 43% on Wednesday.

It’s the second example in under a week of how much OpenAI’s cash burn and fundraising efforts are playing a starring role in the AI boom: the Financial Times reported that OpenAI is also the major new Broadcom customer that has placed $10 billion in orders.


Large companies have started to drop AI from their businesses

Census data shows drop in large companies using AI

AI appears to be everywhere, but that doesn’t mean big companies have fully embraced the use of the technology in their day-to-day business.


Report: Microsoft adds Anthropic alongside OpenAI in Office 365, citing better performance

In a move that could test its fraught $13 billion partnership, Microsoft is moving away from relying solely on OpenAI to power its AI features in Office 365 and will now also include Anthropic’s Claude Sonnet 4 model, according to a report from The Information.

The move is a tectonic shift that boosts Anthropic’s standing, heightens risks for OpenAI, and has huge ramifications for the balance of power in the fast-moving AI field.

Per the report, Microsoft executives found that Anthropic’s AI outperformed OpenAI’s on tasks involving spreadsheets and generating PowerPoint slide decks, both crucial parts of Microsoft’s Office 365 productivity suite.

Microsoft will have to pay the competition to provide the services — Amazon Web Services currently hosts Anthropic’s models, while Microsoft’s Azure cloud service does not, The Information reported.

OpenAI is also reportedly working on its own productivity suite of apps.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.