Tech
A screenshot from arcprize.org.
(Screenshot: https://arcprize.org/arc)

OpenAI’s ARC de Triumph

How a puzzle designed to resist memorization is reshaping AI intelligence tests. See if you can score better than an AI model.

2/28/25 3:51PM

OpenAI’s Sam Altman thinks that the upcoming GPT-5 AI model will be smarter than him. Emphasis ours:

“I don’t think I’m going to be smarter than GPT-5. And I don’t feel sad about it because I think it just means that we’ll be able to use it to do incredible things.” 

But what does it actually mean for an AI to be “smart”? It turns out this is pretty difficult to nail down, as the AI world can’t even decide what the definition of “artificial general intelligence” (AGI) is. While imperfect, the industry has embraced the use of “benchmarks” — tests designed to measure an AI model’s knowledge and reasoning ability.

Want to test how well an AI model can write code? There’s a benchmark for that: SWE-bench. High school math? AIME 2024. General knowledge across multiple subjects? MMLU.

AI companies are quick to boast about how well their new models score on different benchmarks, using them as proof of progress. But recently, a new high score on one of the most challenging benchmarks caught the industry’s attention. 

Unlike traditional benchmarks filled with multiple-choice questions, ARC-AGI is unique: it challenges AI with purely visual puzzles designed to test complex reasoning skills. For five years, no AI model could score higher than 5% on the test. 

That changed when OpenAI announced on December 20, 2024, that its just-released “o3” model had solved the ARC-AGI test. This marked the first time any AI had passed the test — a huge leap over the other state-of-the-art models. 

François Chollet, an AI researcher and cofounder of new AI startup Ndea, created the ARC-AGI challenge in 2019. When asked about the significance of OpenAI’s breakthrough score on his test, Chollet called it “a major achievement.”

In an email to Sherwood News, Chollet noted that OpenAI’s previous GPT models all scored near zero on the test. “It demonstrates the fact that o3 is not limited to memorized skills and memorized knowledge, unlike the GPT series, or really unlike any prior major AI system. It is actually capable of adapting to novelty, at least in the very simple context of ARC-AGI tasks,” Chollet said.

Many of the previous testing tools failed to demonstrate what Chollet considers actual intelligence: “the ability to adapt on the fly to new situations and new problems. Rather they demonstrated memorized skill,” Chollet said. 

Input > Output

So what does this test look like? It’s a series of tasks composed of visual puzzles. Each puzzle is made up of colored tiles on a grid, with the computer reading each color as a different number. 

Each task shows the test-taker a series of problem-and-solution pairs — the input and the output. The user has to look at the sample pairs and deduce the rule that transforms each input into its output. Then the user is asked to solve a puzzle that shows only the input, applying the same transformation demonstrated in the examples. 
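For readers who want a more concrete picture, here is a minimal sketch in Python of how an ARC-style task can be represented and graded. The grids and the “mirror” rule below are invented for illustration — they are not drawn from the actual ARC-AGI dataset — but the shape of the data (training input/output pairs plus a test input, with colors encoded as small integers) reflects how these tasks are typically described.

```python
# A minimal, hypothetical sketch of an ARC-style task.
# The grids and the "mirror" rule are invented for illustration only.
# One toy task: every training pair demonstrates the same hidden rule,
# here "mirror the grid left-to-right."
task = {
    "train": [
        {"input": [[1, 0, 0],
                   [1, 0, 0]],
         "output": [[0, 0, 1],
                    [0, 0, 1]]},
        {"input": [[2, 3, 0],
                   [0, 0, 0]],
         "output": [[0, 3, 2],
                    [0, 0, 0]]},
    ],
    "test": [
        {"input": [[4, 0],
                   [0, 5]],
         "output": [[0, 4],
                    [5, 0]]},   # hidden from the solver, used only for scoring
    ],
}

def mirror_left_right(grid):
    """The candidate rule a solver might induce from the training pairs."""
    return [list(reversed(row)) for row in grid]

# Check the candidate rule against the training demonstrations...
assert all(mirror_left_right(p["input"]) == p["output"] for p in task["train"])

# ...then apply it to the test input. Grading is unforgiving:
# the predicted grid has to match the expected output cell for cell.
prediction = mirror_left_right(task["test"][0]["input"])
print("solved!" if prediction == task["test"][0]["output"] else "failed")
```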

A screenshot from arcprize.org. (Source: https://arcprize.org/arc)

Chollet says the test was designed not to require any special training, only innate “core knowledge” like object persistence, basic geometry, numbers and counting, and the idea that objects can be animate or inanimate.

“Most of the puzzles are elementary, something a young child could figure out,” Chollet explained. “There’s just one thing: it’s designed to resist memorization. Each task is unique. You must figure out each task on the fly.”


OpenAI shared the results of its testing with Chollet, who has verified them. The company has not yet released the full results for the broader community to assess, and the numbers we can see require some unpacking. 

According to Chollet, the test was run in several configurations. One used a “high efficiency, low cost” computing process. This cheaper, low-compute mode scored an impressive 76%, beating all previous records but falling short of the 85% passing score. Solving each puzzle took 1.3 minutes on average and cost about $20 in computing power. 

But when OpenAI cranked it up and used the “low efficiency, high cost” process, it scored a winning 88% on the test. According to Greg Kamradt, president of the nonprofit ARC Prize Foundation — which develops and funds benchmarks like ARC-AGI — human performance on the test is about 85%. To achieve such a high score, OpenAI’s model had to work much harder, and the cost rose by orders of magnitude.

On average, the most computing-intensive test took 13.8 minutes of “thinking” for each puzzle. That’s a lot of time for some expensive GPUs, like Nvidia’s popular H100 chip, and all that thinking isn’t cheap.

The cost to solve each puzzle was about $3,400, according to estimates from Chollet. 

In January 2025, shortly after the release of DeepSeek’s R1 “reasoning” model, the foundation announced that the new model scored about as well as OpenAI’s “o1-preview” model, hitting 20% on the test while costing only about $0.05 per puzzle on average.

Yesterday, OpenAI released GPT-4.5, the last “non-reasoning” model the company plans to release, and these results may help explain why: the new model scored only 10% (at $0.29 per task). 

As the industry grapples with a possible plateau of the AI “scaling law” that fueled the first wave of the generative-AI boom, a consensus is emerging that multistep reasoning like that found in o3 and DeepSeek R1 is going to be an important part of future performance gains. But as these test results show, longer, more intensive AI computing also drastically increases the costs — both fiscal and environmental. 

Big Tech is planning on huge demand for this type of energy-thirsty computing, and intends to spend $315 billion this year alone on AI data centers and computing infrastructure. 

But… is this AGI?

In the 2019 research paper in which Chollet proposed the ARC dataset, he speculated about the capabilities of an advanced AI system that could one day solve the test. 

“We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence,” Chollet wrote, emphasis ours. 

While it’s not clear that any AI has reached this level of intelligence yet, it is clear that new benchmarks will be needed to test the models emerging from the fast-moving AI development pipeline. The ARC Prize Foundation is currently funding development of ARC-AGI-2, which will be released in March, as well as ARC-AGI-3, which will look more like simple animated 8-bit video games. 

The ARC-AGI-2 benchmark was designed specifically to be difficult for these new reasoning models, but still easy for humans to solve. Today, humans appear to have the advantage, but the rapid pace of progress suggests that won’t last forever.

OpenAI did not return a request for comment. 
