Chinese AI lab DeepSeek has released a major new version of its eponymous open-source AI models that are nipping at the heels of leading frontier models in some areas.
The most significant update: DeepSeek-V4 Pro and DeepSeek-V4 Flash both have a 1 million-token context window — the amount of information the model can actively work with in a single session — a crucial feature for complex, long-running coding tasks.
DeepSeek rebuilt how the models process information under the hood, making them substantially more efficient — and that efficiency is what makes the large context window actually usable.
Also, the new models’ coding skills have closed the gap with the major frontier models from Anthropic, OpenAI, and Google.
The authors of the model acknowledge some of V4’s shortcomings, such as its lower scores on reasoning benchmarks, saying that V4 “trails state-of-the-art frontier models by approximately 3 to 6 months.”
As open-weight models, the V4 models can be run on a user’s own hardware, placing them among the top-performing open-source models available. Their large context window and token efficiency are especially notable in the open-source field.
But as with earlier DeepSeek models, don’t ask it about Tiananmen Square.
SpaceX thinks its total addressable market (TAM) is a whopping $28.5 trillion for its businesses, according to an S-1 filing for its upcoming IPO reviewed by Reuters. And most of that market isn’t rockets. The company says roughly 90% could come from AI — largely selling artificial intelligence tools to businesses.
“We believe that our enterprise strategy, which is focused on serving the digital needs of the world’s largest industries with AI solutions, positions us competitively to pursue this rapidly growing opportunity,” SpaceX said in the filing. “We believe we have identified the largest actionable total addressable market in human history.”
TAM, of course, assumes capturing every possible customer. But even a small slice of a $28.5 trillion market would be enormous.
On Tesla’s earnings call earlier this week, CEO Elon Musk said production of the company’s steering-wheel-less Cybercab had begun. Since then, Musk and Tesla have posted videos showing the gold two-seater rolling off the line at its Texas Gigafactory and onto the road.
Cybercab has started production pic.twitter.com/MAeswanf96
— Elon Musk (@elonmusk) April 24, 2026
The Cybercab — meant both for consumers and Tesla’s Robotaxi network — is widely seen as central to the company’s future. “The future of the company is fundamentally based on large-scale autonomous cars and large scale and large volume, vast numbers of autonomous humanoid robots,” Musk said last year.
Whether these cars actually make it to consumers is another question. For now, regulations generally require steering wheels, and Tesla still has to prove the vehicles can reliably drive themselves.
In formation pic.twitter.com/7qA0SluL8J
— Tesla Robotaxi (@robotaxi) April 24, 2026
On the earnings call, Musk said production would be “very slow” but would ramp up and go “kind of exponential towards the end of the year and certainly next year.”
Meta said it will deploy “tens of millions” of Amazon Web Services Graviton CPU cores to power so-called “agentic” AI systems — tools that can reason, plan, and act on their own. The move makes Meta one of the largest customers of Amazon’s in-house chips.
The deal also underscores a broader shift in AI infrastructure, as companies move beyond Nvidia GPUs and use different chips for different tasks.
Meta, which is working on its own custom inference chips, also has chip deals with Advanced Micro Devices and Nvidia.
Oracle extended its premarket gains Friday after Wedbush Securities’ Dan Ives initiated coverage with an “outperform” rating and a $225 price target — about 25% upside to its pre-initiation level — calling the enterprise software and cloud infrastructure company a “foundational infrastructure provider for the AI revolution.”
Ives argues investors are misreading Oracle’s heavy capital spending and negative free cash flow as risky, despite being backed by a massive $553 billion backlog of contracted demand. He says the company’s “secret sauce” is a two-part strategy: building high-performance cloud infrastructure for AI workloads while connecting those models directly to companies’ own data.
“We believe Oracle is in the early innings of a significant repositioning as it executes on this generational opportunity,” Ives wrote.
Right on the heels of Anthropic’s Claude Opus 4.7, OpenAI has also released the next incremental improvement to its flagship frontier model.
OpenAI says that ChatGPT 5.5 performs better on complex coding and data analysis tasks, and more carefully follows instructions, even when the instructions are vague.
Importantly, this gain in capability does not mean developers and companies have to shell out for more tokens (as is the case with Claude Opus 4.7) — the model uses fewer tokens than ChatGPT 5.4.
OpenAI says the new model has strengthened safeguards to ensure that the model’s strong cybersecurity capabilities aren’t used for malicious attacks.
On Wednesday, Google CEO Sundar Pichai said in a blog post that AI is now writing 75% of new code at the company. This is up from 50% last fall. Pichai said all code is “approved by engineers.”
Google announced new TPU 8 chips today at its annual Cloud Next event. Pichai wrote:
“We’re now shifting to truly agentic workflows. Our engineers are orchestrating fully autonomous digital task forces, firing off agents and accomplishing incredible things.”
Tesla signed a deal that lets more than 50,000 public agencies — including police departments and school districts — buy its vehicles without the usual slow bidding process, making it much easier to compete in a market long dominated by Ford and General Motors. The public sector currently represents less than 1% of Tesla’s sales.
The move doesn’t guarantee orders, but it removes a major barrier at a time when Tesla is looking for new demand to bolster its main source of revenue. Tesla’s Q1 deliveries fell short of analyst expectations, and annual sales have declined for two years in a row. The public sector also represents a large pool of buyers beyond the reach of Elon Musk’s other companies.
Tesla reports earnings after the bell today.
The raft of announcements from Google’s Cloud Next ’26 event sent shares up in early trading.
The New York Times took a close look at how Elon Musk is reshaping SpaceX’s priorities ahead of its highly anticipated, potentially record-breaking IPO — and what that could mean for the company and its investors.
As the NYT’s Ryan Mac noted in the article, “Shifting aims before an I.P.O. would be unthinkable for most corporate leaders, who tend to focus on their core businesses and try to project steadiness to potential investors.”
But Musk, who is also the ever-unpredictable CEO of Tesla, doesn’t follow typical playbooks. Here’s a quick look at how SpaceX’s goals have changed:
SpaceX said today that it’s “working closely together” with fast-growing coding startup Cursor “to create the world’s best coding and knowledge work AI.” The post also said SpaceX would have the right to acquire Cursor later this year or make the startup “pay $10 billion for our work together.” The New York Times, citing people familiar with the matter, previously reported that the companies had agreed to an acquisition.
The news comes as SpaceX prepares for a blockbuster IPO and doubles down on AI, with a growing — if still entirely aspirational — focus on space-based data infrastructure and computing.
Last month, when SpaceX hired two senior leaders from Cursor, CEO Elon Musk noted that xAI, which SpaceX acquired earlier this year, “was not built right first time around, so is being rebuilt from the foundations up.”
ChatGPT Images 2.0 marks a big leap forward in image generation as OpenAI seeks to distinguish its features from Anthropic’s Claude.
Anthropic’s recent momentum, powered by the success of its popular Claude Code tool, is turning up the heat among its AI competitors — not only for its AI startup peer OpenAI, but also with established Big Tech giants like Google.
The Information reports that within Google DeepMind, a “strike team” has been assembled to make a serious push to improve Gemini’s coding capabilities. According to the report, leaders within Google, including cofounder Sergey Brin, are sounding the alarm after determining that Anthropic’s Claude has superior coding skills. The new team’s goal is to create an AI system that can improve itself.
“To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers,” Brin wrote in a recent memo to DeepMind staff.
Tesla’s federal tax bill last year was once again $0, Reuters reports. While past losses and green energy credits helped shrink the bill, Reuters found that Tesla also leaned on a classic corporate maneuver: offshore profit-shifting. By routing intellectual property rights through paper-only subsidiaries in the Netherlands and Singapore, Tesla effectively parked $18 billion in profits overseas between 2023 and early 2025. The entirely legal setup saved Tesla an estimated $400 million in US taxes. Not bad for a company whose CEO is not a fan of “shady” tax loopholes.