No Way, San Jose

Just because Silicon Valley is pumped to make this junk doesn’t mean people will buy it

Big Tech is fixated on making computers for your face and portable AI companions. Turns out there may be bad ideas in brainstorming.

For all its innovation, it can sometimes feel like all of Big Tech coalesces around a single idea. Lately, Silicon Valley’s best idea for the next big thing has been the same thing: an AI-powered device that isn’t a phone.

Last week, OpenAI announced that it’s buying io, former Apple design lead Jony Ive’s AI hardware startup. The plan so far is to make a “companion” device that sits on your desk, in your pocket, or maybe around your neck, and works in addition to your phone or computer.

Most everyone else has settled on AI-powered smart glasses as the device du jour.

Apple itself plans to release its AI-enhanced glasses at the end of next year. Like its competitors’ devices, Apple’s glasses would have cameras, microphones, and speakers, allowing them to assess the outside world and let users communicate with the company’s voice assistant.

Last week, Google announced Android XR, a framework meant to bring AI to face computers, including those made by Samsung, Warby Parker, and Gentle Monster.

Amazon has been working on AI glasses for internal use that would provide turn-by-turn navigation within buildings to help it decrease delivery times.

Meta, which partnered with Ray-Ban owner EssilorLuxottica, is the furthest along, having already sold more than 2 million pairs of its smart glasses since they debuted in October 2023. The company says it expects to produce 10 million a year starting next year and is expected to offer a more deluxe, more expensive version of the glasses by the end of 2025.

The idea is all roughly the same: create a new form factor through which you can interact with AI. It’s meant to give users access to their phone without having to pick it up and to overlay useful information on the outside world. But it all raises some important questions.

Why is everyone having the same idea at the same time?

It’s easy to see why Big Tech companies might want this. Anything that enables them to sell you devices or services, serve you ads, collect data, and generally get closer to your everyday life is a big win for them.

And the tech is more ready for prime time than it’s ever been.

While the general idea of AI glasses has been floating around since before Google’s first foray with Glass more than a decade ago, with the rise of generative AI and other advances, we’re much closer now to having the technology to make it actually work. AI is better able to correctly identify what you’re looking at, chips are more powerful, and batteries are smaller.

Society, too, is more ready.

“With Google Glass, when they first came out, everybody was creeped out that all of a sudden there’s potentially somebody with a camera on you all the time,” Counterpoint Technology senior analyst Gerrit Schneemann told Sherwood News. “But now with social media, the mainstream user is more aware of the constant nature of being potentially on camera, so there’s less of a friction there.”

It’s also possible that a separate AI device is a genuinely good idea. In other words, a type of carcinization is going on where the form factor is so useful that a number of unrelated things are evolving to look like it (and it’s not just tech companies wanting you to use their proverbial crab).

While much of what these smart devices are offering is already possible on your phone, glasses or other hands-free devices open up a world of possibilities.

“If you look at insert company here — it could be Google, it could be Meta, it could be Microsoft or Apple or Jony Ive, etc. — there is a recognition of the potential conveniences that are unique to this device that you can’t get anywhere else,” Ramon Llamas, a research director at IDC who specializes in wearables, told Sherwood.

The companies themselves are quick to advertise use cases for these devices.

As Shahram Izadi, vice president of AR/XR at Google, spun it at Google’s developer conference last week, “Unlike Clark Kent, you can get superpowers when you put your glasses on.”

Those powers were on display in a demo where Product Manager Nishtha Bhatia sent and received texts without her hands and instead used those hands to double high-five basketball player Giannis Antetokounmpo. She asked her glasses to look up a band and then play songs from that band. The glasses recalled the logo from a coffee cup she’d seen earlier, made a calendar invite with someone at that coffee shop, and provided step-by-step directions to the shop. Perhaps most impressive: with only a small glitch, Bhatia and Izadi were able to see real-time translations of what each person was saying in Hindi and Farsi, respectively.

Remember, of course, that demos at tech conferences are not real life.

A lot of the draw for these devices is their potential. In other words, they will presumably get more powerful down the line, and the possibilities for how people might eventually use them are more interesting than what we’ve come up with to date.

And that promise is so tantalizing that Big Tech doesn’t want to be left out. Everyone is keenly aware that it’s been almost two decades since the iPhone launched, and they don’t want to be BlackBerry.

“The great thing about starting now and everybody else starting now is the gold standard hasn’t been determined yet,” Llamas said. “Right now there’s a huge land grab. If you are one of those companies to help guide and shape that market as it gets started and as it grows, you dictate the terms.”

That means these companies might be willing to tolerate lots of losses — looking at you, Meta — to jockey for position.

Do people really want this?

In a word, no. It’s not like consumers are out in the streets clamoring to put a computer on their face.

Rather, these devices seem a bit like a solution in search of a problem.

In one Google Gemini Live demo, an employee walks around her neighborhood misidentifying objects like a garbage truck as a convertible, only to have the AI assistant correct her. (The bit was reminiscent of HBO’s “Silicon Valley,” when Jian-Yang’s app disappointingly could only tell you whether something was or wasn’t a hotdog.) One can certainly think of single or niche use cases — especially for people with visual impairments — but it’s much more difficult to imagine wide-ranging, mass-market, everyday reasons to shell out hundreds or thousands to wear a computer on your face.

As Llamas put it, “They’re still trying to figure out what spaghetti sticks to the wall.”

It doesn’t yet make a meal.

“If the only ‘benefit’ is to have the content of your phone in eyesight at all times, or with the voice prompt,” Schneemann said, “I don’t see anything where people are going to say, ‘This is going to be so revolutionary that I need to have it.’”

Big Tech companies, it seems, are hoping that if they build it people will come — and make it useful. They’re providing a nurturing playground and hoping the big ideas come later. The hope is that much like with the iPhone’s subsequent App Store, developers will come up with the killer use case and build the next Uber or Instagram for AI glasses. But notably, the iPhone was pretty useful before the apps. People already need a phone and a computer, and it was good enough to be both.

Then there’s the big issue of whether the tech will actually work outside the narrow confines of the demos, which have been scripted and vetted. Look no further than notable AI device flops like Rabbit’s R1 and Humane’s Ai Pin to know that if a device fails to perform in the wild, no one will want it. Perhaps Ive’s ChatGPT-powered device will break the losing streak, but we won’t know until it comes out in late 2026.

Apple’s Vision Pro is also instructive. In its first year on sale, the immersive headset sold fewer than 500,000 units. While considered a technical marvel, the device has so far failed to generate any killer apps or use cases — at least not at its steep $3,500 price.

Perhaps a relatively cheaper price tag will help smart glasses sell, but they also need people who actually want to use them and other companies that see ways to build businesses around them.

For now, these devices will likely be relegated to early adopters.

Even Meta’s relatively successful Ray-Bans, which start around $300, are nowhere near mass market. While 2 million sold sounds like a lot, it’s nothing in comparison with the 232 million iPhones or even 34 million Apple Watches sold last year. It’s hard to believe that a deluxe version costing more than 3x as much would somehow push substantially more units. Also, remember the Metaverse? Just because Meta, née Facebook, wants something to be a big deal, doesn’t mean it will actually pan out.

Ultimately, whether such devices become commercial successes will depend on whether they meet a number of thresholds: they will have to be fast enough, light enough, cheap enough, accurate enough, long-lasting enough, and useful enough for people to decide it’s worth toting around a whole new expensive device. We don’t know yet exactly how much is enough, but we’re willing to bet we’re still at least a few years away.

More Tech

Jon Keegan

DeepSeek releases new V4 series models highlighting efficiency and long context

Chinese AI lab DeepSeek has released a major new version of its eponymous open-source AI models that are nipping at the heels of leading frontier models in some areas.

The most significant of the new models, DeepSeek-V4 Pro and DeepSeek-V4 Flash, both have a 1 million-token context window — the amount of information the model can actively work with in a single session — which is a crucial feature for complex, long-running coding tasks.
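For a rough sense of what a 1 million-token context window means in practice, here’s a back-of-the-envelope sketch. It relies on the common rule of thumb of roughly 4 characters per token, which is an approximation, not a property of DeepSeek’s actual tokenizer:

```python
# Back-of-the-envelope scale of a 1 million-token context window.
# Assumes the common ~4 characters-per-token heuristic; real
# tokenizers (including DeepSeek's) vary, especially on code.
CONTEXT_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4    # rough average for English text and code
CHARS_PER_LINE = 40    # typical source-code line length

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
approx_lines = approx_chars // CHARS_PER_LINE
print(f"~{approx_chars:,} characters, roughly {approx_lines:,} lines of code")
```

By that rough math, the window can hold an entire midsize codebase in view at once, which is why the feature matters so much for long-running coding tasks.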

DeepSeek rebuilt how the models process information under the hood, making them substantially more efficient — and that efficiency is what makes the large context window actually usable.

Also, the new models’ coding skills have closed the gap with the major frontier models from Anthropic, OpenAI, and Google.

The authors of the model acknowledge some of V4’s shortcomings, such as its lower scores on reasoning benchmarks, saying that V4 “trails state-of-the-art frontier models by approximately 3 to 6 months.”

As open-weight models, the V4 releases can be run on a user’s own hardware, making them among the top-performing open-source models out there. V4’s large context window and token efficiency are especially significant among open-source models.

But like with earlier DeepSeek models, don’t ask it about Tiananmen Square.

$28.5T
Rani Molla

SpaceX thinks its total addressable market (TAM) is a whopping $28.5 trillion for its businesses, according to an S-1 filing for its upcoming IPO reviewed by Reuters. And most of that market isn’t rockets. The company says roughly 90% could come from AI — largely selling artificial intelligence tools to businesses.

“We believe that our enterprise strategy, which is focused on serving the digital needs of the world’s largest industries with AI solutions, positions us competitively to pursue this rapidly growing opportunity,” SpaceX said in the filing. “We believe we have identified the largest actionable total addressable market in human history.”

TAM, of course, assumes capturing every possible customer. But even a small slice of a $28.5 trillion market would be enormous.
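To make “a small slice” concrete, here’s a quick back-of-the-envelope calculation. The capture rates below are illustrative assumptions, not figures from the S-1:

```python
# Back-of-the-envelope: small shares of SpaceX's claimed $28.5T
# total addressable market, in absolute dollars. The capture
# rates below are illustrative assumptions, not from the filing.
TAM = 28.5e12  # claimed total addressable market, in dollars

for share in (0.001, 0.01, 0.05):  # 0.1%, 1%, 5% capture
    revenue = TAM * share
    print(f"{share:.1%} of TAM = ${revenue / 1e9:,.0f} billion")
```

Even a 1% capture works out to $285 billion a year, more than the annual revenue of all but a handful of companies on Earth.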

Rani Molla

Tesla Cybercab production has begun

On Tesla’s earnings call earlier this week, CEO Elon Musk said production of the company’s steering-wheel-less Cybercab had begun. Since then, Musk and Tesla have posted videos showing the gold two-seater rolling off the line at its Texas Gigafactory and onto the road.

The Cybercab — meant both for consumers and Tesla’s Robotaxi network — is widely seen as central to the company’s future. “The future of the company is fundamentally based on large-scale autonomous cars and large scale and large volume, vast numbers of autonomous humanoid robots,” Musk said last year.

Whether these cars actually make it to consumers is another question. For now, regulations generally require steering wheels, and Tesla still has to prove the vehicles can reliably drive themselves.

On the earnings call, Musk said production would be “very slow” but would ramp up and go “kind of exponential towards the end of the year and certainly next year.”

Rani Molla

Meta signs deal to use Amazon Graviton chips

Meta said it will deploy “tens of millions” of Amazon Web Services Graviton CPU cores to power so-called “agentic” AI systems — tools that can reason, plan, and act on their own. The move makes Meta one of the largest customers of Amazon’s in-house chips.

The deal also underscores a broader shift in AI infrastructure, as companies move beyond Nvidia GPUs and use different chips for different tasks.

Meta, which is working on its own custom inference chips, also has chip deals with Advanced Micro Devices and Nvidia.

Rani Molla

Oracle rises after Wedbush’s Dan Ives calls the stock a buy with 25% upside

Oracle extended its premarket gains Friday after Wedbush Securities’ Dan Ives initiated coverage with an “outperform” rating and a $225 price target — about 25% upside to its pre-initiation level — calling the enterprise software and cloud infrastructure company a “foundational infrastructure provider for the AI revolution.”

Ives argues investors are misreading Oracle’s heavy capital spending and negative free cash flow as risky, even though that spending is backed by a massive $553 billion backlog of contracted demand. He says the company’s “secret sauce” is a two-part strategy: building high-performance cloud infrastructure for AI workloads while connecting those models directly to companies’ own data.

“We believe Oracle is in the early innings of a significant repositioning as it executes on this generational opportunity,” Ives wrote.

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.