tech
Jon Keegan

AI companies are worried about the wrong thing

When AI companies test their models for the absolute worst things they could be used for, like helping someone build a nuclear or biological weapon, they’re ignoring the real-world harms people are being subjected to right now, says a group of researchers in a new paper from the AI Now Institute, an AI-watchdog research group.

OpenAI, Anthropic, and other makers of “foundational” AI models all cite the same short list of potential risks, usually topped by uses related to chemical, biological, radiological, and nuclear (CBRN) applications.

The publicly available generative-AI tools of today can be considered “dual use”: in addition to helping people write cover letters, generate funny photos, or complete homework, they could also enable bad actors to develop CBRN weapons. That threat is certainly terrifying and worth protecting against, but the paper’s authors argue that fixating on it ignores the harms from AI tools already in wide use today for ISTAR: intelligence, surveillance, target acquisition, and reconnaissance.

“The overwhelming focus on these hypothetical and narrow themes has occluded a much-needed conversation regarding present uses of AI for military systems, specifically ISTAR: intelligence, surveillance, target acquisition, and reconnaissance. These are the uses most grounded in actual deployments of AI that pose life-or-death stakes for civilians, where misuses and failures pose geopolitical consequences and military escalations.”

The researchers also point to the personally identifiable information in the massive datasets that commercial AI-model builders train on, calling it a national-security risk that should preclude these models’ use in military applications.

More Tech

OpenAI is shipping everything. Anthropic is perfecting one thing.

The two AI titans are in a race to grow revenues, but they have very different strategies for releasing products. And one approach appears to be winning out.

73%

Here’s another sign Anthropic’s enterprise tools are killing it: The AI firm now captures 73% of all spending among companies buying AI tools for the first time, Axios reports, citing data from Ramp, a fintech company that provides corporate cards and expense management software. That’s up from 50% in January, when it was tied with OpenAI.

As we’ve noted, Big Tech is pivoting from experimentation to revenue — and enterprise is where that shift is playing out.

tech

Microsoft considers suing Amazon and OpenAI over $50 billion deal

Microsoft may be about to take its biggest AI partner to court, the Financial Times reports.

Microsoft, a longtime backer of OpenAI, is weighing legal action over OpenAI’s $50 billion deal with Amazon tied to its new Frontier AI product, arguing it could violate a key clause in their exclusive cloud agreement that requires OpenAI’s models to run through Azure. Amazon and OpenAI say they’ve found a workaround. Microsoft executives disagree.

“We know our contract,” a source told the FT. “We will sue them if they breach it. If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them.”

OpenAI, which is eyeing an IPO this year and under pressure to generate more revenue, is trying to loosen Microsoft’s grip as it scales, while Microsoft increasingly sees OpenAI as both a partner and competitor.

tech

Morgan Stanley says robotaxis could help Tesla sell more cars

Morgan Stanley analysts think Tesla’s robotaxi push could boost more than just a new business line — it could help sell more cars and software, too.

After visiting Giga Texas, the analysts said they’re more optimistic about Tesla’s progress toward an unsupervised robotaxi rollout, citing improvements in tricky pickup and drop-off scenarios, where Tesla has less data from consumer usage. For now, the vast majority of its robotaxi vehicles still have human supervisors in the front seat, but the analysts say the service is already helping Tesla.

“Incremental unsupervised robotaxi miles driven improve the underlying autonomy model, which accelerates the path to personal unsupervised FSD [Full Self-Driving]. This, in turn supports higher FSD attach rates, improves auto demand, and cash flow generation.”

In other words, the more robotaxis drive, the better Tesla’s self-driving gets, and that could make its Full Self-Driving software more appealing and its cars easier to sell, in addition to improving the robotaxi service itself. Note that Tesla’s vehicle deliveries, which account for the lion’s share of the company’s revenue, have dropped two years in a row.

Morgan Stanley also sees a cost advantage. It estimates Tesla’s robotaxis could cost about $0.81 per mile to run today — cheaper than traditional ride-hailing and rival autonomous services — with costs falling further as purpose-built vehicles like the Cybercab scale.

Morgan Stanley maintained its equal-weight rating and $415 price target, about 4% above where the stock is currently trading.
