Apple and European Union (Photo by Metin Aktas/Anadolu via Getty Images)
Risky business

The EU's Artificial Intelligence Act upends the free-for-all era of AI

It’s week one for the new regulation, which could cost big tech billions if companies don’t comply

Jon Keegan

For American tech companies building AI, the past few years have been an unregulated free-for-all, with companies rapidly developing and releasing their powerful new technologies, turning the public marketplace into a sort of petri dish. 

In the race to stuff these expensive new tools into every existing digital product, discussions about potential harms, transparency, and intellectual property have taken a back seat.

But on August 1, the first comprehensive law regulating AI took effect in the EU, and together with other EU regulations aimed at big tech, it is already affecting American companies operating in Europe.

Citing concerns with the EU’s GDPR privacy law, Meta recently announced that it will withhold its multimodal Llama model (images, text, audio, and video) from the region “due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson said in a statement to Sherwood News.

Apple is also playing hardball with European regulators, threatening to withhold its new “Apple Intelligence” features and citing the Digital Markets Act, an EU law that seeks to enforce fair competition among the largest tech platforms.

While the new AI regulation gives companies plenty of time to comply (most provisions won’t be enforced for another two years), it lays down some regulatory concepts that may make their way to the US, either at the federal or, more likely, state level.

Risky business

Because AI is such a broad catch-all term, covering everything from decades-old machine learning algorithms to today’s state-of-the-art large language models, the EU law starts by classifying AI systems according to the risks they pose to people. Here’s how the categories break down:

Unacceptable risk — This group is flat-out prohibited under the law. It includes systems that have already been seen causing real harm to humans, such as government-run social scoring and AI that manipulates people’s behavior in harmful ways.

High risk — This category faces the most regulation and is the meat of the law. AI tools that make consequential decisions affecting people’s lives, such as those used by insurance companies, banks, government agencies, law enforcement, and healthcare providers, are likely to fall into this category. Companies developing or using these “high risk” AI systems must meet increased transparency requirements and allow for human oversight. The category includes any systems that involve:

  • Biometric identification systems

  • Critical infrastructure, such as internet backbones, the electric grid, water systems and other energy infrastructure

  • Education and vocational training, such as grading assessments, dropout prediction, admissions or placement decisions, and behavioral monitoring of students

  • Employment, such as systematically filtering resumes or job applications, and employee monitoring

  • Government and financial services, such as insurance pricing algorithms, credit scoring systems, public assistance eligibility and emergency response systems such as 911 calls or emergency healthcare

  • Law enforcement systems, such as predictive policing, polygraphs, evidence analysis (such as DNA tests used in trials), and criminal profiling

  • Immigration application processing, risk assessment or profiling

  • Democratic processes, such as judicial decision making, elections and voting

Limited risk — This applies to generative AI tools like the chatbots and image generators you might have actually used recently, such as ChatGPT or Midjourney. These face two main requirements:

  • Disclosure, so that people using these tools are informed that they are indeed talking to an AI-powered chatbot

  • Labeling AI-generated content, so other computers (and humans) can detect whether a work was generated by AI. This faces some serious technical challenges, as automatically detecting AI-generated content has proven difficult

Minimal risk — These systems are left unregulated and include some of the AI that has been part of our lives for a while, such as spam filters, or AI used in video games. 
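The four tiers above amount to a lookup from use case to regulatory burden. Here is a minimal sketch of that idea in Python; the category names and groupings are illustrative examples drawn from the descriptions above, not a legal classification tool:

```python
# Toy mapping of the EU AI Act's four risk tiers to example use cases.
# Groupings are illustrative, based on the article's summary, not the regulation's text.
RISK_TIERS = {
    "unacceptable": {"social_scoring", "behavioral_manipulation"},
    "high": {"biometric_id", "credit_scoring", "resume_screening", "predictive_policing"},
    "limited": {"chatbot", "image_generator"},
    "minimal": {"spam_filter", "video_game_ai"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; unlisted systems default to 'minimal'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"

print(classify("credit_scoring"))  # high
print(classify("chatbot"))         # limited
```

The point of the structure is that obligations attach to the tier, not the underlying technology: the same model could land in different tiers depending on how it is deployed.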

General purpose AI

Another key concept in the regulation is the definition of “general purpose AI” systems. These are AI models trained on a wide variety of content and meant to be useful for a broad assortment of applications. The biggest models out there today, such as OpenAI’s GPT-4 or Google’s Gemini, would fall into this category.

Builders of these models are required to comply with the EU’s copyright laws, share a summary of the content used to train the model, release technical documentation about how it was trained and evaluated, and provide documentation for anyone incorporating the models into their own “downstream” AI products.

The EU law actually lessens the restrictions for open source models, a group that would include Meta’s new Llama 3.1, especially because such releases also include the model’s “weights” (the numerical parameters the model learns during training). Open models, or “open weights models,” to use a term preferred by FTC Chair Lina Khan, would only need to comply with EU copyright laws and publish a summary of the training content.

When asked about Meta’s plans to comply with the EU AI act, its spokesperson said, “We welcome harmonized EU rules to ensure AI is developed and deployed responsibly. From early on we have supported the Commission’s risk-based, technology-neutral approach and championed the need for a framework which facilitates and encourages open AI models and openness more broadly. It is critical we don’t lose sight of AI's huge potential to foster European innovation and enable competition, and openness is key here. We look forward to working with the AI Office and the Commission as they begin the process of implementing these new rules.” 

An OpenAI spokesperson directed us to a company blog post about the EU law, which noted:

“OpenAI is committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented. In the coming months, we will continue to prepare technical documentation and other guidance for downstream providers and deployers of our GPAI models, while advancing the security and safety of the models we provide in the European market and beyond.”

Penalties

The fines for violating the EU AI law can be steep. Like the fines under the EU’s privacy law and Digital Markets Act, they are tied to a company’s annual global revenue. Companies deploying prohibited “unacceptable risk” AI systems could face fines of up to €35 million or 7% of annual global revenue, whichever is higher. That’s a little lower than the 10% cap for violating the DMA, under which Apple, the first target of that act, faces a possible $38 billion fine, but an AI Act violation could still mean a nearly $27 billion hit for a company of Apple’s size.
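The “whichever is higher” rule works out to a simple maximum. A quick sketch of the arithmetic behind the figures above, with the caveat that the law’s fixed cap is in euros while the revenue figures here are rough dollar approximations used for illustration:

```python
def unacceptable_risk_fine(annual_global_revenue: float) -> float:
    """EU AI Act penalty for prohibited systems: the higher of a fixed
    35 million cap or 7% of annual global revenue.
    (Currencies are mixed here purely for illustration.)"""
    return max(35_000_000, 0.07 * annual_global_revenue)

# Apple's annual revenue is roughly $383 billion (approximate figure for illustration)
revenue = 383e9
print(unacceptable_risk_fine(revenue) / 1e9)  # ~26.8, i.e., nearly $27 billion
print(0.10 * revenue / 1e9)                   # ~38.3, under the DMA's higher 10% cap
```

For all but the smallest companies, the percentage term dominates; the fixed €35 million floor only binds when 7% of revenue falls below it.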

Google and Apple did not respond to a request for comment as of press time.

Updated at 5:30 PM to include OpenAI’s response.
