Apple and European Union (Photo by Metin Aktas/Anadolu via Getty Images)

The EU's Artificial Intelligence Act upends the free-for-all era of AI

It’s week one for the new regulation, which could cost big tech companies billions if they don’t comply

Jon Keegan
8/2/24 1:11PM

For American tech companies building AI, the past few years have been an unregulated free-for-all, with companies rapidly developing and releasing their powerful new technologies, turning the public marketplace into a sort of petri dish. 

In the race to stuff these expensive new tools into every existing digital product, discussions about potential harms, transparency, and intellectual property have taken a back seat.

But on August 1, the first comprehensive law regulating AI took effect in the EU, and together with other EU regulations aimed at big tech, it is already affecting American companies operating in Europe.

Citing concerns with the EU’s GDPR privacy law, Meta recently announced that it will withhold its multimodal Llama model (which handles images, text, audio, and video) from the region “due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson said in a statement to Sherwood News.

Apple also is playing hardball with European regulators by threatening to withhold its new “Apple Intelligence” features, citing the Digital Markets Act, an EU law that seeks to enforce fair competition among the largest tech platforms.

While the new AI regulation gives companies a lot of time to comply (most provisions won’t be enforced until two years from now), it lays down some regulatory concepts that may make their way to the US, either at the federal level or, more likely, at the state level.

Risky business

Because AI is such a broad catch-all term, covering everything from decades-old machine learning algorithms to today’s state-of-the-art large language models, the EU law starts by classifying AI systems according to the risks they pose to people. Here’s how the categories break down:

Unacceptable risk — This group is flat-out prohibited under the law. It covers systems that have already been seen causing real harm to humans. Some of these include:

  • Social scoring systems that rank people based on their behavior or characteristics

  • AI that manipulates people’s behavior or exploits their vulnerabilities

  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases

  • Emotion recognition systems used in workplaces and schools

  • Real-time remote biometric identification in public spaces by law enforcement, outside of narrow exceptions

High risk — This is the category that faces the most regulation, and it’s the meat of the law. AI tools used by insurance companies, banks, government agencies, law enforcement, and healthcare providers to make consequential decisions affecting people’s lives are likely to fall into this category. Companies developing or using these “high risk” AI systems face increased transparency requirements and must allow for human oversight. This includes any systems that involve:

  • Biometric identification systems

  • Critical infrastructure, such as internet backbones, the electric grid, water systems and other energy infrastructure

  • Education and training, such as grading assessments, dropout prediction, admissions or placement decisions, and behavioral monitoring of students

  • Employment, such as systematically filtering resumes or job applications, and employee monitoring

  • Government and financial services, such as insurance pricing algorithms, credit scoring systems, public assistance eligibility, and emergency response systems (such as 911 calls or emergency healthcare)

  • Law enforcement systems, such as predictive policing, polygraphs, evidence analysis (such as DNA tests used in trials), and criminal profiling

  • Immigration application processing, risk assessment or profiling

  • Democratic processes, such as judicial decision making, elections and voting

Limited risk — This applies to generative AI tools like the chatbots and image generators you might have actually used recently, such as ChatGPT or Midjourney. These tools face two main requirements:

  • Disclosure. When people are using these tools, they need to be informed that they are indeed talking to an AI-powered chatbot.

  • Labeling AI-generated content, so other computers (and humans) can detect if a work was generated via AI. This faces some serious technical challenges, as it has proven difficult to detect AI-generated content automatically (a rough sketch of what machine-readable labeling can look like appears after this list).

Minimal risk — These systems are left unregulated and include some of the AI that has been part of our lives for a while, such as spam filters, or AI used in video games. 
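To make the labeling requirement concrete, here is a minimal sketch of what a machine-readable “AI-generated” marker could look like, using a plain PNG text chunk via the Pillow library. The key names and values are hypothetical and not part of the AI Act or any standard; real-world efforts such as C2PA content credentials and model-level watermarking are considerably more elaborate.

```python
# A minimal, illustrative sketch of machine-readable labeling for AI-generated
# images. The metadata keys below are hypothetical, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(in_path: str, out_path: str) -> None:
    """Copy a PNG and embed a text chunk declaring it AI-generated."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")            # hypothetical key
    meta.add_text("generator", "example-model-v1")   # hypothetical value
    img.save(out_path, pnginfo=meta)


def is_labeled_ai_generated(path: str) -> bool:
    """Check whether a PNG carries the (hypothetical) marker."""
    return Image.open(path).text.get("ai_generated") == "true"
```

The obvious weakness, and part of why the requirement is hard in practice, is that metadata like this is trivial to strip, which is why detection of unlabeled AI content remains an open problem.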

General purpose AI

Another key concept in the regulation is the definition of “general purpose AI” systems: AI models that have been trained on a wide variety of content and are meant to be useful for a broad assortment of applications. The biggest models out there today, such as OpenAI’s GPT-4 or Google’s Gemini, would fall under this category.

Builders of these models are required to comply with the EU’s copyright laws, share a summary of the content that was used to train the model, release technical documentation about how it was trained and evaluated, and provide documentation for anyone incorporating these models into their own “downstream” AI products.

The EU law actually lessens the restrictions for open-source models, a category that would include Meta’s new Llama 3.1, especially because the release also includes the “weights” (the numerical parameters the model learns during training). Open models, or “open weights models” to use a term preferred by FTC Chair Lina Khan, would only need to comply with EU copyright laws and publish a summary of the training content.
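For a sense of what “open weights” means in practice, here is a minimal sketch using the Hugging Face transformers library, assuming access to a Llama 3.1 checkpoint (the repo id below is illustrative, and Meta gates downloads behind a license agreement):

```python
# Minimal sketch: with open weights, the learned parameters can be downloaded
# and run locally, inspected, or fine-tuned for "downstream" products.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # illustrative, license-gated repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The weights are just numbers on disk; here we count them (~8 billion).
print(sum(p.numel() for p in model.parameters()))
```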

When asked about Meta’s plans to comply with the EU AI act, its spokesperson said, “We welcome harmonized EU rules to ensure AI is developed and deployed responsibly. From early on we have supported the Commission’s risk-based, technology-neutral approach and championed the need for a framework which facilitates and encourages open AI models and openness more broadly. It is critical we don’t lose sight of AI's huge potential to foster European innovation and enable competition, and openness is key here. We look forward to working with the AI Office and the Commission as they begin the process of implementing these new rules.” 

An OpenAI spokesperson directed us to a company blog post about the EU law, which noted:

“OpenAI is committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented. In the coming months, we will continue to prepare technical documentation and other guidance for downstream providers and deployers of our GPAI models, while advancing the security and safety of the models we provide in the European market and beyond.”

Penalties

The fines for violating the EU AI law can be steep. Like the penalties under the EU’s privacy law and the Digital Markets Act, they are tied to a company’s annual global revenue. Companies deploying prohibited “unacceptable risk” AI systems could face fines of up to €35 million or 7% of annual global revenue, whichever is higher. That’s a bit below the 10% penalty for violating the DMA, which Apple is discovering as the first target of that act, facing a possible $38 billion fine, but for a company of Apple’s size an AI Act violation could still mean a nearly $27 billion hit.
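For scale, here is the back-of-the-envelope math behind those figures, assuming Apple’s roughly $383 billion in annual revenue (its fiscal 2023 total) and treating the euro and dollar thresholds as roughly equivalent:

```python
# Rough penalty ceilings under the two laws, using an assumed revenue figure.
apple_revenue = 383e9  # assumed annual global revenue in USD (fiscal 2023)

# AI Act: up to EUR 35 million or 7% of global revenue, whichever is higher
ai_act_ceiling = max(35e6, 0.07 * apple_revenue)

# Digital Markets Act: up to 10% of global revenue
dma_ceiling = 0.10 * apple_revenue

print(f"AI Act ceiling: ${ai_act_ceiling / 1e9:.1f}B")  # ~$26.8B, the "nearly $27 billion" hit
print(f"DMA ceiling:    ${dma_ceiling / 1e9:.1f}B")     # ~$38.3B, the possible DMA fine
```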

Google and Apple did not respond to requests for comment as of press time.

Updated at 5:30 PM to include OpenAI’s response.
