
The EU's Artificial Intelligence Act upends the free-for-all era of AI

It’s week one for the new regulation, which could cost big tech companies billions if they don’t comply

Jon Keegan

For American tech companies building AI, the past few years have been an unregulated free-for-all, with companies rapidly developing and releasing their powerful new technologies, turning the public marketplace into a sort of petri dish. 

In the race to stuff these expensive new tools into every existing digital product, discussions about potential harms, transparency, and intellectual property have taken a back seat.

But on August 1, the first comprehensive law regulating AI took effect in the EU, and together with other EU regulations aimed at big tech, it is already affecting American companies operating in Europe.

Citing concerns with the EU’s GDPR privacy law, Meta recently announced that it will withhold its multimodal Llama model (images, text, audio, and video) from the region “due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson said in a statement to Sherwood News.

Apple also is playing hardball with European regulators by threatening to withhold its new “Apple Intelligence” features, citing the Digital Markets Act, an EU law that seeks to enforce fair competition among the largest tech platforms.

While the new AI regulation gives companies a lot of time to comply (most provisions won’t be enforced for another two years), it lays down regulatory concepts that may make their way to the US, either at the federal level or, more likely, the state level.

Risky business

Because AI is such a broad, catch-all term, covering everything from decades-old machine learning algorithms to today’s state-of-the-art large language models used in countless applications, the EU law starts by classifying AI systems according to the risks they pose to people. Here’s how the tiers break down (with a rough sketch of the tiering after the list):

Unacceptable risk — This group is flat-out prohibited under the law. It covers systems that have already been seen causing real harm to people, such as government-run social scoring and AI that manipulates people or exploits their vulnerabilities.

High risk — This category faces the most regulation and is the meat of the law. AI tools used by insurance companies, banks, government agencies, law enforcement, and healthcare providers to make consequential decisions affecting people’s lives are likely to fall here. Companies developing or using these “high risk” AI systems will face increased transparency requirements and will have to allow for human oversight. The category includes any systems that involve:

  • Biometric identification systems

  • Critical infrastructure, such as internet backbones, the electric grid, water systems, and other energy and utility infrastructure

  • Education and vocational training, such as grading assessments, dropout prediction, admissions or placement decisions, and behavioral monitoring of students

  • Employment, such as systematically filtering resumes or job applications, and employee monitoring

  • Government and financial services, such as insurance pricing algorithms, credit scoring systems, public assistance eligibility, and emergency response systems like 911 calls or emergency healthcare

  • Law enforcement systems, such as predictive policing, polygraphs, evidence analysis (such as DNA tests used in trials), and criminal profiling

  • Immigration application processing, risk assessment or profiling

  • Democratic processes, such as judicial decision making, elections and voting

Limited risk — This applies to generative AI tools like the chatbots and image generators you might have actually used recently, such as ChatGPT or Midjourney. The obligations here are lighter:

  • Disclosure: when people use these tools, they need to be informed that they are indeed talking to an AI-powered chatbot

  • Labeling AI-generated content, so that other computers (and humans) can tell whether a work was generated by AI. This faces serious technical challenges, as automatically detecting AI-generated content has proven difficult

Minimal risk — These systems are left unregulated and include some of the AI that has been part of our lives for a while, such as spam filters, or AI used in video games. 
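
For readers who think in code, here is a minimal sketch of the tiering described above, with a hypothetical mapping of example systems to tiers. The tier names and descriptions are paraphrased from this article, not the Act’s legal text, and the mapping is illustrative only.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers described above; descriptions are paraphrased, not legal text."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "transparency and human-oversight obligations"
    LIMITED = "disclosure and labeling obligations"
    MINIMAL = "left unregulated"

# Hypothetical mapping of example systems to tiers, mirroring the list above.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "resume-screening tool": RiskTier.HIGH,
    "credit-scoring algorithm": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```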

General purpose AI

Another key concept in the regulation is the definition of “general purpose AI” systems. These are AI models trained on a wide variety of content and meant to be useful across a broad assortment of applications. The biggest models out there today, such as OpenAI’s GPT-4 or Google’s Gemini, would fall under this category.

Builders of these models are required to comply with the EU’s copyright laws, share a summary of the content used to train the model, release technical documentation about how it was trained and evaluated, and provide documentation for anyone incorporating the models into their own “downstream” AI products.
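
As a rough sketch of what that paperwork might look like in practice, here is a hypothetical structure for the disclosures described above. The field names are ours, not the Act’s; the law describes the required information but does not prescribe a schema.

```python
from dataclasses import dataclass

@dataclass
class GPAIModelDocumentation:
    """Illustrative container for the disclosures described above.
    Field names are hypothetical; the Act does not prescribe a schema."""
    model_name: str
    copyright_compliance: str      # how the provider complies with EU copyright law
    training_content_summary: str  # public summary of the content used to train the model
    technical_report: str          # how the model was trained and evaluated
    downstream_guidance: str       # documentation for builders of "downstream" AI products
    open_weights: bool = False     # open models face a reduced subset of these duties

doc = GPAIModelDocumentation(
    model_name="example-gpai-model",
    copyright_compliance="Honors rights-holder opt-outs",
    training_content_summary="Web text, licensed books, code repositories (summary only)",
    technical_report="Training setup, evaluation benchmarks, known limitations",
    downstream_guidance="Intended uses, restrictions, integration notes",
    open_weights=True,
)
print(doc)
```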

The EU law actually loosens the requirements for open-source models, which would include Meta’s new Llama 3.1, in part because the model is released along with its “weights” (the numerical parameters the model learned in training). Open models — or, to use a term preferred by FTC Chair Lina Khan, “open weights models” — like this would only need to comply with EU copyright law and publish a summary of the training content.

When asked about Meta’s plans to comply with the EU AI act, its spokesperson said, “We welcome harmonized EU rules to ensure AI is developed and deployed responsibly. From early on we have supported the Commission’s risk-based, technology-neutral approach and championed the need for a framework which facilitates and encourages open AI models and openness more broadly. It is critical we don’t lose sight of AI's huge potential to foster European innovation and enable competition, and openness is key here. We look forward to working with the AI Office and the Commission as they begin the process of implementing these new rules.” 

An OpenAI spokesperson directed us to a company blog post about the EU law, which noted:

“OpenAI is committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented. In the coming months, we will continue to prepare technical documentation and other guidance for downstream providers and deployers of our GPAI models, while advancing the security and safety of the models we provide in the European market and beyond.”

Penalties

The fines for violating the EU AI law can be steep. As with the EU’s privacy law and the Digital Markets Act, penalties are tied to a company’s annual global revenue. Companies deploying prohibited “unacceptable risk” AI systems could face fines of up to €35 million or 7% of annual global revenue, whichever is higher. That’s a little lower than the 10% cap for violating the DMA, as Apple is finding out as the first target of that act, where it faces a possible $38 billion fine, but for a company of Apple’s size an AI Act violation could still mean a nearly $27 billion hit.
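
To make that math concrete, here is a back-of-the-envelope sketch of the “whichever is higher” formula. The $383 billion revenue figure is an assumption for illustration, roughly Apple’s fiscal 2023 revenue and consistent with the $38 billion DMA figure above; euro/dollar conversion is ignored.

```python
# Back-of-the-envelope penalty math for the caps described above.
# The revenue figure is an assumption for illustration (~Apple's fiscal 2023
# revenue); euro/dollar conversion is ignored for this rough comparison.

flat_cap = 35_000_000             # €35 million flat ceiling
ai_act_rate = 0.07                # 7% of annual global revenue (prohibited-AI violations)
dma_rate = 0.10                   # 10% under the Digital Markets Act, for comparison

annual_revenue = 383_000_000_000  # assumed Apple-scale annual global revenue

ai_act_cap = max(flat_cap, ai_act_rate * annual_revenue)
dma_cap = dma_rate * annual_revenue

print(f"AI Act maximum: ~${ai_act_cap / 1e9:.0f} billion")  # ~$27 billion
print(f"DMA maximum:    ~${dma_cap / 1e9:.0f} billion")     # ~$38 billion
```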

Google and Apple did not respond to requests for comment as of press time.

Updated at 5:30 PM to include OpenAI’s response.

