Tech
Jon Keegan

FTC Chair: Open AI models can “liberate startups”

FTC Chair Lina Khan spoke at a Y Combinator event yesterday, supporting open AI models as a way to help level the playing field for “little tech” in the fast-moving industry.

Khan noted that at moments of technological disruption such as the rise of AI, “these are also the moments when incumbents can try to tighten their grasp, even if it means abusing their power because they have the most to lose.”

“Open weights models can reduce costs for developers so that they can focus their capital on products and services rather than expensive model training, and they can free up venture capitalists to pursue promising new applications of models rather than starting at square one with model development.”

-Lina Khan, FTC Chair

Khan made a point of clarifying that the FTC supports “open weights models,” rather than AI models simply labeled “open source” that don’t make the model weights available. Model weights are the numerical parameters a model learns during training. When weights are released, others have more freedom to fine-tune or build “distilled” AI models from the larger model, which in Khan’s view can “drive innovation, promote competition and consumer choice and lower costs and barriers to entry for startups like the ones that incubate here.”

This week’s release of Llama 3.1 from Meta is an example of a model whose weights were made openly available.



Meta projected 10% of 2024 revenue came from scams and banned goods, Reuters reports

Meta has been making billions of dollars per year from scam ads and sales of banned goods, according to internal Meta documents seen by Reuters.

The new report quantifies the scale of fraud taking place on Meta’s platforms and how much the company has profited from it.

Per the report, Meta internal projections from late last year said that 10% of the company’s total 2024 revenue would come from scammy ads and sales of banned goods — which works out to $16 billion.

Discussions within Meta acknowledged the steep fines likely to be levied against the company for not stopping the fraudulent behavior on its platforms, and the company prioritized enforcement in regions where the penalties would be steepest, the reporting found. The cost of lost revenue from clamping down on the scams was weighed against the cost of fines from regulators.

The documents reportedly show that Meta did aim to significantly reduce the fraudulent activity, but cuts to its moderation team meant the vast majority of user-reported violations were ignored or rejected.

Meta spokesperson Andy Stone told Reuters the documents were a “selective view” of internal enforcement:

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either.”


$350B

Google wants to invest even more money into Anthropic, with the search giant in talks for a new funding round that could value the AI startup at $350 billion, Business Insider reports. That’s about double its valuation from two months ago, but still shy of competitor OpenAI’s $500 billion valuation.

Citing sources familiar with the matter, Business Insider said the new deal “could also take the form of a strategic investment where Google provides additional cloud computing services to Anthropic, a convertible note, or a priced funding round early next year.”

In October, Google, which has a 14% stake in Anthropic, announced that it had inked a deal worth “tens of billions” for Anthropic to access Google’s AI compute to train and serve its Claude model.

