Don’t get AI high on its own supply
A new study published in Nature provides fresh evidence of a serious problem that AI researchers have been warning about: “model collapse.”
According to this study, when an AI model is trained on AI-generated content, it “becomes poisoned with its own projection of reality” and, after a few cycles of such training, starts to produce gibberish.
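To get an intuition for why this happens, consider a toy simulation (a sketch for illustration, not the study’s actual experiment): repeatedly fit a simple statistical model to samples drawn from the previous generation of that same model. With each cycle, estimation error compounds, and the rare “tail” values in the original data are the first thing to disappear.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: "human" data drawn from a broad distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100_000)

for generation in range(1, 101):
    # "Train" a toy model on the current data: the model here is just a
    # Gaussian parameterized by the estimated mean and standard deviation.
    mu, sigma = data.mean(), data.std()

    # The next generation is trained only on a finite sample of synthetic
    # output from that model, so estimation error compounds every cycle.
    data = rng.normal(loc=mu, scale=sigma, size=50)

    if generation % 20 == 0:
        print(f"generation {generation:3d}: std = {sigma:.3f}")
```

In this toy setting, the spread of the data tends to shrink toward zero as the cycles pile up, because rare, extreme values are exactly what a finite synthetic sample fails to capture. Real language models are vastly more complex, but this is roughly the dynamic the study describes: the tails of the distribution go first, and eventually the output degrades into nonsense.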
AI companies are feverishly searching for troves of human-generated text and images (breaking rules, and possibly laws, along the way), but the well is running dry: they have already scraped and ingested a significant portion of our output as a species. This has set off a race to lock up content deals with major publishers to slake their thirst for more material.
Some AI companies are now considering using “synthetic,” AI-generated text to help train the next generation of models, which could lead to exactly the kind of nonsense the study warns about.