The best AI image generators have improved massively in quality, reaching the point that their output can be hard to distinguish from photography at times. But can they get any better? That might depend on whether they can avoid AI cannibalisation.
Some artists have begun fighting back against AI by using Nightshade, a tool that’s designed to “poison” AI image generators by providing corrupted training data. But it seems that the generators could end up poisoning themselves. AI images are now so widespread online (see our pick of the best and worst AI advertising) that they’re likely to be hoovered up too if attempts are made to train future models with images scraped from the net. And that could throw a spanner in the works.
A paper by computer scientists Matyas Bohacek and Hany Farid with the catchy title ‘Nepotistically Trained Generative-AI Models Collapse’ shows that training AI image generators on AI images quite quickly leads to a deterioration in the quality of output. Farid likened the phenomenon to inbreeding. “If a species inbreeds with their own offspring and doesn’t diversify their gene pool, it can lead to a collapse of the species,” he said. And, as reported by Nature, a more recent study found a similar effect in text generation, with the use of synthetic data leading to increasingly nonsensical results.
This means that if the developers of AI image generators want new training data for future models, they may need to rethink where they get it from, or find a way to identify and remove AI images, which could increase the cost of development. It also suggests that adding content credentials that enable the public to identify AI-generated images could be in developers' own interest, helping them avoid including those images in future training.
It may be that AI image generators already have all the training material they need for quality and variety, but if they are to be retrained to keep up with new trends and inventions, it seems they will need the same quality of human-created art and photography that they were trained on the first time around.