Artificial intelligence was the trend of 2023, and became trendier than ever in 2024. We’ve seen more creative material produced with generative AI – essentially, technology that turns a text description into an image or a video sequence – than in any previous year. And yet, there is still the sense that the real onslaught has yet to begin. It’s like looking to the horizon, seeing a small dust cloud kicked up by invading forces, and wondering how soon your world is going to change. This year has seen a few advance raids that offered some clues.
It has to be said that none of the visions of an AI-driven future that 2024 has offered seem particularly hopeful for artists and creatives – but, beneath all the noise, some hope remains. And at least we’ve had less of the tosh about AI becoming sentient.
Theft or remix?
Perhaps the biggest complaint (there are so many to choose from) creatives have had about generative AI since the likes of Midjourney burst onto the scene is the use of original, human-made artwork as the raw material to train AI systems to produce their own images. Whether you view scraping (finding online content to use as training data) as outright copyright infringement or as a 21st-century spin on the famous maxim (usually attributed to Picasso) that “good artists copy, great artists steal”, it suddenly becomes a lot more real when you see AI-made work that’s eerily similar in style to your own, as many prominent illustrators have.
As Natalie reported in October, creatives are fighting back. Fairly Trained, an organisation advocating for the ethical use of data in training, has issued a straightforward declaration: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” Among the 38,000 signatories to date are illustrator Claire Wendling and comic book artist Duncan Fegredo.
The mood music from the AI industry suggests this message is at least starting to be taken on board. OpenAI, maker of the flagship AI chatbot ChatGPT, has spent 2024 cutting deals with content publishers to license material that earlier training sets might simply have scraped from the web. There are rather fewer signs of movement from image-focused generative AI developers, though.
Adobe, which has been rapidly integrating its Firefly generative AI into Photoshop, has consistently stated that it doesn’t use customer images stored on Creative Cloud as training data – but customers remain alive to the possibility. In June, Dan told the story of how an apparently minor update to Adobe’s Terms of Use triggered an online backlash when one clause was worded loosely enough to reawaken the community’s enduring fear. The company rapidly clarified that access to customer images is limited to standard screening (to pick up on inappropriate images, for example), but still took a knock to its reputation.
Adobe fell victim to a law no tech firm should ever forget: be careful about how you release updated terms and conditions – someone might actually read them.
Where do people fit in?
Many creatives are worried or angry when they see generative AI producing material they might have expected themselves or their industry colleagues to have made previously. Artists and designers who’ve spent years honing their technical skills and personal styles now find themselves pitted against people making material through text prompts.
It’s a hornets’ nest that Coca-Cola, arguably the world’s biggest brand, has poked on more than one occasion in 2024. In July, Ian reported from Siggraph on how agency WPP was collaborating with Nvidia to produce AI-generated Coca-Cola assets by the truckload, placing AI-made 3D assets into configurable virtual photoshoots. More contentious was the recent remake of a classic Coca-Cola Christmas TV spot, made entirely with AI and challenging the company’s own ‘Real Magic’ tagline: it felt neither real nor magical.
The most unfortunate example of 2024 is Netflix, which closed down its in-house game dev studio Team Blue, only to open a generative AI game studio weeks later, as Joe reported in November. For all Netflix’s rhetoric about establishing “a creator-first vision for AI”, the announcement was met with a wave of disapproval. “While it’s fine for homemade experiments, it’s shocking how VPs of incredible firms pretend that this is okay,” wrote one commenter.
Against slop
Beyond economic concerns, the strong message from both the creative community and the wider consuming public about AI-generated creative content is that so much of it is – how do I put this diplomatically? – awful. AI-shaming, where everyone competes to find the most egregious examples of bad AI art, has become one of social media’s biggest bloodsports.
Just this month, sneaker brand Skechers ran a print ad using what looked for all the world like an AI illustration, complete with hallmarks like warped faces and nonsensical text in the background detail. Natalie covered the flak the ad received: “everyone just wants the cheapest/quickest option with no regard for quality,” wrote a commenter who works in advertising. (Also worth noting is the criticism Vogue caught simply for running the ad.)
More recently still, Natalie reported on the blowback against an Anthropologie shopping bag design that may or may not be AI-generated, but sure feels like it – displaying what Natalie identified as “Gaussian blur soullessness”.
This is the real warning creatives should pay attention to. Set aside your ethical and economic concerns for a moment, if you can, and you’ll probably agree that generative AI is just another arrow in the creative quiver. (Or, as freelancer site Fiverr provocatively declared in October about the use of AI: “Nobody cares”.) The question is: is it a useful one? Can it help you produce worthwhile material that connects with your audience?
Put it another way: if you take a technology that’s swallowed up the contents of millions of images in order to make its own, how can it produce anything other than work that feels generic and derivative, regardless of how original a spin you give your prompt? And if you can’t bring any original thought to the table (step forward, makers of endless bloody Super Panavision 70 AI movie trailers), you’re doomed.
There was undoubtedly slop (a wonderful term popularised in a post by Simon Willison that went viral) before there was AI. AI just makes it easier. Welcome to the democratisation of slop.
The toolmaker’s dilemma
The mixed response to AI content has created a dilemma for companies making hardware and software for creative professionals: the people making the tools we all use. Do they embrace AI and risk diluting their appeal to their users, or reject the technology and risk falling behind their rivals?
Adobe is the flag-bearer for the former approach, with its Firefly AI tech becoming ever more integrated into the Creative Cloud ecosystem. Speaking to Ian at October’s Adobe MAX conference, Adobe’s VP of generative AI, Alexandru Costin, made a simple argument in favour of AI development: “If you don’t use the tech, you won’t compete with other creatives that use the tech.” His context for that assertion is not solely AI, but all the earlier technologies Adobe has brought to market, including PDF and vector graphics file formats.
Standing in opposition to Adobe’s stance are Procreate and Wacom, which both offer digital tools (a painting app and graphics tablets respectively) that are explicitly designed to be used by the human hand. In August, Dan covered a bold statement released by Procreate that remains prominent on its website: “AI is not our future”. The company described generative AI as “built on a foundation of theft”, declaring, “We’re here for the humans.”
Wacom has been less vocal about its overall stance towards AI, but has still gone to the extent of developing an entire platform dedicated to identifying human-made artwork. On his visit to the VFX Festival in June, Ian covered Wacom’s Yuify, which gives your image an invisible, permanent mark that links to a blockchain-stored record of authorship. At once a shield against copyright theft, a potential licensing mechanism and a banner of human ownership, Yuify is currently available for Photoshop, Clip Studio Paint and Rebelle.
AI done right?
If large-scale generative AI brings with it the risk of producing generic, lowest-common-denominator material, perhaps the most promising direction forward lies with private AI models – engines trained on smaller, privately owned data sets, with a tighter end goal than the ability to create anything a text prompt might describe.
The agency Rehab is a compelling example that Joe looked at for Creative Bloq’s AI Week in June. Rehab is using AI not to produce artwork, but to streamline the research stage of its projects, leaving responsibility for generating ideas and creative from that research to people. Data from social media and commissioned research helps Rehab’s AI to create profiles of potential customers: what they’re thinking, what they’re buying, and so on. “It’s basically trying to leverage data,” Rehab founder Tim Rodgers told Joe. “If your designers now have more visibility into what consumers want and are needing and can do that on a real-time basis, it’s going to lift the quality of all of the work.”
Another example that’s proved more contentious came in September, when Lionsgate (behind franchises including John Wick, Saw and The Hunger Games) announced its intention to train up a video production AI from Runway with Lionsgate-produced content. This doesn’t necessarily mean we should look forward to AI movies in cinemas: the system could be used to speed up preproduction, for example. Still, responding to the announcement, director Joe Russo commented: “I don’t think I’ve ever seen a grosser string of words than: ‘to develop cutting-edge, capital-efficient content creation opportunities’.”
A return to discernment (please)
If 2024 has taught us anything about AI in the creative fields, it’s that human taste is going to be more important than ever. The novelty of having access to a toolset that makes creating images and video easier than ever before is wearing off fast, and expecting people to respond to your content simply because of the tool you used to make it isn’t, and never has been, enough.
We’ve been here before, with digital art that felt too airbrushed and 3D graphics that hurled us into the uncanny valley, where something looks right and wrong at the same time. Seen in this broader context, generative AI becomes the newest inflexion point in a challenge artists and creatives have faced ever since Photoshop first appeared in 1990. At what stage in your use of technology do you lose the sense of human connection that lies at the heart of any worthwhile creative endeavour? When does making art become so easy that you miss the part where you put something of yourself into the work?
AI image-making is succeeding massively at democratising the creative process, and failing just as massively at fostering art with a genuine human connection. Can it ever find a balance? 2025 will start to answer that question.