Like it or loathe it, there’s no ignoring AI. Six months ago, we wrote about How AI is changing graphic design, and explored the ways new artificial intelligence technologies were transforming the industry. And things have moved on considerably since then.
New launches by Google, Microsoft, OpenAI and others have embedded AI more tightly into everyday computing. Now, whether you’re typing a search query into Google or browsing files in File Explorer on your Windows PC, AI is there, waving at you. Adobe, too, is bringing more and more AI tools into its software. And up and down the country, agencies increasingly tell us it’s now fully embedded as part of their day-to-day workflows.
In this article, we speak to a hand-picked selection of graphic designers, working at well-known agencies, to find out what they’re actually using AI for right now, and gather their views on where all this is heading.
AI in graphic design baked in from the start
Typically, AI is used most at agencies around the start of the process. Because while generative AI art might not be perfect, and the legal issues around it have certainly not gone away, it does offer a quick and easy way to get going with ideas.
“At the moment, AI has the most impact for us during the ideation and concepting phases,” says Iain Acton, motion design director at Mother Design.
“Several AI-powered tools have become staples in our toolkit, including MidJourney, ChatGPT, Topaz Video, Runway Gen-2 and Adobe Firefly inside Adobe Photoshop. And they’re particularly useful in creative sprints where time is limited, helping us visualise ideas early on and generate unique images that stand out from common inspiration boards on platforms like Pinterest and Are.na. That saves us hours of time that would have previously been spent searching for the perfect reference.”
For example, on one project a client brought the studio images that represented a series of immersive worlds. Mother Design then used image-to-video AI generators to bring these worlds to life as videos.
“These videos were used as a pre-visualisation and proof of concept before creating these worlds using 3D software, where there would be much greater creative control,” explains Iain. “Additionally, we’ve used text-to-voice generators to create guide VOs for brand videos at the start of production. We also regularly use AI tools to enhance, extend or upscale images and videos, ensuring high-quality outputs.”
The team have also used ChatGPT to prototype design tools in Processing and p5.js, allowing them to create proofs of concept that clients can buy into. For example, on a recent project they created a texture generator, which went on to become a complete asset generator built in collaboration with an external partner.
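To give a sense of what those prototypes can look like, here’s a minimal sketch of a noise-based texture generator written in TypeScript against the p5.js library. It’s purely illustrative and not Mother Design’s actual tool; the parameters and structure are our own assumptions about the kind of throwaway code ChatGPT might scaffold.

```typescript
// Hypothetical example: a tiny p5.js texture generator (instance mode),
// roughly the kind of proof-of-concept an agency might prompt ChatGPT for.
import p5 from "p5";

const sketch = (p: p5) => {
  const scale = 0.01; // noise frequency: lower values give smoother textures

  p.setup = () => {
    p.createCanvas(512, 512);
    p.noLoop(); // render the texture once rather than animating
  };

  p.draw = () => {
    p.loadPixels();
    for (let y = 0; y < p.height; y++) {
      for (let x = 0; x < p.width; x++) {
        // Perlin noise returns a value in [0, 1); map it to a grey level
        const grey = p.noise(x * scale, y * scale) * 255;
        p.set(x, y, p.color(grey));
      }
    }
    p.updatePixels();
  };
};

new p5(sketch);
```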
Helen Fuchs, executive director of design at ustwo, tells a similar story. “We’re using AI to accelerate our processes and workflow, freeing up more time to make the work exceptional,” she explains. “For instance, in the early stages we’ll use it to support workshop setup and timings, to bounce around early thinking, and to create moodboards and treatments to inform content creation.”
Speeding up work in this way helps them make quick strides when pitching, she adds. “For instance, in a recent project, we used AI to generate gameplay concepts in storyboards, to help communicate ideas across teams and with clients. That enabled us to get everyone on the same page quickly.
“Currently, we’re seeing the best results when combining different AI tools for areas of the design process like tone of voice, art direction, and synthesising research or user trends,” she adds. “Some of these tools include ChatGPT-4 into Grammarly for tone of voice; ChatGPT-4 for journey development and user trends; DALL-E 3, Firefly and MidJourney for art direction; and Pi for emotive product copy.”
Getting experimental
For many agencies, AI isn’t just useful for client work. “We use it for more general exploration and experimentation,” says Grant Hunter, chief creative officer at Iris. “The focus here is very much on playing with the tools to see where they might take us. This isn’t for work that will enter the public domain for a client; rather these are creative experiments to explore the possibilities.”
Denmark-based studio Swift Creatives are taking a similar approach. Recently, as part of a side project away from client work, they came up with a concept that combines a smart voice speaker with an AI-generated holographic avatar and AI assistant, housed in a dedicated device.
In simple terms then, this is taking the idea of a smart speaker and turning it into a true companion: for example, providing homework help to children, sharing music tastes with teenagers, or offering company to elderly people.
“As designers, our role primarily involved facilitating the AI assets and output,” explains studio co-founder Carsten Eriksen. “And this was a fascinating example of how AI is not only a tool for ideation but can also become a creative collaborator in its own right.”
Improving collaboration
Interestingly, Helen adds that AI is also helping to cross-pollinate ideas across disciplines at ustwo. “Building digital products is never a solo endeavour,” she points out. “You need designers and engineers and product and strategy: so anything that helps us all exchange ideas more fluidly is a great thing.”
Traditionally, creating visuals and mock-ups was the designer’s domain, while the feedback and communication from other stakeholders like developers, product designers and clients was predominantly verbal. “But that means things can easily get lost in translation,” says Helen. “AI tools, in contrast, have enabled our teams to be able to trial and share ideas quickly across disciplines and speak the same language.”
So what does that look like in practice? “Developers are now dropping visuals they’ve generated using AI into Slack channels to share ideas, which is so exciting,” Helen enthuses. “It’s not that everyone can now do each other’s jobs, or that everyone is a graphic designer – but there is a lower barrier to bringing thinking to life. This helps collaboration and gets us to stronger ideas and output, faster.”
She adds that aside from artwork, AI is also proving helpful when it comes to copy. “ChatGPT has helped open up writing for graphic designers who aren’t so comfortable communicating through writing,” she explains. “It levels the playing field a bit and brings everyone in. Ultimately, since more voices are able to be expressed and included, using AI tools can help us make stronger products that are more inclusive and accessible.”
AI for research and analysis
A further use for AI in graphic design that we’re hearing a lot about is research and analysis. Simon Collister, director at UNLIMITED’s Human Understanding Lab, provides an example.
“Using our in-house platform, called LUCA, we can identify a target audience for a campaign and in minutes run an analysis using a sample of the audience’s social content,” he explains. “This analysis deploys AI to generate a psychographic profile – essentially an AI-powered, personality-based persona of the audience, which in turn can inform creative development.”
And the beauty of AI is that the team can ‘stitch’ different algorithms together – creating incremental value. “So, for example, we can also take the attributes and qualities of an audience persona and connect them to another generative AI engine, which compares the persona with a behavioural science ‘cheat sheet’.
“This automated process is able to rapidly create a basic creative brief, effectively revealing in minutes that an audience with a particular psychographic ‘fingerprint’ is best influenced or engaged by dialling up or down particular creative elements.”
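For readers curious what ‘stitching’ generative steps together might look like in practice, here’s a rough sketch in TypeScript. LUCA is UNLIMITED’s proprietary platform and its internals aren’t public, so this simply chains two calls through the OpenAI Node SDK as a stand-in; the prompts, model choice and function names are invented for illustration.

```typescript
// Illustrative only: chaining one model call into another, in the spirit of
// the audience-profile-to-creative-brief pipeline described above.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function draftBrief(socialPosts: string[]): Promise<string> {
  // Step 1: summarise a sample of audience social content into a
  // psychographic-style profile.
  const profile = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "Summarise this audience's personality traits and motivations." },
      { role: "user", content: socialPosts.join("\n") },
    ],
  });

  // Step 2: feed that profile into a second call that maps it against a
  // (hypothetical) behavioural-science cheat sheet to suggest which creative
  // elements to dial up or down.
  const brief = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You have a behavioural-science cheat sheet. Given this audience profile, draft a short creative brief noting which creative elements to dial up or down, and why." },
      { role: "user", content: profile.choices[0].message.content ?? "" },
    ],
  });

  return brief.choices[0].message.content ?? "";
}
```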
Again, it isn’t about replacing creatives but helping them get to an initial proposition or concept faster. As Simon puts it: “Reducing time to output for more valuable human input is the order of the day, not replacing the human specialist altogether.”
Dangers and downsides of AI in graphic design
But even if AI isn’t directly costing jobs in graphic design (yet), some pretty tricky questions are still arising. For instance, we all know from experience that AI art looks pretty samey and generic. So how can designers make best use of it while maintaining a distinctly human, creative touch in their work?
Iain fully acknowledges this challenge. “I see AI as a powerful tool, but its reliance on patterns and predictions from its training data doesn’t allow for serendipity to enter the creative process,” he points out. “Nor can it capture those random bits of inspiration that strike while you’re living your life: that could be something you see, hear, or experience.”
With that in mind, he says: “I’ll continue to use AI during the ideation stages, but I don’t feel confident enough to deliver completely AI-generated work to clients. The human touch remains crucial for ensuring the emotional and creative impact of the final product.”
Helen echoes this sentiment, emphasising that AI-generated media often has an “AI feel” that she believes designers will become better and better at sniffing out. “I hope it pushes forward beautiful ideas and difference,” she adds. “Originality will become an even more rarified commodity and standing out will become even harder. That’s an exciting challenge for designers and creatives. I read recently that we’ll all become editors with AI, curating our way to great craft. That made a lot of sense to me.”
Endless tweaks and legal worries
Right now, though, the supposed benefits of using AI can be a bit of a mirage if it’s not approached thoughtfully. “It can become an endless cycle of tweaking prompts and parameters, sometimes taking longer than traditional methods,” admits Iain. “Despite impressive results, the quality isn’t always there.
“For instance, with image-to-video generators, we’ve found it very difficult to control the output of the generators, as new objects can randomly appear or become deformed, making it unreliable for anything other than ideation, unless you lean into these mistakes as an aesthetic.
“Additionally, using AI assistants for research can be risky because they can often hallucinate and provide false information, which you don’t want to use as the foundation of an idea.”
And then, of course, there are the legal issues. “When using generative AI for our clients, we are very aware of the legalities and have looked at all of our contracts to make sure we are compliant,” says Grant.
“Tools such as Adobe Firefly give us some certainty,” he adds, “as they have trained the machine learning on a dataset they own and they are financially compensating creators. But for us, being transparent about the use of AI is key.”
At the same time, the agency recognises the positive aspects of AI, such as the potential to reduce a project’s carbon footprint. “On a recent live job, we used AI-generated landscapes as backdrops in a virtual production,” Grant recalls. “Going virtual has a massive upside in terms of the reduction of the production’s carbon footprint. This particular job would have involved a multi-location shoot over a number of weeks. We achieved comparable results in just a week down at Twickenham Studios in London.”
At the end of the day, Helen points out, the most important thing is to recognise that AI in graphic design is a tool: no more or less. The nature of graphic design itself has not changed.
“At the end of the day, it’s still all about a team solving problems together,” she explains. “Finding the right problem, solving it in interesting ways, launching to market. AI will help us get there faster and further de-risk the process as we go, but it’s just a tool.”
It’s not replacing designers themselves, she stresses. “It’s just moving the process along, allowing us to brainstorm more quickly and turn some tasks over to AI so that we really have the time to flex that human and creative touch.”
Everything is changing
At the same time, the types of problems designers have to solve are fast-changing as well, and that’s where AI can really come into its own.
“We are being asked a lot by clients to explore and think about the future of interfaces and experiences in an AI paradigm,” Helen notes. “Everyone is recognising that chat can feel slow and laboured… so what’s next?” AI will, she believes, open up the opportunity to create adaptable interfaces – “living systems that shape to your needs, your mood, your context”.
“Designing for these experiences is super interesting,” she adds. “How do brands show up? How do experiences feel familiar and easy to navigate whilst adapting and changing? I hope a positive of this adaptability will be that experiences become more accessible and inclusive and ‘for all’, being able to cater to individual needs in a seamless way.”
More generally, many graphic designers are looking forward to AI tech continuing to evolve and presenting fresh and exciting opportunities to speed up workflows. “At Swift Creatives, we foresee AI playing an ever-expanding role in our design process,” says Carsten. “For instance, while AI generation of 3D models is still in its early stages, I’m eagerly anticipating the day when we can prompt our 3D creations with AI and seamlessly review them in our design process. I expect this to be a reality in the very near future.”