AI in video game development isn’t new, but generative AI is a fresh development that is already changing how studios make games. This goes beyond creating 2D concept art with systems like Stable Diffusion; it touches every aspect of game development and design.
While many game developers remain cool on the idea of using AI commercially, and the indie developers we’ve spoken to aren’t keen on AI until its ethical issues are ironed out (though many use GitHub Copilot to aid with coding), plenty of studios are experimenting with it for internal use.
For deeper context, GDC’s 2024 State of the Industry survey found that 49% of those who took part said AI tools were being used broadly by their studios, with 31% saying they were using AI directly. Even the 15% who had yet to use generative AI said they were interested; only 23% said they wouldn’t use AI tools at all.
Below I briefly share some areas where AI tools are being used now, and some of the better systems that many are interested in using. I’m avoiding generative AI art, because it’s largely discussed elsewhere on Creative Bloq – I’d recommend reading Martin Nebelong’s opinions on how AI tools can be an artist’s ally and how to use AI to transform a Photoshop sketch into a finished painting in real time.
Text-to-image generators are being trialled by game developers. Games website VGC reported how Foamstars developer Toylogic used Midjourney to create in-game music album cover art, but many developers are wary of the toxic atmosphere around using AI to create game art (and the copyright and legal issues persist).
01. Procedural generation
Procedural generation is a form of AI that has been established in game development and game art for years. It creates randomised scenery and foliage, as well as character placement and level design, often from pre-made assets. Typical examples are No Man’s Sky, Starfield and Remnant 2 (read our interview with the art team at Gunfire Games for more detail).
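The core trick behind procedural generation is that a fixed seed makes “random” placement reproducible, so a vast world never needs to be stored, only regenerated. A minimal sketch of the idea (all asset names and probabilities here are illustrative, not from any shipping game):

```python
import random

# Pre-made assets to scatter across a level grid.
ASSETS = ["rock", "tree", "ruin", "crystal"]

def generate_chunk(seed: int, size: int = 4) -> dict:
    """Deterministically place assets on a size x size grid from a world seed."""
    rng = random.Random(seed)  # seeded generator: same seed, same world
    chunk = {}
    for x in range(size):
        for y in range(size):
            if rng.random() < 0.3:  # roughly 30% of tiles hold scenery
                chunk[(x, y)] = rng.choice(ASSETS)
    return chunk

# The same seed always yields the same layout, so the world can be
# regenerated on demand instead of saved to disk.
assert generate_chunk(42) == generate_chunk(42)
```

This is why a game can offer quintillions of planets from a handful of numbers: the seed is the world.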
The difference between procedural generation and the new generative AI systems is that the latter can go further, creating the models and assets themselves and then populating game worlds with them. Last year we saw how the new AI tool CityBLD for Unreal Engine 5 created a 1:1 recreation of New York City using UE5’s Nanite technology. Is this the end of 3D modelling?
02. 3D modelling
The big advancement this year has been the rise of new text-to-3D, prompt-based AI modelling systems. The big software brands are involved, including Shutterstock, which has an AI that generates ‘game-ready’ simple 3D models, developed in partnership with Nvidia and trained on assets from Shutterstock’s own subsidiary TurboSquid.
While these 3D assets are currently simple, and it’s debatable how useful they are for game developers who need total control over style, detail and construction, more software makers are working on AI tools: AI is coming to Maya, and research has begun on integrating AI broadly into Autodesk’s new releases.
03. Audio and voice work
AI in audio and character performances is something many developers are already looking into, which is causing concern among actors. Generative AI can be used to create voices from a script, as well as to adjust audio files for variations.
Right now many game developers use audio AI early in development to plan how and where dialogue will be used. For example, developer Hexworks experimented with AI in the prototype for Lords of the Fallen, but all AI voice samples were removed and replaced by human actors.
Newer games, such as Stellaris from developer Paradox Development Studio, are using AI to adapt voice actors’ recordings into new performances; despite the new NPC performances not being traditionally recorded, the original actors are being paid royalties.
Embark Studios, developer of breakout hit The Finals, did use AI for its characters’ voices. As reported by PC Gamer, the developer used text-to-speech because it was quicker and cheaper – whether it’s better is debatable.
04. Dynamic quests and storytelling
Procedurally generated quests and missions are already a mainstay of many video games, and whole genres have been designed around this approach. Generative AI will super-power the idea and could offer game designers new ways to craft player quests.
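Today’s procedural quests typically work by filling hand-written templates with interchangeable parts; generative AI would extend this with free-form text. A toy sketch of the template approach (all quest text here is invented for illustration):

```python
import random

# Hand-written quest skeletons with slots to fill.
TEMPLATES = [
    "Recover the {item} from the {place}",
    "Escort {npc} safely to the {place}",
]
ITEMS = ["sunken idol", "signal core"]
PLACES = ["drowned cathedral", "relay station"]
NPCS = ["the cartographer", "a wounded scout"]

def make_quest(seed: int) -> str:
    """Fill a random template deterministically from a seed."""
    rng = random.Random(seed)
    # str.format ignores unused keyword arguments, so every
    # template can draw from the same pool of slot values.
    return rng.choice(TEMPLATES).format(
        item=rng.choice(ITEMS),
        place=rng.choice(PLACES),
        npc=rng.choice(NPCS),
    )
```

The limitation is obvious: the variety is bounded by the hand-written parts, which is exactly the gap a language model would fill.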
As I found when I tested Nvidia’s ACE, non-linear storytelling is set to take advantage of AI, with narrative arcs adjusting to how a player interacts with characters. This could make gaming more personalised – my experience of the ACE demo was completely different to that of others who tried it, for example. In fact, simply talking to the NPCs in Nvidia’s ACE demo Covert Protocol feels like a new kind of puzzle to be solved.
05. Generative dialogue
As I discovered when I met the NPCs of Nvidia’s ACE AI, talking directly to characters in a video game, hearing them react to my questions in unique ways, with AI-generated dialogue, is dramatically different to playing current-gen games.
In the case of Nvidia ACE, this works by searching a database of content associated with the game’s world, its characters and its missions. Fears of writers being replaced by AI are dampened by the fact that each NPC needs a deep personality profile, relationship web and world details, all of which must be crafted by humans.
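That retrieval step can be pictured with a hypothetical sketch: the player’s question is scored against hand-written lore entries, and the best match grounds the reply. The lore, names and word-overlap scoring below are all illustrative, not Nvidia ACE’s actual pipeline:

```python
# Hand-authored world knowledge an NPC can draw on (invented examples).
LORE = {
    "bar": "The Starlight Lounge is a front for smugglers.",
    "courier": "A courier vanished last night carrying a data chip.",
    "chip": "The data chip holds blackmail on the station governor.",
}

def _tokens(text: str) -> set:
    """Lowercase words with trailing punctuation stripped."""
    return {w.strip(".,?!") for w in text.lower().split()}

def retrieve(question: str) -> str:
    """Return the lore entry sharing the most words with the question."""
    q = _tokens(question)
    return max(LORE.values(), key=lambda entry: len(q & _tokens(entry)))

def npc_reply(question: str) -> str:
    # A real system would hand this context to a language model to
    # phrase in character; here we just quote it to show the grounding.
    return f"(leaning in) Word is... {retrieve(question)}"
```

The writer’s job survives in the `LORE` dictionary: the model only rephrases what humans authored.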
06. NPCs remember everything
God-sims like Utopia, Black & White and The Sims pioneered AI in games, and it’s possible generative AI, combined with blockchain technology, could usher in a new era of games in this genre.
Recently, indie developer Antler Interactive revealed its AI-based game Cloudborn, which aims to permanently record player interactions with NPCs on the blockchain – these characters will remember how they were treated and alter their behaviour accordingly, forever. No saving and restarting: the NPCs in Cloudborn will remember.
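Cloudborn’s implementation isn’t public, but the general idea is an append-only, tamper-evident log of interactions that an NPC consults. A minimal sketch of that pattern, with every name and the attitude rule invented for illustration:

```python
import hashlib
import json

class MemoryChain:
    """Append-only, hash-chained log of player interactions (a sketch,
    not Cloudborn's actual implementation)."""

    def __init__(self):
        self.blocks = []

    def record(self, event: str) -> None:
        # Each block references the previous block's hash, so history
        # can't be quietly rewritten by reloading a save.
        prev = self.blocks[-1]["hash"] if self.blocks else "genesis"
        payload = json.dumps({"event": event, "prev": prev})
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.blocks.append({"event": event, "prev": prev, "hash": digest})

    def attitude(self) -> str:
        """Disposition derived from the full, permanent history."""
        insults = sum("insulted" in b["event"] for b in self.blocks)
        return "hostile" if insults else "friendly"

npc = MemoryChain()
npc.record("player insulted the blacksmith")
# Even after an apology, the insult stays on the chain forever.
npc.record("player apologised")
```

The key property is permanence: because each block is chained to the last, the NPC’s grudge is as durable as the ledger it lives on.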
Another new game, AI People from Keen Software House puts AI NPCs at the forefront of the game’s design. Like a very realistic The Sims, players will be able to create and play out scenarios with AI NPCs that interact with each other in realistic ways, as well as the environment and the player – learning as they go.
07. AI assistants
Nvidia has revealed plans for an AI that can help gamers with strategies, tips and advice; the new Project G-Assist will use an Nvidia RTX AI chatbot to show players how to manage resources and improve in-game item crafting. Anyone who’s ever looked at a menu of gathered items in a survival game or an RPG like Elden Ring and wondered how best to use it all will now be shown.
Nvidia demoed this AI at Computex, using it in ARK: Survival Ascended from Studio Wildcard. It can even analyse multiplayer replays to assist with advice for improved performance and offers new creative approaches to players stuck in a game.
Project G-Assist could spark a new era of AI-powered game tips and advice: Microsoft is rumoured to be developing an AI chatbot for Xbox, and even the iconic GameShark cheats brand is returning as an AI platform, called AI Shark.
For game development, similar AI assistants will likely soon appear in most complex software and apps to help users speed up their work or find new workflows. Unity has already shared news of an AI helper to onboard artists getting started with the more complex Weta VFX tools that have been added to the platform.
08. Game creator platforms
For anyone new to game development, the rise of end-to-end AI game developer platforms, such as Rosebud and Layer, could be the ideal entry point. These platforms can take you from text description to final game, creating art, animation, NPCs, scripts, code – everything.
These are new platforms, so users need to agree to their data-use terms, and there could be commercial restrictions. You can generate art on a platform like Layer, or train it on your own art. Games can be created from scratch, from templates or by cloning existing games made in the apps. Even if these don’t end up producing your finished game, for newcomers this kind of AI platform could be a way into game creation and prototyping.
09. AI mocap and animation
The days of needing complex setups to record motion capture could be over. One of the standout AI mocap apps is Move One from Move AI, which can be used with an iPhone 8 or above to capture markerless footage that is then turned into data to animate 3D characters. This use of AI for animation could open up a new level of fidelity for indie developers.
Other emerging AI mocap systems include Rokoko, which lets you use a webcam or upload a video to capture your motion in 3D; Plask Motion, which is being used by studios including Square Enix and Activision; and DeepMotion, which uses AI to deliver both text-to-animation and video-to-animation.
Creative Bloq’s AI Week is held in association with DistinctAI, creators of the new plugin VisionFX 2.0, which creates stunning AI art based on your own imagery – a great new addition to your creative process. Find out more on the DistinctAI website.