At Siggraph 2024, Shutterstock revealed its new text-to-3D generative AI model, an API that can be used in a browser or plugged into 3D software such as Blender to create workable models in minutes. Being able to create 3D models from prompts or image references could open up a complex art form to millions of new creatives.
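
To give a sense of how an API like this slots into a 3D workflow, here's a rough sketch of what calling a text-to-3D endpoint from a script could look like. The URL, request fields and response format below are placeholders I've made up for illustration, not Shutterstock's documented API.

```python
# Hypothetical sketch only: the endpoint, credentials and request shape are
# illustrative placeholders, not Shutterstock's actual text-to-3D API.
import requests

API_URL = "https://api.example.com/v1/text-to-3d"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                           # placeholder credential


def generate_model(prompt: str, out_path: str) -> None:
    """Send a text prompt to a text-to-3D service and save the returned mesh file."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "format": "glb"},  # assumed request fields
        timeout=300,
    )
    response.raise_for_status()
    # Assume the service returns the generated mesh as binary glTF data
    with open(out_path, "wb") as f:
        f.write(response.content)


if __name__ == "__main__":
    generate_model("a weathered bronze statue of a fox", "fox.glb")
```

A generated .glb file like this could then be pulled straight into Blender with its standard glTF importer (`bpy.ops.import_scene.gltf(filepath="fox.glb")`), which is where the 'plugged into 3D software' part of the workflow would come in.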

As last year's Adobe report stated, the future of design is 3D. We've seen small steps toward text-to-3D over the past year, but the new Shutterstock generative AI model revealed at Siggraph 2024 is one of the best I've seen. The generative API is built on NVIDIA's Edify generative AI architecture and has been trained on Shutterstock's own content, which includes 'half a million ethically-sourced 3D models, more than 650 million images', giving it an ethical foundation.