
We can already use AI to produce concept art, 3D models, textures, audio, copy and creative ideas.

Looking back at 2022, it feels like we’ve reached a turning point towards mainstream adoption of AI, and I’m excited to see the rapid development the coming years will bring.

As of today, these are the tools and possibilities that have caught my attention.

1/ GPT

GPT is the tool that will have the largest impact on my daily work. Here are a few examples of regular use cases:

  • producing copy and user research for the strategic elements of a pitch or client deliverable
  • generating idea listicles to proactively propose to clients
  • kickstarting workshops and brainstorming sessions with more information, data or ideas
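
As a rough sketch of the listicle use case above — assuming the `openai` Python client and a current chat model; the helper names and model choice are mine, not part of any actual workflow — the prompt can be assembled programmatically and sent to the API:

```python
# Sketch only: the openai client usage, model name and helper names
# are assumptions, not part of the original post.

def listicle_prompt(topic: str, n: int = 10) -> str:
    """Build a prompt asking the model for a numbered list of ideas."""
    return (
        f"You are a creative strategist. Propose {n} concise, "
        f"client-ready ideas about: {topic}. "
        "Return them as a numbered list, one idea per line."
    )

def generate_listicle(topic: str, n: int = 10) -> str:
    """Send the prompt to the API and return the raw text response."""
    # Requires `pip install openai` and an OPENAI_API_KEY in the environment.
    from openai import OpenAI  # third-party; imported lazily on purpose
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[{"role": "user", "content": listicle_prompt(topic, n)}],
    )
    return response.choices[0].message.content

# Usage (needs a real API key):
#   print(generate_listicle("WebGL experiences for a product launch", n=5))
```

Keeping the prompt in a small helper like this makes it easy to reuse the same framing across pitches and only swap the topic.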

The developers I collaborate with have also begun using GPT to support technical workflows, such as learning and experimenting with new technologies or optimising code (improving performance in the process).

Beyond these examples there will be opportunities we can hardly imagine today, such as using AI to power NPCs and character dialogue.

2/ 3D

3D generation is a rapidly evolving space that is becoming increasingly powerful, and I'm curious to see what these tools will be capable of in a year’s time. For now, the studio team has been able to generate textures with Stable Diffusion and produce model variations to populate WebGL scenes.
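
For the texture workflow mentioned above, a minimal sketch using the open-source `diffusers` library might look like the following — the model id, prompt wording and helper names are my assumptions, not the studio's actual setup:

```python
# Sketch only: the diffusers pipeline, model id and prompt phrasing are
# assumptions about one possible setup, not the studio's actual workflow.

def texture_prompt(material: str) -> str:
    """Phrase a prompt so the result works as a tileable WebGL texture."""
    return (
        f"seamless tileable texture of {material}, "
        "top-down view, even lighting, high detail"
    )

def generate_texture(material: str, out_path: str = "texture.png") -> None:
    """Generate a texture image and save it to disk."""
    # Requires `pip install diffusers transformers torch` and ideally a GPU.
    import torch
    from diffusers import StableDiffusionPipeline  # third-party
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # model id is an assumption
        torch_dtype=torch.float16,
    ).to("cuda")
    image = pipe(texture_prompt(material)).images[0]
    image.save(out_path)

# Usage (downloads the model weights on first run):
#   generate_texture("weathered concrete")
```

Prompting for "seamless tileable" output is a common trick for textures; results still usually need a pass in an image editor before they tile cleanly.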

We’ll soon need to look further into text-to-3D model generation with Nvidia’s tools or DreamFusion.

Other possibilities that I hope to experiment with next year:

  • generating 3D models from images with Kaedim
  • creating entire worlds in 3D with Promethean

3/ Creative

These are by far the most popular applications today, capable of instantly generating incredible artwork.

Anyone can generate images from text using the Hugging Face demo; otherwise you can subscribe to Midjourney, DALL-E or Stable Diffusion.

I’ve also been having fun generating paintings from images with EbSynth.


4/ Video

A tremendously exciting area of innovation for me, although I have yet to test it on a real project. RunwayML drastically reduces production time on motion design and video, offering a whole set of features to automate repetitive tasks.

Automatic subtitles, sound effect generation and beat detection alone would have saved me days on the last documentary.

5/ Audio

Sound is often even more intangible than visuals, and we need to produce many variations to agree on a direction. Based on past experience, we could save days of work using tools such as Soundful or comparable services to:

  • generate and test narrative voiceovers
  • generate sound effects for WebGL scenes
  • generate character dialogues for games

We're already in motion to experiment with these early next year.

6/ Pitching

I'm on the waitlist to try this one out, but it looks like Magical Tome will make it into my pitch-building process. Combined with Midjourney for visuals and GPT for research, copy and roadmaps, I should be able to condense a day or two of work into a single hour.

Conclusions

The industry is going to shift dramatically in the next few years, and I’ll be doing my best to leverage these tools for creativity and output while cutting out unnecessary work.

No doubt we’ll need fewer hours to produce the same results. Will agencies expect working hours to remain the same, coupled with an increase in projects? Or will we invest the extra time in staying relevant by learning new technologies?

I'd love to hear how others are using AI in their projects, keep me honest!