Paul Trillo directs ‘Cycles of Self’, using DALL-E 2 to generate hundreds of outfits, filmed in Los Angeles.
Words from Paul below.
Tell us about using DALL-E to cycle through a series of looks.
I’ve done this kind of outfit change motif in a short film and a handful of commercials I’ve directed with ArtClass. I think a lot of my work is pursuing new techniques, exploring practical stuff, digital stuff, or a combination of both.
I did a short film about 10 years ago called ‘A Truncated Story of Infinity’, and it’s about the multiverse before the multiverse was hot, and it used this technique, but in a sort of stop-motion approach.
How does DALL-E hold up against your previous methods?
It’s way crazier than what I could have done at the time. When I first gained access to the AI, my mind immediately went to using techniques that I’ve used in the past, but pushing them further. If you were to rent all those outfits or design them with a wardrobe designer, it would be incredibly expensive and time-consuming trying to get talent in and out of 100 different outfits while the sun is setting and the camera is moving. And if you did it digitally with traditional 3D, you’d have to model, texture, and all that stuff.
When working with these tools, are you just looking to explore a certain technique or AI, or do you have an objective of what you want to achieve?
Each of the AI experiments has been to explore it in a different way, even though it’s a similar technique — I’m always trying to find ways to twist it or to push it. I try to look at new technology by thinking about what it opens up that we haven’t been able to do before. Looking at it from a more obtuse angle, instead of just repeating the same stuff that everyone else is making.
How do you deal with the infinite possibilities in front of you?
It’s a little overwhelming, but it does free you up and offers something that you couldn’t have done before without having to sit over a designer’s shoulder and tediously art-direct millions of different options. So to be able to use this tool without having to really disturb other people is interesting, just for iteration and exploring, and following different tangents.
How do you think it will affect VFX artists working today?
For me, this is not about taking work away from anyone. I see it as a risk perhaps for illustrators doing 2D work, but my hope and intention with these experiments is to show filmmakers and VFX artists that there’s a new tool we have to play with that opens up new possibilities. VFX artists already have to work nights and weekends, so hopefully down the pipeline, when some of these tools get better and higher resolution, they will allow people to work more efficiently, but also more creatively.
In what other ways do you think you will bring DALL-E into your workflow as a filmmaker?
There are definitely other uses for it beyond the end product of these kinds of experimental animations.
I’ve already started using it for treatments. I’m working on a script with a buddy and we write a scene and then go to DALL-E and try to create the imagery from that scene. Which is insane. You can write in a DP or director’s name, and you get better lighting results because of it. For treatments, where you’re usually limited to media that already exists, it allows you to envision and show more pointedly what you’re hoping to accomplish.
You could also get set design concepts out of it. I think a lot of directors think abstractly, where they give directions to a production designer such as “I want it to feel like soap bubbles, but I also want it to feel like brutalist architecture”, and with DALL-E you can enter two disparate ideas, see what it comes up with, and take that to a designer. Not all directors can do storyboards and stuff like that, so being able to discover your text-based ideas visually might help you better communicate with other crew members.
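To make that concrete, here is a minimal sketch of that kind of mash-up prompt, assuming the 2022-era openai Python client for DALL-E 2. The API key, prompt wording, and image count are illustrative placeholders, not Paul’s actual workflow.

```python
# A rough sketch of combining two disparate references in one DALL-E 2 prompt,
# assuming the 2022-era openai Python client. Values below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Mash two ideas into a single prompt and request a handful of variations
# to bring to a production designer as concept starting points.
response = openai.Image.create(
    prompt="set design concept: soap bubbles meets brutalist architecture, cinematic lighting",
    n=4,
    size="1024x1024",
)

for i, item in enumerate(response["data"]):
    print(f"concept {i}: {item['url']}")
```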
Have you seen the recent videos that have emerged, the first videos generated from text?
That is next. It’s gonna be a little scary. I think using it in a way that actually entertains people is going to take a long time, but it could be a replacement for stock footage.
These are all very much in their infancy — they’re not great, but you can imagine:
- Patrick Esser (@pess_r), August 11, 2022: “#stablediffusion text-to-image checkpoints are now available for research purposes upon request at https://t.co/7SFUVKoUdl. Working on a more permissive release & inpainting checkpoints. Soon™ coming to @runwayml for text-to-video-editing.”
- Patricio Gonzalez Vivo (@patriciogv), February 17, 2022: https://t.co/EMDHgVyEyF [embedded video]
- CoffeeVectors (@CoffeeVectors), September 12, 2022: “Took a face made in #stablediffusion driven by a video of a #metahuman in #UnrealEngine5 and animated it using Thin-Plate Spline Motion Model & GFPGAN for face fix/upscale.”
- Michael Friesen (@MichaelFriese10), August 5, 2022: “made with #CogVideo”
Lastly, I recently posted a video zooming out from the endings of ‘There Will Be Blood’, ‘Fight Club’, and ‘2001: A Space Odyssey’.
I used DALL-E to extend the images with outpainting, creating a behind-the-scenes view of each shot. It shows that these realities we get pulled into are false: when shooting you’re so precious about the frame, and then as soon as you pan a little bit to the right it’s pure chaos, working right up to the edge.
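For readers curious how that kind of frame extension works programmatically, here is a rough sketch assuming the 2022-era openai Python client and Pillow. Paul worked in DALL-E’s own outpainting editor; the file names, prompt, and canvas sizes below are hypothetical.

```python
# A rough sketch of extending a film frame beyond its borders ("outpainting"),
# assuming the 2022-era openai Python client and Pillow. The edit endpoint
# fills in transparent pixels, so padding the frame onto a larger transparent
# canvas lets the model imagine what lies outside the original composition.
# File names, prompt text, and sizes are hypothetical.
import openai
from PIL import Image

openai.api_key = "YOUR_API_KEY"  # placeholder

# Place the original frame in the centre of a larger transparent canvas.
frame = Image.open("final_frame.png").convert("RGBA").resize((512, 512))
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(frame, (256, 256))
canvas.save("padded_frame.png")

# With no explicit mask, the transparent area of the image is what gets filled.
response = openai.Image.create_edit(
    image=open("padded_frame.png", "rb"),
    prompt="behind the scenes of a film set surrounding the shot: crew, lights, cameras, cables",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])
```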
- Paul Trillo
- Director