The craft of screen graphics and movie user interfaces – interview with Paul Beaudry
Continuing a series of interviews with designers and artists who bring user interfaces and graphics to the big screen, it’s my pleasure to host Paul Beaudry. You have seen his work on “Avatar”, “The Hunger Games” and “Ender’s Game”, and in this interview Paul talks about what goes into designing screen graphics: drawing inspiration from the latest explorations in real-world software and hardware, holographic and 3D displays as a possible evolution of human-computer interaction in the next few decades, the challenges of using technologies such as Google Glass or Siri in film, the ongoing push to create more detailed and elaborate sequences, and his thoughts on working remotely with the current crop of collaboration tools.
Kirill: Tell us about how you started in the field of motion graphics.
Paul: I started out wanting to be an Avid editor, editing documentaries and similar productions. As soon as I finished school and got into the industry, I found out that what I liked most was coming up with graphics for the documentaries and shows I ended up working on. From there I started teaching myself motion graphics, moving into opening title sequences and getting some cool opportunities.
There are really good communities online for learning. At the time for me it was talking with other people at the great mograph.net site, talking about how to get into the industry, the challenges and technical issues. That’s how I got my start. The software itself is not crazy, and a lot of people learn how to use, for example, Photoshop even though they’re not professional graphic designers or photographers. The way I look at After Effects and the other 3D tools that we use is that they are more complex than Photoshop, but not so much that they’re impossible to learn on your own. It was years going crazy, huddled over my computer, teaching myself in every bit of free time I had during late nights, not having much of an outside life for sure [laughs].
Kirill: That’s on the technical side of things. What about the design side?
Paul: I hope I’m still learning as I go. It was a lot of the same, learning design and technical stuff together hand-in-hand. I think it’s important, actually. A lot of the conversations we had online were about getting critiques of your work, moving forward on the design and technical sides at the same time.
Kirill: How did you start building out your portfolio?
Paul: A whole bunch of spec pieces. My interest at the start was not really in UI design for film. At the time not that many people even knew that could be a full-time job. I was more on the television side of things, doing commercials, title sequences, more traditional motion graphics. And it was also doing my own stuff, building up a reputation to get real projects.
Kirill: What were those first real projects?
Paul: I was working with the company Frantic Films on a half-hour documentary show for the Discovery channel. I don’t think it ran in the US; it was a Canadian thing. I got a chance to do the opening title sequence for them, expressing my interest in doing that, and they gave me my first shot. From there I started doing a lot more work for them, and some stuff for HBO and A&E a few years later, and as a freelancer I kind of branched out from there.
Kirill: What’s the story of the iOS music app Anthm that you have in the portfolio section on your site?
Paul: Anthm came out in February 2012, and actually the name is now Jukio because we ran into a bit of a legal issue. That’s something I did with my friends in our free time. It’s me, Tyler Johnston who is a graphic designer, and Ben Myers who I worked with on Avatar. We were having drinks at a bar, and we were annoyed at the music they were playing. So we came up with an app for iOS that lets you request and vote on the music playing in your location from your phone, like a jukebox with millions of songs.
Kirill: Perhaps jumping a bit forward, your work on movie UIs is the tip of the iceberg above the surface – playback loops or basic interactivity that mostly focus on the presentation layer. Creating a real application that people run on their devices, on the other hand, forces you into the full design and implementation cycle, complete with crashes, bug fixes, feature requests and so on.
Paul: My first passion is to create fantasy user interfaces for film, but at a certain point you want to make something that’s real, something that a real user can use. Something that doesn’t only look like magic, but hopefully feels magical to use. Not that Jukio is earth-changing or anything, it’s simply a music app, but there are small UX choices there that feel magical to us and that’s not always something you can do in film. It’s definitely something that we’re really interested in – getting real feedback from people, making something real that can be used to solve real-world problems.
I should mention that none of the films I can talk about right now had real software in them – everything that’s been released was done in post. But the company I’m working with now, G Creative Productions, has the ability to create real software that’s used on set by the actors while they’re filming. It’s all done with live playback, so it’s not a post-production thing at all – they create real software that the actors can tap, change on the fly and really interact with while they’re filming.
Kirill: And you’re focusing mostly on presentation and interaction part?
Paul: Absolutely. We fake a lot of the underlying pieces. But it’s still more involved than what you do in the post process. There’s a lot more interaction involved, and I think there’s actually a lot more thought that has to go into it. There’s a programming level involved that isn’t there when you do it in post.
Kirill: Is it more challenging to do something that actors interact with on the live set?
Paul: I’m definitely still more comfortable with post production, as I’m still on my first couple of films doing playback on sets. There are benefits to both ways of doing it. On the playback side it just looks more realistic. In post production there are different challenges to consider, like the interactive lighting – how the light from the screen will reflect off of someone’s face, for example. If you do it in post, it becomes a big job to fake the light created by these screens, whereas in playback it’s not really a concern anymore.
In most cases it’s a lot more practical, as the director can actually see the screens while he’s filming on set. On the other hand, in post we can do all kinds of crazy stuff like holograms, the craziest ideas we want and there’s really nothing stopping us from doing it.
Kirill: Well, except for budget and time.
Paul: Time and budget, yeah. We push those pretty hard [laughs]. I think I still prefer doing things in post where we can create this crazy stuff. That’s really where a lot of fun is – envisioning these really far futuristic pieces of technology without concern for what’s technically possible.
Kirill: Avatar is your only released movie so far that was done in stereo 3D. Did that add a lot of complexity to what you did?
Paul: Absolutely. Avatar was the only one that we did that way. It was definitely challenging. It looks great in the end, but it’s not something I’m super-anxious to do again, I’ll say that [laughs].
It’s a lot more interesting to think about something in 3D space. If the user actually has to use it this way, how would you use the layering to enhance the user experience, and how would you use the layering in film to help tell the story better? And at the time, getting the technical aspect across was hard because the software tools didn’t have much stereo support. I was at Prime Focus for Avatar, and they had great custom-made tools that would take our After Effects layers and disperse them in 3D space within the scene, really making the compositors’ lives a lot easier.
I worked with Ben Myers and we had basically wrapped up a month and a half early, getting everything into position to be approved by James Cameron, and seeing the end in sight. And we decided that for a lot of hero screens we would rig our own stereo cameras within After Effects, and use those to actually render hundreds of layers of depth rather than just 3-5. That was cool, to come up with that process on our own before it was really a common thing, before the software was geared to allow us to do that easily.
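As a quick technical aside on that trick: the sketch below shows, very roughly, how one could rig a stereo camera pair in After Effects via scripting and spread layers through Z depth. It is a minimal illustration of the general technique Paul describes, not the actual Avatar pipeline – the interaxial distance, the layer spacing and the camera placement are all assumptions. After Effects is scripted in ExtendScript (plain JavaScript); the snippet is typed loosely so it also reads as TypeScript.

```typescript
// Minimal sketch of a DIY stereo rig via After Effects scripting.
// All names and numbers are illustrative assumptions.
declare const app: any; // After Effects scripting global

const comp = app.project.activeItem; // the comp holding the screen graphics
const interaxial = 6;  // horizontal eye separation in comp pixels (assumed)
const depthStep = 25;  // Z spacing between consecutive layers (assumed)

// Spread every layer through Z so the screen has continuous depth
// rather than a handful of flat planes. (A real setup would also
// compensate scale, since pushing a layer back shrinks it in camera.)
for (let i = 1; i <= comp.numLayers; i++) {
  const layer = comp.layer(i);
  layer.threeDLayer = true;
  const p = layer.position.value;
  layer.position.setValue([p[0], p[1], i * depthStep]);
}

// Two cameras offset horizontally around the comp center.
const center: [number, number] = [comp.width / 2, comp.height / 2];
const left = comp.layers.addCamera("Left Eye", center);
left.position.setValue([center[0] - interaxial / 2, center[1], -1500]);
const right = comp.layers.addCamera("Right Eye", center);
right.position.setValue([center[0] + interaxial / 2, center[1], -1500]);

// Only the topmost enabled camera renders, so toggle `enabled`
// between the left-eye and right-eye render passes.
right.enabled = false;
```

The point of rendering this way is that every layer becomes its own depth plane, so the stereo disparity varies continuously across the screen instead of jumping between three to five flat levels.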
Kirill: Was that for the big holo table?
Paul: I didn’t do the table, but rather all the other 30 or so screens you see in that set. The holo table was fully 3D, and I believe they used 3ds Max, which I assume made working in stereo a bit easier than After Effects allowed. For us the challenge was to get After Effects to render things in stereo quickly and efficiently, which was a big hassle. We were taking care of those 30 screens across a bunch of shots, so it was a big job to do everything in stereo.
Kirill: Leaving aside the technical side of things, what about the initial explorations of the overall space? Do you sit down with the director, the production designer or perhaps the VFX supervisor to discuss the general interaction aspects? I’m looking at the list of films you worked on – Avatar, The Hunger Games and Ender’s Game – and they are set sufficiently far in the future that you don’t necessarily have to be bound by the limitations of current technology. Who is involved in defining the interactions?
Paul: I don’t really interface directly with the directors. Usually there’s a layer between us, as with Gladys Tong in the case of G Creative right now. There’s a lot of back and forth, pitching the craziest ideas, throwing a lot of stuff out. On Avatar, for example, a lot of things were set out before we joined. James Cameron was on that film for 14 years, I think, and Ben Procter had already done a lot of designs for our screens. It was working under our Art Director Neil Huxley with Ben Myers and others to animate them.
On The Hunger Games we looked at a lot of real-world references, extrapolating them into the future. Where would surveillance technology be, and how would it look in this dystopian future where they can really watch and control everything that happens within the Games? We looked at Microsoft Photosynth, for example, and we really loved the idea of a 3D interface traveling between photos to see something from a different perspective. We used that in some of the screens – if the Gamemakers have the technology to see and control everything, they surely have some way to view the arena and everything within it.
Kirill: I loved the idea that you removed all the intermediary steps for controlling the arena: the Gamemakers operate on a scaled-down digital replica, able to virtually touch and control every part of the terrain, manipulating the digital representation and having its physical counterpart immediately “react” to those manipulations. They don’t type, they don’t move a mouse in some kind of intermediate plane.
Paul: That was definitely the most challenging set I worked on. You have 24 people essentially controlling the same computer. They all touch and control the same central model, and it was a challenge to think through things like what they need to control, how they control it, what kind of interactions are necessary when it comes time to throw a fireball at Katniss, for example. What do they need to do when they knock down a tree? When they unleash the dogs? It’s a fine line: you’re trying to figure out what the user would want to do in that scenario, what’s going to tell the story in the most effective way possible, and how to walk that line. How can we make this look futuristic and feel like a really intelligent, all-encompassing model of the arena?
Kirill: Do you still need to stay at least somehow connected to the current technology, to not get too futuristic?
Paul: It’s difficult. Our first focus is on telling the story. We always start from there. Someone needs to throw a fireball at Katniss, and how do we show that? You take real-world references and extrapolate to technology that is more advanced by a certain number of years. What would make that process simpler? What would someone want to see?
For example, if we’re working on a medical animation for a film, what would a surgeon want to see if he had limitless technology? What is the ideal way to perform a surgery? Is it wearing something like Google Glass and seeing an augmented-reality display in front of you, showing where to make the incision? Maybe the medical animation is using nanobots instead, and the doctor’s interface is used to control them. A lot of this comes from current technology that we see and extrapolate into the future. What would someone ideally be using, and can we get that across in our film and still tell our story?
Kirill: You mentioned Photosynth, and I’m sure you have a whole bunch of other references. Are you trying to stay current and read about all the new technological explorations, even if they are not necessarily commercialized? Is it a big endeavor to stay aware of all the new things?
Paul: It’s a big job. I’m interested in these things anyway though, so a lot of my free time is spent looking at new technology. Any time we start a job, we’re looking at references and at what kind of crazy stuff is coming out right now – robotics, networking, holograms – really anything. We’re trying to keep abreast of that so we can use it in film and hopefully figure out where it’s all headed in the future.
Kirill: I hate bringing up “Minority Report” as I’m sure you’re sick of hearing about it again and again, but it’s a popular example of things going the other way – interactions portrayed in a film that find their way into real-world products. Is this a two-way street, a sort of feedback loop between fantasy UIs and cutting-edge actual products?
Paul: Absolutely. “Minority Report” is funny, I don’t think I’ve ever been on a project where it wasn’t referenced by somebody. They did such a good job that it’s still relevant, and I’m sure had a lot to do with things like Kinect. There is that feedback loop.
You look back at old “Star Trek” episodes and how that’s now inspiring people to do medical apps on smartphones. I would love it if what we’re doing inspires people to make something in real life, the same way real life is inspiring us to put these things in movies. I saw a Twitter exchange between Elon Musk and Jon Favreau about Elon trying to get his team to make some of the Iron Man interfaces for real, with holographic displays. And I think that’s powerful. You need people dreaming up the crazy stuff that’s not limited by what’s possible now, and movies are a great outlet for that.
Kirill: You mentioned Google Glass, which is, for me, an interesting piece in the sense that it is a very personal gadget. It’s a screen that nobody but the person wearing it actually sees, which makes it hard to use in a movie, as you’d need to constantly switch back and forth between that small screen and what happens around the person wearing it. And that’s not necessarily the best storytelling experience.
Paul: That’s actually one of the challenges we’re having now. We’re working on a concept, and we unfortunately can’t have something like Google Glass, as the viewer isn’t able to see it easily without switching to a first-person perspective. You see the trend in films over the past few years of using transparent displays, which is maybe not the most useful thing in the real world outside of goggles or windshields – I don’t really want to see through my laptop right now to the wall behind it. But in a movie it helps a lot, where you can have a reverse shot of an actor’s face looking through his display. It’s a wonderful device in a film.
And things like Google Glass, as much as we want to put them in film, that’s exactly the challenge. It’s difficult because you can’t really see it all that easily. You have to walk the line, throwing some of your favorite ideas by the wayside because it is a film and we have a story to tell first and foremost.
Kirill: Are you afraid of people watching your films in 25 years and seeing how off the mark they were, or is each one just a product of its time?
Paul: I’m not afraid of it. You look back at some of the great stuff – like “Blade Runner”, for example – and it may look dated now, but at the time it was incredible. I hope my stuff looks really dated in 10 or 20 years, because that means technology has become so advanced that we’ll have better technology in real life than we ever dreamed of in movies, and I’d love to have real technology that actually works like this.
I think any time you’re predicting what the future will hold, it’s inevitable that some of the stuff will be fodder for jokes. In 10 years we will look back and think: “What was that? That’s so ridiculous that they thought in the future we’d be using interfaces like that.” It’s not something that necessarily “scares” me, though – I hope that happens.
Kirill: It’s my impression watching sci-fi movies in the last few years – like Avatar, Prometheus or Avengers – that the trend is to use screens everywhere, putting glass surfaces of all sizes and shapes all over the place. That may not necessarily be the way of the future if you look at something like Google Now, Siri or Glass where the visible surface of the technology is receding and shrinking into the background – instead of expanding around us as portrayed in these movies.
Paul: Absolutely. That’s the trend now in real life – smaller screens and more info that is intelligently designed to fit into smaller spaces. In a movie that doesn’t work so well. It’s a double-edged sword. We can’t always show what we think things are actually going to look like in 30 years. In 30 years I don’t think I’m going to have these giant monitors sitting on my desk anymore.
Holograms are fun in that way. They can get out of your way. They feel a lot more amorphous, taking whatever shape you want. That’s an interesting thing to think about. These little gadgets don’t always translate to film very well, so we’ll always have these giant operation centers [laughs] with huge screens everywhere, whether or not it’s the best user experience.
Kirill: You say that you don’t see your desk in 30 years’ time having these gigantic monitors. How would you like it to look? What kinds of obstacles do you wish to see removed in your interactions with computers in our lifetime?
Paul: It depends on how far into the future we’re looking. Direct brain-to-computer interfaces are absolutely the holy grail, but that’s obviously a long way out. And that’s not something that is going to work well in movies at all.
If you ask about our lifetime, it’s a tough question. The idea of 3D displays is interesting. Right now we’re using stereo displays for entertainment purposes, but I’m really interested in trying them out while I’m working. What kind of interfaces can we design and what kind of obstacles can we overcome if we think and operate in 3D – instead of the 2D aspect that we’re locked to right now? If we can break the 2D plane of our screens, what kind of interactions and experiences can we create?
It’s possible now, but you don’t see many tools using it. I’m looking at After Effects right now, and I can see how having depth in not just the viewport but the interface itself might communicate some ideas better, and make the interface more efficient and powerful to use. Perhaps some subtle depth cues could make an interface feel more natural to use.
Kirill: You’re dreaming up these futuristic interfaces and showing us how the interaction feels, and yet you’re still bound to the capabilities and limitations of the current crop of software tools. Is that frustrating in a sense?
Paul: It’s definitely frustrating from the human perspective. I look at a 3D model on my computer, but I can’t actually reach out and touch it, I can’t manipulate it with my hands. I have to use a mouse or a pen tablet and a keyboard in order to manipulate these things and work with them. The idea of a Kinect-like sensor that I can use – and I hate to bring up “Minority Report” again [laughs] – that’s exactly what I’d like to do. It doesn’t need to be an Iron Man style hologram. It can be in front of a flat screen, or augmented reality like Google Glass, for example.
Being able to move my hands forward and manipulate these models that I’m working with, to animate the camera with my hands – as opposed to a mouse and a keyboard. It’s a lot more natural and opens up a lot more room for creativity, if we’re using it in a human-like way with our hands and our thumbs instead of the makeshift interfaces of the mouse and the keyboard. Those are really there because the tactile interfaces were not available. There’s a lot of power there if we can work that way and remove some of the tools coming between our brains and the software we use.
Kirill: Waving your hands around all day long to complete tasks might not be the most ergonomic way…
Paul: To be clear I don’t ever want to be waving my whole arm “Minority Report”-style. That would get very tiring, but the idea of being able to use gestures in 3D space with your hands – and I mean very minimal stuff, not necessarily waving your whole arm around although that does work well in film – I mean letting go of the mouse and the keyboard, and working with your hands like you would if you were molding clay or sitting at a work bench. Not just sliding fingers across a flat piece of glass, but letting go of the glass and using the movements and gestures your hands have evolved to perform naturally.
Kirill: Getting back to your released movies (four so far): has it become easier to do the interfaces on the technical side of things, or do you see a matching rise in demand for detail from directors and VFX supervisors? Do those demands scale with the capabilities of the tools at your disposal?
Paul: Absolutely, like anything else in visual effects. It scales with the toolset. When we were on “Avatar”, it was a big technical challenge just to account for how much time it took to render something. That’s not always a concern anymore, as computers have become more powerful. A lot of the time I can render something in 30 minutes and it’s not really affecting my day. But at the same time the demands from directors are getting crazier – a lot more 3D holograms and so on.
You look at the holograms in “Avatar” – only four years later – and it’s pretty simple in comparison, which, to be clear, is not a knock on “Avatar”. I watched that team go through making it, and it was a huge challenge, and it took a lot of people to make it. And now it’s kind of par for the course in these movies, if you look at the crazy holograms in “Iron Man” and “Prometheus” for example. As our technology to make these films progresses, the demands from directors get more elaborate. We have to always look for new ways of doing things, new ways of tackling those challenges that a new film may bring.
Kirill: So it’s never going to get to a point where the director will say that what you have is good enough and you can stop.
Paul: [laughs] I don’t think so. Usually no matter how good your first version of something is, you’re going to do 30 more versions before it gets approved. That’s good though, you’re always pushed to make something more elaborate. I don’t think anything we’re doing now is any easier than what we were doing when I started out. It’s an arms race. As the technology improves, the demands get more and more elaborate. Even if the software got to a point where designs and animations were mostly automated, I don’t think the directors would want that. They’d want a custom solution, always raising the bar.
Kirill: And there’s also the part where you and your team don’t want to repeat what’s been done in the past on other films.
Paul: Always. We always want to push the ball forward with each project so we’re not just rehashing what’s been done before, by ourselves or others.
Kirill: You’ve talked about the technical aspects of doing 3D, but what about your personal opinion as a moviegoer? Do you think it’s going to take over more genres, or perhaps it’ll remain confined to tentpole sci-fi productions?
Paul: I think it’s going to stay a sci-fi and fantasy thing. I don’t think it’s going to become more prevalent; the audience has spoken. For huge movies like “Avatar” it works great. It was filmed in 3D, which makes a huge difference – as opposed to the post-conversion process, where a movie is converted into 3D later. As a moviegoer, I’d rather see a 2D version of the film than something that was post-converted. And unfortunately I don’t think the average viewer knows the difference, so they see “3D” in the marketing materials and think it’s going to be the same as a truly 3D movie like “Avatar” was. I don’t think the effort of post-conversion is worth it, and I hope people vote with their wallets, showing that they don’t like it. And correct me if I’m wrong, but I don’t think 3D television sets are selling particularly well either, because the content isn’t even there.
Kirill: It might be a bit different from movies, where you pay slightly more every time you go to see a 3D movie. For a 3D TV set purchase you need a really good reason – such as a wealth of live and recorded content – to justify paying a hefty upfront premium for the device. And that content just doesn’t exist yet.
Paul: Yes. I think in the case of huge blockbusters like “Avatar” it adds to the experience. I won’t name any names, but I can think of some post-converted movies that were very distracting. Post-conversion will hopefully go away, and true 3D will be reserved for the big-budget spectacles that can do it properly. I hope that happens, as it definitely adds to the experience when it’s done well.
Kirill: I went to see “Ender’s Game” and it had two surprises. The first one was that there was no 3D version, at least in my area. And the second one was that I didn’t miss it at all, especially in the last part where they are in the simulation cave. It was staged and shot in a very immersive way that made me feel like I was right in there with Ender. I thought I’d be missing the extra dimension, but it wasn’t even necessary for me as the viewer. Maybe I’m too “conditioned” to expect these blockbusters to be in 3D “by default”, and “Gravity”, for example, is pushing that even further by having audio panning around you to follow the camera direction.
Paul: Exactly, it’s not necessary for every viewer. But that being said, if you did see that scene in 3D, I bet it would be just mind-blowing. I generally don’t buy 3D tickets, but for things where I know it was filmed in 3D, or rendered in 3D like the Pixar movies for example – those I find worth it, having millions of layers of depth. And when you have three or four layers of depth in post-conversion, I don’t find it’s worth it. I think people are going to slowly learn the difference. And we’ll never know for “Ender’s Game” unless they post-convert it.
There’s this amazing thing about movies. It’s a whole bunch of people, thousands of people, coming together to form this amazing whole. And you go to the theater and you notice these things – like the audio following the camera movement and the actors’ positions in “Gravity”, or the screens that we’re doing – and that’s exactly what it’s all about. As individuals we’re a small part of the massive whole that becomes a movie, and hopefully these details come across.
Kirill: How do all these thousands of people collaborate? As a freelancer working on a particular facet of the much bigger production, do you need to be on the set, to sit in the same physical space with your team, to be available to other departments? How much can be done remotely with the current crop of software collaboration tools?
Paul: I do enjoy sitting with people and working together in the same room. But for the past two years now I’ve worked almost exclusively from my home office. For “The Hunger Games” it was half-and-half, where I’d go to Montreal for a while and then come back here. It worked well, but the tools are good enough now that there are more benefits to working remotely for me personally. You can work on your own time, which is perfect in a creative field like this. Contrary to popular belief, you actually put in a lot more hours than you would clocking in and out of an office every day.
Using tools like Basecamp from 37signals, Skype and Google Docs makes it easy to collaborate online. I personally have never had to be on the sets – there are other highly skilled people on our team at G Creative who do that part of the job. As a designer and animator it’s not really necessary for me to be there, so online collaboration works really well and allows me to focus on the creative side of things.
Kirill: Is this your field for the foreseeable future – mixing screen graphics for movies and your own software projects?
Paul: That’s my plan for now. I have three movies yet to be released, all of which involved the design and animation of screen graphics. I formed a bit of a specialty before joining G Creative – and it’s their specialty as well, so it’s a natural fit. I have this great opportunity to work with Gladys Tong, who has over a decade of experience bringing this cool technology to life in film. And in my free time we have a lot of apps and real software we’re working on. It’s definitely a different challenge from film, and I love both aspects. I love making the fantasy UI stuff for film, where we can make it as crazy as the director will let us without worrying too much about how usable something really is. And then in my free time it’s rewarding to make something that does have the constraints of real software – something that people can download and use in real life.
And here I’d like to thank Paul Beaudry for taking the time out of his busy schedule to talk about crafting screen graphics and user interfaces for movies.