A self-described science-fiction fan, visual effects supervisor John Nelson saw a “dream gig” and rare opportunity in Denis Villeneuve’s Blade Runner 2049. Getting the chance to work on a sequel to a revered sci-fi film—one of the pillars of the genre—Nelson would construct surreal cityscape shots out of real aerial footage shot across the globe, while exploring holograms in all their various visual representations.
Specifically, in 2049, Nelson would create holograms of iconic musical artists Elvis and Frank Sinatra, a holographic fusion of two women in the life of blade runner K (Ryan Gosling), and a recreation of Rachael (played in the original film by Sean Young), looking not much older than she did in Ridley Scott's original.
With far more effects tools at his disposal than Scott had when he set out on his Blade Runner journey, Nelson's challenge was to figure out the best use of visual effects within the sequel without going overboard, landing on a sensibility in tandem with the original.
What was discussed in your first conversations with Denis surrounding this project?
Denis and I spoke and I showed him some pictures I’d taken of soccer players in fog shot in Venice, near my house. He goes, “Yeah, yeah. That’s what we’re going for.” I can’t say enough about Denis. He’s such a good director and a good human being. I think he sort of rips the vision straight out from his insides.
I once asked him, “Do you want it to be like the first movie?” Have industry in it—belching smokestacks and stuff. I’m from Detroit; I certainly know that. He said, “No no. I want it to be like Montreal, on a cool day in February.” We kept looking at visibility, all these coal fire smog alerts in China where you couldn’t see maybe half a mile—stuff like that.
The notion of a Blade Runner follow-up seemed particularly interesting given that the original was made in the ‘80s, at a time when visual effects weren’t nearly as sophisticated. The sequel must have been handled quite differently.
The first movie was just so well done. No one talks about the fact that they had an actors strike in the first film, so they had an immense amount of prep time. Having worked with Ridley before on Gladiator, he’s a supreme visualist. His sense of design on the first one was really quite acute, and it has all the food groups of what Ridley likes: backlight, dramatic composition. The first film is such a mixture of the future and the past—it’s like a future film noir thriller.
Looking at it that way, in the new movie, we had the technology to do things better. But so many films have been based on Blade Runner, with new technology, and all they do is add tons of stuff. It’s overkill sometimes. We really wanted to use the new technology, but we also had miniatures, as a nod to the first movie, and to make it feel different. I did the same thing with matte paintings. We worked with Deak Ferrand from Rodeo [FX], one of the best matte painters in the world. It made the film seem warmer, and we really wanted the film to feel analogue.
We wanted to shoot as much as we could, and build as much set around the actors as we could, but we were limited by the size of our backlots, and the nature of what we’re doing is, no city looks like what we put in the movie. So we did massive amounts of work, and really, I think every wide shot in the film would be a visual effect. The point is that we had the new technology, but we consciously reined in our effects a bit, and I think we’re a better film for it. CG does shiny things well, but we didn’t like shiny things in our movie. We liked dirty, wet, grimy.
Let’s talk about holograms. Can you give a sense of how some of those effects were put together?
The hardest three areas were the cities, Ana [de Armas] as Joi in the merge, and Rachael reborn. We didn’t like the way most of the holograms that we’d seen in movies looked, sort of glitzy and not very realistic. Early on, we figured out that we wanted realism there. We really wanted it to seem not like a CG character, like an effect, but more like technology that’s evolved.
Early on, we knew that we'd shoot our actors, but then we would map them onto CG surrogates. That, particularly for Ana, was a really good thing. We would model a CG character for Ana and then put her in the right position and map her, so her performance would be right. But then in CG, we would be able to put in a back-facing shell, which is like this: if you hold up a glass in front of you and you rotate it, you can see the front and back of the glass. The front of Ana would be Ana, so it would look real, and then the back would look volumetric. If she rotates, you can see that she has real volume because you're seeing both the front and back textures, and they counter-rotate against one another.
For the merge, we shot both actresses together with Ryan [Gosling]. Then, we would map their images onto a CG geometry and be able to merge them together. Mackenzie [Davis] would do her first take, and we would sit and go through Mackenzie’s tape and go, “Okay, at two seconds she raises her hand; at four seconds, she touches his head; at six seconds, she begins to walk around.” Then, I would take an iPad, put it in front of Ryan’s face, and have Ana look at Ryan until I got Ana’s eyes to be in the same place Mariette’s eyes were—because it was really magic when the eyes lined up.
Of course, the actors are not robots, so we would have great moments when the actors would line up, and then we’d have moments when they wouldn’t, so once we shot both actresses, we mapped them onto CG surrogate geometry. We had cameras all over the room to record them from every angle so we could know exactly where they were, and then we would map them. Then, we would be able to trim them and move them over a little bit.
Many shots in that sequence start out of sync and end up in sync; then, in the next shot—out of sync, in sync, out of sync, in sync, until that moment where he grabs her from behind and pulls her close to him. Then, two or three shots in a row where you see both women’s performances, and the eyes line up. When the eyes line up, she turns into this third woman, and she throws a look at him, which was very exciting. That’s the merge.
For Rachael, early on I knew that we needed to do a head replacement. So we cast an actress, Loren Peta, who was in full hair and makeup on the day she did the scene with Harrison. Sean [Young] was on set as an advisor that day. We would shoot Loren coming in, and then we'd replace her from the neck up with a CG head and CG hair. That was a pretty elaborate process.
We scanned Sean as she is now and we scanned Loren, and then we found a facial cast of Sean when she was about 28, around eight years after Blade Runner. We digitized that, and from all of that, we built a skeleton, and then muscles over the skeleton, and put in little imperfections for realism.
The response I got back was that this looked like a girl who looked like Sean, but it wasn't Sean. So I went back and studied little unusual things, like the '80s makeup—the particular bee-stung lipstick that she had. I also studied how much Sean's eyes bulged out of her face, which is very unusual. I made all these corrections; then, I went with Richard Clegg at MPC and we picked three scenes from the original Blade Runner, and replaced one shot in each of those scenes.
We showed those scenes to Denis and the producers, Andrew Kosove and Broderick Johnson at Alcon. They go, "Why are we looking at this? This is the old movie." I said, "That's because the digital double is one shot in each of those scenes. You tell me which it is." At that point, everyone agreed that we had a digital double that looked like Sean. Now, we needed to make her act. That really confident Rachael from the first movie is what we wanted when she first comes out. When she gets to Deckard, Denis said, "It's like two people who meet at a train station after 20 years, and the emotions just pour out of them." We looked for moments in the original movie where Sean's face showed longing or rejection, and we incorporated those tells into the performance.
Reportedly, the exterior cityscapes had a baseline in real aerial footage. What did you build over that real-world material to create the final product?
Whenever we could find something to shoot, we would, but every wide shot is a visual effect, either altering or adding to something we shot. There are lots of shots that are full-on CG shots with no photography whatsoever. But early on, we did aerial shoots in Mexico City, in Iceland, and in Spain. The solar farms in the beginning of the movie are based on aerial shoots of solar farms that we found in Spain, with lots of stuff added to them. Going into the city, Roger [Deakins, cinematographer] and Denis liked this picture of the favelas in Mexico City. But we wanted the light to be overcast for the whole film—no hard, bright sunny light.
We looked at when it was going to be overcast in Mexico City, with the highest chance of clouds there. While I was shooting in Budapest, I would use Google Earth and scout Mexico City, and find the places I wanted to fly over. Then, I would make little copies of those flyovers complete with GPS coordinates and give it to Dylan Goss, our aerial cameraman, who had worked with Roger before. When he went to Mexico City and scouted the places, he said, “They’re really good, and I found even more.” So he went and shot those.
When you look at those, they're overcast, but it's day. It looks nowhere near what it's supposed to look like in our film. We recreated those scenes and added a ton of atmosphere and CG to that. Then we extended the canyons of roads, pushing the city down, while at the same time building it up in multiple layers of brutalist buildings at different scales, with tons of atmosphere going off into the distance. That's how we changed Mexico City. It's got a photographic base, but it's massively changed.
We used some of Iceland in our wider scenes, going into the Trash Mesa and whatnot. But then again, those also have massive amounts of CG work on top of them. Often at that point in the movie, the big wides were just full CG cities. We worked really hard to give variance to the cities, because cities are made by millions of decisions over thousands of years.