"Avengers: Infinity War":
THANOS Artist Interview
KEY CONCEPT: The Age of Pixelated Humans is Here and Now.
In press coverage on the making of the movie, much attention has been paid to new software that allowed for an unprecedented level of efficiency and accuracy in transferring the details of Brolin’s facial performance to Thanos. For years the industry has pushed to make this pipeline – actor to character – more and more automated, and the version employed in Infinity War sounded more efficient than ever. So much for the artists, it seemed.
Yet one critical and subtle detail stands out as central to the success of Thanos – the very detail that proved the greatest failing of previous attempts to create a convincing CG lead character: the eyes.
It’s always been obvious to me as a facial expression specialist that motion capture technology was simply not up to the challenge of successfully recording the subtle movements of the eyes.
In the case of Thanos, the Digital Domain team employed several first-time software tools to automate the bulk of the human-to-model transfer. One piece of software – Masquerade – used machine learning to match very high-resolution Brolin poses stored in a digital library with the much lower-resolution on-set footage filmed by a MoCap helmet camera while he was performing. Another piece of software – Direct Drive – then mapped the resulting high-resolution Brolin mask onto DD’s Thanos rig, giving Phil’s animation team a huge head start on their work. Given the sheer amount of machine-processed imagery now flowing into the animation systems, Phil estimated that a shot – several seconds of screen time – that might previously have taken three to four weeks to animate could now be done in three to four days.
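To make the idea concrete: while Masquerade’s internals are proprietary, the workflow Phil describes (sparse helmet-camera data used to look up and blend detail from a library of high-resolution scans) can be sketched in a few lines of toy Python. Everything below is illustrative; the names, the sizes, and the simple nearest-neighbor blend are my assumptions, standing in for the machine-learned mapping the real system uses.

import numpy as np

rng = np.random.default_rng(42)

N_MARKERS = 150        # sparse facial dots tracked by the helmet camera
N_HIRES_VERTS = 4_000  # vertices per library-quality scan (tiny for the demo)
LIBRARY_SIZE = 200     # number of high-res poses in the actor's library

# Hypothetical library: each entry pairs a sparse marker layout with the
# full high-res mesh captured in the same pose.
library_markers = rng.normal(size=(LIBRARY_SIZE, N_MARKERS, 3))
library_meshes = rng.normal(size=(LIBRARY_SIZE, N_HIRES_VERTS, 3))

def upres_frame(frame_markers, k=4):
    """Reconstruct a high-res mesh for one on-set frame.

    Finds the k nearest library poses in marker space and blends their
    high-res meshes with inverse-distance weights, a simple stand-in for
    the learned mapping a production system would use.
    """
    diffs = library_markers - frame_markers              # (L, M, 3)
    dists = np.linalg.norm(diffs.reshape(LIBRARY_SIZE, -1), axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)
    weights /= weights.sum()
    # Weighted blend of the k closest high-res scans.
    return np.einsum("k,kvc->vc", weights, library_meshes[nearest])

# One captured frame from the (hypothetical) helmet camera.
frame = rng.normal(size=(N_MARKERS, 3))
mesh = upres_frame(frame)
print(mesh.shape)  # (4000, 3): full-resolution geometry for this frame

The point of the sketch is the shape of the pipeline, not the math: low-resolution capture goes in, library-quality geometry comes out, frame after frame, which is what freed the animators to spend their time elsewhere.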
Instead, the eyes of Thanos in Infinity War were all hand-created – based, of course, on close-ups of Brolin in performance, but adjusted according to a huge number of other considerations, with the DD artists driving the process. Phil explained the details his animators focused on in their eye work: the refraction and reflection of light on the eye, which depend on the conditions of the shot and the direction of the gaze; the local color of the eye whites; the blink rate; the way the eye moves as it tracks its surroundings (real eyes are more jittery than steady); and the asymmetrical way the left and right eyes are shaped by their lids and brow (a constant feature of Thanos’ poses).
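One of those observations – the jitteriness of real eyes – lends itself to a small procedural illustration. The sketch below is not Digital Domain’s method; it is a generic way an animator might layer fixation drift and occasional micro-saccades on top of a keyframed gaze, with every parameter invented for the example.

import numpy as np

rng = np.random.default_rng(7)

FPS = 24
FRAMES = FPS * 3               # three seconds of animation

gaze = np.zeros((FRAMES, 2))   # (yaw, pitch) offsets in degrees
target = np.zeros(2)

for f in range(1, FRAMES):
    # Slow fixation drift: a tiny random wander every frame.
    drift = rng.normal(scale=0.02, size=2)

    # Occasionally pick a new nearby target (a micro-saccade);
    # at 24 fps this rate gives roughly one saccade per second.
    if rng.random() < 0.05:
        target = rng.uniform(-1.5, 1.5, size=2)

    # Saccades are fast: cover a large fraction of the gap each frame.
    gaze[f] = gaze[f - 1] + 0.6 * (target - gaze[f - 1]) + drift

print(gaze[:6].round(3))  # per-frame gaze angles to layer over keyframes

Smoothly interpolated gaze is exactly what reads as robotic; the darting, slightly noisy version is what reads as alive.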
From a perceptual point of view, this matches exactly the expectations that audiences unconsciously bring to their experience of a human face, where the eyes signal the degree of human presence. If any of these elements – and there are lots! – aren’t right, the Uncanny Valley rears its disconcerting head, and the audience’s perception of Thanos shifts from human to robotic.
Ultimately, Thanos works as well as he does because of the hyper-realism of his eyes more than any other single detail. Phil tells the story of the test reel that originally helped reassure the filmmakers about the process. The animators were particularly aware of two things: the directors saw Thanos as a dialed-back villain (think anti-Joker), scarier for his restraint, and they knew that getting his eyes “right” would be critical to their success. “I got the most positive feedback I’ve ever received in my career,” Phil told me, “in response to that short.” Ironically, the content of the reel is simply Brolin/Thanos sitting in the dark on a throne, talking quietly, delivering random dialogue that has nothing to do with the movie. No fist-pumps, no explosions, but great eyes.
The success of this superhero movie – one of the most lucrative of all time – is almost certain to be a game-changer for what we can expect to see coming down the road in VFX. For clients willing to spend the money – and photorealistic CG characters come at a cost – the possibilities seem endless. One can imagine historical figures, like Lincoln and Napoleon, coming to life; younger and older versions of living actors populating the same movie; Pinocchio turning into a “real” little boy; actors interacting with all manner of environments without the need for stunt doubles.
Phil and I ended our conversation by discussing the current state of the art in the robotics industry, where efforts to create empathetic faces for mechanical creations remind Phil of the animation industry in the early days of the CG revolution. Even though there are promising signs, today’s robots have a long way to go to reach the level of realism now possible in CG. But the first steps are being taken!
Unlike robot designers, filmmakers now have the electronic tools to make convincing humans out of thin air. Fortunately, actors and artists are still 100% required. Tools are one thing; the vision and heart to exploit these resources in ground-breaking ways will be revealed in the years to come.