Article by Lisa Rye, Stirfire’s Creative Director
Dark, chaotic and hilarious, The Dark Room hit Steam Early Access in 2018 with glowing reviews. Here’s how we translated John Robertson’s manic energy and expressiveness into a fully animated video game character.
The game’s success hinged on translating John’s on-stage persona into an in-game character, matching his energy and expressiveness as closely as possible. We looked for a solution that would let a small team animate hours of dialogue. Facial animation is a time-consuming process – just 10 seconds of lip-synced animation can take many hours by hand.
Clearly, we were in need of a motion capture solution.
Fortunately, facial motion capture technology is in a state of rapid improvement, with software designed for feature films and triple-A video games becoming more accessible to smaller studios. After investigating a number of options, we chose Faceware for the quality of animation it produced, its potential for batch processing, and its pipeline into Autodesk Maya.
To match The Dark Room’s retro-games aesthetic, the prototype John Robertson rig was built around an old-school low-poly look.
While this matched the brief, the model wasn’t as expressive as we would have liked. It did let us test the pipeline and confirm that Faceware worked as intended, but to avoid the uncanny valley and capture the full range of John’s performance, a redesign was needed.
The final John Robertson model
With this redesign, we focused on flexibility and expressiveness, edging more towards a caricature of John.
The model was rebuilt in a much higher-poly form that could support crazy expressions and detailed mouth shapes, while still referencing the aesthetic of ’90s video games.
With hundreds of takes to cover, recording for The Dark Room took place over three full-on days.
To capture detailed facial movements during John’s performance, we used the Faceware head-mounted camera while simultaneously recording audio. A secondary camera was used to record John’s movements for animation reference.
Stirfire created a custom software solution to manage the narrative content and shot list, with a crew member recording a unique code for each take.
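Stirfire’s actual tool isn’t described in detail, but the core idea – generating a unique, sortable code per take and logging it against the shot list – can be sketched in a few lines of Python. All names here (`take_code`, `ShotList`) are hypothetical illustrations, not Stirfire’s real software:

```python
import csv
import io


def take_code(scene: str, line_id: int, take: int) -> str:
    """Build a unique, sortable code for one recorded take,
    e.g. scene 'intro', line 12, take 3 -> 'INTRO-0012-T03'."""
    return f"{scene.upper()}-{line_id:04d}-T{take:02d}"


class ShotList:
    """Minimal in-memory shot list: one row per recorded take."""

    def __init__(self):
        self.rows = []

    def log_take(self, scene: str, line_id: int, take: int, note: str = "") -> str:
        """Record a take and return its unique code."""
        code = take_code(scene, line_id, take)
        self.rows.append({"code": code, "scene": scene,
                          "line_id": line_id, "take": take, "note": note})
        return code

    def to_csv(self) -> str:
        """Export the shot list so editors can match codes to footage."""
        buf = io.StringIO()
        writer = csv.DictWriter(
            buf, fieldnames=["code", "scene", "line_id", "take", "note"])
        writer.writeheader()
        writer.writerows(self.rows)
        return buf.getvalue()
```

A zero-padded, sortable code like this makes it trivial to match a headcam clip, its audio, and its reference video back to the same line of dialogue months later in the edit.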
Motion capture to animation
A few steps were necessary to translate the headcam footage from video into 3D animation. First, the footage needed to be put through the Faceware Analyzer process – essentially allowing the software to track the key points of John’s face throughout the recordings.
Although in many situations this process can be automated, John’s quite extreme expressions meant a large amount of additional hand tooling was required to output clean motion data.
With the motion capture data from Analyzer ready to use, we began translating it onto the rig in Autodesk Maya. Retargeting the animation requires careful pose matching on any frames involving extremes (e.g., a wide-open mouth or a giant grin). Retargeter uses these extreme frames as a data set to fill in the remaining frames. The animator can adjust and add extra frames wherever the animation is not matching correctly, or add extra stylistic expressiveness where wanted.
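The fill-in step can be illustrated with a simple sketch: given a handful of hand-matched key poses (each a dict of rig-control values), blend linearly between the nearest keys to produce every in-between frame. This is a generic interpolation example under assumed names, not Faceware Retargeter’s actual solver:

```python
def interpolate_poses(keyed, total_frames):
    """Fill every frame by linearly blending between the nearest
    hand-matched key poses.

    keyed: dict mapping frame number -> pose (dict of control -> value).
           Keys must include frame 0 and the last frame.
    Returns a dict mapping every frame 0..total_frames-1 to a pose.
    """
    key_frames = sorted(keyed)
    out = {}
    for f in range(total_frames):
        # Nearest keyed frames at or before / at or after this frame.
        lo = max(k for k in key_frames if k <= f)
        hi = min(k for k in key_frames if k >= f)
        if lo == hi:
            # This frame is itself a key pose: copy it verbatim.
            out[f] = dict(keyed[lo])
            continue
        # Linear blend weight between the two surrounding keys.
        t = (f - lo) / (hi - lo)
        out[f] = {ctrl: keyed[lo][ctrl] * (1 - t) + keyed[hi][ctrl] * t
                  for ctrl in keyed[lo]}
    return out
```

In practice an animator would key far more than the extremes shown here, and a production solver blends more cleverly than a straight line – but the principle is the same: match the extremes carefully, and let the software do the in-betweens.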
Retargeting an animation for the first time can be a time-consuming process, but as work progresses, the animator builds up a shared pose database that can be reused for additional animations sharing the same rig and motion capture performer. It is this combination of automation and hand tooling that produces the high-quality facial animations seen in The Dark Room.