Justus Thies

Postdoctoral Researcher at the Visual Computing Group at TU Munich

Hi! I'm a Postdoctoral Researcher in the Visual Computing Group at the Technical University of Munich (TUM). I received my PhD from the University of Erlangen-Nuremberg in 2017 for my research on marker-less motion capture of facial performances and its applications. More recently, I have been focusing on neural image synthesis techniques that allow for video editing and creation. Thus, my work combines methods from the fields of computer vision, machine learning, and computer graphics.


Neural Voice Puppetry

Given an audio sequence from a source person or a digital assistant, we generate a photo-realistic output video of a target person that is in sync with the source audio.

1 minute read     [Paper]  [Video]  [Bibtex] 

Deferred Neural Rendering - Image Synthesis using Neural Textures

Deferred Neural Rendering is a new paradigm for image synthesis that combines the traditional graphics pipeline with learnable neural textures. Both the neural textures and the deferred neural renderer are trained end-to-end, enabling us to synthesize photo-realistic images even when the original 3D content is imperfect.

2 minute read     [Paper]  [Video]  [Bibtex] 
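The core idea above can be sketched in a few lines: rasterize the (possibly imperfect) proxy geometry to per-pixel UV coordinates, sample a learnable feature texture at those coordinates, and let a learned renderer map the sampled features to RGB. The NumPy sketch below is purely illustrative — the texture values, weights, shapes, and the per-pixel two-layer mapping are stand-ins (the actual method trains these and uses a convolutional renderer), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16x16 neural texture with 8 feature channels,
# rendered to a 4x4 image. In the real method these values are learned;
# here they are random stand-ins.
TEX_H, TEX_W, FEAT = 16, 16, 8
IMG_H, IMG_W = 4, 4

neural_texture = rng.standard_normal((TEX_H, TEX_W, FEAT))

# Per-pixel UV coordinates in [0, 1), as produced by rasterizing the
# proxy 3D geometry with the classical graphics pipeline.
uv = rng.random((IMG_H, IMG_W, 2))

def sample_texture(tex, uv):
    """Bilinearly sample a feature texture at continuous UV coordinates."""
    h, w, _ = tex.shape
    x = uv[..., 0] * (w - 1)
    y = uv[..., 1] * (h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = (x - x0)[..., None], (y - y0)[..., None]
    return ((1 - wx) * (1 - wy) * tex[y0, x0]
            + wx * (1 - wy) * tex[y0, x1]
            + (1 - wx) * wy * tex[y1, x0]
            + wx * wy * tex[y1, x1])

# Stand-in "deferred neural renderer": a per-pixel two-layer mapping
# from sampled features to RGB (illustrative only; the paper uses a
# convolutional network over the whole feature image).
W1 = rng.standard_normal((FEAT, 16))
W2 = rng.standard_normal((16, 3))

features = sample_texture(neural_texture, uv)      # (IMG_H, IMG_W, FEAT)
rgb = np.maximum(features @ W1, 0.0) @ W2          # (IMG_H, IMG_W, 3)
```

Because every step (texture lookup and renderer) is differentiable, gradients from an image loss can flow back into both the renderer weights and the texture itself, which is what "trained end-to-end" means here.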

Research Highlight: Face2Face

Research highlight of the Face2Face approach, featured on the cover of Communications of the ACM in January 2019. Face2Face is an approach for real-time facial reenactment of a monocular target video. The method had significant impact in the research community and far beyond: it won several awards (e.g., the SIGGRAPH Emerging Technologies Best in Show Award), was featured in countless media articles (e.g., NYT, WSJ, Spiegel), and had a massive reach on social media with millions of views.

1 minute read     [Paper]  [Video]  [Bibtex]