Séminaire ORIGAMI de George Drettakis, "Bringing Together Learning and Graphics: Rendering, 3D Representations and Synthetic Training Data"
On 15/12/2021, from 10:00 to 11:00. Salle Fontannes, Darwin
Deep learning methods such as image-to-image translation and, more recently, multi-layer perceptrons coupled with volume rendering have been used to develop Neural Rendering solutions that synthesize or render new images with stunning visual quality. The former methods are often "end-to-end", operating entirely on 2D photos, and both are often trained exclusively on 2D image data. In this talk we discuss an alternative approach that exploits the huge body of knowledge developed in traditional physically- and image-based Computer Graphics rendering, and uses it hand-in-hand with such learning methods to effectively solve both inverse and forward problems in graphics. In particular, we discuss the importance of three key elements in these solutions: explicit 3D data extracted from multi-view stereo, the use of rendering itself, and rendered synthetic data for effective and efficient supervised learning. We illustrate these ideas on three recent projects: point-based neural rendering using learned features, free-viewpoint rendering of faces using generative adversarial networks, and neural relighting and rendering of indoor scenes, and discuss some issues related to neural representations for rendering.
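To make the "multi-layer perceptrons coupled with volume rendering" formulation mentioned above concrete, here is a minimal sketch of the standard volumetric compositing step such methods use to turn per-sample densities and colors (as predicted by an MLP) into a pixel color. This is a generic NeRF-style quadrature, not code from the projects discussed in the talk; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style quadrature).

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB values predicted at each sample
    deltas: (N,) distances between consecutive samples
    """
    # Opacity of each ray segment from its density and length.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving up to each sample.
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])  # shift so T_i excludes sample i
    # Each sample contributes in proportion to (still visible) x (opaque here).
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

For example, a single fully opaque red sample returns pure red, while zero density everywhere returns black; training such a representation consists of regressing `sigmas` and `colors` with an MLP so that composited rays match the input photographs.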
Links to papers:
Point-Based Neural Rendering with Per-View Optimization
http://www-sop.inria.fr/reves/Basilic/2021/KPLD21/ and https://repo-sam.inria.fr/fungraph/differentiable-multi-view/
FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold http://www-sop.inria.fr/reves/Basilic/2021/LD21/ and https://repo-sam.inria.fr/fungraph/freestylegan/
Free-viewpoint Indoor Neural Relighting from Multi-view Stereo
http://www-sop.inria.fr/reves/Basilic/2021/PMGD21/ and https://repo-sam.inria.fr/fungraph/deep-indoor-relight/
George Drettakis graduated in Computer Science from the University of Crete, Greece, and obtained an M.Sc. and a Ph.D. (1994) at the University of Toronto, with E. Fiume. He was an ERCIM postdoctoral fellow in Grenoble, Barcelona and Bonn (1994-95). He obtained an Inria researcher position in Grenoble in 1995, and his "Habilitation" at the University of Grenoble in 1999. In 2000 he founded the REVES research group at Inria Sophia-Antipolis; he now heads the follow-up group GRAPHDECO and is an Inria Senior Researcher (full professor equivalent). He received the Eurographics (EG) Outstanding Technical Contributions award in 2007, is an EG fellow, and received the prestigious ERC Advanced Grant in 2019. He was an associate editor of ACM Transactions on Graphics, technical papers chair of SIGGRAPH Asia 2010, co-chair of Eurographics 2002 & 2008, and associate editor and co-editor-in-chief of IEEE Transactions on Visualization and Computer Graphics. He has worked on many different topics in computer graphics, with an emphasis on rendering. He initially concentrated on lighting and shadow computation, and subsequently worked on 3D audio, perceptually-driven algorithms, virtual reality and 3D interaction. He has worked on textures, weathering and perception for graphics, and in recent years on image-based and neural rendering/relighting as well as deep material acquisition.