Thursday, 27 April – 11:00-12:00

Room: Auditorium Lumière

Art-Directable Nature: Large Scale Environment Dressing for VFX

Speaker: Paolo Emilio Selva - Weta Digital

[Image: THE JUNGLE BOOK – (Pictured) Mowgli and King Louie. ©2015 Disney Enterprises, Inc. All Rights Reserved.]

Details and variations, interactions between species, and competition for resources: these are just some of the features that Mother Nature uses when populating real-world environments. The elements in nature’s plan work together with each other in a balanced system. Following the industry trend of adopting physically based methods for modeling natural phenomena, a natural extension would be to replicate nature’s rules, such as growth and resource competition, when modeling natural environments. As in other parts of the VFX pipeline, two further needs arise when considering a practical design: on the one hand, current computers still fall well short of a real-life system in compute power and storage; on the other, the creative structures involved in movie making, primarily in the space of art direction, demand rather specific control over such a natural growth process. This reflects the reality that movie making is fundamentally a storytelling activity, whose communication and narration needs must come ahead of absolute physical correctness.

In this talk, Paolo Selva will discuss several iterations of the processes and challenges that drove Weta Digital to push its technology to the limit, production after production. The journey starts with the studio’s work on The Jungle Book, which in a number of ways proved more challenging than anticipated, moves on to the rather larger environments visible in Pete’s Dragon, and ends with the recent challenge posed by the Giant Country sequence in The BFG. Paolo will show how Weta Digital’s environment-modeling technology stack has grown over the years, becoming a benchmark of quality and realism while maintaining the flexibility needed to support the fast-turnaround workflows that the studio’s clients have come to expect.
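
As a flavor of the growth-and-competition idea sketched above, here is a minimal, self-contained toy (in Python) of ecosystem-style placement; the rules and constants are illustrative assumptions of ours, not Weta Digital’s system: seedlings land at random, every plant grows each step, and when two plants’ radii overlap, the smaller one loses the contested resources and dies.

    import random

    # Toy ecosystem placement: growth plus resource competition.
    # All rules and constants are illustrative, not Weta Digital's system.
    random.seed(0)
    AREA, STEPS, SEEDS_PER_STEP, GROWTH = 100.0, 50, 20, 0.3

    plants = []  # each plant is [x, y, radius]
    for _ in range(STEPS):
        # New seedlings land at random positions.
        plants += [[random.uniform(0, AREA), random.uniform(0, AREA), 0.1]
                   for _ in range(SEEDS_PER_STEP)]
        for p in plants:
            p[2] += GROWTH  # every survivor keeps growing
        # Competition: where two plants' radii overlap, the smaller one dies.
        dead = set()
        for i, a in enumerate(plants):
            for j in range(i + 1, len(plants)):
                b = plants[j]
                if (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < (a[2] + b[2]) ** 2:
                    dead.add(j if a[2] >= b[2] else i)
        plants = [p for k, p in enumerate(plants) if k not in dead]

    print(len(plants), "plants survive; largest radii:",
          sorted(round(p[2], 1) for p in plants)[-5:])

Even this crude rule yields naturally spaced, size-varied distributions; the production problem is then steering such a process toward an art-directed result at film scale.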

Thursday, 27 April – 13:30-15:00

Room: Rhône 2

Modern Virtual Characters

Speaker: Luca Fascione - Weta Digital

Performance capture technology is approaching a phase of early maturity in which most systems available around the industry achieve sufficient fidelity, and is entering a second stage where work on quality improvements, while desirable, is weighed against various process and workflow considerations. The Academy of Motion Picture Arts and Sciences recently recognized four different systems for facial performance capture at its 2017 Scientific and Technical Awards ceremony, signaling that what used to be a new tool has now gained acceptance across the entire vertical of the movie-making community. Indeed, the various layers of the creative movie-making structure are now moving from a phase of early experimentation and exploration, often dominated by considerations of a technological nature, to a more concrete phase of developing the tool’s artistic aspects, focused on understanding the new possibilities in storytelling and character construction that these systems provide.

In this talk, Luca Fascione will review the evolution of performance capture techniques at Weta Digital, outlining the various technical challenges encountered over the years as well as their proposed solutions, illustrating them with examples from The Lord of the Rings through King Kong to Avatar, the Planet of the Apes movies, and the Hobbit trilogy. After an analysis of the present state of play, the talk will conclude by turning our attention to the possibilities and open challenges we see for the future in this space.

“Shining”: Rendering Rabbids at Ubisoft Motion Pictures

Speaker: Laurent Noel - Ubisoft Motion Pictures

In this talk, we present the rendering pipeline developed by Ubisoft Motion Pictures for the “Rabbids Invasion” TV series. Ubisoft is a major producer, publisher, and distributor of video games and the third-largest independent game publisher worldwide. Ubisoft Motion Pictures, created in 2011, is focused on bringing Ubisoft’s successful video game brands to cinema and television. We first introduce the “Rabbids Invasion” TV series and the general production process for animation. We also give some metrics about the data (polygon and texture counts) we have to deal with and how many images we have to render weekly with our in-house renderer, “Shining”. Shining is a physically based path tracer developed by our R&D team with a strong focus on the features needed by our production. We present Shining’s architecture and the rendering techniques we have implemented. In particular, we focus on the shading models we offer our artists and on how we use multiple importance sampling for lighting. We then explain the final compositing process and its consequences for Shining. Finally, we talk about future improvements and our collaboration with the LIGM and Télécom ParisTech research teams.
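
The multiple importance sampling mentioned above can be illustrated with a small self-contained example. The sketch below (a generic textbook construction, not Shining’s API) estimates a 1D integral standing in for direct lighting, combining two sampling strategies with the balance heuristic: a uniform strategy and a linear-ramp strategy.

    import numpy as np

    # Generic illustration of multiple importance sampling (MIS) with the
    # balance heuristic; the toy integrand and strategies are ours.
    rng = np.random.default_rng(0)

    def f(x):
        # Toy integrand: a narrow "light" peak plus a broad "BSDF"-like lobe.
        return np.exp(-((x - 0.8) ** 2) / 0.001) + 0.5 * x ** 2

    p_a = lambda x: np.ones_like(x)  # strategy A: uniform pdf on [0, 1]
    p_b = lambda x: 2.0 * x          # strategy B: linear-ramp pdf on [0, 1]

    N = 100_000
    xa = rng.random(N)               # samples from A
    xb = np.sqrt(rng.random(N))      # samples from B (inverse CDF of x^2)

    # Balance heuristic: weight each sample by its pdf over the sum of pdfs.
    wa = p_a(xa) / (p_a(xa) + p_b(xa))
    wb = p_b(xb) / (p_a(xb) + p_b(xb))

    estimate = np.mean(wa * f(xa) / p_a(xa)) + np.mean(wb * f(xb) / p_b(xb))
    print(estimate)  # close to 0.223, the true integral of f over [0, 1]

Neither strategy alone handles both the peak and the lobe well; the balance heuristic keeps the combined estimator unbiased while suppressing the variance spikes of each strategy’s weak region. In a renderer, the same weighting combines light-source sampling with BSDF sampling.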

Advanced Path Tracing in RenderMan

Speaker: Per Christensen - Pixar

[Image: Copyright ©2015 Pixar/Disney]

RenderMan is a modern, extensible, and programmable path tracer with many features essential to handling the fiercely complex scenes in movie production. RenderMan has traditionally focused on off-line rendering of high-quality final movie frames, but has recently been made suitable also for interactive rendering during scene layout and lighting. Path tracing has gone from being a pure research technique to being the main rendering technique in many production renderers. In this talk, Per Christensen will describe the theory and practice of path tracing in movie production, and will also describe advanced path tracing techniques such as bidirectional path tracing, progressive photon mapping, vertex connection and merging (VCM), and unified points, beams, and paths (UPBP).
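
The core path tracing estimator can be reduced to a few lines. The toy below (a generic textbook construction, not RenderMan code) follows a path whose throughput is attenuated by a constant albedo at each bounce, and uses Russian roulette to terminate paths without introducing bias: surviving paths are reweighted by the survival probability.

    import random

    # Toy model of path tracing with Russian roulette termination.
    # RHO plays the role of surface albedo; the expected total is 1/(1-RHO).
    RHO = 0.7  # fraction of light carried across each bounce
    Q = 0.8    # Russian-roulette survival probability

    def trace_path(rng):
        radiance, throughput = 0.0, 1.0
        while True:
            radiance += throughput  # each vertex contributes unit "emission"
            throughput *= RHO       # attenuate across the bounce
            if rng.random() >= Q:   # terminate the path with probability 1-Q
                return radiance
            throughput /= Q         # reweight survivors to stay unbiased

    rng = random.Random(0)
    n = 200_000
    print(sum(trace_path(rng) for _ in range(n)) / n)  # ~1/(1-0.7) = 3.33

The bidirectional and photon-based techniques the talk covers build on this same Monte Carlo machinery, adding better strategies for constructing and combining light transport paths.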

Thursday, 27 April – 15:30-17:00

Room: Rhône 2

Improving Real-Time Image Synthesis Quality in Computer-Aided Design

Speakers: Gilles Laurent and Victor Bachet - Dassault Systèmes

In Computer-Aided Design, the management of a product’s lifecycle often puts heavy constraints on geometric representation. Indeed, the same model may appear in a wide range of services, from manufacturing plan design to finite element deformation simulations, as well as marketing reviews. In addition, thanks to the continuous improvement of graphics hardware and rendering techniques, real-time synthesis of increasingly realistic images has become the standard in modern applications. We will first discuss the technical challenges these specific requirements introduce in the Dassault Systèmes rendering pipeline and how they are addressed in practice. Next, we will present Forward Light Cuts, a technique for synthesizing an approximation of global illumination in real time, and a representative example of what research work looks like in our industry.
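
For background, Forward Light Cuts belongs to the many-lights family of global illumination methods. The toy below illustrates the general clustering idea those methods share, not the Forward Light Cuts algorithm itself: thousands of virtual point lights are replaced by a few cluster representatives carrying their clusters’ summed intensity.

    import numpy as np

    # Generic many-lights clustering toy; not the Forward Light Cuts algorithm.
    rng = np.random.default_rng(1)
    lights = rng.uniform(-10, 10, size=(5000, 3))  # virtual point light positions
    intensity = rng.uniform(0.5, 1.5, size=5000)   # per-light intensities
    x = np.array([0.0, 0.0, -15.0])                # shading point

    def unoccluded_radiance(positions, intensities):
        d2 = np.sum((positions - x) ** 2, axis=1)  # inverse-square falloff
        return np.sum(intensities / d2)

    exact = unoccluded_radiance(lights, intensity)

    # Crude clustering: bucket lights on a coarse 5x5x5 grid and keep one
    # representative per occupied cell, carrying the cell's total intensity.
    cell = np.floor((lights + 10) / 4).astype(int)
    keys = cell[:, 0] * 25 + cell[:, 1] * 5 + cell[:, 2]
    reps, rep_int = [], []
    for k in np.unique(keys):
        members = keys == k
        w = intensity[members]
        reps.append(np.average(lights[members], axis=0, weights=w))
        rep_int.append(w.sum())

    approx = unoccluded_radiance(np.array(reps), np.array(rep_int))
    print(exact, approx)  # the ~125-light estimate tracks the 5000-light sum

Real methods choose clusters adaptively per shading point with error bounds, and a real-time variant such as Forward Light Cuts must additionally fit within a real-time rendering budget.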

Tilt Brush: Prototyping as a Paradigm

Speaker: Joyce Lee - Google

Tilt Brush is a room-scale virtual reality application that allows people to paint in three-dimensional space. It is an experience built for everyone, from professional artists to dreamers to casual doodlers. Our goal is to build a creative tool that anyone can pick up and use, while also having the potential to create incredible works of art. As the application matures, new features are subject to a rigorous prototyping gauntlet in which a majority of ideas do not make the cut, and those that do undergo heavy iteration.

The Tilt Brush team aims to get new features into the hands of our users as quickly as possible. However, because there is not yet a universal VR UX language, we have to experiment extensively to create natural, intuitive interactions, and rapid prototyping is an indispensable part of our development flow. VR is changing the way people think about human-computer interaction; many 2D concepts translate poorly to 3D space, and ideas that sound good in discussion often don’t hold up when implemented, so our prototypes succeed to widely varying degrees. Though the vast majority of them never leave the office, they inform the decisions we make for the features that do make it out. We have a saying that for every feature we ship, we throw away three prototypes.

Case studies of successful features will be given, including one shipping in an update in early April: the ability to manipulate the lighting of the scene. Designing UX for virtual reality is still at a nascent stage, with much opportunity to set new paradigms. The decisions made today may lay the foundations of design for decades to come; it is therefore all the more important to explore as many avenues as possible before selecting the best one.

Capturing, Editing and Rendering: Multi-View Intrinsic Decomposition with Image-Based Rendering

Speaker: Sylvain Duchêne - Bcom

With the emergence of Virtual Reality and Augmented Reality, interest is growing in capturing 3D content for a wide variety of applications, including games and videos as well as, less visibly, the manufacturing and building industries. The talk will present the difficulties encountered in capturing and manipulating 3D content with a lightweight capture setup, through a simple scenario: how can one manipulate an outdoor scene captured with a few photographs and navigate through it? Editing such captured scenes is limited by the lighting conditions at capture time, and navigation is limited by the quality of the 3D reconstruction produced by multi-view reconstruction algorithms. The presented method allows image-based rendering under changing illumination conditions and reduces the cost of creating 3D content for applications. Our method takes an automatic 3D reconstruction from these photographs and the sun direction as input to decompose each image into reflectance and shading layers. Since inaccuracies are inherent to any 3D reconstruction method, making this decomposition non-trivial, our approach exploits approximations that can still be transferred between the 3D model and the 2D images. The key to this approach is to iteratively refine the unknowns despite the inaccuracies and missing data of the 3D model by exploiting image-based properties. Our multi-view intrinsic decompositions are of sufficient quality to allow relighting of the input images, while re-rendering of the scene is achieved by compensating for the inaccuracies of the 3D model with an image-based rendering method.
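
The reflectance/shading factorization at the heart of this approach can be made concrete with a toy example. The sketch below assumes an idealized Lambertian model, I = R × S: shading S is predicted from per-pixel normals and the sun direction, reflectance R follows by division, and a new sun direction relights the scene. It is a noise-free illustration of the factorization only, not the speaker’s pipeline, which must cope with inaccurate geometry.

    import numpy as np

    # Idealized intrinsic decomposition and relighting toy: I = R * S.
    rng = np.random.default_rng(0)
    h, w = 4, 4
    normals = rng.normal(size=(h, w, 3))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)  # unit normals

    sun = np.array([0.3, 0.8, 0.5]); sun /= np.linalg.norm(sun)
    reflectance_true = rng.uniform(0.2, 0.9, size=(h, w))      # unknown albedo

    def lambert_shading(n, light):
        return np.clip(n @ light, 1e-3, None)  # clamped cosine shading

    image = reflectance_true * lambert_shading(normals, sun)   # "captured" image

    # Decompose: predict shading from geometry + sun, divide out reflectance.
    S = lambert_shading(normals, sun)
    R = image / S

    # Relight under a new sun direction.
    new_sun = np.array([-0.6, 0.7, 0.2]); new_sun /= np.linalg.norm(new_sun)
    relit = R * lambert_shading(normals, new_sun)
    print(np.allclose(R, reflectance_true))  # True in this noise-free toy

The difficulty the talk addresses is exactly what this toy omits: with reconstructed geometry, the predicted shading S is wrong in places, so the division pollutes R, and the decomposition must be refined iteratively using image-based cues.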


Friday, 28 April – 9:00-11:00

Room: Rhône 2

Overview of Graphics Research at Adobe

Speaker: Sylvain Paris – Adobe

In this talk, I will first present my work on photographic style transfer and show how it motivated a series of follow-up projects, such as a better edge-aware image filter, a programming language dedicated to image processing, and an approach to cloud computing specialized for photo editing.
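
For readers unfamiliar with edge-aware filtering, the sketch below is a standard bilateral filter on a 1D signal, the textbook example of the filter family referenced above; it is a generic illustration, not Adobe’s specific algorithm.

    import numpy as np

    # Textbook bilateral filter on a 1D signal: weights combine spatial
    # proximity and value similarity, so smoothing stops at strong edges.
    def bilateral_1d(signal, radius=5, sigma_s=2.0, sigma_r=0.1):
        out = np.empty_like(signal)
        for i in range(len(signal)):
            lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
            window = signal[lo:hi]
            spatial = np.exp(-((np.arange(lo, hi) - i) ** 2) / (2 * sigma_s ** 2))
            similar = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * similar  # near-zero across strong edges
            out[i] = np.sum(weights * window) / np.sum(weights)
        return out

    # A noisy step: the filter removes the noise but preserves the edge.
    rng = np.random.default_rng(0)
    x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.normal(size=100)
    print(bilateral_1d(x)[45:55].round(2))  # values snap to ~0.0 then ~1.0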

Allegorithmic and the Substance Texturing Suite

Speaker: Sébastien Deguy – Allegorithmic

Allegorithmic is the company behind the “Substance” texturing tools and content suite, used by a vast majority of game developers and now starting its foray into Animation/VFX as well as Architecture and Industrial Design. Sébastien, founder and CEO, will present the company, its evolution since inception, and its challenges in the years to come.

Technicolor: Research & Innovation for Storytellers

Speakers: Quentin Avril, Philippe Guillotel – Technicolor

Ghost Recon Wildlands: Procedural World Building

Speaker: Benoit Martinez – Ubisoft

‘Tom Clancy’s Ghost Recon Wildlands’ is an open-world shooter developed by Ubisoft Paris. It is the biggest action-adventure open-world game published by Ubisoft to date. An innovative, dedicated toolchain was designed to shape the world and produce a wide variety of environments, from salt desert to deep jungle. This lecture will describe the techniques and technology behind this work and how we gave the artists the right tools to control both large-scale landscapes and small details. We’ll explain the procedural approach we adopted to create terrain, roads, forests, rivers, and settlements, and how all those tools are connected to each other. To name a few examples, we used various erosion algorithms to shape the initial terrain, then developed GPU tools to sculpt in real time and to procedurally define layers of textures. We relied on anisotropic shortest-path algorithms to generate curvy roads that automatically adapt to the relief of the terrain and to control waypoints. We used mass instantiation of rocks and vegetation, and we managed to slice the world content and update it daily on a render farm.
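
The slope-aware road routing mentioned above can be illustrated with a small example. The sketch below (our own toy, not Ubisoft’s toolchain) runs Dijkstra’s algorithm over a heightfield grid where an edge costs more the more elevation it crosses, so the cheapest road naturally winds around hills rather than over them.

    import heapq

    # Toy slope-penalized shortest-path road routing over a heightfield.
    H = [  # elevations; a high ridge separates the left and right sides
        [0, 1, 5, 5, 0],
        [0, 1, 5, 5, 0],
        [0, 1, 5, 5, 0],
        [0, 0, 0, 0, 0],
    ]
    ROWS, COLS = len(H), len(H[0])
    SLOPE_PENALTY = 10.0  # how strongly cost punishes elevation change

    def road(start, goal):
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                break
            if d > dist[(r, c)]:
                continue  # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < ROWS and 0 <= nc < COLS:
                    nd = d + 1.0 + SLOPE_PENALTY * abs(H[nr][nc] - H[r][c])
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
        path, node = [goal], goal
        while node != start:  # walk predecessors back to the start
            node = prev[node]
            path.append(node)
        return path[::-1]

    print(road((0, 0), (0, 4)))  # detours along the flat bottom row

A production version works on far denser grids, uses anisotropic costs (climbing differs from traversing), and respects artist-placed control waypoints, but the principle is the same.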


Industry Session Chairs

Cyril Crassin, NVIDIA, France

Fabio Pellacini, Sapienza University of Rome, Italy