Tuesday, 25 April

9:00-10:30

SP1: Virtual Environments

13:30-15:00

SP2: Lighting and Rendering

15:30-17:00

SP3: Animation and Visualization

Wednesday, 26 April

13:30-15:00

SP4: Geometric Modeling

15:30-17:00

SP5: Images and Appearance



Tuesday, 25 April


SP1: Virtual Environments

Session chair: Guillaume Gilet

Session details: Tuesday, 25 April, 9:00 – 10:30

Room: Rhône 1

Vanessa Lange, Christian Siegl, Matteo Colaianni, Marc Stamminger, Frank Bauer

Using multi-projection systems allows us to immerse users in an altered reality without the need to wear additional headgear. The immersion of such systems relies on the quality of the calibration, which in general degrades over time when the system is used outside a lab environment. This work introduces a novel balance term that allows us to hide high-frequency brightness seams caused by self-shadowing of the projected geometry and by the borders of the projection frustum. We further use this more robust blending between projectors to compensate for occluding spectators who enter the projection volume, by filling the resulting shadows with light from other projectors.
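
The following sketch illustrates one way such a blending could be computed; it is not the authors' implementation, and the falloff model, data layout, and function names are assumptions. Each projector's per-point weight ramps smoothly to zero near its seams, occluded projectors are zeroed out, and the weights are renormalized so the remaining projectors fill in shadows:

```python
# Hypothetical sketch of seam-aware projector blending (not the paper's code).
import numpy as np

def blend_weights(seam_distance, visible, falloff=0.1):
    """seam_distance: (n_projectors, n_points) distance of each surface point
    to the nearest seam (frustum border or shadow edge) in each projector.
    visible: boolean mask, False where a spectator occludes the projector."""
    w = np.clip(seam_distance / falloff, 0.0, 1.0) ** 2  # smooth ramp to zero at seams
    w *= visible                                          # occluded projectors contribute nothing
    total = w.sum(axis=0)
    total[total == 0.0] = 1.0                             # avoid division by zero in full shadow
    return w / total                                      # weights sum to 1 per point

# Example: two projectors, three surface points; projector 0 is blocked at point 2.
d = np.array([[0.5, 0.05, 0.5], [0.5, 0.5, 0.5]])
vis = np.array([[True, True, False], [True, True, True]])
print(blend_weights(d, vis))
```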

JungHyun Byun, SeungHo Chae, YoonSik Yang, TackDon Han

Projection-based augmented reality (AR) has great potential, but is limited in that it requires burdensome installations and is prone to geometric distortion on the display surface. To overcome these limitations, we propose Anywhere Immersive Reality (AIR). This system can be carried and placed anywhere to project AR using pan/tilt motors, while providing the user with a distortion-free projection of a correct three-dimensional view.
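
As an illustration of the geometric-correction step for a planar surface, the standard homography-based pre-warp is sketched below; this is a textbook technique and not necessarily the exact AIR pipeline, and the correspondences here are made up:

```python
# Standard DLT homography estimation; pre-warping frames with its inverse
# makes the projection appear undistorted on a planar surface.
import numpy as np

def homography(src, dst):
    """Direct linear transform from 4+ point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)          # null-space vector = homography entries

corners = [(0, 0), (1, 0), (1, 1), (0, 1)]                   # desired image corners
observed = [(0.1, 0.0), (1.0, 0.2), (0.9, 1.1), (0.0, 0.9)]  # as seen by a camera
H = homography(corners, observed)
prewarp = np.linalg.inv(H)               # apply to frames before projecting
print(prewarp / prewarp[2, 2])
```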

Xiong Peikun, DongSheng Cai

With the development of computer technology, virtual reality can mimic a “real-like” world, and people who experience it are usually impressed by the feeling of immersion. We want to find out whether there is evidence of the mechanism behind this immersive feeling and of how the human body responds to it. The problem is how we define ourselves during the virtual reality experience. Self-consciousness is important for thinking about the role one plays in the virtual world: we need to consider whether the “body” that exists in the VR world feels like our own. The rubber hand illusion can convince us that a rubber hand, a fake hand outside our body, is our own. Research on autoscopic phenomena extends the rubber hand illusion to the so-called full body illusion. We conducted three types of experiments (shown in Figure 1) to find the common features of these illusory ownerships between the actual world and the virtual world, and we found the same evocation conditions for the rubber hand illusion and the full body illusion in VR space: the body must receive synchronized visual and somatosensory signals at the same time; the visual signal must be from the first-person perspective; and the subject and the virtual body need to be as close in height as possible. All these illusory ownerships were accompanied by a drop in body temperature where the body was stimulated.

SP2: Lighting and Rendering

Session chair: Samuel Hornus

Session details: Tuesday, 25 April, 13:30 – 15:00

Room: Rhône 1

Ugo Erra, Nicola Felice Capece, Roberto Agatiello

We present a feed-forward neural network approach for ambient occlusion baking in real-time rendering. The idea is based on implementing a multi-layer perceptron which allows a general encoding via regression and an efficient decoding via a simple GPU fragment shader. The non-linear nature of multi-layer perceptrons is suitable and effective for capturing the non-linearities described by ambient occlusion values. Also, a multi-layer perceptron is random-accessible, has a compact size, and can be evaluated efficiently on the GPU. We illustrate our approach, including its quality, size, and runtime speed.
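
A minimal sketch of the encoding idea, assuming position and normal as network inputs (the paper's exact inputs may differ) and using synthetic stand-in data:

```python
# Regress baked ambient-occlusion values with a small MLP; the learned weights
# are compact enough to evaluate per pixel in a fragment shader.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 6))   # surface position (xyz) + normal (xyz)
y = np.clip(0.5 + 0.5 * np.sin(3 * X[:, 0]) * X[:, 4], 0, 1)  # stand-in AO values

mlp = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh", max_iter=2000)
mlp.fit(X, y)

# The trained weight matrices (mlp.coefs_, mlp.intercepts_) can be uploaded as
# shader uniforms; decoding AO per pixel is then a few matrix-vector products
# and tanh evaluations in the fragment shader.
print("encoded size (floats):", sum(w.size for w in mlp.coefs_))
```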

Adam Marrs, Benjamin Watson, Christopher Healey

Existing graphics hardware parallelizes view generation poorly, placing many multi-view effects – such as soft shadows, defocus blur, and reflections – out of reach for real-time applications. We present emerging solutions that address this problem using a high density point set tailored per frame to the current multi-view configuration, coupled with relatively simple reconstruction kernels. Points are a more flexible rendering primitive, which we leverage to render many high resolution views in parallel. Preliminary results show our approach accelerates point generation and the rendering of multi-view soft shadows up to 9x.

Andreas-Alexandros Vasilakis, Konstantinos Vardis, Georgios Papaioannou, Konstantinos Moustakas

Successfully predicting visual attention can significantly improve many aspects of computer graphics. Despite thorough investigation in this area, selective rendering has so far not addressed fragment visibility determination. To this end, we present the first “selective multi-fragment rendering” solution, which alters the classic k-buffer construction procedure from a fixed-k to a variable-k per-pixel fragment allocation guided by an importance-driven model. Given a fixed memory budget, the idea is to allocate more fragment layers in parts of the image that need them most or contribute more significantly to the visual result. An importance map, dynamically estimated per frame based on several criteria, is used to distribute the fragment layers across the image. We illustrate the effectiveness and quality superiority of our approach in comparison to previous methods when performing order-independent transparency rendering in various high-depth-complexity scenarios.
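
A toy sketch of how such an importance-driven allocation might look (the importance criteria and the clamping policy are assumptions):

```python
# Distribute a fixed total budget of fragment layers across pixels in
# proportion to an importance map, guaranteeing at least one layer per pixel.
import numpy as np

def allocate_k(importance, budget, k_min=1, k_max=16):
    """importance: (H, W) non-negative per-pixel importance, e.g. estimated
    from transparency, depth complexity, and saliency. budget: total layers."""
    base = np.full(importance.shape, k_min, dtype=int)
    spare = budget - base.sum()
    if spare > 0:
        p = importance / max(importance.sum(), 1e-8)
        extra = np.floor(p * spare).astype(int)   # proportional share of the budget
        base = np.minimum(base + extra, k_max)
    return base

imp = np.random.rand(4, 4)
k = allocate_k(imp, budget=64)
print(k, k.sum())
```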

Peter Kán, Maxim Davletaliyev, Hannes Kaufmann

This paper presents a novel method for the discovery of new analytical filters suitable for filtering noise in Monte Carlo rendering. Our method utilizes genetic programming to evolve a set of analytical filtering expressions with the goal of minimizing image error on training scenes. We show that genetic programming is capable of learning new filtering expressions with quality comparable to state-of-the-art noise filters in Monte Carlo rendering. Additionally, the analytical nature of the resulting expressions enables run times one order of magnitude faster than those of the compared state-of-the-art methods. Finally, we present a new analytical filter discovered by our method which is suitable for filtering Monte Carlo noise in diffuse scenes.
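
A heavily simplified illustration of the search: instead of full genetic programming with crossover and mutation, the sketch below just draws random expression trees over assumed features (color difference d, distance r) and keeps the best-scoring one:

```python
# Evolve small analytical filter-weight expressions against reference data.
# The feature set and operator pool are assumptions, not the paper's.
import math, random

OPS = [("add", lambda a, b: a + b), ("mul", lambda a, b: a * b),
       ("exp", lambda a, b: math.exp(-abs(a))), ("div", lambda a, b: a / (abs(b) + 1e-3))]

def random_expr(depth=0):
    if depth > 2 or random.random() < 0.3:
        return random.choice(["d", "r", random.uniform(0.0, 2.0)])
    name, _ = random.choice(OPS)
    return (name, random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, d, r):
    if expr == "d": return d
    if expr == "r": return r
    if isinstance(expr, float): return expr
    name, a, b = expr
    return dict(OPS)[name](evaluate(a, d, r), evaluate(b, d, r))

def fitness(expr, samples):
    # samples: (d, r, target_weight) triples, e.g. derived from training scenes
    return sum((evaluate(expr, d, r) - t) ** 2 for d, r, t in samples)

# Target to rediscover: a Gaussian-like falloff exp(-(d + r)).
samples = [(d / 5, r / 5, math.exp(-(d / 5 + r / 5)))
           for d in range(5) for r in range(5)]
best = random_expr()
best_err = fitness(best, samples)
for _ in range(5000):
    cand = random_expr()        # fresh random draws stand in for mutation/crossover
    err = fitness(cand, samples)
    if err < best_err:
        best, best_err = cand, err
print(best, best_err)
```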

SP3: Animation and Visualization

Session chair: Beatriz Sousa Santos

Session details: Tuesday, 25 April, 15:30 – 17:00

Room: Rhône 1

Steffen Frey, Thomas Ertl

We present a GPU-targeted algorithm for the efficient direct computation of distances and interpolates between high-resolution density distributions without requiring any kind of intermediate representation such as features. It is based on a previously published multi-core approach, and substantially improves on its performance even on the same CPU hardware thanks to algorithmic improvements. As we explicitly target a manycore-friendly algorithm design, we achieve further significant speedups by running on a GPU. This paper briefly reviews the previous approach, and explicitly discusses the analysis of algorithmic characteristics as well as the hardware architectural considerations on which our redesign was based. We demonstrate the performance and results of our technique by means of several transitions between volume data sets.

George Madges, Idris Miles, Eike Anderson

The process of animating a complex 3D character can be a time-consuming activity which may take several iterations and several artists working in collaboration, each iteration improving some elements of the animation but potentially introducing artifacts in others. At present there exists no formal process to collate these various revisions in a manner that allows for close examination of their differences, which would help speed up the creation of 3D animations. To address this we present a method for equivalence checking and for displaying differences between different versions of an animated 3D model. Implemented in a tool that allows selective blending of animations, this provides a first step towards a 3D animation revision control system.
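
One plausible shape for such an equivalence check (the data layout and names here are assumed, not taken from the tool):

```python
# Compare two revisions of an animation joint by joint and flag frames whose
# transforms deviate beyond a tolerance -- what a diff view would highlight.
import numpy as np

def animation_diff(rev_a, rev_b, tol=1e-4):
    """rev_a, rev_b: dicts mapping joint name -> (frames, 16) flattened
    transform matrices. Returns per-joint indices of differing frames."""
    diffs = {}
    for joint in rev_a.keys() & rev_b.keys():
        err = np.abs(rev_a[joint] - rev_b[joint]).max(axis=1)  # per-frame max deviation
        changed = np.nonzero(err > tol)[0]
        if changed.size:
            diffs[joint] = changed
    return diffs

a = {"elbow": np.tile(np.eye(4).ravel(), (10, 1))}
b = {"elbow": a["elbow"].copy()}
b["elbow"][3] += 0.01                      # artist tweaked frame 3
print(animation_diff(a, b))                # {'elbow': array([3])}
```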

Fabio Turchet, Oleg Fryazinov, Sara Schvartzman

We propose a novel approach for the generation of volumetric muscle primitives and their associated fiber field, suitable for simulation in computer animation. Muscles are notoriously difficult to sculpt because of their complex shapes and fiber architecture, therefore often requiring trained artists to render anatomical details. Moreover, physics simulation requires these geometries to be modeled in an intersection-free rest state and to have a spatially-varying fiber field to support contraction with anisotropic material models. Inspired by the principles of computational design, we satisfy these requirements by generating muscle primitives automatically, complete with tendons and fiber fields, using physics-based simulation of inflatable 3D patches which are user-defined on the external mesh of a character.

Katrin Scharnowski, Steffen Frey, Bruno Raffin, Thomas Ertl

We introduce an approach for distributed processing and efficient storage of noisy particle trajectories, and present visual analysis techniques that directly operate on the generated representation. For efficient storage, we decompose individual trajectories into a smooth representation and a high-frequency part. Our smooth representation is generated by fitting Hermite splines to a series of time windows (as in in situ processing scenarios), adhering to a certain error bound. We show how the individually fitted splines can afterwards be combined into one spline possessing the same mathematical properties, i.e., C¹ continuity as well as our error bound. The fitted splines are typically significantly smaller than the original data, and can therefore be used, e.g., for online monitoring and analysis of distributed particle simulations. The high-frequency part can be used to reconstruct the original data, or can be discarded in scenarios with limited storage capabilities. Finally, we demonstrate the utility of our smooth representation using real-world data sets.
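
The windowed fitting idea can be sketched as follows (simplified: fixed-size windows, a 1D signal, and no error-bound-driven subdivision); each segment pins its start value and tangent to the previous segment's end, which is what yields C¹ continuity:

```python
# Fit a cubic Hermite segment to each time window of a noisy trajectory,
# handing the end value and tangent to the next window for C1 continuity.
import numpy as np

def hermite(p0, m0, p1, m1, t):
    h00 = 2*t**3 - 3*t**2 + 1; h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2;    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

def fit_window(samples, p0, m0):
    """Least-squares fit of the free end value p1 and end tangent m1."""
    t = np.linspace(0, 1, len(samples))
    rhs = samples - (2*t**3 - 3*t**2 + 1)*p0 - (t**3 - 2*t**2 + t)*m0
    A = np.stack([-2*t**3 + 3*t**2, t**3 - t**2], axis=1)
    (p1, m1), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p1, m1

rng = np.random.default_rng(1)
data = np.sin(np.linspace(0, 4, 80)) + 0.05 * rng.standard_normal(80)
p, m = data[0], 0.0
segments = []
for w in np.split(data, 4):        # four fixed-size time windows
    p1, m1 = fit_window(w, p, m)
    segments.append((p, m, p1, m1))
    p, m = p1, m1                  # C1 handoff to the next window

t = np.linspace(0, 1, 20)
print(np.abs(hermite(*segments[0], t) - data[:20]).max())  # fit error, first window
```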

 


Wednesday, 26 April


SP4: Geometric Modeling

Session chair: Jonas Martinez

Session details: Wednesday, 26 April, 13:30 – 15:00

Room: Rhône 2

Tyson Brochu, Ryan Schmidt

We introduce a set of tools for interactive modeling of multi-material objects. We use non-manifold surface meshes to define complex objects, which can have multiple connected solid regions of different materials. Our suite of tools can create and edit non-manifold surfaces, while maintaining a consistent labeling of distinct regions. We also introduce a technique for generating approximate material gradients, using a set of thin layers with varying material properties. We demonstrate our approaches by printing physical objects with a multi-material printer.
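
As a toy illustration of approximating a gradient with thin two-material layers (the abstract does not describe the actual layer scheme, so the error-diffusion choice below is an assumption):

```python
# Pick each layer's material by error diffusion so the running mix of the
# two materials tracks the desired blend fraction through the stack.
def layer_materials(target_fractions):
    """target_fractions: desired fraction of material B per layer, in [0, 1]."""
    layers, err = [], 0.0
    for f in target_fractions:
        want = f + err
        choice = 1 if want >= 0.5 else 0   # 1 = material B, 0 = material A
        err = want - choice                # carry quantisation error forward
        layers.append(choice)
    return layers

print(layer_materials([i / 19 for i in range(20)]))  # A-heavy to B-heavy ramp
```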

Gerben Jan Hettinga, Jiri Kosinka

We extend Phong tessellation and point normal (PN) triangles from the original triangular setting to arbitrary polygons by use of generalised barycentric coordinates and S-patches. In addition, a generalisation of the associated quadratic normal field is given as well as a simple algorithm for evaluating the polygonal extensions for a polygon with vertex normals on the GPU.
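
For reference, the triangular Phong tessellation construction being generalised can be written as follows (a sketch; the paper's polygonal version replaces the triangle's barycentric coordinates with generalised barycentric coordinates and S-patches):

```latex
% \pi_i projects a point onto the tangent plane at vertex i
% (vertices p_i, unit normals n_i, barycentric coordinates \lambda_i,
% shape parameter \alpha).
\[
  \pi_i(\mathbf{q}) = \mathbf{q}
    - \bigl((\mathbf{q} - \mathbf{p}_i)\cdot\mathbf{n}_i\bigr)\,\mathbf{n}_i,
  \qquad
  \mathbf{q}(\lambda) = \textstyle\sum_i \lambda_i\,\mathbf{p}_i,
\]
\[
  \mathbf{p}^{*}(\lambda) = (1 - \alpha)\,\mathbf{q}(\lambda)
    + \alpha \textstyle\sum_i \lambda_i\,\pi_i\bigl(\mathbf{q}(\lambda)\bigr).
\]
```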

Sara Shaheen, Bernard Ghanem

We propose a novel method to transfer sketch style at the stroke level from one free-hand line drawing to another, whereby these drawings can be from different artists. Our method builds on techniques from the area of geometric modeling. It aims to transfer the style of the input sketch at the stroke level to the style encountered in sketches by other artists. This is done by modifying all the parametric stroke segments in the input, so as to minimize a global stroke-level distance between the input and target styles. To do this, we exploit recent work on stroke authorship recognition to define the stroke-level distance [SRG15], which is in turn minimized using conventional optimization tools. We showcase the quality of the style transfer qualitatively by applying the proposed technique to several input-target combinations.

Ana-Maria Vintescu, Florent Dupont, Guillaume Lavoué, Pooran Memari, Julien Tierny

This paper presents a new algorithm for the fast extraction of hierarchies of cone singularities for conformal surface parameterization. Cone singularities have been shown to greatly reduce the distortion of such parameterizations, since they locally absorb the area distortion. Therefore, existing automatic approaches aim at inserting cones where large area distortion can be predicted. However, such approaches are iterative, which results in slow computations, often even slower than the actual subsequent parameterization procedure. This becomes even more problematic as the user often does not know in advance the right number of cones and thus needs to explore cone hierarchies to obtain a satisfying result. Our algorithm relies on the key observation that the local extrema of the conformal factor already provide a good approximation of the cone singularities extracted by previous techniques, while needing only one linear solve where previous approaches needed one solve per hierarchy level. We apply concepts from persistent homology to organize such local extrema very efficiently into a global hierarchy. Experiments demonstrate the approximation quality of our approach quantitatively and report time-performance improvements of one order of magnitude, which makes our technique well suited for interactive contexts.
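
The core observation can be illustrated with a small sketch (assumed data layout; a real implementation would operate on the mesh itself): local maxima of the per-vertex conformal factor are detected and ranked by persistence with a union-find sweep, giving a hierarchy that can be truncated at any desired cone count:

```python
# Rank local maxima of a per-vertex scalar field by persistence: sweep from
# high to low values, grow components, and record when maxima merge (die).
import numpy as np

def maxima_by_persistence(value, neighbors):
    """value: (n,) conformal factor per vertex; neighbors: adjacency lists."""
    order = np.argsort(-value)                 # sweep from high to low
    comp, birth, pers = {}, {}, {}
    find = lambda v: v if comp[v] == v else find(comp[v])
    for v in order:
        up = {find(u) for u in neighbors[v] if u in comp}
        comp[v] = v
        if not up:
            birth[v] = value[v]                # a new maximum is born
        else:
            roots = sorted(up, key=lambda r: -birth[r])
            comp[v] = roots[0]
            for r in roots[1:]:                # merging kills the younger maxima
                pers[r] = birth[r] - value[v]
                comp[r] = roots[0]
    pers[find(order[0])] = float("inf")        # global maximum never dies
    return sorted(pers.items(), key=lambda kv: -kv[1])

vals = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(maxima_by_persistence(vals, adj))        # vertices 1 and 3 lead the hierarchy
```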

SP5: Images and Appearance

Session chair: Carlo Harvey

Session details: Wednesday, 26 April, 15:30 – 17:00

Room: Rhône 2

Sixing Hu, Pierre-Yves Laffont, Brian Price, Scott Cohen, Michael Brown

An image’s color palette can be used to search an image collection and retrieve images whose colors are similar to the query palette. The results of this approach are naturally limited to those images in the database that already share similar colors with the query. The idea proposed in this paper is to expand the search results by finding additional images in the collection that can be computationally recolored to better match the query. This not only provides more results to the user but also helps to extend the usefulness of the image collection. We describe a prototype system that realizes this idea using a two-step procedure. First, images in the database with color palettes similar to the query are identified to produce a set of initial results. Then, additional images that are semantically similar to these initial results are found and modified using palette-based recoloring so that they better match the color query. We demonstrate results from our prototype and discuss several challenges in developing such image search systems.
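
The first retrieval step might use a palette-to-palette distance such as the symmetric nearest-neighbour distance sketched below (the system's actual metric is not specified in this abstract):

```python
# Rank database images by a palette-to-palette distance: the symmetric sum of
# nearest-neighbour color distances between the two palettes.
import numpy as np

def palette_distance(a, b):
    """a, b: (k, 3) palettes, ideally in a perceptual space such as CIELab."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

query = np.array([[40.0, 60.0, 30.0], [80.0, -10.0, 5.0]])
database = {"sunset.jpg": np.array([[45.0, 55.0, 25.0], [75.0, -5.0, 0.0]]),
            "forest.jpg": np.array([[50.0, -40.0, 35.0], [30.0, -30.0, 20.0]])}
ranking = sorted(database, key=lambda name: palette_distance(query, database[name]))
print(ranking)   # 'sunset.jpg' first
```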

Xudong Chen, Zhouhui Lian, Yingmin Tang, Jianguo Xiao

Stroke extraction is one of the most important tasks in the areas of computer graphics and document analysis. So far, data-driven methods, which use pre-processed characters as templates, are believed to perform relatively well. However, accurately extracting the strokes of characters remains a tough and challenging task, because character styles are diverse and may differ greatly from the template character. To solve this problem, we build a font skeleton manifold in which we can always find the most similar character to serve as a template by traversing locations in the manifold. Owing to the similar structure and font style, point set registration of the template character with the target character becomes much more effective and accurate. Experimental results on characters in both printed and handwritten styles reveal that our method using manifold learning performs better in the application of stroke extraction for Chinese characters.

Anastasios Gkaravelis, Georgios Papaioannou

In this paper we propose a simple and effective technique for setting up a configuration of directional light sources to accentuate the prominent geometric features of complex objects by increasing the local shadow contrast near them. Practical applications of such a task are encountered, among others, in professional photography and cinematography. The method itself, which is based on a voting mechanism, quickly produces consistent and view-independent results with minimal user intervention.

Hiroki Sone, Toshiya Hachisuka, Takafumi Koike

Rendering of highly scattering media is computationally expensive in general. While existing BSSRDF models can accurately and efficiently approximate light scattering in homogeneous media, we still have to resort to costly Monte Carlo simulation for heterogeneous media. We propose a simple parameter estimation method which enables homogeneous BSSRDF models to approximate the appearance of heterogeneous media. The main idea is to estimate the input optical parameters of a given homogeneous BSSRDF model such that the output well approximates light transport within heterogeneous media. Our method takes spatially varying optical coefficients into account by averaging the coefficients around the incident and exitant points. This approach is motivated by path integral theory, which predicts how widely a beam of light will spread in heterogeneous media. Since our method provides parameters for homogeneous BSSRDF models, it is applicable to many existing BSSRDF models and easy to integrate into existing rendering systems. We show that our modification produces more accurate results than existing heuristics with the same goal.
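
The estimation step can be sketched as follows; the grid data layout and the pixel-radius neighbourhood are assumptions, with the radius standing in for the lateral spread predicted by the path integral analysis:

```python
# Replace spatially varying scattering/absorption coefficients by their
# averages around the incident and exitant points, then feed the averages
# to an off-the-shelf homogeneous BSSRDF model.
import numpy as np

def effective_parameters(sigma_s, sigma_a, x_in, x_out, radius):
    """sigma_s, sigma_a: (H, W) coefficient maps on the surface;
    x_in, x_out: (row, col) incident and exitant points;
    radius: averaging radius in pixels, ideally predicted from the expected
    lateral spread of light in the medium."""
    H, W = sigma_s.shape
    rows, cols = np.mgrid[0:H, 0:W]
    mask = np.zeros((H, W), dtype=bool)
    for (r, c) in (x_in, x_out):
        mask |= (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
    return sigma_s[mask].mean(), sigma_a[mask].mean()

sigma_s = np.random.uniform(0.5, 2.0, (64, 64))   # stand-in heterogeneous medium
sigma_a = np.random.uniform(0.01, 0.1, (64, 64))
print(effective_parameters(sigma_s, sigma_a, (10, 10), (14, 12), radius=4.0))
```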


Short Papers Chairs

Carles Bosch, Eurecat, Spain

Adrien Peytavie, University Claude Bernard Lyon 1, France