Generalized Image Acquisition and Analysis

BlurTags: Spatially Varying PSF Estimation with Out-of-Focus Patterns

In this paper, we consider the problem of animation reconstruction, i.e., the reconstruction of shape and motion of a deformable object from dynamic 3D scanner data, without using user-provided template models. Unlike previous work that addressed this problem, we do not rely on locally convergent optimization but present a system that can handle fast motion and temporally disrupted input, and that can correctly match objects that disappear into acquisition holes for extended time periods due to occlusion. Our approach is motivated by cartography: We first estimate a few landmark correspondences, which are extended to a dense matching and then used to reconstruct geometry and motion. We propose a number of algorithmic building blocks: a scheme for tracking landmarks in temporally coherent and incoherent data, an algorithm for robust estimation of dense correspondences under topological noise, and the integration of local matching techniques to refine the result. We describe and evaluate the individual components and propose a complete animation reconstruction pipeline based on these ideas. We evaluate our method on a number of standard benchmark data sets and show that we can obtain correct reconstructions in situations where other techniques fail completely or require additional user guidance such as a template model.
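The landmark-to-dense step described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's robust matching algorithm: each source point simply inherits the displacement of its nearest landmark correspondence. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def dense_from_landmarks(points_src, landmarks_src, landmarks_dst):
    """Extend sparse landmark correspondences to a dense matching.

    points_src:    (N, 3) points of the source frame.
    landmarks_src: (K, 3) landmark positions in the source frame.
    landmarks_dst: (K, 3) matched landmark positions in the target frame.
    Returns (N, 3) predicted target-frame positions: each point moves by
    the offset of its nearest landmark (a crude stand-in for the paper's
    dense-correspondence estimation).
    """
    offsets = landmarks_dst - landmarks_src                    # (K, 3)
    # Pairwise distances from every point to every landmark: (N, K).
    d = np.linalg.norm(points_src[:, None, :] - landmarks_src[None, :, :],
                       axis=2)
    nearest = np.argmin(d, axis=1)                             # (N,)
    return points_src + offsets[nearest]
```

In the actual pipeline the dense matching must additionally be robust to topological noise and is refined by local matching techniques; the nearest-landmark rule above only conveys the propagation idea.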

Projects

Interactive Volume Caustics in Single-Scattering Media

Wei Hu, Zhao Dong, Ivo Ihrke, Thorsten Grosch, Guodong Yuan, Hans-Peter Seidel
In: Proceedings of I3D 2010.



Abstract

Volume caustics are intricate illumination patterns formed by light first interacting with a specular surface and subsequently being scattered inside a participating medium. Although this phenomenon can be simulated by existing techniques, image synthesis is usually non-trivial and time-consuming. Motivated by interactive applications, we propose a novel volume caustics rendering method for single-scattering participating media. Our method is based on the observation that line rendering of illumination rays into the screen buffer establishes a direct light path between the viewer and the light source. This connection is introduced via a single scattering event for every pixel affected by the line primitive. Since the GPU is a parallel processor, the radiance contributions of these light paths to each of the pixels can be computed and accumulated independently. The implementation of our method is straightforward and we show that it can be seamlessly integrated with existing methods for rendering participating media. We achieve high-quality results at real-time frame rates for large and dynamic scenes containing homogeneous participating media. For inhomogeneous media, our method achieves interactive performance that is close to real-time. Our method is based on a simplified physical model and can thus be used to quickly generate physically plausible previews of expensive lighting simulations.
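The per-pixel accumulation described above can be sketched as follows. This is a minimal illustration of the underlying single-scattering model, not the paper's GPU implementation: it assumes a homogeneous medium, an isotropic phase function, and hypothetical coefficient values; all names are assumptions for the sketch.

```python
import math

# Assumed medium parameters (illustrative values, not from the paper).
SIGMA_T = 0.3   # extinction coefficient of the homogeneous medium
SIGMA_S = 0.2   # scattering coefficient

def transmittance(distance):
    # Beer-Lambert attenuation along a straight path in a homogeneous medium.
    return math.exp(-SIGMA_T * distance)

def phase_isotropic(cos_theta):
    # Isotropic phase function: scattering is uniform over the sphere.
    return 1.0 / (4.0 * math.pi)

def single_scatter_contribution(light_radiance, d_light, d_eye, cos_theta):
    """Radiance reaching the eye via one scattering event:
    attenuate from the light to the scatter point, scatter once,
    then attenuate from the scatter point to the eye."""
    return (light_radiance
            * transmittance(d_light)
            * SIGMA_S * phase_isotropic(cos_theta)
            * transmittance(d_eye))

def accumulate(framebuffer, pixel, contribution):
    # Additive blending: contributions of independent light paths to a
    # pixel are simply summed, which is what makes the parallel GPU
    # line-rendering formulation possible.
    framebuffer[pixel] = framebuffer.get(pixel, 0.0) + contribution
```

In the actual method, rasterizing an illumination ray as a line primitive triggers exactly one such contribution for every pixel the line covers, and the GPU's additive blending performs the accumulation.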

BibTeX

@INPROCEEDINGS{HDI:2010:VolumeCaustics,
  author    = {Hu, Wei and Dong, Zhao and Ihrke, Ivo and Grosch, Thorsten and Yuan, Guodong and Seidel, Hans-Peter},
  title     = {Interactive Volume Caustics in Single-Scattering Media},
  booktitle = {I3D '10: Proceedings of the 2010 Symposium on Interactive 3D Graphics and Games},
  year      = {2010},
  pages     = {109--117},
  publisher = {ACM},
}



