BlurTags: Spatially Varying PSF Estimation with Out-of-Focus Patterns
In this paper, we consider the problem of animation reconstruction, i.e., the reconstruction of shape and motion of a deformable object from dynamic 3D scanner data, without using user-provided template models. Unlike previous work that addressed this problem, we do not rely on locally convergent optimization but present a system that can handle fast motion and temporally disrupted input, and can correctly match objects that disappear for extended time periods in acquisition holes due to occlusion. Our approach is motivated by cartography: we first estimate a few landmark correspondences, which are extended to a dense matching and then used to reconstruct geometry and motion. We propose a number of algorithmic building blocks: a scheme for tracking landmarks in temporally coherent and incoherent data, an algorithm for robust estimation of dense correspondences under topological noise, and the integration of local matching techniques to refine the result. We describe and evaluate the individual components and propose a complete animation reconstruction pipeline based on these ideas. We evaluate our method on a number of standard benchmark data sets and show that we can obtain correct reconstructions in situations where other techniques fail completely or require additional user guidance such as a template model.
Three-Dimensional Kaleidoscopic Imaging
Ilya Reshetouski, Alkhazur Manakov, Hans-Peter Seidel, and Ivo Ihrke
CVPR 2011 (oral)
We introduce three-dimensional kaleidoscopic imaging, a promising alternative for recording multi-view imagery.
The main limitation of multi-view reconstruction techniques is the small number of views available from multi-camera systems, especially for dynamic scenes.
Our new system is based on imaging an object inside a kaleidoscopic mirror system. We show that this approach can generate a large number of high-quality views well distributed over the hemisphere surrounding the object in a single shot. In comparison to existing multi-view systems, our method offers a number of advantages: it is possible to operate with a single camera, the individual views are perfectly synchronized, and they have the same radiometric and colorimetric properties.
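The virtual views produced by a kaleidoscopic mirror system arise from chains of planar reflections of the single real camera. As a minimal sketch (not the authors' code), the center of one virtual camera can be computed by reflecting the real camera center across each mirror plane in sequence; the function names and the plane parameterization (unit normal n and offset d with n·x = d) are illustrative assumptions:

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect 3D point p across the mirror plane {x : n·x = d}.

    n is normalized internally; this is the standard planar
    (Householder-style) reflection p' = p - 2*(n·p - d)*n.
    """
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return p - 2.0 * (np.dot(n, p) - d) * n

def virtual_camera_center(cam_center, mirror_planes):
    """Chain reflections across a sequence of (normal, offset) mirror
    planes to obtain the center of one mirrored (virtual) camera."""
    c = np.asarray(cam_center, dtype=float)
    for n, d in mirror_planes:
        c = reflect_point(c, n, d)
    return c

# Example: a camera at (2, 1, 0) seen through two mirrors,
# the planes x = 0 and y = 0.
virtual = virtual_camera_center([2.0, 1.0, 0.0],
                                [([1, 0, 0], 0.0), ([0, 1, 0], 0.0)])
```

Each chamber in the kaleidoscopic image corresponds to one such reflection chain, which is why a single exposure yields many perfectly synchronized views with identical radiometric properties.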
We describe the setup theoretically and provide methods for a practical implementation. An important goal of our techniques is to enable interfacing with standard multi-view algorithms for further processing.
Example of the labeling process: source image → silhouette image → chamber extraction → visual hull → labeling of views.
Supplemental materials [pdf]
Labeling data example (with MATLAB loader) [zip]
Labeling of the dynamic scene example: Input movie [mpg], Segmentation movie [mpg], Labeling movie [mpg]