Helge Seetzen | CEO, TandemLaunch Technologies, Canada
Innovation - the act of turning inventions into practical solutions - is always hard. But it is particularly difficult for complex integrated systems such as camera-projector combinations, which require a broad interdisciplinary perspective to bring them to market. Using examples of camera-projector technologies developed and commercialized over the last decade, this talk will outline these innovation challenges and offer practical suggestions for innovators.
Helge Seetzen is a successful multimedia technology entrepreneur with deep experience in university technology transfer. As CEO of TandemLaunch Technologies, he provides university inventors with the funding, staff, infrastructure, and industry connections necessary to bring their ideas to market. Prior to TandemLaunch, he co-founded Sunnybrook Technologies and later BrightSide Technologies to commercialize display technologies developed at the University of British Columbia. BrightSide grew to over 30 developers, earned accolades such as the Best Buzz Award at the 2006 Consumer Electronics Show and a "Top 100 Technologies of 2006" ranking from Popular Science magazine, and was ultimately sold to Dolby Laboratories for US$28M at a high return to shareholders. At Dolby he led all cross-functional development activities for Dolby's first two consumer video products. In this capacity he built research and engineering departments in Canada and the US, and was closely involved in licensing negotiations with many major consumer electronics manufacturers.
Helge's leadership in technology transfer, innovation, and entrepreneurship has been widely recognized through awards such as Business in Vancouver's Forty under 40 award for business accomplishment, the NSERC Innovation Challenge Award for university technology transfer, and a Special Recognition Award from the Society for Information Display for pioneering LED TV technology. He serves as General Chair of Display Week, the largest technical conference on displays, and as Publication Chair of the Society for Information Display. He has published over 20 articles and holds 30 patents, with an additional 30 US applications pending. Helge received a B.Sc. in physics and a Ph.D. in interdisciplinary imaging technology (physics & computer science) from the University of British Columbia.
Amit Agrawal | Principal Member of Research Staff, MERL
Over the years, projectors and cameras have followed similar trends, becoming versatile, user-friendly, and low-cost devices. Conceptually, however, they offer different capabilities. Conventional projectors allow per-pixel control of outgoing light, which enables numerous applications such as structured-light 3D scanning. Conventional cameras, by contrast, offer no comparable capability to control each pixel independently.
In this talk, I will first describe how to design structured light patterns that enable 3D scanning with a camera-projector system under global illumination effects such as inter-reflections and sub-surface scattering. I will then describe several computational cameras that, by coding and modulating incoming light, allow finer pixel-level control similar to a projector's. I will show applications of these cameras in easy removal of motion blur and defocus blur, single-shot capture of light fields and other interesting slices of the plenoptic function, reduction of lens glare, and post-capture control of spatial, angular, and temporal resolution.
Amit Agrawal is a Principal Member of Research Staff at Mitsubishi Electric Research Labs (MERL) in Cambridge, USA. He received his B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT), Kanpur, in 2000, and his MS and PhD from the University of Maryland, College Park, in 2003 and 2006, respectively. During 2000-2001, he worked as a DSP engineer at Hughes Software Systems, India. His research focuses on computer vision and computational photography, with emphasis on developing novel cameras and algorithms for scene interpretation and on designing physics-based models for vision. He has co-authored more than 30 papers in computer vision and computer graphics venues including ECCV, CVPR, ICCV, and SIGGRAPH, and holds several US patents. He has taught several courses at ICCV and CVPR and has served on the program committees of ICCP, WACV, OMNIVIS, and PROCAMS.