
IEEE International Conference on Computational Photography (ICCP) 2021

View-dependent Scene Appearance Synthesis using Inverse Rendering from Light Fields

Dahyun Kang, Daniel S. Jeon, Hakyeong Kim, Hyeonjoong Jang, Min H. Kim

KAIST
Fig. 1: (a) Our prototype of wide-baseline light field imaging for capturing scene-scale light fields. (b) Captured light field images and a sub-aperture view image. (c) Results of our view-dependent appearance synthesis. Our method successfully simulates appearance changes of specular reflection across different views.

ICCP 2021 presentation

Supplemental video

Abstract

To enable view-dependent appearance synthesis from the light fields of a scene, it is critical to evaluate the geometric relationships between light and view over scene surfaces with high accuracy. Geometry is commonly estimated from light fields via multiview stereo under the assumption of perfectly diffuse reflectance. However, this diffuse-surface assumption does not hold for real-world objects, and the estimated geometry degrades severely over specular surfaces. Additional scene-scale 3D scanning based on active illumination could provide reliable geometry, but it is sparse and thus still insufficient for calculating view-dependent appearance, such as specular reflection, in geometry-based view synthesis. In this work, we present a practical inverse-rendering solution that enables view-dependent appearance synthesis, particularly at scene scale. We first enhance the scene geometry by eliminating the specular component, thereby enforcing photometric consistency. We then estimate spatially-varying diffuse, specular, and normal parameters from wide-baseline light fields. To validate our method, we built a wide-baseline light field imaging prototype consisting of 32 machine vision cameras with 185-degree fisheye lenses that cover the forward hemispherical appearance of scenes. We captured various indoor scenes, and the results validate that our method estimates scene geometry and reflectance parameters with high accuracy, enabling high-fidelity view-dependent appearance synthesis at scene scale, i.e., specular reflection that changes according to the virtual viewpoint.
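To make the parameter-estimation step concrete, below is a minimal sketch of fitting per-point reflectance from multi-view observations. It assumes a simple Blinn-Phong model with a single known distant light and a generic least-squares solver; the function names, the reflectance model, and the optimizer are illustrative assumptions for this page, not the paper's actual spatially-varying model or solver.

```python
# Minimal sketch: per-point inverse rendering from multi-view observations.
# Assumes a Blinn-Phong model and a fixed distant light; this is an
# illustrative stand-in for the paper's spatially-varying diffuse/specular/
# normal estimation, not the authors' implementation.
import numpy as np
from scipy.optimize import least_squares

def shade(params, view_dirs, light_dir):
    """Blinn-Phong intensity at one surface point seen from several views.

    params: [kd, ks, shininess, nx, ny, nz] (normal re-normalized inside).
    view_dirs: (V, 3) unit vectors from the point toward each camera.
    light_dir: (3,) unit vector from the point toward the light.
    """
    kd, ks, shininess = params[0], params[1], params[2]
    n = params[3:6] / np.linalg.norm(params[3:6])
    half = view_dirs + light_dir                       # half vectors per view
    half /= np.linalg.norm(half, axis=1, keepdims=True)
    diffuse = kd * max(n @ light_dir, 0.0)             # view-independent term
    specular = ks * np.clip(half @ n, 0.0, None) ** shininess
    return diffuse + specular                          # (V,) per-view intensity

def fit_point(obs, view_dirs, light_dir):
    """Fit reflectance parameters so the model reproduces observed intensities."""
    x0 = np.array([0.5, 0.1, 20.0, 0.0, 0.0, 0.9])     # rough initial guess
    res = least_squares(
        lambda p: shade(p, view_dirs, light_dir) - obs, x0,
        bounds=([0, 0, 1, -1, -1, -1], [1, 1, 500, 1, 1, 1]))
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    light = np.array([0.0, 0.3, 1.0]); light /= np.linalg.norm(light)
    # 32 viewpoints on the forward hemisphere, loosely mirroring the prototype.
    views = rng.normal(size=(32, 3)); views[:, 2] = np.abs(views[:, 2]) + 1.0
    views /= np.linalg.norm(views, axis=1, keepdims=True)
    truth = np.array([0.6, 0.3, 60.0, 0.1, 0.0, 1.0])  # synthetic ground truth
    obs = shade(truth, views, light) + rng.normal(0, 1e-3, size=32)
    kd, ks, shin, *n = fit_point(obs, views, light)
    print(f"kd={kd:.3f} ks={ks:.3f} shininess={shin:.1f}")
```

In the paper's pipeline, a fit of this kind would run over the scene geometry recovered from the specular-free light fields, and the estimated diffuse, specular, and normal parameters would then be re-rendered from a virtual viewpoint to produce the view-dependent specular changes shown in Fig. 1(c).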

   
BibTeX
 
@InProceedings{SceneViewSynthesis:ICCP:2021,
  author    = {Dahyun Kang and Daniel S. Jeon and Hakyeong Kim and
               Hyeonjoong Jang and Min H. Kim},
  title     = {View-dependent Scene Appearance Synthesis using
               Inverse Rendering from Light Fields},
  booktitle = {Proc. IEEE International Conference on
               Computational Photography (ICCP)},
  year      = {2021},
  month     = {May},
}
   
   
Preprint paper: Light PDF (14.1MB)
Supplemental material #1: PDF (9MB)
 

Hosted by Visual Computing Laboratory, School of Computing, KAIST.
