IEEE Transactions on Image Processing (TIP)

Reconstructing Interlaced High-Dynamic-Range Video using Joint Learning

Inchang Choi, Seung-Hwan Baek, Min H. Kim
Korea Advanced Institute of Science and Technology (KAIST)

[Figure] Examples of our extended dynamic range video reconstruction. The left image pair compares the raw interlaced input and the result of our jointly learned deinterlacing. The right image pair compares the noisy original image and the result of our joint denoising.
[Video] Results of interlaced HDR video
  Abstract
   
To extend the dynamic range of video, it is common practice to capture multiple frames sequentially with different exposures and combine them into each high-dynamic-range frame. However, this approach suffers from ghosting artifacts caused by fast and complex motion in natural scenes. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range, but the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving the two specific problems, deinterlacing and denoising, that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial detail information in differently exposed rows is often available through interlacing, we exploit this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and we apply multiscale homography flow to temporal sequences for denoising. The proposed method allows concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging over state-of-the-art high-dynamic-range video methods.
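The interlaced capture model underlying the paper can be illustrated with a minimal sketch. This is not the authors' method or sensor configuration; the two-row exposure pattern, the exposure ratio, and all function names here are hypothetical, chosen only to show why saturated and noisy rows lose detail that a joint learning stage must reconstruct:

```python
import numpy as np

# Hypothetical exposure times for the low-/high-exposure row groups.
T_LOW, T_HIGH = 1.0, 8.0

def row_exposure(r):
    """Assumed two-row interlacing pattern: rows 0-1 short exposure,
    rows 2-3 long exposure, then repeating."""
    return T_LOW if (r // 2) % 2 == 0 else T_HIGH

def simulate_interlaced(radiance):
    """Simulate one interlaced capture of a scene radiance map in [0, 1]:
    each row is scaled by its exposure time and clipped at the sensor's
    saturation level (1.0)."""
    img = np.empty_like(radiance)
    for r in range(radiance.shape[0]):
        img[r] = np.clip(radiance[r] * row_exposure(r), 0.0, 1.0)
    return img

def naive_merge(img):
    """Row-wise radiance estimate obtained by dividing out each row's
    exposure. Saturated long-exposure rows and noisy short-exposure rows
    come back wrong -- the detail that the paper's jointly learned
    deinterlacing/denoising recovers."""
    rad = np.empty_like(img)
    for r in range(img.shape[0]):
        rad[r] = img[r] / row_exposure(r)
    return rad
```

For a dim scene that saturates no rows, the naive merge recovers the radiance exactly; for a bright scene, the long-exposure rows clip and the merge loses their detail, which is exactly the information the interlaced neighboring rows help restore.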
   
  BibTeX
 
@Article{HDRVideo:TIP:2017,
  author  = {Inchang Choi and Seung-Hwan Baek and Min H. Kim},
  title   = {Reconstructing Interlaced High-Dynamic-Range Video
             using Joint Learning},
  journal = {IEEE Transactions on Image Processing (TIP)},
  year    = {2017},
  volume  = {26},
  number  = {11},
  pages   = {5353--5366},
}
   
   
   
Preprint paper: Full-res. PDF (4.1MB)
Supplemental document: Full-res. PDF (3.4MB)
Supplemental video: MP4 video (167.8MB)
IEEE Digital Library: IEEE Xplore
 

Hosted by Visual Computing Laboratory, School of Computing, KAIST.
