 |
PaperCraft3D: Paper-Based 3D Modeling and Scene Fabrication
Patrick Paczkowski, Julie Dorsey, Holly Rushmeier, Min H. Kim
IEEE Transactions on Visualization and Computer Graphics (TVCG)
pages 1--14, accepted on March 18, 2018
|
[PDF][BibTeX][Video] |
|
A 3D modeling system with all-inclusive functionality is too demanding for a casual 3D modeler to learn. There has been a shift towards more approachable systems, with easy-to-learn, intuitive interfaces. However, most modeling systems still employ mouse and keyboard interfaces, despite the ubiquity of tablet devices and the benefits of multi-touch interfaces. We introduce an alternative 3D modeling and fabrication paradigm using developable surfaces, inspired by traditional papercrafting, and we implement it as a complete system designed for a multi-touch tablet, allowing a user to fabricate 3D scenes. We demonstrate the modeling and fabrication process of assembling complex 3D scenes from a collection of simpler models, in turn shaped through operations applied to virtual paper. Our fabrication method facilitates the assembly of the scene with real paper by automatically converting scenes into a series of cutouts with appropriately added fiducial markers and supporting structures. Our system assists users in creating occluded supporting structures to help maintain the spatial and rigid properties of a scene without compromising its aesthetic qualities. We demonstrate several 3D scenes modeled and fabricated in our system, and evaluate the faithfulness of our fabrications relative to their virtual counterparts and 3D-printed fabrications.
|
 |
 |
Enhancing the Spatial Resolution of Stereo Images using a Parallax Prior
Daniel S. Jeon, Seung-Hwan Baek, Inchang Choi, Min H. Kim
Proc. IEEE Computer Vision and Pattern Recognition (CVPR 2018)
Salt Lake City, USA, June 18, 2018
|
[PDF][BibTeX][Supp.] |
|
We present a novel method that can enhance the spatial resolution of stereo images using a parallax prior. While traditional stereo imaging has focused on estimating depth from stereo images, our method utilizes stereo images to enhance spatial resolution instead of estimating disparity. The critical challenge in enhancing spatial resolution from stereo images is registering corresponding pixels with subpixel accuracy. Since disparity in traditional stereo imaging is calculated per pixel, it cannot be used directly to enhance spatial resolution. We therefore learn a parallax prior from stereo image datasets by jointly training a two-stage network. The first network learns how to enhance the spatial resolution of stereo images in luminance, and the second network learns how to reconstruct a high-resolution color image from the high-resolution luminance and the chrominance of the input image. Our two-stage joint network enhances the spatial resolution of stereo images significantly more than single-image super-resolution methods. The proposed method is directly applicable to any stereo depth imaging method, enabling us to enhance the spatial resolution of stereo images.
|
 |
 |
High-Quality Hyperspectral Reconstruction Using a Spectral Prior
Inchang Choi, Daniel S. Jeon, Giljoo Nam, Diego Gutierrez, Min H. Kim
ACM Transactions on Graphics (TOG), presented at SIGGRAPH Asia 2017
36(6), Nov. 27-30, 2017, pp. 218:1--13
|
[PDF][BibTeX][Supp.]
[Dataset][Codes] |
|
We present a novel hyperspectral image reconstruction algorithm, which overcomes the long-standing tradeoff between spectral accuracy and spatial resolution in existing compressive imaging approaches. Our method consists of two steps: First, we learn nonlinear spectral representations from real-world hyperspectral datasets; for this, we build a convolutional autoencoder, which allows reconstructing its own input through its encoder and decoder networks. Second, we introduce a novel optimization method, which jointly regularizes the fidelity of the learned nonlinear spectral representations and the sparsity of gradients in the spatial domain, by means of our new fidelity prior. Our technique can be applied to any existing compressive imaging architecture, and has been thoroughly tested both in simulation, and by building a prototype hyperspectral imaging system. It outperforms the state-of-the-art methods from each architecture, both in terms of spectral accuracy and spatial resolution, while its computational complexity is reduced by two orders of magnitude with respect to sparse coding techniques. Moreover, we present two additional applications of our method: hyperspectral interpolation and demosaicing. Last, we have created a new high-resolution hyperspectral dataset containing sharper images of more spectral variety than existing ones, available through our project website.
|
 |
|
We present a novel, compact single-shot hyperspectral imaging method. It enables capturing hyperspectral images using a conventional DSLR camera equipped with just an ordinary refractive prism in front of the camera lens. Our computational imaging method reconstructs the full spectral information of a scene from dispersion over edges. Our setup requires no coded aperture mask, no slit, and no collimating optics, which are necessary for traditional hyperspectral imaging systems. It is thus very cost-effective, while still highly accurate. We tackle two main problems: First, since we do not rely on collimation, the sensor records a projection of the dispersion information, distorted by perspective. Second, available spectral cues are sparse, present only around object edges. We formulate an image formation model that can predict the perspective projection of dispersion, and a reconstruction method that can estimate the full spectral information of a scene from sparse dispersion information. Our results show that our method compares well with other state-of-the-art hyperspectral imaging systems, both in terms of spectral accuracy and spatial resolution, while being orders of magnitude cheaper than commercial imaging systems.
|
 |
 |
DeepToF: Off-the-Shelf Real-Time Correction of Multipath Interference in Time-of-Flight Imaging
Julio Marco, Quercus Hernandez, Adolfo Munoz, Yue Dong, Adrian Jarabo, Min H. Kim,
Xin Tong, Diego Gutierrez
ACM Transactions on Graphics (TOG), to be presented at SIGGRAPH Asia 2017
36(6), Nov. 27-30, 2017, pp. 219:1--12
|
[PDF][BibTeX] |
|
Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work, we avoid these approaches and propose a new technique to correct errors in depth caused by MPI, which requires no camera modifications and takes just 10 milliseconds per frame. Our observations about the nature of MPI suggest that most of its information is available in image space; this allows us to formulate the depth imaging process as a spatially-varying convolution and use a convolutional neural network to correct MPI errors. Since the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages. First, we use the encoder (convolution filters) to learn a suitable basis to represent MPI-corrupted depth images; then, we train the decoder (deconvolution filters) to correct depth from synthetic scenes, generated by using a physically-based, time-resolved renderer. This approach allows us to tackle a key problem in ToF, the lack of ground-truth data, by using a large-scale captured training set with MPI-corrupted depth to train the encoder, and a smaller synthetic training set with ground truth depth to train the decoder stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured, incorrect depth as input.
|
 |
 |
Reconstructing Interlaced High-Dynamic-Range Video using Joint Learning
Inchang Choi, Seung-Hwan Baek, and Min H. Kim
IEEE Transactions on Image Processing (TIP)
26(11), Nov. 2017, pp. 5353 - 5366
|
[PDF][BibTeX][Supp.]
[Video][Site] |
|
For extending the dynamic range of video, it is common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in natural scenes. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems that arise in interlaced video imaging with different exposures: deinterlacing and denoising. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial detail information in differently exposed rows is often available via interlacing, we make use of this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and we also apply multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with state-of-the-art high-dynamic-range video methods.
|
 |
 |
Urban Image Stitching using Planar Perspective Guidance
Joo Ho Lee, Seung-Hwan Baek, Min H. Kim
British Machine Vision Conference (BMVC 2017)
Sep. 04-07, 2017
|
[PDF][Supp.][BibTeX]
[Site] |
|
Image stitching methods with spatially-varying homographies have been proposed to overcome partial misalignments caused by global perspective projection; however, local warp operators often fracture the coherence of linear structures, resulting in an inconsistent perspective. In this paper, we propose an image stitching method that warps a source image to a target image by local projective warps using planar perspective guidance. We first detect line structures that converge into three vanishing points, yielding line-cluster probability functions for each vanishing point. Then we estimate local homographies that account for planar perspective guidance from the joint probability of planar guidance, in addition to spatial coherence. This allows us to enhance linear perspective structures while warping multiple urban images with grid-like structures. Our results validate the effectiveness of our method over state-of-the-art projective warp methods in terms of planar perspective.
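As background for the local projective warps described above, a planar homography maps pixel coordinates between views of the same plane. A minimal sketch of applying such a warp (illustrative only, not the paper's guided estimation; the matrix `H` below is a hypothetical example encoding a pure translation):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 planar homography to an (n, 2) array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale

# A pure translation by (5, -2) expressed as a homography:
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
warped = warp_points(H, np.array([[10.0, 10.0], [0.0, 0.0]]))
```

Spatially-varying methods estimate one such `H` per local region; the paper's contribution is constraining those local estimates with vanishing-point guidance.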
|
 |
 |
Image Completion with Intrinsic Reflectance Guidance
Soomin Kim, Taeyoung Kim, Min H. Kim, Sung-Eui Yoon
British Machine Vision Conference (BMVC 2017)
Sep. 04-07, 2017
|
[PDF][Supp.][BibTeX]
[Site] |
|
Patch-based image completion methods often fail in searching patch correspondences of similar materials due to shading caused by scene illumination, resulting in inappropriate image completion with dissimilar materials. We therefore present a novel image completion method that additionally accounts for intrinsic reflectance of scene objects, when searching patch correspondences. Our method examines both intrinsic reflectances and color image structures to avoid false correspondences of different materials so that our method can search and vote illumination-invariant patches robustly, allowing for image completion mainly with homogeneous materials. Our results validate that our reflectance-guided inpainting can produce more natural and consistent images than state-of-the-art inpainting methods even under various illumination conditions.
|
 |
 |
Integrated Calibration of Multiview Phase-Measuring Profilometry
Yeong Beum Lee, Min H. Kim
Elsevier Optics and Lasers in Engineering (OLIE)
98C, Nov., 2017, pp. 118-122
|
[PDF][BibTeX][Site] |
|
Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates; a PMP measurement from one view therefore cannot be integrated directly with measurements from other views, owing to the intrinsic difference in screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required in addition to the principal PMP calibration. This is cumbersome and often imposes physical constraints on the system setup, so multiview PMP is rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly, so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes the intrinsic and extrinsic properties of both a camera and a projector simultaneously, and it does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static, requiring no mechanical operation while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of the measurements and demonstrate the advantages of our multiview PMP.
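For context, PMP recovers per-pixel phase from a few phase-shifted sinusoidal patterns; the classic four-step formula is a generic textbook sketch (not this paper's calibration procedure):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover phase from four patterns I_k = A + B*cos(phi + k*pi/2).

    Differences cancel the offset A: I3 - I1 = 2B*sin(phi), I0 - I2 = 2B*cos(phi).
    """
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check with a known phase (arbitrary offset A and amplitude B):
phi_true, A, B = 1.0, 0.5, 0.3
i0, i1, i2, i3 = (A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4))
phi = four_step_phase(i0, i1, i2, i3)
```

The recovered phase is then unwrapped and converted to height via the system calibration, which is the part this paper integrates across views.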
|
 |
 |
Dehazing using Non-Local Regularization with Iso-Depth Neighbor-Fields
Incheol Kim, Min H. Kim
Proc. Int. Joint Conf. Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) - Volume 4: VISAPP
Feb. 27 - Mar. 1, 2017 (full paper, oral presentation)
|
[PDF][Supp.][PPT][BibTeX]
[Codes] |
|
Removing haze from a single image is a severely ill-posed problem due to the lack of scene information. General dehazing algorithms first estimate airlight using natural image statistics and then propagate the incompletely estimated airlight to build a dense transmission map, yielding a haze-free image. Propagating haze is different from other regularization problems, as haze is strongly correlated with depth according to the physics of light transport in participating media. However, since no depth information is available in single-image dehazing, traditional regularization methods with a common grid random field often suffer from haze isolation artifacts caused by abrupt changes in scene depth. In this paper, to overcome the haze isolation problem, we propose a non-local regularization method that combines Markov random fields (MRFs) with nearest-neighbor fields (NNFs), based on our observation that NNFs searched in a hazy image associate patches at similar depths, as local haze in the atmosphere is proportional to scene depth. We validate that the proposed method can regularize haze effectively to restore a variety of natural landscape images, as demonstrated in the results. The proposed regularization method can also be used with any other dehazing algorithm to enhance haze regularization.
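The depth dependence the method exploits follows the standard haze image formation model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A minimal numeric sketch of the model and its inversion (the airlight A, scattering coefficient β, and transmission floor are arbitrary illustrative values):

```python
import numpy as np

def add_haze(J, depth, airlight=1.0, beta=0.8):
    """Standard haze model: I = J*t + A*(1 - t), with t = exp(-beta*depth)."""
    t = np.exp(-beta * depth)              # transmission decays with scene depth
    return J * t + airlight * (1.0 - t), t

def dehaze(I, t, airlight=1.0, t_min=0.1):
    """Invert the model; clamp t to avoid amplifying noise at large depths."""
    return (I - airlight) / np.maximum(t, t_min) + airlight

J = np.array([0.4, 0.7])                   # haze-free scene radiance
I, t = add_haze(J, depth=np.array([1.0, 2.0]))
restored = dehaze(I, t)
```

Single-image dehazing is hard precisely because only `I` is observed; `t` (hence depth) must be regularized, which is where the paper's MRF/NNF prior enters.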
|
 |
 |
Simultaneous Acquisition of Microscale Reflectance and Normals
Giljoo Nam, Joo Ho Lee, Hongzhi Wu, Diego Gutierrez, Min H. Kim
ACM Transactions on Graphics (TOG), presented at SIGGRAPH Asia 2016
35(6), Dec. 05-08, 2016, pp. 185:1-11
|
[PDF][Supp.#1][PPT]
[Supp.#2][video][BibTeX] |
|
Acquiring microscale reflectance and normals is useful for digital documentation and identification of real-world materials. However, its simultaneous acquisition has rarely been explored due to the difficulties of combining both sources of information at such small scale. In this paper, we capture both spatially-varying material appearance (diffuse, specular and roughness) and normals simultaneously at the microscale resolution. We design and build a microscopic light dome with 374 LED lights over the hemisphere, specifically tailored to the characteristics of microscopic imaging. This allows us to achieve the highest resolution for such combined information among current state-of-the-art acquisition systems. We thoroughly test and characterize our system, and provide microscopic appearance measurements of a wide range of common materials, as well as renderings of novel views to validate the applicability of our captured data. Additional applications such as bi-scale material editing from real-world samples are also demonstrated.
|
 |
 |
Birefractive Stereo Imaging for Single-Shot Depth Acquisition
Seung-Hwan Baek, Diego Gutierrez, Min H. Kim
ACM Transactions on Graphics, presented at SIGGRAPH Asia 2016
35(6), Dec. 05-08, 2016, pp. 194:1-11
|
[PDF][Supp.#1][PPT]
[Supp.#2][BibTeX] |
|
We propose a novel birefractive depth acquisition method, which allows for single-shot depth imaging by simply placing a birefringent material in front of the lens. While most transmissive materials present a single refractive index per wavelength, birefringent crystals like calcite possess two, resulting in a double-refraction effect. We develop an imaging model that leverages this phenomenon and the information contained in the ordinary and the extraordinary refracted rays, providing an effective formulation of the geometric relationship between scene depth and double refraction. To handle the inherent ambiguity of having two sources of information overlapped in a single image, we define and combine two different cost volume functions. We additionally present a novel calibration technique for birefringence, carefully analyze and validate our model, and demonstrate the usefulness of our approach with several image-editing applications.
|
 |
 |
Electrothermal MEMS parallel plate rotation for single-imager stereoscopic endoscopes
Kyung-Won Jang, Sung-Pyo Yang, Seung-Hwan Baek, Min-Suk Lee, Hyeon-Cheol Park, Yeong-Hyeon Seo, Min H. Kim, Ki-Hun Jeong
OSA Optics Express (OE)
24 (9), May 2, 2016, pp. 9667-9672
|
[PDF][BibTeX][Site] |
|
This work reports an electrothermal MEMS parallel plate rotation (PPR) device for a single-imager stereoscopic endoscope. A thin optical plate was directly connected to an electrothermal MEMS microactuator with bimorph structures of thin silicon and aluminum layers. The fabricated MEMS PPR device precisely rotates a transparent optical plate up to 37° in front of an endoscopic camera, creating binocular disparities comparable to those from binocular cameras with a baseline distance over 100 μm. Anaglyph 3D images and disparity maps were successfully obtained by extracting the local binocular disparities from two optical images captured at the relative plate positions. The physical volume of the MEMS PPR device fits within 3.4 mm x 3.3 mm x 1 mm. This method provides a new direction for compact stereoscopic 3D endoscopic imaging systems.
|
 |
 |
Multiview Image Completion with Space Structure Propagation
Seung-Hwan Baek, Inchang Choi, Min H. Kim
Proc. IEEE Computer Vision and Pattern Recognition (CVPR 2016)
Las Vegas, USA, June 26, 2016, pp. 488-496 |
[PDF][Supp.][BibTeX][Site] |
|
We present a multiview image completion method that provides geometric consistency among different views by propagating space structures. A user specifies the region to be completed in one of several multiview photographs casually taken of a scene; the proposed method then completes the set of photographs with geometric consistency by creating or removing structures in the specified region. The proposed method incorporates the photographs to estimate dense depth maps. We initially complete color as well as depth from one view, and then perform two stages of structure propagation and structure-guided completion. Structure propagation optimizes the space topology of the scene across photographs, while structure-guided completion enhances and completes local image structures of both depth and color in multiple photographs with structural coherence by searching nearest-neighbor fields in relevant views. We demonstrate the effectiveness of the proposed method in completing multiview images.
|
 |
 |
Laplacian Patch-Based Image Synthesis
Joo Ho Lee, Inchang Choi, Min H. Kim
Proc. IEEE Computer Vision and Pattern Recognition (CVPR 2016)
Las Vegas, USA, June 26, 2016, pp. 2727-2735 |
[PDF][Supp.][BibTeX]
[Code][Site] |
|
Patch-based image synthesis has been enriched with global optimization on the image pyramid. Subsequently, gradient-based synthesis improved structural coherence and details. However, the gradient operator is directional and inconsistent, and it requires computing multiple operators. It also introduces a significant computational burden in solving the Poisson equation, which often produces artifacts in non-integrable gradient fields. In this paper, we propose patch-based synthesis using a Laplacian pyramid to improve correspondence search with enhanced awareness of edge structures. In contrast to gradient operators, the Laplacian pyramid has the advantage of being isotropic in detecting changes, providing more consistent performance in decomposing the base structure and localizing details. Furthermore, it does not require heavy computation, as it can be approximated by differences of Gaussians. We examine the potential of the Laplacian pyramid for enhanced edge-aware correspondence search, and we demonstrate the effectiveness of the Laplacian-based approach over state-of-the-art patch-based image synthesis methods.
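The isotropy and invertibility of the difference-of-Gaussians decomposition can be illustrated with a tiny 1D sketch (generic textbook construction, not the paper's code): each band is the residual of a Gaussian blur, and summing all bands plus the low-pass base reconstructs the signal exactly by telescoping.

```python
import numpy as np

def blur(x, sigma=1.0, radius=3):
    """Gaussian blur of a 1D signal with edge padding."""
    k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, radius, mode='edge'), k, mode='valid')

def dog_stack(x, levels=3):
    """Band-pass decomposition: successive differences of Gaussians."""
    bands = []
    for _ in range(levels):
        low = blur(x)
        bands.append(x - low)   # isotropic band-pass: no directional bias
        x = low
    return bands, x             # band-pass levels plus low-pass base

sig = np.sin(np.linspace(0.0, 6.0, 64))
bands, base = dog_stack(sig)
recon = base + sum(bands)       # exact reconstruction by telescoping sums
```

Unlike per-direction gradient operators, no Poisson solve is needed to get back from the decomposition to the image.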
|
 |
 |
Stereo Fusion: Combining Refractive and Binocular Disparity
Seung-Hwan Baek, Min H. Kim
Elsevier Computer Vision and Image Understanding (CVIU)
146, May 01, 2016, pp. 52-66 |
[PDF][BibTeX][Slides][Site] |
|
The performance of depth reconstruction in binocular stereo relies on how adequate the predefined baseline is for a target scene. Wide-baseline stereo discriminates depth better than narrow-baseline stereo, but it often suffers from spatial artifacts. Narrow-baseline stereo can provide a more elaborate depth map with fewer artifacts, while its depth resolution tends to be biased or coarse due to the short disparity. In this paper, we propose a novel optical design for heterogeneous stereo fusion on a binocular imaging system with a refractive medium, where the binocular stereo part operates as wide-baseline stereo and the refractive stereo module works as narrow-baseline stereo. We then introduce a stereo fusion workflow that combines the refractive and binocular stereo algorithms to estimate fine depth information through this fusion design. In addition, we propose an efficient calibration method for refractive stereo. The quantitative and qualitative results validate the performance of our stereo fusion system in measuring depth in comparison with homogeneous stereo approaches.
|
 |
 |
Multisampling Compressive Video Spectroscopy
Daniel S. Jeon, Inchang Choi, Min H. Kim
Computer Graphics Forum (CGF), presented at EUROGRAPHICS 2016
35(2), May 12, 2016, pp. 467-477 |
[PDF][Video][PPT][BibTeX] |
|
The coded aperture snapshot spectral imaging (CASSI) architecture has been employed widely for capturing hyperspectral video. Despite allowing concurrent capture of hyperspectral video, spatial modulation in CASSI sacrifices image resolution significantly while reconstructing spectral projection via sparse sampling. Several multiview alternatives have been proposed to handle this low spatial resolution problem and improve measurement accuracy, for instance, by adding a translation stage for the coded aperture or changing the static coded aperture with a digital micromirror device for dynamic modulation. State-of-the-art solutions enhance spatial resolution significantly but are incapable of capturing video using CASSI. In this paper, we present a novel compressive coded aperture imaging design that increases spatial resolution while capturing 4D hyperspectral video of dynamic scenes. We revise the traditional CASSI design to allow for multiple sampling of the randomness of spatial modulation in a single frame. We demonstrate that our compressive video spectroscopy approach yields enhanced spatial resolution and consistent measurements, compared with the traditional CASSI design.
|
 |
 |
Electrothermal MEMS Parallel Plate Rotation for Real Time Stereoscopic Endoscopic Imaging
Kyung-Won Jang, Sung-Pyo Yang, Seung-Hwan Baek, Min H. Kim, Ki-Hun Jeong
Proc. IEEE International Conference on Micro Electro Mechanical Systems (MEMS 2016)
Shanghai, China, Jan. 24, 2016, 4 pages.
|
[Site][BibTeX] |
 |
Ultrathin Camera Inspired by Visual System Of Xenos Peckii
Dongmin Keum, Daniel S. Jeon, Charles S. H. Hwang, Elke K. Buschbeck,
Min H. Kim, Ki-Hun Jeong
Proc. IEEE International Conference on Micro Electro Mechanical Systems (MEMS 2016)
Shanghai, China, Jan. 24, 2016, 4 pages.
|
[Site][BibTeX] |
 |
Foundations and Applications of 3D Imaging
Min H. Kim
In Theory and Applications of Smart Cameras
edited by Chong-Min Kyung
Chapter I.4, pp. 63-84, Springer
|
[Publisher][BibTeX] |
|
Two-dimensional imaging through digital photography has been a main application of mobile computing devices, such as smart phones, during the last decade. Expanding the dimensions of digital imaging, the recent advances in 3D imaging technology are about to be combined with such smart devices, resulting in broadened applications of 3D imaging. This chapter presents the foundations of 3D imaging, that is, the relationship between disparity and depth in a stereo camera system, and it surveys a general workflow to build a 3D model from sensor data. In addition, recent advanced 3D imaging applications are introduced: hyperspectral 3D imaging, multispectral photometric stereo and stereo fusion of refractive and binocular stereo.
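The disparity-depth relationship that the chapter presents as a foundation is, for a rectified stereo pair with focal length f (in pixels) and baseline B (in meters), Z = f·B/d. A one-line sketch with illustrative numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo: Z = f * B / d; larger disparity means a closer point."""
    return focal_px * baseline_m / disparity_px

# e.g. f = 1000 px, B = 0.1 m, d = 50 px  ->  Z = 2.0 m
z = depth_from_disparity(50.0, 1000.0, 0.1)
```

The inverse relationship also shows why depth resolution degrades quadratically with distance: a fixed subpixel disparity error maps to an ever larger depth error as Z grows.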
|
 |
Measuring Color Defects in Flat Panel Displays
using HDR Imaging and Appearance Modeling
Giljoo Nam, Haebom Lee, Sungsoo Oh, Min H. Kim
IEEE Transactions on Instrumentation and Measurement (TIM)
Oct. 19, 2015, 65(2), pp.297--304
|
[DL][PDF][BibTeX] |
|
Measuring and quantifying color defects in flat panel displays (FPDs) are critical in the FPD industry and related business. Color defects are traditionally inspected by professional human assessors, as they are subtle perceptual phenomena that are difficult to detect with a camera system. However, human-based inspection has hindered the quantitative analysis of such color defects, and the industrial automation of color-defect measurement in FPDs has consequently remained severely limited, even among leading manufacturers. This paper presents a systematic framework for the measurement and numerical evaluation of color defects. Our framework exploits high-dynamic-range (HDR) imaging to robustly measure physically-meaningful quantities of subtle color defects. In addition to the application of advanced imaging technology, an image appearance model is employed to predict the human visual perception of color defects, as human assessors do. The proposed automated framework outputs a quantitative analysis of the color defects. This work demonstrates the performance of the proposed workflow in investigating subtle color defects in FPDs with high accuracy.
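The HDR step can be sketched as the usual exposure-bracketed radiance merge, assuming a linear camera response; the hat-shaped weighting below is one common textbook choice, not necessarily the paper's:

```python
import numpy as np

def hdr_merge(images, exposure_times):
    """Merge linear exposures into relative radiance via weighted averaging."""
    imgs = np.asarray(images, dtype=float)                    # (k, n) in [0, 1]
    times = np.asarray(exposure_times, dtype=float).reshape(-1, 1)
    w = np.clip(1.0 - np.abs(2.0 * imgs - 1.0), 1e-4, None)   # trust mid-tones
    return (w * (imgs / times)).sum(axis=0) / w.sum(axis=0)

# A pixel of true radiance 0.3 seen under three exposures (no clipping here):
radiance = hdr_merge([[0.15], [0.3], [0.6]], [0.5, 1.0, 2.0])
```

Merging well-exposed measurements from several brackets is what lets subtle, low-contrast defects survive sensor noise and quantization.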
|
 |
 |
Artificial Compound Eye Inspired by Imaging Principle Of Xenos Peckii
Dongmin Keum, Daniel S. Jeon, Min H. Kim, Ki-Hun Jeong
Proc. IEEE International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS 2015)
Anchorage, Alaska, USA, Jun. 21, 2015, 4 pages.
|
[DL][BibTeX] |
 |
Single Camera based Miniaturized Stereoscopic System for 3D Endoscopic Imaging
Kyung-Won Jang, Sung-Pyo Yang, Seung-Hwan Baek, Min H. Kim, Ki-Hun Jeong
Proc. SPIE Nano-Bio Sensing Imaging and Spectroscopy (NBSIS 2015)
Jeju, Korea, Feb. 25, 2015.
|
[DL][BibTeX] |
 |
The Three-Dimensional Evolution of Hyperspectral Imaging
Min H. Kim
In Smart Sensors and Systems
edited by Youn-Long Lin, Chong-Min Kyung, Hiroto Yasuura, and Yongpan Liu
Chapter II.1, pp. 63-84, Springer
|
[Publisher][BibTeX] |
|
Hyperspectral imaging has become more accessible nowadays as an image-based acquisition tool for physically-meaningful measurements. This technology is now evolving from classical 2D imaging to 3D imaging, allowing us to measure physically-meaningful reflectance on 3D solid objects. This chapter provides a brief overview of the foundations of hyperspectral imaging and introduces advanced applications of hyperspectral 3D imaging. It first surveys the fundamentals of optics and the calibration processes of hyperspectral imaging, and then studies two typical designs of hyperspectral imaging systems. In addition, the chapter briefly reviews state-of-the-art applications of hyperspectral 3D imaging for measuring intrinsic hyperspectral properties of surfaces on 3D solid objects.
|
 |
 |
Lock N' LoL: Mitigating Smartphone Disturbance in Co-located Social Interactions
Minsam Ko, Chayanin Wong, Sunmin Son, Euigon Jung, Uichin Lee,
Seungwoo Choi, Sungho Jo, Min H. Kim
Proc. ACM CHI 2015 Extended Abstracts
April 2015, Work in Progress, pp. 1561--1566
|
[DL][PDF][BibTeX] |
|
We aim to improve the quality of time spent in co-located social interactions by encouraging people to limit their smartphone usage together. We present a prototype called Lock n' LoL, an app that allows co-located users to lock their smartphones and limit their usage by requiring users to ask for explicit permission before use. Based on our preliminary study, we designed two modes to deal with the dynamics of smartphone use during co-located social interactions: (1) a socializing mode (i.e., locking smartphones to limit usage together) and (2) a temporary use mode (i.e., requesting/granting temporary smartphone use). We conducted a pilot study (n = 20) with our working prototype, and the results demonstrated the helpfulness of Lock n' LoL when used in socializing.
|
 |
 |
Stereo Fusion using a Refractive Medium on a Binocular Base
Seung-Hwan Baek, Min H. Kim
Proc. Asian Conference on Computer Vision (ACCV) 2014
Apr. 16, 2015, Springer LNCS Vol. 9004, Part II, pp. 503-518 (oral presentation)
|
† Best Application Paper Award & Best Demo Award
[DL][PDF][BibTeX][PPT] |
|
The performance of depth reconstruction in binocular stereo relies on how adequate the predefined baseline is for a target scene. Long-baseline stereo discriminates depth better than short-baseline stereo, but it often suffers from spatial artifacts. Short-baseline stereo can provide a more elaborate depth map with fewer artifacts, while its depth resolution tends to be biased or coarse due to the short disparity. In this paper, we first propose a novel optical design of heterogeneous stereo fusion on a binocular imaging system with a refractive medium, where the binocular stereo part operates as long-baseline stereo and the refractive stereo module functions as short-baseline stereo. We then introduce a stereo fusion workflow that combines the refractive and binocular stereo algorithms to estimate fine depth information through this fusion design. The quantitative and qualitative results validate the performance of our stereo fusion system in measuring depth, compared with traditional homogeneous stereo approaches.
|
 |
 |
Design and microfabrication of
an artificial compound eye inspired by vision mechanism of Xenos peckii
Dongmin Keum, Inchang Choi, Min H. Kim, Ki-Hun Jeong
Proc. SPIE Photonics West 2015
San Francisco, California, USA, Feb. 7–8 2015, Vol. 9341, Article. 4.
|
[DL][BibTeX] |
 |
Multispectral Photometric Stereo for Acquiring High-Fidelity Surface Normals
Giljoo Nam, Min H. Kim
IEEE Computer Graphics and Applications (CG&A)
Sep. 09, 2014, 34(6), pp.57--68.
|
[DL][PDF][BibTeX] |
|
Multispectral imaging has become more accessible as a physically-meaningful image-based measurement tool, and photometric stereo has been commonly practiced for digitizing 3D shapes with simplicity for more than three decades. However, these two imaging techniques have rarely been combined in a 3D imaging application. Reconstructing the shape of a 3D object using photometric stereo is still challenging due to optical phenomena such as indirect illumination, specular reflection, and self-shadowing. In addition, removing interreflection in photometric stereo is a traditional chicken-and-egg problem, as we need to account for interreflection without knowing the geometry. In this paper, we present a novel multispectral photometric stereo method that allows us to remove interreflection on diffuse materials using multispectral reflectance information. Our proposed method can be easily integrated into an existing photometric stereo system by simply substituting the current camera with a multispectral camera, as our method does not rely on additional structured or colored lights. We demonstrate several benefits of our multispectral photometric stereo method, such as removing interreflection and reconstructing the 3D shapes of objects to a high accuracy.
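Classic Lambertian photometric stereo, which this work builds on, recovers a normal and albedo per pixel by solving I = ρ(L·n) in the least-squares sense over k light directions. A minimal single-pixel sketch (illustrative only; it omits the paper's multispectral interreflection removal):

```python
import numpy as np

def photometric_stereo(I, L):
    """I: (k, n) intensities under k lights; L: (k, 3) unit light directions."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # G = albedo * normal, (3, n)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)        # unit surface normals
    return normals, albedo

# Three non-coplanar unit light directions and one pixel with a known normal:
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
n_true, rho = np.array([0.0, 0.0, 1.0]), 0.7
I = rho * (L @ n_true).reshape(-1, 1)              # Lambertian shading, no shadows
normals, albedo = photometric_stereo(I, L)
```

Interreflection violates the shading model above by adding light the equations do not account for, which is the gap the paper's multispectral cue addresses.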
|
 |
 |
Paper3D: Bringing Casual 3D Modeling to a Multi-Touch Interface
Patrick Paczkowski, Julie Dorsey, Holly Rushmeier, Min H. Kim
Proc. ACM User Interface Software & Technology Symposium (UIST) 2014
Oct. 5, 2014, pp. 23-32 (oral presentation).
|
[PDF][DL][Video][BibTeX] |
|
A 3D modeling system that provides all-inclusive functionality is generally too demanding for a casual 3D modeler to learn. In recent years, there has been a shift towards developing more approachable systems, with easy-to-learn, intuitive interfaces. However, most modeling systems still employ mouse and keyboard interfaces, despite the ubiquity of tablet devices and the benefits of multi-touch interfaces for 3D modeling. In this paper, we introduce an alternative 3D modeling paradigm for creating developable surfaces, inspired by traditional papercrafting, and implemented as a system designed from the start for a multi-touch tablet. We demonstrate the process of assembling complex 3D scenes from a collection of simpler models, in turn shaped through operations applied to sheets of virtual paper. The modeling and assembling operations mimic familiar, real-world operations performed on paper, allowing users to quickly learn our system with very little guidance. We outline key design decisions made throughout the development process, based on feedback obtained through collaboration with target users. Finally, we include a range of models created in our system.
|
 |
 |
Locally Adaptive Products for Genuine Spherical Harmonic Lighting
Joo Ho Lee, Min H. Kim
Proc. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) 2014
Jun. 2, 2014, pp. 27-36 (oral presentation)
|
[DL][PDF][BibTeX] |
|
Precomputed radiance transfer techniques have been broadly used to support complex illumination effects on diffuse and glossy objects. Although the wavelet domain is efficient for handling all-frequency illumination, the spherical harmonic domain is more convenient for interactively changing lights and views on the fly, owing to its rotational invariance. For interactive lighting, however, the number of coefficients must be limited and the high orders of coefficients have to be eliminated. Spherical harmonic lighting has therefore been preferred and practiced only for interactive soft-diffuse lighting. In this paper, we propose a simple but practical filtering solution using locally adaptive products of high-order harmonic coefficients within the genuine spherical harmonic lighting framework. Our approach works on the fly in two stages. We first conduct multi-level filtering on vertices to determine regions of interest, where high orders of harmonics are necessary for high-frequency lighting. The initially determined regions of interest are then refined by filling in incomplete regions through traversal of neighboring vertices. Even without relying on graphics hardware, the proposed method can compute high-order products of spherical harmonic lighting for both diffuse and specular lighting.
|
 |
 |
Building a Two-Way Hyperspectral Imaging System with Liquid Crystal Tunable Filters
Haebom Lee, Min H. Kim
Proc. Int. Conf. Image and Signal Processing (ICISP) 2014,
LNCS Vol. 8509,
Jul. 2, 2014, pp. 26-34 (oral presentation)
|
[DL][PDF][BibTeX] |
|
Liquid crystal tunable filters can provide rapid and vibrationless selection of any wavelength in their transmission spectrum, so they have been broadly used in building multispectral or hyperspectral imaging systems. However, the spectral range of the filters is limited to a certain band, such as the visible or near-infrared spectrum. In general hyperspectral imaging applications, we are therefore forced to choose a certain range of target spectrum, for instance either visible or near-infrared. Owing to the nature of polarizing optical elements, imaging systems that combine multiple tunable filters have rarely been built. In this paper, we therefore present our experience of building a two-way hyperspectral imaging system with liquid crystal tunable filters. The system allows us to capture hyperspectral radiance continuously from the visible to the near-infrared spectrum (400-1100 nm at 7 nm intervals), a range 2.3 times wider, with 34 times more channels, than that of a common RGB camera. We report how we handle the multiple polarizing elements to extend the spectral range of the imager with the multiple tunable filters, and we propose an affine-based method to register the hyperspectral image channels at each wavelength.
|
 |
 |
Hyper3D: 3D Graphics Software for Examining Cultural Artifacts
Min H. Kim, Holly Rushmeier, John ffrench, Irma Passeri, David Tidmarsh
ACM Journal on Computing and Cultural Heritage (JOCCH)
7(3), Aug. 01, 2014, pp. 1:1-19
|
[DL][PDF][BibTeX] |
|
Art conservators now have access to a wide variety of digital imaging techniques to assist in examining and documenting physical works of art. Commonly used techniques include hyperspectral imaging, 3D scanning and medical CT imaging. However, viewing most of this digital image data frequently requires both specialized software, which is often associated with a particular type of acquisition device, and professional knowledge of and experience with each type of data. In addition, many of these software packages are focused on particular applications (such as medicine or remote sensing) and do not permit users to access and fully exploit all the information contained in the data. In this paper, we address two practical barriers to using high-tech digital data in art conservation. First, users must deal with a wide variety of interfaces specialized for applications besides conservation. We provide an open-source software tool with a single intuitive interface consistent with conservators’ needs that handles various types of 2D and 3D image data and preserves user-generated metadata and annotations. Second, previous software has largely allowed visualizing a single type or only a few types of data. The software we present is designed and structured to accommodate multiple types of digital imaging data, including as yet unspecified or unimplemented formats, in an integrated environment. This allows conservators to access different forms of information and to view a variety of image types simultaneously.
|
 |
 |
Digital Cameras: Definitions and Principles
Min H. Kim, Nicolas Hautiere, Celine Loscos
In 3D Video: From Capture to Diffusion
edited by Laurent Lucas, Celine Loscos and Yannick Remion
Chapter 2, pp. 23-42, Wiley-ISTE, London
|
[Publisher][BibTeX] |
|
Digital cameras are a common feature of most mobile phones. In this chapter, we will outline the basics of digital cameras to help users understand the differences between image features formed by sensors and optics in order to control their governing parameters more precisely. We will examine a digital camera that captures not only still images but also video, given that most modern cameras are capable of capturing both types of image data. This chapter provides a general overview of current camera components required in three-dimensional (3D) processing and labeling, which will be examined in the remainder of this book. We will study each stage of light transport via the camera’s optics, before light is captured as an image by the sensor and stored in a given format. Section 2.2 introduces the fundamentals of light transport as well as notations for wavelength and color spaces, commonly used in imaging. Section 2.3 examines how cameras capture light and transform it into a digital image. This section also describes the details of different components in a camera and their influence on the final image. In particular, we will provide a brief overview of different optical components and sensors, examining their advantages and limitations. This section also explains how these limitations can be corrected by applying post-processing algorithms to the acquired images. Section 2.4 investigates the link between camera models and the human visual system in terms of perception, optics and color fidelity. Section 2.5 briefly explores two current camera techniques: high dynamic range (HDR) and hyperspectral imaging.
|
 |
 |
Preference and Artifact Analysis for Video Transitions of Places
James Tompkin, Min H. Kim, Kwang In Kim, Jan Kautz, Christian Theobalt
ACM Transactions on Applied Perception (TAP), presented at SAP 2013
10(3), Aug. 01, 2013, pp. 13:1-19
|
[DL][PDF][Video][BibTeX] |
|
Emerging interfaces for video collections of places attempt to link similar content with seamless transitions. However, the automatic computer vision techniques that enable these transitions have many failure cases which lead to artifacts in the final rendered transition. Under these conditions, which transitions are preferred by participants and which artifacts are most objectionable? We perform an experiment with participants comparing seven transition types, from movie cuts and dissolves to image-based warps and virtual camera transitions, across five scenes in a city. This document describes how we condition this experiment on slight and considerable view change cases, and how we analyze the feedback from participants to find their preference for transition types and artifacts. We discover that transition preference varies with view change, that automatic rendered transitions are significantly preferred even with some artifacts, and that dissolve transitions are comparable to less-sophisticated rendered transitions. This leads to insights into what visual features are important to maintain in a rendered transition, and to an artifact ordering within our transitions.
|
 |
 |
3D Graphics Techniques for Capturing and Inspecting Hyperspectral Appearance
Min H. Kim
IEEE International Symposium on Ubiquitous Virtual Reality (ISUVR) 2013
Jul. 10, 2013, pp. 15-18
|
[DL][PDF][BibTeX] |
|
Feature films and computer games exhibit stunning photorealistic computer imagery in motion. The challenges in computer graphics realism lie in acquiring physically accurate material appearance in a high spectral resolution and representing the appearance with perceptual faithfulness. While many approaches for true spectral rendering have been tried in computer graphics, they have not been extensively explored due to the lack of reliable 3D spectral data. Recently, a hyperspectral 3D acquisition system and viewing software have been introduced to the graphics community. In this paper, we review the latest acquisition and visualization techniques for hyperspectral imaging in graphics. We give an overview of the 3D imaging system for capturing hyperspectral appearance on 3D objects and the visualization software package to exploit such high-tech digital data.
|
 |
 |
Developing Open-Source Software for Art Conservators
Min H. Kim, Holly Rushmeier, John ffrench, Irma Passeri
International Symposium on Virtual Reality, Archaeology and Cultural Heritage
(VAST 2012)
Nov. 19, 2012, pp. 97-104 |
† Best Paper Award
[DL][PDF][Video][BibTeX] |
|
Art conservators now have access to a wide variety of digital imaging techniques to assist in examining and documenting physical works of art. Commonly used techniques include hyperspectral imaging, 3D scanning and medical CT imaging. However, most of the digital image data requires specialized software to view. The software is often associated with a particular type of acquisition device, and professional knowledge and experience are needed for each type of data. In addition, these software packages are often focused on particular applications (such as medicine or remote sensing) and are not designed to allow the free exploitation of these expensively acquired digital data. In this paper, we address two practical barriers to using high-tech digital data in art conservation. First, there is the barrier of dealing with a wide variety of interfaces specialized for applications outside of art conservation. We provide an open-source software tool with a single intuitive user interface that can handle various types of 2D and 3D image data, consistent with the needs of art conservation. Second, there is the barrier that previous software has been focused on a single data type. The software presented here is designed and structured to accommodate various types of digital imaging data, including as yet unspecified data types, in an integrated environment. This allows conservators to freely navigate different kinds of imaging information and to integrate their different types of imaging observations.
|
 |
 |
3D Imaging Spectroscopy for Measuring Hyperspectral Patterns on Solid Objects
Min H. Kim, Todd Alan Harvey, David S. Kittle, Holly Rushmeier, Julie Dorsey,
Richard O. Prum, David J. Brady
ACM Transactions on Graphics (TOG), presented at SIGGRAPH 2012
31(4), Aug. 05, 2012, pp. 38:1-11 |
[DL][PDF][Video][BibTeX] |
|
Sophisticated methods for true spectral rendering have been developed in computer graphics to produce highly accurate images. In addition to traditional applications in visualizing appearance, such methods have potential applications in many areas of scientific study. In particular, we are motivated by the application of studying avian vision and appearance. An obstacle to using graphics in this application is the lack of reliable input data. We introduce an end-to-end measurement system for capturing spectral data on 3D objects. We present the modification of a recently developed hyperspectral imager to make it suitable for acquiring such data in a wide spectral range at high spectral and spatial resolution. We capture four megapixel images, with data at each pixel from the near-ultraviolet (359 nm) to near-infrared (1,003 nm) at 12 nm spectral resolution. We fully characterize the imaging system, and document its accuracy. This imager is integrated into a 3D scanning system to enable the measurement of the diffuse spectral reflectance and fluorescence of specimens. We demonstrate the use of this measurement system in the study of the interplay between the visual capabilities and appearance of birds. We show further the use of the system in gaining insight into artifacts from geology and cultural heritage.
|
 |
 |
Insitu: Sketching Architectural Designs in Context
Patrick Paczkowski, Min H. Kim, Yann Morvan, Julie Dorsey, Holly Rushmeier, Carol O'Sullivan
ACM Transactions on Graphics (TOG), presented at SIGGRAPH Asia 2011
30(6), Dec. 12, 2011, pp. 182:1-10 |
[DL][PDF][Video][BibTeX]
|
|
Architecture is design in spatial context. The only current methods for representing context involve designing in a heavyweight computer-aided design system, using a full model of existing buildings and landscape, or sketching on a panoramic photo. The former is too cumbersome; the latter is too restrictive in viewpoint and in the handling of occlusions and topography. We introduce a novel approach to presenting context such that it is an integral component in a lightweight conceptual design system. We represent sites through a fusion of data available from different sources. We derive a site model from geographic elevation data, on-site point-to-point distance measurements, and images of the site. To acquire and process the data, we use publicly available data sources, multi-dimensional scaling techniques and refinements of recent bundle adjustment techniques. We offer a suite of interactive tools to acquire, process, and combine the data into a lightweight stroke and image-billboard representation. We create multiple and linked pop-ups derived from images, forming a lightweight representation of a three-dimensional environment. We implemented our techniques in a stroke-based conceptual design system we call Insitu. We developed our work through continuous interaction with professional designers. We present designs created with our new techniques integrated in a conceptual design system.
|
 |
 |
Radiometric Characterization of Spectral Imaging for Textual Pigment Identification
Min H. Kim, Holly Rushmeier
International Symposium on Virtual Reality, Archaeology and Cultural Heritage
(VAST 2011)
Oct. 18, 2011, pp. 57-64 |
[DL][PDF][BibTeX] |
|
Digital imaging of cultural heritage artifacts has become a standard practice. Typically, standard commercial cameras, often commodity rather than scientific grade cameras, are used for this purpose. Commercial cameras are optimized for plausible visual reproduction of a physical scene with respect to trichromatic human vision. However, visual reproduction is just one application of digital images in heritage. In this paper, we discuss the selection and characterization of an alternative imaging system that can be used for the physical analysis of artifacts as well as visually reproducing their appearance. The hardware and method we describe offers a middle ground between the low cost and ease of commodity cameras and the high cost and complexity of hyperspectral imaging systems. We describe the selection of a system, a protocol for characterizing the system and provide a case study using the system in the physical analysis of a medieval manuscript. |
 |
 |
Design and Fabrication of a UV-Visible Coded Aperture Spectral Imager (CASI)
David Kittle, Daniel L. Marks, Min H. Kim, Holly Rushmeier, David J. Brady
Frontiers in Optics 2011, Optical Society of America (OSA)
Oct. 16, 2011, paper FTuZ3 |
[DL][PDF][BibTeX] |
|
CASI is a snapshot-capable UV-visible spectral imager for measuring bird plumage. Near-apochromatic UV-visible optics were designed and built with an MTF suitable for a 4 Mpx detector. Wide-spectral-bandwidth data from CASI is then presented. |
 |
Edge-Aware Color Appearance
Min H. Kim, Tobias Ritschel, Jan Kautz
ACM Transactions on Graphics (TOG), presented at SIGGRAPH 2011
30(2), Apr. 01, 2011, pp. 13:1-9 |
[DL][PDF][Data][BibTeX] |
|
Color perception is recognized to vary with surrounding spatial structure, but the impact of edge smoothness on color has not been studied in color appearance modeling. In this work, we study the appearance of color under different degrees of edge smoothness. A psychophysical experiment was conducted to quantify the change in perceived lightness, colorfulness and hue with respect to edge smoothness. We confirm that color appearance, in particular lightness, changes noticeably with increased smoothness. Based on our experimental data, we have developed a computational model that predicts this appearance change. The model can be integrated into existing color appearance models. We demonstrate the applicability of our model on a number of examples. |
 |
High-Fidelity Colour Reproduction for High-Dynamic-Range Imaging
Min H. Kim
PhD Thesis in Computer Science
2010, University College London, London, UK |
[DL][PDF][BibTeX] |
|
The aim of this thesis is to develop a colour reproduction system for high-dynamic-range (HDR) imaging. Classical colour reproduction systems fail to reproduce HDR images because current characterisation methods and colour appearance models fail to cover the dynamic range of luminance present in HDR images. HDR tone-mapping algorithms have been developed to reproduce HDR images on low-dynamic-range media such as LCD displays. However, most of these models have only considered luminance compression from a photographic point of view and have not explicitly taken into account colour appearance. Motivated by the idea to bridge the gap between cross-media colour reproduction and HDR imaging, this thesis investigates the fundamentals and the infrastructure of cross-media colour reproduction. It restructures cross-media colour reproduction with respect to HDR imaging, and develops a novel cross-media colour reproduction system for HDR imaging. First, our HDR characterisation method enables us to measure HDR radiance values to a high accuracy that rivals spectroradiometers. Second, our colour appearance model enables us to predict human colour perception under high luminance levels. We first built a high-luminance display in order to establish a controllable high-luminance viewing environment. We conducted a psychophysical experiment on this display device to measure perceptual colour attributes. A novel numerical model for colour appearance was derived from our experimental data, which covers the full working range of the human visual system. Our appearance model predicts colour and luminance attributes under high luminance levels. In particular, our model predicts perceived lightness and colourfulness to a significantly higher accuracy than other appearance models. Finally, a complete colour reproduction pipeline is proposed using our novel HDR characterisation and colour appearance models. 
Results indicate that our reproduction system outperforms other reproduction methods with statistical significance. Our colour reproduction system provides high-fidelity colour reproduction for HDR imaging, and successfully bridges the gap between cross-media colour reproduction and HDR imaging. |
 |
 |
Perceptual Influence of Approximate Visibility in Indirect Illumination
Insu Yu, Andrew Cox, Min H. Kim, Tobias Ritschel, Thorsten Grosch,
Carsten Dachsbacher, Jan Kautz
ACM Transactions on Applied Perception (TAP), presented at APGV 2009
6(4), Sep. 01, 2009, pp. 24:1-14 |
[DL][PDF][BibTeX]
|
|
In this paper we evaluate the use of approximate visibility for efficient global illumination. Traditionally, accurate visibility is used in light transport. However, the indirect illumination we perceive on a daily basis is rarely of a high-frequency nature, as the most significant aspect of light transport in real-world scenes is diffuse, and thus displays a smooth gradation. This raises the question of whether accurate visibility is perceptually necessary in this case. To answer this question, we conduct a psychophysical study on the perceptual influence of approximate visibility on indirect illumination. This study reveals that accurate visibility is not required and that certain approximations may be introduced. |
 |
Modeling Human Color Perception under Extended Luminance Levels
Min H. Kim, Tim Weyrich, Jan Kautz
ACM Transactions on Graphics (TOG), presented at SIGGRAPH 2009
28(3), Jul. 27, 2009, pp. 27:1-9 |
[DL][PDF][Examples]
[Data][BibTeX][Code] |
|
Display technology is advancing quickly with peak luminance increasing significantly, enabling high-dynamic-range displays. However, perceptual color appearance under extended luminance levels has not been studied, mainly due to the unavailability of psychophysical data. Therefore, we conduct a psychophysical study in order to acquire appearance data for many different luminance levels (up to 16,860 cd/m2) covering most of the dynamic range of the human visual system. These experimental data allow us to quantify human color perception under extended luminance levels, yielding a generalized color appearance model. Our proposed appearance model is efficient, accurate and invertible. It can be used to adapt the tone and color of images to different dynamic ranges for cross-media reproduction while maintaining appearance that is close to human perception. |
 |
 |
Consistent Scene Illumination using a Chromatic Flash
Min H. Kim,
Jan Kautz
Eurographics Workshop on Computational Aesthetics (CAe) 2009
May 28, 2009, pp. 83-89 |
[DL][PDF][Video][BibTeX] |
|
Flash photography is commonly used in low-light conditions to prevent noise and blurring artifacts. However, flash photography commonly leads to a mismatch between scene illumination and flash illumination, due to the bluish light that flashes emit. Not only does this change the atmosphere of the original scene illumination, it also makes it difficult to perform white balancing because of the illumination differences. Professional photographers sometimes apply colored gel filters to the flashes in order to match the color temperature. While effective, this is impractical for the casual photographer. We propose a simple but powerful method to automatically match the correlated color temperature of the auxiliary flash light with that of scene illuminations allowing for well-lit photographs while maintaining the atmosphere of the scene. Our technique consists of two main components. We first estimate the correlated color temperature of the scene, e.g., during image preview. We then adjust the color temperature of the flash to the scene’s correlated color temperature, which we achieve by placing a small trichromatic LCD in front of the flash. We demonstrate the effectiveness of this approach with a variety of examples. |
 |
 |
Imperfect Shadow Maps for Efficient Computation of Indirect Illumination
Tobias Ritschel, Thorsten Grosch, Min H. Kim, Hans-Peter Seidel,
Carsten Dachsbacher,
Jan Kautz
ACM Transactions on Graphics (TOG), presented at SIGGRAPH Asia 2008
27(5), Dec. 10, 2008, pp. 129:1-8 |
[DL][PDF][Video][BibTeX] |
|
We present a method for interactive computation of indirect illumination in large and fully dynamic scenes based on approximate visibility queries. While the high-frequency nature of direct lighting requires accurate visibility, indirect illumination mostly consists of smooth gradations, which tend to mask errors due to incorrect visibility. We exploit this by approximating visibility for indirect illumination with imperfect shadow maps: low-resolution shadow maps rendered from a crude point-based representation of the scene. These are used in conjunction with a global illumination algorithm based on virtual point lights, enabling indirect illumination of dynamic scenes at real-time frame rates. We demonstrate that imperfect shadow maps are a valid approximation to visibility, which makes the simulation of global illumination an order of magnitude faster than using accurate visibility. |
 |
 |
Characterization for High Dynamic Range Imaging
Min H. Kim, Jan Kautz
Computer Graphics Forum (CGF), presented at EUROGRAPHICS 2008
27(2), Apr. 24, 2008, pp. 691-697 |
[DL][PDF][BibTeX] |
|
In this paper we present a new practical camera characterization technique to improve color accuracy in high dynamic range (HDR) imaging. Camera characterization refers to the process of mapping device-dependent signals, such as digital camera RAW images, into a well-defined color space. This is a well-understood process for low dynamic range (LDR) imaging and is part of most digital cameras, usually mapping from the raw camera signal to the sRGB or Adobe RGB color space. This paper presents an efficient and accurate characterization method for high dynamic range imaging that extends previous methods originally designed for LDR imaging. We demonstrate that our characterization method is very accurate even in unknown illumination conditions, effectively turning a digital camera into a measurement device that measures physically accurate radiance values, both in terms of luminance and color, rivaling more expensive measurement instruments. |
 |
 |
Consistent Tone Reproduction
Min H. Kim, Jan Kautz
IASTED Conference on Computer Graphics and Imaging (CGIM) 2008
Feb. 13, 2008, pp.152-159 |
[DL][PDF][BibTeX][Software] |
|
In order to display images of high dynamic range (HDR), tone reproduction operators are usually applied that reduce the dynamic range to that of the display device. Generally, parameters need to be adjusted for each new image to achieve good results. Consistent tone reproduction across different images is therefore difficult to achieve, which is especially true for global operators and to some lesser extent also for local operators. We propose an efficient global tone reproduction method that achieves robust results across a large variety of HDR images without the need to adjust parameters. Consistency and efficiency make our method highly suitable for automated dynamic range compression, which for instance is necessary when a large number of HDR images need to be converted. |
 |
Rendering High Dynamic Range Images
Min H. Kim, Lindsay W. MacDonald
EVA 2006 London Conference, EVA Conferences International (EVA) 2006
Jul. 25, 2006, pp. 22.1-11 |
[PDF][BibTeX] |
|
A high dynamic range (HDR) imaging system has been developed to overcome the dynamic range limitations of a typical digital image reproduction system. The first stage is an HDR image-assembling algorithm, which constructs an HDR image from a sequence of multiple image exposures of a scene. The second stage utilises a new file format to store the HDR image in three primaries of 16-bits each. The third stage, described in this paper, uses a new tone-mapping algorithm to display HDR images on typical displays, optimised for sRGB devices. Six HDR tone-mapping techniques were evaluated by observers, and the new technique showed the best performance in all four category judgements: overall, tone, colour, and sharpness. |
 |
KAIST-VCLAB Theses Library
written by KAIST-VCLAB members since 2014 |
|
© Visual Computing Laboratory, School of Computing, KAIST.
All rights reserved.
|