Look at the image from very close, then from far away.

Image Filtering and Hybrid Images
CS484: Introduction to Computer Vision

Logistics

Download the raw material

Part 1: Questions

Part 2: Code

Submission

Overview

We will write an image convolution function (image filtering) and use it to create hybrid images! The technique was invented by Oliva, Torralba, and Schyns and published in a paper at SIGGRAPH 2006. High frequency image content tends to dominate perception, but at a distance only low frequency (smooth) content is perceived. By blending the high frequency content of one image with the low frequency content of another, we can create a hybrid image that is perceived differently at different distances.

Image Filtering

This is a fundamental image processing tool (see Chapter 3.2 of Szeliski and the lecture materials to learn about image filtering, specifically linear filtering). Some Python libraries provide efficient image filtering functions, but we will write our own from scratch via convolution.

Task: Implement 2D convolution in my_filter2D(). Your filtering algorithm should:

  1. Pad the input image with zeros.
  2. Support grayscale and color images.
  3. Support arbitrarily shaped odd-dimension filters (e.g., 7x9 filters but not 4x5 filters).
  4. Raise an error for even-dimension filters, as their output is undefined.
  5. Return an identical image with an identity filter.
  6. Return a filtered image which is the same resolution as the input image.
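The requirements above can be sketched as follows. This is a minimal, unoptimized reference in pure NumPy, not the required implementation; your my_filter2D() may differ in signature and error handling.

```python
import numpy as np

def my_filter2D(image, kernel):
    """Zero-padded 2D convolution for grayscale or color images.

    A minimal sketch of the assignment's requirements; a real
    submission would likely vectorize the inner loops.
    """
    kh, kw = kernel.shape
    if kh % 2 == 0 or kw % 2 == 0:
        raise ValueError("Filter dimensions must be odd.")

    # Treat a grayscale image as a single-channel color image.
    squeeze = (image.ndim == 2)
    if squeeze:
        image = image[:, :, np.newaxis]
    h, w, _ = image.shape

    # Zero-pad spatial dimensions so the output matches the input size.
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw), (0, 0)), mode="constant")

    # Flip the kernel: convolution, not cross-correlation.
    flipped = kernel[::-1, ::-1]

    out = np.zeros(image.shape, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + kh, x:x + kw, :]
            out[y, x, :] = np.sum(patch * flipped[:, :, np.newaxis],
                                  axis=(0, 1))
    return out[:, :, 0] if squeeze else out
```

With an identity filter (a single 1 at the center of a zero kernel), this returns the input image unchanged, which is one of the cases hw2_testcase.py checks.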

We have provided hw2_testcase.py to help you debug your image filtering algorithm.

Potentially useful: numpy.pad(), cv2.getGaussianKernel(), numpy.fft.fft2(), numpy.fft.ifft2().

Forbidden functions: cv2.GaussianBlur(), cv2.filter2D(), cv2.sepFilter2D(), numpy.convolve(), scipy.signal.convolve2d(), scipy.signal.fftconvolve(), and any other convolution or filtering functions. You can use these for testing, but they should not appear in your executed code.

Hybrid Images

A hybrid image is the sum of a low-pass filtered version of a first image and a high-pass filtered version of a second image. We must tune a free parameter for each image pair that controls how much high frequency to remove from the first image and how much low frequency to leave in the second image. This is called the "cut-off frequency". The paper suggests using two cut-off frequencies, one tuned for each image, and you are free to try this too. In the starter code, the cut-off frequency is controlled by changing the standard deviation of the Gaussian kernel used to construct a hybrid image.

Task: Implement hybrid image creation in gen_hybrid_image.py.

We provide 5 pairs of aligned images which can be merged reasonably well into hybrid images. The alignment is important because it affects the perceptual grouping (read the paper for details). We encourage you to create additional examples, e.g., change of expression, morph between different objects, change over time, etc. For inspiration, please see the hybrid images project page.
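The construction described above (low-pass one image, high-pass the other, sum the two) can be sketched as below. For brevity this sketch is grayscale-only and uses scipy.signal.convolve2d purely for illustration; in your submission the filtering must go through your own my_filter2D() instead. The helper names here are illustrative, not the starter code's.

```python
import numpy as np
from scipy.signal import convolve2d  # illustration only; forbidden in your submission

def gaussian_kernel(cutoff_std):
    """Square 2D Gaussian kernel; size grows with the standard deviation."""
    size = 4 * int(cutoff_std) + 1          # odd size covering about +/-2 std
    ax = np.arange(size) - size // 2
    g = np.exp(-0.5 * (ax / cutoff_std) ** 2)
    g /= g.sum()
    return np.outer(g, g)                   # separable -> outer product

def make_hybrid(image1, image2, cutoff_std):
    """Grayscale hybrid: low frequencies of image1 + high frequencies of image2."""
    kernel = gaussian_kernel(cutoff_std)
    low = convolve2d(image1, kernel, mode="same", boundary="fill")
    high = image2 - convolve2d(image2, kernel, mode="same", boundary="fill")
    return np.clip(low + high, 0.0, 1.0)
```

A larger cutoff_std blurs more aggressively, so less high frequency survives in the first image and more is subtracted out of the second.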

Extra Credit:

For extra credit, you must still submit code for the original task. Beyond that, feel free to add your own functions or parameters to switch between the original task and the extra credit task.

Writeup

Describe your process and algorithm, show your results, describe any extra credit, and tell us any other information you feel is relevant. We provide you with a LaTeX template. Please compile it into a PDF and submit it along with your code.

Task: Submit writeup/writeup.pdf

Rubric

Hybrid Image Example

For the example shown at the top of the page, the two original images look like this:

The low-pass (blurred) and high-pass versions of these images look like this:

The high frequency image is zero-mean with negative values, so it is visualized by adding 0.5. In the resulting visualization, bright values are positive and dark values are negative.
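The visualization shift is a one-liner; the clip keeps large positive or negative frequency components inside the displayable [0, 1] range. The function name is illustrative, not from the starter code.

```python
import numpy as np

def visualize_high_freq(high):
    """Shift a zero-mean high-frequency image into [0, 1] for display.

    After the shift, 0.5 (mid-gray) corresponds to zero, bright
    pixels are positive, and dark pixels are negative.
    """
    return np.clip(high + 0.5, 0.0, 1.0)
```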

Adding the high and low frequencies together gives you the image at the top of this page. If you're having trouble seeing the multiple interpretations of the image, a useful way to visualize the effect is by progressively downsampling the hybrid image:

The starter code provides vis_hybrid_image.py to save and display such visualizations.
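The idea behind that visualization is simple: repeatedly halve the resolution, which mimics viewing the hybrid image from farther away. A rough sketch using naive decimation (the starter code's version additionally pads the scales and concatenates them side by side):

```python
import numpy as np

def progressive_downsample(image, scales=4):
    """Return the image at successively halved resolutions.

    Each level keeps every other row and column (naive decimation);
    the starter code's vis_hybrid_image.py is more elaborate.
    """
    results = [image]
    for _ in range(scales - 1):
        results.append(results[-1][::2, ::2])
    return results
```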

Credits

The Python version of this project was revised by Dahyun Kang, Inseung Hwang, Donggun Kim, Jungwoo Kim, and Min H. Kim. The original project description and code were created by James Hays, based on a similar project by Derek Hoiem.