[Figure 1. Hybrid image of cat-dog. Look at the image from very close, then from far away.]

Logistics

You can download the raw material from KLMS.

To do

  • Answer the questions in questions/.

  • Code template in code/.

  • Writeup template in writeup/.

Submission

  • Due: Wednesday, Sept. 27th 2023, 23:59

  • Do NOT change the file structure

  • Use create_submission_zip.py to create zip file.

  • Rename your zip file to hw2_studentID_name.zip (ex: hw2_20231234_JiwoongNa.zip).

  • The zip file must include your code, question answers (in pdf) and writeup (in pdf).

  • Submit your homework to KLMS.

  • Late submissions will be penalized 20% per day after the due date (see the course introduction for details).

QnA

  • Please use the KLMS QnA board, and title your post like: [hw2] question about … .

  • Note that the TAs try to answer questions ASAP, but it may take 1-2 days on weekends. Please post your question on a weekday if possible.

  • TA in charge: Jiwoong Na <jwna1885@kaist.ac.kr>

Overview

We will write an image convolution function (image filtering) and use it to create hybrid images! The technique was invented by Oliva, Torralba, and Schyns and published at SIGGRAPH 2006. High-frequency image content tends to dominate perception, but at a distance only low-frequency (smooth) content is perceived. By blending the high-frequency content of one image with the low-frequency content of another, we can create a hybrid image that is perceived differently at different distances.

Forbidden functions

  • cv2.GaussianBlur(), cv2.filter2D(), cv2.sepFilter2D(), numpy.convolve(), scipy.signal.convolve2d(), scipy.signal.fftconvolve()

  • Any other convolution or filtering function. You may use these for testing, but they must not appear in your executed code.

Useful functions

  • numpy.pad(), cv2.getGaussianKernel(), numpy.fft.fft2(), numpy.fft.ifft2()

Image Filtering

This is a fundamental image processing tool (see Chapter 3.2 of Szeliski and the lecture materials to learn about image filtering, specifically linear filtering). Some Python libraries provide efficient image filtering functions, but we will write our own from scratch via convolution.

Task

Implement 2D convolution in my_filter2D(). Your filtering algorithm should:

  1. Pad the input image with zeros.

  2. Support grayscale and color images.

  3. Support arbitrarily shaped odd-dimension filters (e.g., 7x9 filters but not 4x5 filters).

  4. Return an error message for even-dimension filters, as their output is undefined.

  5. Return an identical image with an identity filter.

  6. Return a filtered image which is the same resolution as the input image.

We have provided hw2_testcase.py to help you debug your image filtering algorithm.
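
To make the requirements above concrete, here is a minimal sketch of zero-padded convolution. It assumes my_filter2D takes an image and a kernel (check the actual signature in the code template), and the triple loop is kept for clarity; a vectorized version will run much faster.

    import numpy as np

    def my_filter2D(image, kernel):
        kh, kw = kernel.shape
        if kh % 2 == 0 or kw % 2 == 0:
            raise ValueError('my_filter2D only supports odd-dimension filters.')

        # Convolution flips the kernel; plain correlation would skip this step.
        kernel = np.flip(kernel)

        # Treat a grayscale image as a one-channel color image.
        squeeze = (image.ndim == 2)
        if squeeze:
            image = image[:, :, np.newaxis]

        h, w, c = image.shape
        ph, pw = kh // 2, kw // 2

        # Zero-pad height and width only, never the channel axis.
        padded = np.pad(image, ((ph, ph), (pw, pw), (0, 0)), mode='constant')

        # Output has the same resolution as the input.
        out = np.zeros(image.shape, dtype=np.float64)
        for y in range(h):
            for x in range(w):
                for ch in range(c):
                    out[y, x, ch] = np.sum(padded[y:y + kh, x:x + kw, ch] * kernel)

        return out[:, :, 0] if squeeze else out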

Hybrid Images

A hybrid image is the sum of a low-pass filtered version of a first image and a high-pass filtered version of a second image. We must tune a free parameter for each image pair that controls how much high frequency to remove from the first image and how much low frequency to leave in the second image. This is called the "cut-off frequency". The paper suggests using two cut-off frequencies, one tuned for each image, and you are free to try this too. In the starter code, the cut-off frequency is controlled by changing the standard deviation of the Gaussian kernel used to construct a hybrid image.
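
In code, the construction might look like the following sketch, which reuses my_filter2D from the filtering task. The helper name, the cutoff_sd parameter, and the kernel-size formula are illustrative assumptions (the actual starter code in gen_hybrid_image.py may differ), and images are assumed to be floats in [0, 1].

    import cv2
    import numpy as np
    from my_filter2D import my_filter2D  # assuming my_filter2D.py exposes my_filter2D()

    def make_hybrid(image1, image2, cutoff_sd=7):
        # Build a 2D Gaussian as the outer product of two 1D Gaussians.
        ksize = 2 * int(2 * cutoff_sd) + 1             # odd size covering about +/-2 sd
        g1d = cv2.getGaussianKernel(ksize, cutoff_sd)  # column vector, shape (ksize, 1)
        gaussian = g1d @ g1d.T                         # shape (ksize, ksize)

        # Low frequencies of image1; high frequencies of image2 (image minus its blur).
        low_frequencies = my_filter2D(image1, gaussian)
        high_frequencies = image2 - my_filter2D(image2, gaussian)

        # Sum the two and clip to the valid [0, 1] range for float images.
        hybrid = np.clip(low_frequencies + high_frequencies, 0.0, 1.0)
        return low_frequencies, high_frequencies, hybrid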

Task

Implement hybrid image creation in gen_hybrid_image.py. We provide 5 pairs of aligned images that can be merged reasonably well into hybrid images. The alignment is important because it affects the perceptual grouping (read the paper for details). We encourage you to create additional examples, e.g., a change of expression, a morph between different objects, a change over time, etc. For inspiration, please see the hybrid images project page.

  • cat - dog

  • einstein - marilyn

  • bicycle - motorcycle

  • bird - plane

  • fish - submarine

Writeup

Describe your process and algorithm, show your results, describe any extra credit, and tell us any other information you feel is relevant. We provide you with a LaTeX template. Please compile it into a PDF and submit it along with your code.

Scoring Rubric

Total 100pts

  • [+50 pts] Working implementation of image convolution in my_filter2D.py.

  • [+25 pts] Working hybrid image generation.

  • [+20 pts] Written questions.

  • [+05 pts] Writeup.

Extra credit

Total 10pts

  • [+2 pts] Pad with reflected image content (see cv2.BORDER_REFLECT_101, not cv2.BORDER_REFLECT).

  • [+8 pts] FFT-based convolution.

To receive extra credit, you still need to submit code for the original task; feel free to add your own functions or parameters to switch between the original task and the extra-credit versions.
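
For the padding option, numpy.pad with mode='reflect' matches cv2.BORDER_REFLECT_101 (the edge pixel is not duplicated), while mode='symmetric' matches cv2.BORDER_REFLECT. For the FFT option, below is a rough sketch of FFT-based convolution of a single channel under the zero-padding convention of the original task; the function name and signature are illustrative, not part of the starter code.

    import numpy as np

    def fft_filter2D_channel(channel, kernel):
        # Convolve one 2D channel with an odd-dimension kernel via the FFT.
        h, w = channel.shape
        kh, kw = kernel.shape
        ph, pw = kh // 2, kw // 2

        # Linear (non-circular) convolution needs transforms of size at least
        # (h + kh - 1, w + kw - 1); fft2 zero-pads both inputs to size s for us.
        fh, fw = h + kh - 1, w + kw - 1
        F = np.fft.fft2(channel, s=(fh, fw))
        G = np.fft.fft2(kernel, s=(fh, fw))

        # Pointwise product in frequency equals convolution in space.
        full = np.real(np.fft.ifft2(F * G))

        # Crop the central h x w region so the output matches the
        # zero-padded spatial-domain result from my_filter2D.
        return full[ph:ph + h, pw:pw + w]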

Hybrid Image Example

For the example shown at the top of the page, the two original images look like this:

[Figures: the original dog and cat images]

The low-pass (blurred) and high-pass versions of these images look like this:

[Figures: the low-frequency and high-frequency images]

The high-frequency image is actually zero-mean with negative values, so it is visualized by adding 0.5. In the resulting visualization, bright values are positive and dark values are negative.

Adding the high and low frequencies together gives you the image at the top of this page. If you’re having trouble seeing the multiple interpretations of the image, a useful way to visualize the effect is by progressively downsampling the hybrid image:

[Figure: the cat-dog hybrid image at progressively smaller scales]

The starter code provides vis_hybrid_image.py to save and display such visualizations.
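
A rough sketch of the same idea (not the actual vis_hybrid_image.py implementation) is to repeatedly halve the image and show each scale; the function name and parameters here are illustrative only.

    import cv2

    def show_hybrid_scales(hybrid, num_scales=5):
        # Display the hybrid image at progressively smaller scales.
        scale = hybrid
        for i in range(num_scales):
            cv2.imshow('scale %d' % i, scale)
            # INTER_AREA resampling reduces aliasing when shrinking.
            scale = cv2.resize(scale, None, fx=0.5, fy=0.5,
                               interpolation=cv2.INTER_AREA)
        cv2.waitKey(0)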

Credits

The Python version of this project was revised by Dahyun Kang, Inseung Hwang, Donggun Kim, Jungwoo Kim, and Min H. Kim. The original project description and code were created by James Hays, based on a similar project by Derek Hoiem.