INFO 290T Assignment 3 (two parts)
Part 1: Sharpening: An image can be sharpened by enhancing its high frequencies (e.g., Photoshop's unsharp mask). Implement an unsharp mask that takes as input a grayscale image (I), blurs this image (I_blur), and adds a variable amount (k) of the image's high frequency (I-I_blur) back into the image: I_sharp = I + k(I-I_blur).
Your solution should generate a gif animation of images sharpened with k = np.arange(0,20,0.5). Because adding back an image's high frequency will change the contrast, use skimage.exposure's match_histograms to match the sharpened image's intensity distribution to the original image's intensity distribution. Test your code on al.png.
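A minimal sketch of one possible Part 1 implementation is below. It assumes a Gaussian blur for I_blur and uses imageio to write the gif; the blur choice, the sigma, the frame duration, and the output filename are all assumptions, not requirements.

import numpy as np
import imageio.v2 as imageio
from skimage import io, img_as_float
from skimage.filters import gaussian
from skimage.exposure import match_histograms

# Load the grayscale image as floats in [0, 1]
I = img_as_float(io.imread('al.png', as_gray=True))

# The blur choice (Gaussian, sigma=2) is an assumption; any low-pass filter works
I_blur = gaussian(I, sigma=2)

frames = []
for k in np.arange(0, 20, 0.5):
    # Unsharp mask: add back k times the high frequencies
    I_sharp = I + k * (I - I_blur)
    # Match the sharpened image's intensity distribution to the original's
    I_sharp = match_histograms(I_sharp, I)
    frames.append((np.clip(I_sharp, 0, 1) * 255).astype(np.uint8))

# Write the animation; the per-frame duration is an arbitrary choice
imageio.mimsave('sharpen.gif', frames, duration=0.1)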
Part 2: Deconvolution: Given a grayscale image (Ih) convolved with a filter (h), implement a deconvolution function that returns the original image (I), where Ih = I * h.
Since we are only considering 1D convolution, we can handle the convolution (and deconvolution) of each image row separately. To begin, formulate a 1D convolution as f_h = A f_o, where f_o is an n x 1 vector corresponding to a row of the input (original) image, f_h is an n x 1 vector corresponding to a row of the output (convolved) image, and A is an n x n convolution matrix in which each row is a shifted version of the convolution kernel. For example, for the 1D filter h = [1,2,1] and n = 7, the 7 x 7 convolution matrix A is (notice that we are handling image edges by wrapping the kernel):
[[2. 1. 0. 0. 0. 0. 1.]
[1. 2. 1. 0. 0. 0. 0.]
[0. 1. 2. 1. 0. 0. 0.]
[0. 0. 1. 2. 1. 0. 0.]
[0. 0. 0. 1. 2. 1. 0.]
[0. 0. 0. 0. 1. 2. 1.]
[1. 0. 0. 0. 0. 1. 2.]]
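One possible way to build this matrix in code is sketched below; the helper name convolution_matrix and the np.roll construction are my choices, not part of the assignment.

import numpy as np

def convolution_matrix(h, n):
    # Build the n x n convolution matrix for a 1D kernel h, wrapping at the edges
    h = np.asarray(h, dtype=float)
    half = len(h) // 2
    # First row: kernel centered at index 0, wrapped around the ends
    row = np.zeros(n)
    for j, hj in enumerate(h):
        row[(j - half) % n] = hj
    # Every other row is the first row circularly shifted (a circulant matrix)
    return np.stack([np.roll(row, i) for i in range(n)])

print(convolution_matrix([1, 2, 1], 7))   # reproduces the 7 x 7 matrix above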
Once convolution is formulated as f_h = A f_o, deconvolution is simply f_o = inv(A) f_h (note that A is a square matrix).
Write a Python function deconvolution that takes as input a grayscale image (Ih) and filter (h) and returns the original image (I) by deconvolving each image row one at a time using the above formulation.
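A sketch of this function, reusing the hypothetical convolution_matrix helper from above; it inverts A once and then applies inv(A) to each row:

import numpy as np

def deconvolution(Ih, h):
    # Recover I from Ih = I * h by inverting the convolution matrix row by row
    Ih = np.asarray(Ih, dtype=float)
    n = Ih.shape[1]
    A = convolution_matrix(h, n)   # hypothetical helper from the sketch above
    A_inv = np.linalg.inv(A)
    I = np.zeros_like(Ih)
    for r in range(Ih.shape[0]):
        # f_o = inv(A) f_h for this row
        I[r] = A_inv @ Ih[r]
    return I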
To test your code, convolve the image al.png with the filter [-1,0.001,1] -- use our 1D implementation of convolution that handles edges as described above. Display all three images: the original image, the filtered image, and the deconvolved image (the original and deconvolved images should, of course, be nearly identical).