# Denoising Images and Markov Random Fields¶

Undirected graphical models remained elusive to me for a long time. One of the best ways to strip away their abstractness is to see how they work in the real world. One of the easiest and earliest real-world applications of such models, also called Markov Random Fields, is denoising an image.

The idea, in brief, is to construct a hidden image, which is the denoised version. Each pixel in the noisy image is connected to the corresponding pixel in the denoised image, and each pixel in the denoised image is connected to its four neighbors. These connections encode the conditional dependencies of the undirected graph representing the joint distribution.
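To make the neighborhood structure concrete, here is a minimal sketch. `four_neighbors` is a hypothetical helper of my own naming; it assumes wrap-around (toroidal) boundaries so that every pixel has exactly four neighbors:

```python
# Hypothetical helper: the 4-neighborhood of pixel (i, j) on an
# n_rows x n_cols grid, with wrap-around (toroidal) boundaries.
def four_neighbors(i, j, n_rows, n_cols):
    return [((i - 1) % n_rows, j), ((i + 1) % n_rows, j),
            (i, (j - 1) % n_cols), (i, (j + 1) % n_cols)]
```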

We will first read in an image of the letter "A", and then make it noisy by randomly flipping the signs of some pixels. Then we will try to increase the joint probability of the image and its hidden version, using a very simple algorithm called "Iterated Conditional Modes". This closely follows the explanation in Bishop, section 8.3.3.

## Making an Image and its Noisy Version¶

```
from __future__ import division
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(1234)
```

We will first read in an image of the letter A, threshold it into a binary (-1/+1) image, and display it.

```
import scipy.misc
#img_gray = scipy.misc.lena()
img_gray = scipy.misc.imread('lettera.bmp')
img_gray_arr = np.asarray(img_gray,int)
img_mean = np.mean(img_gray_arr)
img_arr = np.copy(img_gray_arr)
img_arr[img_gray_arr<img_mean] = -1
img_arr[img_gray_arr>=img_mean] = 1
plt.imshow(img_arr, cmap='Greys')
```

Let us do a sanity check that the pixels are indeed -1,1.

```
img_arr[0:10,0:10]
```

Now, we will add some random noise to the image. We will do this by randomly flipping 20% of the pixels.

```
print "Shape of array:", img_arr.shape

def make_noisy_image(lena_arr):
    total_pixels = lena_arr.shape[0]*lena_arr.shape[1]
    num_flips = int(0.2*total_pixels)
    # note: randint samples WITH replacement, so a pixel drawn twice
    # gets flipped back to its original value
    px_to_flip = np.random.randint(low=0, high=total_pixels, size=num_flips)
    for i in px_to_flip:
        row_num = i // lena_arr.shape[1]   # row-major flattening
        col_num = i % lena_arr.shape[1]
        if lena_arr[row_num, col_num] == 1:
            lena_arr[row_num, col_num] = -1
        else:
            lena_arr[row_num, col_num] = 1

noisy_img_arr = np.copy(img_arr)  # important to deep copy the original array
make_noisy_image(noisy_img_arr)
plt.imshow(noisy_img_arr, cmap='Greys')
```

We have now created a noisy image. Our goal is to take this image alone, with no other information, and attempt to denoise it. Let us write one more helper function before we go to town with the Ising model: it checks what percentage of pixels differ between two images. We will regard this as our measure of success; the lower the percentage of pixels flipped in the final image, the more successful we are.

```
def percent_pixel_flipped(noisy_arr, denoised_arr):
    num_flipped = 0
    total_px = noisy_arr.shape[0]*noisy_arr.shape[1]
    for i in range(noisy_arr.shape[0]):
        for j in range(noisy_arr.shape[1]):
            if noisy_arr[i,j] != denoised_arr[i,j]:
                num_flipped += 1
    percent_change = num_flipped*100./total_px
    return percent_change

print percent_pixel_flipped(noisy_img_arr, img_arr)
```

We wanted to flip 20% of the pixels, but only 16.38% ended up flipped. This is not actually a bug in the flipping loop: `np.random.randint` samples indices with replacement, so some pixels are drawn more than once and flipped back. The expected fraction of pixels flipped an odd number of times is $(1 - e^{-0.4})/2 \approx 16.5\%$, which matches what we observe. This has no direct bearing on the model.
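If we wanted to flip exactly 20% of the pixels, we could sample indices without replacement. A sketch of this alternative (`make_noisy_image_exact` is my name for it, not part of the original code):

```python
import numpy as np

def make_noisy_image_exact(img, frac=0.2, rng=None):
    # sample pixel indices WITHOUT replacement, so no pixel is flipped twice
    rng = np.random.RandomState(0) if rng is None else rng
    noisy = img.copy()
    flat = noisy.ravel()                 # a view: writes go through to noisy
    idx = rng.choice(flat.size, size=int(frac * flat.size), replace=False)
    flat[idx] *= -1                      # flip the chosen -1/+1 pixels
    return noisy
```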

## The Ising Model¶

The Ising model assigns an energy to the joint configuration of the image and its denoised version. For a given visible noisy image **y** (a vector of pixel values) and its hidden representation **x**, the energy of the configuration is:
$$E(\mathbf{x},\mathbf{y}) = h\sum_i x_i - \beta\sum_{\{i,j\}}x_i x_j - \eta\sum_i x_i y_i$$

Here the second term runs over pairs $\{i,j\}$ such that $i$ and $j$ are neighbors. The joint distribution can then be written simply as: $$ p(\mathbf{x},\mathbf{y}) = \frac{1}{Z} \exp(-E(\mathbf{x},\mathbf{y})) $$ where $Z$ is the partition function, which normalizes the probability. We will concern ourselves only with the energy, and not worry about the messy partition function here. To increase the probability of observing this image and its hidden version, we need to lower the energy of the configuration. Let us start by coding up the energy. For numerical convenience, we will divide it by the total number of pixels, which keeps the numbers small.

```
def check_limit(value, limit):
    # wrap-around (toroidal) boundary conditions
    if value < 0:
        value = limit-1
    if value == limit:
        value = 0
    return value

def add_energy_contribution(visible_arr, hidden_arr, x_val, y_val, const_list):
    h_val, beta, eta = const_list
    total_pixels = hidden_arr.shape[0]*hidden_arr.shape[1]
    energy = h_val*hidden_arr[x_val,y_val]
    energy += -eta*hidden_arr[x_val,y_val]*visible_arr[x_val,y_val]
    # the four neighbors: up, down, left, right
    neighbor_offsets = [(-1,0), (1,0), (0,-1), (0,1)]
    for i, j in neighbor_offsets:
        x_n = check_limit(x_val+i, hidden_arr.shape[0])
        y_n = check_limit(y_val+j, hidden_arr.shape[1])
        energy += -beta*hidden_arr[x_val,y_val]*hidden_arr[x_n,y_n]
    energy = energy/total_pixels
    return energy

def calculate_total_energy(visible_arr, hidden_arr, const_list):
    energy = 0.
    for i in range(visible_arr.shape[0]):
        for j in range(visible_arr.shape[1]):
            energy += add_energy_contribution(visible_arr, hidden_arr, i, j, const_list)
    return energy

#this list is [h, beta, eta]
const_list = [0, .1, .02]
hidden_image = np.copy(noisy_img_arr)
total_energy = calculate_total_energy(noisy_img_arr, hidden_image, const_list)
print total_energy
```
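As an aside, the pixel-by-pixel double loop can be replaced by a vectorized computation with `np.roll`, which implements the same wrap-around neighbors. This is a sketch assuming the same 4-neighborhood and the same per-pixel normalization as above (and, like the per-pixel version, it counts each neighbor pair twice):

```python
import numpy as np

def total_energy_vectorized(visible, hidden, h, beta, eta):
    # unary terms: h * sum(x_i) - eta * sum(x_i * y_i)
    energy = h * hidden.sum() - eta * (hidden * visible).sum()
    # pairwise terms: np.roll yields the wrap-around 4-neighbors;
    # each edge contributes once per endpoint, i.e. twice in total
    for axis in (0, 1):
        for shift in (-1, 1):
            energy -= beta * (hidden * np.roll(hidden, shift, axis=axis)).sum()
    return energy / hidden.size
```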

## Iterated Conditional Modes¶

There are many algorithms for lowering the energy of the configuration. One can resort to Monte Carlo techniques to sample intelligently from the distribution, but it is also possible to use a far simpler algorithm (see Bishop 8.3.3). Iterated Conditional Modes (ICM) initializes the hidden array with the same values as the visible one. Then it takes a single pixel, flips it, and compares the energy before and after. If flipping the pixel lowers the energy, the flip is accepted; if not, the move is rejected. We keep doing this until the energy stops changing. Here we just loop over all the pixels, but we could as well choose pixels randomly to flip.
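It is worth noting why the single-pixel comparison is cheap: the local contribution of pixel $i$ is linear in $x_i$, so flipping $x_i \to -x_i$ simply negates it, giving an energy change of

$$\Delta E_i = -2x_i\left(h - \beta\sum_{j\in N(i)} x_j - \eta y_i\right)$$

where $N(i)$ is the set of four neighbors of $i$; the flip is accepted whenever $\Delta E_i < 0$.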

```
def icm_single_pixel(visible_arr, hidden_arr, px_x, px_y, total_energy, const_list):
    current_energy = add_energy_contribution(visible_arr, hidden_arr, px_x, px_y, const_list)
    other_energy = total_energy - current_energy
    #flip the pixel
    new_hidden_arr = np.copy(hidden_arr)
    if hidden_arr[px_x,px_y] == 1:
        new_hidden_arr[px_x,px_y] = -1
    else:
        new_hidden_arr[px_x,px_y] = 1
    flipped_energy = add_energy_contribution(visible_arr, new_hidden_arr, px_x, px_y, const_list)
    if flipped_energy < current_energy:
        should_flip = True
        total_energy = other_energy + flipped_energy
        hidden_arr = new_hidden_arr
    else:
        should_flip = False
    return (hidden_arr, should_flip, total_energy)

#main icm simulation
hidden_image = np.copy(noisy_img_arr)
energy_this_round = total_energy
print "% Pixels flipped:", percent_pixel_flipped(hidden_image, img_arr)
for sim_round in range(5):
    for i in range(hidden_image.shape[0]):
        for j in range(hidden_image.shape[1]):
            hidden_image, should_flip, total_energy = icm_single_pixel(noisy_img_arr, hidden_image, i, j, total_energy, const_list)
    if (total_energy - energy_this_round) == 0:
        print "Algorithm converged"
        break
    energy_this_round = total_energy
print "Total Energy:", total_energy
print "% Pixels flipped:", percent_pixel_flipped(hidden_image, img_arr)
```

```
plt.imshow(hidden_image, cmap='Greys')
```

So, starting from about 16% of the pixels flipped, we have gone down to about 1.9% after the algorithm has converged. That is quite remarkable, given that we give the model no information about how to denoise the image. One remark I should make here is that the success of the algorithm depends to a large extent on the letter image having large contiguous areas of black and white: neighboring pixels of the same value make for a lower-energy configuration. I had first started off with the famous "lena" image, which is much more complex, with many adjacent black and white pixels. In that case, Markov Random Fields were much less successful.

One other thing I plan to try, probably in a second part, is to use Gibbs sampling instead of ICM to reduce the energy of the configuration.
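For completeness, a single Gibbs sweep could look roughly like the sketch below. This is only an assumption about what that second part might do, using the conditional $p(x_{ij} = 1 \mid \text{rest}) = 1/(1 + \exp(2c_{ij}))$ with local field $c_{ij} = h - \beta\sum_{\text{neighbors}} x - \eta y_{ij}$:

```python
import numpy as np

def gibbs_sweep(visible, hidden, h, beta, eta, rng):
    # one in-place Gibbs sweep over the hidden Ising field (a sketch,
    # not actual part-two code); wrap-around boundaries as before
    n_rows, n_cols = hidden.shape
    for i in range(n_rows):
        for j in range(n_cols):
            nbr_sum = (hidden[(i - 1) % n_rows, j] + hidden[(i + 1) % n_rows, j]
                       + hidden[i, (j - 1) % n_cols] + hidden[i, (j + 1) % n_cols])
            # local field: the energy terms involving x_ij are x_ij * c
            c = h - beta * nbr_sum - eta * visible[i, j]
            p_plus = 1.0 / (1.0 + np.exp(2.0 * c))   # P(x_ij = +1 | rest)
            hidden[i, j] = 1 if rng.rand() < p_plus else -1
    return hidden
```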
