# Case Study

We will be building a Multilayer Perceptron model to classify handwritten digits using TensorFlow. The images we will be working with are black-and-white images of size 28 x 28 pixels, or 784 pixels in total. Our features will be the individual pixel values: each pixel is either “white” (blank, with a value of 0) or has some non-zero grey-scale value. We will try to correctly predict which digit is written based solely on the image data in the form of an array. This type of problem (image recognition) is a great use case for deep learning methods!

# 1. Preparation

```
import tensorflow as tf
```

**Popular data set that we can import from TensorFlow**

```
from tensorflow.examples.tutorials.mnist import input_data
```

```
mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
```

```
type(mnist)
```

**55,000 images, and each image is an array of 784 values**

```
mnist.train.images.shape
```

```
sample = mnist.train.images[12].reshape(28,28)
```

```
import matplotlib.pyplot as plt
%matplotlib inline
```

```
plt.imshow(sample,cmap='Greys')
```

**Parameters**

It is really difficult to know what good parameter values are for a data set you have no experience with; however, since MNIST is so well studied, we have some reasonable values for our data below. The parameters here are:

- Learning Rate – how large a step the optimiser takes when adjusting the weights to reduce the cost
- Training Epochs – how many full training cycles over the data to go through
- Batch Size – the size of the ‘batches’ of training data fed in at each step
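With 55,000 training images (as noted above) and these parameter values, a little plain-Python arithmetic shows how many weight updates the training loop will actually perform:

```python
# Plain arithmetic, no TensorFlow needed
n_samples = 55000        # size of the MNIST training set
batch_size = 100
training_epochs = 15

batches_per_epoch = n_samples // batch_size          # one update per batch
total_updates = batches_per_epoch * training_epochs  # updates across all epochs

print(batches_per_epoch, total_updates)  # 550 8250
```

So each epoch performs 550 weight updates, and the full run performs 8,250.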

```
learning_rate = 0.001 #how quickly to learn
training_epochs = 15
batch_size = 100
```

**Network Parameters**

```
n_classes = 10 #MNIST total classes (0 - 9 digits)
n_samples = mnist.train.num_examples #(how many images)
```

```
n_input = 784 #MNIST data input (img shape is 28x28)
```

```
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
```

# 2. Multilayer Model

- Receive the input data array and send it to the first hidden layer (with 256 neurons in this case). Between layers, the data is multiplied by (initially random) weights and then passed to each node, where it undergoes an activation function (along with a bias). In this example we will use the RELU activation function, a very simple rectifier function that returns x or zero. For our final output layer, we will use a linear activation with matrix multiplication.
- The data then proceeds to the next hidden layer (repeating step 1), and in our case (2 hidden layers) it then reaches the final output layer. The more hidden layers you use, the longer the model will take to run, but it has a higher chance of producing a more accurate result.
- Once we receive the final output data, we evaluate it using a loss function (also called a cost function), which tells us how far off we are from the desired result.
- Apply an optimisation function to minimise the cost. This is done by adjusting the weight values accordingly across the network. We will be using the **Adam Optimiser** in this example. We can adjust how quickly this optimisation is applied by tweaking the learning rate parameter above. The lower the rate, the higher the possibility of accurate training results (again, at the cost of having to wait longer).
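The forward pass described in the first two steps can be sketched in plain NumPy. This is only a toy illustration of the shapes and the RELU rectifier (random weights, one fake image), not the TensorFlow model we define next:

```python
import numpy as np

def relu(x):
    # RELU -> f(x) = max(0, x), applied element-wise
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Shapes matching the network parameters: 784 inputs, 256/256 hidden, 10 classes
W1, b1 = rng.standard_normal((784, 256)), rng.standard_normal(256)
W2, b2 = rng.standard_normal((256, 256)), rng.standard_normal(256)
W_out, b_out = rng.standard_normal((256, 10)), rng.standard_normal(10)

x = rng.standard_normal((1, 784))       # one flattened 28x28 "image"

layer_1 = relu(x @ W1 + b1)             # first hidden layer
layer_2 = relu(layer_1 @ W2 + b2)       # second hidden layer
out = layer_2 @ W_out + b_out           # linear output layer: 10 class scores

print(out.shape)  # (1, 10)
```

Note that after RELU every hidden activation is non-negative, while the linear output layer can produce any real-valued scores.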

```
def multilayer_perceptron(x, weights, biases):
    '''
    x: Placeholder for data input
    weights: Dictionary of weights
    biases: Dictionary of bias values
    '''
    # First hidden layer with RELU activation
    # X * W + B
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    # RELU(X * W + B) -> f(x) = max(0, x)
    layer_1 = tf.nn.relu(layer_1)
    # Second hidden layer
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer
```

**Weights and Biases**

In order for our TensorFlow model to work, we need to create two dictionaries containing the weight and bias objects for the model. We can use the **tf.Variable** object type. This is different from a constant because TensorFlow’s Graph object becomes aware of the states of all the variables. A Variable is a modifiable tensor that lives in TensorFlow’s graph of interacting operations. It can be used and even modified by the computation. We will generally have the model parameters be Variables.

**tf.random_normal** – outputs random values drawn from a normal distribution

```
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
```

```
weights
```

```
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
```

**TensorFlow Graph Input**

```
x = tf.placeholder('float',[None,n_input])
y = tf.placeholder('float',[None,n_classes])
```

**Construct the model**

```
pred = multilayer_perceptron(x, weights, biases)
```

**Define our cost and optimisation functions**

```
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimiser = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
```
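To see roughly what the softmax cross-entropy cost measures, here is a plain-NumPy sketch for a single example. This is illustrative only; TensorFlow's actual implementation is fused and more numerically careful:

```python
import numpy as np

def softmax_cross_entropy(logits, one_hot_label):
    # Softmax turns the raw class scores (logits) into probabilities
    shifted = logits - logits.max()              # subtract max for stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    # Cross-entropy: negative log of the probability given to the true class
    return -np.log(probs[one_hot_label.argmax()])

logits = np.array([2.0, 0.5, -1.0])              # fake network output, 3 classes
label = np.array([1.0, 0.0, 0.0])                # true class is index 0
print(softmax_cross_entropy(logits, label))
```

The cost is near zero when the network assigns high probability to the correct class, and grows without bound as that probability shrinks.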

## Training the Model

mnist.train.next_batch() returns a tuple of the form (X, y): an array of the image data and a y array indicating the class as a one-hot binary array. X is our input and y is the actual label.
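The one-hot label format looks like this (a NumPy mock-up of the shapes `next_batch` returns, not the real MNIST data):

```python
import numpy as np

# A one-hot label: a 1 at the index of the digit, zeros everywhere else
label_for_3 = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0])

# argmax() recovers the class from the binary array
print(label_for_3.argmax())  # 3

# A mock batch: batch_size rows of 784 features, and one one-hot row per image
batch_x = np.zeros((100, 784))
batch_y = np.eye(10)[np.random.randint(0, 10, size=100)]
print(batch_x.shape, batch_y.shape)  # (100, 784) (100, 10)
```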

**Run Session**

```
sess = tf.InteractiveSession()
```

**Initialisation of Variables**

The first thing we do is initialise all of our tf.Variable objects.

```
init = tf.global_variables_initializer()
```

```
sess.run(init)
```

```
# 15 loops, as we set training_epochs = 15
for epoch in range(training_epochs):
    # Average cost for this epoch
    avg_cost = 0.0
    total_batch = int(n_samples / batch_size)
    for i in range(total_batch):
        # Grab the next batch of training data and labels
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Feed dictionary for optimisation and loss value
        # Returns a tuple, but we only need 'c', the cost,
        # so we set an underscore as a "throwaway"
        _, c = sess.run([optimiser, cost], feed_dict={x: batch_x, y: batch_y})
        avg_cost += c / total_batch
    print("Epoch: {} cost: {:.4f}".format(epoch + 1, avg_cost))
print("Model has completed {} Epochs of training".format(training_epochs))
```

## Model Evaluations

TensorFlow comes with some built-in functions to help evaluate our model, including tf.equal and tf.cast with tf.reduce_mean.

**tf.equal()**

This is essentially just a check of predictions == y_test. In our case, since we know each label is a 1 in an array of zeroes, we can compare the argmax() location of that 1.

```
correct_pred = tf.equal(tf.argmax(pred,1),tf.argmax(y,1)) # checks if x==y?
```

In order to get a numerical value for our predictions, we need to use **tf.cast** to cast the Tensor of booleans into a Tensor of floating-point values so that we can take its mean.

```
correct_pred = tf.cast(correct_pred,'float')
```

Finally, we use the **tf.reduce_mean** function to take the mean of the elements across the tensor.

```
accuracy = tf.reduce_mean(correct_pred)
```
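Put together, the three steps above (equal, cast, mean) are easy to mimic in NumPy to see what each stage produces. This is an illustrative sketch with made-up predictions, not part of the TensorFlow graph:

```python
import numpy as np

pred = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # fake network outputs
y    = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])  # fake one-hot labels

correct = pred.argmax(axis=1) == y.argmax(axis=1)  # like tf.equal -> booleans
as_float = correct.astype('float32')               # like tf.cast -> 1.0s and 0.0s
accuracy = as_float.mean()                         # like tf.reduce_mean

print(accuracy)  # 2 of 3 correct
```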

The **accuracy** is still a Tensor object. We still need to pass in our actual test data!

```
mnist.test.labels
```

```
mnist.test.images
```

The **eval()** method allows you to directly evaluate this tensor in a Session

```
accuracy.eval({x:mnist.test.images,y:mnist.test.labels})
```

94% accuracy! But this actually isn’t anywhere near as good as it could be. Running many more training cycles on this data (on the order of 20,000) can produce accuracy of around 99%. But that will take a very long time to run!