Neural Nets and Deep Learning

Introduction to Neural Network Theory

Neural networks are modelled after biological neural networks and attempt to allow computers to learn in a way similar to how a human learns – through reinforcement. Applications of neural networks include pattern recognition, signal processing, and anomaly detection, among others. Neural networks attempt to solve problems that would normally be easy for humans but hard for computers (such as recognising a picture of a person).

The Perceptron – simplest neural network

A perceptron consists of one or more inputs, a processor, and a single output. A perceptron follows the “feed-forward” model: inputs are sent into the neuron, processed, and result in an output. A perceptron process follows four main steps:

  1. Receive inputs
  2. Weight inputs
  3. Sum inputs
  4. Generate output

Each input sent into the neuron must first be weighted (multiplied by some value, often between -1 and 1). When creating a perceptron, we’ll typically begin by assigning random weights. The output of a perceptron is generated by passing the weighted sum through an activation function. In the case of a simple binary output, the activation function is what tells the perceptron whether to “fire” or not. There are many activation functions to choose from; for example, we can make the activation function the sign of the sum: if the sum is a positive number, the output is 1, and if negative, the output is -1. In addition, we also need to consider bias. We add an extra input known as the bias input (not shown in the graph) to avoid issues such as the activation function receiving a sum of 0.
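The four steps above can be sketched in a few lines of plain Python. This is an illustrative toy (the function names and input values are my own, not from the original post): inputs are weighted, summed together with a bias input fixed at 1, and passed through a sign activation function.

```python
import random

def sign_activation(total):
    """Return 1 if the weighted sum is positive, otherwise -1.

    Treating 0 as -1 is an arbitrary choice; the bias input exists
    precisely so the sum is rarely exactly 0.
    """
    return 1 if total > 0 else -1

def feed_forward(inputs, weights, bias_weight):
    # Steps 1-3: receive, weight, and sum the inputs
    # (the bias input is fixed at 1, so it contributes 1 * bias_weight)
    total = sum(i * w for i, w in zip(inputs, weights)) + 1 * bias_weight
    # Step 4: generate the output via the activation function
    return sign_activation(total)

# Weights typically start out random, often between -1 and 1
weights = [random.uniform(-1, 1) for _ in range(2)]
bias_weight = random.uniform(-1, 1)
output = feed_forward([0.5, -0.2], weights, bias_weight)
```

With random weights the output is an arbitrary guess of 1 or -1; training (described next) is what makes the guesses meaningful.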

How to train the perceptron

  1. Provide the perceptron with inputs for which there is a known output
  2. Ask the perceptron to guess an output
  3. Compute the error
  4. Adjust all the weights according to the error
  5. Return to Step 1 and repeat the whole process until we reach an error we are satisfied with
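The five training steps can be sketched as follows. This is a hypothetical example of my own (not from the original post): the perceptron learns whether a point (x, y) lies above the line y = x, using the classic perceptron rule of nudging each weight by the error times its input.

```python
import random

LEARNING_RATE = 0.01  # how strongly each error adjusts the weights

def train(samples, epochs=50):
    # Two input weights plus one bias weight, initialised randomly
    weights = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for (px, py), known in samples:          # Step 1: known outputs
            inputs = [px, py, 1]                 # bias input fixed at 1
            total = sum(i * w for i, w in zip(inputs, weights))
            guess = 1 if total > 0 else -1       # Step 2: guess
            error = known - guess                # Step 3: compute error
            for i in range(3):                   # Step 4: adjust weights
                weights[i] += LEARNING_RATE * error * inputs[i]
    return weights                               # Step 5: loop until done

# Known output: 1 if the point is above y = x, otherwise -1
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
samples = [((px, py), 1 if py > px else -1) for px, py in points]
weights = train(samples)
```

Because the data is linearly separable, this simple update rule steadily reduces the error; here a fixed epoch count stands in for "repeat until the error is acceptable".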

This is how a single perceptron works. In order to create a neural network, you have to link many perceptrons together in layers, which could look as follows:

You will have an input layer and an output layer. Any layers in between are known as hidden layers, because you don’t directly “see” anything but the input and output. In the picture above, we have two hidden layers. Deep learning is simply a neural network with many hidden layers! For example, Microsoft’s state-of-the-art vision recognition network uses 152 layers.

TensorFlow – Deep Learning

TensorFlow was developed by Google and has quickly become the most popular deep learning library. It can run on either a CPU or a GPU, although deep neural networks typically run much faster on a GPU.

GPU vs CPU Performance

A CPU consists of a few cores optimised for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.

The basic idea of TensorFlow is the ability to create data flow graphs. These graphs have nodes and edges, just like the neural network picture above. The data passed from one layer of nodes to the next is known as a Tensor. There are two ways to use TensorFlow: 1) a customisable Graph Session, or 2) a Scikit-Learn-style interface with Contrib.Learn.

TensorFlow Basics

In [10]:
import tensorflow as tf

Simple Constants

In [11]:
hello = tf.constant('Hello World')
In [12]:
type(hello) # Tensor object
In [13]:
x = tf.constant(100)

Running Sessions

A TensorFlow Session is a class for running TensorFlow operations. A Session object encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated.

In [15]:
sess = tf.Session()
In [16]:
sess.run(hello)
b'Hello World'


In [19]:
x = tf.constant(2)
y = tf.constant(3)
In [20]:
with tf.Session() as sess:
    print('Operations with Constants')
    print('Addition: ', sess.run(x + y))
    print('Subtraction: ', sess.run(x - y))
    print('Multiplication: ', sess.run(x * y))
    print('Division: ', sess.run(x / y))
Operations with Constants
Addition:  5
Subtraction:  -1
Multiplication:  6
Division:  0.666666666667


You may not always have the constants right away; you may be waiting for a constant to appear after a cycle of operations. tf.placeholder inserts a placeholder for a tensor that will always be fed. Its value must be fed using the feed_dict optional argument to Session.run().

In [21]:
x = tf.placeholder(tf.int32)
y = tf.placeholder(tf.int32)
In [22]:
x
<tf.Tensor 'Placeholder:0' shape=<unknown> dtype=int32>
In [24]:
add = tf.add(x,y)
sub = tf.subtract(x,y)
mul = tf.multiply(x,y)
In [25]:
d = {x: 20, y: 40}
with tf.Session() as sess:
    print('Operations with Placeholders')
    print('Addition', sess.run(add, feed_dict=d))
Operations with Placeholders
Addition 60

Matrix Multiplication

In [26]:
import numpy as np
In [27]:
a = np.array([[5.0,5.0]]) # 1x2
In [28]:
b = np.array([[2.0],[2.0]]) # 2x1
In [29]:
mat1 = tf.constant(a)
In [30]:
mat2 = tf.constant(b)
In [31]:
matrix_multi = tf.matmul(mat1,mat2)
In [34]:
with tf.Session() as sess:
    result = sess.run(matrix_multi)
    print(result)
[[ 20.]]
