Neural Network – Back Propagation – Tutorial in Python

Introduction:

A neural network is a model that loosely mimics the human brain to solve computational problems. It consists of basic units called neurons; each neuron performs a simple function, and together with other neurons it can solve complex problems. The idea has been around since the 1960s (perhaps even earlier), but it fell out of favour in the 1990s. Thanks to recent advances in computational power and several algorithmic breakthroughs, neural networks have again become central to many complex machine learning problems.

Brief overview:

Neuron : Each neuron can have one or more inputs and one or more outputs, and each connection to another neuron has a weight associated with it.

Feed forward : The input is processed through one or more layers of neurons to produce the output.

Back Propagation : The error, i.e. the difference between the predicted output and the target output, is propagated backwards from the output layer towards the input layer, and the weights of the neurons are adjusted to reduce that error.

Python Code:

Feed Forward
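Here is a minimal sketch of the feed-forward pass for a network with one hidden layer, using sigmoid activations. The layer sizes, random weights, and NumPy-based style are illustrative assumptions for this tutorial, not a fixed recipe.

```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(x, w_hidden, w_output):
    # Hidden layer: weighted sum of the inputs, passed through the sigmoid
    hidden = sigmoid(np.dot(x, w_hidden))
    # Output layer: weighted sum of the hidden activations
    output = sigmoid(np.dot(hidden, w_output))
    return hidden, output

# Illustrative sizes: 2 inputs -> 3 hidden neurons -> 1 output
rng = np.random.default_rng(0)
w_hidden = rng.normal(scale=0.5, size=(2, 3))
w_output = rng.normal(scale=0.5, size=(3, 1))
hidden, output = feed_forward(np.array([0.5, -0.2]), w_hidden, w_output)
```

Because the weights start out random, the output is meaningless at first; back propagation (next section) is what tunes the weights.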

Back Propagation
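The update step can be sketched as follows, assuming the same two-layer sigmoid network and a squared-error loss. The deltas use the sigmoid derivative `output * (1 - output)`; the learning rate and sizes are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def back_propagate(x, target, w_hidden, w_output, lr=0.5):
    # Forward pass (identical to the feed-forward step)
    hidden = sigmoid(np.dot(x, w_hidden))
    output = sigmoid(np.dot(hidden, w_output))

    # Output-layer delta: error times the sigmoid derivative
    output_delta = (output - target) * output * (1.0 - output)
    # Hidden-layer delta: the output delta pushed back through the output weights
    hidden_delta = np.dot(w_output, output_delta) * hidden * (1.0 - hidden)

    # Gradient-descent weight updates (in place)
    w_output -= lr * np.outer(hidden, output_delta)
    w_hidden -= lr * np.outer(x, hidden_delta)

    # Return the squared error before the update, to track progress
    return float(0.5 * np.sum((output - target) ** 2))

# Illustrative usage: repeat the step on one sample and watch the error shrink
rng = np.random.default_rng(1)
w_hidden = rng.normal(scale=0.5, size=(2, 3))
w_output = rng.normal(scale=0.5, size=(3, 1))
x, target = np.array([0.5, -0.2]), np.array([1.0])
errors = [back_propagate(x, target, w_hidden, w_output) for _ in range(200)]
```

Each call nudges the weights a little in the direction that reduces the error, which is exactly the "propagated from the output to the input" idea described above.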

Full code :
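Putting the two pieces together, here is a self-contained sketch that trains a small network on the classic XOR problem. The class name, the bias terms, and the hyperparameters (4 hidden neurons, 5000 epochs, learning rate 0.5) are my own illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NeuralNetwork:
    """A tiny fully-connected network: input -> hidden -> output."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w_hidden = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b_hidden = np.zeros(n_hidden)
        self.w_output = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b_output = np.zeros(n_out)

    def feed_forward(self, x):
        # Cache the activations; back_propagate reuses them
        self.hidden = sigmoid(np.dot(x, self.w_hidden) + self.b_hidden)
        self.output = sigmoid(np.dot(self.hidden, self.w_output) + self.b_output)
        return self.output

    def back_propagate(self, x, target, lr=0.5):
        # Deltas: error times the sigmoid derivative at each layer
        output_delta = (self.output - target) * self.output * (1 - self.output)
        hidden_delta = (np.dot(self.w_output, output_delta)
                        * self.hidden * (1 - self.hidden))
        # Gradient-descent updates for weights and biases
        self.w_output -= lr * np.outer(self.hidden, output_delta)
        self.b_output -= lr * output_delta
        self.w_hidden -= lr * np.outer(x, hidden_delta)
        self.b_hidden -= lr * hidden_delta

    def train(self, inputs, targets, epochs=5000, lr=0.5):
        for _ in range(epochs):
            for x, t in zip(inputs, targets):
                self.feed_forward(x)
                self.back_propagate(x, t, lr)

# Toy example: learn XOR, which a single neuron cannot represent
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])
net = NeuralNetwork(n_in=2, n_hidden=4, n_out=1, seed=0)
net.train(X, y)
predictions = np.array([net.feed_forward(x) for x in X])
```

After training, the predictions should be close to the XOR targets; the bias terms matter here, since without them the network struggles to separate the XOR cases.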

Learning materials :

Important Concepts of Machine Learning

Over the coming days, I am going to write tutorials discussing some major concepts of machine learning.

Starting from

  1. SVM — (Support Vector Machines, the most famous and widely used algorithm until recently)
  2. Random Forest — (Widely used in Kaggle competitions recently. Also used in Microsoft Kinect)
  3. Neural Networks, Back Propagation — (Important for understanding the concept of Deep Learning)
  4. RBM — (Restricted Boltzmann Machines, building blocks of Deep Learning)
  5. Autoencoders — (Used in Google's networks)

There are many software packages available for SVM and Random Forest, and they are fairly simple to use. But the difficult part of using these supervised algorithms is selecting good features (feature engineering): we need to choose which features (characteristics) work best for predicting the output, and also pre-process them (e.g. mean-centre and scale). Even though a random forest can tell us which features are important and which are not, this is not enough. A major breakthrough in this area came with deep learning neural networks, where the system itself finds the features that are useful and uses them for learning. Deep learning now plays a major role in today's speech-processing and image-processing technology.