In the following days, I am going to write some tutorials discussing major concepts in machine learning:
- SVM — (Support Vector Machines, the most famous algorithm, widely used and acknowledged until recently)
- Random Forest — (Widely used in Kaggle competitions recently; also used in Microsoft Kinect)
- Neural Networks, Backpropagation — (Important for understanding the concept of deep learning)
- RBM — (Restricted Boltzmann Machines, the building blocks of deep learning)
- Autoencoders — (Used in Google's networks)
There are many software packages available for SVM and Random Forest, and they are fairly simple to use. The difficult part of using these supervised algorithms is selecting good features (feature engineering). We need to choose which features (characteristics) work best for predicting the output and also pre-process them (mean-center, scale). Even though a random forest can tell us which features are important and which are not, this is not enough. A major breakthrough in this area came with deep learning neural networks, where the system itself finds the useful features and uses them for learning. Deep learning now plays a major role in today's speech processing and image processing technology.
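To make the feature-engineering point concrete, here is a minimal sketch of the pre-processing and feature-importance workflow, using scikit-learn (my choice of package; the post does not name one). The dataset and parameters are illustrative, not anything specific to the tutorials above.

```python
# Illustrative sketch: standardize features, then ask a random forest
# which features mattered. scikit-learn and the iris dataset are
# assumptions for the example, not part of the original post.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Pre-process: mean-center each feature and scale it to unit variance.
X_scaled = StandardScaler().fit_transform(X)

# A random forest can report how much each feature contributed.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_scaled, y)
importances = clf.feature_importances_  # normalized, sums to 1.0

for name, imp in zip(load_iris().feature_names, importances):
    print(f"{name}: {imp:.3f}")
```

This shows the "not enough" part of the argument: the forest ranks the features we hand-crafted, but it cannot invent better ones — that is what deep learning adds.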