October 28, 2022 | 3.5K Views

Multi-Layer Perceptron Learning in Deep Learning


In this video, we will learn what Multi-Layer Perceptron Learning is and how it works. This is a concept from Neural Networks, where "perceptron" is essentially another name for the neurons that make up the hidden layers.

The main components of a multilayer perceptron are:

  • Input Layer - Receives the input features on which we would like to build our model.
  • Hidden Layer - This is where our model learns non-linear decision boundaries and complex functions.
  • Output Layer - This is where the objective of building a multilayer perceptron is realized, as it produces the final output, i.e., the model's predictions.
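The three layers above can be sketched as a single forward pass. The sizes below (4 input features, 5 hidden units, 1 output, 3 samples) are arbitrary choices for illustration, not anything prescribed by the video:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer: a batch of 3 samples with 4 features each
X = rng.normal(size=(3, 4))

# Hidden layer: weights, bias, and a non-linear activation,
# where the non-linear decision boundaries are learned
W1 = rng.normal(size=(4, 5))
b1 = np.zeros(5)
hidden = np.tanh(X @ W1 + b1)

# Output layer: produces the final prediction
W2 = rng.normal(size=(5, 1))
b2 = np.zeros(1)
output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # sigmoid for a binary prediction

print(output.shape)  # (3, 1): one prediction per sample
```

Each `@` is a matrix multiplication of the previous layer's outputs with that layer's weights, followed by a bias and an activation.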

What is an Activation Function?

Activation functions are necessary for learning a non-linear, complex model. A very common activation function is the Sigmoid function, which is used for binary classification. Other examples of activation functions are ReLU, tanh, SELU, etc.
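As a quick illustration, here is a minimal sketch of three of the activation functions mentioned above, evaluated on a few sample values:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into (0, 1); common for binary classification outputs
    return 1 / (1 + np.exp(-x))

def relu(x):
    # Zero for negative inputs, identity for positive ones
    return np.maximum(0, x)

def tanh(x):
    # Squashes any real input into (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))
print(relu(x))   # [0. 0. 2.]
print(tanh(x))
```

Each of these is applied element-wise to a layer's pre-activation values; the non-linearity is what lets stacked layers represent functions a single linear layer cannot.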

Each layer has its own set of weights/parameters, which the model learns through Forward and Backward Propagation. Optimizers like gradient descent, together with a loss function, all serve to adjust these weights and parameters so that the predictions made by the model are highly accurate.
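The forward/backward propagation loop described above can be sketched end to end. This is an illustrative toy setup, not code from the video: a tiny MLP trained with plain gradient descent on binary cross-entropy loss, on synthetic data where the label depends on the sum of two features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: label is 1 when the two features sum to a positive number
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Randomly initialized weights for one hidden layer of 8 units
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
losses = []
for _ in range(200):
    # Forward propagation
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output

    # Binary cross-entropy loss
    losses.append(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

    # Backward propagation: gradients layer by layer, output to input
    dz2 = (p - y) / len(X)                    # gradient at output pre-activation
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)                   # tanh derivative
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # Gradient descent update of every layer's parameters
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The loss printed at the end should be well below the initial one, which is exactly the sense in which the optimizer "tunes" the weights of every layer.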