November 01, 2022

Internal working of Neural Networks in Deep Learning


In this video, we will learn how a deep neural network is trained and the components that make it possible to achieve state-of-the-art results with neural networks.

Neural networks can learn very complex, non-linear boundaries between the different classes of the target variable in a dataset. Have you ever wondered why? In a neural network we have:

  • Multiple hidden layers along with one input layer and one output layer.
  • Weight matrices that transform each layer's activations into the inputs of the next layer.
  • The feedforward and backpropagation algorithms, which let the model learn parameters that optimize the cost function (a minimal sketch of these pieces follows this list).
  • Activation functions, one of the most important components of a neural network, which introduce non-linearity into the decision boundaries learned by the model.
  • A learning rate, used by gradient descent (along with other hyperparameters) to update the weights.
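Here is a minimal sketch (not the video's exact code, and assuming NumPy is available) that ties these pieces together: one hidden layer, weight matrices, a sigmoid activation, a feedforward pass, backpropagation, and a gradient-descent update with a learning rate, trained on a toy XOR-style dataset.

import numpy as np

rng = np.random.default_rng(0)

# Toy XOR dataset: 4 samples, 2 features, binary target (non-linear boundary).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weight matrices and biases: input(2) -> hidden(8) -> output(1).
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0  # hyperparameter used by gradient descent

for epoch in range(10000):
    # --- Feedforward ---
    z1 = X @ W1 + b1      # pre-activation of the hidden layer
    a1 = sigmoid(z1)      # non-linearity from the activation function
    z2 = a1 @ W2 + b2
    a2 = sigmoid(z2)      # predicted probabilities

    # Mean squared error cost (kept simple for the sketch).
    cost = np.mean((a2 - y) ** 2)

    # --- Backpropagation (chain rule, layer by layer) ---
    dz2 = (a2 - y) * a2 * (1 - a2)       # dCost/dz2 (up to a constant factor)
    dW2 = a1.T @ dz2 / len(X)
    db2 = dz2.mean(axis=0, keepdims=True)

    dz1 = (dz2 @ W2.T) * a1 * (1 - a1)   # propagate the error backwards
    dW1 = X.T @ dz1 / len(X)
    db1 = dz1.mean(axis=0, keepdims=True)

    # --- Gradient-descent update with the learning rate ---
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

print("final cost:", cost)
print("predictions:", a2.round(2).ravel())

Every step in the training loop corresponds to one of the bullets above; real frameworks automate the backward pass, but the mechanics are the same.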

And the list goes on. The main purpose of learning the internal working of a neural network is to build intuition about which hyperparameters work best, for example which learning rate to choose so that the model does not deviate from the global minimum.
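As a hypothetical illustration (not from the video) of why the learning rate matters, consider minimizing the one-dimensional cost (w - 3)^2 with the gradient-descent update rule w := w - lr * dCost/dw:

def gradient(w):
    return 2 * (w - 3)        # derivative of the cost (w - 3)^2

for lr in (0.1, 1.1):         # a reasonable rate vs. one that is too large
    w = 0.0
    for _ in range(20):
        w -= lr * gradient(w) # gradient-descent update
    print(f"learning rate {lr}: w after 20 steps = {w:.3f}")

# lr = 0.1 converges towards the minimum at w = 3,
# while lr = 1.1 overshoots further on every step and diverges.

The same trade-off appears when training a full network: too small a learning rate trains slowly, too large a one can make the cost blow up instead of settling near a minimum.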

Introduction to Neural Network: https://www.geeksforgeeks.org/introduction-artificial-neural-network-set-2/