Neural Networks For Your Dog - 3.3 Gradient Descent
3.3 Gradient Descent
In this lecture, we’ll see how neural networks learn optimal weights via gradient descent (with gradients computed by the algorithm commonly called “backpropagation”). Then we’ll build a neural network with logistic activation functions and a log loss objective function.
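To make the idea concrete, here is a minimal sketch of gradient descent on log loss for a single logistic neuron. The toy data, learning rate, and step count are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Minimal sketch: one logistic neuron trained by gradient descent on log loss.
# The data, learning rate, and iteration count here are illustrative only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(X, y, w, b):
    p = np.clip(sigmoid(X @ w + b), 1e-12, 1 - 1e-12)  # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy binary labels

w, b, lr = np.zeros(2), 0.0, 0.5

loss_before = log_loss(X, y, w, b)
for _ in range(200):
    p = sigmoid(X @ w + b)            # forward pass
    grad_w = X.T @ (p - y) / len(y)   # dLoss/dw via the chain rule
    grad_b = np.mean(p - y)           # dLoss/db
    w -= lr * grad_w                  # gradient descent step
    b -= lr * grad_b
loss_after = log_loss(X, y, w, b)
```

The gradient of log loss through a sigmoid simplifies to the prediction error `p - y`, which is why the update rule above is so compact; the full lecture extends this same idea to every weight in a multi-layer network.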
Code
Course Curriculum
- Introduction
1.1 Introduction
- Perceptron
2.1 MNIST Dataset
2.2 Perceptron Model
2.3 Perceptron Learning Algorithm
2.4 Pocket Algorithm
2.5 Multiclass Support
2.6 Perceptron To Neural Network
- Neural Network
3.1 Simple Images
3.2 Random Weights
3.3 Gradient Descent
3.4 Multiclass Support
3.5 Deep Learning
3.6 Stochastic Gradient Descent
3.7 Going Further