Preface
Section 1" Getting Started with Deep Learning
Chapter 1: Introduction to Deep Learning
What is deep learning?
Biological and artificial neurons
ANN and its layers
Input layer
Hidden layer
Output layer
Exploring activation functions
The sigmoid function
The tanh function
The Rectified Linear Unit function
The leaky ReLU function
The Exponential Linear Unit function
The Swish function
The softmax function
Forward propagation in ANN
How does ANN learn?
Debugging gradient descent with gradient checking
Putting it all together
Building a neural network from scratch
Summary
Questions
Further reading
Chapter 2: Getting to Know TensorFlow
What is TensorFlow?
Understanding computational graphs and sessions
Sessions
Variables, constants, and placeholders
Variables
Constants
Placeholders and feed dictionaries
Introducing TensorBoard
Creating a name scope
Handwritten digit classification using TensorFlow
Importing the required libraries
Loading the dataset
Defining the number of neurons in each layer
Defining placeholders
Forward propagation
Computing loss and backpropagation
Computing accuracy
Creating a summary
Training the model
Visualizing graphs in TensorBoard
Introducing eager execution
Math operations in TensorFlow
TensorFlow 2.0 and Keras
Bonjour Keras
Defining the model
Defining a sequential model
Defining a functional model
Compiling the model
Training the model
Evaluating the model
MNIST digit classification using TensorFlow 2.0
Should we use Keras or TensorFlow?
Summary
Questions
Further reading
Section 2: Fundamental Deep Learning Algorithms
Chapter 3: Gradient Descent and Its Variants
Demystifying gradient descent
Performing gradient descent in regression
Importing the libraries
Preparing the dataset
Defining the loss function
Computing the gradients of the loss function
Updating the model parameters
Gradient descent versus stochastic gradient descent
Momentum-based gradient descent
Gradient descent with momentum
Nesterov accelerated gradient
Adaptive methods of gradient descent
Setting a learning rate adaptively using Adagrad
Doing away with the learning rate using Adadelta
Overcoming the limitations of Adagrad using RMSProp
Adaptive moment estimation
Adamax - Adam based on infinity-norm
Adaptive moment estimation with AMSGrad
……
Section 3: Advanced Deep Learning Algorithms