Important Loss Functions Used in Deep Learning

1. L1 Loss (Least Absolute Deviation)

2. L2 Loss (Least Square Error)

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Predictions sweep from -1 to 1; the target is fixed at 0
x_guess = tf.lin_space(-1., 1., 100)
x_actual = tf.constant(0., dtype=tf.float32)

# Based on the equations: L1 is the absolute deviation, L2 the squared error
l1_loss = tf.abs(x_guess - x_actual)
l2_loss = tf.square(x_guess - x_actual)

# TF1-style session; under TF2 use tf.compat.v1.Session (or eager tensors)
with tf.Session() as sess:
    x_, l1_, l2_ = sess.run([x_guess, l1_loss, l2_loss])

plt.plot(x_, l1_, label='l1_loss')
plt.plot(x_, l2_, label='l2_loss')
plt.legend()
plt.show()

3. Huber Loss
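The Huber formula appears as an image in the original article. As a sketch, a minimal NumPy version (the function name and the default delta=1.0 are my choices, not the article's) is quadratic for residuals up to 𝛿 and linear beyond it:

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Quadratic near zero, linear for large residuals (robust to outliers)."""
    r = np.abs(y_true - y_pred)
    quadratic = 0.5 * r ** 2              # used where |r| <= delta
    linear = delta * r - 0.5 * delta ** 2  # used where |r| > delta
    return np.where(r <= delta, quadratic, linear)
```

The linear branch is chosen so the two pieces meet with matching value and slope at |r| = 𝛿.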

4. Pseudo-Huber loss function

Here 𝛿 is a tunable parameter; the larger its value, the steeper the linear parts on both sides.
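The pseudo-Huber formula was also an image in the original. Assuming the standard definition 𝛿²(√(1 + (r/𝛿)²) − 1), a small NumPy sketch (function name mine) is:

```python
import numpy as np

def pseudo_huber_loss(y_true, y_pred, delta=1.0):
    """Smooth approximation of Huber loss, differentiable everywhere."""
    r = y_true - y_pred
    return delta ** 2 * (np.sqrt(1.0 + (r / delta) ** 2) - 1.0)
```

Unlike Huber loss it has no branch point, which makes its derivatives continuous at every order.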

5. Hinge Loss
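The hinge-loss equation is shown as an image in the article. Assuming the usual SVM form max(0, 1 − y·ŷ) with labels y ∈ {−1, +1}, a minimal sketch (function name mine) is:

```python
import numpy as np

def hinge_loss(y_true, y_pred):
    """Zero once the prediction clears the margin, linear penalty otherwise."""
    return np.maximum(0.0, 1.0 - y_true * y_pred)
```

Correctly classified points beyond the margin (y·ŷ ≥ 1) contribute no loss, which is what makes hinge loss margin-maximizing.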

6. Cross Entropy Loss

[Equation and graph for cross-entropy loss]
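The equation shown in the article's image is the binary cross-entropy −[y·log(p) + (1−y)·log(1−p)]. A minimal NumPy sketch (function name and the eps clamp are mine, added to avoid log(0)):

```python
import numpy as np

def cross_entropy_loss(y_true, p, eps=1e-12):
    """Binary cross-entropy between a 0/1 label and a predicted probability."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return -(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
```

The loss grows without bound as the predicted probability of the true class approaches zero, which is the behaviour the article's graph illustrates.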

7. Sigmoid-Cross-Entropy Loss

[Equation for sigmoid cross-entropy loss]
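This loss applies a sigmoid to raw logits before the cross-entropy. A sketch of the numerically stable form used by TensorFlow's `tf.nn.sigmoid_cross_entropy_with_logits`, max(x, 0) − x·z + log(1 + e^(−|x|)) (function name mine):

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    """Stable sigmoid cross-entropy: avoids exp overflow for large |logits|."""
    x, z = logits, labels
    return np.maximum(x, 0.0) - x * z + np.log1p(np.exp(-np.abs(x)))
```

This form is algebraically equal to −z·log σ(x) − (1−z)·log(1−σ(x)) but never exponentiates a large positive number.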

8. Softmax Cross-Entropy Loss

Here 𝑓𝑗 is the score assigned to each possible category j, and 𝑓𝑦𝑖 is the score of the ground-truth class.
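In terms of those scores, the loss for one example is −𝑓𝑦𝑖 + log Σⱼ e^𝑓𝑗. A minimal NumPy sketch (function name mine; the max-shift is a standard stability trick, not part of the article's equation):

```python
import numpy as np

def softmax_cross_entropy(scores, y):
    """Softmax cross-entropy for one example: scores over classes, true index y."""
    s = scores - np.max(scores)  # shift for numerical stability; loss unchanged
    log_sum_exp = np.log(np.sum(np.exp(s)))
    return -s[y] + log_sum_exp
```

Subtracting the maximum score leaves the loss unchanged (it cancels between the two terms) while preventing overflow in the exponentials.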
