Linear Regression using TensorFlow 2.0 — A Simple Auto Back Propagation Example

Linear regression implemented from scratch using auto gradient generation in TensorFlow 2.0

Hasnain Naeem
4 min read · Jul 20, 2019

In this article, we are going to build our own linear regression model from scratch using TensorFlow 2.0. We’ll be implementing our own loss & forward propagation methods. For backward propagation, we will utilize auto gradient generation in TensorFlow. Let’s jump in.

1. Begin by importing the libraries:
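A minimal sketch of the imports this walkthrough relies on (assuming TensorFlow 2.x and Matplotlib):

```python
import tensorflow as tf
import matplotlib.pyplot as plt
```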

2. Set parameters for our model:
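A sketch of the parameters; the exact values here are assumptions and may differ from the ones used to produce the outputs below:

```python
n_examples = 1000      # number of training examples
training_steps = 1000  # number of gradient-descent iterations
learning_rate = 0.01   # scales each weight/bias update
```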

Explanation

  • n_examples: number of training examples to learn from
  • training_steps: number of times we are going to iterate through the training examples for learning
  • learning_rate: the gradient of the loss is multiplied by this number to produce the step that is subtracted from the weights and biases on every update.

3. Let’s create a dataset with points at some random distances from a line:
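A sketch of the dataset creation. The slope 6.0 and intercept -5.0 come from the comparison in step 8; the Gaussian noise is an assumption:

```python
# Random x values and targets scattered around the line y = 6x - 5.
X = tf.random.normal([n_examples])
noise = tf.random.normal([n_examples])
y = X * 6.0 - 5.0 + noise
```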

4. Plotting the training dataset:
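A plotting sketch using Matplotlib:

```python
plt.scatter(X, y, s=5)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Training dataset")
plt.show()
```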

A plot of the training dataset

5. Now, we’ll code the main methods in our model:
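A sketch of these methods; the function signatures and the random initialization of W and B are assumptions consistent with the explanation below:

```python
# Trainable parameters, initialized randomly.
W = tf.Variable(tf.random.normal([]))
B = tf.Variable(tf.random.normal([]))

def prediction(x, w, b):
    # Forward pass: y = x * w + b
    return x * w + b

def loss(x, y, w, b):
    # Mean squared error between predictions and targets.
    error = prediction(x, w, b) - y
    return tf.reduce_mean(tf.square(error))

def grad(x, y, w, b):
    # Record the forward pass on a tape, then differentiate
    # the loss with respect to the weights and biases.
    with tf.GradientTape() as tape:
        loss_value = loss(x, y, w, b)
    return tape.gradient(loss_value, [w, b])
```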

Explanation:

  • prediction(): finds “y” by multiplying the input “x” by the weights and adding the bias.
  • loss(): measures how far the predicted output is from the actual output, using mean squared error.
  • grad(): automatically computes the gradient of the loss with respect to the weights and biases.

6. Before proceeding with training, let us check the initial loss:
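One way to print it, following the naming above:

```python
print("Initial loss: {:.3f}".format(loss(X, y, W, B).numpy()))
```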

Output: Initial loss: 68.032 (Yours may differ)

7. Let us begin training!
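A sketch of the training loop, assuming plain gradient-descent updates via assign_sub():

```python
for step in range(training_steps):
    dW, dB = grad(X, y, W, B)
    # Step against the gradient to reduce the loss.
    W.assign_sub(learning_rate * dW)
    B.assign_sub(learning_rate * dB)
```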

Explanation of the flow of execution:
The dataset values (x, y), along with the weights (W) and biases (B), are passed to the grad() method. grad() passes them to the loss() method, which in turn calls prediction() and computes the difference between the actual value “y” and the predicted value.

The calculated loss is passed to the tape.gradient() method, and that’s where the magic happens! TensorFlow automatically calculates the gradient of the loss with respect to the weights and biases.

The calculated gradient is then used to update the values of the weights and biases. This is what we call learning. The gradient can be negative or positive; after being multiplied by learning_rate, it is subtracted from the weights and biases. In this way, the weights and biases are adjusted to change the equation of the line (x * w + b) until we find a line of best fit, which is illustrated later in the article.

8. Checking final loss and printing weights & biases:
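A sketch, following the naming above:

```python
print("Final loss: {:.3f}".format(loss(X, y, W, B).numpy()))
print("W = {}, B = {}".format(W.numpy(), B.numpy()))
```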

Output: Final loss: 1.068
W = 5.975897789001465, B = -4.990655899047852
Compared with m = 6.0, c = -5.0 of the original line

9. Plotting the line of best fit along with our dataset:
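A plotting sketch; the orange color matches the description below:

```python
plt.scatter(X, y, s=5, label="data")
plt.plot(X, prediction(X, W, B), color="orange", label="line of best fit")
plt.legend()
plt.show()
```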

Explanation

For every input “x” our model predicts a value “y”, and (x, y) is a point on the orange line. The line is fitted so that the sum of squared vertical distances between the dataset points and the line is minimized; this is exactly what minimizing the mean squared error loss achieves.

10. Predict values using weights & biases learned by our model:
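A sketch; the input x = 2.0 is an assumption for illustration, so the printed number will not match the output shown below exactly:

```python
x_new = 2.0  # hypothetical input value
y_new = prediction(x_new, W, B)
print(y_new.numpy())
```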

Output: 7.065402507781982

11. Plotting our prediction:
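A sketch that marks the predicted point:

```python
plt.scatter(x_new, y_new.numpy(), color="red", label="prediction")
plt.legend()
plt.show()
```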

Output: a plot of the predicted point.

12. Plotting dataset, line of best fit & predicted point together:
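A sketch combining the three previous plots:

```python
plt.scatter(X, y, s=5, label="data")
plt.plot(X, prediction(X, W, B), color="orange", label="line of best fit")
plt.scatter(x_new, y_new.numpy(), color="red", zorder=3, label="prediction")
plt.legend()
plt.show()
```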

Output: a plot showing the dataset, the line of best fit, and the predicted point together.

Footnote:

We implemented a linear regression model from scratch using TensorFlow 2.0. It was a simple model that predicted “y” given an input value “x”. Try building a model that predicts multiple output values from multiple input values; the procedure is almost the same. You only have to replicate the same concepts in higher dimensions. That will be a nice exercise!
