A Gentle Introduction to Linear Regression in PyTorch
Hi Proletariats!
What the Hell is PyTorch?
According to the official PyTorch documentation:
It's a Python-based scientific computing package targeted at two sets of audiences:
- A replacement for NumPy to use the power of GPUs
- A deep learning research platform that provides maximum flexibility and speed
In simple words, PyTorch is an open-source Python machine learning package based on Torch. Torch itself is a scientific computing framework built on the Lua programming language, an extensible, lightweight language written in C.
So Why PyTorch?
PyTorch allows you to define your computational graph dynamically, as the code runs. In most other libraries, like TensorFlow, you have to define the entire computational graph before you can run your model.
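To make "dynamic" concrete, here is a minimal sketch of my own (not from the original post): the graph is built as the code runs, so ordinary Python control flow can change its shape on every pass, and autograd still works.
import torch

x = torch.ones(3, requires_grad=True)

# The graph is built on the fly: this loop may run a different number
# of times depending on the data, and gradients still flow through it.
y = x * 2
while y.norm() < 100:
    y = y * 2

y.sum().backward()
print(x.grad)  # gradient of sum(y) w.r.t. x, through however many doublings ran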
Some Concepts Related to PyTorch
- PyTorch Tensors are very similar to NumPy arrays, with the added advantage that they can run on a GPU. GPU acceleration speeds up numerical computation, which can make neural networks 50 times faster or more (see the short sketch after this list).
- PyTorch Autograd provides automatic differentiation, i.e., it computes the derivatives of functions for you. In neural networks, automatic differentiation is what powers the backward pass. So what is a backward pass? Weights are randomly initialized to values at or near zero, and the backward pass propagates the error from the output layer back toward the input, computing the gradients used to adjust those weights.
- The PyTorch nn module is used to construct neural networks. The nn module relies on Autograd to differentiate the models it builds.
- The PyTorch optim module updates the weights. It provides implementations of commonly used optimization algorithms such as AdaGrad, RMSProp, and Adam.
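To see these pieces in action, here is a minimal sketch of my own (not from the original post) showing a tensor, a GPU transfer when one is available, and Autograd computing a derivative:
import torch

# Tensors: NumPy-like arrays that can also live on a GPU.
a = torch.randn(3, 3)
if torch.cuda.is_available():
    a = a.to('cuda')  # move to the GPU only if one is present

# Autograd: track operations on x, then differentiate automatically.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x  # y = x^2 + 3x
y.backward()        # dy/dx = 2x + 3
print(x.grad)       # tensor(7.) at x = 2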
Enough About PyTorch, Let's Jump into Linear Regression
Linear Regression Intuition with PyTorch
Precisely speaking, Linear Regression is just modeling the relationship between two variables, X and Y. In this section, we will build a simple model that predicts Y for a given X by learning the linear relationship between the two variables. Let's understand it with a simple example: given a person's years of experience, we will predict their salary.
In PyTorch this can be achieved using a type of layer known as a Linear layer; this layer learns the relationship between the X and Y variables.
To solve this problem we will be using a feed-forward neural network, so let's understand feed-forward neural nets in a bit more detail.
A feed-forward neural net learns by passing an input through its layers all the way to the output, where the result is compared to the expected output to determine how well the model performed. The learnable parameters live in the layers, so learning happens by adjusting the weights inside them.
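As a quick illustration (the layer sizes here are my own choice, not from the original post), a feed-forward net in PyTorch is just a stack of layers the input flows through from front to back:
import torch.nn as nn

# A tiny feed-forward net: input -> hidden layer -> output.
feedforward = nn.Sequential(
    nn.Linear(1, 8),  # input to hidden
    nn.ReLU(),        # non-linearity
    nn.Linear(8, 1),  # hidden to output
)
For plain linear regression, though, no hidden layers are needed: a single Linear layer is the whole network, and that is exactly what we build below.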
Enough Talking, Let's Do Some Super Exciting Stuff with PyTorch
Step I → First things first, let's import the important libraries.
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
Step II → Building a Toy Dataset
We will build a simple dataset consisting of an input X and a target y. The code below creates a small dataset that we can use to break down linear regression in PyTorch.
X_val = [i for i in range(11)]
y_val = [2*i + 1 for i in X_val]
The above code is just simple data-point generation using list comprehensions in Python. We create data points for X_val in the range 0 to 10, and data points for y_val that follow the rule y = 2x + 1.
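As a quick sanity check (an optional addition of mine), printing the pairs makes the y = 2x + 1 relationship visible:
for x, y in zip(X_val, y_val):
    print(x, y)  # (0, 1), (1, 3), (2, 5), ... each y equals 2x + 1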
Step III → Defining Linear Regression Model
class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        out = self.linear(x)
        return out

model = LinearRegression(input_size=1, output_size=1)
The above code snippet looks a little tricky, but not to worry, we will break it down in simple language. LinearRegression is a class consisting of two functions: __init__ and forward. The __init__ function initializes the class and sets up its layers and variables; here it creates a single linear layer. The forward function performs the forward pass: it takes the input data and maps it through the linear layer from input to output. Finally, we instantiate the class as model with an input size and output size of 1, since both X and y are single numbers.
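If you are curious, you can inspect the freshly built model (this check is my addition, not part of the original walkthrough):
print(model)
# LinearRegression(
#   (linear): Linear(in_features=1, out_features=1, bias=True)
# )

for name, param in model.named_parameters():
    print(name, param.shape)  # linear.weight: (1, 1) and linear.bias: (1,)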
Step IV → Defining the Loss Function and Optimization Algorithm
criterion = nn.MSELoss()
lr = 0.001
opt = torch.optim.SGD(model.parameters(), lr=lr)
We have already discussed optimization algorithms in the section above. Here we use Mean Squared Error as our loss function to measure the model's performance, and SGD (Stochastic Gradient Descent) as the optimizer that updates the weights to make our predictions better on each iteration.
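To see exactly what MSELoss computes, here is a tiny check of my own: it is nothing more than the mean of the squared differences between predictions and targets.
import torch
import torch.nn as nn

pred = torch.tensor([2.0, 4.0, 6.0])
target = torch.tensor([1.0, 4.0, 8.0])

manual_mse = ((pred - target) ** 2).mean()
print(manual_mse)                  # tensor(1.6667)
print(nn.MSELoss()(pred, target))  # same value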
Step V → Kicking off Model Training
X_train = np.array(X_val, dtype=np.float32)
X_train = X_train.reshape(-1, 1)
y_train = np.array(y_val, dtype=np.float32)
y_train = y_train.reshape(-1, 1)

epochs = 100
for epoch in range(epochs):
    epoch = epoch + 1
    inputs = Variable(torch.from_numpy(X_train))
    labels = Variable(torch.from_numpy(y_train))

    opt.zero_grad()                    # reset gradients from the previous step
    outputs = model(inputs)            # forward pass
    loss = criterion(outputs, labels)  # compare predictions with actual values
    loss.backward()                    # backward pass: compute gradients
    opt.step()                         # update the weights

    print('epoch {} loss {}'.format(epoch, loss.item()))
Hold on! Stuck on the above code? Not to worry, we will crack it down step by step.
First, we convert the X_val and y_val lists to NumPy arrays so they can easily be converted into PyTorch tensors. We use float32 because the model's weights are float32 by default, and mixing float64 inputs with float32 weights would raise an error. We also reshape the arrays from one dimension to shape (11, 1), since nn.Linear expects a 2-D input of (batch_size, input_size). Then we train for 100 epochs by explicitly defining the number of times we want to train our network on the 11 data points, denoted by epochs=100.
Moving forward, we start a loop that runs for the number of epochs. On each pass through the loop, we feed our PyTorch tensors to model(inputs), giving us predicted values in the form of outputs. We then calculate the loss with the Mean Squared Error function (loss = criterion(outputs, labels)), comparing the values predicted by the model against the actual values.
Now you may be wondering why I used the zero_grad() function here. PyTorch accumulates gradients by default, so opt.zero_grad() resets the gradients to zero before each iteration; otherwise the gradients from every previous run would keep adding up. We then carry out backpropagation with loss.backward(), and opt.step() performs a parameter update based on the current gradients and the optimizer's update rule.
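To see the accumulation behaviour for yourself, here is a tiny demo of my own: calling backward() twice without zeroing doubles the stored gradient.
import torch

w = torch.tensor(3.0, requires_grad=True)

(w * 2).backward()
print(w.grad)   # tensor(2.) -- the gradient of 2*w with respect to w

(w * 2).backward()
print(w.grad)   # tensor(4.) -- accumulated on top of the old value, not replaced

w.grad.zero_()  # this reset is what opt.zero_grad() does for every parameter
print(w.grad)   # tensor(0.)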
Step VI → Interpreting the Results
So we are in the endgame now; it's time to interpret the results. Before making predictions on new data, we switch the model to evaluation mode with PyTorch's model.eval() function (it matters mostly for layers like dropout and batch norm, but it is a good habit).
model.eval()
new_predictions = model(Variable(torch.Tensor([[4.0]])))  # e.g., predict y for x = 4
Now we can start passing new examples to our model. In the code above, the tensor holds the new data point (here x = 4.0) and new_predictions holds the predicted output, which should land close to 2*4 + 1 = 9.
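Since we generated the data from y = 2x + 1, a satisfying final check (my addition) is whether the model actually recovered those numbers:
w = model.linear.weight.item()
b = model.linear.bias.item()
print('learned weight: {:.3f} (true value: 2)'.format(w))
print('learned bias:   {:.3f} (true value: 1)'.format(b))
After 100 epochs at this learning rate the values may not be exact, but they should be closing in on 2 and 1; training longer or nudging the learning rate up will tighten them.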
If you like this post, please follow me as I will be posting some awesome topics on Machine Learning as well as Deep Learning.
Cheers!