Lecture videos

- (1/4) Step 1~2 summary (1)

- (2/4) Step 1~2 summary (2), Step 3: derivation

- (3/4) Step 4: update (1)

- (4/4) Step 4: update (2), handling Steps 1~4 with a for loop

import torch 
import numpy as np 

Data

- model: $y_i= w_0+w_1 x_i +\epsilon_i = 2.5 + 4x_i +\epsilon_i, \quad i=1,2,\dots,n$

- model: ${\bf y}={\bf X}{\bf W} +\boldsymbol{\epsilon}$

  • ${\bf y}=\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n\end{bmatrix}, \quad {\bf X}=\begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n\end{bmatrix}, \quad {\bf W}=\begin{bmatrix} 2.5 \\ 4 \end{bmatrix}, \quad \boldsymbol{\epsilon}= \begin{bmatrix} \epsilon_1 \\ \vdots \\ \epsilon_n\end{bmatrix}$
torch.manual_seed(43052)
n=100
ones= torch.ones(n)
x,_ = torch.randn(n).sort() ## sorted inputs, for convenience when plotting
X = torch.vstack([ones,x]).T ## design matrix [1, x] of shape (n, 2)
W = torch.tensor([2.5,4]) ## true coefficients (w0, w1)
ϵ = torch.randn(n)*0.5 ## noise
y = X@W + ϵ ## observed responses
ytrue = X@W ## noiseless responses
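
As a quick visual check (not part of the lecture), we can plot the simulated points against the true line; this sketch assumes matplotlib is installed.

import matplotlib.pyplot as plt 
plt.plot(x, y, 'o', alpha=0.5) ## noisy observations
plt.plot(x, ytrue, '--') ## the true line y = 2.5 + 4x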

step1~2 summary

Method 1: declare the model directly + declare the loss function directly

What1=torch.tensor([-5.0,10.0],requires_grad=True) 
yhat1=X@What1
loss1=torch.mean((y-yhat1)**2) 
loss1
tensor(85.8769, grad_fn=<MeanBackward0>)

Method 2: declare the model with torch.nn (bias=False) + declare the loss directly

net2=torch.nn.Linear(in_features=2,out_features=1,bias=False) 
net2.weight.data= torch.tensor([[-5.0,10.0]]) 
yhat2=net2(X) 
loss2=torch.mean((y.reshape(100,1)-yhat2)**2) 
loss2
tensor(85.8769, grad_fn=<MeanBackward0>)
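
Note that torch.nn.Linear stores its weight with shape (out_features, in_features), which is why the weight is assigned as the row vector [[-5.0,10.0]], and it computes $\hat{\bf y}={\bf X}{\bf W}^\top$ (plus a bias when bias=True). A quick check (not in the lecture) that method 2 reproduces method 1:

torch.allclose(yhat2, X @ net2.weight.T) ## expect True: Linear computes X @ W.T
torch.allclose(yhat2.squeeze(), yhat1) ## expect True: same predictions as method 1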

Method 3: declare the model with torch.nn (bias=True) + declare the loss directly

net3=torch.nn.Linear(in_features=1,out_features=1,bias=True) 
net3.weight.data= torch.tensor([[10.0]])
net3.bias.data= torch.tensor([[-5.0]]) 
yhat3=net3(x.reshape(100,1)) 
loss3=torch.mean((y.reshape(100,1)-yhat3)**2) 
loss3
tensor(85.8769, grad_fn=<MeanBackward0>)
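
Method 3 is the same model written differently: the bias plays the role of $w_0$ and the weight the role of $w_1$, so the predictions coincide with method 2. A quick check (not in the lecture):

torch.allclose(yhat3, yhat2) ## expect True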

Method 4: declare the model directly + use torch.nn.MSELoss() as the loss function

What4=torch.tensor([-5.0,10.0],requires_grad=True) 
yhat4=X@What4 
lossfn=torch.nn.MSELoss() 
loss4=lossfn(y,yhat4) 
loss4
tensor(85.8769, grad_fn=<MseLossBackward>)
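
One caution about torch.nn.MSELoss: its documented call order is lossfn(input, target), i.e. lossfn(yhat4, y). The code above passes the arguments in the opposite order, which is harmless here because the squared error is symmetric in its two arguments:

lossfn(yhat4, y) ## same value as lossfn(y, yhat4) above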

Method 5: declare the model with torch.nn (bias=False) + use torch.nn.MSELoss() as the loss function

net5=torch.nn.Linear(in_features=2,out_features=1,bias=False) 
net5.weight.data= torch.tensor([[-5.0,10.0]]) 
yhat5=net5(X) 
#lossfn=torch.nn.MSELoss() 
loss5=lossfn(y.reshape(100,1),yhat5) 
loss5 
tensor(85.8769, grad_fn=<MseLossBackward>)

Method 6: declare the model with torch.nn (bias=True) + use torch.nn.MSELoss() as the loss function

net6=torch.nn.Linear(in_features=1,out_features=1,bias=True) 
net6.weight.data= torch.tensor([[10.0]])
net6.bias.data= torch.tensor([[-5.0]]) 
yhat6=net6(x.reshape(100,1)) 
loss6=lossfn(y.reshape(100,1),yhat6) 
loss6
tensor(85.8769, grad_fn=<MseLossBackward>)

step3: derivation

loss1

loss1.backward()
What1.grad.data
tensor([-13.4225,  11.8893])
  • We confirmed last time that this matches the theoretical derivative computed by hand (reproduced in the sketch below).
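
For reference, since $loss=\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2$, the gradient with respect to $\hat{\bf W}$ is $-\frac{2}{n}{\bf X}^\top({\bf y}-{\bf X}\hat{\bf W})$. A minimal sketch (not in the lecture) that reproduces the numbers above:

-2/n * X.T @ (y - X@What1.data) ## expect tensor([-13.4225, 11.8893]), matching What1.grad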

loss2

loss2.backward()
net2.weight.grad
tensor([[-13.4225,  11.8893]])

loss3

loss3.backward()
net3.bias.grad,net3.weight.grad
(tensor([[-13.4225]]), tensor([[11.8893]]))

loss4

loss4.backward()
What4.grad.data
tensor([-13.4225,  11.8893])

loss5

loss5.backward()
net5.weight.grad
tensor([[-13.4225,  11.8893]])

loss6

loss6.backward()
net6.bias.grad,net6.weight.grad
(tensor([[-13.4225]]), tensor([[11.8893]]))

step4: update

loss1

What1.data ## before the update 
tensor([-5., 10.])
lr=0.1 
What1.data = What1.data - lr*What1.grad.data ## the update itself 
What1
tensor([-3.6577,  8.8111], requires_grad=True)

loss2

net2.weight.data ## before the update 
tensor([[-5., 10.]])
optmz2 = torch.optim.SGD(net2.parameters(),lr=0.1) 
optmz2.step() ## update 
net2.weight.data ## after the update
tensor([[-3.6577,  8.8111]])
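
Without momentum, torch.optim.SGD's step() applies exactly the manual rule from method 1, weight = weight - lr * weight.grad, so net2 ends up with the same values as What1. A quick check (not in the lecture):

torch.allclose(net2.weight.data.squeeze(), What1.data) ## expect True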

loss3

net3.bias.data,net3.weight.data
(tensor([[-5.]]), tensor([[10.]]))
optmz3 = torch.optim.SGD(net3.parameters(),lr=0.1) 
optmz3.step()
net3.bias.data,net3.weight.data
(tensor([[-3.6577]]), tensor([[8.8111]]))
list(net3.parameters())
[Parameter containing:
 tensor([[8.8111]], requires_grad=True),
 Parameter containing:
 tensor([[-3.6577]], requires_grad=True)]

loss4

What4.data ## before the update 
tensor([-5., 10.])
lr=0.1 
What4.data = What4.data - lr*What4.grad.data ## the update itself 
What4
tensor([-3.6577,  8.8111], requires_grad=True)

loss5

net5.weight.data ## before the update 
tensor([[-5., 10.]])
optmz5 = torch.optim.SGD(net5.parameters(),lr=0.1) 
optmz5.step() ## update 
net5.weight.data ## after the update
tensor([[-3.6577,  8.8111]])

loss6

net6.bias.data,net6.weight.data
(tensor([[-5.]]), tensor([[10.]]))
optmz6 = torch.optim.SGD(net6.parameters(),lr=0.1) 
optmz6.step()
net6.bias.data,net6.weight.data
(tensor([[-3.6577]]), tensor([[8.8111]]))

step1~4: just repeat in a for loop

net=torch.nn.Linear(in_features=2,out_features=1,bias=False) ## define the model 
optmz=torch.optim.SGD(net.parameters(),lr=0.1)
mseloss=torch.nn.MSELoss() 
for epoc in range(100): 
    # step1: yhat 
    yhat=net(X) ## compute yhat 
    # step2: loss
    loss=mseloss(y.reshape(100,1),yhat) 
    # step3: derivation 
    loss.backward() 
    # step4: update
    optmz.step()
    optmz.zero_grad() ## memorize this: reset the gradients each epoch 
list(net.parameters())
[Parameter containing:
 tensor([[2.4459, 4.0043]], requires_grad=True)]
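
Since this is ordinary least squares, the trained weights can be sanity-checked against the closed-form solution $\hat{\bf W}=({\bf X}^\top{\bf X})^{-1}{\bf X}^\top{\bf y}$; a sketch (not in the lecture):

torch.inverse(X.T @ X) @ X.T @ y ## expect values close to the trained [2.4459, 4.0043] and the true [2.5, 4]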

Homework

Run the code below and observe the results.

net=torch.nn.Linear(in_features=2,out_features=1,bias=False) ## define the model 
optmz=torch.optim.SGD(net.parameters(),lr=0.1)
mseloss=torch.nn.MSELoss() 
for epoc in range(100): 
    # step1: yhat 
    yhat=net(X) ## compute yhat 
    # step2: loss
    loss=mseloss(y.reshape(100,1),yhat) 
    # step3: derivation 
    loss.backward() 
    # step4: update
    optmz.step()