This post walks through installing PyTorch on a Raspberry Pi and running a basic MNIST example.
1. Update
sudo apt-get update
sudo apt-get dist-upgrade -y
2. Install dependencies
sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools -y
sudo apt-get install libavutil-dev libavcodec-dev libavformat-dev libswscale-dev -y
3. Virtualenv Install
sudo pip3 install virtualenv
4. Make virtualenv and Activate
virtualenv env
source env/bin/activate
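To confirm the virtualenv is active, the interpreter should now resolve inside env/ (a quick sanity check, not part of the original steps):
which python3  # should print a path ending in env/bin/python3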
5. Install Torch & Torchvision
git clone https://github.com/sungjuGit/PyTorch-and-Vision-for-Raspberry-Pi-4B.git
cd PyTorch-and-Vision-for-Raspberry-Pi-4B
pip install torch-1.4.0a0+f43194e-cp37-cp37m-linux_armv7l.whl
pip install torchvision-0.5.0a0+9cdc814-cp37-cp37m-linux_armv7l.whl
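If the wheels installed correctly, both packages should import and report the versions from the wheel filenames above:
python3 -c "import torch; print(torch.__version__)"  # 1.4.0a0+f43194e
python3 -c "import torchvision; print(torchvision.__version__)"  # 0.5.0a0+9cdc814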
6. MNIST Example
import sys
import sklearn.datasets
import torch
from torch import nn, optim
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import matplotlib.pyplot as plt
print(f"Python: {sys.version}")
print(f"pytorch: {torch.__version__}")
# as_frame=False keeps data/target as NumPy arrays, which torch.tensor() below
# expects; recent scikit-learn versions return a DataFrame by default.
mnist = sklearn.datasets.fetch_openml("mnist_784", data_home="mnist_784", as_frame=False)
x_train = torch.tensor(mnist.data[:60000], dtype=torch.float) / 255
y_train = torch.tensor([int(x) for x in mnist.target[:60000]])
x_test = torch.tensor(mnist.data[60000:], dtype=torch.float) / 255
y_test = torch.tensor([int(x) for x in mnist.target[60000:]])
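# Sanity check (not in the original post): mnist_784 holds 70,000 flattened
# 28x28 images, so the slices above give the usual 60,000/10,000 split.
print(x_train.shape, y_train.shape)  # torch.Size([60000, 784]) torch.Size([60000])
print(x_test.shape, y_test.shape)    # torch.Size([10000, 784]) torch.Size([10000])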
fig, axes = plt.subplots(2, 4, constrained_layout=True)
for i, ax in enumerate(axes.flat):
    ax.imshow(1 - x_train[i].reshape((28, 28)), cmap="gray", vmin=0, vmax=1)
    ax.set(title=f"{y_train[i]}")
    ax.set_axis_off()
plt.show()
def log_softmax(x):
    return x - x.exp().sum(dim=-1).log().unsqueeze(-1)
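# Note (an addition, not from the original post): the version above can overflow,
# since exp() of a large logit is inf. A numerically stable variant subtracts the
# row max first, which leaves the result mathematically unchanged:
def log_softmax_stable(x):
    shifted = x - x.max(dim=-1, keepdim=True).values
    return shifted - shifted.exp().sum(dim=-1, keepdim=True).log()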
def model(x, weights, bias):
    return log_softmax(x @ weights + bias)
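# For reference (an aside, not from the original post): the manual model above
# computes the same kind of output as a single fully connected layer followed
# by log-softmax, using the nn and F imports from the top of the script:
linear_equiv = nn.Linear(784, 10)
def model_nn(x):
    return F.log_softmax(linear_equiv(x), dim=1)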
def neg_likelihood(log_pred, y_true):
    return -log_pred[torch.arange(y_true.size()[0]), y_true].mean()
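# Optional sanity check (not in the original post): the handwritten loss matches
# PyTorch's built-in negative log-likelihood loss with its default mean reduction.
_log_pred = log_softmax(torch.randn(4, 10))
_targets = torch.tensor([0, 1, 2, 3])
assert torch.isclose(neg_likelihood(_log_pred, _targets), F.nll_loss(_log_pred, _targets))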
def accuracy(log_pred, y_true):
    y_pred = torch.argmax(log_pred, dim=1)
    return (y_pred == y_true).to(torch.float).mean()
def print_loss_accuracy(log_pred, y_true, loss_function):
    with torch.no_grad():
        print(f"Loss: {loss_function(log_pred, y_true):.6f}")
        print(f"Accuracy: {100 * accuracy(log_pred, y_true).item():.2f} %")
loss_function = neg_likelihood
batch_size = 100
learning_rate = 0.5
n_epochs = 5
weights = torch.randn(784, 10, requires_grad=True)
bias = torch.randn(10, requires_grad=True)
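# requires_grad=True makes autograd track every operation on these tensors, so
# loss.backward() below can fill in weights.grad and bias.grad.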
for epoch in range(n_epochs):
    # Iterate over mini-batches
    for i in range(x_train.size()[0] // batch_size):
        start_index = i * batch_size
        end_index = start_index + batch_size
        x_batch = x_train[start_index:end_index]
        y_batch_true = y_train[start_index:end_index]
        # Forward
        y_batch_log_pred = model(x_batch, weights, bias)
        loss = loss_function(y_batch_log_pred, y_batch_true)
        # Backward
        loss.backward()
        # Update
        with torch.no_grad():
            weights.sub_(learning_rate * weights.grad)
            bias.sub_(learning_rate * bias.grad)
            # Zero the parameter gradients
            weights.grad.zero_()
            bias.grad.zero_()
    with torch.no_grad():
        y_test_log_pred = model(x_test, weights, bias)
    print(f"End of epoch {epoch + 1}")
    print_loss_accuracy(y_test_log_pred, y_test, loss_function)
    print("---")
Output: the test-set loss and accuracy are printed at the end of each epoch.
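The script also imports optim, DataLoader, and TensorDataset, which the manual loop never uses. For reference, here is a minimal sketch (my own, not from the original post) of the same training loop written with those utilities; it assumes the tensors and hyperparameters defined above:
train_loader = DataLoader(TensorDataset(x_train, y_train), batch_size=batch_size, shuffle=True)
optimizer = optim.SGD([weights, bias], lr=learning_rate)
for epoch in range(n_epochs):
    for x_batch, y_batch_true in train_loader:
        loss = loss_function(model(x_batch, weights, bias), y_batch_true)
        optimizer.zero_grad()  # replaces the manual grad.zero_() calls
        loss.backward()
        optimizer.step()       # replaces the manual sub_() updates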
References