Introduction to PyTorch | Quickstart

This section runs through the API for common tasks in machine learning. Refer to the links in each section to dive deeper.

Working with data

PyTorch has two primitives for working with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. Dataset stores the samples and their corresponding labels, while DataLoader wraps an iterable around the Dataset.
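
For data that does not ship with a library, the same two primitives apply. Below is a minimal sketch of a custom map-style Dataset; the class name and the random tensors are made-up placeholders:

import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    """A minimal map-style Dataset holding samples and labels in memory."""
    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        # Number of samples in the dataset
        return len(self.samples)

    def __getitem__(self, idx):
        # Return one (sample, label) pair
        return self.samples[idx], self.labels[idx]

# Hypothetical data: 100 random 28x28 single-channel "images" with integer labels
ds = MyDataset(torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,)))
print(len(ds), ds[0][0].shape)  # 100 torch.Size([1, 28, 28])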

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt

PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio, all of which include datasets. In this tutorial, we will use a TorchVision dataset.

The torchvision.datasets module contains Dataset objects for many real-world vision datasets such as CIFAR and COCO (full list here). In this tutorial, we use the FashionMNIST dataset. Every TorchVision Dataset includes two arguments, transform and target_transform, which modify the samples and the labels respectively.
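
The Lambda transform imported above is one way to supply a target_transform. A minimal sketch (not used in the rest of this tutorial) that one-hot encodes the integer label:

# Turn an integer label y into a length-10 one-hot float tensor
one_hot = Lambda(lambda y: torch.zeros(
    10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1))
print(one_hot(3))  # tensor([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])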

# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root='data',
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root='data',
    train=False,
    download=True,
    transform=ToTensor(),
)

Output:

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw

We pass the Dataset as an argument to DataLoader. This wraps an iterable around our dataset, and supports automatic batching, sampling, shuffling, and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 samples and their corresponding labels.

batch_size = 64

# Create data loaders
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtpe)
    break

Output:

Shape of X [N, C, H, W]:  torch.Size([64, 1, 28, 28])
Shape of y:  torch.Size([64]) torch.int64
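
The loaders above visit samples in a fixed order and in the main process. The shuffling and multiprocess loading mentioned earlier are opt-in arguments; a sketch, where num_workers=2 is an arbitrary choice:

# Reshuffle the training data every epoch and load batches in two worker processes
shuffled_loader = DataLoader(training_data, batch_size=batch_size,
                             shuffle=True, num_workers=2)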

Read more about loading data in PyTorch.

Creating Models

To define a model in PyTorch, we create a class that inherits from nn.Module. We define the layers of the network in the __init__ function and specify how data passes through the network in the forward function. To accelerate operations in the neural network, we move it to the GPU if one is available.

# Get cpu or gpu device for training
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)

Output:

Using cuda device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)

Read more about building neural networks in PyTorch.

Optimizing the Model Parameters

To train a model, we need a loss function and an optimizer.

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
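
Any other optimizer in torch.optim is constructed the same way. As an aside not used in the rest of this tutorial, a sketch with Adam (the lr value here is just a common starting point):

# Adam follows the same constructor pattern as SGD
adam_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)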

In a single training loop, the model makes predictions on the training dataset (fed to it in batches) and backpropagates the prediction error to adjust the model's parameters.

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)
        
        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)
        
        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")

We also check the model's performance against the test dataset to ensure it is actually learning.

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            # argmax over dim 1 (the class dimension) gives the predicted class per sample
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100 * correct):>0.1f}%, Avg loss: {test_loss:>8f}\n")

The training process runs for several iterations (epochs). In each epoch, the model learns parameters to make better predictions. We print the model's accuracy and loss at each epoch; we would like to see the accuracy increase and the loss decrease with every epoch.

epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-----------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")

Output:

Epoch 1
-------------------------------
loss: 2.296330  [    0/60000]
loss: 2.292454  [ 6400/60000]
loss: 2.276433  [12800/60000]
loss: 2.274833  [19200/60000]
loss: 2.250950  [25600/60000]
loss: 2.225070  [32000/60000]
loss: 2.222952  [38400/60000]
loss: 2.200769  [44800/60000]
loss: 2.202750  [51200/60000]
loss: 2.155594  [57600/60000]
Test Error:
 Accuracy: 40.0%, Avg loss: 2.161115

Epoch 2
-------------------------------
loss: 2.169147  [    0/60000]
loss: 2.165468  [ 6400/60000]
loss: 2.118014  [12800/60000]
loss: 2.129221  [19200/60000]
loss: 2.074899  [25600/60000]
loss: 2.022606  [32000/60000]
loss: 2.033795  [38400/60000]
loss: 1.976709  [44800/60000]
loss: 1.982757  [51200/60000]
loss: 1.881978  [57600/60000]
Test Error:
 Accuracy: 57.4%, Avg loss: 1.902724

Epoch 3
-------------------------------
loss: 1.934711  [    0/60000]
loss: 1.906422  [ 6400/60000]
loss: 1.806563  [12800/60000]
loss: 1.832814  [19200/60000]
loss: 1.717731  [25600/60000]
loss: 1.673628  [32000/60000]
loss: 1.679022  [38400/60000]
loss: 1.602205  [44800/60000]
loss: 1.623030  [51200/60000]
loss: 1.492521  [57600/60000]
Test Error:
 Accuracy: 61.5%, Avg loss: 1.529054

Epoch 4
-------------------------------
loss: 1.592845  [    0/60000]
loss: 1.556097  [ 6400/60000]
loss: 1.417763  [12800/60000]
loss: 1.478243  [19200/60000]
loss: 1.357680  [25600/60000]
loss: 1.356057  [32000/60000]
loss: 1.360733  [38400/60000]
loss: 1.298324  [44800/60000]
loss: 1.329920  [51200/60000]
loss: 1.219030  [57600/60000]
Test Error:
 Accuracy: 63.4%, Avg loss: 1.250318

Epoch 5
-------------------------------
loss: 1.324846  [    0/60000]
loss: 1.306784  [ 6400/60000]
loss: 1.145549  [12800/60000]
loss: 1.245576  [19200/60000]
loss: 1.123671  [25600/60000]
loss: 1.150098  [32000/60000]
loss: 1.164900  [38400/60000]
loss: 1.111517  [44800/60000]
loss: 1.147514  [51200/60000]
loss: 1.059701  [57600/60000]
Test Error:
 Accuracy: 64.6%, Avg loss: 1.081113

Done!

Read more about Training your model.

Saving Models

A common way to save a model is to serialize the internal state dictionary (which contains the model parameters).

torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")

Loading Models

Loading a model involves two steps: re-creating the model structure and loading the state dictionary into it.

model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
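
If the weights were saved from a GPU run and are being loaded on a CPU-only machine, torch.load accepts a map_location argument; a sketch:

# Remap GPU-saved tensors onto the CPU at load time
model.load_state_dict(torch.load("model.pth", map_location=torch.device("cpu")))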

This model can now be used to make predictions.

classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]

model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    pred = model(x)
    # pred has shape [1, 10] (a batch of one); pred[0] drops the batch dimension, and argmax(0) picks the predicted class
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: {actual}')

Output:

Predicted: "Ankle boot", Actual: "Ankle boot"

Read more about Saving & Loading your model.
