PyTorch Learning - WHAT IS PYTORCH

Reference: https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py

WHAT IS PYTORCH

This is a Python-based scientific computing package that serves two purposes:

  • A replacement for NumPy that can use the power of GPUs
  • A deep learning research platform that provides maximum flexibility and speed

Getting Started

Tensors

Tensors are similar to NumPy's ndarrays, with the difference that Tensors can be used on a GPU to accelerate computation.

from __future__ import print_function
import torch

Construct an uninitialized 5x3 matrix:

x = torch.empty(5, 3)
print(x)

Output:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([[ 0.0000e+00, -2.5244e-29, 0.0000e+00],
[-2.5244e-29, 1.9618e-44, 9.2196e-41],
[ 0.0000e+00, 7.7050e+31, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 8.6499e-38]])

Construct a randomly initialized matrix:

x = torch.rand(5, 3)
print(x)

Output:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([[0.4803, 0.5157, 0.9041],
[0.1619, 0.8994, 0.4302],
[0.6824, 0.6559, 0.9317],
[0.5558, 0.8311, 0.2492],
[0.8287, 0.1050, 0.7201]])

Construct a matrix filled with zeros, with dtype set to long:

x = torch.zeros(5, 3, dtype=torch.long)
print(x)

Output:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])

Construct a tensor directly from data:

x = torch.tensor([5.5, 3])
print(x)

Output:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([5.5000, 3.0000])

Or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, such as dtype, unless new values are provided by the user. The default dtype is torch.float.

#-*- coding: utf-8 -*-
from __future__ import print_function
import torch

x = torch.tensor([5.5, 3])
print(x)

x = x.new_ones(5, 3, dtype=torch.double)  # new_* methods take in sizes; this creates a new matrix whose values are unrelated to x
print(x)  # dtype is now torch.float64

x = torch.randn_like(x, dtype=torch.float)  # override dtype!
print(x)  # same size as x, but the dtype goes from torch.float64 back to torch.float

Returns:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([5.5000, 3.0000])
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
tensor([[-3.0480e-01, 1.5148e+00, -1.1507e+00],
[ 5.9181e-04, -8.0706e-01, 3.3035e-01],
[ 1.5499e+00, -6.1708e-01, 5.8211e-01],
[-9.1276e-02, -9.4747e-01, -1.8206e-01],
[-8.9208e-02, -1.5132e-01, 1.2374e+00]])

Get the size of the matrix:

print(x.size())

Returns:

torch.Size([5, 3])

⚠️ torch.Size is actually a tuple, so it supports all tuple operations.
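
Since torch.Size behaves like a tuple, it can be unpacked and indexed directly. A minimal sketch (not from the original tutorial; the variable names are just illustrative):

import torch

x = torch.zeros(5, 3)
rows, cols = x.size()   # tuple-style unpacking
print(rows, cols)       # 5 3
print(x.size()[0])      # indexing also works: 5
print(len(x.size()))    # number of dimensions: 2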

Operations

There are multiple syntaxes for operations. In the following example, we will look at the addition operation:

#-*- coding: utf-8 -*-
from __future__ import print_function
import torch

x = torch.tensor([5.5, 3])
print(x)

x = x.new_ones(5, 3)  # new_* methods take in sizes; this creates a new matrix whose values are unrelated to x
print(x)

y = torch.rand(5, 3)
print(y)
print(x + y)

Returns:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([5.5000, 3.0000])
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
tensor([[2.5123e-04, 9.8943e-01, 5.3585e-01],
[9.4955e-01, 1.3734e-01, 4.0120e-01],
[3.6199e-01, 1.5062e-01, 2.7033e-01],
[9.5025e-01, 6.3539e-01, 2.3759e-01],
[6.7833e-01, 4.3510e-01, 2.3747e-01]])
tensor([[1.0003, 1.9894, 1.5359],
[1.9496, 1.1373, 1.4012],
[1.3620, 1.1506, 1.2703],
[1.9502, 1.6354, 1.2376],
[1.6783, 1.4351, 1.2375]])

This is equivalent to:

print(torch.add(x,y))

You can also provide an output tensor as an argument and print the result through it:

result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

There is also an in-place addition:

y.add_(x)
print(y)

⚠️ Any operation that mutates a tensor in place is post-fixed with an _. For example, x.copy_(y) and x.t_() will change x.
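
As a quick illustration of the two in-place operations mentioned above (a sketch, not part of the original tutorial):

import torch

x = torch.zeros(2, 3)
y = torch.ones(2, 3)

x.copy_(y)        # copies the values of y into x in place
print(x)          # x is now all ones

x.t_()            # transposes x in place
print(x.size())   # torch.Size([3, 2])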

You can also use standard NumPy-like indexing:

print(x[:, 1])
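
A few more NumPy-style indexing patterns that work the same way on tensors (a sketch, assuming a 5x3 tensor like the one above):

import torch

x = torch.rand(5, 3)
print(x[0])         # first row
print(x[:, 1])      # second column
print(x[1:3, :2])   # rows 1-2, first two columns
print(x[x > 0.5])   # boolean-mask indexing returns a 1-D tensor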

Resizing: if you want to resize/reshape a tensor, you can use torch.view:

#-*- coding: utf-8 -*-
from __future__ import print_function
import torch

x = torch.randn(4, 4)
print(x)
y = x.view(16)
print(y)
z = x.view(-1, 8)  # -1 means the size is inferred from the other dimensions; with 8 columns, 2 rows are inferred
print(z)
print(x.size(), y.size(), z.size())

Returns:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([[ 1.4353, -0.7081, 1.1953, -0.1438],
[-0.9198, -0.8695, -0.3122, -0.0882],
[ 0.5113, -1.3449, -0.9429, 1.7962],
[ 0.5734, 1.0710, -0.9295, -2.0507]])
tensor([ 1.4353, -0.7081, 1.1953, -0.1438, -0.9198, -0.8695, -0.3122, -0.0882,
0.5113, -1.3449, -0.9429, 1.7962, 0.5734, 1.0710, -0.9295, -2.0507])
tensor([[ 1.4353, -0.7081, 1.1953, -0.1438, -0.9198, -0.8695, -0.3122, -0.0882],
[ 0.5113, -1.3449, -0.9429, 1.7962, 0.5734, 1.0710, -0.9295, -2.0507]])
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])

If you have a tensor with only one element, you can use .item() to get the value as a Python number:

#-*- coding: utf-8 -*-
from __future__ import print_function
import torch

x = torch.randn(1)
print(x)
print(x.item())

Returns:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([-2.3159])
-2.3158915042877197

⚠️ 100+ tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random sampling, and more, are described here.
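
A small taste of those operations (a sketch; the calls below are standard torch functions, not examples from the original post):

import torch

x = torch.rand(5, 3)
y = torch.rand(3, 5)

print(x.t().size())                      # transpose: torch.Size([3, 5])
print(torch.mm(x, y).size())             # matrix multiplication: torch.Size([5, 5])
print(x.sum(), x.mean())                 # reductions
print(torch.cat([x, x], dim=0).size())   # concatenation: torch.Size([10, 3])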

NumPy Bridge

Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and the NumPy array will share their underlying memory locations, and changing one will change the other.

Converting a Torch Tensor to a NumPy Array

#-*- coding: utf-8 -*-
from __future__ import print_function
import torch

a = torch.ones(5)
print(a)

b = a.numpy()
print(b)

a.add_(1)
print(a)
print(b)

Returns:

(deeplearning) userdeMBP:pytorch user$ python test.py
tensor([1., 1., 1., 1., 1.])
[1. 1. 1. 1. 1.]
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]

As the result above shows, changing only the tensor a also causes b to change.

Converting NumPy Array to Torch Tensor

See how changing the NumPy array changes the Torch Tensor automatically:

#-*- coding: utf-8 -*-
from __future__ import print_function
import torch
import numpy as np

a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)

Returns:

(deeplearning) userdeMBP:pytorch user$ python test.py
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)

All Tensors on the CPU except a CharTensor support converting to NumPy and back.
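
For example, a float tensor on the CPU converts both ways, and both conversions share memory (a minimal sketch, not from the original tutorial):

import torch
import numpy as np

t = torch.randn(3)         # a CPU FloatTensor
n = t.numpy()              # Tensor -> NumPy array, shares memory
t2 = torch.from_numpy(n)   # NumPy array -> Tensor, also shares memory

t.add_(1)
print(n)    # reflects the change
print(t2)   # reflects the change as well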

CUDA Tensors

Tensors can be moved onto any device using the .to method:

#-*- coding: utf-8 -*-
from __future__ import print_function
import torch

# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    x = torch.randn(4, 4)
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!

The machine used above had no CUDA, so switching to another machine and running it:

user@home:/opt/user$ python test.py
tensor([[0.6344, 1.7958, 2.3387, 2.0527],
[2.1517, 2.1555, 2.1645, 0.4499],
[2.2020, 1.7363, 3.1394, 0.1240],
[1.9541, 1.6115, 2.0081, 1.8911]], device='cuda:0')
tensor([[0.6344, 1.7958, 2.3387, 2.0527],
[2.1517, 2.1555, 2.1645, 0.4499],
[2.2020, 1.7363, 3.1394, 0.1240],
[1.9541, 1.6115, 2.0081, 1.8911]], dtype=torch.float64)