def to(self, *args, **kwargs):  # real signature unknown; restored from __doc__
    """
    to(*args, **kwargs) -> Tensor

    Performs Tensor dtype and/or device conversion. A :class:`torch.dtype` and
    :class:`torch.device` are inferred from the arguments of
    ``self.to(*args, **kwargs)``.

    .. note::
        If the ``self`` Tensor already has the correct :class:`torch.dtype` and
        :class:`torch.device`, then ``self`` is returned. Otherwise, the returned
        tensor is a copy of ``self`` with the desired :class:`torch.dtype` and
        :class:`torch.device`.

    Here are the ways to call ``to``:

    .. method:: to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor
        :noindex:

        Returns a Tensor with the specified :attr:`dtype`

        Args:
            memory_format (:class:`torch.memory_format`, optional): the desired memory format of
                returned Tensor. Default: ``torch.preserve_format``.

    .. method:: to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor
        :noindex:

        Returns a Tensor with the specified :attr:`device` and (optional)
        :attr:`dtype`. If :attr:`dtype` is ``None`` it is inferred to be ``self.dtype``.
        When :attr:`non_blocking`, tries to convert asynchronously with respect to
        the host if possible, e.g., converting a CPU Tensor with pinned memory to a
        CUDA Tensor.
        When :attr:`copy` is set, a new Tensor is created even when the Tensor
        already matches the desired conversion.

        Args:
            memory_format (:class:`torch.memory_format`, optional): the desired memory format of
                returned Tensor. Default: ``torch.preserve_format``.

    .. method:: to(other, non_blocking=False, copy=False) -> Tensor
        :noindex:

        Returns a Tensor with same :class:`torch.dtype` and :class:`torch.device` as
        the Tensor :attr:`other`. When :attr:`non_blocking`, tries to convert
        asynchronously with respect to the host if possible, e.g., converting a CPU
        Tensor with pinned memory to a CUDA Tensor.
        When :attr:`copy` is set, a new Tensor is created even when the Tensor
        already matches the desired conversion.

    Example::

        >>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu
        >>> tensor.to(torch.float64)
        tensor([[-0.5044,  0.0005],
                [ 0.3310, -0.0584]], dtype=torch.float64)

        >>> cuda0 = torch.device('cuda:0')
        >>> tensor.to(cuda0)
        tensor([[-0.5044,  0.0005],
                [ 0.3310, -0.0584]], device='cuda:0')

        >>> tensor.to(cuda0, dtype=torch.float64)
        tensor([[-0.5044,  0.0005],
                [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')

        >>> other = torch.randn((), dtype=torch.float64, device=cuda0)
        >>> tensor.to(other, non_blocking=True)
        tensor([[-0.5044,  0.0005],
                [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
    """
    return _te.Tensor(*(), **{})
3 Answers
pes8fvy91#
Why do we use the to(device) method in PyTorch at all?
torch.Tensor.to is a multi-purpose method. You can use it not only for type conversion, but also to move tensors from the CPU to the GPU and from the GPU back to the CPU, as in the sketch below:
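A minimal sketch of both uses (the values are illustrative, and the GPU lines assume a CUDA device is available):

import torch

x = torch.randn(2, 2)          # float32 tensor on the CPU
x64 = x.to(torch.float64)      # dtype cast, stays on the CPU

if torch.cuda.is_available():
    x_gpu = x.to("cuda")       # copy the data CPU -> GPU
    x_cpu = x_gpu.to("cpu")    # and back GPU -> CPU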
Since the CPU and the GPU use different kinds of memory, they need a way to communicate. That is why we have to("cuda") and to("cpu"), which we call on tensors. Usually you do this when loading the training dataset (images): you create the tensors on the CPU and then move them to the GPU, as in the sketch below.
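A sketch of that pattern (the batch shape is a placeholder, and a CUDA device is assumed):

import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    images = torch.randn(8, 3, 224, 224)   # a batch of images, created on the CPU
    images = images.to(device)              # then moved to the GPU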
But there is a trick: sometimes you can even load the data onto the GPU directly, without going through the CPU at all, as in the next sketch.
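A sketch of that trick, passing device= at construction time so the tensor is allocated on the GPU from the start (again assuming a CUDA device):

import torch

if torch.cuda.is_available():
    images = torch.randn(8, 3, 224, 224, device="cuda")  # allocated directly on the GPU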
zkure5ic2#
.to(device)
.to() can be used to copy a tensor to any available device (CPU or GPU), as in the sketch below:
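For instance (a sketch; the CUDA branch assumes a GPU is present):

import torch

t = torch.ones(3)
t_cpu = t.to("cpu")                 # already on the CPU, so this returns t itself

if torch.cuda.is_available():
    cuda0 = torch.device("cuda:0")
    t_gpu = t.to(cuda0)             # copy to the first GPU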
It has the aliases .cuda() and .cpu().
.to(dtype)
When given a dtype as an argument, .to() acts as a casting method, as in the sketch below:
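A small example of such a cast:

import torch

t = torch.tensor([1, 2, 3])     # int64 by default
t_f = t.to(torch.float32)       # cast to float32
t_d = t.to(torch.float64)       # cast to float64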
It has aliases named after the dtypes: .double(), .float(), .int(), and so on.
rmbxnbpk3#
@prosti's and @iacob's answers are good. Here I just want to show the source of PyTorch's to() function. Sometimes there is another way to use to, namely tensor.view(a,b,c).to(another_tensor); in that case to() keeps the output's type the same as another_tensor's. That usage is the last one in the sketch below.
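A sketch of the usual calls, ending with the .to(another_tensor) form described above (the CUDA line assumes a GPU; names such as another_tensor are only illustrative):

import torch

x = torch.arange(24)                     # int64 tensor on the CPU

x_float = x.to(torch.float32)            # to(dtype)
if torch.cuda.is_available():
    x_cuda = x.to("cuda")                # to(device)

another_tensor = torch.randn((), dtype=torch.float64)
# to(other): the reshaped output takes another_tensor's dtype (and device)
y = x.view(2, 3, 4).to(another_tensor)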
The full source of the torch.Tensor.to function, as restored from its docstring, is the stub quoted at the top of this post.