PyTorch DataLoader error: RuntimeError: stack expects each tensor to be equal size, but got [1024] at entry 0 and [212] at entry 13

ejk8hzay · asked on 2023-08-05

I have a dataset with a column named input_ids, which I am loading with a DataLoader:

train_batch_size = 2
eval_dataloader = DataLoader(val_dataset, batch_size=train_batch_size)

The length of eval_dataloader is:

print(len(eval_dataloader))
>>> 1623


I get the error while running:

for step, batch in enumerate(eval_dataloader):
    print(step)
>>> 1, 2, ..., 1621


Each batch has length 1024. If I change train_batch_size to 1, the error goes away.
I tried dropping the last batch:

eval_dataloader = DataLoader(val_dataset, batch_size=train_batch_size, drop_last=True)


but the error still appears whenever the batch size is greater than 1 (drop_last only drops the final incomplete batch, so it cannot remove an item of the wrong length).
Full stack trace:

RuntimeError                              Traceback (most recent call last)
Cell In[34], line 2
      1 eval_dataloader = DataLoader(val_dataset,shuffle=True,batch_size=2,drop_last=True) 
----> 2 for step, batch in enumerate(eval_dataloader):
      3     print(step, batch['input_ids'].shape)

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/dataloader.py:628, in _BaseDataLoaderIter.__next__(self)
    625 if self._sampler_iter is None:
    626     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    627     self._reset()  # type: ignore[call-arg]
--> 628 data = self._next_data()
    629 self._num_yielded += 1
    630 if self._dataset_kind == _DatasetKind.Iterable and \
    631         self._IterableDataset_len_called is not None and \
    632         self._num_yielded > self._IterableDataset_len_called:

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/dataloader.py:671, in _SingleProcessDataLoaderIter._next_data(self)
    669 def _next_data(self):
    670     index = self._next_index()  # may raise StopIteration
--> 671     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    672     if self._pin_memory:
    673         data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:61, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
     59 else:
     60     data = self.dataset[possibly_batched_index]
---> 61 return self.collate_fn(data)

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:265, in default_collate(batch)
    204 def default_collate(batch):
    205     r"""
    206         Function that takes in a batch of data and puts the elements within the batch
    207         into a tensor with an additional outer dimension - batch size. The exact output type can be
   (...)
    263             >>> default_collate(batch)  # Handle `CustomType` automatically
    264     """
--> 265     return collate(batch, collate_fn_map=default_collate_fn_map)

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:128, in collate(batch, collate_fn_map)
    126 if isinstance(elem, collections.abc.Mapping):
    127     try:
--> 128         return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
    129     except TypeError:
    130         # The mapping type may not support `__init__(iterable)`.
    131         return {key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:128, in <dictcomp>(.0)
    126 if isinstance(elem, collections.abc.Mapping):
    127     try:
--> 128         return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
    129     except TypeError:
    130         # The mapping type may not support `__init__(iterable)`.
    131         return {key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:120, in collate(batch, collate_fn_map)
    118 if collate_fn_map is not None:
    119     if elem_type in collate_fn_map:
--> 120         return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
    122     for collate_type in collate_fn_map:
    123         if isinstance(elem, collate_type):

File ~/anaconda3/envs/cilm/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:163, in collate_tensor_fn(batch, collate_fn_map)
    161     storage = elem.storage()._new_shared(numel, device=elem.device)
    162     out = elem.new(storage).resize_(len(batch), *list(elem.size()))
--> 163 return torch.stack(batch, 0, out=out)

RuntimeError: stack expects each tensor to be equal size, but got [212] at entry 0 and [1024] at entry 1


I found other somewhat similar SO questions / general issues, but they seem to concern the stack function in other settings (link, link, link, link).
A similar problem occurs with train_dataloader: RuntimeError: stack expects each tensor to be equal size, but got [930] at entry 0 and [1024] at entry 1
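For context, the failure happens inside default_collate, which ends in torch.stack, and torch.stack requires every tensor in the batch to have the same shape. A minimal, self-contained sketch of that failure mode (the sizes mirror the error message and are not taken from the actual data):

import torch
from torch.utils.data import default_collate

# Two fake dataset items shaped like the ones in the error message.
ok_item = {"input_ids": torch.zeros(1024, dtype=torch.long)}
short_item = {"input_ids": torch.zeros(212, dtype=torch.long)}

print(default_collate([ok_item, ok_item])["input_ids"].shape)  # torch.Size([2, 1024])
default_collate([ok_item, short_item])  # RuntimeError: stack expects each tensor to be equal size

With batch_size=1 there is only one tensor per batch, so there is nothing to mismatch, which is why the error disappears in that case.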

Update: solved thanks to @chro and this reddit post: "To isolate the problem, loop over the items in the dataloader with batch size 1 and no shuffling, and print the shape of the arrays you get. Then investigate the ones with a different size."

It turned out that one sequence did not have length 1024, although for some reason this was not visible unless the batch size was 1. I am not entirely sure how you can end up with a tensor of tensors of different lengths, but alas. To fix it, I first filtered my dataset and removed the one sequence whose length was not 1024. Calling DataLoader on the filtered dataset then worked.
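For reference, a minimal sketch of that filtering step, assuming val_dataset is a map-style dataset whose items are dicts with a 1-D input_ids tensor and that 1024 is the intended length (block_size and keep are illustrative names, not from the original post):

from torch.utils.data import DataLoader, Subset

block_size = 1024  # the expected sequence length

# Keep only the items whose input_ids really has block_size tokens.
keep = [i for i in range(len(val_dataset))
        if len(val_dataset[i]["input_ids"]) == block_size]

eval_dataloader = DataLoader(Subset(val_dataset, keep), batch_size=train_batch_size)

If val_dataset is a Hugging Face datasets.Dataset, an equivalent filter would presumably be val_dataset.filter(lambda ex: len(ex["input_ids"]) == block_size).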


jgwigjjp · answer #1

Can you debug it with the following (replace batch.shape with whatever is relevant for your data)?

eval_dataloader = DataLoader(val_dataset,
                             batch_size=1) 
for step, batch in enumerate(eval_dataloader):
    if batch.shape[1] != 1024:
        print(step, batch.shape)
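Since the dataset in the question yields dictionaries (the trace prints batch['input_ids'].shape), the same check would presumably look like this for that data; this is an adaptation of the snippet above, not code from the original answer:

eval_dataloader = DataLoader(val_dataset, batch_size=1, shuffle=False)
for step, batch in enumerate(eval_dataloader):
    # With batch_size=1, input_ids is collated to shape [1, seq_len].
    if batch["input_ids"].shape[1] != 1024:
        print(step, batch["input_ids"].shape)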

My idea is to check the following:
1. Does it fail on the same item of the dataset every time?
2. What is the shape of the failing item?
I usually see this error when the DataLoader stacks several elements and some of them have a different size.
Please also add the full stack trace relevant to the problem to the question.
Update: to solve the problem, first filter the dataset and remove the one sequence whose length differs from the others, then call DataLoader on the filtered dataset.
