I'm trying to run a training script, and after working through a few earlier error messages I ran into this one. Does anyone know what is going on here?
Batch size > 1 not implemented! Falling back to batch_size = 1 ...
Building multi-modal model...
Loading model parameters.
Traceback (most recent call last):
File "translate_mm.py", line 166, in <module>
main()
File "translate_mm.py", line 98, in main
use_filter_pred=False)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/IO.py", line 198, in build_dataset
use_filter_pred=use_filter_pred)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 75, in __init__
out_examples = list(out_examples)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 69, in <genexpr>
out_examples = (self._construct_example_fromlist(
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 68, in <genexpr>
example_values = ([ex[k] for k in keys] for ex in examples_iter)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 265, in _dynamic_dict
src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 265, in <listcomp>
src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1178, in __getattr__
type(self).__name__, name))
AttributeError: 'Vocab' object has no attribute 'stoi'
The error refers to this function:
def _dynamic_dict(self, examples_iter):
    for example in examples_iter:
        src = example["src"]
        src_vocab = torchtext.vocab.Vocab(Counter(src))
        self.src_vocabs.append(src_vocab)
        # Mapping source tokens to indices in the dynamic dict.
        src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
        example["src_map"] = src_map

        if "tgt" in example:
            tgt = example["tgt"]
            mask = torch.LongTensor(
                [0] + [src_vocab.stoi[w] for w in tgt] + [0])
            example["alignment"] = mask
        yield example
Note: the original model was written against a much older version of torchtext, and I suspect the error is related to that, but I'm too inexperienced with the library to be sure.
Does anyone have an idea? Googling hasn't turned up anything useful.
1 Answer
You have to use get_stoi()[w] instead. The stoi attribute was removed in newer torchtext releases, and get_stoi() is its replacement. You can also use get_itos(), which returns the index-to-token list (the old itos).
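A minimal sketch of that fix, assuming a recent torchtext (roughly 0.12 or newer, where Vocab no longer exposes .stoi/.itos); the sample tokens and the use of build_vocab_from_iterator are illustrative stand-ins, not the original OpenNMT code:

import torch
from torchtext.vocab import build_vocab_from_iterator

# Toy stand-in for example["src"] from the question.
src = ["a", "rose", "is", "a", "rose"]

# Build a new-style Vocab; the old torchtext.vocab.Vocab(Counter(src))
# constructor does not accept a Counter in recent releases.
src_vocab = build_vocab_from_iterator([src])

# get_stoi() replaces the removed .stoi attribute (token -> index dict).
stoi = src_vocab.get_stoi()
src_map = torch.LongTensor([stoi[w] for w in src])

# get_itos() replaces .itos and returns the index -> token list.
print(src_vocab.get_itos())

In _dynamic_dict the one-line change would then be src_map = torch.LongTensor([src_vocab.get_stoi()[w] for w in src]), and likewise for the target mask, provided the Vocab itself is built with the new API. One caveat: get_stoi() returns a plain dict, so an out-of-vocabulary token raises a KeyError; if that matters, indexing the Vocab directly (src_vocab[w]) after calling set_default_index() handles unknown tokens.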