PyTorch cast raises runtime error: "result type Float can't be cast to the desired output type Long"

Asked by cunj1qz1 on 2022-11-09

The code below raises a RuntimeError: "result type Float can't be cast to the desired output type Long".
I have already tried changing:

From: torch.div(self.indices_buf, vocab_size, out=self.beams_buf)
To: torch.div(self.indices_buf, vocab_size, out=self.beams_buf).type_as(torch.LongTensor)

The offending code:

class BeamSearch(Search):

    def __init__(self, tgt_dict):
        super().__init__(tgt_dict)

    def step(self, step, lprobs, scores):
        super()._init_buffers(lprobs)
        bsz, beam_size, vocab_size = lprobs.size()

        if step == 0:
            # at the first step all hypotheses are equally likely, so use
            # only the first beam
            lprobs = lprobs[:, ::beam_size, :].contiguous()
        else:
            # make probs contain cumulative scores for each hypothesis
            lprobs.add_(scores[:, :, step - 1].unsqueeze(-1))

        torch.topk(
            lprobs.view(bsz, -1),
            k=min(
                # Take the best 2 x beam_size predictions. We'll choose the first
                # beam_size of these which don't predict eos to continue with.
                beam_size * 2,
                lprobs.view(bsz, -1).size(1) - 1,  # -1 so we never select pad
            ),
            out=(self.scores_buf, self.indices_buf),
        )
        torch.div(self.indices_buf, vocab_size, out=self.beams_buf).type_as(torch.LongTensor)
        self.indices_buf.fmod_(vocab_size)
        return self.scores_buf, self.indices_buf, self.beams_buf

This code comes from fairseq.


Answer 1, by k7fdbhmy:

Maybe you can try this instead: self.beams_buf = self.indices_buf // vocab_size
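The suggestion above can be sketched with a minimal, self-contained example (the buffer values here are made up for illustration; they are not from the fairseq code). The key point is that `torch.div` produces a Float result, which cannot be written into a Long `out=` tensor, whereas integer floor division keeps the Long dtype throughout:

```python
import torch

# Stand-ins for the beam-search buffers (illustrative values only)
indices_buf = torch.tensor([5, 7, 12], dtype=torch.long)  # flat top-k indices
vocab_size = 4

# Fails with the RuntimeError from the question, because torch.div
# returns a Float tensor that cannot be cast into a Long `out=` buffer:
#   torch.div(indices_buf, vocab_size, out=beams_buf)

# Works: integer floor division stays in the Long dtype.
beams_buf = indices_buf // vocab_size
# Equivalent in PyTorch >= 1.8:
#   beams_buf = torch.div(indices_buf, vocab_size, rounding_mode='floor')

# Reduce the flat indices to per-beam token indices, in place,
# matching the original `self.indices_buf.fmod_(vocab_size)` step.
indices_buf.fmod_(vocab_size)

print(beams_buf.tolist())    # which beam each hypothesis came from
print(indices_buf.tolist())  # token index within the vocabulary
```

Note that `.type_as(torch.LongTensor)` in the original attempt cannot help: the error is raised inside `torch.div` before the cast ever runs, and the cast would also not affect `self.beams_buf` itself.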
