text-generation-inference: Galactica model fails when a grammar is used because the tokenizer's eos_token_id is not set

rqenqsqc · asked 3 months ago in Other

Hi!
The Galactica tokenizer's eos_token_id is not set, even though it is set in the model's config. In CausalLM we handle the case where the tokenizer's pad_token_id is None, but not the case where its eos_token_id is None:
text-generation-inference/server/text_generation_server/models/causal_lm.py
Lines 547 to 552
if config.pad_token_id is not None:
    tokenizer.pad_token_id = config.pad_token_id
elif config.eos_token_id is not None:
    tokenizer.pad_token_id = config.eos_token_id
elif tokenizer.eos_token_id is not None:
    tokenizer.pad_token_id = tokenizer.eos_token_id
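
A minimal sketch of a possible fix, mirroring the existing pad_token_id fallback above (hypothetical, not an actual patch from the repository):

# Hypothetical fix sketch: fall back to the model config's eos_token_id
# when the tokenizer does not define one (the Galactica case).
if tokenizer.eos_token_id is None and config.eos_token_id is not None:
    tokenizer.eos_token_id = config.eos_token_id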
On the other hand, Outlines' RegexFSM emits EOS as the final instruction, which in our case is None:

next_tokens_to_end_states = self.states_to_token_maps.get(state)
if next_tokens_to_end_states is None:
    return Write([self.eos_token_id])
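
Since self.eos_token_id is None for the Galactica tokenizer here, that final Write instruction carries [None] as its token list. A simplified stand-in (not Outlines' real classes) to illustrate how the None value flows through:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Write:  # simplified stand-in for Outlines' Write instruction
    tokens: List[Optional[int]]

eos_token_id = None                  # the Galactica tokenizer case
instruction = Write([eos_token_id])  # final instruction from the FSM
print(instruction.tokens)            # [None] -- an unusable token list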

This causes GrammarLogitProcessor.__call__ to fail when biasing the logits.
text-generation-inference/server/text_generation_server/utils/logits_process.py
Lines 501 to 503
allowed_tokens = self.fsm.allowed_token_ids(fsm_grammar_state)
mask = torch.full_like(logits, -math.inf)
mask[:, allowed_tokens] = 0

File "/opt/conda/lib/python3.10/site-packages/text_generation_server/utils/logits_process.py", line 506, in __call__
    mask[:, allowed_tokens] = 0
RuntimeError: Could not infer dtype of NoneType
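
The failure is easy to reproduce standalone; a minimal sketch with an assumed toy shape:

import math
import torch

logits = torch.randn(1, 8)   # toy logits (assumed shape)
mask = torch.full_like(logits, -math.inf)
allowed_tokens = [None]      # what the FSM yields when eos_token_id is None
mask[:, allowed_tokens] = 0  # RuntimeError: Could not infer dtype of NoneType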

7cwmlq89 · Answer 1

Thanks for reporting this bug @sadra-barikbin 👍
Pinging @drbh here (if you have the bandwidth).
