sentencepiece: How do I create a new model file with a restricted vocabulary?

3gtaxfhh · asked 9 months ago

Similar to #474, I want to restrict my vocabulary and then save a new model file that uses the restricted vocabulary. I tried to get there by exporting the vocabulary, modifying it, and then working out how to save the restricted model, but I found that even without any modification, running spm_export_vocab followed by spm_encode --vocabulary produces different results. For example:

  echo "Șeful ONU declară că nu există soluții militare în Siria" | spm_encode --model enro_trimmed/sentence.bpe.model
  => ▁Ș e ful ONU de cla ră că nu există solu ții militare ▁în Siria

  spm_export_vocab --model enro_trimmed/sentence.bpe.model --output=sp_vocab.txt
  echo "Șeful ONU declară că nu există soluții militare în Siria" | spm_encode --model enro_trimmed/sentence.bpe.model --vocabulary sp_vocab.txt
  => Ș e f u l O N U d e c l a r ă c ă n u e x i s t ă s o l u ț i i m i l i t a r e î n S i r i a
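Possibly relevant: the vocabulary-restriction workflow documented in the sentencepiece README builds the --vocabulary file with spm_encode --generate_vocabulary, which writes piece/frequency pairs, not with spm_export_vocab, which writes piece/score pairs (negative log probabilities), so the mismatch above may come from the file format. A sketch of that documented flow, where train.ro, input.ro, and the threshold value are placeholders:

  spm_encode --model=enro_trimmed/sentence.bpe.model --generate_vocabulary < train.ro > sp_vocab.txt
  spm_encode --model=enro_trimmed/sentence.bpe.model --vocabulary=sp_vocab.txt --vocabulary_threshold=50 < input.ro > output.ro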

Is this the expected behavior?
My end goal is for spm.encode_as_ids in Python to only produce ids smaller than the length of the restricted vocabulary, so if there is a more direct way to achieve that, I would love to hear it. Thanks!

xn1cxnb4 #1

Trying to work around this, I built an ordered list of the pieces I want to keep, which looks like this:

  [piece: "<unk>"
  score: 0.0
  type: UNKNOWN,
  piece: "<s>"
  score: 0.0
  type: CONTROL,
  piece: "</s>"
  score: 0.0
  type: CONTROL,
  piece: ","
  score: -3.4635426998138428,
  piece: "."
  score: -3.625642776489258,
  ...
  ]
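A minimal sketch of how such a list can be built by filtering the parsed model proto (the keep set here is a hypothetical restricted vocabulary):

  from sentencepiece import sentencepiece_model_pb2

  m = sentencepiece_model_pb2.ModelProto()
  m.ParseFromString(open("enro_trimmed/sentence.bpe.model", "rb").read())

  keep = {"<unk>", "<s>", "</s>", ",", "."}  # hypothetical restricted vocabulary
  new_pieces = [p for p in m.pieces if p.piece in keep]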

Then I tried to create a new model from these pieces:

  sp_new = sentencepiece_model_pb2.ModelProto()
  sp_new.pieces = new_pieces

which raises:

  AttributeError: Assignment not allowed to repeated field "pieces" in protocol message object.

Is this second approach on the right track?
Surely others have tried to stop a sentencepiece model from using certain pieces before.
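For what it is worth, protobuf repeated fields cannot be assigned to; they have to be mutated in place. A sketch, reusing m and new_pieces from above:

  sp_new = sentencepiece_model_pb2.ModelProto()
  sp_new.CopyFrom(m)                # keep the normalizer/trainer settings of the original
  del sp_new.pieces[:]              # clear the repeated field in place
  sp_new.pieces.extend(new_pieces)

  with open("restricted.model", "wb") as f:
      f.write(sp_new.SerializeToString())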

f0brbegy #2

My need is the same as yours: saving a new model with a restricted vocabulary. But sentencepiece does not seem to provide such an API in Python, and SetVocabulary cannot change the model itself. Hoping for a new API that can change the model's actual vocabulary and save it as a new model.

vcudknz3 #3

What does SetVocabulary do? Can you give an example of how to use it?
An example of setting the vocabulary: #250
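A minimal sketch of what that runtime restriction looks like in Python; it only changes how sentences are segmented, not the model file on disk, and the piece subset below is hypothetical:

  import sentencepiece as spm

  sp = spm.SentencePieceProcessor(model_file="enro_trimmed/sentence.bpe.model")

  # Only the listed pieces stay usable; everything else falls back to
  # finer-grained pieces that are still allowed.
  sp.SetVocabulary(["▁în", "militare", "Siria"])  # hypothetical subset
  print(sp.encode("există soluții militare în Siria", out_type=str))

  sp.ResetVocabulary()  # restore the full vocabulary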

wqnecbli #4

Not sure if this is still needed.
I managed to create a new piece via copy.deepcopy(m.pieces[0]), and with it I could build a new spm.
I used it like this:

  import copy
  from sentencepiece import sentencepiece_model_pb2

  def new_piece_by_deepcopy(original_piece, token: str, score: float, piece_type: int):
      '''
      Args:
          original_piece: (SentencePiece) the target of the deepcopy
          token: (str) the piece text
          score: (float) priority of encoding to this token (see spm.vocab)
          piece_type: (int) 1: normal, 2: <unk>, 3: control, 4: user defined, 5: unused
      Return:
          a SentencePiece with the given piece, score and type
      '''
      new_p = copy.deepcopy(original_piece)  # not a good way, but it does work
      new_p.piece = token
      new_p.score = score
      new_p.type = piece_type
      return new_p

  serializedStr = open(spm_path, "rb").read()
  m = sentencepiece_model_pb2.ModelProto()
  m.ParseFromString(serializedStr)
  m.pieces.insert(0, new_piece_by_deepcopy(m.pieces[0], "<s>", 0.0, 3))
  m.pieces.insert(2, new_piece_by_deepcopy(m.pieces[0], "</s>", 0.0, 3))
  # these bos/eos positions are meant to match a fairseq dict
  with open(new_spmPath + ".model", "wb") as f:
      f.write(m.SerializeToString())

In this case, the final spm gets <s> and </s>, and its vocabulary size grows by 2.
It also still tokenizes sentences correctly.
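A quick, hypothetical check that the rewritten file loads and carries the extra pieces where intended:

  import sentencepiece as spm

  sp = spm.SentencePieceProcessor(model_file=new_spmPath + ".model")
  print(sp.vocab_size())                        # original size + 2
  print(sp.id_to_piece(0), sp.id_to_piece(2))   # "<s>", "</s>"
  print(sp.encode("există soluții militare", out_type=str))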
