How do I create a new sentencepiece model file with a restricted vocabulary?

3gtaxfhh · posted 2 months ago · in: Other
Answers (4)

Similar to #474, I want to restrict my vocabulary and then save a new model file that uses the restricted vocabulary. I tried to do this by exporting the vocabulary, modifying it, and then working out how to save a restricted model, but I found that even without any modification, running spm_export_vocab followed by spm_encode --vocabulary produces different results. For example:

echo "Șeful ONU declară că nu există soluții militare în Siria" | spm_encode --model enro_trimmed/sentence.bpe.model
= > ▁Ș e ful ▁ONU ▁de cla ră ▁că ▁nu ▁există ▁solu ții ▁militare ▁în ▁Siria
spm_export_vocab --model enro_trimmed/sentence.bpe.model  --output=sp_vocab.txt
echo "Șeful ONU declară că nu există soluții militare în Siria" | spm_encode --model enro_trimmed/sentence.bpe.model --vocabulary sp_vocab.txt
=> ▁ Ș e f u l ▁ O N U ▁ d e c l a r ă ▁ c ă ▁ n u ▁ e x i s t ă ▁ s o l u ț i i ▁ m i l i t a r e ▁ î n ▁ S i r i a

Is this the expected behavior?
My end goal is for spm.encode_as_ids in Python to only produce ids smaller than the length of the restricted vocabulary, so if there is a more direct way to achieve that, I would love to know! Thanks!
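A possible explanation for the character-level fallback, based on my understanding of the sentencepiece README: spm_encode --vocabulary expects frequency counts (the format produced by spm_encode --generate_vocabulary over a corpus), whereas spm_export_vocab writes log-probability scores, which are all negative or zero. With the default --vocabulary_threshold of 0, nearly every piece then falls below the threshold and encoding degrades to single characters. A pure-Python sketch of that filtering logic (restricted_vocab is a hypothetical helper written for illustration, not part of sentencepiece; the >= comparison is an assumption):

```python
def restricted_vocab(vocab_lines, threshold=0):
    """Mimic spm_encode --vocabulary filtering: keep only pieces whose
    second column (assumed to be a frequency) meets the threshold."""
    allowed = set()
    for line in vocab_lines:
        if not line.strip():
            continue
        piece, value = line.split("\t")
        if float(value) >= threshold:
            allowed.add(piece)
    return allowed

# With real frequency counts, common pieces survive the default threshold:
freqs = ["▁Siria\t12", "▁militare\t7", "rare\t0"]
print(sorted(restricted_vocab(freqs)))   # → ['rare', '▁Siria', '▁militare']

# With spm_export_vocab output the values are log probs (<= 0),
# so almost everything is filtered out at the default threshold:
scores = ["▁Siria\t-3.46", "▁militare\t-3.63", "<unk>\t0"]
print(sorted(restricted_vocab(scores)))  # → ['<unk>']
```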

xn1cxnb4 · 1#

Trying to work around this, I built an ordered list of the pieces I want to keep, which looks like this:

[score: 0.0
 type: UNKNOWN,
 piece: "<s>"
 score: 0.0
 type: CONTROL,
 piece: "</s>"
 score: 0.0
 type: CONTROL,
 piece: ","
 score: -3.4635426998138428,
 piece: "."
 score: -3.625642776489258,
...
]

Then I tried to create a new model from those pieces:

sp_new = sentencepiece_model_pb2.ModelProto()
sp_new.pieces = new_pieces

which gives:

AttributeError: Assignment not allowed to repeated field "pieces" in protocol message object.

Is this second approach on the right track?
Surely someone else has tried to stop a sentencepiece model from using certain pieces before.
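The AttributeError above is standard protobuf behavior: a repeated field can never be assigned directly, but it can be cleared and extended in place. A minimal sketch of that pattern (replace_pieces is a hypothetical helper; it works on any object whose pieces attribute supports del and extend, which includes a ModelProto):

```python
def replace_pieces(model, new_pieces):
    """Replace the contents of the repeated `pieces` field in place.

    Direct assignment (model.pieces = new_pieces) raises AttributeError
    on a protobuf message; deleting the contents and extending works.
    """
    del model.pieces[:]              # clear the repeated field in place
    model.pieces.extend(new_pieces)  # extend copies the new entries in
    return model

# Stand-in demonstration with a plain list attribute:
from types import SimpleNamespace
m = SimpleNamespace(pieces=["old_a", "old_b"])
replace_pieces(m, ["<s>", "</s>", ","])
print(m.pieces)  # → ['<s>', '</s>', ',']
```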

f0brbegy · 2#

I have the same need as you: saving a new model that uses a restricted vocabulary. But it seems sentencepiece does not provide such an API in Python. SetVocabulary cannot change the model itself. Looking forward to a new API that can save a new model with its actual vocabulary changed.
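Until such an API exists, one workaround (a sketch under assumptions, not an official API) is to edit the ModelProto directly: parse the model file, keep only the wanted entries of the repeated pieces field, and serialize it back. The helper keep_pieces below is hypothetical; it works on any sequence of objects with a .piece attribute, so it can be exercised without a trained model:

```python
import copy

def keep_pieces(pieces, allowed):
    """Return deep copies of those SentencePiece entries whose surface
    form is in `allowed` (copies, so they survive clearing the field)."""
    return [copy.deepcopy(p) for p in pieces if p.piece in allowed]

# Hypothetical application to a real model, assuming sentencepiece is installed:
# from sentencepiece import sentencepiece_model_pb2
# m = sentencepiece_model_pb2.ModelProto()
# m.ParseFromString(open("sentence.bpe.model", "rb").read())
# kept = keep_pieces(m.pieces, {"<unk>", "<s>", "</s>", "▁Siria"})
# del m.pieces[:]          # repeated fields cannot be assigned...
# m.pieces.extend(kept)    # ...but extend works
# open("trimmed.model", "wb").write(m.SerializeToString())

# Stand-in demonstration:
from types import SimpleNamespace
ps = [SimpleNamespace(piece="a"), SimpleNamespace(piece="b")]
print([p.piece for p in keep_pieces(ps, {"a"})])  # → ['a']
```

Note that the ids of the surviving pieces shift, so anything that depends on the old id mapping needs to be remapped as well.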

vcudknz3 · 3#

What does SetVocabulary do? Can you give an example of how to use it?
Example of setting the vocabulary: #250
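For context, a hedged sketch of how SetVocabulary is typically used: the restriction applies at encoding time only and does not modify the saved model file. The model path below is hypothetical, and load_allowed_pieces is a helper defined just for this example:

```python
def load_allowed_pieces(vocab_text):
    """Parse piece-per-line vocab text (piece<TAB>value) into a list of pieces."""
    return [line.split("\t")[0] for line in vocab_text.splitlines() if line.strip()]

# Hypothetical usage, assuming sentencepiece is installed and a trained
# model exists at this path:
# import sentencepiece as spm
# sp = spm.SentencePieceProcessor(model_file="enro_trimmed/sentence.bpe.model")
# sp.SetVocabulary(load_allowed_pieces(open("sp_vocab.txt").read()))
# ...encode while the restriction is active...
# sp.ResetVocabulary()  # lifts the restriction again

print(load_allowed_pieces("▁Siria\t-3.46\n▁militare\t-3.63"))  # → ['▁Siria', '▁militare']
```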

wqnecbli · 4#

Not sure if this is still needed.
I managed to create a new piece via copy.deepcopy(m.pieces[0]) and use it to build a new spm model.
I used it like this:

import copy
from sentencepiece import sentencepiece_model_pb2

def new_piece_by_deepcopy(original_piece, token: str, score: float, piece_type: int):
    '''
    Args:
        original_piece: (SentencePiece) the target of deepcopy
        token: (str) the piece's surface string
        score: (float) priority of encoding to this token (see spm.vocab)
        piece_type: (int) 1: normal, 2: <unk>, 3: control, 4: user defined, 5: unused

    Returns:
        a SentencePiece with the given piece, score and piece_type
    '''
    new_p = copy.deepcopy(original_piece)  # not a clean way, but it does work
    new_p.piece = token
    new_p.score = score
    new_p.type = piece_type
    return new_p

serializedStr = open(spm_path, "rb").read()
m = sentencepiece_model_pb2.ModelProto()
m.ParseFromString(serializedStr)

m.pieces.insert(0, new_piece_by_deepcopy(m.pieces[0], "<s>", 0.0, 3))
m.pieces.insert(2, new_piece_by_deepcopy(m.pieces[0], "</s>", 0.0, 3))
# this bos, eos are meant to match a fairseq dict

with open(new_spmPath + ".model", "wb") as f:
    f.write(m.SerializeToString())

In this case, the final spm model gains <s> and </s>, and the vocabulary size grows by 2.
It also tokenizes sentences correctly.
