llama.cpp Error converting SmolLM-1.7B-Instruct

brqmpdu1 asked 2 months ago in Other

I am trying to convert SmolLM-1.7B-Instruct to GGUF with:

python /content/llama.cpp/convert_hf_to_gguf.py --outtype f16 SmolLM-1.7B-Instruct --outfile SmolLM-1.7B-Instruct.f16.gguf

INFO:hf-to-gguf:Set model tokenizer
WARNING:hf-to-gguf:
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
WARNING:hf-to-gguf:**          There are 2 possible reasons for this:
WARNING:hf-to-gguf:**          - the model has not been added to convert_hf_to_gguf_update.py yet
WARNING:hf-to-gguf:**          - the pre-tokenization config has changed upstream
WARNING:hf-to-gguf:**          Check your model files and convert_hf_to_gguf_update.py and update them accordingly.
WARNING:hf-to-gguf:** ref:     https://github.com/ggerganov/llama.cpp/pull/6920
WARNING:hf-to-gguf:**
WARNING:hf-to-gguf:** chkhsh:  855059429035d75a914d1eda9f10a876752e281a054a7a3d421ef0533e5b6249
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:
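
The warning means the converter computed a pre-tokenizer hash (chkhsh) it does not recognize, so it cannot select the correct BPE pre-tokenization rules for this model. Following the referenced PR, the usual fix is either to update llama.cpp to a checkout that already knows SmolLM, or to register the printed hash yourself. A minimal sketch of such an entry, added inside get_vocab_base_pre() in convert_hf_to_gguf.py alongside the existing chkhsh checks (the pre-tokenizer name "smollm" is assumed here from recent upstream versions; verify it against your checkout):

# Sketch: map the chkhsh printed in the warning above to a pre-tokenizer
# name; "smollm" is assumed from recent upstream llama.cpp, not verified here.
if chkhsh == "855059429035d75a914d1eda9f10a876752e281a054a7a3d421ef0533e5b6249":
    # ref: https://huggingface.co/HuggingFaceTB/SmolLM-1.7B-Instruct
    res = "smollm"

With that entry in place, rerunning the same convert command should get past the tokenizer step. Alternatively, convert_hf_to_gguf_update.py regenerates these hash entries for the models it knows about, which is what the warning's last hint refers to.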

8ftvxx2r answered:

For anyone interested, chatllm.cpp supports this model.
