Importing a model into Ollama

h6my8fg2 · posted 2 months ago · in: Other

Can a model imported from HuggingFace into Ollama be one that is not listed in the convert-hf-to-gguf.py script?
In my case the model is: https://huggingface.co/ai-forever/ruGPT-3.5-13B
When I try to import it, I get the following error:

Traceback (most recent call last):
  File "llm/llama.cpp/convert-hf-to-gguf.py", line 2865, in <module>
    main()
  File "llm/llama.cpp/convert-hf-to-gguf.py", line 2850, in main
    model_instance.set_vocab()
  File "llm/llama.cpp/convert-hf-to-gguf.py", line 114, in set_vocab
    self._set_vocab_gpt2()
  File "llm/llama.cpp/convert-hf-to-gguf.py", line 500, in _set_vocab_gpt2
    tokens, toktypes, tokpre = self.get_vocab_base()
  File "llm/llama.cpp/convert-hf-to-gguf.py", line 379, in get_vocab_base
    tokpre = self.get_vocab_base_pre(tokenizer)
  File "llm/llama.cpp/convert-hf-to-gguf.py", line 491, in get_vocab_base_pre
    raise NotImplementedError("BPE pre-tokenizer was not recognized - update get_vocab_base_pre()")
NotImplementedError: BPE pre-tokenizer was not recognized - update get_vocab_base_pre()
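For context on what this error means, here is a simplified sketch (not the actual llama.cpp code) of how the converter's `get_vocab_base_pre()` roughly works: it fingerprints the tokenization of a probe string and looks that fingerprint up in a table of known BPE pre-tokenizers. The table contents and helper names below are illustrative placeholders.

```python
import hashlib

# Illustrative table of known tokenizer fingerprints; the real script
# maps SHA-256 hashes to pre-tokenizer names. Placeholder entry only.
KNOWN_PRETOKENIZERS = {
    "example-hash-for-gpt2": "gpt-2",
}

def fingerprint(token_ids):
    """Hash a tokenization result into a stable fingerprint."""
    return hashlib.sha256(str(token_ids).encode()).hexdigest()

def get_vocab_base_pre(token_ids):
    """Return the pre-tokenizer name for a known fingerprint, else fail."""
    res = KNOWN_PRETOKENIZERS.get(fingerprint(token_ids))
    if res is None:
        # This is the failure the question ran into: an unrecognized
        # fingerprint means the script does not know which pre-tokenizer
        # rules to embed in the GGUF file.
        raise NotImplementedError(
            "BPE pre-tokenizer was not recognized - update get_vocab_base_pre()"
        )
    return res
```

Under this scheme, ruGPT-3.5-13B fails simply because its tokenizer's fingerprint has no entry yet, not because conversion is fundamentally impossible.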

wooyq4lh (answer #1)

The short answer is: no. Ollama relies on the llama.cpp backend to provide its API. Open an enhancement request on the llama.cpp project, or (better yet!) see whether you can add the pre-tokenizer yourself and submit a pull request!
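For what it's worth, once llama.cpp's converter does recognize the tokenizer and you have a GGUF file, the Ollama side of the import is just a Modelfile plus `ollama create`. A minimal sketch, where the `.gguf` filename is a placeholder for whatever the conversion actually produces:

```
# Modelfile (sketch; the GGUF path below is a placeholder)
FROM ./ruGPT-3.5-13B-f16.gguf
```

Then `ollama create rugpt -f Modelfile` registers the model locally, and `ollama run rugpt` starts it.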
