pytorch: Running sentence-transformers on PythonAnywhere

gojuced7 · posted 2023-10-20 · Python

I'm trying to run a HuggingFace model to compute vector embeddings, as explained here, on PythonAnywhere (it works fine locally on my laptop, under Ubuntu on WSL 2).
The installation went fine:

pip install -U sentence-transformers

However, when I run the following code:

from sentence_transformers import SentenceTransformer
import time

def ms_now():
    # Current wall-clock time in milliseconds
    return int(time.time_ns() / 1000000)

class Timer:
    def __init__(self):
        self.start = ms_now()

    def stop(self):
        return ms_now() - self.start

sentences = ["This is an example sentence each sentence is converted"] * 10

# Time model initialization, then ten encoding passes over the same batch
timer = Timer()
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
print("Model initialized", timer.stop())
for _ in range(10):
    timer = Timer()
    embeddings = model.encode(sentences)
    print(timer.stop())

I get the following error:

Traceback (most recent call last):
  File "/home/DrMeir/test/test.py", line 17, in <module>
    model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 95, in __init__
    modules = self._load_sbert_model(model_path)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/SentenceTransformer.py", line 840, in _load_sbert_model
    module = module_class.load(os.path.join(model_path, module_config['path']))
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 137, in load
    return Transformer(model_name_or_path=input_path, **config)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 29, in __init__
    self._load_model(model_name_or_path, config, cache_dir)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/sentence_transformers/models/Transformer.py", line 49, in _load_model
    self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
  File "/home/DrMeir/.local/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
    return model_class.from_pretrained(
  File "/home/DrMeir/.local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2903, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/DrMeir/.local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3061, in _load_pretrained_model
    id_tensor = id_tensor_storage(tensor) if tensor.device != torch.device("meta") else id(tensor)
RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta

PythonAnywhere has torch 1.8.1+cpu; on my laptop it is 2.0.1.
What is causing the error, and how can I get this to work?
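The installed torch version in each environment can be confirmed with a quick check like this:

import torch
print(torch.__version__)  # reported as 1.8.1+cpu on PythonAnywhere, 2.0.1 locally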


uttx8gqw · Answer #1

As mentioned in the comments, the meta device was only added in PyTorch 1.9, while PythonAnywhere ships with PyTorch 1.8.1.
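The call that fails in the traceback is torch.device("meta") itself, so the mismatch can be checked directly with plain torch (a minimal sanity check; on torch < 1.9 it should raise the same RuntimeError, on torch >= 1.9 it simply returns the device object):

import torch

# On torch 1.8.x the "meta" device type is not recognized, so this raises the
# "Expected one of cpu, cuda, ..." RuntimeError shown in the traceback above.
# On torch >= 1.9 it prints device(type='meta') without error.
print(torch.device("meta"))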
Downgrading the transformers library to 4.6.0, released on 12 May 2021 (before torch 1.9 came out), solved the problem.
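A sketch of that downgrade, mirroring the pip command from the question (add --user if your PythonAnywhere setup installs into ~/.local, as the traceback paths suggest):

pip install "transformers==4.6.0"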
