I'm using an AWS SageMaker notebook instance to interact with a LLaMA model, because Colab Pro only provides 15 GB of GPU RAM and constantly runs out of memory (CUDA out of memory). My SageMaker notebook is an ml.g5.2xlarge or ml.g4dn.2xlarge instance, so memory should not be a problem. However, when I run this command:
!pip install llama-cpp-python==0.1.48
I get the following error message (the output is 130 lines, so I've condensed it to the most relevant parts):
Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
exit code: 1
[130 lines of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
1 Answer
You need Visual C++ on your machine. Download and install it, add it to your PATH, and the error should be resolved. If you are running in the cloud, you need VC++ available in that environment.
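A note on the answer above: Visual C++ applies to Windows hosts. SageMaker notebook instances run Amazon Linux, so the missing prerequisite there is a C/C++ toolchain such as gcc/g++ and CMake, which `llama-cpp-python` needs to compile the bundled llama.cpp sources. A minimal diagnostic sketch (assuming a POSIX shell; run it in a notebook cell prefixed with `!` or `%%bash`):

```shell
# Check whether the build tools pip needs to compile llama-cpp-python
# are on PATH; any "MISSING" line explains a "Failed building wheel" error.
for tool in gcc g++ make cmake; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool"
    else
        echo "MISSING: $tool"
    fi
done
```

On Amazon Linux, missing tools can typically be installed with `sudo yum install -y gcc gcc-c++ make cmake` (or `sudo apt-get install -y build-essential cmake` on Debian/Ubuntu-based images), after which re-running `pip install --no-cache-dir llama-cpp-python==0.1.48` forces a fresh build with the new toolchain.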