ollama support for the ppc64le architecture

w3nuxt5m  posted 2 months ago in Other

Hi everyone,

Thanks for Ollama, it's a great tool :)

I installed it on my local machine (Manjaro) and it runs fine. Then I tried to install it on a server with an IBM POWER8NVL CPU running Ubuntu 18.04, which means I can't use the install script, since it requires the amd64 CPU architecture. So I decided to build it myself.

First, I installed the gcc, cmake, and nvidia-cuda-toolkit packages with apt, then installed Go with "snap install go --classic".

Next, I downloaded Ollama with "wget https://github.com/jmorganca/ollama/archive/refs/heads/main.zip" and unzipped it. Then I ran "go generate ./..." in the extracted directory, but it ended with the following error:

go generate ./...
go: downloading gonum.org/v1/gonum v0.13.0
go: downloading github.com/spf13/cobra v1.7.0
go: downloading github.com/olekukonko/tablewriter v0.0.5
go: downloading github.com/dustin/go-humanize v1.0.1
go: downloading github.com/pdevine/readline v1.5.2
go: downloading golang.org/x/term v0.10.0
go: downloading golang.org/x/sync v0.3.0
go: downloading github.com/gin-contrib/cors v1.4.0
go: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
go: downloading github.com/mattn/go-runewidth v0.0.14
go: downloading github.com/gin-gonic/gin v1.9.1
go: downloading golang.org/x/crypto v0.10.0
go: downloading golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
go: downloading github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58
go: downloading github.com/rivo/uniseg v0.2.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading golang.org/x/net v0.10.0
go: downloading github.com/mattn/go-isatty v0.0.19
go: downloading github.com/pelletier/go-toml/v2 v2.0.8
go: downloading google.golang.org/protobuf v1.30.0
go: downloading github.com/go-playground/validator/v10 v10.14.0
go: downloading golang.org/x/sys v0.11.0
go: downloading github.com/leodido/go-urn v1.2.4
go: downloading github.com/gabriel-vasile/mimetype v1.4.2
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading golang.org/x/text v0.10.0
go: downloading github.com/go-playground/locales v0.14.1
fatal: not a git repository (or any of the parent directories): .git
llm/llama.cpp/generate_linux.go:3: running "git": exit status 128

I searched around but couldn't find a solution. Any ideas?

Best regards,
Orkut

fzsnzjdm1#

I added my SSH key, and after a "git remote add" I was able to pull the repository successfully. Then I ran "go generate ./..." again. Unfortunately, I got a new error:

ollama$ go generate ./...
Submodule 'llm/llama.cpp/ggml' (https://github.com/ggerganov/llama.cpp.git) registered for path 'ggml'
Submodule 'llm/llama.cpp/gguf' (https://github.com/ggerganov/llama.cpp.git) registered for path 'gguf'
Cloning into '/home/username/ollama/llm/llama.cpp/ggml'...
remote: Enumerating objects: 4961, done.
remote: Counting objects: 100% (4961/4961), done.
remote: Compressing objects: 100% (1493/1493), done.
remote: Total 4815 (delta 3444), reused 4641 (delta 3291), pack-reused 0
Receiving objects: 100% (4815/4815), 3.26 MiB | 10.01 MiB/s, done.
Resolving deltas: 100% (3444/3444), completed with 102 local objects.
From https://github.com/ggerganov/llama.cpp
 * branch            9e232f0234073358e7031c1b8d7aa45020469a3b -> FETCH_HEAD
Submodule path 'ggml': checked out '9e232f0234073358e7031c1b8d7aa45020469a3b'
CMake Error: The source directory "/home/username/ollama/llm/llama.cpp/ggml/build/cpu" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
llm/llama.cpp/generate_linux.go:10: running "cmake": exit status 1

Is this my CPU again?
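For what it's worth, the `running "cmake": exit status 1` line is `go generate` reporting that an external command failed, not a compiler diagnostic itself. A minimal sketch (illustrative only, not ollama's actual generate code) of how such a wrapper surfaces a tool failure:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runTool runs an external command the way a generate script might, and
// formats a failure like the `running "cmake": exit status 1` line above.
// Illustrative only; not ollama's actual generate code.
func runTool(name string, args ...string) string {
	if err := exec.Command(name, args...).Run(); err != nil {
		return fmt.Sprintf("running %q: %v", name, err)
	}
	return fmt.Sprintf("%s succeeded", name)
}

func main() {
	fmt.Println(runTool("cmake", "--version"))
}
```

The underlying cmake error here is about a missing source directory, so the exit status itself says nothing about CPU architecture.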

6pp0gazn2#

Hi @orkutmuratyilmaz, thanks for opening this issue. Ollama currently only supports the amd64 and arm64 CPU architectures; I don't think the IBM POWER8 CPU is compatible with the libraries we use to run language models.

6tdlim6h3#

Hi BruceMacD, thanks for your reply. I'm still looking for a solution. Is there any chance I could compile/build it from source? If so, where should I start reading? :)

0dxa2lsx4#

Thanks jmorganca for giving this issue a better title :)

kqqjbcuj5#

Is POWER9 supported? It would be great to have build instructions in the source tree until a ppc64le binary can be produced; at the very least, someone could post one here.

bvjveswy6#

I have a patched version working on ppc64le; here are the changes I made:

Required patch:

diff --git a/llm/llm.go b/llm/llm.go
index 33949c7..17e9d1c 100644
--- a/llm/llm.go
+++ b/llm/llm.go
@@ -6,6 +6,7 @@ package llm
 // #cgo windows,amd64 LDFLAGS: ${SRCDIR}/build/windows/amd64_static/libllama.a -static -lstdc++
 // #cgo linux,amd64 LDFLAGS: ${SRCDIR}/build/linux/x86_64_static/libllama.a -lstdc++
 // #cgo linux,arm64 LDFLAGS: ${SRCDIR}/build/linux/arm64_static/libllama.a -lstdc++
+// #cgo linux,ppc64le LDFLAGS: ${SRCDIR}/build/linux/ppc64le_static/libllama.a -lstdc++
 // #include <stdlib.h>
 // #include "llama.h"
 import "C"

Only needed for ollama run:

diff --git a/readline/term_linux.go b/readline/term_linux.go
index 2d6211d..69e05bf 100644
--- a/readline/term_linux.go
+++ b/readline/term_linux.go
@@ -5,10 +5,11 @@ package readline
 import (
        "syscall"
        "unsafe"
+       "golang.org/x/sys/unix"
 )

-const tcgets = 0x5401
-const tcsets = 0x5402
+const tcgets = unix.TCGETS
+const tcsets = unix.TCSETSF

 func getTermios(fd int) (*Termios, error) {
        termios := new(Termios)

The first patch lets the build find the static llama library; the second lets you use ollama run without hitting Error: inappropriate ioctl for device.
I build inside a conda environment so that I get newer clang/cmake/gcc/g++ versions than my base RHEL install provides.
CC=clang CXX=clang++ NVCC_PREPEND_FLAGS=-allow-unsupported-compiler go generate ./...
(I'm using CUDA 11.4, and nvcc complains about a "too new" compiler; you also need Go 1.22.)
(I'd open a PR for this, but I don't have enough time to test it properly.)

dfuffjeb7#

I've started testing and writing things up, though I'm a sysadmin rather than a developer.
I'm also trying to get hold of a server with a GPU, and will then test as needed.
