ollama [Windows 11] I can't run any models; whenever I try to run them I get error 0xc0000139

6ljaweal posted 2 months ago in Windows

Question: after upgrading, this error started appearing. Debug mode is enabled. What could be the problem?

[GIN] 2024/07/14 - 16:05:11 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/14 - 16:05:20 | 200 | 69.6μs | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/14 - 16:05:20 | 200 | 23.4842ms | 127.0.0.1 | POST "/api/show"
time=2024-07-14T16:05:21.026+08:00 level=INFO source=sched.go:701 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\X170\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=GPU-06932cd6-8249-ad0c-67b5-f2bcc68be311 parallel=4 available=14287372288 required="6.2 GiB"
time=2024-07-14T16:05:21.026+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[13.3 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-14T16:05:21.046+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\Users\X170\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe --model C:\Users\X170\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --no-mmap --parallel 4 --port 14534"
time=2024-07-14T16:05:21.050+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-14T16:05:21.050+08:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-14T16:05:21.051+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
time=2024-07-14T16:05:21.314+08:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000139 "
[GIN] 2024/07/14 - 16:

cl25kdpy1#

Have you tried reinstalling? That fixed the problem for me. My guess is it's an upgrade issue, and a manual install seems to work around it.

You don't need to uninstall; just download the latest Windows release, quit Ollama, and let the installer overwrite the existing install (this won't touch Ollama's environment variables or delete your models, it only replaces Ollama's core).
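If it helps, here is a minimal PowerShell sketch of that procedure. The process names and download URL are assumptions based on a default install, not taken from this thread:

# Sketch only: quit Ollama, fetch the latest Windows installer, and run it in place.
Stop-Process -Name "ollama app", "ollama" -ErrorAction SilentlyContinue
Invoke-WebRequest -Uri "https://ollama.com/download/OllamaSetup.exe" -OutFile "$env:TEMP\OllamaSetup.exe"
& "$env:TEMP\OllamaSetup.exe"   # overwrites the install; models under %USERPROFILE%\.ollama are untouched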

cxfofazt2#

> Have you tried reinstalling? That fixed the problem for me. My guess is it's an upgrade issue, and a manual install seems to work around it.
>
> You don't need to uninstall; just download the latest Windows release, quit Ollama, and let the installer overwrite the existing install. (This won't touch Ollama's environment variables or delete your models, it only replaces Ollama's core.)

I have already reinstalled several times, upgrading from 0.2.1 to 0.2.4 and then to 0.2.5. The same problem shows up every time.

eqfvzcg83#

> I have already reinstalled several times, upgrading from 0.2.1 to 0.2.4 and then to 0.2.5. The same problem shows up every time.

Have you updated your graphics driver recently? That could be the cause. If not, try a release before 0.2.x. And if any Windows updates were installed shortly before the problem appeared, try uninstalling them, just in case one of them broke something. The snippet below can help you check both.
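A quick way to gather both pieces of information; this is a hedged sketch using standard tools (Get-HotFix and nvidia-smi), and it assumes an NVIDIA GPU as shown in the logs:

# List the ten most recently installed Windows updates (candidates to uninstall).
Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 10
# Show the current NVIDIA driver version and GPU name (requires the NVIDIA driver).
nvidia-smi --query-gpu=driver_version,name --format=csv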

clj7thdc4#

Hi @hljhyb, sorry this is happening; I'll get it fixed.
Could you run the following commands in PowerShell:

$env:PATH="C:\Users\jmorgan\AppData\Local\Programs\Ollama\cuda;" + $env:PATH
& "C:\Users\X170\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe"

and tell me what error it prints?

polkgigr5#

> Hi @hljhyb, sorry this is happening; I'll get it fixed.
> Could you run the following commands in PowerShell:
>
> $env:PATH="C:\Users\jmorgan\AppData\Local\Programs\Ollama\cuda;" + $env:PATH
> & "C:\Users\X170\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe"
>
> and tell me what error it prints?

Here is what it prints:

INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="335452" timestamp=1721003942
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="335452" timestamp=1721003942 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="8080" tid="335452" timestamp=1721003942
llama_model_load: error loading model: llama_model_loader: failed to load model from
llama_load_model_from_file: exception loading model

zf2sa74q6#

@hljhyb the attached logs don't seem to have debug enabled. Can you try updating to v0.2.8, and if that doesn't clear it up, try the following so we can see why it isn't able to start up correctly. Quit the tray app, then run the following, which will restart it with debugging enabled.

$env:OLLAMA_DEBUG="1"
& "ollama app"

Then try to load a model, and assuming it fails, share the server.log.
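For reference, on a default Windows install the server log usually ends up under %LOCALAPPDATA%\Ollama, so a quick way to pull the tail once it fails (path assumed from the default install layout):

# Assumes the default install location; adjust if Ollama lives elsewhere.
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 50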

qzlgjiam7#

@dhiltgen
I did it as you said, but it still doesn't work.
server.log

bakd9h0s8#

Error code 0xC0000139 corresponds to the NTSTATUS error STATUS_ENTRYPOINT_NOT_FOUND, which means "the specified procedure could not be found".
That suggests the Windows installation has corrupted files, or is missing files that Ollama needs in order to work.
Have you tried opening cmd as administrator and running "sfc /scannow" and "dism /online /cleanup-image /restorehealth"? These commands help repair corrupted files and restore the system (exact commands below).
Let us know how those commands go, and whether Ollama works afterwards!
By the way, the list of NTSTATUS codes I used is here if you want to check for yourself: https://gist.github.com/mbrownnycnyc/24cb49aa2cba35be4764b1fa37bf9063
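For convenience, here is the exact sequence; these are stock Windows repair tools, nothing Ollama-specific, and they run the same from PowerShell or cmd:

# Open the console as administrator first (right-click, "Run as administrator").
sfc /scannow
dism /online /cleanup-image /restorehealth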
