Issue: When I send a large prompt of roughly 20k+ tokens to the Phi-3 Mini 128k model on a laptop with an Nvidia A2000 (4 GB VRAM), I hit a CUDA out-of-memory error. At first Ollama uses about 3.3 GB of GPU RAM and 8 GB of system RAM; GPU usage then climbs gradually (3.4 GB, 3.5 GB, ...) and after about a minute, when GPU memory is exhausted, it throws the error (the last value I saw in Task Manager was 3.9 GB). The inference did not return any tokens (as an answer) before the crash. Server log attached. OS is Windows 11, Ollama 0.1.42, VS Code 1.90.0, Continue extension v0.8.40.

Expected behavior: it should not crash, but instead reallocate memory somehow so that GPU memory is not exhausted. I would also like to disable GPU usage in Ollama entirely (to test CPU-only inference; I have 64 GB of system RAM), but I cannot find how to turn the GPU off (I recently saw a command for it, but I can't find it now).
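One thing I plan to try, assuming Ollama's per-request `num_gpu` option (the number of layers to offload to the GPU) behaves as documented, is forcing zero offloaded layers so everything stays on the CPU. A minimal sketch against the local Ollama HTTP API (model name and context length taken from my Continue settings; the prompt is just a placeholder):

```python
import requests  # pip install requests

# Ask the local Ollama server to run the model with zero layers offloaded
# to the GPU. "num_gpu" is Ollama's per-request option for the number of
# offloaded layers; 0 should keep inference entirely on the CPU.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3:3.8-mini-128k-instruct-q4_0",
        "prompt": "Say hello.",      # placeholder prompt
        "stream": False,
        "options": {
            "num_gpu": 0,            # no GPU offload -> CPU-only inference
            "num_ctx": 24000,        # same context length Continue is configured for
        },
    },
    timeout=600,
)
print(resp.json()["response"])
```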
Continue settings log:
Settings:
contextLength: 24000
maxTokens: 4000
model: phi3:3.8-mini-128k-instruct-q4_0
stop: <|end|>,<|user|>,<|assistant|>
log: undefined
Memory error:
CUDA error: out of memory
current device: 0, in function alloc at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:375
cuMemSetAccess(pool_addr + pool_size, reserve_size, &access, 1)
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:100: !"CUDA error"
Full Ollama server log:
time=2024-06-11T20:39:29.457+02:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=3 memory.available="3.2 GiB" memory.required.full="12.7 GiB" memory.required.partial="3.0 GiB" memory.required.kv="8.8 GiB" memory.weights.total="2.0 GiB" memory.weights.repeating="1.9 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="1.5 GiB" memory.graph.partial="1.5 GiB"
time=2024-06-11T20:39:29.470+02:00 level=INFO source=server.go:341 msg="starting llama server" cmd="C:\\Users\\username\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\username\\.ollama\\models\\blobs\\sha256-90184928e9771e8b73392b3f18e605ad19be5a115a9b5763decd491e2058b889 --ctx-size 24000 --batch-size 512 --embedding --log-disable --n-gpu-layers 3 --parallel 1 --port 58154"
time=2024-06-11T20:39:29.683+02:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-11T20:39:29.683+02:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-11T20:39:29.683+02:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3051 commit="5921b8f0" tid="18220" timestamp=1718131169
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="18220" timestamp=1718131169 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="58154" tid="18220" timestamp=1718131169
llama_model_loader: loaded meta data with 27 key-value pairs and 197 tensors from C:\Users\username\.ollama\models\blobs\sha256-90184928e9771e8b73392b3f18e605ad19be5a115a9b5763decd491e2058b889 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = phi3
llama_model_loader: - kv 1: general.name str = Phi3
llama_model_loader: - kv 2: phi3.context_length u32 = 131072
llama_model_loader: - kv 3: phi3.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 4: phi3.embedding_length u32 = 3072
llama_model_loader: - kv 5: phi3.feed_forward_length u32 = 8192
llama_model_loader: - kv 6: phi3.block_count u32 = 32
llama_model_loader: - kv 7: phi3.attention.head_count u32 = 32
llama_model_loader: - kv 8: phi3.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: phi3.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: phi3.rope.dimension_count u32 = 96
llama_model_loader: - kv 11: phi3.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 12: general.file_type u32 = 2
llama_model_loader: - kv 13: phi3.rope.scaling.attn_factor f32 = 1.190238
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.pre str = default
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32064] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32064] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32064] = [3, 3, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 32000
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.padding_token_id u32 = 32000
llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 24: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 25: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 26: general.quantization_version u32 = 2
llama_model_loader: - type f32: 67 tensors
llama_model_loader: - type q4_0: 129 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 323
llm_load_vocab: token to piece cache size = 0.3372 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = phi3
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32064
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 96
llm_load_print_meta: n_embd_head_k = 96
llm_load_print_meta: n_embd_head_v = 96
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 3072
llm_load_print_meta: n_embd_v_gqa = 3072
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 3.82 B
llm_load_print_meta: model size = 2.03 GiB (4.55 BPW)
llm_load_print_meta: general.name = Phi3
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 32000 '<|endoftext|>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOT token = 32007 '<|end|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA RTX A2000 Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.22 MiB
time=2024-06-11T20:39:29.944+02:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 3 repeating layers to GPU
llm_load_tensors: offloaded 3/33 layers to GPU
llm_load_tensors: CPU buffer size = 2074.66 MiB
llm_load_tensors: CUDA0 buffer size = 182.32 MiB
llama_new_context_with_model: n_ctx = 24000
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 8156.25 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 843.75 MiB
llama_new_context_with_model: KV self size = 9000.00 MiB, K (f16): 4500.00 MiB, V (f16): 4500.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.13 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1986.75 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 58.88 MiB
llama_new_context_with_model: graph nodes = 1286
llama_new_context_with_model: graph splits = 294
INFO [wmain] model loaded | tid="18220" timestamp=1718131173
time=2024-06-11T20:39:33.371+02:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server not responding"
time=2024-06-11T20:39:33.635+02:00 level=INFO source=server.go:572 msg="llama runner started in 3.95 seconds"
[GIN] 2024/06/11 - 20:39:37 | 200 | 8.2721184s | 127.0.0.1 | POST "/api/chat"
CUDA error: out of memory
current device: 0, in function alloc at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:375
cuMemSetAccess(pool_addr + pool_size, reserve_size, &access, 1)
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:100: !"CUDA error"
[GIN] 2024/06/11 - 20:41:34 | 200 | 1m6s | 127.0.0.1 | POST "/api/chat"
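For what it's worth, the KV-cache figures in the log line up with the context length I requested. A quick back-of-the-envelope check using the values llama.cpp prints above (n_layer, n_embd_k_gqa, f16 cache):

```python
# Rough KV-cache size check using the values printed by llama.cpp above.
n_ctx     = 24000   # --ctx-size
n_layer   = 32      # llm_load_print_meta: n_layer
n_embd_kv = 3072    # n_embd_k_gqa / n_embd_v_gqa
bytes_f16 = 2       # K and V are stored as f16

kv_bytes = n_ctx * n_embd_kv * bytes_f16 * 2 * n_layer     # K + V over all layers
print(f"KV cache total: {kv_bytes / 2**20:.2f} MiB")       # -> 9000.00 MiB, matches the log

# Only 3 of 32 layers were offloaded, so the GPU's share of the KV cache is:
print(f"GPU KV share:   {kv_bytes / 2**20 * 3 / 32:.2f} MiB")  # -> 843.75 MiB, matches CUDA0 KV buffer
```

Adding the 843.75 MiB KV slice to the 182.32 MiB of offloaded weights and the 1986.75 MiB CUDA0 compute buffer already comes to roughly the 3.0 GiB "memory.required.partial" estimate, which matches the gradual climb to about 3.9 GB in Task Manager before the crash on a 4 GB card.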
1 answer
It looks like the problem comes from the llama.cpp project. I'm not sure which version of llama.cpp my Ollama build uses, and I don't have the resources right now to dig into the llama.cpp project's issues.
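If it helps narrow this down, the bundled llama.cpp build is printed in the server log's `build info` line (build=3051, commit 5921b8f0 in the log above). A small sketch for pulling that out of the log file automatically; the log path is an assumption (on Windows, Ollama typically writes its logs under `%LOCALAPPDATA%\Ollama`):

```python
import re
from pathlib import Path

# Assumed log location on Windows -- adjust if your Ollama logs live elsewhere.
log_path = Path.home() / "AppData" / "Local" / "Ollama" / "server.log"

for line in log_path.read_text(encoding="utf-8", errors="ignore").splitlines():
    if "build info" in line:
        # Example line: INFO [wmain] build info | build=3051 commit="5921b8f0" ...
        m = re.search(r'build=(\d+)\s+commit="([0-9a-f]+)"', line)
        if m:
            print(f"bundled llama.cpp build {m.group(1)}, commit {m.group(2)}")
```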