ollama: inference is very slow starting from v0.1.39, CPU-only setup

j8yoct9x · posted 2 months ago in Other

What is the issue?

Up to v0.1.38, my setup was getting 9 tokens/s, but from v0.1.39 through the current v0.1.48 and the v0.2 releases, performance has dropped to 0.12 tokens/s.
My setup:

  • Intel(R) Core(TM) i5-9600T CPU
  • 64GB RAM DDR4 2666MHz Dual Channel
  • Linux Proxmox 8.1.10
  • Phi3 3B model

[screenshot: v0.1.39]

[screenshot: v0.1.38]

OS: Linux, Docker

GPU: Other

CPU: Intel

Ollama version: 0.1.48

eufgjt7s #1

Please share your server logs, along with the output of cat /proc/cpuinfo | grep ^flags | tail -1 from inside the Proxmox VM. Has Proxmox recently been updated or reconfigured in a way that stops vector extensions from being exposed to your VM? The slowdown you describe sounds like a fallback to the plain "cpu" runner, which has no vector extensions.
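
A quick way to check is to list only the vector-extension flags visible where the runner actually executes. This is a minimal sketch, assuming the container is named "ollama" as in the docker logs commands later in this thread:

# Minimal sketch: show only the SIMD-related flags visible inside the container
# (container name "ollama" is assumed); if avx/avx2/fma/f16c are missing,
# Ollama falls back to the plain "cpu" runner.
docker exec ollama grep -o -E 'ssse3|sse4_[12]|avx2|avx512[a-z_]*|avx|fma|f16c' /proc/cpuinfo | sort -u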

sqougxex #2

Here is an update on my setup.
I updated Proxmox to version 8.2.4.
Here are the LXC configuration options:

and the LXC resource configuration:

In addition, after my previous message I also changed some of the Docker environment settings, as shown below.
OLLAMA_FLASH_ATTENTION = 0
OLLAMA_KEEP_ALIVE = 2h
OLLAMA_MAX_LOADED_MODELS = 1
OLLAMA_NUM_PARALLEL = 1
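
For reference, this is roughly how those variables are passed to the container. It is only a minimal sketch, assuming the official ollama/ollama image and the container name "ollama" seen in the docker logs commands below; the volume name is illustrative, while the port and the /root/.ollama path match the logs:

# Minimal sketch: recreate the Ollama container with the environment settings above.
docker run -d --name ollama \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  -e OLLAMA_FLASH_ATTENTION=0 \
  -e OLLAMA_KEEP_ALIVE=2h \
  -e OLLAMA_MAX_LOADED_MODELS=1 \
  -e OLLAMA_NUM_PARALLEL=1 \
  ollama/ollama
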
It is important to note that the slow performance happens with every model, not just phi3. I chose phi3 only because with any larger model, such as llama3 8B, I would have to set aside far more time to get a test result, which is simply absurd: since v0.1.39 it has taken 40 to 60 minutes to get a reply.
The Ollama server runs in an LXC container, not a VM; here is the cpuinfo:
# cat /proc/cpuinfo | grep ^flags | tail -1
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d
For the logs below, I will use Open WebUI v0.3.7 with the phi3:instruct 4B model downloaded from the ollama library, running the same prompt each time.
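
As a side note, the same tokens/s figure can be reproduced without Open WebUI by calling the Ollama API directly. This is only a minimal sketch: it assumes the server is reachable on localhost:11434 (the port shown in the logs) and that jq is available for the arithmetic; eval_count and eval_duration (nanoseconds) are the generation-phase fields returned by /api/generate.

# Minimal sketch: time one generation via the Ollama API and derive tokens/s
# (assumes localhost:11434 and jq; eval_duration is reported in nanoseconds).
curl -s http://localhost:11434/api/generate \
  -d '{"model": "phi3:instruct", "prompt": "Introduce yourself", "stream": false}' \
  | jq '{eval_count, eval_seconds: (.eval_duration / 1e9), tokens_per_second: (.eval_count / (.eval_duration / 1e9))}'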

Prompt: Introduce yourself
Ollama version: 0.1.38
Capture:

Logs:

# docker logs -f ollama

2024/07/24 03:36:18 routes.go:1008: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-07-24T03:36:18.631Z level=INFO source=images.go:704 msg="total blobs: 57"
time=2024-07-24T03:36:18.632Z level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-07-24T03:36:18.633Z level=INFO source=routes.go:1054 msg="Listening on [::]:11434 (version 0.1.38)"
time=2024-07-24T03:36:18.633Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama837197813/runners
time=2024-07-24T03:36:21.342Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-07-24T03:36:21.344Z level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="62.5 GiB" available="1.5 GiB"
time=2024-07-24T03:37:05.108Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=13 memory.available="1.5 GiB" memory.required.full="3.1 GiB" memory.required.partial="1.4 GiB" memory.required.kv="768.0 MiB" memory.weights.total="2.1 GiB" memory.weights.repeating="2.0 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="156.0 MiB" memory.graph.partial="175.1 MiB"
time=2024-07-24T03:37:05.108Z level=INFO source=server.go:320 msg="starting llama server" cmd="/tmp/ollama837197813/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-4fed7364ee3e0c7cb4fe0880148bfdfcd1b630981efa0802a6b62ee52e7da97e --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 39227"
time=2024-07-24T03:37:05.108Z level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-07-24T03:37:05.108Z level=INFO source=server.go:504 msg="waiting for llama runner to start responding"
time=2024-07-24T03:37:05.109Z level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="126167069845376" timestamp=1721792225
INFO [main] system info | n_threads=3 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="126167069845376" timestamp=1721792225 total_threads=6
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="5" port="39227" tid="126167069845376" timestamp=1721792225
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-4fed7364ee3e0c7cb4fe0880148bfdfcd1b630981efa0802a6b62ee52e7da97e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32064
llama_model_loader: - kv   3:                       llama.context_length u32              = 4096
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 96
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32064]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32064]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 32000
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens definition check successful ( 323/32064 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32064
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 96
llm_load_print_meta: n_embd_head_k    = 96
llm_load_print_meta: n_embd_head_v    = 96
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3072
llm_load_print_meta: n_embd_v_gqa     = 3072
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.82 B
llm_load_print_meta: model size       = 2.16 GiB (4.85 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 32000 '<|endoftext|>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOT token        = 32007 '<|end|>'
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors:        CPU buffer size =  2210.78 MiB
.................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
time=2024-07-24T03:37:05.360Z level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:        CPU KV buffer size =   768.00 MiB
llama_new_context_with_model: KV self size  =  768.00 MiB, K (f16):  384.00 MiB, V (f16):  384.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.13 MiB
llama_new_context_with_model:        CPU compute buffer size =   156.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="126167069845376" timestamp=1721792225
time=2024-07-24T03:37:05.862Z level=INFO source=server.go:545 msg="llama runner started in 0.75 seconds"
[GIN] 2024/07/24 - 03:37:13 | 200 |  8.530760551s |      172.19.0.1 | POST     "/api/chat"
[GIN] 2024/07/24 - 03:37:28 | 200 |  14.69540352s |      172.19.0.1 | POST     "/v1/chat/completions"

Prompt: Introduce yourself
Ollama version: 0.1.39
Capture:

Logs:

# docker logs -f ollama

time=2024-07-24T03:48:11.691Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=14 memory.available="1.5 GiB" memory.required.full="3.1 GiB" memory.required.partial="1.5 GiB" memory.required.kv="768.0 MiB" memory.weights.total="2.1 GiB" memory.weights.repeating="2.0 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="156.0 MiB" memory.graph.partial="175.1 MiB"
time=2024-07-24T03:48:11.691Z level=INFO source=server.go:338 msg="starting llama server" cmd="/tmp/ollama2224331924/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-4fed7364ee3e0c7cb4fe0880148bfdfcd1b630981efa0802a6b62ee52e7da97e --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 44209"
time=2024-07-24T03:48:11.692Z level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-07-24T03:48:11.692Z level=INFO source=server.go:526 msg="waiting for llama runner to start responding"
time=2024-07-24T03:48:11.692Z level=INFO source=server.go:564 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="74f33ad" tid="126387508656000" timestamp=1721792891
INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="126387508656000" timestamp=1721792891 total_threads=6
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="5" port="44209" tid="126387508656000" timestamp=1721792891
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-4fed7364ee3e0c7cb4fe0880148bfdfcd1b630981efa0802a6b62ee52e7da97e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32064
llama_model_loader: - kv   3:                       llama.context_length u32              = 4096
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 96
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32064]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32064]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 32000
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens definition check successful ( 323/32064 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32064
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 96
llm_load_print_meta: n_embd_head_k    = 96
llm_load_print_meta: n_embd_head_v    = 96
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3072
llm_load_print_meta: n_embd_v_gqa     = 3072
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.82 B
llm_load_print_meta: model size       = 2.16 GiB (4.85 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 32000 '<|endoftext|>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOT token        = 32007 '<|end|>'
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors:        CPU buffer size =  2210.78 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
time=2024-07-24T03:48:11.944Z level=INFO source=server.go:564 msg="waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:        CPU KV buffer size =   768.00 MiB
llama_new_context_with_model: KV self size  =  768.00 MiB, K (f16):  384.00 MiB, V (f16):  384.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.13 MiB
llama_new_context_with_model:        CPU compute buffer size =   156.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="126387508656000" timestamp=1721792903
time=2024-07-24T03:48:23.727Z level=INFO source=server.go:569 msg="llama runner started in 12.03 seconds"
[GIN] 2024/07/24 - 03:50:01 | 404 |      62.831µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 03:52:01 | 404 |       75.95µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 03:54:01 | 404 |      66.773µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 03:56:01 | 404 |      69.565µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 03:58:01 | 404 |      83.394µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:00:01 | 404 |      79.922µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:02:01 | 404 |      70.707µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:04:01 | 404 |       58.75µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:06:01 | 404 |      65.274µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:08:01 | 404 |      69.027µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:10:01 | 404 |      62.119µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:11:50 | 200 |        23m39s |      172.19.0.1 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:12:01 | 404 |      56.012µs |   192.168.86.10 | POST     "/api/chat"

Prompt: Introduce yourself
Ollama version: 0.2.8
Capture:

Logs:

# docker logs -f ollama

time=2024-07-24T04:17:20.189Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[33.3 GiB]" memory.required.full="3.1 GiB" memory.required.partial="0 B" memory.required.kv="768.0 MiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="2.8 GiB" memory.weights.repeating="2.7 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="156.0 MiB" memory.graph.partial="175.1 MiB"
time=2024-07-24T04:17:20.190Z level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama2964803635/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-4fed7364ee3e0c7cb4fe0880148bfdfcd1b630981efa0802a6b62ee52e7da97e --ctx-size 2048 --batch-size 512 --embedding --log-disable --no-mmap --parallel 1 --port 37349"
time=2024-07-24T04:17:20.190Z level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-24T04:17:20.190Z level=INFO source=server.go:583 msg="waiting for llama runner to start responding"
time=2024-07-24T04:17:20.190Z level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="d94c6e0" tid="128087718434688" timestamp=1721794640
INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="128087718434688" timestamp=1721794640 total_threads=6
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="5" port="37349" tid="128087718434688" timestamp=1721794640
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-4fed7364ee3e0c7cb4fe0880148bfdfcd1b630981efa0802a6b62ee52e7da97e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32064
llama_model_loader: - kv   3:                       llama.context_length u32              = 4096
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 96
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32064]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32064]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 32000
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens cache size = 67
llm_load_vocab: token to piece cache size = 0.1691 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32064
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_rot            = 96
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 96
llm_load_print_meta: n_embd_head_v    = 96
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3072
llm_load_print_meta: n_embd_v_gqa     = 3072
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.82 B
llm_load_print_meta: model size       = 2.16 GiB (4.85 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 32000 '<|endoftext|>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOT token        = 32007 '<|end|>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors:        CPU buffer size =  2210.78 MiB
time=2024-07-24T04:17:20.442Z level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   768.00 MiB
llama_new_context_with_model: KV self size  =  768.00 MiB, K (f16):  384.00 MiB, V (f16):  384.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.13 MiB
llama_new_context_with_model:        CPU compute buffer size =   156.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="128087718434688" timestamp=1721794647
time=2024-07-24T04:17:27.468Z level=INFO source=server.go:622 msg="llama runner started in 7.28 seconds"
[GIN] 2024/07/24 - 04:18:01 | 404 |      67.191µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:20:01 | 404 |      64.503µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:22:01 | 404 |      74.046µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:24:01 | 404 |      71.695µs |   192.168.86.10 | POST     "/api/chat"
[GIN] 2024/07/24 - 04:24:40 | 200 |         7m20s |      172.19.0.1 | POST     "/api/chat"

Thank you very much for the help; this is a great project.
