ollama runs Gemma 2 too slowly

qf9go6mv · posted 2 months ago · in: Other

What is the issue?

After upgrading from 0.2.0 to 0.2.7, ollama runs gemma 2 9b extremely slowly. I don't think the OS is running out of VRAM: gemma 2 (q4_0) only takes about 6.8 GB, and my laptop has 8 GB of VRAM. Meanwhile, other 9B q4_0 models, such as glm4, run smoothly. Is this a bug in ollama, or a problem with gemma 2 itself?
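For reference, the decode speed can be measured against the local API rather than judged by feel. A minimal sketch in Python, assuming the default endpoint http://127.0.0.1:11434 (matching OLLAMA_HOST in the logs below) and a placeholder model tag gemma2; substitute the tag that actually runs slowly:

import json
import urllib.request

# Minimal tokens/s probe against the local Ollama HTTP API.
# eval_count and eval_duration are standard fields of the non-streaming
# /api/generate response; eval_duration is reported in nanoseconds.
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps({
        "model": "gemma2",            # placeholder; use the slow model's tag
        "prompt": "Why is the sky blue?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as r:
    resp = json.load(r)
print(f"{resp['eval_count'] / resp['eval_duration'] * 1e9:.2f} tokens/s")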

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.2.7

ccrfmcuu1#

Server logs would help diagnose the issue.
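On Windows the server log is usually written to %LOCALAPPDATA%\Ollama\server.log. Per-request timings (prompt and generation rates in tokens/s) can also be printed directly by running the model with ollama run <model> --verbose.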

iyfjxgzm2#

Server logs are helpful for diagnosing the issue. Here they are:

2024/07/21 07:09:47 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:D:\\AGI\\ollama_models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T07:09:47.466+08:00 level=INFO source=images.go:778 msg="total blobs: 77"
time=2024-07-21T07:09:47.559+08:00 level=INFO source=images.go:785 msg="total unused blobs removed: 0"
time=2024-07-21T07:09:47.562+08:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.2.7)"
time=2024-07-21T07:09:47.566+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11.3 rocm_v6.1 cpu cpu_avx cpu_avx2]"
time=2024-07-21T07:09:47.567+08:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-21T07:09:48.289+08:00 level=INFO source=gpu.go:287 msg="detected OS VRAM overhead" id=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda compute=8.6 driver=12.1 name="NVIDIA GeForce RTX 3070 Ti Laptop GPU" overhead="918.0 MiB"
time=2024-07-21T07:09:48.294+08:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda compute=8.6 driver=12.1 name="NVIDIA GeForce RTX 3070 Ti Laptop GPU" total="8.0 GiB" available="6.9 GiB"
[GIN] 2024/07/21 - 07:10:23 | 200 |      1.1431ms |       127.0.0.1 | HEAD     "/"
time=2024-07-21T07:10:23.868+08:00 level=WARN source=routes.go:817 msg="bad manifest filepath" name=hub/bacx/studybuddy:latest error="open D:\\AGI\\ollama_models\\blobs\\sha256-c65468c33ec86e462ef2a5eff135cbe40b4e7179b72806048034ccc9dd671eb6: The system cannot find the file specified."
[GIN] 2024/07/21 - 07:10:23 | 200 |     59.0374ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/07/21 - 07:10:39 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:10:39 | 200 |     41.6721ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T07:10:39.312+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=43 layers.offload=40 layers.split="" memory.available="[6.9 GiB]" memory.required.full="7.8 GiB" memory.required.partial="6.8 GiB" memory.required.kv="672.0 MiB" memory.required.allocations="[6.8 GiB]" memory.weights.total="5.3 GiB" memory.weights.repeating="4.6 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-07-21T07:10:39.325+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AGI\\ollama_models\\blobs\\sha256-befd260af00133c21746d65696658a69103b53287fee1a6d544e8f972de05d67 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 40 --no-mmap --parallel 1 --port 49780"
time=2024-07-21T07:10:39.357+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T07:10:39.359+08:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T07:10:39.360+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="13488" timestamp=1721517040
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="13488" timestamp=1721517040 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="49780" tid="13488" timestamp=1721517040
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from D:\AGI\ollama_models\blobs\sha256-befd260af00133c21746d65696658a69103b53287fee1a6d544e8f972de05d67 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = merged
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q4_K:  252 tensors
llama_model_loader: - type q6_K:   43 tensors
llm_load_vocab: special tokens cache size = 364
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 5.36 GiB (4.98 BPW) 
llm_load_print_meta: general.name     = merged
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
time=2024-07-21T07:10:40.379+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloaded 40/43 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =  1677.17 MiB
llm_load_tensors:      CUDA0 buffer size =  4529.00 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =    32.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   640.00 MiB
llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1224.77 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    16.01 MiB
llama_new_context_with_model: graph nodes  = 1690
llama_new_context_with_model: graph splits = 30
INFO [wmain] model loaded | tid="13488" timestamp=1721517046
time=2024-07-21T07:10:46.468+08:00 level=INFO source=server.go:617 msg="llama runner started in 7.11 seconds"
[GIN] 2024/07/21 - 07:10:46 | 200 |    7.2349256s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:17:43 | 200 |         5m40s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:18:01 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:18:01 | 200 |     85.0611ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T07:18:02.142+08:00 level=INFO source=sched.go:495 msg="updated VRAM based on existing loaded models" gpu=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda total="8.0 GiB" available="48.8 MiB"
time=2024-07-21T07:18:03.339+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=43 layers.offload=42 layers.split="" memory.available="[6.8 GiB]" memory.required.full="6.8 GiB" memory.required.partial="6.1 GiB" memory.required.kv="672.0 MiB" memory.required.allocations="[6.1 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-07-21T07:18:03.350+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AGI\\ollama_models\\blobs\\sha256-ba678f3760a834f86247d0fd1ad0ff6d62ba9b030774d0c1bf1c38835979b2d4 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 42 --no-mmap --parallel 1 --port 50143"
time=2024-07-21T07:18:03.389+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T07:18:03.389+08:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T07:18:03.400+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="11612" timestamp=1721517484
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="11612" timestamp=1721517484 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="50143" tid="11612" timestamp=1721517484
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from D:\AGI\ollama_models\blobs\sha256-ba678f3760a834f86247d0fd1ad0ff6d62ba9b030774d0c1bf1c38835979b2d4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = merged
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 12
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
time=2024-07-21T07:18:04.948+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q3_K:  168 tensors
llama_model_loader: - type q4_K:  122 tensors
llama_model_loader: - type q5_K:    4 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 364
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q3_K - Medium
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 4.43 GiB (4.12 BPW) 
llm_load_print_meta: general.name     = merged
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 42 repeating layers to GPU
llm_load_tensors: offloaded 42/43 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =  1435.56 MiB
llm_load_tensors:      CUDA0 buffer size =  3817.62 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   672.00 MiB
llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1224.77 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    15.01 MiB
llama_new_context_with_model: graph nodes  = 1690
llama_new_context_with_model: graph splits = 4
INFO [wmain] model loaded | tid="11612" timestamp=1721517491
time=2024-07-21T07:18:11.866+08:00 level=INFO source=server.go:617 msg="llama runner started in 8.48 seconds"
[GIN] 2024/07/21 - 07:18:11 | 200 |      9.90228s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:22:14 | 200 |         3m58s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:22:25 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:22:25 | 200 |    136.1339ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T07:22:25.318+08:00 level=INFO source=sched.go:495 msg="updated VRAM based on existing loaded models" gpu=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda total="8.0 GiB" available="640.9 MiB"
time=2024-07-21T07:22:26.450+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=43 layers.offload=41 layers.split="" memory.available="[6.7 GiB]" memory.required.full="7.5 GiB" memory.required.partial="6.7 GiB" memory.required.kv="672.0 MiB" memory.required.allocations="[6.7 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-07-21T07:22:26.462+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AGI\\ollama_models\\blobs\\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --no-mmap --parallel 1 --port 50231"
time=2024-07-21T07:22:26.526+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T07:22:26.526+08:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T07:22:26.529+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="9252" timestamp=1721517748
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="9252" timestamp=1721517748 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="50231" tid="9252" timestamp=1721517748
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from D:\AGI\ollama_models\blobs\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
time=2024-07-21T07:22:28.117+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q4_0:  294 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 364
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 5.06 GiB (4.71 BPW) 
llm_load_print_meta: general.name     = gemma-2-9b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 41 repeating layers to GPU
llm_load_tensors: offloaded 41/43 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =  1541.93 MiB
llm_load_tensors:      CUDA0 buffer size =  4361.05 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =    16.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   656.00 MiB
llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1224.77 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    16.01 MiB
llama_new_context_with_model: graph nodes  = 1690
llama_new_context_with_model: graph splits = 17
INFO [wmain] model loaded | tid="9252" timestamp=1721517757
time=2024-07-21T07:22:38.050+08:00 level=INFO source=server.go:617 msg="llama runner started in 11.52 seconds"
[GIN] 2024/07/21 - 07:22:38 | 200 |   12.9035189s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:25:36 | 200 |   59.9902702s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:25:48 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:25:48 | 200 |     62.6399ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/07/21 - 07:26:40 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:26:41 | 200 |    1.3670033s |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2024/07/21 - 07:26:48 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2024-07-21T07:26:48.291+08:00 level=WARN source=routes.go:817 msg="bad manifest filepath" name=hub/bacx/studybuddy:latest error="open D:\\AGI\\ollama_models\\blobs\\sha256-c65468c33ec86e462ef2a5eff135cbe40b4e7179b72806048034ccc9dd671eb6: The system cannot find the file specified."
[GIN] 2024/07/21 - 07:26:48 | 200 |    171.5556ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/07/21 - 07:26:59 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:26:59 | 404 |            0s |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T07:27:32.046+08:00 level=INFO source=images.go:1047 msg="request failed: Head \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232702Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=058dc757384f0a42d66693feb1e1e2f95cbe7e4925e98b1e2d4e0331b631abf3\": dial tcp 104.18.8.90:443: i/o timeout"
[GIN] 2024/07/21 - 07:27:32 | 200 |   32.7627094s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/07/21 - 07:27:48 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:27:48 | 404 |            0s |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T07:28:06.136+08:00 level=INFO source=download.go:136 msg="downloading ff1d1fc78170 in 55 100 MB part(s)"
time=2024-07-21T07:28:37.059+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.059+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.061+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 2 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.062+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 31 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.075+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.075+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.075+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.075+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 54 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.075+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 38 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.075+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 4 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.091+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.091+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 22 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.106+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.106+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 45 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 1 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 35 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 50 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 24 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.122+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 13 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.138+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.138+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 10 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:28:37.196+08:00 level=INFO source=images.go:1047 msg="request failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout"
time=2024-07-21T07:28:37.196+08:00 level=INFO source=download.go:178 msg="ff1d1fc78170 part 25 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ff/ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240720%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240720T232807Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=428592c40b79461663b67f7dfef07ad8b400cc6024fa095eb10494afd744febb\": dial tcp 104.18.8.90:443: i/o timeout, retrying in 1s"
time=2024-07-21T07:31:34.243+08:00 level=INFO source=images.go:1047 msg="request failed: Head \"https://registry.ollama.ai/v2/library/gemma2/blobs/sha256:109037bec39c0becc8221222ae23557559bc594290945a2c4221ab4f303b8871\": dial tcp 172.67.182.229:443: i/o timeout"
[GIN] 2024/07/21 - 07:31:34 | 200 |         3m45s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/07/21 - 07:31:54 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 07:31:54 | 404 |       503.3µs |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T07:32:11.792+08:00 level=INFO source=download.go:136 msg="downloading 109037bec39c in 1 136 B part(s)"
time=2024-07-21T07:32:15.018+08:00 level=INFO source=download.go:136 msg="downloading 097a36493f71 in 1 8.4 KB part(s)"
time=2024-07-21T07:32:18.187+08:00 level=INFO source=download.go:136 msg="downloading 10aa81da732e in 1 487 B part(s)"
[GIN] 2024/07/21 - 07:32:20 | 200 |   26.1686553s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/07/21 - 07:32:20 | 200 |     80.6173ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T07:32:20.522+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=43 layers.offload=41 layers.split="" memory.available="[6.7 GiB]" memory.required.full="7.5 GiB" memory.required.partial="6.7 GiB" memory.required.kv="672.0 MiB" memory.required.allocations="[6.7 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-07-21T07:32:20.534+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AGI\\ollama_models\\blobs\\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --no-mmap --parallel 1 --port 53069"
time=2024-07-21T07:32:20.538+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T07:32:20.538+08:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T07:32:20.540+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="3440" timestamp=1721518340
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="3440" timestamp=1721518340 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="53069" tid="3440" timestamp=1721518340
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from D:\AGI\ollama_models\blobs\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q4_0:  294 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-07-21T07:32:20.793+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 364
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 5.06 GiB (4.71 BPW) 
llm_load_print_meta: general.name     = gemma-2-9b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 41 repeating layers to GPU
llm_load_tensors: offloaded 41/43 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =  1541.93 MiB
llm_load_tensors:      CUDA0 buffer size =  4361.05 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =    16.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   656.00 MiB
llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1224.77 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    16.01 MiB
llama_new_context_with_model: graph nodes  = 1690
llama_new_context_with_model: graph splits = 17
INFO [wmain] model loaded | tid="3440" timestamp=1721518348
time=2024-07-21T07:32:28.977+08:00 level=INFO source=server.go:617 msg="llama runner started in 8.44 seconds"
[GIN] 2024/07/21 - 07:32:28 | 200 |    8.6336192s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:35:37 | 200 |   53.5176442s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 07:35:51 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2024-07-21T07:35:51.696+08:00 level=WARN source=routes.go:817 msg="bad manifest filepath" name=hub/bacx/studybuddy:latest error="open D:\\AGI\\ollama_models\\blobs\\sha256-c65468c33ec86e462ef2a5eff135cbe40b4e7179b72806048034ccc9dd671eb6: The system cannot find the file specified."
[GIN] 2024/07/21 - 07:35:51 | 200 |      21.012ms |       127.0.0.1 | GET      "/api/tags"
v1l68za4

v1l68za43#

Not all of the layers are loaded onto the GPU in 0.2.7:

llm_load_tensors: offloading 41 repeating layers to GPU
llm_load_tensors: offloaded 41/43 layers to GPU

Do you have the corresponding logs from 0.2.0?
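A quick way to test whether the two missing layers are the bottleneck is to pin the offload count yourself and read the decode speed back from the API. Below is a minimal sketch, assuming the model is tagged `gemma2` locally; `num_gpu` is the standard ollama option for the layer count, and `eval_count`/`eval_duration` are the standard fields in a non-streaming `/api/generate` response:

```python
import requests

# Hedged sketch: explicitly request full offload and measure decode
# throughput. 43 layers may not fit in 8 GiB alongside the KV cache,
# so drop to 42 if the load fails or falls back to CPU.
r = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={
        "model": "gemma2",               # assumed local tag for gemma-2-9b-it Q4_0
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"num_gpu": 43},      # layers to offload; omit for auto
    },
    timeout=600,
)
d = r.json()
# eval_duration is reported in nanoseconds
print(f"{d['eval_count'] / (d['eval_duration'] / 1e9):.1f} tokens/s")
```

Comparing this number with `num_gpu` left at auto (41 layers here) should make the penalty of partial offload obvious.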

swvgeqrz

swvgeqrz4#

Not all of the 8G on your card is available to ollama: memory.available="[6.8 GiB]". What is the output of nvidia-smi?
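To see how much of the 8 GiB other processes are already holding, you can query the driver directly. A minimal sketch using the standard `nvidia-smi --query-gpu` flags (running plain `nvidia-smi` in a terminal works just as well):

```python
import subprocess

# Hedged sketch: print total/used/free VRAM, which shows the OS/compositor
# overhead that ollama detects and subtracts before planning the offload.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,memory.total,memory.used,memory.free",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())
```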

jslywgbw

jslywgbw5#

Not all of the layers are loaded onto the GPU in 0.2.7:

llm_load_tensors: offloading 41 repeating layers to GPU
llm_load_tensors: offloaded 41/43 layers to GPU

Do you have the corresponding logs from 0.2.0?

2024/07/21 09:23:25 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:D:\\AGI\\ollama_models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T09:23:25.512+08:00 level=INFO source=images.go:751 msg="total blobs: 77"
time=2024-07-21T09:23:25.515+08:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-21T09:23:25.518+08:00 level=INFO source=routes.go:1080 msg="Listening on 127.0.0.1:11434 (version 0.2.0)"
time=2024-07-21T09:23:25.518+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-07-21T09:23:25.518+08:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-21T09:23:26.766+08:00 level=INFO source=types.go:103 msg="inference compute" id=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda compute=8.6 driver=12.1 name="NVIDIA GeForce RTX 3070 Ti Laptop GPU" total="8.0 GiB" available="6.9 GiB"
[GIN] 2024/07/21 - 09:23:39 | 200 |       562.4µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 09:23:39 | 200 |      52.121ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T09:23:40.138+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=43 layers.offload=42 layers.split="" memory.available="[7.5 GiB]" memory.required.full="7.5 GiB" memory.required.partial="6.8 GiB" memory.required.kv="672.0 MiB" memory.required.allocations="[6.8 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-07-21T09:23:40.157+08:00 level=INFO source=server.go:375 msg="starting llama server" cmd="C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AGI\\ollama_models\\blobs\\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 42 --no-mmap --parallel 1 --port 57976"
time=2024-07-21T09:23:40.225+08:00 level=INFO source=sched.go:477 msg="loaded runners" count=1
time=2024-07-21T09:23:40.225+08:00 level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-21T09:23:40.225+08:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="2304" timestamp=1721525021
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="2304" timestamp=1721525021 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="57976" tid="2304" timestamp=1721525021
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from D:\AGI\ollama_models\blobs\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
time=2024-07-21T09:23:41.755+08:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q4_0:  294 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 364
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 5.06 GiB (4.71 BPW) 
llm_load_print_meta: general.name     = gemma-2-9b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 42 repeating layers to GPU
llm_load_tensors: offloaded 42/43 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =  1435.56 MiB
llm_load_tensors:      CUDA0 buffer size =  4467.42 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   672.00 MiB
llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1224.77 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    15.01 MiB
llama_new_context_with_model: graph nodes  = 1690
llama_new_context_with_model: graph splits = 4
INFO [wmain] model loaded | tid="2304" timestamp=1721525031
time=2024-07-21T09:23:51.769+08:00 level=INFO source=server.go:609 msg="llama runner started in 11.54 seconds"
[GIN] 2024/07/21 - 09:23:51 | 200 |   11.7944928s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 09:25:32 | 200 |   28.1855238s |       127.0.0.1 | POST     "/api/chat"
luaexgnf

luaexgnf6#

Not all of the 8G on your card is available to ollama: memory.available="[6.8 GiB]". What is the output of nvidia-smi?
0.2.0 did indeed run very smoothly.

yv5phkfx

yv5phkfx7#

After I uninstalled 0.2.7 and reinstalled it, it now runs gemma2 smoothly as well. Rather than upgrading in place, it seems better to uninstall ollama and then reinstall it cleanly.
Here are the logs (a small log-scanning sketch follows the dump):

2024/07/21 09:32:08 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:D:\\AGI\\ollama_models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T09:32:08.218+08:00 level=INFO source=images.go:778 msg="total blobs: 77"
time=2024-07-21T09:32:08.221+08:00 level=INFO source=images.go:785 msg="total unused blobs removed: 0"
time=2024-07-21T09:32:08.224+08:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.2.7)"
time=2024-07-21T09:32:08.225+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v6.1 cpu cpu_avx]"
time=2024-07-21T09:32:08.225+08:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-21T09:32:08.556+08:00 level=INFO source=gpu.go:287 msg="detected OS VRAM overhead" id=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda compute=8.6 driver=12.1 name="NVIDIA GeForce RTX 3070 Ti Laptop GPU" overhead="641.5 MiB"
time=2024-07-21T09:32:08.557+08:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda compute=8.6 driver=12.1 name="NVIDIA GeForce RTX 3070 Ti Laptop GPU" total="8.0 GiB" available="6.9 GiB"
[GIN] 2024/07/21 - 09:32:15 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2024/07/21 - 09:32:23 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 09:32:23 | 200 |     19.0269ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T09:32:24.040+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=43 layers.offload=42 layers.split="" memory.available="[6.9 GiB]" memory.required.full="7.5 GiB" memory.required.partial="6.8 GiB" memory.required.kv="672.0 MiB" memory.required.allocations="[6.8 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-07-21T09:32:24.045+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AGI\\ollama_models\\blobs\\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 42 --no-mmap --parallel 1 --port 59469"
time=2024-07-21T09:32:24.108+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T09:32:24.110+08:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T09:32:24.111+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="3592" timestamp=1721525545
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="3592" timestamp=1721525545 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="59469" tid="3592" timestamp=1721525545
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from D:\AGI\ollama_models\blobs\sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
time=2024-07-21T09:32:25.884+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q4_0:  294 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 364
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 5.06 GiB (4.71 BPW) 
llm_load_print_meta: general.name     = gemma-2-9b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 42 repeating layers to GPU
llm_load_tensors: offloaded 42/43 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =  1435.56 MiB
llm_load_tensors:      CUDA0 buffer size =  4467.42 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   672.00 MiB
llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1224.77 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    15.01 MiB
llama_new_context_with_model: graph nodes  = 1690
llama_new_context_with_model: graph splits = 4
INFO [wmain] model loaded | tid="3592" timestamp=1721525550
time=2024-07-21T09:32:30.170+08:00 level=INFO source=server.go:617 msg="llama runner started in 6.06 seconds"
[GIN] 2024/07/21 - 09:32:30 | 200 |    6.1944242s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 09:35:09 | 200 |   21.2028767s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 09:35:19 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2024-07-21T09:35:19.577+08:00 level=WARN source=routes.go:817 msg="bad manifest filepath" name=hub/bacx/studybuddy:latest error="open D:\\AGI\\ollama_models\\blobs\\sha256-c65468c33ec86e462ef2a5eff135cbe40b4e7179b72806048034ccc9dd671eb6: The system cannot find the file specified."
[GIN] 2024/07/21 - 09:35:19 | 200 |      4.9597ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/07/21 - 09:35:30 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/21 - 09:35:30 | 200 |     36.1565ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-21T09:35:30.566+08:00 level=INFO source=sched.go:495 msg="updated VRAM based on existing loaded models" gpu=GPU-59be21cf-1a6f-4733-e579-d85deb64d686 library=cuda total="8.0 GiB" available="241.7 MiB"
time=2024-07-21T09:35:31.526+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=43 layers.offload=41 layers.split="" memory.available="[7.0 GiB]" memory.required.full="7.8 GiB" memory.required.partial="7.0 GiB" memory.required.kv="672.0 MiB" memory.required.allocations="[7.0 GiB]" memory.weights.total="5.3 GiB" memory.weights.repeating="4.6 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-07-21T09:35:31.529+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\\Users\\Raven\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AGI\\ollama_models\\blobs\\sha256-befd260af00133c21746d65696658a69103b53287fee1a6d544e8f972de05d67 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --no-mmap --parallel 1 --port 60108"
time=2024-07-21T09:35:31.534+08:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T09:35:31.534+08:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T09:35:31.534+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="16724" timestamp=1721525731
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="16724" timestamp=1721525731 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="60108" tid="16724" timestamp=1721525731
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from D:\AGI\ollama_models\blobs\sha256-befd260af00133c21746d65696658a69103b53287fee1a6d544e8f972de05d67 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = merged
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q4_K:  252 tensors
llama_model_loader: - type q6_K:   43 tensors
time=2024-07-21T09:35:31.787+08:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 364
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 5.36 GiB (4.98 BPW) 
llm_load_print_meta: general.name     = merged
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 41 repeating layers to GPU
llm_load_tensors: offloaded 41/43 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =  1556.37 MiB
llm_load_tensors:      CUDA0 buffer size =  4649.80 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =    16.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   656.00 MiB
llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1224.77 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    16.01 MiB
llama_new_context_with_model: graph nodes  = 1690
llama_new_context_with_model: graph splits = 17
INFO [wmain] model loaded | tid="16724" timestamp=1721525737
time=2024-07-21T09:35:37.353+08:00 level=INFO source=server.go:617 msg="llama runner started in 5.82 seconds"
[GIN] 2024/07/21 - 09:35:37 | 200 |    6.8512463s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 09:37:04 | 200 |   32.4115701s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/21 - 09:38:30 | 200 |   35.5102982s |       127.0.0.1 | POST     "/api/chat"
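For reference, the telling lines in these dumps are the offload and graph-split counts: the slow run shows `offloaded 41/43 layers to GPU` with `graph splits = 17`, while the fast runs show `42/43` with `graph splits = 4`. A small hedged helper to pull just those lines out of an ollama server log; the log path is an assumption (on Windows it is typically `%LOCALAPPDATA%\Ollama\server.log`):

```python
import re
import sys

# Hedged sketch: scan an ollama server.log for the offload/split lines that
# distinguish the slow (41/43 layers, 17 splits) and fast (42/43, 4 splits) runs.
# Usage: python scan_log.py path\to\server.log
pattern = re.compile(r"(offloaded \d+/\d+ layers to GPU|graph splits\s*=\s*\d+)")
with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
    for line in f:
        if pattern.search(line):
            print(line.rstrip())
```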
