llama.cpp server not compiled for multimodal, or the model projector can't be loaded

sq1bmfud · posted 2 months ago in Other

What happened?
Hey everyone,
I'm currently trying to set up the llama.cpp server with a LLaVA vision model.
Everything works great when using llama-llava-cli:

./llama-llava-cli -m ../llava-v1.6-vicuna-7b/Llava-V1.6-Vicuna-7B-F32.gguf --mmproj vit/mmproj-model-f16.gguf --image /root/Gfp-wisconsin-madison-the-nature-boardwalk.jpg -c 4096 -ngl 15000 -p "The image depicts a place. Describe this place. Would my dog, a natural explorer, be happy there?" --temp 0
(...)
encode_image_with_clip: 5 segments encoded in   320.06 ms
encode_image_with_clip: image embedding created: 2880 tokens

encode_image_with_clip: image encoded in   433.94 ms by CLIP (    0.15 ms per image patch)

 The image shows a serene and picturesque landscape. It appears to be a rural area with a long, narrow pathway leading through a field of tall grass. The pathway is flanked by a grassy area on one side and a line of trees or bushes on the other, providing a natural boundary. The sky is partly cloudy, suggesting a pleasant day with some sunshine.

Your dog, as a natural explorer, would likely be happy in such a setting. The open space and the opportunity to explore the grassy areas and the pathway would be appealing to a dog that enjoys running and sniffing around. The presence of trees and bushes would also offer shade and additional exploration opportunities. However, it's important to ensure that your dog is well-behaved and under control, especially if there are other animals or people around.

However, I can't get the same functionality out of the llama.cpp server. Running the following command...

./llama-server -m ../llava-v1.6-vicuna-7b/Llava-V1.6-Vicuna-7B-F32.gguf --mmproj vit/mmproj-model-f16.gguf --image /root/Gfp-wisconsin-madison-the-nature-boardwalk.jpg -c 4096 -ngl 15000 --host 0.0.0.0 --port 8010

...and then trying to use an image in the llama.cpp server front end results in the error message:

The server was not compiled for multimodal or the model projector can't be loaded

Trying the completion or OpenAI-style chat API with an image in the payload results in the image being ignored entirely (whether base64 or URL); see the example requests below.
The server logs also show no sign of CLIP being used.
Does anyone know what might be going wrong? This reddit thread from months ago shows this functionality working.
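For reference, here is roughly what I'm sending (a minimal sketch; <BASE64_DATA> stands in for the actual base64-encoded image, and the [img-12]/image_data convention is the one documented for the server's native /completion endpoint):

# OpenAI-style chat request, with the image as a base64 data URL:
curl http://localhost:8010/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this place."},
        {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,<BASE64_DATA>"}}
      ]
    }]
  }'

# Native /completion request, with the image referenced through an [img-12]
# placeholder whose id matches the entry in image_data:
curl http://localhost:8010/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "USER: [img-12] Describe this place. ASSISTANT:",
    "image_data": [{"data": "<BASE64_DATA>", "id": 12}],
    "n_predict": 256
  }'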

Name and Version

./llama-llava-cli --version
version: 3417 (3d0e436)
built with cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0 for x86_64-linux-gnu
./llama-server --version
version: 3417 (3d0e436)
built with cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0 for x86_64-linux-gnu

What operating system are you seeing the problem on?

Linux

Relevant log output

./llama-server -m ../llava-v1.6-vicuna-7b/Llava-V1.6-Vicuna-7B-F32.gguf --mmproj vit/mmproj-model-f16.gguf --image /root/Gfp-wisconsin-madison-the-nature-boardwalk.jpg -c 4096 -ngl 15000 --host 0.0.0.0 --port 8010
INFO [                    main] build info | tid="131625621454848" timestamp=1721418528 build=3417 commit="3d0e4367"
INFO [                    main] system info | tid="131625621454848" timestamp=1721418528 n_threads=4 n_threads_batch=-1 total_threads=4 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from ../llava-v1.6-vicuna-7b/Llava-V1.6-Vicuna-7B-F32.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Llava v1.6 Vicuna 7b
llama_model_loader: - kv   2:                           general.basename str              = llava-v1.6-vicuna
llama_model_loader: - kv   3:                         general.size_label str              = 7.1B
llama_model_loader: - kv   4:                               general.tags arr[str,1]       = ["image-text-to-text"]
llama_model_loader: - kv   5:                           llama.vocab_size u32              = 32000
llama_model_loader: - kv   6:                       llama.context_length u32              = 4096
llama_model_loader: - kv   7:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   8:                          llama.block_count u32              = 32
llama_model_loader: - kv   9:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv  10:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  11:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  12:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv  13:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  15:                          general.file_type u32              = 0
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  24:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - type  f32:  291 tensors
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 25.10 GiB (32.00 BPW) 
llm_load_print_meta: general.name     = Llava v1.6 Vicuna 7b
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: max token length = 48
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: Quadro RTX 8000, compute capability 7.5, VMM: yes
Device 1: Quadro RTX 8000, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =   500.00 MiB
llm_load_tensors:      CUDA0 buffer size = 13124.53 MiB
llm_load_tensors:      CUDA1 buffer size = 12080.48 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =  1088.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   960.00 MiB
llama_new_context_with_model: KV self size  = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.24 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   352.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   352.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    40.02 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 3
INFO [                    init] initializing slots | tid="131625621454848" timestamp=1721418536 n_slots=1
INFO [                    init] new slot | tid="131625621454848" timestamp=1721418536 id_slot=0 n_ctx_slot=4096
INFO [                    main] model loaded | tid="131625621454848" timestamp=1721418536
INFO [                    main] chat template | tid="131625621454848" timestamp=1721418536 chat_example="<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\nHello<|im_end|>\n<|im_start|>assistant\nHi there<|im_end|>\n<|im_start|>user\nHow are you?<|im_end|>\n<|im_start|>assistant\n" built_in=true
INFO [                    main] HTTP server listening | tid="131625621454848" timestamp=1721418536 n_threads_http="3" port="8010" hostname="0.0.0.0"
INFO [            update_slots] all slots are idle | tid="131625621454848" timestamp=1721418536
INFO [      log_server_request] request | tid="131624169635840" timestamp=1721418628 remote_addr="XXX" remote_port=54157 status=200 method="GET" path="/" params={}
INFO [      log_server_request] request | tid="131624169635840" timestamp=1721418628 remote_addr="XXX" remote_port=54157 status=200 method="GET" path="/index.js" params={}
INFO [      log_server_request] request | tid="131624159150080" timestamp=1721418628 remote_addr="XXX" remote_port=54158 status=200 method="GET" path="/completion.js" params={}
INFO [      log_server_request] request | tid="131624148664320" timestamp=1721418628 remote_addr="XXX" remote_port=54159 status=200 method="GET" path="/json-schema-to-grammar.mjs" params={}
INFO [   launch_slot_with_task] slot is processing task | tid="131625621454848" timestamp=1721418637 id_slot=0 id_task=0
INFO [            update_slots] kv cache rm [p0, end) | tid="131625621454848" timestamp=1721418637 id_slot=0 id_task=0 p0=0
INFO [      log_server_request] request | tid="131624159150080" timestamp=1721418638 remote_addr="10.243.231.194" remote_port=54160 status=200 method="POST" path="/completion" params={}
INFO [            update_slots] slot released | tid="131625621454848" timestamp=1721418638 id_slot=0 id_task=0 n_ctx=4096 n_past=69 n_system_tokens=0 n_cache_tokens=69 truncated=false
INFO [            update_slots] all slots are idle | tid="131625621454848" timestamp=1721418638
k3bvogb1 1#

Just realized that multimodal support was temporarily removed from llama-server.
Is there any update on when this feature will be available again?

6ju8rftf 3#

Do we know which commit or PR last supported it?

dba5bblo 4#

The latest server version that supports LLaVA is b2356.
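If you need the multimodal server in the meantime, one workaround is to pin your checkout to that tag and build the old binary. A rough sketch, assuming the build conventions of that era (the server was still built as ./server, and CUDA was enabled via LLAMA_CUBLAS rather than today's flags; the model paths below are placeholders):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b2356                 # last tag with LLaVA support in the server
make LLAMA_CUBLAS=1 server         # CPU-only builds: plain "make server"
./server -m model.gguf --mmproj mmproj-model-f16.gguf -c 4096 -ngl 99 --host 0.0.0.0 --port 8010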

oxiaedzo 5#

So I can't use moondream2 or Bunny VLM with the server? What are the alternatives?

p8ekf7hl 6#

Create a directory named LMStudio/models/WHATEVER/local/moondream2/.
Copy moondream2-mmproj-f16.gguf and moondream2-text-model-f16.gguf into that directory.
Select Alpaca as the preset.
Done.

xqk2d5yq 7#

Thanks, but I need this running as a server. Would that work?
