ludwig "Encounted nan values in tensor. Will be removed.", UserWarning

wwodge7n · 2 months ago

Hi team,

I'm running into a problem while fine-tuning an LLM with Ludwig on an NVIDIA A100 instance. I get the warning `Encounted nan values in tensor. Will be removed.`, and my loss and perplexity are reported as NaN.

```yaml
model_type: llm
base_model: elyza/ELYZA-japanese-Llama-2-7b-instruct

input_features:
  - name: instruction
    type: text

output_features:
  - name: output
    type: text

preprocessing:
  split_probabilities: [0.8, 0.1, 0.1]

prompt:
  template: >-
    Below is an instruction that describes a task, paired with an input
    that provides further context. Write a response that appropriately
    completes the request.

    ### Instruction: {instruction}

    ### Input: {input}

    ### Response:

generation:
  temperature: 0.01
  max_new_tokens: 512

adapter:
  type: lora

quantization:
  bits: 4

trainer:
  type: finetune
  use_gpu: True
  epochs: 1
  batch_size: 8
  eval_batch_size: 8
  gradient_accumulation_steps: 1
  learning_rate: 0.001
  optimizer:
    type: adam
    params:
      eps: 1.e-8
      betas:
        - 0.9
        - 0.999
      weight_decay: 0
  learning_rate_scheduler:
    warmup_fraction: 0.03
    reduce_on_plateau: 0
"""`

```json
{
  "evaluation_frequency": { "frequency": 1, "period": "epoch" },
  "test": {
    "combined": { "loss": [ NaN ] },
    "output": { "char_error_rate": [ 1.0 ], "loss": [ NaN ], "next_token_perplexity": [ NaN ], "perplexity": [ NaN ], "sequence_accuracy": [ 0.0 ], "token_accuracy": [ 0.0 ] }
  },
  "training": {
    "combined": { "loss": [ 1.7828550338745117 ] },
    "output": { "char_error_rate": [ 0.9905372858047485 ], "loss": [ 1.7828550338745117 ], "next_token_perplexity": [ 16787.67578125 ], "perplexity": [ NaN ], "sequence_accuracy": [ 0.0 ], "token_accuracy": [ 3.948421363020316e-05 ] }
  },
  "validation": {
    "combined": { "loss": [ NaN ] },
    "output": { "char_error_rate": [ 1.0 ], "loss": [ NaN ], "next_token_perplexity": [ NaN ], "perplexity": [ NaN ], "sequence_accuracy": [ 0.0 ], "token_accuracy": [ 0.0 ] }
  }
}
```
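Note that the training loss is still finite here while the validation and test losses are already NaN; since perplexity is the exponential of the cross-entropy loss, wherever the loss is NaN the perplexity metrics are necessarily NaN as well.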

2izufjch1#

Hi @msmmpts,

NaN values are much more likely when the learning rate is very high. I'd suggest trying a learning rate an order of magnitude smaller, e.g. 0.0001.
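Concretely, that would mean changing only the learning rate in the trainer section of the config above (a minimal sketch of the suggested change; all other settings stay as posted):

```yaml
trainer:
  type: finetune
  learning_rate: 0.0001  # one order of magnitude below the original 0.001
```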

6rqinv9w2#

Hi @justinxzhao,
I tried with a learning rate of 0.0001. The same issue persists.

```
Training:  18%|█▊        | 719/4000 [22:29<44:32,  1.23it/s]training: completed batch 719 memory used: 2984.25MB
/usr/local/lib/python3.10/dist-packages/torchmetrics/aggregation.py:77: UserWarning: Encounted `nan` values in tensor. Will be removed.
  warnings.warn("Encounted `nan` values in tensor. Will be removed.", UserWarning)
```
dsekswqp3#

Same here. Every time, at the end of the first epoch, I see this warning and then get the following error:

```
Starting with step 0, epoch: 0
Training:  33%|███▎      | 429/1287 [32:07<1:08:57,  4.82s/it, loss=nan]Found NaN or inf values in parameter 'model.base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight' of module 'LLM'
NaN or inf tensors found in the model. Stopping training.
Could not load best checkpoint state from /mnt/disk/AI/ludwig/ludwig-lora/results/experiment_run/model/training_checkpoints/best.ckpt. Best checkpoint may not exist.
Traceback (most recent call last):
  File "/home/constellate/anaconda3/envs/ludwig/bin/ludwig", line 8, in <module>
    sys.exit(main())
  File "/home/constellate/anaconda3/envs/ludwig/lib/python3.10/site-packages/ludwig/cli.py", line 197, in main
    CLI()
  File "/home/constellate/anaconda3/envs/ludwig/lib/python3.10/site-packages/ludwig/cli.py", line 72, in __init__
    getattr(self, args.command)()
  File "/home/constellate/anaconda3/envs/ludwig/lib/python3.10/site-packages/ludwig/cli.py", line 77, in train
    train.cli(sys.argv[2:])
  File "/home/constellate/anaconda3/envs/ludwig/lib/python3.10/site-packages/ludwig/train.py", line 395, in cli
    train_cli(**vars(args))
  File "/home/constellate/anaconda3/envs/ludwig/lib/python3.10/site-packages/ludwig/train.py", line 185, in train_cli
    model.train(
  File "/home/constellate/anaconda3/envs/ludwig/lib/python3.10/site-packages/ludwig/api.py", line 678, in train
    train_stats = trainer.train(
  File "/home/constellate/anaconda3/envs/ludwig/lib/python3.10/site-packages/ludwig/trainers/trainer.py", line 1130, in train
    raise RuntimeError(error_message)
RuntimeError: Training ran into an error. No checkpoint was saved. This is because training was terminated early due to the presence of NaN or Inf values in the model weights before a single valid checkpoint could be saved.
```

Here is my model.yaml file:

```yaml
model_type: llm
backend:
  type: local
base_model: mistralai/Mistral-7B-v0.1
quantization:
  bits: 4

adapter:
  type: lora

prompt:
  template: >-
    You are given a premise and a hypothesis below. If the premise entails the hypothesis, return 0. If the premise contradicts the hypothesis, return 2. Otherwise, if the premise does neither, return 1.

    ### Premise: {premise}

    ### Hypothesis: {hypothesis}

    ### Label:

input_features:
  - name: input
    type: text

output_features:
  - name: label
    type: text
    preprocessing:
      max_sequence_length: 1

trainer:
  type: finetune
  batch_size: auto
  gradient_accumulation_steps: 16
  enable_gradient_checkpointing: true
  epochs: 3
  learning_rate: 2.0e-4
  optimizer:
    type: paged_adam
```
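For what it's worth, a more conservative version of this trainer block, combining the lower learning rate suggested in 1# with the warmup scheduler from the first config, might look like the sketch below (settings to try, not a verified fix; the exact values are guesses):

```yaml
trainer:
  type: finetune
  batch_size: auto
  gradient_accumulation_steps: 16
  enable_gradient_checkpointing: true
  epochs: 3
  learning_rate: 2.0e-5          # an order of magnitude lower, per the advice in 1#
  optimizer:
    type: paged_adam
  learning_rate_scheduler:
    warmup_fraction: 0.03        # warmup, as used in the first config above
```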
