PyTorch: Hugging Face multi-class classification with AutoModelForSequenceClassification

k5ifujac · posted 2023-01-20 · in: Other

I'm trying to use Hugging Face's AutoModelForSequenceClassification API for multi-class classification, but I'm confused about how to configure it.
My labels are one-hot encoded and the problem is multi-class (exactly one label per example).
What I have tried:

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=6,
                                                           id2label=id2label,
                                                           label2id=label2id)


batch_size = 8
metric_name = "f1"


from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    f"bert-finetuned-english",
    evaluation_strategy = "epoch",
    save_strategy = "epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=10,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model=metric_name,
    #push_to_hub=True,
)

trainer = Trainer(
    model,
    args,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset["test"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)
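
For reference, the compute_metrics function passed to the Trainer is not defined above. A minimal sketch for a single-label multi-class setup, assuming scikit-learn is available and that the "f1" key matches metric_for_best_model, could look like this:

import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair of numpy arrays
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # if the labels are one-hot encoded, convert them to class indices first
    if labels.ndim > 1:
        labels = np.argmax(labels, axis=-1)
    return {"f1": f1_score(labels, preds, average="weighted")}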

Is this correct?
I'm also confused about the loss function: when I print the output of a forward pass, the loss was computed with BinaryCrossEntropyWithLogits:

SequenceClassifierOutput([('loss',
                           tensor(0.6986, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)),
                          ('logits',
                           tensor([[-0.5496,  0.0793, -0.5429, -0.1162, -0.0551]],
                                  grad_fn=<AddmmBackward0>))])

which is meant for multi-label or binary classification tasks. Shouldn't it be using nn.CrossEntropyLoss instead? How do I use this API correctly for multi-class classification and define the loss function?


kkbh8khc · 1#

You have six classes, and each cell is either 1 or 0. For example, the tensor [0., 0., 0., 0., 1., 0.] represents the fifth class. The task is to predict six values per example (e.g. [1., 0., 0., 0., 0., 0.]) and compare them against the ground truth ([0., 0., 0., 0., 1., 0.]).
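
For completeness: when config.problem_type is left unset, the Hugging Face sequence-classification head infers the loss from the label dtype. Integer class indices trigger nn.CrossEntropyLoss (single_label_classification), while float labels such as one-hot vectors trigger nn.BCEWithLogitsLoss (multi_label_classification), which is the grad_fn seen in the question. A minimal sketch, assuming the encoded dataset keeps the one-hot targets in a float "labels" column, is to map them back to integer class indices and, optionally, set the problem type explicitly:

import numpy as np
from transformers import AutoModelForSequenceClassification

# assumption: each split has a float one-hot column named "labels"
def to_class_index(example):
    example["labels"] = int(np.argmax(example["labels"]))
    return example

encoded_dataset = encoded_dataset.map(to_class_index)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=6,
    id2label=id2label,
    label2id=label2id,
    problem_type="single_label_classification",  # selects nn.CrossEntropyLoss
)

With integer labels the Trainer then uses the standard multi-class cross-entropy loss, and no custom loss override is needed.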
