pytorch - Error when loading a custom ViT model with the from_pretrained() method in Hugging Face Transformers

ca1c2owp · posted 2023-06-06 in: Other

I want to fine-tune a ViT model to predict nine continuous values per sample, representing size fractions, so the nine values sum to 1. I defined the following model class, which inherits from the parent class ViTPreTrainedModel so that I can later call the from_pretrained() method:

class ViTForRegression(ViTPreTrainedModel):
    def __init__(self, model_name_or_path, num_labels=NB_SIEVING_SIZE):
        config = ViTConfig.from_pretrained(model_name_or_path)
        super().__init__(config)
        self.model = ViTModel.from_pretrained(model_name_or_path)
        self.regressor = torch.nn.Linear(self.model.config.hidden_size, num_labels)
        self.loss_fn = torch.nn.KLDivLoss(reduction='batchmean')

    def forward(self, pixel_values, labels=None):
        outputs = self.model(pixel_values=pixel_values)
        logits = self.regressor(outputs.last_hidden_state[:, 0])
        logits = torch.nn.functional.softmax(logits, dim=-1)
        log_probs = torch.log(logits) # we log the softmax (probabilities) for the KLDivLoss
        loss = None
        if labels is not None:
            loss = self.loss_fn(log_probs, labels)
        return { 'logits': logits, 'loss': loss }
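For reference, KLDivLoss with reduction='batchmean' expects log-probabilities as input and plain probabilities as target, which is why the softmax is logged above. A minimal standalone sanity check of that contract (the tensor values are illustrative only):

```python
import torch

# KLDivLoss expects log-probabilities as input and, by default,
# probabilities as target; log_softmax is the numerically stable
# way to obtain the log-probabilities.
loss_fn = torch.nn.KLDivLoss(reduction='batchmean')

logits = torch.randn(4, 9)  # batch of 4 samples, nine size fractions
log_probs = torch.nn.functional.log_softmax(logits, dim=-1)
targets = torch.softmax(torch.randn(4, 9), dim=-1)  # each row sums to 1

loss = loss_fn(log_probs, targets)
print(loss)  # non-negative scalar
```

Using log_softmax directly also avoids taking the log of a softmax that may underflow to zero.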

I then create the model from the pretrained ViT weights and train it with the Trainer class from the Transformers library.

model = ViTForRegression.from_pretrained('google/vit-base-patch16-224-in21k')

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=10)],
)

train_results = trainer.train()

The performance on the validation set is very encouraging, so I save the model with the save_pretrained() method:

model.save_pretrained(os.path.join(SAVED_MODELS_PATH, 'regression_vit'))

But now I can't figure out how to load my model from another script. What I tried is to define the same class as above and then load the model with:

model = ViTForRegression.from_pretrained(os.path.join(SAVED_MODELS_PATH, 'regression_vit'))

But it fails to load and I get the following error:

HFValidationError                         Traceback (most recent call last)
File ~/anaconda3/envs/pa_orcademo_torch/lib/python3.10/site-packages/transformers/configuration_utils.py:629, in PretrainedConfig._get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    627 try:
    628     # Load from local folder or from cache or download from model Hub and cache
--> 629     resolved_config_file = cached_file(
    630         pretrained_model_name_or_path,
    631         configuration_file,
    632         cache_dir=cache_dir,
    633         force_download=force_download,
    634         proxies=proxies,
    635         resume_download=resume_download,
    636         local_files_only=local_files_only,
    637         use_auth_token=use_auth_token,
    638         user_agent=user_agent,
    639         revision=revision,
    640         subfolder=subfolder,
    641         _commit_hash=commit_hash,
    642     )
    643     commit_hash = extract_commit_hash(resolved_config_file, commit_hash)

File ~/anaconda3/envs/pa_orcademo_torch/lib/python3.10/site-packages/transformers/utils/hub.py:417, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
    415 try:
    416     # Load from URL or cache if already cached
--> 417     resolved_file = hf_hub_download(
...
  "qkv_bias": true,
  "torch_dtype": "float32",
  "transformers_version": "4.29.2"
}
' is the correct path to a directory containing a config.json file

I don't know whether it's the way I save the model that's wrong or whether the loading is what causes the problem. I would really appreciate your help.
What I tried:

  • Loading my custom pretrained ViT model after training it and saving it with the model.save_pretrained() method.

What I expected:

  • The model loads and I can use it for prediction.

What actually happened:

  • The model fails to load.

cgyqldqp · answer 1#

I couldn't run your code example, because from_pretrained already raises an error.
The from_pretrained method inherited from ViTPreTrainedModel passes a ViTConfig object to the __init__ of the ViTForRegression class (code), not a string as in your example. You can pass this config object directly to ViTModel without calling from_pretrained again.
I would also put your additional parameter num_labels in the config, under a custom prefix (to avoid interfering with the hf code).

import torch
from transformers import ViTPreTrainedModel, ViTConfig, ViTModel

class ViTForRegression(ViTPreTrainedModel):
    def __init__(self, config: ViTConfig):
        #not needed because config is already passed
        #config = ViTConfig.from_pretrained(model_name_or_path)
        super().__init__(config)
        self.vit = ViTModel(config)
        self.regressor = torch.nn.Linear(config.hidden_size, config.my_num_labels)
        self.loss_fn = torch.nn.KLDivLoss(reduction='batchmean')

        self.post_init()

    def forward(self, pixel_values, labels=None):
        outputs = self.vit(pixel_values=pixel_values)
        logits = self.regressor(outputs.last_hidden_state[:, 0])
        logits = torch.nn.functional.softmax(logits, dim=-1)
        log_probs = torch.log(logits) # we log the softmax (probabilities) for the KLDivLoss
        loss = None
        if labels is not None:
            loss = self.loss_fn(log_probs, labels)
        return { 'logits': logits, 'loss': loss }

model_id = 'google/vit-base-patch16-224-in21k'
custom_config = ViTConfig.from_pretrained(model_id)
# adding custom parameter
custom_config.my_num_labels = 5 

model = ViTForRegression.from_pretrained(model_id, config=custom_config)
# Testing it
model.save_pretrained("blabla")
model_saved = ViTForRegression.from_pretrained('blabla')
all([torch.allclose(w1, w2) for w1, w2 in zip(model.parameters(), model_saved.parameters())])

Output:

True
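To round this off, here is a self-contained sketch of using such a model for prediction, which was the original goal. So that it runs quickly and without downloading any weights, it uses a tiny, randomly initialized ViTConfig; the small config values and the batch of random pixels are illustrative, not from the original post:

```python
import torch
from transformers import ViTConfig, ViTModel, ViTPreTrainedModel

class ViTForRegression(ViTPreTrainedModel):
    def __init__(self, config: ViTConfig):
        super().__init__(config)
        self.vit = ViTModel(config)
        self.regressor = torch.nn.Linear(config.hidden_size, config.my_num_labels)
        self.loss_fn = torch.nn.KLDivLoss(reduction='batchmean')
        self.post_init()

    def forward(self, pixel_values, labels=None):
        outputs = self.vit(pixel_values=pixel_values)
        logits = self.regressor(outputs.last_hidden_state[:, 0])
        probs = torch.nn.functional.softmax(logits, dim=-1)
        loss = None
        if labels is not None:
            loss = self.loss_fn(torch.log(probs), labels)
        return {'logits': probs, 'loss': loss}

# Tiny config so the sketch runs without network access; real use
# would load 'google/vit-base-patch16-224-in21k' as shown above.
config = ViTConfig(hidden_size=32, num_hidden_layers=1, num_attention_heads=2,
                   intermediate_size=64, image_size=32, patch_size=16)
config.my_num_labels = 9  # the nine size fractions

model = ViTForRegression(config)
model.eval()

pixel_values = torch.randn(2, 3, 32, 32)  # dummy batch of 2 RGB images
with torch.no_grad():
    out = model(pixel_values=pixel_values)

print(out['logits'].shape)        # torch.Size([2, 9])
print(out['logits'].sum(dim=-1))  # each row sums to ~1
```

After loading a trained checkpoint with ViTForRegression.from_pretrained(...), inference looks exactly like the forward pass here: the 'logits' entry already holds the softmaxed size fractions.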
