AttributeError: 'tuple' object has no attribute 'size' when using smp.Unet with aux_params in segmentation_models_pytorch

yi0zb3m4  ·  posted 2023-10-20  ·  in: Other

I am working on a project that involves semantic segmentation using the smp (segmentation_models_pytorch) library in Python. I am trying to train a UNet model with an auxiliary classification head using the smp.Unet class. However, as soon as I add the aux_params argument to the smp.Unet constructor, I get the following error:

File .../python3.11/site-packages/segmentation_models_pytorch/utils/train.py:51, in Epoch.run(self, dataloader)
     49 for x, y in iterator:
     50     x, y = x.to(self.device), y.to(self.device)
---> 51     loss, y_pred = self.batch_update(x, y)
     53 # update loss logs
     54 loss_value = loss.cpu().detach().numpy()
...
-> 3162 if not (target.size() == input.size()):
   3163     raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   3165 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
File ".../train_model.py", line 153, in train
    train_logs = self.train_epoch.run(self.train_loader)
File ".../train_model.py", line 173, in main
    water_seg_model.train(epoch_number=100)
File ".../train_model.py", line 176, in <module>
    main()
AttributeError: 'tuple' object has no attribute 'size'

Here is a simplified version of my code:

import torch
import segmentation_models_pytorch as smp
from torch.utils.data import DataLoader

ENCODER = 'resnet34'
ENCODER_WEIGHTS = 'imagenet'
CLASSES = ['cats']
ACTIVATION = None
DROPOUT = 0.5
POOLING = 'avg'
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
THRESHOLD = 0.9
LEARNING_SPEED = 0.001
AUX_PARAMS = dict(
    classes=len(CLASSES),
    dropout=DROPOUT,
    activation=ACTIVATION,
    pooling=POOLING
)

class SegmentationModel():
    def __init__(self):
        self.model = smp.Unet(
            encoder_name=ENCODER,
            encoder_weights=ENCODER_WEIGHTS,
            in_channels=3,
            classes=len(CLASSES),
            aux_params=AUX_PARAMS
        )
        self.preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)
        self.loss = smp.losses.SoftBCEWithLogitsLoss()
        self.loss.__name__ = 'SoftBCEWithLogitsLoss'
        self.metrics = [
            smp.utils.metrics.IoU(threshold=THRESHOLD),
        ]
        self.optimizer = torch.optim.Adam([
            dict(params=self.model.parameters(), lr=0.0001),
        ])
        self.train_epoch = smp.utils.train.TrainEpoch(
            self.model,
            loss=self.loss,
            metrics=self.metrics,
            optimizer=self.optimizer,
            device=DEVICE,
            verbose=True,
        )
        self.train_dataset = Dataset(
            self.images_train_dir,
            self.masks_train_dir,
            augmentation=get_training_augmentation(),
            preprocessing=get_preprocessing(self.preprocessing_fn),
            classes=['cats'],
        )
        self.train_loader = DataLoader(self.train_dataset, batch_size=16, shuffle=True, num_workers=6)

    def train(self, epoch_number: int = 10):
        for i in range(epoch_number):
            print('\nEpoch: {}'.format(i))
            train_logs = self.train_epoch.run(self.train_loader)

def main():
    cats_seg_model = SegmentationModel()
    cats_seg_model.train(epoch_number=100)

What causes the "'tuple' object has no attribute 'size'" error when using the aux_params argument with smp.Unet? How do I correctly initialize an smp.Unet model with an aux_params dict so that training works without this error?
Any help or insight would be greatly appreciated. Thank you!
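For context, the failure mode can be reproduced without smp or torch at all: when aux_params is set, the model's forward pass returns a (mask, label) tuple instead of a single tensor, and the loss function then calls .size() on that tuple. A minimal sketch with a hypothetical stub model (StubAuxModel is not part of smp, just an illustration):

```python
class StubAuxModel:
    """Stands in for smp.Unet(..., aux_params=...): forward returns TWO outputs."""
    def __call__(self, x):
        mask = x        # pretend segmentation logits
        label = [0.5]   # pretend classification logits from the aux head
        return mask, label

model = StubAuxModel()
prediction = model("input")   # prediction is a tuple, not a tensor
try:
    prediction.size()         # what BCE-with-logits effectively does internally
except AttributeError as e:
    print(e)                  # 'tuple' object has no attribute 'size'
```

This is exactly the tuple that smp's stock TrainEpoch passes straight into the loss function, hence the traceback above.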

cbeh67ev1#

From the smp docs: all models support the aux_params parameter, which defaults to None. If aux_params = None, no auxiliary classification output is created; otherwise the model produces not only a mask but also a label output of shape NC. The classification head consists of GlobalPooling -> Dropout (optional) -> Linear -> Activation (optional) layers and can be configured via aux_params as follows:

aux_params = dict(
    pooling='avg',            # one of 'avg', 'max'
    dropout=0.5,              # dropout ratio, default is None
    activation='sigmoid',     # activation function, default is None
    classes=4,                # define number of output labels
)
model = smp.Unet('resnet34', classes=4, aux_params=aux_params)
mask, label = model(x)

So a possible solution, or at least a workaround, is to create a new Epoch class that handles the label output:

from segmentation_models_pytorch.utils import train

class TrainEpochWithAUX(train.Epoch):
    def __init__(self, model, loss, metrics, optimizer, device="cpu", verbose=True):
        super().__init__(
            model=model,
            loss=loss,
            metrics=metrics,
            stage_name="train",
            device=device,
            verbose=verbose,
        )
        self.optimizer = optimizer

    def on_epoch_start(self):
        self.model.train()

    def batch_update(self, x, y):
        self.optimizer.zero_grad()
        prediction, label = self.model.forward(x)  # unpack mask and aux label here
        loss = self.loss(prediction, y)            # loss computed on the mask only
        loss.backward()
        self.optimizer.step()
        return loss, prediction
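To see the one-line difference in isolation, here is a runnable sketch with stand-in callables (the lambdas below are hypothetical stubs, not smp objects): the stock batch_update passes the whole tuple to the loss, while the fixed version unpacks it first.

```python
def stock_batch_update(model, loss_fn, x, y):
    prediction = model(x)            # a tuple when aux_params is set -> crash in loss
    return loss_fn(prediction, y)

def fixed_batch_update(model, loss_fn, x, y):
    prediction, label = model(x)     # unpack (mask, label) as in TrainEpochWithAUX
    return loss_fn(prediction, y)    # loss is computed on the mask only

# stub model and loss to demonstrate: "mask" is x * 2, "label" is a constant
model = lambda x: (x * 2.0, 1)
loss_fn = lambda pred, target: abs(pred - target)

print(fixed_batch_update(model, loss_fn, 3.0, 6.0))  # 0.0
```

If you also want to train the classification head, you would additionally need a label target and a second loss term summed into `loss` before `backward()`; the class above simply ignores the label, which is enough to make segmentation training run.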
