How do I compute the F1 score on my test set during evaluation in PyTorch?

xzlaal3s · posted 2022-11-09 · in: Other
Follow (0) | Answers (1) | Views (233)

I am trying to compute the F1 score while evaluating my own test set, but I can't work it out because I am very inexperienced. I have tried both the `f1_score` from scikit-learn and the one from torchmetrics, but each gives me a different error. This is my code:

```python
# Function to test the model
from sklearn.metrics import f1_score

since = time.time()
total = 0
correct = 0
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
y_pred = []
y_true = []
# Iterate over data.
with torch.no_grad():
    for inputs, labels in dataloadersTest_dict['Test']:
        inputs = inputs.to(device)
        labels = labels.to(device)
        #outputs = model(inputs)
        predicted_outputs = model(inputs)
        _, predicted = torch.max(predicted_outputs, 1)
        total += labels.size(0)
        print(total)
        correct += (predicted == labels).sum().item()
        print(correct)
        # f1 score
        temp_true = labels.numpy()
        temp_pred = predicted.numpy()
        y_true.append(temp_true.tolist())
        y_pred.append(temp_pred.tolist())

time_elapsed = time.time() - since
test_acc = 100 * correct / total
print('Evaluation completed in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('Accuracy: %d %%' % (test_acc))
print('F1 Score:')
f1 = f1_score(y_true, y_pred, average='macro')
print(f1)
```

byqmnocz · answer #1

The error traceback would help pinpoint the problem, but my guess is that it comes from passing a nested list of per-batch lists to `f1_score` instead of a single flat list. It can be fixed by changing how the final lists are collected:

```python
# Iterate over data.
y_true, y_pred = [], []
with torch.no_grad():
    for inputs, labels in dataloadersTest_dict['Test']:
        inputs = inputs.to(device)
        labels = labels.to(device)
        #outputs = model(inputs)
        predicted_outputs = model(inputs)
        _, predicted = torch.max(predicted_outputs, 1)
        total += labels.size(0)
        print(total)
        correct += (predicted == labels).sum().item()
        print(correct)
        # f1 score: move tensors to CPU before .numpy(), since .numpy()
        # raises an error on CUDA tensors
        temp_true = labels.cpu().numpy()
        temp_pred = predicted.cpu().numpy()
        # extend the flat lists with += instead of appending nested lists
        y_true += temp_true.tolist()
        y_pred += temp_pred.tolist()
```
