Using a trainable parameter in PyTorch

91zkwejq · asked 12 months ago · in Other

How can I add a trainable parameter in PyTorch that is constrained to the range 2 to 10? The parameter does not feed directly into the output computation, but it must keep receiving updates during training.


xytpbqjk · Answer #1

You can implement this in two different ways:
1. Clamping method (hard constraint):

import torch

a = torch.nn.Parameter(torch.tensor(100.0))  # requires_grad=True is the default for Parameter

# Run this in your training loop after optimizer.step() to clip the value
# in place and keep it in the desired range after every update:
with torch.no_grad():
    a.clamp_(min=2, max=10)
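A minimal end-to-end sketch of this pattern, assuming a dummy quadratic loss and an SGD learning rate chosen only for illustration:

```python
import torch

a = torch.nn.Parameter(torch.tensor(100.0))
optimizer = torch.optim.SGD([a], lr=0.1)

for _ in range(5):
    loss = a ** 2  # dummy loss pulling `a` toward 0 (illustration only)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        a.clamp_(min=2, max=10)  # re-project after every update

print(float(a))  # stays inside [2, 10] throughout training
```

Even though the dummy loss pushes `a` toward 0, the clamp after each step keeps it at or above the lower bound of 2.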
2. Sigmoid method (soft constraint):
# Keep an unconstrained raw parameter and map it into the range on every use;
# applying the sigmoid only once at construction would NOT keep `a` bounded
# after the optimizer updates it.
raw_a = torch.nn.Parameter(torch.tensor(0.0), requires_grad=True)
a = 2 + torch.sigmoid(raw_a) * (10 - 2)  # `a` is always strictly between 2 and 10
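A minimal training sketch of the sigmoid reparameterization, assuming a dummy loss that pulls the constrained value toward a target of 9 (loss, target, and learning rate are illustrative assumptions):

```python
import torch

raw_a = torch.nn.Parameter(torch.tensor(0.0))  # unconstrained; updated freely
optimizer = torch.optim.SGD([raw_a], lr=0.1)

def constrained_a():
    # sigmoid squashes raw_a into (0, 1); rescale into (2, 10)
    return 2 + torch.sigmoid(raw_a) * (10 - 2)

for _ in range(200):
    loss = (constrained_a() - 9.0) ** 2  # dummy target inside the range
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(constrained_a()))  # converges toward 9, always strictly inside (2, 10)
```

Because the mapping is applied on every use, gradients flow through the sigmoid into `raw_a`, and the constrained value can never leave the open interval (2, 10) no matter how `raw_a` moves.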

References:
https://discuss.pytorch.org/t/set-constraints-on-parameters-or-layers/23620/7
How to use a learnable parameter in pytorch, constrained between 0 and 1?
How can I limit the range of parameters in pytorch?
https://pytorch.org/docs/stable/generated/torch.Tensor.clamp_.html
