scipy optimize one iteration at a time

7cwmlq89 · asked 2022-11-10 · 1 answer · 153 views

I want to control the optimization objective as a function of the iteration number. In my real problem I have a complicated regularization term that I want to scale by the iteration count.
Is it possible to run a scipy optimizer one iteration at a time, or at least to access the iteration number inside the objective function?
Here is my best attempt so far:

from scipy.optimize import fmin_slsqp
from scipy.optimize import minimize as mini
import numpy as np

def objective(x, iteration):
    # Rosenbrock function plus a regularization term whose weight
    # I want to control with the iteration number.
    # x is the design input; iteration is the iteration number.
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2 + 10 * np.sum(x**2) / iteration

x = np.ones(2) * 5
for ii in range(20):
    # note: at ii == 0 the regularizer divides by zero (numpy returns inf)
    x = fmin_slsqp(objective, x, iter=1, args=(ii,), iprint=0)

    if ii == 5:
        print('at iteration 5, I expect to get ~ [0, 0], but I get', x)

truex = mini(objective, np.ones(2) * 5, args=(200,)).x
print('the final result is ', x, 'instead of the correct answer, which is close to [1, 1] (', truex, ')')

Output:

at iteration 5, I expect to get ~ [0, 0], but I get [5. 5.]
the final result is  [5. 5.] instead of the correct answer, [1, 1] ([0.88613989 0.78485145])

Answer #1 (wb1gzix0)

I don't think scipy offers this option.

Interestingly, this is exactly how pytorch works. See this example, which optimizes one iteration at a time:

import numpy as np
import torch

# define the rosenbrock function
a = 1
b = 5
def f(x):
    return (a - x[0])**2 + b * (x[1] - x[0]**2)**2

# create a stochastic rosenbrock function by randomly scaling f
def f_rand(x):
    return f(x) * np.random.uniform(0.5, 1.5)

x0 = np.array([0.1, 0.1])

learning_rate = 0.1  # undefined in the original snippet; a typical Adam step size
x_tensor = torch.tensor(x0, requires_grad=True)
optimizer = torch.optim.Adam([x_tensor], lr=learning_rate)

def closure():
    optimizer.zero_grad()
    loss = f_rand(x_tensor)
    loss.backward()
    return loss

# optimize one iteration at a time
for ii in range(200):
    optimizer.step(closure)

print('optimal solution found: ', x_tensor, f(x_tensor))
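
Because the loop index is explicit here, the iteration-dependent regularizer from the question can be plugged straight in. A minimal sketch under that assumption (the 1/iteration decay schedule and the lr value are just the ones from the question and above, not a recommendation):

import numpy as np
import torch

def objective(x, iteration):
    # Rosenbrock plus a regularizer whose weight decays with the iteration number
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2 + 10 * torch.sum(x**2) / iteration

x_tensor = torch.tensor([5.0, 5.0], requires_grad=True)
optimizer = torch.optim.Adam([x_tensor], lr=0.1)

for ii in range(1, 201):  # start at 1 so the regularizer never divides by zero
    optimizer.zero_grad()
    loss = objective(x_tensor, ii)
    loss.backward()
    optimizer.step()

print('solution with decaying regularization:', x_tensor.detach().numpy())

Starting the loop at 1 also sidesteps the division by zero that the question's fmin_slsqp attempt hits at ii == 0.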

If you really need to use scipy, you can create a class that counts calls, but note that this counts objective evaluations rather than true iterations (SLSQP evaluates the objective several times per iteration, e.g. for line searches and finite-difference gradients). You should also be careful when mixing this with an algorithm that builds an approximate inverse Hessian, since changing the objective between evaluations invalidates the accumulated curvature information.

from scipy.optimize import fmin_slsqp
from scipy.optimize import minimize as mini
import numpy as np

def objective(x):
    # reference objective with a fixed regularization term
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2 + 10 * np.sum(x**2)

class myclass:
    def __init__(self):
        self.iteration = 0

    def call(self, x):
        # counts objective evaluations, not SLSQP major iterations
        self.iteration += 1
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2 + 10 * np.sum(x**2) / self.iteration

x = np.ones(2) * 5
obj = myclass()
x = fmin_slsqp(obj.call, x, iprint=0)

truex = mini(objective, np.ones(2) * 5).x
print('the final result is ', x, ', which is not the correct answer, and is not close to [1, 1] (', truex, ')')
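
If you want the true major-iteration count rather than an evaluation count, scipy.optimize.minimize also accepts a callback argument that is invoked once per iteration for SLSQP. A sketch along those lines (the class and attribute names are illustrative, not part of scipy's API), with the same inverse-Hessian caveat as above:

from scipy.optimize import minimize
import numpy as np

class IterationCounter:
    def __init__(self):
        # start at 1 so the regularizer is well defined on the first evaluation
        self.iteration = 1

    def objective(self, x):
        # regularizer weight decays with the major iteration number
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2 + 10 * np.sum(x**2) / self.iteration

    def callback(self, xk):
        # minimize calls this once after each major iteration
        self.iteration += 1

counter = IterationCounter()
res = minimize(counter.objective, np.ones(2) * 5, method='SLSQP',
               callback=counter.callback)
print('result with per-iteration regularization:', res.x)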
