How do I display the progress of a scipy.optimize function?

yduiuuwa · asked on 2022-11-10

I use scipy.optimize to minimize a function of 12 arguments.
I started the optimization a while ago and am still waiting for results.
Is there a way to force scipy.optimize to display its progress (e.g. how much has already been done, what the current best point is)?


eqfvzcg8 #1

As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately, leastsq does not permit this at the moment). Below is an example using the fmin_bfgs routine, where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.

import numpy as np
from scipy.optimize import fmin_bfgs

Nfeval = 1

def rosen(X): #Rosenbrock function
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def callbackF(Xi):
    global Nfeval
    print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], rosen(Xi)))
    Nfeval += 1

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              callback=callbackF, 
              maxiter=2000, 
              full_output=True, 
              retall=False)

The output looks like this:

Iter    X1          X2          X3         f(X)      
   1    1.031582    1.062553    1.130971    0.005550
   2    1.031100    1.063194    1.130732    0.004973
   3    1.027805    1.055917    1.114717    0.003927
   4    1.020343    1.040319    1.081299    0.002193
   5    1.005098    1.009236    1.016252    0.000739
   6    1.004867    1.009274    1.017836    0.000197
   7    1.001201    1.002372    1.004708    0.000007
   8    1.000124    1.000249    1.000483    0.000000
   9    0.999999    0.999999    0.999998    0.000000
  10    0.999997    0.999995    0.999989    0.000000
  11    0.999997    0.999995    0.999989    0.000000
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 11
         Function evaluations: 85
         Gradient evaluations: 17

At least this way you can watch as the optimizer tracks the minimum.


d8tt03nd #2

The example below shows how to get rid of the global variable and the call_back function, and how to avoid re-evaluating the target function multiple times.

import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info): #Rosenbrock function
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

    # display information
    if info['Nfeval']%100 == 0:
        print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(info['Nfeval'], X[0], X[1], X[2], res))
    info['Nfeval'] += 1
    return res

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              args=({'Nfeval':0},), 
              maxiter=1000, 
              full_output=True, 
              retall=False,
              )

This generates output like:

Iter    X1          X2          X3         f(X)     
   0    1.100000    1.100000    1.100000    2.440000
 100    1.000000    0.999999    0.999998    0.000000
 200    1.000000    0.999999    0.999998    0.000000
 300    1.000000    0.999999    0.999998    0.000000
 400    1.000000    0.999999    0.999998    0.000000
 500    1.000000    0.999999    0.999998    0.000000
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.000000
         Iterations: 12
         Function evaluations: 502
         Gradient evaluations: 98

However, there is no free lunch: here I used the number of function evaluations rather than the number of algorithmic iterations as the counter. Some algorithms may evaluate the target function multiple times within a single iteration.


c9qzyr3d #3

Try using:

options={'disp': True}

to force scipy.optimize.minimize to print intermediate results.
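
A minimal sketch of what that looks like in full, reusing the Rosenbrock function from the other answers (scipy ships it as scipy.optimize.rosen):

import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([1.1, 1.1, 1.1])
# disp=True prints a convergence summary; some methods also
# print per-iteration information
res = minimize(rosen, x0, method='BFGS', options={'disp': True})
print(res.x)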


flseospp #4

Many of the optimizers in scipy indeed lack verbose output (the 'trust-constr' method of scipy.optimize.minimize being an exception). I faced a similar problem and solved it by creating a wrapper around the objective function and using a callback function. No additional function evaluations are performed here, so this should be an efficient solution.
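
As an aside, the 'trust-constr' verbosity mentioned above can be switched on directly; a minimal sketch, using scipy's built-in rosen test function:

from scipy.optimize import minimize, rosen

# verbose=2 prints a per-iteration progress table, verbose=3 adds more detail
res = minimize(rosen, [0.0, 0.0], method='trust-constr', options={'verbose': 2})

The wrapper itself: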

import numpy as np

class Simulator:
    def __init__(self, function):
        self.f = function  # actual objective function
        self.num_calls = 0  # how many times f has been called
        self.callback_count = 0  # number of times callback has been called, also measures iteration count
        self.list_calls_inp = []  # input of all calls
        self.list_calls_res = []  # result of all calls
        self.decreasing_list_calls_inp = []  # input of calls that resulted in decrease
        self.decreasing_list_calls_res = []  # result of calls that resulted in decrease
        self.list_callback_inp = []  # only appends inputs on callback, as such they correspond to the iterations
        self.list_callback_res = []  # only appends results on callback, as such they correspond to the iterations

    def simulate(self, x, *args):
        """Executes the actual simulation and returns the result, while
        updating the lists too. Pass to optimizer without arguments or
        parentheses."""
        result = self.f(x, *args)  # the actual evaluation of the function
        if not self.num_calls:  # first call is stored in all lists
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
            self.list_callback_inp.append(x)
            self.list_callback_res.append(result)
        elif result < self.decreasing_list_calls_res[-1]:
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
        self.list_calls_inp.append(x)
        self.list_calls_res.append(result)
        self.num_calls += 1
        return result

    def callback(self, xk, *_):
        """Callback function that can be used by optimizers of scipy.optimize.
        The third argument "*_" makes sure that it still works when the
        optimizer calls the callback function with more than one argument. Pass
        to optimizer without arguments or parentheses."""
        s1 = ""
        xk = np.atleast_1d(xk)
        # search backwards in input list for input corresponding to xk
        for i, x in reversed(list(enumerate(self.list_calls_inp))):
            x = np.atleast_1d(x)
            if np.allclose(x, xk):
                break

        for comp in xk:
            s1 += f"{comp:10.5e}\t"
        s1 += f"{self.list_calls_res[i]:10.5e}"

        self.list_callback_inp.append(xk)
        self.list_callback_res.append(self.list_calls_res[i])

        if not self.callback_count:
            s0 = ""
            for j, _ in enumerate(xk):
                tmp = f"Comp-{j+1}"
                s0 += f"{tmp:10s}\t"
            s0 += "Objective"
            print(s0)
        print(s1)
        self.callback_count += 1

A simple test can be defined:

from scipy.optimize import minimize, rosen
ros_sim = Simulator(rosen)
minimize(ros_sim.simulate, [0, 0], method='BFGS', callback=ros_sim.callback, options={"disp": True})

print(f"Number of calls to Simulator instance {ros_sim.num_calls}")

resulting in:

Comp-1          Comp-2          Objective
1.76348e-01     -1.31390e-07    7.75116e-01
2.85778e-01     4.49433e-02     6.44992e-01
3.14130e-01     9.14198e-02     4.75685e-01
4.26061e-01     1.66413e-01     3.52251e-01
5.47657e-01     2.69948e-01     2.94496e-01
5.59299e-01     3.00400e-01     2.09631e-01
6.49988e-01     4.12880e-01     1.31733e-01
7.29661e-01     5.21348e-01     8.53096e-02
7.97441e-01     6.39950e-01     4.26607e-02
8.43948e-01     7.08872e-01     2.54921e-02
8.73649e-01     7.56823e-01     2.01121e-02
9.05079e-01     8.12892e-01     1.29502e-02
9.38085e-01     8.78276e-01     4.13206e-03
9.73116e-01     9.44072e-01     1.55308e-03
9.86552e-01     9.73498e-01     1.85366e-04
9.99529e-01     9.98598e-01     2.14298e-05
9.99114e-01     9.98178e-01     1.04837e-06
9.99913e-01     9.99825e-01     7.61051e-09
9.99995e-01     9.99989e-01     2.83979e-11
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 96
         Gradient evaluations: 24
Number of calls to Simulator instance 96

Of course this is only a template; you can adjust it to your needs. It does not provide all the information about the state of the optimizer (like e.g. the Optimization Toolbox of MATLAB does), but at least you get some idea of the progress of the optimization.
A similar approach can be found here, but without using the callback function. In my approach, the callback function is used to print output exactly when the optimizer has finished an iteration, and not on every single function call.


z4iuyo4d #5

Which minimization function are you using exactly?
Most of the functions have a progress report built in, including multiple levels of reports showing exactly the data you want, enabled via the disp flag (see for example scipy.optimize.fmin_l_bfgs_b).
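
For example, a minimal sketch with fmin_l_bfgs_b and the built-in rosen test function (approx_grad=True spares you from supplying a gradient):

from scipy.optimize import fmin_l_bfgs_b, rosen

# disp=1 prints a convergence summary; larger values print more
x, f, d = fmin_l_bfgs_b(rosen, [1.1, 1.1, 1.1], approx_grad=True, disp=1)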


vql8enpb #6

You can also include a simple print() statement in the function to be minimized. If you import the function, you can create a wrapper.

import numpy as np
from scipy.optimize import minimize

def rosen(X): #Rosenbrock function
    print(X)
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen, 
         x0)
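
A minimal sketch of the wrapper variant mentioned above, for when rosen is imported and cannot be edited (the name wrapped is chosen here just for illustration):

import numpy as np
from scipy.optimize import minimize, rosen

def wrapped(X):
    print(X)         # show every point the optimizer evaluates
    return rosen(X)  # delegate to the imported objective

x0 = np.array([1.1, 1.1, 1.1])
minimize(wrapped, x0)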

8xiog9wr #7

Here is a solution that worked for me:

from scipy import optimize

def f_(x):   # the Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def conjugate_gradient(x0, f):
    all_x_i = [x0[0]]
    all_y_i = [x0[1]]
    all_f_i = [f(x0)]
    def store(X):
        x, y = X
        all_x_i.append(x)
        all_y_i.append(y)
        all_f_i.append(f(X))
    optimize.minimize(f, x0, method="CG", callback=store, options={"gtol": 1e-12})
    return all_x_i, all_y_i, all_f_i

and call it, for example, with:

conjugate_gradient([2, -1], f_)
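
A sketch of how the returned trajectories might then be inspected (the loop below is just for illustration):

all_x_i, all_y_i, all_f_i = conjugate_gradient([2, -1], f_)
for x, y, f_val in zip(all_x_i, all_y_i, all_f_i):
    print(f"x = {x: .6f}, y = {y: .6f}, f(x, y) = {f_val: .6e}")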

Source


omqzjyyz #8

There you go! (Beware: most of the time, global variables are bad practice.)
