Sum of abs(w) constraint in a scipy optimizer

Asked by fslejnso on 2022-11-10

I want to put an upper bound on the sum of abs(w) in a scipy optimization problem. In linear programming this can be done with dummy variables, e.g. y > w, y > -w, sum(y) < K, but I cannot see how to formulate it in the scipy optimization framework.
A code example is below. The code runs, but the gross portfolio value is not fixed. This is a long/short portfolio optimization where the weights w sum to 0, and I want the sum of abs(w) to be 1.0. Is there a way to add that second constraint within scipy's framework?

import numpy as np
import scipy.optimize as sco

def optimize(alphas, cov, maxRisk):
    # portfolio variance: w' * cov * w
    def _calcRisk(w):
        var = np.dot(np.dot(w.T, cov), w)
        return(var)
    # negative expected alpha (minimized, so alpha is maximized)
    def _calcAlpha(w):
        alpha = np.dot(alphas, w)
        return(-alpha)
    constraints = (
            # market neutrality: sum(w) == 0
            {'type': 'eq', 'fun': lambda w:  np.sum(w)},
            # portfolio variance must not exceed maxRisk^2
            {'type': 'ineq', 'fun': lambda w: maxRisk*maxRisk - _calcRisk(w)} )
    n = len(alphas)
    bounds = tuple((-1, 1) for x in range(n))
    initw = n * [0.00001 / n]
    result = sco.minimize(_calcAlpha, initw, method='SLSQP',
                       bounds=bounds, constraints=constraints)
    return(result)

Answer 1 (cdmah0mi)

A simple algebraic trick does the job. An equality constraint implicitly requires the constraint function to evaluate to zero, so you only need to shift the function's output by 1.0, since np.sum(w) - 1.0 == 0.0 is equivalent to np.sum(w) == 1.0. See the documentation of scipy.optimize.minimize. Accordingly, just change the line

{'type': 'eq', 'fun': lambda w:  np.sum(w)},

to

{'type': 'eq', 'fun': lambda w:  np.sum(w) - 1.0}
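
A quick way to verify the change (a minimal sketch; the alphas, covariance matrix and maxRisk value below are made up for illustration, and optimize is the function from the question with the line above replaced):

import numpy as np

np.random.seed(1)
n = 4
alphas = np.random.randn(n)        # hypothetical expected returns
A = np.random.randn(n, n)
cov = A @ A.T / n                  # synthetic positive semi-definite covariance matrix
res = optimize(alphas, cov, maxRisk=1.0)
print(res.success, np.sum(res.x))  # the weights should now sum to (approximately) 1.0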

Answer 2 (fivyi3re)

Thanks to those who replied. The answer is to enlarge the vector of free variables and then slice it to get the variables you need (obvious in hindsight :-). The following works (use at your own risk, of course):

import numpy as np
import scipy.optimize as sco

# make the required lambda function "final" so it does not change when param i (or n) changes

def makeFinalLambda(i, n, op):
    if op == '+':
        return(lambda w:  w[n+i] + w[i])
    else:
        return(lambda w:  w[n+i] - w[i])    

def optimize(alphas, cov, maxRisk):
    n = len(alphas)
    def _calcRisk(x):
        w = x[:n]
        var = np.dot(np.dot(w.T, cov), w)
        return(var)
    def _calcAlpha(x):
        w = x[:n]
        alpha = np.dot(alphas, w)
        return(-alpha)

    constraints = []
    # make the constraints to create abs value variables 
    for i in range(n):
        # note that this doesn't work; all the functions will refer to current i value
        # constraints.append({'type': 'ineq', 'fun': lambda w:  w[n+i] - w[i] })
        # constraints.append({'type': 'ineq', 'fun': lambda w:  w[n+i] + w[i] })
        constraints.append({'type': 'ineq', 'fun': makeFinalLambda(i, n, '-') })
        constraints.append({'type': 'ineq', 'fun': makeFinalLambda(i, n, '+') })
    # add neutrality, gross value, and risk constraints
    constraints = constraints + \
        [{'type': 'eq', 'fun': lambda w:  np.sum(w[:n]) },
         {'type': 'eq', 'fun': lambda w:  np.sum(w[n:]) - 1.0 },
         {'type': 'ineq', 'fun': lambda w: maxRisk*maxRisk - _calcRisk(w)}]

    bounds = tuple((-1, 1) for x in range(n))
    bounds = bounds + tuple((0, 1) for x in range(n))
    # try to choose a nice, feasible starting vector
    initw = n * [0.001 / n]
    initw = initw + [abs(w)+0.001 for w in initw]
    result = sco.minimize(_calcAlpha, initw, method='SLSQP',
                       bounds=bounds, constraints=constraints)
    return(result)
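
For reference, a small smoke test of the function above (a minimal sketch; the alphas, covariance matrix and maxRisk value are made up for illustration):

import numpy as np

np.random.seed(0)
n = 5
alphas = np.random.randn(n)          # hypothetical expected returns
A = np.random.randn(n, n)
cov = A @ A.T / n                    # synthetic positive semi-definite covariance matrix
res = optimize(alphas, cov, maxRisk=1.0)
w = res.x[:n]                        # first n entries are the weights, the rest are the abs-value helpers
print(res.success)
print(np.sum(w), np.sum(np.abs(w)))  # expect roughly 0.0 and, if the risk cap is not binding, 1.0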

This creates two constraints per weight variable, one pair per loop iteration, to define the absolute-value helper variables. It is nicer to express them as vector-valued (element-wise) constraints, like this:

import numpy as np
import scipy.optimize as sco

def optimize(alphas, cov, maxRisk):
    n = len(alphas)
    def _calcRisk(x):
        w = x[:n]
        var = np.dot(np.dot(w.T, cov), w)
        return(var)
    def _calcAlpha(x):
        w = x[:n]
        alpha = np.dot(alphas, w)
        return(-alpha)
    absfunpos = lambda x : [x[n+i] - x[i] for i in range(n)] 
    absfunneg = lambda x : [x[n+i] + x[i] for i in range(n)] 
    constraints = (
            sco.NonlinearConstraint(absfunpos, [0.0]*n, [2.0]*n),
            sco.NonlinearConstraint(absfunneg, [0.0]*n, [2.0]*n),
            {'type': 'eq', 'fun': lambda w:  np.sum(w[:n]) },
            {'type': 'eq', 'fun': lambda w:  np.sum(w[n:]) - 1.0 },
            {'type': 'ineq', 'fun': lambda w: maxRisk*maxRisk - _calcRisk(w) } )
    bounds = tuple((-1, 1) for x in range(n))
    bounds = bounds + tuple((0, 3) for x in range(n))
    initw = n * [0.01 / n]
    initw = initw + [abs(w) for w in initw]
    result = sco.minimize(_calcAlpha, initw, method='SLSQP',
                       bounds=bounds, constraints=constraints)
    return(result)
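
Two notes on the vectorized version: the upper bounds of [2.0]*n on the element-wise constraints are effectively non-binding, since sum(w[n:]) == 1.0 with non-negative helpers already keeps each helper at or below 1, so only the lower bound of 0.0 does real work. Also, strictly speaking these constraints only force each helper to be at least abs(w[i]), which together with the sum-to-one equality caps the sum of abs(w) at 1.0; in practice the objective pushes gross exposure up to that cap whenever the risk constraint is not binding.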
