TensorFlow automatic differentiation implementation of gradient computation

Asked by dddzy1tm on 2023-02-16

I have worked through some papers on the autodiff algorithm in order to implement it myself (for learning purposes). I compared my algorithm's output against TensorFlow on several test cases, and the outputs do not match in most of them. So I worked through a tutorial and re-implemented it using TensorFlow operations, restricted to matrix multiplication only, since that is one of the operations that did not work.

The gradient of matmul and the unbroadcast method:

import tensorflow as tf


def gradient_matmul(node, dx, adj):
    # dx is needed to know which of the two parents should be differentiated
    a = node.parents[0]
    b = node.parents[1]
    # the operation was node.tensor = tf.matmul(a.tensor, b.tensor)
    if a == dx or b == dx:
        # the result depends on which of the parents is being differentiated
        mm = tf.matmul(adj, tf.transpose(b.tensor)) if a == dx else \
                tf.matmul(tf.transpose(a.tensor), adj)
        return mm
    else:
        return None

def unbroadcast(adjoint, node):
    # if the parent was broadcast against a higher-rank tensor, sum the adjoint
    # over the extra leading axes so it matches the parent's shape again
    dim_a = len(adjoint.shape)
    dim_b = len(node.shape)
    if dim_a > dim_b:
        axes = tuple(range(dim_a - dim_b))
        return tf.math.reduce_sum(adjoint, axis=axes)
    return adjoint
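
For example, in the test case further down y has shape (2, 2) but is matmul-broadcast against a (2, 2, 2) batch, so its accumulated adjoint carries an extra leading batch axis that unbroadcast sums away (a minimal usage sketch with made-up shapes):

adjoint = tf.ones([2, 2, 2])   # adjoint that picked up a broadcast batch axis
param = tf.ones([2, 2])        # the original, unbatched parameter tensor
print(unbroadcast(adjoint, param).shape)   # (2, 2): the leading axis was summed away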

And finally the autodiff algorithm for the gradient computation:

from collections import defaultdict


def gradient(y, dx):
    working = [y]
    adjoints = defaultdict(float)
    adjoints[y] = tf.ones(y.tensor.shape)
    while len(working) != 0:
        curr = working.pop(0)
        if curr == dx:
            return adjoints[curr]
        if curr.is_store:
            continue
        adj = adjoints[curr]
        for p in curr.parents:
            # for testing with matrix multiplication as the only operation
            local_grad = gradient_matmul(curr, p, adj)
            adjoints[p] = unbroadcast(tf.add(adjoints[p], local_grad), p.tensor)
            if p not in working:
                working.append(p)
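
The node class itself is not shown here; assuming a hypothetical minimal wrapper with the attributes the code above relies on (tensor, parents, is_store), the algorithm can be exercised like this:

# Hypothetical minimal node class; the actual class used in the question is not shown.
class Node:
    def __init__(self, tensor, parents=None):
        self.tensor = tensor
        self.parents = parents or []
        self.is_store = not self.parents   # leaves (constants) stop the traversal

def node_matmul(a, b):
    # wrap tf.matmul so the graph structure is recorded in the parents list
    return Node(tf.matmul(a.tensor, b.tensor), parents=[a, b])

x_n = Node(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
y_n = Node(tf.constant([[5.0, 6.0], [7.0, 8.0]]))
w_n = node_matmul(x_n, y_n)
print(gradient(w_n, x_n))   # adjoint of x, here simply ones @ y^T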

But it produces the same output as my original implementation. I constructed a matrix multiplication test case:

x = tf.constant([[[1.0, 1.0], [2.0, 3.0]], [[4.0, 5.0], [6.0, 7.0]]])
y = tf.constant([[3.0, -7.0], [-1.0, 5.0]])
z = tf.constant([[[1, 1], [2.0, 2]], [[3, 3], [-1, -1]]])
w = tf.matmul(tf.matmul(x, y), z)

where w should be differentiated with respect to each of the variables. TensorFlow computes the gradients:

[<tf.Tensor: shape=(2, 2, 2), dtype=float32, numpy=
array([[[-22.,  18.],
        [-22.,  18.]],

       [[ 32., -16.],
        [ 32., -16.]]], dtype=float32)>, <tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[66., -8.],
       [80., -8.]], dtype=float32)>, <tf.Tensor: shape=(2, 2, 2), dtype=float32, numpy=
array([[[  5.,   5.],
        [ -1.,  -1.]],

       [[ 18.,  18.],
        [-10., -10.]]], dtype=float32)>]
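
(For reference, a minimal way to reproduce these TensorFlow values is tf.GradientTape; the constants have to be watched explicitly because they are not tf.Variables, and for a non-scalar target the output gradient is implicitly seeded with ones, matching adjoints[y] above:)

with tf.GradientTape() as tape:
    tape.watch([x, y, z])                  # constants are not tracked automatically
    w = tf.matmul(tf.matmul(x, y), z)
print(tape.gradient(w, [x, y, z]))         # gradients of sum(w) w.r.t. x, y, z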

My implementation computes:

[[[-5.  7.]
  [-5.  7.]]

 [[-5.  7.]
  [-5.  7.]]]
[[33. 22.]
 [54. 36.]]
[[[ 9.  9.]
  [14. 14.]]

 [[-5. -5.]
  [-6. -6.]]]

Maybe the problem lies in a difference between numpy's dot and TensorFlow's matmul? But I don't know how to correct the gradient or the unbroadcast method the TensorFlow way... Thanks for taking the time to look at my code! :)


hgqdbh6s #1

I found the error; gradient_matmul should be:

def gradient_matmul(node, dx, adj):
    a = node.parents[0]
    b = node.parents[1]
    if a == dx:
        # dW/dA = adj @ B^T, transposing only the last two axes
        return tf.matmul(adj, b.tensor, transpose_b=True)
    elif b == dx:
        # dW/dB = A^T @ adj, transposing only the last two axes
        return tf.matmul(a.tensor, adj, transpose_a=True)
    else:
        return None
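
The underlying difference: tf.transpose without a perm argument reverses all axes, whereas transpose_a / transpose_b in tf.matmul (like tf.linalg.matrix_transpose) only swap the last two axes, which is what the batched operands in the test case need. A quick shape check illustrating this:

a = tf.zeros([2, 3, 4])
print(tf.transpose(a).shape)                 # (4, 3, 2): every axis reversed
print(tf.linalg.matrix_transpose(a).shape)   # (2, 4, 3): only the last two axes swapped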
