
Example usage of the torch.autograd.grad() function in PyTorch

2020-11-07 10:43 | Category: Pytorch | Research notes

1. Explanation of the function

Given an input x and an output y, torch.autograd.grad() computes the derivative (gradient) of y with respect to x: $\text{result} = \frac{dy}{dx}$.
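For instance, a minimal sketch of this idea for a scalar y (the variable names and values here are just for illustration, not from the original post):

import torch

# scalar example: y = x ** 2, so dy/dx = 2 * x
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
(dydx,) = torch.autograd.grad(outputs=y, inputs=x)  # grad() returns a tuple of gradients
print(dydx)  # tensor(6.)

For reference, the source of torch.autograd.grad (including its docstring) is reproduced below: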

def grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False,
         only_inputs=True, allow_unused=False):
    r"""Computes and returns the sum of gradients of outputs w.r.t. the inputs.

    ``grad_outputs`` should be a sequence of length matching ``output``
    containing the pre-computed gradients w.r.t. each of the outputs. If an
    output doesn't require_grad, then the gradient can be ``None``.

    If ``only_inputs`` is ``True``, the function will only return a list of gradients
    w.r.t the specified inputs. If it's ``False``, then gradient w.r.t. all remaining
    leaves will still be computed, and will be accumulated into their ``.grad``
    attribute.

    Arguments:
        outputs (sequence of Tensor): outputs of the differentiated function.
        inputs (sequence of Tensor): Inputs w.r.t. which the gradient will be
            returned (and not accumulated into ``.grad``).
        grad_outputs (sequence of Tensor): Gradients w.r.t. each output.
            None values can be specified for scalar Tensors or ones that don't require
            grad. If a None value would be acceptable for all grad_tensors, then this
            argument is optional. Default: None.
        retain_graph (bool, optional): If ``False``, the graph used to compute the grad
            will be freed. Note that in nearly all cases setting this option to ``True``
            is not needed and often can be worked around in a much more efficient
            way. Defaults to the value of ``create_graph``.
        create_graph (bool, optional): If ``True``, graph of the derivative will
            be constructed, allowing to compute higher order derivative products.
            Default: ``False``.
        allow_unused (bool, optional): If ``False``, specifying inputs that were not
            used when computing outputs (and therefore their grad is always zero)
            is an error. Defaults to ``False``.
    """
    if not only_inputs:
        warnings.warn("only_inputs argument is deprecated and is ignored now "
                      "(defaults to True). To accumulate gradient for other "
                      "parts of the graph, please use torch.autograd.backward.")

    outputs = (outputs,) if isinstance(outputs, torch.Tensor) else tuple(outputs)
    inputs = (inputs,) if isinstance(inputs, torch.Tensor) else tuple(inputs)
    if grad_outputs is None:
        grad_outputs = [None] * len(outputs)
    elif isinstance(grad_outputs, torch.Tensor):
        grad_outputs = [grad_outputs]
    else:
        grad_outputs = list(grad_outputs)

    grad_outputs = _make_grads(outputs, grad_outputs)
    if retain_graph is None:
        retain_graph = create_graph

    return Variable._execution_engine.run_backward(
        outputs, grad_outputs, retain_graph, create_graph,
        inputs, allow_unused)
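Note that for a non-scalar output, grad_outputs supplies the weighting ("vector") of the vector-Jacobian product; passing an all-ones tensor is equivalent to differentiating the sum of the outputs. A small sketch of this equivalence (my own illustration, not part of the original post):

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2  # non-scalar output, so grad_outputs must be supplied

g1 = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))[0]
g2 = torch.autograd.grad((x ** 2).sum(), x)[0]  # rebuild the graph, since the first call freed it
print(torch.allclose(g1, g2))  # True: both equal 2 * x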

2. Code example (y = x^2)

import torch

# build x with entries x[i][j] = i + j, then turn on gradient tracking
# (filling values in place *after* requires_grad_ would raise an error,
# because in-place operations on a leaf tensor that requires grad are not allowed)
x = torch.empty(3, 4)
for i in range(3):
    for j in range(4):
        x[i][j] = i + j
x.requires_grad_(True)

y = x ** 2
print(x)
print(y)

# grad_outputs must match y's shape because y is not a scalar
weight = torch.ones(y.size())
print(weight)

dydx = torch.autograd.grad(outputs=y,
                           inputs=x,
                           grad_outputs=weight,
                           retain_graph=True,
                           create_graph=True,  # keep building the graph so a second-order gradient is possible
                           only_inputs=True)
# (x**2)' = 2*x
print(dydx[0])  # grad() returns a tuple; the gradient w.r.t. x is stored at index 0

d2ydx2 = torch.autograd.grad(outputs=dydx[0],
                             inputs=x,
                             grad_outputs=weight,
                             retain_graph=True,
                             create_graph=True,
                             only_inputs=True)
print(d2ydx2[0])  # second derivative of x**2 is the constant 2


Output (shown as screenshots in the original post): x with entries i + j, y = x ** 2, the all-ones weight tensor, then dydx[0] = 2 * x and d2ydx2[0] = 2 everywhere.
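For comparison, the same first-order gradient can also be accumulated into x.grad with backward() instead of being returned; a minimal sketch (rebuilding the same x as above with arange, for brevity):

import torch

# x[i][j] = i + j, same values as in the example above
x = (torch.arange(3).unsqueeze(1) + torch.arange(4)).float().requires_grad_(True)
y = x ** 2

y.backward(torch.ones_like(y))  # the gradient is accumulated into x.grad rather than returned
print(x.grad)                   # equals dydx[0] above, i.e. 2 * x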

Reference:

https://blog.csdn.net/qq_36556893/article/details/91982925

Sharing little by little benefits us all. Add oil!


