
    A Detailed Explanation of Using Softmax and LogSoftmax in PyTorch

    Author: 悲恋花丶无心之人    Date: 2021-08-10 18:39

    I. Function Explanation

    1. The most common way to use the Softmax function is simply to specify the dim argument:

    (1) dim=0: apply softmax over the elements of each column, so that every column sums to 1.

    (2) dim=1: apply softmax over the elements of each row, so that every row sums to 1 (see the sketch right after this list).
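
    A minimal sketch of the difference; the input values here are made up purely for illustration:

    import torch
    import torch.nn as nn

    x = torch.tensor([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])

    # dim=0: softmax over each column; every column sums to 1
    col_probs = nn.Softmax(dim=0)(x)
    print(col_probs.sum(dim=0))  # ≈ tensor([1., 1., 1.])

    # dim=1: softmax over each row; every row sums to 1
    row_probs = nn.Softmax(dim=1)(x)
    print(row_probs.sum(dim=1))  # ≈ tensor([1., 1.])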

    class Softmax(Module):
        r"""Applies the Softmax function to an n-dimensional input Tensor
        rescaling them so that the elements of the n-dimensional output Tensor
        lie in the range [0,1] and sum to 1.
        Softmax is defined as:
        .. math::
            \text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
        Shape:
            - Input: :math:`(*)` where `*` means, any number of additional
              dimensions
            - Output: :math:`(*)`, same shape as the input
        Returns:
            a Tensor of the same dimension and shape as the input with
            values in the range [0, 1]
        Arguments:
            dim (int): A dimension along which Softmax will be computed (so every slice
                along dim will sum to 1).
        .. note::
            This module doesn't work directly with NLLLoss,
            which expects the Log to be computed between the Softmax and itself.
            Use `LogSoftmax` instead (it's faster and has better numerical properties).
        Examples::
            >>> m = nn.Softmax(dim=1)
            >>> input = torch.randn(2, 3)
            >>> output = m(input)
        """
        __constants__ = ['dim']
     
        def __init__(self, dim=None):
            super(Softmax, self).__init__()
            self.dim = dim
     
        def __setstate__(self, state):
            self.__dict__.update(state)
            if not hasattr(self, 'dim'):
                self.dim = None
     
        def forward(self, input):
            return F.softmax(input, self.dim, _stacklevel=5)
     
        def extra_repr(self):
            return 'dim={dim}'.format(dim=self.dim)
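
    The note in the docstring matters in practice: Softmax should not be fed directly into NLLLoss, which expects log-probabilities. A minimal sketch of the usual pairing, with made-up logits and targets; CrossEntropyLoss fuses the two steps into one call:

    import torch
    import torch.nn as nn

    logits = torch.randn(4, 6)            # raw scores from the last layer
    targets = torch.tensor([0, 2, 5, 1])  # ground-truth class indices

    # LogSoftmax followed by NLLLoss ...
    log_probs = nn.LogSoftmax(dim=1)(logits)
    nll = nn.NLLLoss()(log_probs, targets)

    # ... is equivalent to CrossEntropyLoss on the raw logits
    ce = nn.CrossEntropyLoss()(logits, targets)
    print(torch.allclose(nll, ce))  # True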

    2. LogSoftmax simply applies log to the result of softmax, i.e. Log(Softmax(x)):

    class LogSoftmax(Module):
        r"""Applies the :math:`\log(\text{Softmax}(x))` function to an n-dimensional
        input Tensor. The LogSoftmax formulation can be simplified as:
        .. math::
            \text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right)
        Shape:
            - Input: :math:`(*)` where `*` means, any number of additional
              dimensions
            - Output: :math:`(*)`, same shape as the input
        Arguments:
            dim (int): A dimension along which LogSoftmax will be computed.
        Returns:
            a Tensor of the same dimension and shape as the input with
            values in the range [-inf, 0)
        Examples::
            >>> m = nn.LogSoftmax()
            >>> input = torch.randn(2, 3)
            >>> output = m(input)
        """
        __constants__ = ['dim']
     
        def __init__(self, dim=None):
            super(LogSoftmax, self).__init__()
            self.dim = dim
     
        def __setstate__(self, state):
            self.__dict__.update(state)
            if not hasattr(self, 'dim'):
                self.dim = None
     
        def forward(self, input):
            return F.log_softmax(input, self.dim, _stacklevel=5)
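
    The docstring's remark that LogSoftmax has better numerical properties than a separate log is easy to demonstrate. With extreme inputs (chosen here deliberately to force underflow), softmax produces exact zeros and the subsequent log returns -inf, while the fused log_softmax stays finite:

    import torch
    import torch.nn.functional as F

    x = torch.tensor([[-1000.0, 0.0, 1000.0]])

    # Two-step version underflows: softmax yields exact 0s, and log(0) = -inf
    print(torch.log(F.softmax(x, dim=1)))  # tensor([[-inf, -inf, 0.]])

    # Fused version is numerically stable
    print(F.log_softmax(x, dim=1))         # tensor([[-2000., -1000., 0.]])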

    II. Code Example

    Input the following code:

    import torch
    import torch.nn as nn
    import numpy as np
     
    batch_size = 4
    class_num = 6
    inputs = torch.randn(batch_size, class_num)
    for i in range(batch_size):
        for j in range(class_num):
            inputs[i][j] = (i + 1) * (j + 1)
     
    print("inputs:", inputs)

    This produces a tensor with batch_size 4 and 6 classes (think of it as the output of the network's final layer):

    tensor([[ 1.,  2.,  3.,  4.,  5.,  6.],
            [ 2.,  4.,  6.,  8., 10., 12.],
            [ 3.,  6.,  9., 12., 15., 18.],
            [ 4.,  8., 12., 16., 20., 24.]])

    Next, apply Softmax to each row of this tensor:

    Softmax = nn.Softmax(dim=1)
    probs = Softmax(inputs)
    print("probs:\n", probs)

    This gives:

    tensor([[4.2698e-03, 1.1606e-02, 3.1550e-02, 8.5761e-02, 2.3312e-01, 6.3369e-01],
            [3.9256e-05, 2.9006e-04, 2.1433e-03, 1.5837e-02, 1.1702e-01, 8.6467e-01],
            [2.9067e-07, 5.8383e-06, 1.1727e-04, 2.3553e-03, 4.7308e-02, 9.5021e-01],
            [2.0234e-09, 1.1047e-07, 6.0317e-06, 3.2932e-04, 1.7980e-02, 9.8168e-01]])

    Similarly, apply LogSoftmax to each row of the tensor:

    LogSoftmax = nn.LogSoftmax(dim=1)
    log_probs = LogSoftmax(inputs)
    print("log_probs:\n", log_probs)

    This gives:

    tensor([[-5.4562e+00, -4.4562e+00, -3.4562e+00, -2.4562e+00, -1.4562e+00, -4.5619e-01],
            [-1.0145e+01, -8.1454e+00, -6.1454e+00, -4.1454e+00, -2.1454e+00, -1.4541e-01],
            [-1.5051e+01, -1.2051e+01, -9.0511e+00, -6.0511e+00, -3.0511e+00, -5.1069e-02],
            [-2.0018e+01, -1.6018e+01, -1.2018e+01, -8.0185e+00, -4.0185e+00, -1.8485e-02]])

    Verify that the elements of each row sum to 1:

    # probs_sum in dim=1
    probs_sum = [0 for i in range(batch_size)]
     
    for i in range(batch_size):
        for j in range(class_num):
            probs_sum[i] += probs[i][j]
        print(i, "row probs sum:", probs_sum[i])

    The sum of each row is indeed 1:

    0 row probs sum: tensor(1.)
    1 row probs sum: tensor(1.0000)
    2 row probs sum: tensor(1.)
    3 row probs sum: tensor(1.)
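
    The double loop above is purely illustrative; the same check is a one-liner with a tensor reduction:

    # Vectorized equivalent of the loop above; every entry is (approximately) 1
    print(probs.sum(dim=1))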

    Verify that LogSoftmax is the Log of the Softmax result:

    # to numpy
    np_probs = probs.data.numpy()
    print("numpy probs:\n", np_probs)
     
    # np.log()
    log_np_probs = np.log(np_probs)
    print("log numpy probs:\n", log_np_probs)

    This gives:

    numpy probs:
    [[4.26977826e-03 1.16064614e-02 3.15496325e-02 8.57607946e-02 2.33122006e-01 6.33691311e-01]
     [3.92559559e-05 2.90064461e-04 2.14330270e-03 1.58369839e-02 1.17020354e-01 8.64669979e-01]
     [2.90672347e-07 5.83831024e-06 1.17265590e-04 2.35534250e-03 4.73083146e-02 9.50212955e-01]
     [2.02340233e-09 1.10474026e-07 6.03167746e-06 3.29318427e-04 1.79801770e-02 9.81684387e-01]]
    log numpy probs:
    [[-5.4561934e+00 -4.4561934e+00 -3.4561934e+00 -2.4561932e+00 -1.4561933e+00 -4.5619333e-01]
     [-1.0145408e+01 -8.1454077e+00 -6.1454072e+00 -4.1454072e+00 -2.1454074e+00 -1.4540738e-01]
     [-1.5051069e+01 -1.2051069e+01 -9.0510693e+00 -6.0510693e+00 -3.0510693e+00 -5.1069155e-02]
     [-2.0018486e+01 -1.6018486e+01 -1.2018485e+01 -8.0184851e+00 -4.0184855e+00 -1.8485421e-02]]

    Verification complete.
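
    The same check can also be done directly in PyTorch, without the numpy detour; for these inputs it prints True:

    # log_probs should match torch.log(probs) up to floating-point error
    print(torch.allclose(log_probs, torch.log(probs)))  # True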

    III. Complete Code

    import torch
    import torch.nn as nn
    import numpy as np
     
    batch_size = 4
    class_num = 6
    inputs = torch.randn(batch_size, class_num)
    for i in range(batch_size):
        for j in range(class_num):
            inputs[i][j] = (i + 1) * (j + 1)
     
    print("inputs:", inputs)
    Softmax = nn.Softmax(dim=1)
    probs = Softmax(inputs)
    print("probs:\n", probs)
     
    LogSoftmax = nn.LogSoftmax(dim=1)
    log_probs = LogSoftmax(inputs)
    print("log_probs:\n", log_probs)
     
    # probs_sum in dim=1
    probs_sum = [0 for i in range(batch_size)]
     
    for i in range(batch_size):
        for j in range(class_num):
            probs_sum[i] += probs[i][j]
        print(i, "row probs sum:", probs_sum[i])
     
    # to numpy
    np_probs = probs.data.numpy()
    print("numpy probs:\n", np_probs)
     
    # np.log()
    log_np_probs = np.log(np_probs)
    print("log numpy probs:\n", log_np_probs)

    Expressing softmax and logsoftmax with PyTorch's functional API:

    import torch
    import numpy as np

    # torch.autograd.Variable is deprecated since PyTorch 0.4; a plain tensor suffices
    input = torch.rand(1, 3)

    print(input)
    print('softmax={}'.format(torch.nn.functional.softmax(input, dim=1)))
    print('logsoftmax={}'.format(np.log(torch.nn.functional.softmax(input, dim=1))))
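
    Note that the functional API also provides log_softmax directly, which is both shorter and numerically safer than taking np.log of the softmax output:

    print('logsoftmax={}'.format(torch.nn.functional.log_softmax(input, dim=1)))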

    The above is based on my personal experience. I hope it serves as a useful reference, and I hope you will keep supporting this blog.
