Pruning Tutorial
State-of-the-art deep learning techniques rely on over-parametrized models that are hard to deploy. In contrast, biological neural networks are known to use efficient sparse connectivity. In order to reduce memory, battery, and hardware consumption without sacrificing accuracy, to deploy lightweight models on device, and to guarantee privacy with private on-device computation, it is important to identify the best techniques for compressing models by reducing the number of parameters in them. On the research front, pruning is used to investigate the differences in learning dynamics between over-parametrized and under-parametrized networks, to study the role of lucky sparse subnetworks and initializations ("lottery tickets") as a destructive neural architecture search technique, and more.
In this tutorial, you will learn how to use torch.nn.utils.prune to sparsify your neural networks, and how to extend it to implement your own custom pruning technique.
Requirements
"torch>=1.4.0a0+8e8a5e0" import torch from torch import nn import torch.nn.utils.prune as prune import torch.nn.functional as F
Create a model
In this tutorial, we use the LeNet architecture from LeCun et al., 1998.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square conv kernel
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 5x5 image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, int(x.nelement() / x.shape[0]))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = LeNet().to(device=device)
Inspect a Module
Let's inspect the (unpruned) conv1 layer in our LeNet model. For now, it contains two parameters, weight and bias, and no buffers.
module = model.conv1
print(list(module.named_parameters()))
Output:
[('weight', Parameter containing:
tensor([[[[-5.7063e-02, -2.4630e-01, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -1.9451e-01]]],
        [[[-2.3636e-01, -1.9035e-01,  6.9974e-02],
          [-7.7690e-02,  4.9759e-02, -6.2006e-02],
          [-1.7095e-01,  7.3741e-02,  3.1901e-01]]],
        [[[-1.6287e-02, -3.1315e-01, -2.6263e-01],
          [-1.1699e-01, -7.4603e-02,  9.0671e-03],
          [-1.0678e-01,  1.8641e-01, -2.0640e-01]]],
        [[[-2.2785e-01,  6.2033e-04, -7.2417e-02],
          [-2.5378e-01, -2.4691e-01, -3.4585e-03],
          [-5.4172e-02,  2.9494e-01, -1.7844e-01]]],
        [[[ 1.6865e-01, -2.9616e-01, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  1.6621e-01],
          [-2.3458e-01,  3.0958e-01,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -1.2893e-01],
          [-2.1332e-01,  3.0710e-01,  1.8194e-01]]]], device='cuda:0',
       requires_grad=True)), ('bias', Parameter containing:
tensor([ 0.2454,  0.0883, -0.2114, -0.0138,  0.0932, -0.1112], device='cuda:0',
       requires_grad=True))]
print(list(module.named_buffers()))
Output:

[]
Pruning a Module
To prune a module (in this example, the conv1 layer of our LeNet architecture), first select a pruning technique among those available in torch.nn.utils.prune (or implement your own by subclassing BasePruningMethod). Then, specify the module and the name of the parameter to prune within that module. Finally, using the keyword arguments required by the selected pruning technique, specify the pruning parameters.
In this example, we will prune at random 30% of the connections in the parameter named weight in the conv1 layer. The module is passed as the first argument to the function; name identifies the parameter within that module using its string identifier; and amount indicates either the percentage of connections to prune (if it is a float between 0 and 1), or the absolute number of connections to prune (if it is a non-negative integer).
prune.random_unstructured(module, name="weight", amount=0.3)
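As a side note, amount also accepts a non-negative integer to prune an absolute number of connections. Here is a minimal sketch on a throwaway layer (a hypothetical scratch layer, used only so that the outputs of the running example stay unchanged):

# Hypothetical scratch layer, for illustration only.
scratch = nn.Conv2d(1, 6, 3)
prune.random_unstructured(scratch, name="weight", amount=2)  # prune exactly 2 connections
print(int(torch.sum(scratch.weight == 0)))  # 2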
Pruning acts by removing weight from the parameters and replacing it with a new parameter called weight_orig (i.e. appending "_orig" to the initial parameter name). weight_orig stores the unpruned version of the tensor. The bias was not pruned, so it will remain intact.
print(list(module.named_parameters()))
Output:

[('bias', Parameter containing:
tensor([ 0.2454,  0.0883, -0.2114, -0.0138,  0.0932, -0.1112], device='cuda:0',
       requires_grad=True)), ('weight_orig', Parameter containing:
tensor([[[[-5.7063e-02, -2.4630e-01, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -1.9451e-01]]],
        [[[-2.3636e-01, -1.9035e-01,  6.9974e-02],
          [-7.7690e-02,  4.9759e-02, -6.2006e-02],
          [-1.7095e-01,  7.3741e-02,  3.1901e-01]]],
        [[[-1.6287e-02, -3.1315e-01, -2.6263e-01],
          [-1.1699e-01, -7.4603e-02,  9.0671e-03],
          [-1.0678e-01,  1.8641e-01, -2.0640e-01]]],
        [[[-2.2785e-01,  6.2033e-04, -7.2417e-02],
          [-2.5378e-01, -2.4691e-01, -3.4585e-03],
          [-5.4172e-02,  2.9494e-01, -1.7844e-01]]],
        [[[ 1.6865e-01, -2.9616e-01, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  1.6621e-01],
          [-2.3458e-01,  3.0958e-01,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -1.2893e-01],
          [-2.1332e-01,  3.0710e-01,  1.8194e-01]]]], device='cuda:0',
       requires_grad=True))]
The pruning mask generated by the pruning technique selected above is saved as a module buffer named weight_mask (i.e. appending "_mask" to the initial parameter name).
print(list(module.named_buffers()))
Output:

[('weight_mask', tensor([[[[1., 0., 1.],
          [1., 1., 1.],
          [1., 1., 0.]]],
        [[[0., 1., 0.],
          [1., 1., 0.],
          [1., 1., 1.]]],
        [[[1., 0., 1.],
          [1., 0., 0.],
          [0., 1., 1.]]],
        [[[0., 1., 1.],
          [1., 1., 1.],
          [1., 0., 1.]]],
        [[[1., 0., 1.],
          [1., 1., 0.],
          [1., 0., 1.]]],
        [[[1., 1., 1.],
          [1., 1., 0.],
          [1., 1., 0.]]]], device='cuda:0'))]
For the forward pass to work without modification, the weight attribute needs to exist. The pruning techniques implemented in torch.nn.utils.prune compute the pruned version of the weight (by combining the mask with the original parameter) and store it in the attribute weight. Note that this is no longer a parameter of the module; it is now simply an attribute.
print(module.weight)
Output:
tensor([[[[-5.7063e-02, -0.0000e+00, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -0.0000e+00]]],
        [[[-0.0000e+00, -1.9035e-01,  0.0000e+00],
          [-7.7690e-02,  4.9759e-02, -0.0000e+00],
          [-1.7095e-01,  7.3741e-02,  3.1901e-01]]],
        [[[-1.6287e-02, -0.0000e+00, -2.6263e-01],
          [-1.1699e-01, -0.0000e+00,  0.0000e+00],
          [-0.0000e+00,  1.8641e-01, -2.0640e-01]]],
        [[[-0.0000e+00,  6.2033e-04, -7.2417e-02],
          [-2.5378e-01, -2.4691e-01, -3.4585e-03],
          [-5.4172e-02,  0.0000e+00, -1.7844e-01]]],
        [[[ 1.6865e-01, -0.0000e+00, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  0.0000e+00],
          [-2.3458e-01,  0.0000e+00,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -0.0000e+00],
          [-2.1332e-01,  3.0710e-01,  0.0000e+00]]]], device='cuda:0',
       grad_fn=<MulBackward0>)
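As a quick sanity check (a minimal sketch, not part of the original walkthrough), we can confirm that weight is simply the elementwise product of the original parameter and the mask:

# The pruned weight equals weight_orig * weight_mask, elementwise.
assert torch.equal(module.weight, module.weight_orig * module.weight_mask)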
Finally, pruning is applied prior to each forward pass using PyTorch's forward_pre_hooks. Specifically, when the module is pruned, as we have done here, it acquires a forward_pre_hook for each parameter associated with it that gets pruned. In this case, since we have so far only pruned the original parameter named weight, only one hook will be present.
print(module._forward_pre_hooks)
Output:
OrderedDict([(0, <torch.nn.utils.prune.RandomUnstructured object at 0x7f9cf8062208>)])
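Because the hook recomputes the masked weight right before the module runs, an ordinary forward pass works unchanged. A small sketch (the 1x28x28 input size is an assumption that fits this LeNet variant):

# Dummy batch; pruning is applied transparently inside the forward pass.
x = torch.randn(1, 1, 28, 28, device=device)
print(model(x).shape)  # torch.Size([1, 10])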
For completeness, we can now prune the bias too, to see how the parameters, buffers, hooks, and attributes of the module change. Just for the sake of trying out another pruning technique, here we prune the 3 smallest entries in the bias by L1 norm, as implemented in the l1_unstructured pruning function.
prune.l1_unstructured(module, name="bias", amount=3)
Now we expect the named parameters to include both weight_orig (from before) and bias_orig. The buffers will include weight_mask and bias_mask. The pruned versions of the two tensors will exist as module attributes, and the module will now have two forward_pre_hooks.
print(list(module.named_parameters()))
Output:
[('weight_orig', Parameter containing:
tensor([[[[-5.7063e-02, -2.4630e-01, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -1.9451e-01]]],
        [[[-2.3636e-01, -1.9035e-01,  6.9974e-02],
          [-7.7690e-02,  4.9759e-02, -6.2006e-02],
          [-1.7095e-01,  7.3741e-02,  3.1901e-01]]],
        [[[-1.6287e-02, -3.1315e-01, -2.6263e-01],
          [-1.1699e-01, -7.4603e-02,  9.0671e-03],
          [-1.0678e-01,  1.8641e-01, -2.0640e-01]]],
        [[[-2.2785e-01,  6.2033e-04, -7.2417e-02],
          [-2.5378e-01, -2.4691e-01, -3.4585e-03],
          [-5.4172e-02,  2.9494e-01, -1.7844e-01]]],
        [[[ 1.6865e-01, -2.9616e-01, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  1.6621e-01],
          [-2.3458e-01,  3.0958e-01,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -1.2893e-01],
          [-2.1332e-01,  3.0710e-01,  1.8194e-01]]]], device='cuda:0',
       requires_grad=True)), ('bias_orig', Parameter containing:
tensor([ 0.2454,  0.0883, -0.2114, -0.0138,  0.0932, -0.1112], device='cuda:0',
       requires_grad=True))]
print(list(module.named_buffers()))
Output:
[('weight_mask', tensor([[[[1., 0., 1.],
          [1., 1., 1.],
          [1., 1., 0.]]],
        [[[0., 1., 0.],
          [1., 1., 0.],
          [1., 1., 1.]]],
        [[[1., 0., 1.],
          [1., 0., 0.],
          [0., 1., 1.]]],
        [[[0., 1., 1.],
          [1., 1., 1.],
          [1., 0., 1.]]],
        [[[1., 0., 1.],
          [1., 1., 0.],
          [1., 0., 1.]]],
        [[[1., 1., 1.],
          [1., 1., 0.],
          [1., 1., 0.]]]], device='cuda:0')), ('bias_mask', tensor([1., 0., 1., 0., 0., 1.], device='cuda:0'))]
print(module.bias)
Output:
tensor([ 0.2454, 0.0000, -0.2114, -0.0000, 0.0000, -0.1112], device='cuda:0', grad_fn=<MulBackward0>)
print(module._forward_pre_hooks)
Output:
OrderedDict([(0, <torch.nn.utils.prune.RandomUnstructured object at 0x7f9cf8062208>), (1, <torch.nn.utils.prune.L1Unstructured object at 0x7f9d5f893f60>)])
Iterative Pruning
The same parameter in a module can be pruned multiple times, with the effect of the various pruning calls being equal to the combination of the various masks applied in series. The combination of a new mask with the old mask is handled by the PruningContainer's compute_mask method.
Say, for example, that we now want to further prune module.weight, this time using structured pruning along the 0th axis of the tensor (the 0th axis corresponds to the output channels of the convolutional layer, and has dimensionality 6 for conv1), based on the channels' L2 norm. This can be achieved using the ln_structured function, with n=2 and dim=0.
prune.ln_structured(module, name="weight", amount=0.5, n=2, dim=0)

# As we can verify, this will zero out all the connections corresponding to
# 50% (3 out of 6) of the channels, while preserving the action of the
# previous mask.
print(module.weight)
Output:
tensor([[[[-5.7063e-02, -0.0000e+00, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -0.0000e+00]]],
        [[[-0.0000e+00, -0.0000e+00,  0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00],
          [-0.0000e+00,  0.0000e+00,  0.0000e+00]]],
        [[[-0.0000e+00, -0.0000e+00, -0.0000e+00],
          [-0.0000e+00, -0.0000e+00,  0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00]]],
        [[[-0.0000e+00,  0.0000e+00, -0.0000e+00],
          [-0.0000e+00, -0.0000e+00, -0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00]]],
        [[[ 1.6865e-01, -0.0000e+00, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  0.0000e+00],
          [-2.3458e-01,  0.0000e+00,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -0.0000e+00],
          [-2.1332e-01,  3.0710e-01,  0.0000e+00]]]], device='cuda:0',
       grad_fn=<MulBackward0>)
The corresponding hook will now be of type torch.nn.utils.prune.PruningContainer, and will store the history of pruning applied to the weight parameter.
for hook in module._forward_pre_hooks.values():
    if hook._tensor_name == "weight":  # select out the correct hook
        break

print(list(hook))  # pruning history in the container
Output:
[<torch.nn.utils.prune.RandomUnstructured object at 0x7f9cf8062208>, <torch.nn.utils.prune.LnStructured object at 0x7f9cf8059390>]
Serializing a pruned model
All relevant tensors, including the mask buffers and the original parameters used to compute the pruned tensors, are stored in the model's state_dict and can therefore easily be serialized and saved, if needed.
print(model.state_dict().keys())
Output:
odict_keys(['conv1.weight_orig', 'conv1.bias_orig', 'conv1.weight_mask', 'conv1.bias_mask', 'conv2.weight', 'conv2.bias', 'fc1.weight', 'fc1.bias', 'fc2.weight', 'fc2.bias', 'fc3.weight', 'fc3.bias'])
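As a brief sketch of what saving and restoring might look like (the file name pruned_lenet.pth is purely illustrative): a freshly constructed LeNet has plain weight/bias keys, so the re-parametrization must be recreated, e.g. with a no-op prune.identity, before the saved masks can be loaded into it.

torch.save(model.state_dict(), "pruned_lenet.pth")

restored = LeNet().to(device=device)
# Recreate the weight_orig / weight_mask structure with a no-op prune;
# load_state_dict then overwrites the all-ones placeholder masks.
prune.identity(restored.conv1, name="weight")
prune.identity(restored.conv1, name="bias")
restored.load_state_dict(torch.load("pruned_lenet.pth"))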
Remove pruning re-parametrization
To make the pruning permanent, remove the re-parametrization in terms of weight_orig and weight_mask, and remove the forward_pre_hook; for this, we can use the remove function from torch.nn.utils.prune. Note that this does not undo the pruning, as if it had never happened. It simply makes it permanent, instead, by reassigning the parameter weight to the model parameters, in its pruned version.
Before removing the re-parametrization:
print(list(module.named_parameters()))
Output:
[('weight_orig', Parameter containing:
tensor([[[[-5.7063e-02, -2.4630e-01, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -1.9451e-01]]],
        [[[-2.3636e-01, -1.9035e-01,  6.9974e-02],
          [-7.7690e-02,  4.9759e-02, -6.2006e-02],
          [-1.7095e-01,  7.3741e-02,  3.1901e-01]]],
        [[[-1.6287e-02, -3.1315e-01, -2.6263e-01],
          [-1.1699e-01, -7.4603e-02,  9.0671e-03],
          [-1.0678e-01,  1.8641e-01, -2.0640e-01]]],
        [[[-2.2785e-01,  6.2033e-04, -7.2417e-02],
          [-2.5378e-01, -2.4691e-01, -3.4585e-03],
          [-5.4172e-02,  2.9494e-01, -1.7844e-01]]],
        [[[ 1.6865e-01, -2.9616e-01, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  1.6621e-01],
          [-2.3458e-01,  3.0958e-01,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -1.2893e-01],
          [-2.1332e-01,  3.0710e-01,  1.8194e-01]]]], device='cuda:0',
       requires_grad=True)), ('bias_orig', Parameter containing:
tensor([ 0.2454,  0.0883, -0.2114, -0.0138,  0.0932, -0.1112], device='cuda:0',
       requires_grad=True))]

print(list(module.named_buffers()))
Output:
[('weight_mask', tensor([[[[1., 0., 1.],
          [1., 1., 1.],
          [1., 1., 0.]]],
        [[[0., 0., 0.],
          [0., 0., 0.],
          [0., 0., 0.]]],
        [[[0., 0., 0.],
          [0., 0., 0.],
          [0., 0., 0.]]],
        [[[0., 0., 0.],
          [0., 0., 0.],
          [0., 0., 0.]]],
        [[[1., 0., 1.],
          [1., 1., 0.],
          [1., 0., 1.]]],
        [[[1., 1., 1.],
          [1., 1., 0.],
          [1., 1., 0.]]]], device='cuda:0')), ('bias_mask', tensor([1., 0., 1., 0., 0., 1.], device='cuda:0'))]
print(module.weight)
Output:
tensor([[[[-5.7063e-02, -0.0000e+00, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -0.0000e+00]]],
        [[[-0.0000e+00, -0.0000e+00,  0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00],
          [-0.0000e+00,  0.0000e+00,  0.0000e+00]]],
        [[[-0.0000e+00, -0.0000e+00, -0.0000e+00],
          [-0.0000e+00, -0.0000e+00,  0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00]]],
        [[[-0.0000e+00,  0.0000e+00, -0.0000e+00],
          [-0.0000e+00, -0.0000e+00, -0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00]]],
        [[[ 1.6865e-01, -0.0000e+00, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  0.0000e+00],
          [-2.3458e-01,  0.0000e+00,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -0.0000e+00],
          [-2.1332e-01,  3.0710e-01,  0.0000e+00]]]], device='cuda:0',
       grad_fn=<MulBackward0>)
After removing the re-parametrization:
prune.remove(module, 'weight')
print(list(module.named_parameters()))
Output:
[('bias_orig', Parameter containing:
tensor([ 0.2454,  0.0883, -0.2114, -0.0138,  0.0932, -0.1112], device='cuda:0',
       requires_grad=True)), ('weight', Parameter containing:
tensor([[[[-5.7063e-02, -0.0000e+00, -3.3144e-01],
          [-2.9158e-01, -2.4055e-01, -3.3132e-01],
          [-2.8848e-01, -3.2727e-01, -0.0000e+00]]],
        [[[-0.0000e+00, -0.0000e+00,  0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00],
          [-0.0000e+00,  0.0000e+00,  0.0000e+00]]],
        [[[-0.0000e+00, -0.0000e+00, -0.0000e+00],
          [-0.0000e+00, -0.0000e+00,  0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00]]],
        [[[-0.0000e+00,  0.0000e+00, -0.0000e+00],
          [-0.0000e+00, -0.0000e+00, -0.0000e+00],
          [-0.0000e+00,  0.0000e+00, -0.0000e+00]]],
        [[[ 1.6865e-01, -0.0000e+00, -2.2503e-01],
          [-8.8503e-02, -2.1696e-01,  0.0000e+00],
          [-2.3458e-01,  0.0000e+00,  2.4339e-01]]],
        [[[ 2.7446e-01, -3.2808e-02,  6.3390e-03],
          [ 1.4047e-04, -2.1429e-01, -0.0000e+00],
          [-2.1332e-01,  3.0710e-01,  0.0000e+00]]]], device='cuda:0',
       requires_grad=True))]
print(list(module.named_buffers()))
Output:
[('bias_mask', tensor([1., 0., 1., 0., 0., 1.], device='cuda:0'))]
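The bias still carries its re-parametrization; if we want to make that pruning permanent as well, it can be removed in exactly the same way:

prune.remove(module, 'bias')
print(list(module.named_buffers()))  # now empty: []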
Pruning multiple parameters in a model
By specifying the desired pruning technique and parameters, we can easily prune multiple tensors in a network, perhaps according to their type, as we will see in this example.
new_model = LeNet()
for name, module in new_model.named_modules():
    # prune 20% of connections in all 2D-conv layers
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name='weight', amount=0.2)
    # prune 40% of connections in all linear layers
    elif isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name='weight', amount=0.4)

print(dict(new_model.named_buffers()).keys())  # to verify that all masks exist
Output:
dict_keys(['conv1.weight_mask', 'conv2.weight_mask', 'fc1.weight_mask', 'fc2.weight_mask', 'fc3.weight_mask'])
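As a follow-up sketch (an extra step beyond the original example), once fine-tuning is complete, the re-parametrization can be stripped from every pruned module in a single pass:

for name, module in new_model.named_modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.remove(module, 'weight')

print(dict(new_model.named_buffers()).keys())  # all masks are gone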
Global pruning
So far, we have only looked at what is usually referred to as "local" pruning, i.e. the practice of pruning tensors in a model one by one, by comparing the statistics (weight magnitude, activation, gradient, etc.) of the entries in each tensor exclusively to the other entries in that tensor. However, a common and perhaps more powerful technique is to prune the model all at once, by removing (for example) the lowest 20% of connections across the whole model, instead of removing the lowest 20% of connections in each layer. This is likely to result in different pruning percentages per layer. Let's see how to do that using global_unstructured from torch.nn.utils.prune.
model = LeNet()

parameters_to_prune = (
    (model.conv1, 'weight'),
    (model.conv2, 'weight'),
    (model.fc1, 'weight'),
    (model.fc2, 'weight'),
    (model.fc3, 'weight'),
)

prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.2,
)
Now we can check the sparsity induced in every pruned parameter, which will not be equal to 20% in each layer. The global sparsity, however, will be (approximately) 20%.
print(
    "Sparsity in conv1.weight: {:.2f}%".format(
        100. * float(torch.sum(model.conv1.weight == 0))
        / float(model.conv1.weight.nelement())
    )
)
print(
    "Sparsity in conv2.weight: {:.2f}%".format(
        100. * float(torch.sum(model.conv2.weight == 0))
        / float(model.conv2.weight.nelement())
    )
)
print(
    "Sparsity in fc1.weight: {:.2f}%".format(
        100. * float(torch.sum(model.fc1.weight == 0))
        / float(model.fc1.weight.nelement())
    )
)
print(
    "Sparsity in fc2.weight: {:.2f}%".format(
        100. * float(torch.sum(model.fc2.weight == 0))
        / float(model.fc2.weight.nelement())
    )
)
print(
    "Sparsity in fc3.weight: {:.2f}%".format(
        100. * float(torch.sum(model.fc3.weight == 0))
        / float(model.fc3.weight.nelement())
    )
)
print(
    "Global sparsity: {:.2f}%".format(
        100. * float(
            torch.sum(model.conv1.weight == 0)
            + torch.sum(model.conv2.weight == 0)
            + torch.sum(model.fc1.weight == 0)
            + torch.sum(model.fc2.weight == 0)
            + torch.sum(model.fc3.weight == 0)
        )
        / float(
            model.conv1.weight.nelement()
            + model.conv2.weight.nelement()
            + model.fc1.weight.nelement()
            + model.fc2.weight.nelement()
            + model.fc3.weight.nelement()
        )
    )
)
Output:
Sparsity in conv1.weight: 5.56%
Sparsity in conv2.weight: 7.75%
Sparsity in fc1.weight: 22.05%
Sparsity in fc2.weight: 12.17%
Sparsity in fc3.weight: 10.12%
Global sparsity: 20.00%
Extending torch.nn.utils.prune with custom pruning functions
To implement your own pruning function, you can extend the nn.utils.prune module by subclassing the BasePruningMethod base class, the same way all other pruning methods do. The base class implements the following methods for you: __call__, apply_mask, apply, prune, and remove. Beyond some special cases, you should not have to reimplement these methods for your new pruning technique. You will, however, have to implement __init__ (the constructor) and compute_mask (the instructions on how to compute the mask for the given tensor according to the logic of your pruning technique). In addition, you will have to specify which type of pruning this technique implements (supported options are global, structured, and unstructured). This is needed to determine how to combine masks in the case in which pruning is applied iteratively. In other words, when pruning a pre-pruned parameter, the current pruning technique is expected to act on the unpruned portion of the parameter. Specifying the PRUNING_TYPE will enable the PruningContainer (which handles the iterative application of pruning masks) to correctly identify the slice of the parameter to prune.
For example, say you want to implement a pruning technique that prunes every other entry in a tensor (or, if the tensor has previously been pruned, in the remaining unpruned portion of the tensor). This will be of PRUNING_TYPE='unstructured' because it acts on individual connections in a layer, and not on entire units/channels ('structured') or across different parameters ('global').
class FooBarPruningMethod(prune.BasePruningMethod):
    """Prune every other entry in a tensor
    """
    PRUNING_TYPE = 'unstructured'

    def compute_mask(self, t, default_mask):
        mask = default_mask.clone()
        mask.view(-1)[::2] = 0
        return mask
Now, to apply this to a parameter in an nn.Module, you should also provide a simple function that instantiates the method and applies it.
def foobar_unstructured(module, name):
    """Prunes tensor corresponding to parameter called `name` in `module`
    by removing every other entry in the tensors.
    Modifies module in place (and also return the modified module)
    by:
    1) adding a named buffer called `name+'_mask'` corresponding to the
    binary mask applied to the parameter `name` by the pruning method.
    The parameter `name` is replaced by its pruned version, while the
    original (unpruned) parameter is stored in a new parameter named
    `name+'_orig'`.

    Args:
        module (nn.Module): module containing the tensor to prune
        name (string): parameter name within `module` on which pruning
                will act.

    Returns:
        module (nn.Module): modified (i.e. pruned) version of the input
            module

    Examples:
        >>> m = nn.Linear(3, 4)
        >>> foobar_unstructured(m, name='bias')
    """
    FooBarPruningMethod.apply(module, name)
    return module
Let's try it out!
model = LeNet()
foobar_unstructured(model.fc3, name='bias')

print(model.fc3.bias_mask)
Output:
tensor([0., 1., 0., 1., 0., 1., 0., 1., 0., 1.])
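As a final sketch (an extra illustration, not in the original tutorial), PRUNING_TYPE='unstructured' is what lets this method compose with earlier pruning: applied on top of a previous prune, it zeroes every other entry among the entries that are still unpruned.

m = nn.Linear(3, 4)
prune.l1_unstructured(m, name='bias', amount=1)  # first prune the single smallest entry
foobar_unstructured(m, name='bias')              # then every other surviving entry
print(m.bias_mask)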