
Extracting the Output of a Specific Intermediate Layer from a Pretrained Model in PyTorch

Published 2023-04-02 09:29 · Category: 《随便一记》



If the model is one you built yourself, you can simply return the feature map of the desired layer from the forward function.

The following introduces methods for obtaining the output of a specified layer from a pretrained model.

If you only want the output just before the final fully connected layer, you can simply replace that layer with an identity mapping:

import torch
import torchvision
from torch import nn

net = torchvision.models.resnet18(pretrained=False)
print("model ", net)
net.fc = nn.Identity()   # note: nn.Sequential([]) raises a TypeError; an identity layer removes the classifier cleanly

Likewise, for the VGG19 network, if you want the output of the first fully connected layer inside its classifier submodule, you can modify just the classifier:

import torch
import torchvision
from torch import nn
from torchvision import models

net = models.vgg19_bn(pretrained=False).cuda()
# keep only the first fully connected layer; its output has 4096 features
net.classifier = nn.Sequential(*list(net.classifier.children())[:-6])

Next, some general-purpose methods:

Method 1:

For simple models, you can iterate directly over the child modules:

import torch
import torchvision

net = torchvision.models.resnet18(pretrained=False)
print("model ", net)

out = []
x = torch.randn(1, 3, 224, 224)
return_layer = "maxpool"

for name, module in net.named_children():
    print(name)
    # print(module)
    x = module(x)
    print(x.shape)
    if name == return_layer:
        out.append(x.data)
        break

print(out[0].shape)

The drawback of this method is that it only yields the outputs of top-level child modules; for a model whose nn.Sequential blocks contain many layers, it cannot reach a specific layer inside them.
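To make this limitation concrete, here is a minimal sketch (using a toy model rather than the ResNet above) contrasting named_children(), which this method relies on, with named_modules(), which walks the whole module tree:

```python
import torch
from torch import nn

# a toy model: the second top-level child is itself an nn.Sequential
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.Sequential(nn.ReLU(), nn.Conv2d(8, 8, 3, padding=1)),
)

# named_children() stops at the top level...
children = [name for name, _ in net.named_children()]
print(children)     # ['0', '1']

# ...while named_modules() also descends into the nested Sequential
modules = [name for name, _ in net.named_modules()]
print(modules)      # ['', '0', '1', '1.0', '1.1']
```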

Method 2:

Use the helper built into torchvision. Reference: Pytorch获取中间层输出的几种方法 - 知乎 (zhihu.com)

This method has the same limitation as Method 1: it cannot obtain the output of a specific layer inside a child module.

from collections import OrderedDict

import torch
import torchvision
from torch import nn


class IntermediateLayerGetter(nn.ModuleDict):
    """
    Module wrapper that returns intermediate layers from a model.

    It has a strong assumption that the modules have been registered
    into the model in the same order as they are used.
    This means that one should **not** reuse the same nn.Module
    twice in the forward if you want this to work.

    Additionally, it is only able to query submodules that are directly
    assigned to the model. So if `model` is passed, `model.feature1` can
    be returned, but not `model.feature1.layer2`.

    Arguments:
        model (nn.Module): model on which we will extract the features
        return_layers (Dict[name, new_name]): a dict containing the names
            of the modules for which the activations will be returned as
            the key of the dict, and the value of the dict is the name
            of the returned activation (which the user can specify).
    """

    def __init__(self, model, return_layers):
        if not set(return_layers).issubset([name for name, _ in model.named_children()]):
            raise ValueError("return_layers are not present in model")

        orig_return_layers = return_layers
        return_layers = {k: v for k, v in return_layers.items()}
        layers = OrderedDict()
        for name, module in model.named_children():
            layers[name] = module
            if name in return_layers:
                del return_layers[name]
            if not return_layers:
                break

        super(IntermediateLayerGetter, self).__init__(layers)
        self.return_layers = orig_return_layers

    def forward(self, x):
        out = OrderedDict()
        for name, module in self.named_children():
            x = module(x)
            if name in self.return_layers:
                out_name = self.return_layers[name]
                out[out_name] = x
        return out


# example: extract layer1 and layer3, giving them the names `feat1` and `feat2`
m = torchvision.models.resnet18(pretrained=True)
new_m = torchvision.models._utils.IntermediateLayerGetter(
    m, {'layer1': 'feat1', 'layer3': 'feat2'})
out = new_m(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])
# [('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]

Addendum:

The create_feature_extractor method creates a new module that returns intermediate nodes of the given model as a dictionary, with user-specified strings as keys and the requested outputs as values.

This method is more general than IntermediateLayerGetter and is not limited to the outputs of top-level child modules, so create_feature_extractor is the recommended approach.

# Feature extraction with resnet
import torch
import torchvision
from torchvision.models.feature_extraction import create_feature_extractor

model = torchvision.models.resnet18()
# extract layer1 and layer3, giving them the names `feat1` and `feat2`
model = create_feature_extractor(model, {'layer1': 'feat1', 'layer3': 'feat2'})
out = model(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])

Extracting a feature layer from the features submodule of VGG16:

# vgg16
import torch
import torchvision
from torchvision.models.feature_extraction import create_feature_extractor

backbone = torchvision.models.vgg16_bn(pretrained=True)
# print(backbone)
# "0" is the key under which the output is stored in the returned dict
backbone = create_feature_extractor(backbone, return_nodes={"features.42": "0"})
out = backbone(torch.rand(1, 3, 224, 224))
print(out["0"].shape)

Method 3:

Use hook functions to capture the output of any layer.

import torch
from torchvision.models import resnet18

resnet = resnet18()
print(resnet)

features_in_hook = []
features_out_hook = []

# hook function: fea_in is a tuple of the layer's inputs, fea_out is its output
def hook(module, fea_in, fea_out):
    features_in_hook.append(fea_in)          # the specified layer's input (a tuple, so no .data here)
    features_out_hook.append(fea_out.data)   # the specified layer's output; .data keeps forward values only
    return None

layer_name = 'avgpool'
for name, module in resnet.named_modules():
    print(name)
    if name == layer_name:
        module.register_forward_hook(hook=hook)

# test
x = torch.randn(1, 3, 224, 224)
resnet(x)
# print(features_in_hook)             # the specified layer's input
print(features_out_hook[0].shape)     # torch.Size([1, 512, 1, 1])
print(features_out_hook[0])

The advantage of Method 3:

By iterating over resnet.named_modules(), you can capture the input and output of any intermediate layer.

To check, compare the outputs of the same layer obtained via Method 2 and Method 3; the comparison returns True, confirming that the two methods produce identical results.

import torchvision

new_m = torchvision.models._utils.IntermediateLayerGetter(resnet, {'avgpool': 'feat1'})
out = new_m(x)
print(out['feat1'].data)
# print([(k, v.shape) for k, v in out.items()])
print(torch.equal(features_out_hook[0], out['feat1'].data))    # True

Addendum: net._modules also exposes the layers inside a child module, but for complex models this quickly becomes unwieldy.

for name, module in resnet._modules['layer1']._modules.items():
    print(name)

Freezing the weights of a specified submodule:

net = torchvision.models.vgg19_bn(pretrained=False)
for param in net.features.parameters():
    param.requires_grad = False

# define optimizer
params = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
                            momentum=0.9, weight_decay=0.0005)

Extracting a submodule from a model and saving its weights:

import torch
from torchvision.models import resnet18

resnet = resnet18()
layer1 = resnet.get_submodule("layer1")
torch.save(layer1.state_dict(), './layer1.pth')
# load the saved weights back into the submodule
layer1.load_state_dict(torch.load("./layer1.pth"))




Permalink: http://zhangshiyu.com/post/57911.html
