
Learning references:

  • CNN visualization: Convolutional Features
  • https://github.com/wmn7/ML_Practice/blob/master/2019_05_27/filter_visualizer.ipynb


Filter activation values

The idea: find an image that maximizes the activation of a particular filter in a particular layer; that image then shows what the filter detects.

Here is a concrete example. The workflow:

  1. Initialize an input image at 56×56;
  2. Use a pretrained VGG16 with its parameters frozen;
  3. To visualize the k-th filter of a conv layer (the code below hooks index 42, the ReLU right after the conv at index 40), set the loss to (-1 × that filter's activation);
  4. Run gradient descent to update the initial image;
  5. Enlarge the resulting image by 1.2× to get a new image, then repeat the steps above.

Step 5 is the key point. Notice that the initial image is small, only 56×56. The original author found that starting directly from a large image tends to produce visualizations dominated by high-frequency patterns, which do not read nearly as well as the results shown here. A compact sketch of one optimize-and-upscale round is given below, followed by the complete script.
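The sketch below is not taken from the original post; the helper name one_round and the hook argument are hypothetical, and it only illustrates the skeleton of steps 3-5 (optimize the image against one filter, then upscale and blur):

import cv2
import torch

def one_round(model, hook, img, filter_idx, steps=25, lr=0.1,
              upscale=1.2, blur_ksize=3):
    # steps 3-4: gradient ascent on the image w.r.t. one filter's activation
    img_var = img.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([img_var], lr=lr, weight_decay=1e-6)
    for _ in range(steps):
        optimizer.zero_grad()
        model(img_var)                               # forward pass fills hook.features
        loss = -hook.features[0, filter_idx].mean()  # negate so minimizing maximizes the activation
        loss.backward()
        optimizer.step()
    # step 5: enlarge by ~1.2x and blur to damp high-frequency patterns
    out = img_var.detach().cpu().numpy()[0].transpose(1, 2, 0)
    new_sz = int(upscale * out.shape[0])
    out = cv2.resize(out, (new_sz, new_sz), interpolation=cv2.INTER_CUBIC)
    out = cv2.blur(out, (blur_ksize, blur_ksize))
    return torch.from_numpy(out.transpose(2, 0, 1)[None]).float()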

import torch
from torch.autograd import Variable
from PIL import Image, ImageOps
import torchvision.transforms as transforms
import torchvision.models as models

import numpy as np
import cv2
from cv2 import resize
from matplotlib import pyplot as plt

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# initialize input image
sz = 56
img = np.uint8(np.random.uniform(150, 180, (3, sz, sz))) / 255  # (3, 56, 56)
img = torch.from_numpy(img[None]).float().to(device)  # (1, 3, 56, 56)

# pretrained model
model_vgg16 = models.vgg16_bn(pretrained=True).features.to(device).eval()
# downloading /home/xxx/.cache/torch/hub/checkpoints/vgg16_bn-6c64b313.pth, 500M+
# print(model_vgg16)
# print(len(list(model_vgg16.children())))  # 44
# print(list(model_vgg16.children()))

# get one layer's filter outputs: use a forward hook to grab the
# output of an intermediate layer
class SaveFeatures():
    def __init__(self, module):
        self.hook = module.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        self.features = output.clone()

    def close(self):
        self.hook.remove()

layer = 42
activations = SaveFeatures(list(model_vgg16.children())[layer])

# backpropagation, setting hyper-parameters
lr = 0.1
opt_steps = 25          # gradient steps per scale
filters = 265           # filter 265 of layer 42; maximize its activation
upscaling_steps = 13    # number of upscaling rounds
blur = 3
upscaling_factor = 1.2  # upscaling ratio per round

# preprocessing (ImageNet statistics)
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).view(-1, 1, 1).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).view(-1, 1, 1).to(device)

# gradient descent
for epoch in range(upscaling_steps):  # scale the image up upscaling_steps times
    img = (img - cnn_normalization_mean) / cnn_normalization_std
    img[img > 1] = 1
    img[img < 0] = 0
    print("Image Shape1:", img.shape)
    img_var = Variable(img, requires_grad=True)  # convert image to Variable that requires grad

    # optimizer
    optimizer = torch.optim.Adam([img_var], lr=lr, weight_decay=1e-6)

    for n in range(opt_steps):
        optimizer.zero_grad()
        model_vgg16(img_var)  # forward
        loss = -activations.features[0, filters].mean()  # maximize the activation
        loss.backward()
        optimizer.step()

    # restore the image
    print("Loss:", loss.cpu().detach().numpy())
    img = img_var * cnn_normalization_std + cnn_normalization_mean
    img[img > 1] = 1
    img[img < 0] = 0
    img = img.data.cpu().numpy()[0].transpose(1, 2, 0)
    sz = int(upscaling_factor * sz)  # calculate new image size
    img = cv2.resize(img, (sz, sz), interpolation=cv2.INTER_CUBIC)  # scale image up
    if blur is not None:
        img = cv2.blur(img, (blur, blur))  # blur image to reduce high-frequency patterns
    print("Image Shape2:", img.shape)
    img = torch.from_numpy(img.transpose(2, 0, 1)[None]).to(device)
    print("Image Shape3:", img.shape)
    print(str(epoch), ", Finished")
    print("=" * 10)

activations.close()  # remove the hook

image = img.cpu().clone()
image = image.squeeze(0)
unloader = transforms.ToPILImage()
image = unloader(image)
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
cv2.imwrite("res1.jpg", image)
torch.cuda.empty_cache()

"""
Image Shape1: torch.Size([1, 3, 56, 56])
Loss: -6.0634975
Image Shape2: (67, 67, 3)
Image Shape3: torch.Size([1, 3, 67, 67])
0 , Finished
==========
Image Shape1: torch.Size([1, 3, 67, 67])
Loss: -7.8898916
Image Shape2: (80, 80, 3)
Image Shape3: torch.Size([1, 3, 80, 80])
1 , Finished
==========
Image Shape1: torch.Size([1, 3, 80, 80])
Loss: -8.730318
Image Shape2: (96, 96, 3)
Image Shape3: torch.Size([1, 3, 96, 96])
2 , Finished
==========
Image Shape1: torch.Size([1, 3, 96, 96])
Loss: -9.697872
Image Shape2: (115, 115, 3)
Image Shape3: torch.Size([1, 3, 115, 115])
3 , Finished
==========
Image Shape1: torch.Size([1, 3, 115, 115])
Loss: -10.190881
Image Shape2: (138, 138, 3)
Image Shape3: torch.Size([1, 3, 138, 138])
4 , Finished
==========
Image Shape1: torch.Size([1, 3, 138, 138])
Loss: -10.315895
Image Shape2: (165, 165, 3)
Image Shape3: torch.Size([1, 3, 165, 165])
5 , Finished
==========
Image Shape1: torch.Size([1, 3, 165, 165])
Loss: -9.73861
Image Shape2: (198, 198, 3)
Image Shape3: torch.Size([1, 3, 198, 198])
6 , Finished
==========
Image Shape1: torch.Size([1, 3, 198, 198])
Loss: -9.503629
Image Shape2: (237, 237, 3)
Image Shape3: torch.Size([1, 3, 237, 237])
7 , Finished
==========
Image Shape1: torch.Size([1, 3, 237, 237])
Loss: -9.488493
Image Shape2: (284, 284, 3)
Image Shape3: torch.Size([1, 3, 284, 284])
8 , Finished
==========
Image Shape1: torch.Size([1, 3, 284, 284])
Loss: -9.100454
Image Shape2: (340, 340, 3)
Image Shape3: torch.Size([1, 3, 340, 340])
9 , Finished
==========
Image Shape1: torch.Size([1, 3, 340, 340])
Loss: -8.699549
Image Shape2: (408, 408, 3)
Image Shape3: torch.Size([1, 3, 408, 408])
10 , Finished
==========
Image Shape1: torch.Size([1, 3, 408, 408])
Loss: -8.90135
Image Shape2: (489, 489, 3)
Image Shape3: torch.Size([1, 3, 489, 489])
11 , Finished
==========
Image Shape1: torch.Size([1, 3, 489, 489])
Loss: -8.838546
Image Shape2: (586, 586, 3)
Image Shape3: torch.Size([1, 3, 586, 586])
12 , Finished
==========

Process finished with exit code 0
"""

The resulting feature visualization:

[Figure: optimized image for layer 42, filter 265]
Now find a picture on the web and check whether this filter's response to it is indeed the strongest.

Test image:

[Figure: test image, ./bird.jpg]

import torch
from torch.autograd import Variable
from PIL import Image, ImageOps
import torchvision.transforms as transforms
import torchvision.models as models

import numpy as np
import cv2
from cv2 import resize
from matplotlib import pyplot as plt

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class SaveFeatures():
    def __init__(self, module):
        self.hook = module.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        self.features = output.clone()

    def close(self):
        self.hook.remove()

size = (224, 224)
picture = Image.open("./bird.jpg").convert("RGB")
picture = ImageOps.fit(picture, size, Image.ANTIALIAS)  # use Image.LANCZOS on Pillow >= 10

loader = transforms.ToTensor()
picture = loader(picture).to(device)
print(picture.shape)

cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).view(-1, 1, 1).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).view(-1, 1, 1).to(device)
picture = (picture - cnn_normalization_mean) / cnn_normalization_std

model_vgg16 = models.vgg16_bn(pretrained=True).features.to(device).eval()
print(list(model_vgg16.children())[40])  # Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
print(list(model_vgg16.children())[41])  # BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
print(list(model_vgg16.children())[42])  # ReLU(inplace=True)

layer = 42
filters = 265
activations = SaveFeatures(list(model_vgg16.children())[layer])

with torch.no_grad():
    picture_var = Variable(picture[None])
    model_vgg16(picture_var)
activations.close()

print(activations.features.shape)  # torch.Size([1, 512, 14, 14])

# plot the mean activation of each of the 512 feature maps
mean_act = [activations.features[0, i].mean().item() for i in range(activations.features.shape[1])]
plt.figure(figsize=(7, 5))
act = plt.plot(mean_act, linewidth=2.)
extraticks = [filters]
ax = act[0].axes
ax.set_xlim(0, 500)
plt.axvline(x=filters, color="gray", linestyle="--")
ax.set_xlabel("feature map")
ax.set_ylabel("mean activation")
ax.set_xticks([0, 200, 400] + extraticks)
plt.show()

"""
torch.Size([3, 224, 224])
Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
ReLU(inplace=True)
torch.Size([1, 512, 14, 14])
"""

[Figure: mean activation of each feature map in layer 42; dashed line marks filter 265]

As the plot shows, feature map 265 responds most strongly to this input.

Summary: when testing other layers and filters, the targeted filter is not always the single highest in the plotted curve, although it is still among the strongest. A likely reason is that the test image found online is not a perfect match for the feature encoded by that particular filter. A small sketch for ranking the filters directly is given below.
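To make that comparison less dependent on eyeballing the plot, the feature maps can be ranked by their mean activation. A minimal sketch, reusing mean_act from the script above (assumed still in scope):

import torch

mean_act_t = torch.tensor(mean_act)     # 512 mean activations of layer 42
topk = torch.topk(mean_act_t, k=5)      # five strongest feature maps for this image
for rank, (val, idx) in enumerate(zip(topk.values, topk.indices), start=1):
    print(f"rank {rank}: filter {idx.item():3d}, mean activation {val.item():.3f}")
# where the targeted filter (265 here) lands in that ranking
print("rank of filter 265:", int((mean_act_t > mean_act_t[265]).sum().item()) + 1)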

