【Interview】 ResNet / Inception / MobileNet / ShuffleNet Series: Network Architecture Diagrams


Table of Contents

- VGG
- ResNet
- PreAct-ResNet
- GoogLeNet
  - Inception V1
  - Inception V2
  - Inception V3
  - Inception V4
- Xception
- ResNeXt
- MobileNet
  - MobileNet V1
  - MobileNet V2
  - MobileNet V3
- ShuffleNet
  - ShuffleNet V1
  - ShuffleNet V2
- DenseNet
- DPN
- SENet

VGG

2014 Very Deep Convolutional Networks for Large-Scale Image Recognition

ResNet

2015 Deep Residual Learning for Image Recognition

Residual Representations / Shortcut Connections
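A minimal PyTorch sketch (framework assumed, not the paper's reference code) of a basic residual block with an identity shortcut connection:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Basic residual block: two 3x3 convs plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # shortcut connection: add the input back

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```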

PreAct-ResNet

2016 Identity Mappings in Deep Residual Networks

To construct an identity mapping f(y) = y, the authors rearrange the activation functions (BN and ReLU) so that they act as "pre-activation" before each convolution. With this change, the signal can propagate directly from any unit to any other unit in both the forward and backward passes.
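A minimal sketch of the pre-activation block described above (PyTorch assumed; channel sizes are illustrative):

```python
import torch
import torch.nn as nn

class PreActBlock(nn.Module):
    """Pre-activation residual block: BN and ReLU come *before* each conv,
    so the shortcut path stays a pure identity mapping."""
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        return out + x  # identity shortcut; no activation after the addition

x = torch.randn(1, 64, 32, 32)
print(PreActBlock(64)(x).shape)
```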

GoogLeNet

Inception V1

2014 Going deeper with convolutions

Uses 1x1 convolutions for dimension reduction, preventing the channel count from exploding inside the Inception module (see the sketch below).
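A minimal PyTorch sketch of one Inception branch where a 1x1 convolution reduces channels before the expensive 3x3 or 5x5 convolution (channel numbers are illustrative):

```python
import torch
import torch.nn as nn

def branch(in_ch, reduce_ch, out_ch, k):
    """One Inception branch: 1x1 conv reduces channels before the k x k conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, reduce_ch, 1), nn.ReLU(inplace=True),          # dimension reduction
        nn.Conv2d(reduce_ch, out_ch, k, padding=k // 2), nn.ReLU(inplace=True),
    )

x = torch.randn(1, 192, 28, 28)
b3 = branch(192, 96, 128, 3)   # 1x1 -> 3x3
b5 = branch(192, 16, 32, 5)    # 1x1 -> 5x5
print(b3(x).shape, b5(x).shape)
```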

Inception V2

2015 v2: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Introduces Batch Normalization; replaces the 5x5 convolution in the Inception v1 module with two stacked 3x3 convolutions.
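A small sketch of the two-stacked-3x3 replacement (PyTorch assumed). Two 3x3 convs cover the same 5x5 receptive field with fewer parameters (2 x 9 = 18 weights per channel pair vs. 25):

```python
import torch
import torch.nn as nn

def double_3x3(in_ch, out_ch):
    """Two stacked 3x3 convs with BatchNorm, replacing a single 5x5 conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

x = torch.randn(1, 64, 28, 28)
print(double_3x3(64, 96)(x).shape)  # same spatial size, 5x5 receptive field
```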

Inception V3

2015 v3: Rethinking the Inception Architecture for Computer Vision

Asymmetric convolutions: factorize a 7x7 convolution into two one-dimensional convolutions (1x7 followed by 7x1), and likewise 3x3 into 1x3 and 3x1. Other changes: improved auxiliary classifiers from v1, a new pooling (grid-size reduction) layer, and label smoothing.
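A minimal sketch of the asymmetric factorization (PyTorch assumed; channel sizes illustrative):

```python
import torch
import torch.nn as nn

def asymmetric_7x7(in_ch, out_ch):
    """Factorize a 7x7 convolution into a 1x7 followed by a 7x1 convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=(1, 7), padding=(0, 3)),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=(7, 1), padding=(3, 0)),
        nn.ReLU(inplace=True),
    )

x = torch.randn(1, 128, 17, 17)
print(asymmetric_7x7(128, 128)(x).shape)  # spatial size preserved, 7x7 receptive field
```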

Inception V4

2016 v4: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

Combines Inception modules with ResNet: an Inception module replaces the bottleneck block on the residual (non-shortcut) branch of a ResNet unit, while the identity shortcut is kept.
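A toy Inception-ResNet-style block in that spirit (PyTorch assumed; the branch layout and channel sizes are hypothetical, not those of the paper):

```python
import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    """Toy block: a small Inception module on the transform branch,
    projected back with a 1x1 conv and added to an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.b1 = nn.Conv2d(channels, channels // 2, 1)
        self.b2 = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, channels // 2, 3, padding=1),
        )
        self.project = nn.Conv2d(channels, channels, 1)  # match shortcut channels
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = torch.cat([self.b1(x), self.b2(x)], dim=1)  # inception-style concat
        return self.relu(x + self.project(out))           # residual connection

x = torch.randn(1, 64, 17, 17)
print(InceptionResBlock(64)(x).shape)
```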

Xception

2017 Xception: Deep Learning with Depthwise Separable Convolutions

Xception works on these two transformations: the one over the spatial dimensions and the one over the channel dimension, decoupling them from each other.

Depth-wise convolution (figure: https://img-blog.csdnimg.cn/20190924094637463.png)

Draws on (rather than directly adopting) depth-wise convolution to improve Inception V3: the channel-wise and spatial parts of the convolution are separated.

The original depth-wise separable convolution applies a per-channel 3×3 convolution first, then a 1×1 (point-wise) convolution.

Xception reverses the order: the 1×1 convolution comes first, followed by the per-channel convolution; a sketch of both orders follows.
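A minimal PyTorch sketch contrasting the two orderings (framework and channel sizes assumed for illustration):

```python
import torch
import torch.nn as nn

def depthwise_separable(ch_in, ch_out):
    """Original order: per-channel 3x3 first, then 1x1 point-wise."""
    return nn.Sequential(
        nn.Conv2d(ch_in, ch_in, 3, padding=1, groups=ch_in, bias=False),   # depth-wise
        nn.Conv2d(ch_in, ch_out, 1, bias=False),                           # point-wise
    )

def xception_separable(ch_in, ch_out):
    """Xception order: 1x1 point-wise first, then per-channel 3x3."""
    return nn.Sequential(
        nn.Conv2d(ch_in, ch_out, 1, bias=False),                            # point-wise
        nn.Conv2d(ch_out, ch_out, 3, padding=1, groups=ch_out, bias=False), # depth-wise
    )

x = torch.randn(1, 32, 56, 56)
print(depthwise_separable(32, 64)(x).shape, xception_separable(32, 64)(x).shape)
```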

ResNeXt

2017 Aggregated Residual Transformations for Deep Neural Networks

MobileNet

MobileNet V1

2017 MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

Depthwise Separable Convolution
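A minimal sketch of the MobileNet V1 building block (PyTorch assumed; channel sizes illustrative): a depth-wise 3x3 convolution followed by a point-wise 1x1 convolution, each with BatchNorm and ReLU.

```python
import torch
import torch.nn as nn

def dw_separable(ch_in, ch_out, stride=1):
    """Depth-wise separable convolution: depth-wise 3x3 + point-wise 1x1."""
    return nn.Sequential(
        nn.Conv2d(ch_in, ch_in, 3, stride=stride, padding=1, groups=ch_in, bias=False),
        nn.BatchNorm2d(ch_in), nn.ReLU(inplace=True),
        nn.Conv2d(ch_in, ch_out, 1, bias=False),
        nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
    )

x = torch.randn(1, 32, 112, 112)
print(dw_separable(32, 64)(x).shape)
```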

MobileNet V2

2018 Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation

Inverted Residuals / Linear Bottlenecks
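A minimal sketch of an inverted residual with a linear bottleneck (PyTorch assumed; expansion ratio and channels illustrative): 1x1 expand, depth-wise 3x3, 1x1 project with no activation after the projection, plus a shortcut when shapes match.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Inverted residual: narrow -> wide -> narrow, with a linear (ReLU-free) projection."""
    def __init__(self, ch, expand_ratio=6):
        super().__init__()
        hidden = ch * expand_ratio
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False),
            nn.BatchNorm2d(ch),                  # linear bottleneck: no ReLU here
        )

    def forward(self, x):
        return x + self.block(x)                 # inverted residual shortcut

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32)(x).shape)
```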

MobileNet V3

2019 ICCV Searching for MobileNetV3

Optimizes the activation function (hard-swish, which can also be used in other network architectures) and introduces a lightweight attention module based on the squeeze-and-excitation structure.
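A minimal sketch of both ideas (PyTorch assumed; reduction ratio and channels illustrative): the h-swish activation and a squeeze-and-excitation block with a hard-sigmoid gate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HSwish(nn.Module):
    """h-swish(x) = x * ReLU6(x + 3) / 6 -- a cheap approximation of swish."""
    def forward(self, x):
        return x * F.relu6(x + 3.0) / 6.0

class SEBlock(nn.Module):
    """Lightweight squeeze-and-excitation: global pool -> two FC layers -> channel scaling."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Hardsigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))                       # squeeze: global average pooling
        w = self.fc(w).view(x.size(0), -1, 1, 1)     # excitation weights per channel
        return x * w                                 # re-weight the feature map

x = torch.randn(1, 64, 28, 28)
print(HSwish()(x).shape, SEBlock(64)(x).shape)
```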

ShuffleNet

ShuffleNet V1

2017 ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

Borrows the ResNet unit structure. Channel shuffle fixes the boundary effect that appears when multiple group convolutions are stacked (channels never mix across groups). Pointwise group convolution and depthwise separable convolution are used mainly to reduce computation.
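A minimal sketch of the channel shuffle operation (PyTorch assumed): reshape into groups, transpose, and flatten back so the next group convolution sees channels from every group.

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels across groups after a group convolution."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # swap the group and per-group dims
    return x.view(n, c, h, w)                  # flatten back to (N, C, H, W)

x = torch.randn(1, 12, 8, 8)
print(channel_shuffle(x, groups=3).shape)      # torch.Size([1, 12, 8, 8])
```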

ShuffleNet V2

2018 ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

Abandons the 1x1 group convolution. Channel split: the feature map is split into two groups A and B; group A serves as the shortcut, while group B passes through a bottleneck whose input and output channel counts are equal; finally A and B are concatenated and a channel shuffle is applied after the concat (see the sketch below).
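A minimal sketch of the stride-1 ShuffleNet V2 unit described above (PyTorch assumed; channel sizes illustrative):

```python
import torch
import torch.nn as nn

class ShuffleV2Unit(nn.Module):
    """Channel split -> branch B bottleneck (equal in/out channels) -> concat with A -> shuffle."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch_b = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False), nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                          # channel split into A and B
        out = torch.cat([a, self.branch_b(b)], dim=1)     # A is the shortcut branch
        n, c, h, w = out.shape                            # channel shuffle with 2 groups
        return out.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

x = torch.randn(1, 64, 28, 28)
print(ShuffleV2Unit(64)(x).shape)
```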

DenseNet

2017 Densely Connected Convolutional Networks

DPN

2017 Dual Path Networks

High Order RNN (HORNN) structure

SENet

2017 Squeeze-and-Excitation Networks
