2014 Very Deep Convolutional Networks for Large-Scale Image Recognition
2015 Deep Residual Learning for Image Recognition
Residual Representations / Shortcut Connections
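A minimal sketch of a residual unit with a shortcut connection, assuming a PyTorch implementation; the `BasicBlock` name and channel sizes are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Post-activation residual block: out = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                      # shortcut connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # add, then apply the final ReLU

print(BasicBlock(64)(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```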
2016 Identity Mappings in Deep Residual Networks
To make the skip path a true identity mapping f(y) = y, the authors reorder the activation functions (BN and ReLU) into a pre-activation arrangement, so that the signal can propagate directly from one unit to any other unit in both the forward and backward passes.
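A sketch of the pre-activation ordering (BN, ReLU, then conv) described above, again assuming PyTorch; names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class PreActBlock(nn.Module):
    """Pre-activation residual block: out = x + F(ReLU(BN(x))); the skip path stays a pure identity."""
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))   # BN and ReLU moved before the conv
        out = self.conv2(torch.relu(self.bn2(out)))
        return x + out                              # no activation on the sum: f(y) = y

print(PreActBlock(64)(torch.randn(1, 64, 32, 32)).shape)
```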
2014 Going deeper with convolutions
Uses 1x1 convolutions to reduce the channel dimension and keep the computation from exploding.
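A sketch of how a 1x1 convolution acts as a dimension-reduction step before an expensive convolution; the channel counts are made up for illustration and PyTorch is assumed.

```python
import torch
import torch.nn as nn

in_ch, mid_ch, out_ch = 256, 64, 256  # illustrative channel counts

# Direct 5x5 convolution: 256 * 256 * 5 * 5 = 1,638,400 weights.
direct = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2, bias=False)

# 1x1 reduction first: 256*64 + 64*256*5*5 = 425,984 weights, roughly 4x fewer.
reduced = nn.Sequential(
    nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),              # channel reduction
    nn.Conv2d(mid_ch, out_ch, kernel_size=5, padding=2, bias=False),  # expensive conv on fewer channels
)

x = torch.randn(1, in_ch, 28, 28)
print(direct(x).shape, reduced(x).shape)             # both (1, 256, 28, 28)
print(sum(p.numel() for p in direct.parameters()),
      sum(p.numel() for p in reduced.parameters()))  # 1638400 vs 425984
```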
2015 v2: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Introduces Batch Normalization; replaces the 5x5 convolution in the Inception v1 module with two stacked 3x3 convolutions.
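A sketch of that factorization: two 3x3 convolutions cover the same 5x5 receptive field with fewer weights, and each can be followed by BN and ReLU (channel count illustrative, PyTorch assumed).

```python
import torch
import torch.nn as nn

ch = 192  # illustrative

five = nn.Conv2d(ch, ch, 5, padding=2, bias=False)   # 192*192*25 = 921,600 weights
two_threes = nn.Sequential(                          # 2 * 192*192*9 = 663,552 weights
    nn.Conv2d(ch, ch, 3, padding=1, bias=False),
    nn.BatchNorm2d(ch),
    nn.ReLU(inplace=True),
    nn.Conv2d(ch, ch, 3, padding=1, bias=False),
    nn.BatchNorm2d(ch),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, ch, 28, 28)
print(five(x).shape, two_threes(x).shape)  # same spatial size, same 5x5 receptive field
```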
2015 v3: Rethinking the Inception Architecture for Computer Vision
Asymmetric Convolutions: factorizes a 7x7 convolution into two one-dimensional convolutions (1x7, 7x1), and likewise 3x3 into (1x3, 3x1); improves the auxiliary classifiers from v1; a new pooling layer; label smoothing.
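A sketch of an asymmetric (1xn / nx1) factorization and of label smoothing, assuming PyTorch; the channel count and smoothing value of 0.1 are illustrative choices.

```python
import torch
import torch.nn as nn

ch = 128  # illustrative

# 7x7 factorized into 1x7 followed by 7x1: 2*7*ch*ch weights instead of 49*ch*ch.
asym_7x7 = nn.Sequential(
    nn.Conv2d(ch, ch, kernel_size=(1, 7), padding=(0, 3), bias=False),
    nn.Conv2d(ch, ch, kernel_size=(7, 1), padding=(3, 0), bias=False),
)
print(asym_7x7(torch.randn(1, ch, 17, 17)).shape)

# Label smoothing: soften the one-hot targets (built into CrossEntropyLoss since PyTorch 1.10).
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
logits, target = torch.randn(8, 1000), torch.randint(0, 1000, (8,))
print(criterion(logits, target))
```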
2016 v4: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Combines Inception modules with ResNet: an Inception module replaces the bottleneck in the residual branch of the ResNet shortcut unit.
2017 Xception: Deep Learning with Depthwise Separable Convolutions
Xception works on these two transformations of a convolution: the spatial dimensions and the channel dimension.
depth-wise convolution <img src="https://img-blog.csdnimg.cn/20190924094637463.png">
Borrows from (rather than directly adopting) depth-wise convolution to improve Inception V3: the convolution over channels and the convolution over space are performed separately.
The original depth-wise separable convolution applies the per-channel 3x3 convolution first and the 1x1 convolution second.
Xception reverses this order: the 1x1 convolution first, then the per-channel convolution.
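A sketch of the two orderings, assuming PyTorch; both use a grouped convolution for the per-channel (depthwise) step, and the channel counts are illustrative.

```python
import torch
import torch.nn as nn

in_ch, out_ch = 64, 128  # illustrative

# Original depthwise separable order: per-channel 3x3 first, then 1x1.
depthwise_first = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),   # depthwise
    nn.Conv2d(in_ch, out_ch, 1, bias=False),                           # pointwise
)

# Xception order: 1x1 first, then per-channel 3x3.
pointwise_first = nn.Sequential(
    nn.Conv2d(in_ch, out_ch, 1, bias=False),                            # pointwise
    nn.Conv2d(out_ch, out_ch, 3, padding=1, groups=out_ch, bias=False), # depthwise
)

x = torch.randn(1, in_ch, 32, 32)
print(depthwise_first(x).shape, pointwise_first(x).shape)
```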
2017 Aggregated Residual Transformations for Deep Neural Networks
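A sketch of a ResNeXt-style bottleneck in which the aggregated transformations are implemented as a single grouped 3x3 convolution (cardinality = number of groups); the widths and cardinality here are illustrative, and PyTorch is assumed.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck with cardinality: 1x1 reduce -> grouped 3x3 -> 1x1 expand, plus identity shortcut."""
    def __init__(self, channels=256, width=128, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False), nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

print(ResNeXtBlock()(torch.randn(1, 256, 14, 14)).shape)
```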
2017 MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Depthwise Separable Convolution
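A sketch of the MobileNet-style depthwise separable block (depthwise 3x3 + BN + ReLU, then pointwise 1x1 + BN + ReLU), with illustrative channel counts and PyTorch assumed.

```python
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    """Depthwise 3x3 followed by pointwise 1x1, each with BN and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

block = depthwise_separable(32, 64)
print(block(torch.randn(1, 32, 112, 112)).shape)
```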
2018 Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
Inverted residuals; linear bottlenecks
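A sketch of an inverted residual block: expand with 1x1, depthwise 3x3, then project back through a linear (no activation) 1x1 bottleneck; the residual is added only when stride is 1 and channels match. Expansion factor and channels are illustrative, PyTorch assumed.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),                       # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),                          # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),                      # linear bottleneck
            nn.BatchNorm2d(out_ch),                                        # no activation here
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

print(InvertedResidual(32, 32)(torch.randn(1, 32, 56, 56)).shape)
```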
2019 ICCV Searching for MobileNetV3
Optimizes the activation function (also usable in other network architectures); introduces a lightweight attention module based on the squeeze-and-excitation structure.
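A sketch of the two ideas: the h-swish activation (built from ReLU6, cheap on mobile) and a squeeze-and-excitation channel-attention block gated by a hard sigmoid; the reduction ratio is illustrative and PyTorch is assumed.

```python
import torch
import torch.nn as nn

class HSwish(nn.Module):
    """h-swish(x) = x * ReLU6(x + 3) / 6 -- a cheap approximation of swish."""
    def forward(self, x):
        return x * nn.functional.relu6(x + 3.0) / 6.0

class SEModule(nn.Module):
    """Squeeze-and-excitation: global pool -> FC reduce -> FC expand -> per-channel gate."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Hardsigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels

x = torch.randn(2, 64, 28, 28)
print(HSwish()(x).shape, SEModule(64)(x).shape)
```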
2017 ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Borrows the ResNet unit; channel shuffle resolves the boundary effect that appears when multiple group convolutions are stacked (each output group only ever sees a fixed subset of input channels); pointwise group convolution and depthwise separable convolution mainly reduce the computational cost.
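A sketch of the channel shuffle operation: reshape the channel dimension into (groups, channels_per_group), transpose, and flatten back, so the next group convolution sees channels from every group. PyTorch assumed.

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels across groups so information flows between group convolutions."""
    b, c, h, w = x.shape
    assert c % groups == 0
    x = x.view(b, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # swap group and per-group axes
    return x.view(b, c, h, w)                  # flatten back

x = torch.arange(8).float().view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).view(-1))   # tensor([0., 4., 1., 5., 2., 6., 3., 7.])
```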
2018 ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
Drops the 1x1 group convolution. Channel Split: the feature map is split into two groups A and B; group A acts as the shortcut, group B goes through a bottleneck with equal input and output channels; finally A and B are concatenated, and a Channel Shuffle is applied after the concat.
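A sketch of the stride-1 ShuffleNet V2 unit just described: channel split, an equal-width branch of 1x1 -> depthwise 3x3 -> 1x1, concat with the untouched half, then channel shuffle. PyTorch assumed; the channel count is illustrative.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2).contiguous()
             .view(b, c, h, w))

class ShuffleV2Unit(nn.Module):
    """Stride-1 unit: split channels, transform one half, concat, shuffle."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False), nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                 # Channel Split: A is the shortcut, B is transformed
        out = torch.cat([a, self.branch(b)], 1)  # concat A and transformed B
        return channel_shuffle(out, groups=2)    # Channel Shuffle after the concat

print(ShuffleV2Unit(128)(torch.randn(1, 128, 28, 28)).shape)
```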
2017 Densely Connected Convolutional Networks
2017 Dual Path Networks
High Order RNN (HORNN) structure
2017 Squeeze-and-Excitation Networks