[ComputerVision] ResNet 3mins Overview

Johnny Chang
4 min read · May 7, 2023

ResNet is a must-read foundational paper in Computer Vision. It solves the degradation problem of deep networks, and the depth of CNN-based networks made major breakthroughs after it.

  1. Paper: https://arxiv.org/abs/1512.03385
  2. PyTorch code: resnet.py in the pytorch/vision repository on GitHub (https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)

Degradation in Deep Networks

  1. Before ResNet was proposed, it was clearly observed that a plain 56-layer network performs worse than a 20-layer one, on training error as well as test error, so the problem is not simply overfitting.
  2. ResNet therefore introduces residual learning to solve this degradation problem of deep networks.

Residual Learning: the Basic Block

  1. In earlier work, a stack of layers is trained to directly fit an underlying target mapping H(x).
  2. A residual learning building block changes the target from fitting H(x) directly to fitting F(x) + x = H(x), so the stacked layers only have to learn the residual F(x) = H(x) - x (see the minimal sketch after this list).
  3. If the network is already optimal, layers added on top can push the residual mapping toward zero, leaving only the identity mapping, so the extra depth does not cause degradation.
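
To make this concrete, here is a minimal sketch (not the paper's code; f here stands for any small stack of layers) of what a residual connection computes:

import torch

def residual_block(f, x: torch.Tensor) -> torch.Tensor:
    # f is the residual function F; the block outputs F(x) + x,
    # so f only has to learn the residual H(x) - x.
    return f(x) + x

If the identity mapping is already optimal, training only needs to push f's output toward zero, which is easier than fitting an identity through a stack of nonlinear layers.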

Network Architecture

  1. The paper illustrates this with the 34-layer architecture. The dashed shortcuts in the 34-layer residual network appear where the block's input and output dimensions differ; a 1x1 convolution rescales the input so the dimensions match and the shortcut can be added element-wise (added, not concatenated). A sketch of this projection shortcut follows.
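
A minimal sketch of that projection shortcut, assuming the 1x1 convolution + batch normalization pattern that torchvision uses for its downsample module (the function name here is illustrative):

import torch.nn as nn

def projection_shortcut(inplanes: int, planes: int, stride: int = 2) -> nn.Sequential:
    # Dashed-shortcut projection: a strided 1x1 convolution changes both the
    # spatial resolution and the channel count so the shortcut can be added.
    return nn.Sequential(
        nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(planes),
    )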

PyTorch

import torch.nn as nn
from torch import Tensor
from typing import Callable, Optional


def conv3x3(in_planes: int, out_planes: int, stride: int = 1) -> nn.Conv2d:
    # 3x3 convolution with padding, as defined in torchvision's resnet.py
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)


class BasicBlock(nn.Module):
    expansion: int = 1

    def __init__(
        self,
        inplanes: int,
        planes: int,
        stride: int = 1,
        downsample: Optional[nn.Module] = None,
        groups: int = 1,
        base_width: int = 64,
        dilation: int = 1,
        norm_layer: Optional[Callable[..., nn.Module]] = None,
    ) -> None:
        super().__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        if groups != 1 or base_width != 64:
            raise ValueError("BasicBlock only supports groups=1 and base_width=64")
        if dilation > 1:
            raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
        # Both self.conv1 and self.downsample layers downsample the input when stride != 1
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = norm_layer(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = norm_layer(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x: Tensor) -> Tensor:
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        # Project the identity when the shortcut has to change dimensions
        if self.downsample is not None:
            identity = self.downsample(x)

        # Add (not concatenate) the identity shortcut to the residual output
        out += identity
        out = self.relu(out)

        return out
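
A quick usage sketch (the shapes are illustrative) showing a dashed-shortcut block from the 34-layer network, where the feature map is halved and the channel count doubled:

import torch
import torch.nn as nn

# BasicBlock comes from the snippet above; 64 -> 128 channels with stride 2,
# matching a dashed shortcut in the 34-layer diagram.
downsample = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128),
)
block = BasicBlock(inplanes=64, planes=128, stride=2, downsample=downsample)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)  # torch.Size([1, 128, 28, 28])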
