Convolutional Neural Network - Week 1 Programming Assignment (Building a CNN Step by Step)

Convolutional Neural Networks: Step by Step

Implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.

Notation:

  • Superscript \([l]\) denotes an object of the \(l^{th}\) layer.

    • Example: \(a^{[4]}\) is the \(4^{th}\) layer activation. \(W^{[5]}\) and \(b^{[5]}\) are the \(5^{th}\) layer parameters.
  • Superscript \((i)\) denotes an object from the \(i^{th}\) example.

    • Example: \(x^{(i)}\) is the \(i^{th}\) training example input.
  • Subscript \(i\) denotes the \(i^{th}\) entry of a vector.

    • Example: \(a^{[l]}_i\) denotes the \(i^{th}\) entry of the activations in layer \(l\), assuming this is a fully connected (FC) layer.
  • \(n_H\), \(n_W\) and \(n_C\) denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer \(l\), you can also write \(n_H^{[l]}\), \(n_W^{[l]}\), \(n_C^{[l]}\).

  • \(n_{H_{prev}}\), \(n_{W_{prev}}\) and \(n_{C_{prev}}\) denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer \(l\), this could also be denoted \(n_H^{[l-1]}\), \(n_W^{[l-1]}\), \(n_C^{[l-1]}\).

1. Packages

import numpy as np
import h5py
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

2. Outline of the Assignment

  • Convolution functions, including:

    • Zero Padding

    • Convolve window

    • Convolution forward

    • Convolution backward (optional)

  • Pooling functions, including:

    • Pooling forward

    • Create mask

    • Distribute value

    • Pooling backward (optional)


Note: Every forward propagation step has a corresponding backward propagation step, so you need to store the parameters computed in each forward step in a cache, to be used later during backpropagation.
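As a minimal illustration of this forward/backward cache pattern (a sketch using a hypothetical fully connected step, not this assignment's conv code):

import numpy as np

def forward_step(A_prev, W, b):
    Z = A_prev @ W + b              # this layer's linear output
    cache = (A_prev, W, b)          # store inputs/parameters for backprop
    return Z, cache

def backward_step(dZ, cache):
    A_prev, W, b = cache            # retrieve what the forward pass stored
    dA_prev = dZ @ W.T
    dW = A_prev.T @ dZ
    db = np.sum(dZ, axis=0, keepdims=True)
    return dA_prev, dW, db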

3. Convolutional Neural Networks

A convolutional layer transforms an input volume into an output volume of a different size, as shown:

[Figure: a convolutional layer transforming an input volume into an output volume]

3.1 Zero-Padding

Zero-padding adds zeros around the border of an image:


Figure 1: Zero-Padding: Image (3 channels, RGB) with a padding of 2.

Two benefits of zero-padding:

  • It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is especially important when building deeper networks ("same" convolutions preserve the height and width exactly).

  • It helps us keep important information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of the image.

Exercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Note: if you want to pad the array "a" of shape \((5,5,5,5,5)\) with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do:

a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
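For a quick sanity check of that call (a toy sketch; constant_values=0 is just an illustrative choice):

import numpy as np

a = np.random.randn(5, 5, 5, 5, 5)
# Pad dimension 2 by 1 on each side and dimension 4 by 3 on each side.
a_padded = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values=0)
print(a_padded.shape)   # (5, 7, 5, 11, 5)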

Implementation:

# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, 
    as illustrated in Figure 1.
    
    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions
    
    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """
    
    ### START CODE HERE ### (≈ 1 line)
    # X_pad: (m, n_H + 2*pad, n_W + 2*pad, n_C)
    X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant', constant_values=0)
    ### END CODE HERE ###
    
    return X_pad

Test:

np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])

fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])

Output:
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] = [[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]]
[Figure: plots of x (left) and x_pad (right) from the cell above]

3.2 Single step of convolution

In this part, you will implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:

  • Takes an input volume

  • Applies a filter at every position of the input

  • Outputs another volume (usually of a different size)


Figure 2: Convolution operation with a 2x2 filter and a stride of 1 (stride = amount you move the window each time you slide)

In a computer vision application, each value in the matrix on the left corresponds to a single pixel value. We convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. You will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.

Later you will apply this function to multiple positions of the input to implement the full convolution operation.

Exercise: Implement conv_single_step().

# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation 
    of the previous layer.
    
    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
    
    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice and W. Do not add the bias yet.
    s = np.multiply(a_slice_prev, W)
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z

Test:

np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)

Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)

Output:
Z = -6.999089450680221

3.3 Convolutional Neural Networks - Forward pass

In the forward pass, you take many filters and convolve each of them on the input. Each convolution gives you a 2D matrix output; you then stack these outputs to form a 3D volume:

[Figure: the 2D outputs of multiple filters stacked into a 3D volume]

Exercise: Implement the function below to convolve the filters W on an input activation A_prev.

  • A_prev is the input (the activations output by the previous layer, for a batch of m inputs). The F filters/weights are denoted by W and the bias vector by b.

  • Each filter has its own (single) bias. You also have access to a hyperparameters dictionary containing the stride and the padding.

Hint:

  1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do: a_slice_prev = a_prev[0:2,0:2,:]

    • You will define a_slice_prev using the start/end indexes.
  2. To define a_slice you first need to define its corners: vert_start, vert_end, horiz_start and horiz_end. The figure below shows how each corner is defined using h, w, f and s (a worked slicing sketch follows the figure):


Figure 3: Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) (This figure shows only a single channel)
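For example, a small sketch of how the corners select a slice (illustrative values only; f = 2 and stride = 1):

import numpy as np

a_prev = np.random.randn(5, 5, 3)   # hypothetical input of shape (5, 5, 3)
f, stride = 2, 1
h, w = 1, 2                         # output position being computed

vert_start = h * stride             # 1
vert_end = vert_start + f           # 3
horiz_start = w * stride            # 2
horiz_end = horiz_start + f         # 4

a_slice_prev = a_prev[vert_start:vert_end, horiz_start:horiz_end, :]
print(a_slice_prev.shape)           # (2, 2, 3)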

Reminder:
The formulas relating the output shape of the convolution to the input shape are:

\[n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor + 1 \]

\[n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor + 1 \]

\[n_C = \text{number of filters used in the convolution} \]
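As a quick sanity check of these formulas, here is a small helper (not part of the graded code):

import math

def conv_output_dims(n_H_prev, n_W_prev, f, pad, stride, n_filters):
    # Apply the floor formulas above; one output channel per filter.
    n_H = math.floor((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = math.floor((n_W_prev - f + 2 * pad) / stride) + 1
    return n_H, n_W, n_filters

# Matches the conv_forward test below: 4x4 input, f=2, pad=2, stride=2, 8 filters.
print(conv_output_dims(4, 4, f=2, pad=2, stride=2, n_filters=8))   # (4, 4, 8)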

Implementation using for-loops:

# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function
    
    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"
        
    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """
    
    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    
    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape    # n_C: number of filters
    
    # Retrieve information from "hparameters" (≈2 lines)
    stride = hparameters['stride']
    pad = hparameters['pad']
    
    # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
    
    # Initialize the output volume Z with zeros. (≈1 line)
    Z = np.zeros((m, n_H, n_W, n_C))   # n_C: number of filters
    
    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)
    
    for i in range(m):                               # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]                   # Select ith training example's padded activation
        for h in range(n_H):                           # loop over vertical axis of the output volume
            for w in range(n_W):                       # loop over horizontal axis of the output volume
                for c in range(n_C):                   # loop over channels (= #filters) of the output volume
                    
                    # Find the corners of the current "slice" (≈4 lines)              
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    
                    # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[...,c], b[...,c])  # all the weights W and bias b of the c-th filter
                                        
    ### END CODE HERE ###
    
    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))
    
    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)
    
    return Z, cache

Test:

np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
               "stride": 2}

Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z‘s mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])

Output:
Z's mean = 0.048995203528855794
Z[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]

Finally, a CONV layer should also contain an activation, in which case we would add the following line of code:

# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])

You don't need to do it here.
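If you did want to add it, a minimal sketch might look like this (assuming a ReLU activation; the vectorized np.maximum applies it to the whole output volume at once, rather than per neuron):

def conv_forward_with_relu(A_prev, W, b, hparameters):
    # Convolve as in conv_forward, then apply ReLU element-wise.
    Z, conv_cache = conv_forward(A_prev, W, b, hparameters)
    A = np.maximum(0, Z)
    cache = (conv_cache, Z)    # keep Z for the activation's backward pass
    return A, cache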
