CheeseZH: Stanford University: Machine Learning Ex4: Training Neural Network (Backpropagation Algorithm)

1. Feedforward and cost function

J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ -y_k^{(i)} \log\left( (h_\theta(x^{(i)}))_k \right) - (1 - y_k^{(i)}) \log\left( 1 - (h_\theta(x^{(i)}))_k \right) \right]

2. Regularized cost function

J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ -y_k^{(i)} \log\left( (h_\theta(x^{(i)}))_k \right) - (1 - y_k^{(i)}) \log\left( 1 - (h_\theta(x^{(i)}))_k \right) \right] + \frac{\lambda}{2m} \left[ \sum_{j=1}^{25} \sum_{k=1}^{400} \left( \Theta_{j,k}^{(1)} \right)^2 + \sum_{j=1}^{10} \sum_{k=1}^{25} \left( \Theta_{j,k}^{(2)} \right)^2 \right]
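
As a compact illustration of both formulas above, the cost can be computed in fully vectorized Octave. This is essentially the cost part of nnCostFunction.m given later; it assumes X, y, Theta1, Theta2, m, num_labels and lambda are already in scope, with the ex4 dimensions of 5000 examples, 400 inputs, 25 hidden units and 10 labels:

Y  = eye(num_labels)(y, :);                % one-hot labels, [5000 10] (Octave indexing)
a1 = [ones(m, 1) X];                       % add bias column, [5000 401]
a2 = [ones(m, 1) sigmoid(a1 * Theta1')];   % hidden activations, [5000 26]
h  = sigmoid(a2 * Theta2');                % h_theta(x), [5000 10]
J  = (1/m) * sum(sum(-Y .* log(h) - (1 - Y) .* log(1 - h)));   % unregularized cost
J  = J + (lambda/(2*m)) * (sum(sum(Theta1(:, 2:end).^2)) ...
                         + sum(sum(Theta2(:, 2:end).^2)));     % regularization term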

3. Sigmoid gradient

The gradient for the sigmoid function can be computed as:

g'(z) = \frac{d}{dz} g(z) = g(z) \left( 1 - g(z) \right)

where:

\mathrm{sigmoid}(z) = g(z) = \frac{1}{1 + e^{-z}}
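
The exercise also asks for sigmoidGradient.m (nnCostFunction.m below calls it); a minimal implementation of the formula above, working element-wise on vectors and matrices, could look like this:

function g = sigmoidGradient(z)
%SIGMOIDGRADIENT returns the gradient of the sigmoid function evaluated at z
%   g'(z) = g(z).*(1 - g(z)), applied element-wise, so z can be a scalar,
%   vector or matrix.
g = sigmoid(z) .* (1 - sigmoid(z));
end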

4. Random initialization

randInitializeWeights.m

function W = randInitializeWeights(L_in, L_out)
%RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
%incoming connections and L_out outgoing connections
%   W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights
%   of a layer with L_in incoming connections and L_out outgoing
%   connections.
%
%   Note that W should be set to a matrix of size(L_out, 1 + L_in) as
%   the first column of W handles the "bias" terms
%

% You need to return the following variables correctly
W = zeros(L_out, 1 + L_in);

% ====================== YOUR CODE HERE ======================
% Instructions: Initialize W randomly so that we break the symmetry while
%               training the neural network.
%
% Note: The first column of W corresponds to the parameters for the bias unit
%

% Pick each weight uniformly from [-epsilon_init, epsilon_init]
epsilon_init = 0.12;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;

% =========================================================================

end
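
In ex4.m the two weight matrices are then initialized and unrolled roughly like this (input_layer_size = 400, hidden_layer_size = 25, num_labels = 10 in the exercise):

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size); % [25 401]
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);       % [10 26]
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];                 % unrolled into one vector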

5. Backpropagation (using a for-loop over t = 1:m, with steps 1-4 below placed inside the loop), where the t-th iteration performs the calculation on the t-th training example (x(t), y(t)). Step 5 then divides the accumulated gradients by m to obtain the gradients for the neural network cost function.

(1) Set the input layer's values (a(1)) to the t-th training example x(t). Perform a feedforward pass, computing the activations (z(2), a(2), z(3), a(3)) for layers 2 and 3.

(2) For each output unit k in layer 3 (the output layer), set:

\delta_k^{(3)} = a_k^{(3)} - y_k

where y_k = 1 if the t-th training example belongs to class k, and y_k = 0 otherwise.

(3) For the hidden layer l = 2, set:

\delta^{(2)} = \left( \Theta^{(2)} \right)^T \delta^{(3)} \;.*\; g'(z^{(2)})

(4) Accumulate the gradient from this example using the following formula. Note that you should skip or remove the bias error term δ_0^(2).

\Delta^{(l)} = \Delta^{(l)} + \delta^{(l+1)} \left( a^{(l)} \right)^T

(5) Obtain the (unregularized) gradient for the neural network cost function by dividing the accumulated gradients by m:

\frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)}

nnCostFunction.m

function [J grad] = nnCostFunction(nn_params, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, ...
                                   X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
%   [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
%   X, y, lambda) computes the cost and gradient of the neural network. The
%   parameters for the neural network are "unrolled" into the vector
%   nn_params and need to be converted back into the weight matrices.
%
%   The returned parameter grad should be a "unrolled" vector of the
%   partial derivatives of the neural network.
%

% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

% Setup some useful variables
m = size(X, 1);

% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
%               following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
%         variable J. After implementing Part 1, you can verify that your
%         cost function computation is correct by verifying the cost
%         computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
%         Theta1_grad and Theta2_grad. You should return the partial derivatives of
%         the cost function with respect to Theta1 and Theta2 in Theta1_grad and
%         Theta2_grad, respectively. After implementing Part 2, you can check
%         that your implementation is correct by running checkNNGradients
%
%         Note: The vector y passed into the function is a vector of labels
%               containing values from 1..K. You need to map this vector into a
%               binary vector of 1's and 0's to be used with the neural network
%               cost function.
%
%         Hint: We recommend implementing backpropagation using a for-loop
%               over the training examples if you are implementing it for the
%               first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
%         Hint: You can implement this around the code for
%               backpropagation. That is, you can compute the gradients for
%               the regularization separately and then add them to Theta1_grad
%               and Theta2_grad from Part 2.
%

% Part 1: feedforward and cost
% Theta1 has size 25 x 401
% Theta2 has size 10 x 26
% y has size 5000 x 1
K = num_labels;
Y = eye(K)(y,:);                  % one-hot labels, [5000 10] (Octave indexing)
a1 = [ones(m,1), X];              % [5000 401]
a2 = sigmoid(a1*Theta1');         % [5000 25]
a2 = [ones(m,1), a2];             % [5000 26]
h = sigmoid(a2*Theta2');          % [5000 10]
costPositive = -Y.*log(h);
costNegative = (1-Y).*log(1-h);
cost = costPositive - costNegative;
J = (1/m)*sum(cost(:));
% Regularized cost (skip the bias column of each Theta)
Theta1Filtered = Theta1(:,2:end); % [25 400]
Theta2Filtered = Theta2(:,2:end); % [10 25]
reg = (lambda/(2*m))*(sumsq(Theta1Filtered(:)) + sumsq(Theta2Filtered(:)));
J = J + reg;

% Part 2: backpropagation
Delta1 = 0;
Delta2 = 0;
for t = 1:m,
    % step 1: feedforward for the t-th example
    a1 = [1 X(t,:)];              % [1 401]
    z2 = a1*Theta1';              % [1 25]
    a2 = [1 sigmoid(z2)];         % [1 26]
    z3 = a2*Theta2';              % [1 10]
    a3 = sigmoid(z3);             % [1 10]
    % step 2: output-layer error
    yt = Y(t,:);                  % [1 10]
    d3 = a3 - yt;                 % [1 10]
    % step 3: hidden-layer error
    %    [1 10] x [10 25]  .*  [1 25]
    d2 = (d3*Theta2Filtered).*sigmoidGradient(z2); % [1 25]
    % step 4: accumulate gradients
    Delta1 = Delta1 + (d2'*a1);   % [25 401]
    Delta2 = Delta2 + (d3'*a2);   % [10 26]
end;
% step 5: divide the accumulated gradients by m
Theta1_grad = (1/m)*Delta1;
Theta2_grad = (1/m)*Delta2;

% Part 3: regularize the gradients (again skipping the bias column)
Theta1_grad(:,2:end) = Theta1_grad(:,2:end) + ((lambda/m)*Theta1Filtered);
Theta2_grad(:,2:end) = Theta2_grad(:,2:end) + ((lambda/m)*Theta2Filtered);

% -------------------------------------------------------------

% =========================================================================

% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];

end
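
For reference, ex4.m then trains the network by minimizing this cost function with the provided fmincg, roughly as follows (50 iterations and lambda = 1 are the values suggested by the exercise):

options = optimset('MaxIter', 50);
lambda = 1;
costFunction = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                                   num_labels, X, y, lambda);
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
% Reshape the learned parameters back into Theta1 and Theta2
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));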

6. Gradient checking

Let

\theta^{(i+)} = \theta + [\,0,\ 0,\ \ldots,\ \epsilon,\ \ldots,\ 0\,]^T \quad (\epsilon \text{ in the } i\text{-th position})

and

\theta^{(i-)} = \theta - [\,0,\ 0,\ \ldots,\ \epsilon,\ \ldots,\ 0\,]^T

Then you can numerically verify the gradient f_i(θ) computed by backpropagation by checking, for each i, that:

f_i(\theta) \approx \frac{J(\theta^{(i+)}) - J(\theta^{(i-)})}{2\epsilon}

computeNumericalGradient.m

function numgrad = computeNumericalGradient(J, theta)
%COMPUTENUMERICALGRADIENT Computes the gradient using "finite differences"
%and gives us a numerical estimate of the gradient.
%   numgrad = COMPUTENUMERICALGRADIENT(J, theta) computes the numerical
%   gradient of the function J around theta. Calling y = J(theta) should
%   return the function value at theta.

% Notes: The following code implements numerical gradient checking, and
%        returns the numerical gradient. It sets numgrad(i) to (a numerical
%        approximation of) the partial derivative of J with respect to the
%        i-th input argument, evaluated at theta. (i.e., numgrad(i) should
%        be (approximately) the partial derivative of J with respect
%        to theta(i).)
%

numgrad = zeros(size(theta));
perturb = zeros(size(theta));
e = 1e-4;
for p = 1:numel(theta)
    % Set perturbation vector
    perturb(p) = e;
    loss1 = J(theta - perturb);
    loss2 = J(theta + perturb);
    % Compute Numerical Gradient
    numgrad(p) = (loss2 - loss1) / (2*e);
    perturb(p) = 0;
end

end
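
checkNNGradients compares this numerical estimate against the analytic gradient from backpropagation; the comparison boils down to something like the following (costFunc here stands for a handle wrapping nnCostFunction with fixed data, as in checkNNGradients.m):

costFunc = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                               num_labels, X, y, lambda);
[cost, grad] = costFunc(nn_params);                      % analytic gradient from backprop
numgrad = computeNumericalGradient(costFunc, nn_params); % numerical estimate
% Relative difference; the exercise says this should be less than 1e-9
diff = norm(numgrad - grad) / norm(numgrad + grad);
fprintf('Relative difference: %g\n', diff);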

7. Regularized Neural Networks

for j=0:

\frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)}

for j>=1:

\frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)} + \frac{\lambda}{m} \Theta_{ij}^{(l)}
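
In code, j = 0 corresponds to the bias column (column 1 in Octave's 1-based indexing), which is left unregularized; this is exactly the Part 3 step of nnCostFunction.m above:

% leave column 1 (bias, j = 0) untouched; regularize columns 2:end (j >= 1)
Theta1_grad(:, 2:end) = Theta1_grad(:, 2:end) + (lambda/m) * Theta1(:, 2:end);
Theta2_grad(:, 2:end) = Theta2_grad(:, 2:end) + (lambda/m) * Theta2(:, 2:end);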

Someone else's code:

https://github.com/jcgillespie/Coursera-Machine-Learning/tree/master/ex4
