ufldl Learning Notes and Programming Assignment: Softmax Regression (vectorization speedup)


ufldl has released a new tutorial, and it feels better than the previous one: it starts from the basics, is systematic and clear, and comes with programming exercises.

In a high-quality deep learning group, I heard some senior members say that you don't need to dig deep into the other machine learning algorithms first; you can jump straight into DL.

So I recently started working on it. The tutorial plus MATLAB programming is just perfect.

The new tutorial is at: http://ufldl.stanford.edu/tutorial/

This post is an improved version of my earlier post, "ufldl Learning Notes and Programming Assignment: Softmax Regression (softmax regression)".

Haha, I got the vectorized version written, and it is seriously fast.

It only needs about 2 minutes to finish 200 iterations.

The for-loop version from last night took me an hour and a half.

In fact, to implement the vectorized version, you should write out the various matrices and their dimensions on paper first.
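Concretely, here is the matrix form the code below implements (my own notation, not necessarily the tutorial's; X is n-by-m with one example per column, P is num_classes-by-m, and D is the m-by-num_classes indicator matrix of the labels):

J(\theta) = -\sum_{i=1}^{m} \log p\left(y^{(i)} \mid x^{(i)}; \theta\right),
\qquad
P_{ki} = p\left(y = k \mid x^{(i)}\right) = \frac{e^{\theta_k^\top x^{(i)}}}{\sum_{j} e^{\theta_j^\top x^{(i)}}},
\qquad
\nabla_\theta J = X \left(P^\top - D\right)

The last class's column is dropped from both P^\top and D before the multiplication, because theta(:,num_classes) is fixed at 0.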



1. Thanks to tornadomeet. Although his experiments target the old tutorial, I learned a few MATLAB functions from his post: http://www.cnblogs.com/tornadomeet/archive/2013/03/23/2977621.html

For example, sparse and full.
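A minimal sketch of how these two combine below: sparse(1:m, y, 1) builds an m-by-num_classes indicator matrix with a 1 at (i, y(i)), and full converts it to a dense matrix (the values of y and m here are illustrative, not from the exercise data):

% Building the label indicator ("ground truth") matrix with sparse/full
y = [2 1 3 2];                 % labels for m = 4 examples, 3 classes
m = numel(y);
d = full(sparse(1:m, y, 1));   % d(i,k) == 1 iff y(i) == k
% d =
%     0 1 0
%     1 0 0
%     0 0 1
%     0 1 0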

2. From the old tutorial http://deeplearning.stanford.edu/wiki/index.php/Exercise:Softmax_Regression I learned:

% M is the matrix as described in the text
M = bsxfun(@rdivide, M, sum(M))
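This divides every column of M by that column's sum, i.e., it normalizes each column to sum to 1 (exactly what softmax needs). A tiny check with made-up values:

M = [1 3; 1 1];
M = bsxfun(@rdivide, M, sum(M));
% M is now [0.5 0.75; 0.5 0.25]: each column sums to 1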

3. From the new tutorial I learned:

I=sub2ind(size(A), 1:size(A,1), y);
values = A(I);
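sub2ind converts (row, column) subscript pairs into linear indices, so this picks exactly one element per row of A, namely A(i, y(i)); in the softmax code below I use the transposed form, picking one element per column. A small example with illustrative values:

A = [10 20; 30 40; 50 60];
y = [2 1 2];
I = sub2ind(size(A), 1:size(A,1), y);  % linear indices of (1,2), (2,1), (3,2)
values = A(I);                         % [20 30 60]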

Below is the code for softmax_regression_vec.m:

function [f,g] = softmax_regression_vec(theta, X,y)
%
% Arguments:
% theta - A vector containing the parameter values to optimize.
% In minFunc, theta is reshaped to a long vector. So we need to
% resize it to an n-by-(num_classes-1) matrix.
% Recall that we assume theta(:,num_classes) = 0.
%
% X - The examples stored in a matrix.
% X(i,j) is the i'th coordinate of the j'th example.
% y - The label for each example. y(j) is the j'th example's label.
%
m = size(X,2);
n = size(X,1);

% theta comes in as theta(:), a single-column vector;
% reshape it back into an n-by-(num_classes-1) matrix.
theta = reshape(theta, n, []);
num_classes = size(theta,2) + 1;

% initialize objective value and gradient
f = 0;
g = zeros(size(theta));

h = theta'*X;                    % h(k,i): score of the k-th class on the i-th example
a = exp(h);
a = [a; ones(1,size(a,2))];      % append a row of ones for the last class (its theta is 0)
p = bsxfun(@rdivide, a, sum(a)); % normalize each column: p(k,i) = P(y=k | x_i)
c = log(p);                      % natural log (log2 would not match the gradient below)
i = sub2ind(size(c), y, 1:size(c,2)); % linear indices of log p(y_i | x_i)
values = c(i);
f = -sum(values);

d = full(sparse(1:m, y, 1, m, num_classes)); % indicator matrix: d(i,k) = 1 iff y(i) == k
d = d(:, 1:(size(d,2)-1));       % drop the last class column (theta fixed at 0)
p = p(1:(size(p,1)-1), :);       % drop the last row of p to match
g = X*(p' - d);                  % gradient ('.-' is Octave-only; use '-' in MATLAB)

g = g(:);                        % make gradient a vector for minFunc
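For completeness, a hedged sketch of how this function gets driven; the variable names train.X, train.y, n, and num_classes follow the ufldl ex1 starter code and are assumptions on my part, not something shown in this post:

% Hypothetical driver sketch; assumes minFunc and the ex1 data loading
% from the ufldl starter code are on the path.
theta = rand(n, num_classes - 1) * 0.001;  % small random initialization
options = struct('MaxIter', 200);
theta(:) = minFunc(@softmax_regression_vec, theta(:), options, train.X, train.y);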

Author: linger

Link: http://blog.csdn.net/lingerlanlan/article/details/38425929
