Gradient Descent Algorithm
repeat until convergence {
\(\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0,\theta_1)\) (for \(j = 1\) and \(j = 0\))
}
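As a concrete sketch (not from the original notes), the generic loop can be written in Python; the callable `grad`, the learning rate `alpha`, and the stopping tolerance are illustrative assumptions:

```python
import numpy as np

def gradient_descent(grad, theta_init, alpha=0.01, tol=1e-6, max_iters=10000):
    """Repeat theta := theta - alpha * grad(theta) until convergence.

    grad       -- callable returning the gradient of J at theta (assumed)
    theta_init -- initial parameter vector
    """
    theta = np.asarray(theta_init, dtype=float)
    for _ in range(max_iters):
        step = alpha * grad(theta)
        theta = theta - step                 # update all theta_j simultaneously
        if np.linalg.norm(step) < tol:       # "until convergence"
            break
    return theta
```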
Linear Regression Model
\[h_\theta(x) = \theta_0 + \theta_1x \]
\[J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2 \]
Substituting the linear regression hypothesis and its cost function into the gradient descent algorithm:
\[\frac{\partial}{\partial \theta_j}J(\theta_0,\theta_1)=\frac{\partial}{\partial \theta_j}\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2=\frac{\partial}{\partial \theta_j}\frac{1}{2m}\sum_{i=1}^m(\theta_0 + \theta_1x^{(i)}-y^{(i)})^2 \]
Differentiating the square brings down a factor of 2, which cancels the \(\frac12\) in \(\frac{1}{2m}\). When \(j=0\): \(\frac{\partial}{\partial \theta_0}J(\theta_0,\theta_1) = \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})\)
When \(j=1\): \(\frac{\partial}{\partial \theta_1}J(\theta_0,\theta_1) = \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x^{(i)}\)
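These two partial derivatives can be sanity-checked numerically with finite differences; the data and parameter values below are made up for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])       # toy inputs (assumed)
y = np.array([2.0, 2.5, 3.5])       # toy targets (assumed)
m = len(x)
theta0, theta1 = 0.5, 0.8

def J(t0, t1):
    """Squared-error cost J(theta0, theta1)."""
    return np.sum((t0 + t1 * x - y) ** 2) / (2 * m)

# Analytic derivatives from the derivation above
err = theta0 + theta1 * x - y
d0 = np.sum(err) / m
d1 = np.sum(err * x) / m

# Central finite differences should agree to ~1e-9
eps = 1e-6
d0_num = (J(theta0 + eps, theta1) - J(theta0 - eps, theta1)) / (2 * eps)
d1_num = (J(theta0, theta1 + eps) - J(theta0, theta1 - eps)) / (2 * eps)
print(abs(d0 - d0_num), abs(d1 - d1_num))
```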
So, for the univariate linear regression model, gradient descent simplifies to:
repeat until convergence {
\(\theta_0 := \theta_0 - \alpha \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})\)
\(\theta_1 := \theta_1 - \alpha \frac1m\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x^{(i)}\)
} update \(\theta_0\) and \(\theta_1\) simultaneously
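Putting the two simplified update rules together, a minimal NumPy implementation might look like the following; the toy data, learning rate, and iteration count are assumptions for illustration. Note that both gradients are computed from the current \(\theta_0, \theta_1\) before either parameter is overwritten, which is exactly the simultaneous update required above.

```python
import numpy as np

def linear_regression_gd(x, y, alpha=0.1, num_iters=1000):
    """Batch gradient descent for h_theta(x) = theta0 + theta1 * x."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iters):
        err = theta0 + theta1 * x - y        # h_theta(x^(i)) - y^(i) for all i
        grad0 = np.sum(err) / m              # dJ/dtheta0
        grad1 = np.sum(err * x) / m          # dJ/dtheta1
        # Simultaneous update: both gradients used the old theta values
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0])           # toy data (assumed)
y = np.array([2.1, 3.9, 6.2, 7.8])
print(linear_regression_gd(x, y))            # ~ (0.15, 1.94) for this data
```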
For the univariate linear regression model, the squared-error cost function is a "convex function": bowl-shaped, so it has a single global optimum and no local optima for gradient descent to get stuck in.
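One way to see the convexity (a standard check, not in the original notes): the Hessian of \(J\) is a sum of outer products and hence positive semidefinite:
\[\nabla^2 J(\theta_0,\theta_1) = \frac{1}{m}\sum_{i=1}^m \begin{pmatrix}1 \\ x^{(i)}\end{pmatrix}\begin{pmatrix}1 & x^{(i)}\end{pmatrix} = \frac{1}{m}\begin{pmatrix}m & \sum_i x^{(i)} \\ \sum_i x^{(i)} & \sum_i (x^{(i)})^2\end{pmatrix}\]
For any vector \(v\), \(v^\top \nabla^2 J\, v = \frac1m\sum_{i=1}^m (v_0 + v_1 x^{(i)})^2 \ge 0\), so every stationary point is a global minimum.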
"Batch" Gradient Descent
"Batch": 梯度下降的每一步使用所有训练样本
\[\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)}) \]
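In code, this "batch" sum is the residual vector summed over the entire training set on every iteration; a minimal NumPy sketch (the arrays and parameter values are illustrative assumptions):

```python
import numpy as np

# Toy training set (illustrative only)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
theta0, theta1 = 0.0, 1.0

# One "batch" step looks at every example: the residual vector
# has one entry per training sample, and the sum runs over all m of them.
err = theta0 + theta1 * x - y   # h_theta(x^(i)) - y^(i), i = 1..m
batch_sum = np.sum(err)         # the sum displayed above
```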