Machine learning week 1 notes

New Vocabularies:
algorithm: 算法
regression: 回归
cluster: 聚类; clustering algorithm: 聚类算法
contour plot/figure: 轮廓图
assignment: 赋值

Definition:
“A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”
That is, as experience E accumulates, the program's performance on task T, as graded by measure P, improves significantly.
In general, any machine learning problem can be assigned to one of two broad classifications:
Supervised learning and Unsupervised learning.

Content:
Supervised Learning:

  1. Regression: For continuous valued output
  2. Classification: For discrete-valued output (usually to indicate different classes, e.g., 0 for benign and 1 for malignant).

Unsupervised Learning:

  1. Unsupervised learning allows us to approach problems with little or no idea of what our results should look like.
  2. We can derive this structure by clustering the data based on relationships among the variables in the data.
  3. There is no feedback based on the prediction results.
  4. Example:
    Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.
    Non-clustering: The “Cocktail Party Algorithm” lets you find structure in a chaotic environment (e.g., separating individual voices and music from the mix of sounds at a cocktail party).
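As a sketch of the clustering idea above, here is a minimal k-means implementation in NumPy. K-means is one common clustering algorithm (the notes do not name a specific one), and the two-blob data set is made up for illustration:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Distance from every point to every centroid, shape (n, k).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its cluster (keep it if empty).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated synthetic blobs of 50 points each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

With clearly separated blobs like these, the algorithm recovers the two groups without any labels, which is the point of unsupervised learning: structure is found from the data alone.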

1. Model Representation:
The hypothesis for univariate linear regression is h_θ(x) = θ₀ + θ₁x.
(x⁽ⁱ⁾, y⁽ⁱ⁾) denotes the i-th example; i indexes the samples (x or y) in the training set.
m denotes the number of samples, i.e., the size of the training set.
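A minimal sketch of this notation in Python (the training set values here are made up for illustration):

```python
# Training set: m = 4 examples; X[i] is the input and Y[i] the target
# of the i-th example.
X = [1.0, 2.0, 3.0, 4.0]
Y = [1.5, 2.5, 3.5, 4.5]
m = len(X)  # number of training examples

def h(theta0, theta1, x):
    """Hypothesis for univariate linear regression: h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

print(h(0.5, 1.0, X[0]))  # prediction for the first example -> 1.5
```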

2. Cost Function

  1. Definition:
    The cost function measures the accuracy of the hypothesis, i.e., how closely it fits the training data:
    J(θ₀, θ₁) = (1 / 2m) · Σᵢ₌₁..ₘ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²
    Why 2m instead of m?
    Because scaling the cost by a constant does not move its minimum: minimizing with 2m or m yields the same optimal parameters. The 1/2 is chosen so that when gradient descent differentiates the squared term, the factor of 2 cancels:
    ∂J/∂θⱼ = (1/m) · Σᵢ₌₁..ₘ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾) · xⱼ⁽ⁱ⁾
    The resulting 1/m factor is cleaner for calculation.
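The cost function above can be computed directly; a minimal NumPy sketch with made-up data:

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """J(theta0, theta1) = 1/(2m) * sum over i of (h(x_i) - y_i)^2."""
    m = len(x)
    predictions = theta0 + theta1 * x  # h_theta(x) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(0.0, 1.0, x, y))  # perfect fit -> 0.0
print(compute_cost(0.0, 0.0, x, y))  # (1 + 4 + 9) / (2 * 3) = 14/6
```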

  2. Graphic Presentation
    (Figure: plot of the cost function J against the parameters.)

  3. Details
    One parameter: plotting J(θ₁) against θ₁ gives a bowl-shaped curve with a single minimum.
    Two parameters: J(θ₀, θ₁) is visualized as a 3D surface, or as a contour plot.
    On a contour plot, rings nearer the center correspond to smaller values of J, i.e., a better fit.

3. Gradient Descent
Repeat until convergence:
  θⱼ := θⱼ − α · ∂J(θ₀, θ₁)/∂θⱼ    (updating θ₀ and θ₁ simultaneously)

DETAILS:
α: the learning rate (the length of each step).

When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. Substituting our actual cost function and our actual hypothesis function, the update rules become:
  θ₀ := θ₀ − α · (1/m) · Σᵢ₌₁..ₘ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)
  θ₁ := θ₁ − α · (1/m) · Σᵢ₌₁..ₘ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾) · x⁽ⁱ⁾

Batch Gradient Descent:
Each step of gradient descent uses all of the training examples in the set.
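A minimal batch gradient descent sketch for univariate linear regression (the learning rate and the data are made up for illustration; every update uses all m examples, which is what makes it "batch"):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, n_iters=1000):
    """Batch gradient descent for h(x) = theta0 + theta1 * x."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    for _ in range(n_iters):
        error = (theta0 + theta1 * x) - y      # h(x_i) - y_i for all i
        # Simultaneous update: both gradients use the same current thetas.
        grad0 = error.sum() / m                # dJ/dtheta0
        grad1 = (error * x).sum() / m          # dJ/dtheta1
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                              # true line: theta0=1, theta1=2
theta0, theta1 = gradient_descent(x, y)
print(round(theta0, 3), round(theta1, 3))      # converges to about 1.0 2.0
```

Note the simultaneous update: both gradients are computed from the current parameter values before either parameter is changed. Updating θ₀ first and then using the new value to update θ₁ would be a subtly different (and incorrect) algorithm.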
