Machine Learning Algorithms (Part 2: The k-Nearest Neighbors Algorithm)

1. Overview

  The k-nearest neighbors (kNN) algorithm classifies samples by measuring the distance between their feature values.

  How it works: we start with a set of sample data (the training set) in which every record carries a label (its class), so the mapping from each sample to its class is known. When an unlabeled record arrives, we compare each of its features with the corresponding features of the samples in the training set (via a Euclidean distance computation) and find the training samples whose features are most similar to the new record (its nearest neighbors). In practice we take the k most similar samples from the training set, and the class that occurs most often among those k neighbors becomes the class of the new record.

2. Pros and Cons

  Pros: high accuracy, insensitive to outliers, no assumptions about the input data.

  Cons: computationally expensive, high memory cost.

  Applicable to: numeric and nominal values.

3. Mathematical Formulas

  Euclidean distance: the easiest distance measure to understand, derived from the formula for the distance between two points in Euclidean space.

  (1) Euclidean distance between two points a(x1, y1) and b(x2, y2) in a 2-D plane:

    d = √( (x1 - x2)² + (y1 - y2)² )

  (2) Euclidean distance between two points a(x1, y1, z1) and b(x2, y2, z2) in 3-D space:

    d = √( (x1 - x2)² + (y1 - y2)² + (z1 - z2)² )

  (3) Euclidean distance between two n-dimensional vectors a(x11, x12, …, x1n) and b(x21, x22, …, x2n):

    d = √( Σ_{k=1..n} (x1k - x2k)² )
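As a quick sanity check, the general n-dimensional formula can be computed directly with NumPy; here is a minimal sketch (the helper name euclid is my own):

```python
import numpy as np

def euclid(a, b):
    # Euclidean distance: square root of the sum of squared coordinate differences
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(((a - b) ** 2).sum())

print(euclid([1.0, 1.1], [0.0, 0.1]))  # 2-D case
print(euclid([3.0, 0.0, 0.0], [0.0, 4.0, 0.0]))  # 3-D case, distance 5.0
```

The same expression works unchanged for any number of dimensions, which is exactly the property the kNN implementation below relies on.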

4. Algorithm Implementation

Pseudocode for the k-nearest neighbors algorithm

For every point in the data set whose class is unknown, do the following:
(1) compute the distance between the current point and every point in the labeled data set;
(2) sort the distances in ascending order;
(3) take the k points closest to the current point;
(4) count how often each class occurs among those k points;
(5) return the class that occurs most frequently among the k points as the current point's predicted class.
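The five steps map almost line-for-line onto plain Python. Here is a minimal, dependency-light sketch (the function name knn_predict is my own; collections.Counter stands in for the manual vote tally used in the NumPy implementation later):

```python
import math
from collections import Counter

def knn_predict(new_point, samples, labels, k):
    # (1) distance from the new point to every known point
    dists = [math.dist(new_point, s) for s in samples]
    # (2)(3) sort indices by distance, ascending, and keep the k nearest
    nearest = sorted(range(len(samples)), key=lambda i: dists[i])[:k]
    # (4)(5) return the most frequent label among those k points
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

print(knn_predict([0, 0], [[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]],
                  ['A', 'A', 'B', 'B'], 3))
```

With k=3 the three nearest neighbors of [0, 0] are the two B points and one A point, so the majority vote returns 'B'.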

  1. Constructing the data

 from numpy import array

 def createDataSet():
     group = array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
     labels = ['A', 'A', 'B', 'B']
     return group, labels

  There are four records here; each column holds the value of one feature, and the vector labels holds each record's label, i.e. its class. There are two classes, A and B.

  2. Implementing the algorithm

    tile: repeats an array. For example, tile(A, n) builds a new array by repeating the array A n times.

 >>> tile([1,2],(4))
 array([1, 2, 1, 2, 1, 2, 1, 2])
 >>> tile([1,2],(4,1))
 array([[1, 2],
        [1, 2],
        [1, 2],
        [1, 2]])
 >>> tile([1,2],(4,2))
 array([[1, 2, 1, 2],
        [1, 2, 1, 2],
        [1, 2, 1, 2],
        [1, 2, 1, 2]])
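Worth noting: NumPy broadcasting makes the tile call optional here. Subtracting a 1-D vector from a 2-D array automatically repeats the vector across the rows, so both forms below produce the same difference matrix. A small sketch:

```python
import numpy as np

dataSet = np.array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
inX = np.array([0, 0])

tiled = np.tile(inX, (dataSet.shape[0], 1)) - dataSet  # explicit repetition
bcast = inX - dataSet                                  # broadcasting, same result
print(np.array_equal(tiled, bcast))  # True
```

The book's code uses tile explicitly, which makes the row-by-row subtraction easier to see; the broadcasting form is the more idiomatic NumPy.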

  Implementing the Euclidean distance classifier:

 from numpy import tile
 import operator

 def classify0(inX, dataSet, labels, k):
     dataSetSize = dataSet.shape[0]
     diffMat = tile(inX, (dataSetSize,1)) - dataSet   # subtract the new point from every sample row: [[x-x1, y-y1], [x-x2, y-y2], ...]
     sqDiffMat = diffMat**2                           # square every element: [[(x-x1)^2, (y-y1)^2], ...]
     sqDistances = sqDiffMat.sum(axis=1)              # sum over the features: [(x-x1)^2 + (y-y1)^2, ...]
     distances = sqDistances**0.5                     # take the square root; the Euclidean distance is complete
     sortedDistIndicies = distances.argsort()         # argsort returns the indices that sort the distances in ascending order
     classCount = {}                                  # tally the classes of the k nearest samples and pick the most common one
     for i in range(k):                               # count the classes of the k nearest points
         voteIlabel = labels[sortedDistIndicies[i]]
         classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
     sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
     return sortedClassCount[0][0]                    # the most frequent class among the k nearest points

  Here inX is the new record to classify, dataSet holds the feature values of the samples, labels holds the sample classes, and k is the number of nearest neighbors to consider.

   Testing the algorithm:

 >>> group,labels = kNN.createDataSet()
 >>> group,labels
 (array([[ 1. ,  1.1],
        [ 1. ,  1. ],
        [ 0. ,  0. ],
        [ 0. ,  0.1]]), ['A', 'A', 'B', 'B'])
 >>> kNN.classify0([0,0],group,labels,3)
 'B'
 >>>

  Test result: [0,0] belongs to class B.

  3. How to test the classifier

5. Example: Improving Matches on a Dating Site with the k-Nearest Neighbors Algorithm

  My friend Hellen has been using an online dating site to look for suitable dates. Although the site keeps recommending candidates, she doesn't like all of them. After some reflection, she realized that the people she has dated fall into three types:

  • people she didn't like
  • people with average charm
  • people with great charm

  Hellen would like our classification software to do a better job of sorting candidate matches into the right category. She has also collected some data the dating site never recorded, which she believes will help with the classification.

  1. Preparing the data: parsing a text file

  The data is stored in the text file datingTestSet.txt, one sample per line, 1000 lines in total.

  Hellen's samples contain three main features:

  • frequent flier miles earned per year
  • percentage of time spent playing video games
  • liters of ice cream consumed per week

  2. Analyzing the data: scatter plots with Matplotlib

  The scatter plot uses the first and second columns of the datingDataMat matrix, i.e. the features "frequent flier miles earned per year" and "percentage of time spent playing video games".


(Figure: scatter plot of the dating data, frequent flier miles earned per year vs. percentage of time spent playing video games)

  3. Preparing the data: normalizing the values

   Different features have different means and value ranges. If the raw feature values are used to compute distances, the feature with the largest range dominates the result while features with small ranges contribute almost nothing, effectively ignoring those attributes. Take two feature vectors, {0, 20000, 1.1} and {67, 32000, 0.1}; the distance between them is:

    d = √( (0 - 67)² + (20000 - 32000)² + (1.1 - 0.1)² )

Clearly the second feature dominates the result, while the first and third features have almost no effect.

However, we consider the three features equally important for this task, so as one of three equally weighted features, the frequent flier mileage should not sway the result this heavily.

When features have such different value ranges, the usual remedy is to normalize them, e.g. mapping every value into the range 0 to 1 or -1 to 1. The following formula rescales a feature with any value range into the interval [0, 1]:

newValue = (oldValue - min) / (max - min)

where min and max are the smallest and largest values of that feature in the data set.
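A quick worked example of the formula on the two feature vectors from the text (I add a hypothetical third row so some normalized values fall strictly between 0 and 1):

```python
import numpy as np

data = np.array([[0.0, 20000.0, 1.1],
                 [67.0, 32000.0, 0.1],
                 [30.0, 25000.0, 0.5]])   # third row is made up for illustration

min_vals = data.min(axis=0)              # per-feature minimum
ranges = data.max(axis=0) - min_vals     # per-feature max - min
norm = (data - min_vals) / ranges        # newValue = (oldValue - min) / (max - min)
print(norm)                              # every feature now spans exactly [0, 1]
```

After rescaling, all three features contribute on the same scale, so the 12000-mile gap no longer swamps the 1-liter gap.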

Add an autoNorm() function that normalizes the numeric feature values:

 def autoNorm(dataSet):
     minVals = dataSet.min(0)     # column-wise minimum of each feature
     maxVals = dataSet.max(0)     # column-wise maximum of each feature
     ranges = maxVals - minVals   # value range of each feature
     m = dataSet.shape[0]
     normDataSet = dataSet - tile(minVals, (m,1))      # oldValue - min
     normDataSet = normDataSet / tile(ranges, (m,1))   # element-wise divide: (oldValue - min) / (max - min)
     return normDataSet, ranges, minVals

  Note that besides the normalized data, this function also returns the range values (ranges) and minimum values (minVals) used for the normalization; these are needed to normalize the test data.

  Important: the test set must be normalized with exactly the same parameters (ranges and minVals) as the training set. Do not compute ranges and minVals separately for the test data, or the same record would be scaled differently in the training and test sets.

  4. Testing the algorithm: validating the classifier as a complete program

  An important part of any machine learning work is evaluating the algorithm's accuracy. Typically we use 90% of the available data as training samples and hold out the remaining 10% to test the classifier and measure its error rate. The 10% of test data should be chosen at random; since Hellen's data is not sorted in any particular order, we can simply take any 10% without compromising randomness.
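The text takes the first 10% only because the file happens to be unordered; when that is not guaranteed, a random permutation of the indices gives the same effect. A sketch with stand-in data (X and y here are made-up placeholders, not Hellen's file):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))            # stand-in feature matrix, 100 samples x 3 features
y = rng.integers(1, 4, size=100)    # stand-in labels 1..3

idx = rng.permutation(len(X))       # random ordering of the sample indices
n_test = int(len(X) * 0.10)         # hold out 10%
test_idx, train_idx = idx[:n_test], idx[n_test:]
X_test, y_test = X[test_idx], y[test_idx]
X_train, y_train = X[train_idx], y[train_idx]
print(len(X_test), len(X_train))    # 10 90
```

Shuffling indices rather than the arrays themselves keeps features and labels aligned.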

  Test code for the dating-site classifier, which evaluates the algorithm on the sample data set:

 def datingClassTest():
     hoRatio = 0.50      # hold out 50% of the data for testing
     datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')   # load data set from file
     normMat, ranges, minVals = autoNorm(datingDataMat)
     m = normMat.shape[0]
     numTestVecs = int(m*hoRatio)
     errorCount = 0.0
     for i in range(numTestVecs):
         classifierResult = classify0(normMat[i,:], normMat[numTestVecs:m,:], datingLabels[numTestVecs:m], 3)
         print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, datingLabels[i]))
         if classifierResult != datingLabels[i]: errorCount += 1.0
     print("the total error rate is: %f" % (errorCount/float(numTestVecs)))
     print(errorCount)

Run the classifier test:

 >>> kNN.datingClassTest()

 the classifier came back with: 2, the real answer is: 1

 the classifier came back with: 2, the real answer is: 2

 the classifier came back with: 1, the real answer is: 1

 the classifier came back with: 1, the real answer is: 1

 the classifier came back with: 2, the real answer is: 2

 .................................................

 the total error rate is: 0.064000

 32.0

  The classifier's error rate on the dating data set is 6.4%, which is a decent result. We can vary hoRatio and k inside datingClassTest to see how the error rate changes with them.

This example shows that we can predict the classes correctly with an error rate of only 6.4%. Hellen can now enter the attributes of a previously unseen person and let the software estimate how much she would like them: not at all, in small doses, or in large doses.

  5. Using the algorithm: building a complete, usable system

  Putting the code above together, we can build a complete prediction function for the dating site. Note that the input data must be normalized:

 def classifyPerson():
     resultList = ['not at all', 'in small doses', 'in large doses']
     percentTats = float(input("Percentage of time spent playing video games?"))
     ffMiles = float(input("Frequent flier miles earned per year?"))
     iceCream = float(input("Liters of ice cream consumed per year?"))
     datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')
     normMat, ranges, minVals = autoNorm(datingDataMat)
     inArr = array([ffMiles, percentTats, iceCream])   # the new record must be normalized with the training parameters
     classifierResult = classify0((inArr - minVals) / ranges, normMat, datingLabels, 3)
     print("You will probably like this person:", resultList[classifierResult - 1])

  So far we have seen how to build a classifier on top of the data.

Complete code:

 '''
 Created on Sep 16, 2010
 kNN: k Nearest Neighbors

 Input:  inX: vector to compare to existing dataset (1xN)
         dataSet: size m data set of known vectors (NxM)
         labels: data set labels (1xM vector)
         k: number of neighbors to use for comparison (should be an odd number)
 Output: the most popular class label

 @author: pbharrin
 '''
 from numpy import *
 import operator
 from os import listdir
 import matplotlib
 import matplotlib.pyplot as plt

 def show(d, l):
     #d,l=kNN.file2matrix('datingTestSet2.txt')
     fig = plt.figure()
     ax = fig.add_subplot(111)
     ax.scatter(d[:,0], d[:,1], 15*array(l), 15*array(l))
     plt.show()

 def show2():
     datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')
     fig = plt.figure()
     ax = fig.add_subplot(111)
     l = datingDataMat.shape[0]
     X1 = []; Y1 = []
     X2 = []; Y2 = []
     X3 = []; Y3 = []
     for i in range(l):
         if datingLabels[i] == 1:
             X1.append(datingDataMat[i,0]); Y1.append(datingDataMat[i,1])
         elif datingLabels[i] == 2:
             X2.append(datingDataMat[i,0]); Y2.append(datingDataMat[i,1])
         else:
             X3.append(datingDataMat[i,0]); Y3.append(datingDataMat[i,1])
     type1 = ax.scatter(X1, Y1, c='red')
     type2 = ax.scatter(X2, Y2, c='green')
     type3 = ax.scatter(X3, Y3, c='blue')
     #ax.axis([-2,25,-0.2,2.0])
     ax.legend([type1, type2, type3], ["Did Not Like", "Liked in Small Doses", "Liked in Large Doses"], loc=2)
     plt.xlabel('Percentage of Time Spent Playing Video Games')
     plt.ylabel('Liters of Ice Cream Consumed Per Week')
     plt.show()

 def classify0(inX, dataSet, labels, k):
     dataSetSize = dataSet.shape[0]
     diffMat = tile(inX, (dataSetSize,1)) - dataSet
     sqDiffMat = diffMat**2
     sqDistances = sqDiffMat.sum(axis=1)
     distances = sqDistances**0.5
     sortedDistIndicies = distances.argsort()
     classCount = {}
     for i in range(k):
         voteIlabel = labels[sortedDistIndicies[i]]
         classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
     sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
     return sortedClassCount[0][0]

 def createDataSet():
     group = array([[1.0,1.1],[1.0,1.0],[0,0],[0,0.1]])
     labels = ['A','A','B','B']
     return group, labels

 def file2matrix(filename):
     fr = open(filename)
     numberOfLines = len(fr.readlines())    # get the number of lines in the file
     returnMat = zeros((numberOfLines,3))   # prepare matrix to return
     classLabelVector = []                  # prepare labels return
     fr = open(filename)
     index = 0
     for line in fr.readlines():
         line = line.strip()
         listFromLine = line.split('\t')
         returnMat[index,:] = listFromLine[0:3]
         classLabelVector.append(int(listFromLine[-1]))
         index += 1
     return returnMat, classLabelVector

 def autoNorm(dataSet):
     minVals = dataSet.min(0)
     maxVals = dataSet.max(0)
     ranges = maxVals - minVals
     m = dataSet.shape[0]
     normDataSet = dataSet - tile(minVals, (m,1))
     normDataSet = normDataSet/tile(ranges, (m,1))   # element-wise divide
     return normDataSet, ranges, minVals

 def datingClassTest():
     hoRatio = 0.50      # hold out 50% of the data for testing
     datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')   # load data set from file
     normMat, ranges, minVals = autoNorm(datingDataMat)
     m = normMat.shape[0]
     numTestVecs = int(m*hoRatio)
     errorCount = 0.0
     for i in range(numTestVecs):
         classifierResult = classify0(normMat[i,:], normMat[numTestVecs:m,:], datingLabels[numTestVecs:m], 3)
         print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, datingLabels[i]))
         if classifierResult != datingLabels[i]: errorCount += 1.0
     print("the total error rate is: %f" % (errorCount/float(numTestVecs)))
     print(errorCount)

 def img2vector(filename):
     returnVect = zeros((1,1024))
     fr = open(filename)
     for i in range(32):
         lineStr = fr.readline()
         for j in range(32):
             returnVect[0, 32*i+j] = int(lineStr[j])
     return returnVect

 def handwritingClassTest():
     hwLabels = []
     trainingFileList = listdir('trainingDigits')   # load the training set
     m = len(trainingFileList)
     trainingMat = zeros((m,1024))
     for i in range(m):
         fileNameStr = trainingFileList[i]
         fileStr = fileNameStr.split('.')[0]        # take off .txt
         classNumStr = int(fileStr.split('_')[0])
         hwLabels.append(classNumStr)
         trainingMat[i,:] = img2vector('trainingDigits/%s' % fileNameStr)
     testFileList = listdir('testDigits')           # iterate through the test set
     errorCount = 0.0
     mTest = len(testFileList)
     for i in range(mTest):
         fileNameStr = testFileList[i]
         fileStr = fileNameStr.split('.')[0]        # take off .txt
         classNumStr = int(fileStr.split('_')[0])
         vectorUnderTest = img2vector('testDigits/%s' % fileNameStr)
         classifierResult = classify0(vectorUnderTest, trainingMat, hwLabels, 3)
         print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, classNumStr))
         if classifierResult != classNumStr: errorCount += 1.0
     print("\nthe total number of errors is: %d" % errorCount)
     print("\nthe total error rate is: %f" % (errorCount/float(mTest)))