This article was first published in the Tencent Cloud+ Community AI column as an original translation.
Introduction
The original English article was published in September 2015, before DeepMind's AlphaGo had beaten a professional human Go player. Today we know that the earlier versions of AlphaGo that defeated Lee Sedol and Ke Jie (including the later AlphaGo Master) were built on deep learning, while the newer AlphaGo Zero taught itself to play using neural networks and no human knowledge at all. The chess machine described in this article works on principles similar to AlphaGo's, and it is translated here so readers can see how it works.
"世界首个以评估棋局下棋, 而非以蛮力找出所有可能落子方式的下棋机器"
"In a world first, a machine plays chess by evaluating the board rather than using brute force to work out every possible move"
It's been almost 20 years since IBM's Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.
But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.
Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.
This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.
Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.
Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.
The technology behind Lai's new machine is a neural network. This is a way of processing information inspired by the human brain. It consists of several layers of nodes connected in a way that changes as the system is trained. This training process uses lots of examples to fine-tune the connections so that the network produces a specific output given a certain input, to recognize the presence of a face in a picture, for example.
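To make this concrete, here is a minimal, generic sketch of such a network and its training loop in PyTorch. It is not Giraffe's code; the layer sizes, data, and hyperparameters are arbitrary placeholders.

```python
# A minimal, generic sketch (not Giraffe's code): a small network of
# connected layers trained to map an input vector to a target output.
import torch
import torch.nn as nn

model = nn.Sequential(           # a few layers of nodes
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(100, 64)    # 100 placeholder example inputs
targets = torch.randn(100, 1)    # their placeholder target outputs

for _ in range(10):              # training fine-tunes the connections
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()              # work out how each weight should change
    optimizer.step()             # nudge the weights to reduce the error
```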
In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks.
That has allowed computer scientists to train much bigger networks organized into many layers. These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.
So it's no surprise that deep neural networks ought to be able to spot patterns in chess, and that's exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three different ways.
The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the final aspect is to map the squares that each piece attacks and defends.
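As a rough illustration of those three feature groups, the sketch below extracts them with the python-chess library. The exact feature layout here is an assumption for illustration, not the encoding Giraffe actually uses.

```python
# A rough sketch (assumed feature layout, not Giraffe's) of the three
# feature groups described above, extracted with the python-chess library.
import chess

def extract_features(board: chess.Board):
    # 1. Global state: side to move, castling rights, material counts.
    global_features = [
        float(board.turn),
        float(board.has_kingside_castling_rights(chess.WHITE)),
        float(board.has_queenside_castling_rights(chess.WHITE)),
        float(board.has_kingside_castling_rights(chess.BLACK)),
        float(board.has_queenside_castling_rights(chess.BLACK)),
    ]
    for color in (chess.WHITE, chess.BLACK):
        for piece_type in chess.PIECE_TYPES:
            global_features.append(float(len(board.pieces(piece_type, color))))

    # 2. Piece-centric features: where each piece stands.
    piece_features = [
        (piece.symbol(), chess.square_file(square), chess.square_rank(square))
        for square, piece in board.piece_map().items()
    ]

    # 3. Square maps: which squares each piece attacks (and thereby defends).
    attack_map = {
        chess.square_name(square): [chess.square_name(s) for s in board.attacks(square)]
        for square in board.piece_map()
    }

    return global_features, piece_features, attack_map

features = extract_features(chess.Board())  # features of the starting position
```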
Lai trains his network with a carefully generated set of data taken from real chess games. This data set must have the correct distribution of positions. "For example, it doesn't make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games," he says.
It must also have plenty of variety of unequal positions beyond those that usually occur in top level chess games. That's because although unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.
And this data set must be huge. The massive number of connections inside a neural network have to be fine-tuned during training, and this can only be done with a vast dataset. Use a dataset that is too small and the network can settle into a state that fails to recognize the wide variety of patterns that occur in the real world.
Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.
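A minimal sketch of that generation step might look like the following, again using python-chess. The file name games.pgn and the helper names are placeholders, and the counts are scaled down from the figures in the article.

```python
# A minimal sketch of the dataset-generation idea: take positions from
# recorded games, then perturb each with one random legal move for variety.
import random
import chess
import chess.pgn

def sample_positions(pgn_path, n_positions):
    """Collect positions from a PGN database until we have enough to sample."""
    positions = []
    with open(pgn_path) as pgn:
        while len(positions) < n_positions:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break                       # end of the PGN file
            board = game.board()
            for move in game.mainline_moves():
                board.push(move)
                positions.append(board.fen())
    return random.sample(positions, min(n_positions, len(positions)))

def perturb(fen):
    """Add one random legal move to a position, as Lai does for variety."""
    board = chess.Board(fen)
    moves = list(board.legal_moves)
    if moves:
        board.push(random.choice(moves))
    return board.fen()

training_positions = [perturb(fen) for fen in sample_positions("games.pgn", 5000)]
```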
The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize those that are strong and those that are weak.
But this is a huge task for 175 million positions. It could be done by another chess engine, but Lai's goal was more ambitious. He wanted the machine to learn for itself.
Instead, he used a bootstrapping technique in which Giraffe played against itself with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position: whether the game is later won, lost or drawn.
In this way, the computer learns which positions are strong and which are weak.
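The sketch below shows the bootstrapping idea in a deliberately simplified temporal-difference form; it is not the exact update used by Giraffe. The functions evaluate and pick_move are placeholders standing in for the network and its search.

```python
# A deliberately simplified sketch of the bootstrapping idea: the machine
# plays itself, and each position is trained toward the evaluator's own
# prediction for the following position, anchored at the end by the game
# result (the fixed reference point). Values are from white's point of view.
import chess

def self_play_targets(evaluate, pick_move):
    board = chess.Board()
    positions = []
    while not board.is_game_over():
        positions.append(board.fen())
        board.push(pick_move(board))

    # Fixed reference point: the final result of the game.
    outcome = {"1-0": 1.0, "0-1": -1.0, "1/2-1/2": 0.0}[board.result()]

    targets = []
    next_value = outcome                   # the last position's target is the outcome
    for fen in reversed(positions):
        targets.append((fen, next_value))  # train V(position) toward V(next position)
        next_value = evaluate(fen)
    return list(reversed(targets))         # (position, training target) pairs
```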
Having trained Giraffe, the final step is to test it, and here the results make for interesting reading. Lai tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions chosen to test an engine's ability to recognize different strategic ideas. "For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight's values change relative to each other in different situations, and yet another tests the understanding of center control," he says.
The results of this test are scored out of 15,000, with each of the 1,500 positions worth up to 10 points.
Lai uses this to test the machine at various stages during its training. As the bootstrapping process begins, Giraffe quickly reaches a score of 6,000 and eventually peaks at 9,700 after only 72 hours. Lai says that matches the best chess engines in the world.
"[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters," he adds.
Lai goes on to use the same kind of machine learning approach to determine the probability that a given move is likely to be worth pursuing. That's important because it prevents unnecessary searches down unprofitable branches of the tree and dramatically improves computational efficiency.
Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three 70 percent of the time. So the computer doesn't have to bother searching the other moves.
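A sketch of that pruning idea: score every legal move with a learned probability and keep only the most promising few. The function move_probability is a placeholder for the learned move evaluator.

```python
# A sketch of probability-based move ordering: rank every legal move by a
# learned probability and search only the most promising few.
import chess

def promising_moves(board, move_probability, keep=3):
    scored = sorted(
        board.legal_moves,
        key=lambda move: move_probability(board, move),
        reverse=True,
    )
    # If the best move really lands in the top three about 70 percent of the
    # time, the search can usually ignore everything below this cut-off.
    return scored[:keep]
```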
That's interesting work that represents a major change in the way chess engines work. It is not perfect, of course. One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.
But even with this disadvantage, it is competitive. "Giraffe is able to play at the level of a FIDE International Master on a modern mainstream PC," says Lai. By comparison, the top engines play at super-Grandmaster level.
That's still impressive. "Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time," says Lai. "This is especially important in the opening and end game phases, where it plays exceptionally well."
And this is only the start. Lai says it should be straightforward to apply the same approach to other games. One that stands out is the traditional Chinese game of Go, where humans still hold an impressive advantage over their silicon competitors. Perhaps Lai could have a crack at that next.
Reference: Giraffe: Using Deep Reinforcement Learning to Play Chess
Translator: FesonX
Translated article: https://cloud.tencent.com/developer/article/1036654
Original article: https://www.technologyreview.com/s/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/
Original author: Emerging Technology from the arXiv