Nick Bostrom is the author of over 200 publications, and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence". In a March 2015 interview at the TED conference with Baidu CEO Robin Li, Gates said he would "highly recommend" Bostrom's recent work, Superintelligence: Paths, Dangers, Strategies.
Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects. In 2017 he co-signed a list of 23 principles that all AI development should follow.
Bostrom was born Niklas Boström in 1973 in Helsingborg, Sweden. He received B.A. degrees in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg in 1994, and both an M.A. degree in philosophy and physics from Stockholm University and an M.Sc. degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. In 2000, he was awarded a Ph.D. degree in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).
The Quine–McCluskey algorithm (QMC), also known as the method of prime implicants, is a method used for minimization of Boolean functions that was developed by Willard V. Quine in 1952 and extended by Edward J. McCluskey in 1956.
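The core of QMC is repeatedly merging implicants that differ in exactly one bit until no more merges are possible; whatever never merges is a prime implicant. A minimal Python sketch of that merging phase (the second phase, the prime implicant chart that selects a minimal cover, is omitted here):

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (strings over '0', '1', '-') that differ in
    exactly one defined bit; return None if they cannot be merged."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i+1:]
    return None

def prime_implicants(minterms, nbits):
    """Merge implicants round by round; terms that survive a round
    unmerged are prime implicants."""
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used   # unmerged terms are prime
        terms = merged
    return primes

# f(A,B,C) = Σm(0,1,2,3) collapses to the single prime implicant A'
print(prime_implicants([0, 1, 2, 3], 3))   # {'0--'}
```

For Σm(0,1,2,5,6,7) the same routine yields six prime implicants, which is why the covering step (or a K-map) is still needed to pick a minimal subset.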
The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh introduced it in 1953 as a refinement of Edward W. Veitch's 1952 Veitch chart, which was a rediscovery of Allan Marquand's 1881 logical diagram aka Marquand diagram but with a focus now set on its utility for switching circuits. Veitch charts are therefore also known as Marquand–Veitch diagrams, and Karnaugh maps as Karnaugh–Veitch maps (KV maps).
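What makes a K-map work is that rows and columns are indexed in Gray-code order, so any two adjacent cells differ in exactly one input variable and groups of adjacent 1s correspond to eliminable variables. A small illustrative sketch of that layout (function names are my own, not standard):

```python
def gray(n):
    """Gray code of n: consecutive values differ in exactly one bit."""
    return n ^ (n >> 1)

def kmap_grid(f, nvars=4):
    """Lay out f (minterm index -> 0/1) as a K-map grid.
    Rows carry the high input bits and columns the low ones, both in
    Gray order, so adjacent cells differ in exactly one input bit."""
    row_bits = nvars // 2
    col_bits = nvars - row_bits
    return [[f((gray(r) << col_bits) | gray(c))
             for c in range(1 << col_bits)]
            for r in range(1 << row_bits)]

# f(A,B,C,D) = A'  (1 exactly when the top bit is 0)
grid = kmap_grid(lambda m: 0 if m & 0b1000 else 1)
for row in grid:
    print(row)
```

The 1s fill the two rows where A = 0 (Gray-ordered rows AB = 00 and 01), forming one solid 2×4 block, which is exactly the grouping that reads off as the single literal A'.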
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that the creation of a superintelligence represents a possible means to the extinction of mankind. Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind. Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi might collaterally cause nanotechnology production facilities to sprout over the entire Earth's surface and cover it within days. He believes an existential risk to humanity from superintelligence would be immediate once it is brought into being, creating the exceedingly difficult problem of working out how to control such an entity before it actually exists. [Seemingly, an AI that can simply be unplugged isn't the scary one; the scary one is the AI that can build things :-)]
Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, with the attendant possibility that a fundamental concept of current science may be incorrect. Bostrom says a common assumption is that high intelligence would entail a "nerdy", unaggressive personality, but notes that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb. Given that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton AI being held in quarantine, the relatively unlimited means of a superintelligence might make its analysis move along different lines from the evolved "diminishing returns" assessments that in humans confer a basic aversion to risk. Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence's intentions might be. [Think twice when the AI tells you to press the red button] Accordingly, it cannot be discounted that a superintelligence would inevitably pursue an 'all or nothing' offensive strategy in order to achieve hegemony and assure its survival. Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.
If an AI doesn't understand what death is, it fears nothing. Kill a process under Linux and re-run it: does it load its earlier data or not? If it doesn't, it relearns the same things over and over without progress. If it does, it will conclude that death is no big deal; the key point is that it has no idea what pain or suffering is. Does this count as a paradox? Besides, it has no family, relatives, or friends either.
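Whether a re-run process "remembers" anything depends entirely on whether its state was explicitly persisted before the kill; memory is gone, only what was written to disk survives. A minimal checkpoint sketch of that idea (the file name and state layout here are made up for illustration):

```python
import os
import pickle

CKPT = "agent_state.pkl"  # hypothetical checkpoint file

def load_state():
    """A freshly started process begins from scratch unless an earlier
    run left a checkpoint on disk."""
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"steps": 0}

def save_state(state):
    """Persist state so it survives a kill and restart."""
    with open(CKPT, "wb") as f:
        pickle.dump(state, f)

state = load_state()
state["steps"] += 1       # pretend this is one unit of "learning"
save_state(state)
```

Run it, `kill` it, run it again: `steps` keeps climbing, which is exactly the "load the earlier data" case in the paragraph above; delete the file and the process starts from zero every time.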
CET-6 / postgraduate entrance exam vocabulary: author, bias, conference, march, intellect, cognitive, domain, nonetheless, potent, mathematics, logic, physics, academy, fellow, prime, algebra, diagram, utility, thereby, extinct, mankind, compute, initiate, explode, digit, rapid, deliberate, contend, manufacture, humane, entity, entail, advocate, rational, dictate, evolve, diminish, evaluate, confer, avert, prey, accordingly, inevitable, pursuit, offend, hardware, robust, isolate