
Nick Bostrom is a Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Strategic Artificial Intelligence Research Center. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller. Bostrom holds bachelor's degrees in artificial intelligence, philosophy, mathematics, and logic, followed by master's degrees in philosophy, physics, and computational neuroscience. In 2000, he was awarded a PhD in Philosophy from the London School of Economics. He is a recipient of the Eugene R. Gannon Award (given to one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed twice on Foreign Policy's Top 100 Global Thinkers list, and he was included on Prospect magazine's World Thinkers list as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages, with more than 100 translations and reprints of his works. During his time in London, Bostrom also did some turns on the city's stand-up comedy circuit. He is best known for his work on existential risk, the anthropic principle, human enhancement ethics, the simulation argument, artificial intelligence risks, the reversal test, and the practical implications of consequentialism. The bestseller Superintelligence and FHI's work on AI have changed the global conversation on the future of machine intelligence, helping to stimulate the emergence of a new field of technical research on scalable AI control. More: https://nickbostrom.com
Anthropic Bias explores how to reason when you suspect that your evidence is biased by "observation selection effects"--that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to "have" the evidence. This conundrum--sometimes alluded to as "the anthropic principle," "self-locating belief," or "indexical information"--turns out to be a surprisingly perplexing and intellectually stimulating challenge, one abounding with important implications for many areas in science and philosophy. There are the philosophical thought experiments and paradoxes: the Doomsday Argument; Sleeping Beauty; the Presumptuous Philosopher; Adam & Eve; the Absent-Minded Driver; the Shooting Room. And there are the applications in contemporary science: cosmology ("How many universes are there?," "Why does the universe appear fine-tuned for life?"); evolutionary theory ("How improbable was the evolution of intelligent life on our planet?"); the problem of time's arrow ("Can it be given a thermodynamic explanation?"); quantum physics ("How can the many-worlds theory be tested?"); game-theory problems with imperfect recall ("How to model them?"); even traffic analysis ("Why is the 'next lane' faster?"). Anthropic Bias argues that the same principles are at work across all these domains. And it offers a synthesis: a mathematically explicit theory of observation selection effects that attempts to meet scientific needs while steering clear of philosophical paradox.
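To give a taste of the formal machinery, here is a schematic rendering of the Doomsday Argument mentioned above, under the book's Self-Sampling Assumption; the specific numbers are illustrative and not from the text:

```latex
% Self-Sampling Assumption (SSA): reason as if you were a random sample
% from all observers in your reference class. Compare two hypotheses
% about the total number of humans who will ever live, with equal priors:
%   H_1: N_1 = 2 \times 10^{11}   (doom comes relatively soon)
%   H_2: N_2 = 2 \times 10^{14}   (humanity survives far longer)
% Given your birth rank r (roughly 10^{11}), SSA gives P(r \mid N) = 1/N
% for r \le N, so Bayes' theorem yields
\[
P(H_1 \mid r) = \frac{1/N_1}{1/N_1 + 1/N_2} = \frac{N_2}{N_1 + N_2} \approx 0.999,
\]
% an apparently dramatic shift toward the smaller total; much of the book
% concerns when such observation-selection updates are legitimate.
```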
Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful—possibly beyond our control. As the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
A philosophical parable about death.
At a time of continuing technological progress and economic development, more and more people are beginning to ask: when robots can handle most work smoothly, will that lead to mass human unemployment? If unemployment is widespread, how are people to earn an income, and will they come to feel that life is meaningless and lose the capacity for happiness? Suppose humanity safely develops superintelligence, governs it well, and makes good use of the enormous economic potential and transformative power the technology can unleash. In such a world, artificial intelligence can do everything we can do, only better, faster, and at lower cost. What role, then, remains for human beings? This book looks ahead to a technologically advanced "land of the future" and considers the human condition in an age of superintelligence. It addresses a new, post-technological question: once we reach the technological summit of our dreams, how is humanity to cultivate meaning and purpose? If work is no longer needed, how shall we live? Parts of the book take the form of a dialogue: the author creatively stages a fictional week-long series of philosophy lectures, delivered by Professor Bostrom himself. He also constructs a fable in which animals search for utopia, mirroring the human world; novel and engaging, it helps readers enter the author's philosophical territory quickly.
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
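The core quantity of the argument can be written compactly; the notation below follows the published paper:

```latex
% Let f_P be the fraction of human-level civilizations that reach a
% posthuman stage, \bar{N} the average number of ancestor-simulations
% run by such a civilization, and H the average number of individuals
% who live in a civilization before it becomes posthuman. The fraction
% of observers with human-type experiences who are simulated is then
\[
f_{\mathrm{sim}} = \frac{f_P \bar{N} H}{f_P \bar{N} H + H}
                 = \frac{f_P \bar{N}}{f_P \bar{N} + 1}.
\]
% Because a posthuman civilization could run astronomically many
% simulations, f_P \bar{N} is either close to zero or enormous, which
% forces one of the three propositions in the abstract.
```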
by Nick Bostrom
Rating: 4.3 ⭐
The Great Filter can be thought of as a probability barrier. It consists of the existence of one or more highly improbable evolutionary transitions or steps whose occurrence is required in order for an Earth-like planet to produce an intelligent civilization of a type that would be visible to us with our current observation technology. You start with billions and billions of potential germination points for life, and you end up with a sum total of zero extraterrestrial civilizations that we can observe. The Great Filter must therefore be powerful enough — which is to say, the critical steps must be improbable enough — that even with many billions of rolls of the dice, one ends up with nothing: no aliens, no spacecraft, no signals, at least none that we can detect in our neck of the woods.
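The "probability barrier" can be made quantitative with a back-of-the-envelope calculation; the numbers below are illustrative assumptions, not figures from the excerpt:

```latex
% Suppose each of N candidate sites independently yields a detectable
% civilization with probability p, where p is the product of the success
% probabilities of every critical evolutionary transition. Then
\[
P(\text{we observe no civilizations}) = (1 - p)^{N} \approx e^{-pN}.
\]
% With N on the order of 10^{22} star systems in our observable
% neighborhood, an empty sky is the expected outcome only if pN \ll 1,
% i.e. p \ll 10^{-22}: at least one step in the chain must be extremely
% improbable. That improbable step (or steps) is the Great Filter.
```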
Dear Human, Greetings, and may this letter find you at peace and in prosperity! Forgive my writing to you out of the blue. Though you and I have never met, we are not strangers. We are, in a certain sense, the closest of kin. I am one of your possible futures. I hope you will become me. Should fortune grant this wish, then I am not just a possible future of yours, but your actual future: a coming phase of you, like the full moon that follows a waxing crescent, or like the flower that follows a seed. I am writing to tell you about my life – how good it is – that you may choose it for yourself. Although this letter uses the singular, I am really writing on behalf of all my contemporaries, and we are addressing ourselves to all of your contemporaries. Amongst us are many who are possible futures of your people. Some of us are possible futures of children you have not yet given birth to. Still others are possible artificial persons that you might one day create. What unites us is that we are all dependent on you to make us real. You can think of this note as if it were an invitation to a ball that will take place only if folks turn up. We call the lives we lead here "Utopia".
Abstract -- Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.
The human desire to acquire new capacities is as ancient as our species itself. We have always sought to expand the boundaries of our existence, be it socially, geographically, or mentally. There is a tendency in at least some individuals always to search for a way around every obstacle and limitation to human life and happiness.
Dramatic times lie ahead: in the coming decades, technologies such as artificial intelligence and genetic engineering could bring about the end of humanity or usher in a golden age we can scarcely yet imagine. Or are we perhaps already living in the Matrix, as the Swedish philosopher and bestselling author Nick Bostrom (Superintelligence) suggests in his famous simulation argument? In the six essays collected here, some of which have already attained classic status, Bostrom takes a look into our future that is as sober as it is detailed. Some of it (still) reads like science fiction, but could hardly be more timely or more serious.
by Nick Bostrom
Rating: 3.4 ⭐
A vulnerability analysis from one of the world's most important risk researchers. In the face of climate change and pandemics, awareness is growing worldwide that the world as we know it may not last forever, and that the life expectancy of human civilization as a whole depends crucially on our own actions. Our world is vulnerable; indeed, scenarios of complete self-destruction are conceivable unless suitable measures are taken to stabilize it. This is the hypothesis that Nick Bostrom advances in a much-discussed text, which now appears for the first time in German translation. Bostrom assumes that technological development is inexorably heading toward a critical point. Historically, we have already come close to that point once, as he illustrates with a breathtaking thought experiment set during the Cold War. He then develops several all-too-plausible scenarios that would very likely bring about the downfall of human civilization, unless we take countermeasures in time. What those countermeasures might be is discussed in the second part of this important book, which closes with recommendations for policymakers.
by Nick Bostrom
Rating: 4.3 ⭐
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
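A minimal toy sketch of the structural point behind the orthogonality thesis; this code is not from the paper, and every name in it (plan, transition, and so on) is hypothetical. "Intelligence" is modeled crudely as search depth, and the final goal is a freely swappable utility function:

```python
# Toy illustration (hypothetical, not from the paper): an optimizer whose
# competence (search depth) and final goal (utility function) are
# independent parameters, as the orthogonality thesis suggests.
from itertools import product

def plan(state, utility, actions, transition, depth):
    """Search all action sequences up to `depth` and return the one whose
    end state maximizes `utility`. Deeper search means a more capable
    optimizer, regardless of which goal it is pointed at."""
    best_seq, best_u = (), utility(state)
    for seq in product(actions, repeat=depth):
        s = state
        for a in seq:
            s = transition(s, a)
        if utility(s) > best_u:
            best_seq, best_u = seq, utility(s)
    return best_seq

# A one-dimensional world: the state is an integer position.
actions = [-1, +1]
transition = lambda s, a: s + a

# Two equally capable agents (same depth) with opposite final goals.
print(plan(0, lambda s: s,  actions, transition, depth=3))  # (1, 1, 1)
print(plan(0, lambda s: -s, actions, transition, depth=3))  # (-1, -1, -1)
```

Nothing in the search machinery constrains the goal, and improving the machinery helps either goal equally; that independence is the orthogonality claim, while the instrumental convergence thesis concerns the similar intermediate steps such optimizers tend to take.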
Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked. They can, however, be important. This paper surveys the terrain and proposes a taxonomy.
by Nick Bostrom
Abstract -- The state of the art as regards the thesis of indeterminacy of translation is as follows. Very much has been said about it, most of which is based on misunderstandings. No satisfactory formulation of the thesis has been presented. No good argument has been given in favour of the thesis. No good argument has been advanced against it. In this paper, I attempt to clear up some of the misunderstandings, to provide a satisfactory formulation of the thesis in non-naturalistic terms, to demonstrate how a naturalistic substitute can be derived from this formulation, to refute the best-known arguments for and against the thesis, and to show how it relates to the thesis of indeterminacy of reference, the theses of semantic and epistemic holism, and to the thesis of underdetermination of theory by data. Finally, I argue that there is an interesting sense in which the indeterminacy is a matter of degree, and express my opinion that this degree is probably not very high.
by Nick Bostrom
Nick Bostrom and Anders Sandberg publish "Whole Brain Emulation Roadmap," a manifesto for mind-uploading.
by Nick Bostrom
Climate change, pandemics: what should we do when the existence of our entire civilization is at stake? A vulnerability analysis from one of the world's most important risk researchers, discussing the pros and cons of surveillance technologies and supranational control institutions. From the author of the international bestseller Superintelligence.