E-book, English, 184 pages
Indset / Neukart: Ex Machina
1st edition, 2025
ISBN: 978-3-911726-02-3
Publisher: Værøy
Format: EPUB
Copy protection: ePub watermark
The God Experiment
Anders Indset is a Norwegian-born philosopher and deep-tech investor who has been recognized by Thinkers50 as one of the leading thinkers in technology and leadership. He is a bridge builder between humanity and technology, shaping the economy of tomorrow with his practical philosophy. Anders is the author of four Spiegel bestsellers, and his books have been translated into over ten languages. He is the founder of the investment and advisory firm Njordis and the Global Institute of Leadership and Technology (GILT), as well as the initiator of numerous projects such as the Quantum Economy Alliance. Dr. Florian Neukart is an Austrian physicist, computer scientist, and business executive specializing in quantum computing (QC) and artificial intelligence (AI). He serves on the Board of Trustees of the International Foundation of AI and QC and co-authored Germany's National Roadmap for Quantum Computing. Currently, he is Chief Product Officer at Terra Quantum AG, following over a decade leading global innovation and research labs at Volkswagen Group. He holds advanced degrees in computer science, physics, and IT, including a Ph.D. in AI and QC. A professor at Leiden University, Florian has authored books and published over 100 articles on topics including space propulsion, materials science, and AI.
The Simulation Hypothesis
The simulation hypothesis, first proposed by philosopher Nick Bostrom in 2003 [5, 6], posits that we very probably live in a computer-generated reality. It is an extension of the “simulation argument” [3, 5, 6], which lays out three possibilities regarding the existence of technologically mature civilizations, at least one of which must be true. According to the simulation hypothesis, most contemporary humans are simulations rather than actual biological entities. The hypothesis is distinguished from the simulation argument by committing to this single possibility; it assigns no higher or lower probability to the other two possibilities of the simulation argument.
The simulation argument presents three basic possibilities for technically “immature” civilizations – like ours. A mature, or post-human, civilization is defined as one that possesses the computing power and knowledge to simulate conscious, self-replicating beings at a high level of detail (possibly down to the molecular nanobot level); immature civilizations lack this ability. The three possibilities are as follows [5]:
- Human civilization will likely die out before reaching a post-human stage. If this is true, then it almost certainly follows that human civilizations at our level of technological development will not reach a post-human level.
- The proportion of post-human civilizations interested in running simulations of their evolutionary histories, or variations thereof, is probably close to zero. If this is true, there is a high degree of convergence among technologically advanced civilizations, and none of them contain individuals interested in running simulations of their ancestors (ancestor simulations).
- We most likely live in a computer simulation. If this is true, almost everyone at our stage of development, ourselves included, is simulated. Absent further evidence, the three possibilities can be regarded as roughly equally likely. Note also that if we do not live in a simulation today, our descendants will almost certainly never run ancestor simulations. In other words, the belief that we may someday reach a post-human level at which we run such simulations is mistaken unless we already live in a simulation today.
According to the simulation argument, at least one of the three possibilities above is true. The simulation hypothesis adds the assumption that the first two do not occur. If a considerable part of our civilization reaches technological maturity, and a significant portion of that mature civilization remains interested in devoting resources to ancestor simulations, then the number of such simulations becomes astronomical. This follows from extrapolating the available computing power and its exponential growth, from the possibility that billions of people could run ancestor simulations with countless simulated agents on their own computers, and from technological progress in adaptive artificial intelligence, which an advanced civilization would possess and use, at least in part, for ancestor simulations. The conclusion that our existence is simulated follows from the assumption that the first two possibilities are false: in that case, there would be far more simulated people like us than non-simulated ones – for every historical person, millions of simulated people. In other words, almost everyone at our level of experience would live inside a simulation rather than outside one [3]. The conclusion of the simulation hypothesis is thus derived from the three basic possibilities of the simulation argument plus the assumption that the first two are not true.
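The counting step behind the “millions of simulated people for every historical person” claim can be sketched numerically. The figures below are illustrative placeholders, not estimates from the text:

```python
# Illustrative sketch of the simulation argument's counting step.
# All numbers are hypothetical placeholders, not estimates from the text.

def fraction_simulated(real_observers, civilizations,
                       sims_per_civilization, observers_per_sim):
    """Fraction of all observers who are simulated, assuming the first
    two possibilities of the simulation argument are false."""
    simulated = civilizations * sims_per_civilization * observers_per_sim
    return simulated / (simulated + real_observers)

# One post-human civilization running a million ancestor simulations,
# each containing as many observers as actual history did:
f = fraction_simulated(real_observers=1e11,
                       civilizations=1,
                       sims_per_civilization=1e6,
                       observers_per_sim=1e11)
print(f)  # extremely close to 1: almost every observer is simulated
```

Under these placeholder numbers, simulated observers outnumber real ones a million to one, which is the whole force of the argument's final step.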
The simulation hypothesis – that humans are simulations – does not follow directly from the simulation argument. The simulation argument merely presents the three possibilities side by side, asserting that one of them is true without saying which. It is also possible that the first possibility comes true and all civilizations, including humankind, die out for some reason. According to Bostrom, there is no evidence for or against the simulation hypothesis that we are simulated beings, nor for the correctness of either of the other two assumptions [5].
From a scientific standpoint, the idea that everything in our perceived reality could be coded rests on the scientific assumption that the laws of nature follow mathematical principles describing some underlying physicality. That an external programmer could control the laws of physics, and even play with them, is a controversial aspect of the simulation hypothesis. Something “outside of the simulation” – an external programmer – is, in effect, a sophisticated, modern restatement of the foundation of monotheistic religions and belief systems. The Swedish technophilosopher Alexander Bard has proposed moving the theory of creationism into physics [11], suggesting that the development of a (digital) superintelligence would be the creation of god – inverting the direction of monotheism from the creator to the created. The advancement of quantum technology may thus propose a move from faith and philosophical contemplation towards progress in scientific explanation.
Critics of Bostrom argue that we do not know how to simulate human consciousness [12–14]. An interesting philosophical problem here is whether it is testable that a simulated conscious being – or an uploaded consciousness – would remain conscious. The reflection on a simulated superintelligence without perception of its own perception was proposed as a thought experiment in the “final narcissistic injury” (reference). One argument against this criticism is that consciousness arises with complexity – that it is an emergent phenomenon. A counter-argument is readily available: numerous complex organs appear to be unconscious, and – despite reasoned statements by a former Google engineer [15] – large amounts of information do not obviously give birth to consciousness. With rising awareness of the field, studies of quantum-physical effects in the brain have also gained strong interest. Although rejected by many scientists, prominent thinkers such as Roger Penrose and Stuart Hameroff have proposed ideas about quantum properties in the brain [16]. Even though this argument has gained some recent experimental support [17], it is not directly relevant to the proposed experiments. A solution to simulated consciousness still seems far away, even though it belongs to the seemingly easy problems of consciousness [18]. The hard problem of consciousness is why humans perceive themselves to have phenomenal experiences at all [18]. Neither tackles the meta-problem of consciousness: why we believe this is a problem – why we have an issue with the hard problem of consciousness at all.
The German physicist Sabine Hossenfelder has argued against the simulation hypothesis, stating that it assumes we can reproduce all observations not by employing the physical laws that have been confirmed to high precision, but by a different underlying algorithm that the external programmer is running [19]. Hossenfelder does not believe this is what Bostrom intended to do, but it is what he did: he implicitly claimed that it is easy to reproduce the foundations of physics with something else. We can approximate the laws we know with a machine, but if that were how nature worked, we could see the difference. Indeed, physicists have looked for signs that natural laws proceed step by step, like computer code, but their search has come up empty-handed. It is possible to tell the difference because attempts to reproduce natural laws algorithmically are usually incompatible with the symmetries of Einstein’s theories of special and general relativity. Hossenfelder has stated that it does not help to say the simulation runs on a quantum computer: “Quantum computers are special purpose machines. Nobody really knows how to put general relativity on a quantum computer” [19]. Her criticism of Bostrom’s argument continues: for it to work, a civilization needs to be able to simulate a lot of conscious beings, and, assuming those beings are conscious, they would in turn need to simulate many conscious beings. This means the information we think the universe contains would need to be compressed. Bostrom therefore has to assume that it is possible to ignore many of the details in the parts of the universe no one is currently looking at, and to fill them in whenever someone looks. Again, there is a need to explain how this is supposed to work. Hossenfelder asks what kind of computer code can do that: what algorithm can identify conscious subsystems and their intentions, and quickly fill in the required information without producing an observable inconsistency?
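The “fill in details only when someone looks” mechanism that Hossenfelder questions corresponds, in computing terms, to lazy evaluation with caching. A toy sketch follows – purely illustrative, with a hypothetical function name and region model, and answering nothing about her actual objection (detecting conscious observers and hiding the deferred computation consistently):

```python
import functools

# Toy illustration of "render detail only when observed" (lazy evaluation).
# The region model is a made-up stand-in, not a physical claim.

@functools.lru_cache(maxsize=None)
def region_detail(x, y):
    """Compute (and cache) the fine-grained state of a region only the
    first time some observer looks at it."""
    print(f"rendering region ({x}, {y})")  # expensive step, runs once per region
    return hash((x, y)) % 256  # stand-in for detailed physical state

# The universe's full detail is never computed up front; observation
# triggers it, and repeated observation reuses the cached result.
a = region_detail(3, 7)   # rendered now
b = region_detail(3, 7)   # served from cache, no re-rendering
assert a == b
```

The sketch shows why the idea is computationally cheap in principle; Hossenfelder's point is that no known algorithm can decide *when* such deferred detail must be filled in without ever producing an observable inconsistency.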
According to Hossenfelder, this is a much more critical problem than Bostrom appears to appreciate. She further states that one cannot, in general, ignore physical processes at short distances and still get the large distances right. Climate models are an example of this: with currently available computing power, models with grid radii in the range of tens of kilometers can be computed [20]. We cannot ignore the physics below this scale, as the weather is a nonlinear system whose information from...