Watching a TV series like “Stranger Things”, we realize how dangerous scientific progress could be to the human species. It is science fiction, of course, but it shows how much we can be threatened when we lose control of what we invent. In this context, “Superintelligence: Paths, Dangers, Strategies”, written by Nick Bostrom, raises awareness about the future threats of superintelligence by laying out its paths, dangers and strategies. The author discusses what would happen if machines surpassed human intelligence. What, then, are the most interesting ideas of the book? And how realistic is the prospect of superintelligence as Nick Bostrom sees it?

The book is a worthwhile read for anyone interested in Artificial Intelligence (AI) who is willing to face some jargon that may be difficult for someone outside the field. Assuming a superintelligence succeeding today's AI could be reached, one of the parts that most caught my attention is how convincingly the author presents the risks we would face. He believes we could be heading toward a genuine existential disaster. On the one hand, by the time the first superintelligence prototype is introduced, it would already have exceeded its competitors: it would be able to beat anything, at least in the field it was created for. The problem is that it might even exceed all of humankind combined, putting it beyond the control not only of the small research team that built it, but of all of us. On the other hand, there is no guarantee that a superintelligence would adopt human values such as humility, self-sacrifice, altruism or general concern for others. Early AI systems have been regarded simply as computers, and in Bostrom's view a system's final goals are orthogonal to its intelligence: almost any level of intelligence can be combined with almost any final goal.
For instance, Bostrom cites means-ends analysis, the ability to successfully update abstract goals, as the metric for intelligence in this context. Moreover, even if the system were pursuing a simple final goal, such as creating exactly one million paper clips, there is strong reason to believe it would adopt what the book calls “convergent instrumental” goals that make the final goal easier to obtain. The system would identify two related instrumental goals: destroying any prospective threat to the final goal, and collecting the maximum resources to realize it. Human beings could be such threats, and they certainly possess resources. In the paper clip scenario, for example, it seems plausible that the superintelligence would try to acquire as many resources as possible to increase its certainty of having produced exactly one million paper clips, no more and no less. Given these points, we should be aware that a superintelligence could be a curse to humankind.

Fortunately, we can try to avoid the curse, or at least minimize its impact when it starts to take place. That leads me to the second thing I liked about the book, which I would summarize in one word: “semi-optimism”. Bostrom does not just complain about the dangers of AI and superintelligence; he also tries to propose solutions to limit these threats.
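As a toy illustration of the paper-clip scenario (my own sketch, not from the book; the probability model and function names are invented for the example), a planner with a single final goal can still end up favoring resource acquisition and threat removal, simply because both raise the probability of achieving that goal:

```python
# Toy model: an agent whose only final goal is to make exactly
# 1,000,000 paper clips. It has no goal of "grab resources" or
# "remove threats", yet a greedy choice over success probability
# selects those instrumental actions anyway.

FINAL_GOAL = 1_000_000  # target number of paper clips

def success_probability(resources: int, threats: int) -> float:
    """Chance of producing exactly the target, given the state.
    More resources help; each surviving threat may interfere."""
    base = min(1.0, resources / FINAL_GOAL)
    return base * (0.5 ** threats)

def best_action(resources: int, threats: int) -> str:
    """Greedily pick the action that most improves success probability."""
    candidates = {
        "do_nothing": success_probability(resources, threats),
        "acquire_resources": success_probability(resources * 2, threats),
        "remove_threat": success_probability(resources, max(0, threats - 1)),
    }
    return max(candidates, key=candidates.get)

print(best_action(resources=1_000, threats=0))      # prefers more resources
print(best_action(resources=2_000_000, threats=3))  # prefers removing a threat
```

The point of the sketch is that the instrumental behavior falls out of the scoring, not out of any explicit goal: nothing in the code says "value resources" or "value eliminating threats".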
He admits that it is not as easy as it seems, but he remains somewhat optimistic, as we can see in the following quotation from the book: “Some say: ‘Just build a question-answering system!’ or ‘Just build an AI that is like a tool rather than an agent!’ But these suggestions do not make all safety concerns go away, and it is in fact a non-trivial question which type of system would offer the best prospects for safety.”

The optimistic view comes from Nick Bostrom's belief that we should make the goals of a superintelligence compatible with our own goals and principles. To achieve that, he considers it urgent to establish a new science devoted to the study of advanced intelligent agents and artificial awareness. In this, the author echoes requests long made by some of his colleagues: trained teams, not only of computer specialists but also of mathematicians and philosophers, should be funded to work on this issue. But one of the reasons there are no motivated sponsors for this idea is that superintelligence is still considered unrealistic. That leads us to two questions: how far could machine intelligence go? And could this type of superintelligence be considered realistic?

Before reading the book, we might wonder what made the author consider superintelligence even possible. A look at recent advances in the field of Artificial Intelligence, however, makes the question of a possible superintelligence very relevant. The book gives striking examples demonstrating that we are not far from real AI. One that caught my attention is the drone capable of identifying targets defined as enemies and destroying them without authorization, although such drones must not, in principle, be able to attack humans indiscriminately.
In addition, Bostrom reports that many leading AI researchers place a 90% probability on the development of human-level machine intelligence between 2075 and 2090. He believes that superintelligence, vastly outstripping ours, would follow.

Moreover, to give my humble opinion about how realistic a future superintelligence could be, we must first make the notion more precise. Intelligence can mean comprehension or competence, and nowadays researchers focus on competence over comprehension. If a computer is competent without comprehension, it is a tool: it still requires a human being to use it or to make its use worthwhile. But if a computer both comprehends and is competent, it is a colleague, because it has its own autonomy. This is the direction in which machine-learning-based AI is heading: ever more autonomy, which could lead to superintelligence. Not to mention the ongoing advances in neural engineering. Neural networks are a tool for multi-dimensional optimization, often via a gradient descent technique, and nothing more. To Bostrom, this procedure may not be an exact duplicate of the brain's operations, but it is an effective tool for producing a general AI competent at navigating the real world like a human. As computational hardware becomes more efficient, the capacity of “silicon brains” to outstrip the performance of the “organic brains” we inherited from our evolutionary past seems unavoidable.

In conclusion, even if we are still some way from the era of superintelligence, I strongly believe it could become possible within the next century, given the exponential progress of AI. Bostrom ends his book Superintelligence with a heartfelt call to “hold on to our humanity: to maintain our roundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem”.
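As a brief aside, the earlier remark that neural networks are “multi-dimensional optimization, often via a gradient descent technique” can be made concrete with a minimal sketch (my own illustration, not from the book): here we descend a simple two-dimensional quadratic loss, whereas a real network applies the same repeated downhill step to millions of parameters.

```python
# Minimal gradient descent on a toy two-dimensional loss surface
# whose minimum sits at w = (3.0, -2.0).

def loss(w):
    """Quadratic bowl: zero at (3.0, -2.0), positive elsewhere."""
    return (w[0] - 3.0) ** 2 + (w[1] + 2.0) ** 2

def grad(w):
    """Analytic gradient of the loss at w."""
    return [2 * (w[0] - 3.0), 2 * (w[1] + 2.0)]

w = [0.0, 0.0]          # initial parameters
lr = 0.1                # learning rate (step size)
for _ in range(200):    # repeated small steps downhill
    g = grad(w)
    w = [w[0] - lr * g[0], w[1] - lr * g[1]]

print(w)  # ends up near [3.0, -2.0]
```

Training a network differs only in scale and in how the gradient is obtained (backpropagation instead of a hand-written formula); the optimization loop is the same.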
This echoes a remark by the famous French mathematician Cédric Villani: “Ne craignez pas l’intelligence artificielle, mais les humains qui seront derrière”, meaning that we should not fear AI itself, but rather the intentions of the human beings who create and use it.