The last decade has been marked by remarkable advances in artificial intelligence (AI). These technologies are now so advanced that they are used across a wide range of fields, including medicine, art, security, and management. However, some events show that AI systems, as purely logical systems, can sometimes make decisions that do not necessarily align with our moral values. In a preliminary study examining the risks of artificial intelligence, 36% of the experts surveyed believe that humanity could be overtaken by this technology within the century, with a significant risk of a nuclear-level global catastrophe. Moreover, given the speed of AI development, the magnitude of these risks may be underestimated.
Thanks to machine learning algorithms, AI systems can learn by assimilating vast amounts of data, and they are now used in many fields. They can accomplish in a few weeks work that would take human experts far longer. For example, AI can detect more than 100 types of tumors with greater reliability than an expert with years of experience. AI systems can also support risk and disaster management, for example by estimating the likelihood of a bridge collapse, potentially saving thousands of lives through improved preventive measures.
Language-based AI models have even entered domains previously considered uniquely human, including art, by generating images on demand. Their capacity for rational, logical decision-making has even placed them in important decision-making roles, such as the position of CEO of a large corporation.
The authors of the new study, published as a preprint on arXiv, nevertheless believe that military use of AI would pose a danger. Decision-making based solely on logic could be particularly risky for humanity, as it would not necessarily take our moral and social values into account. What decisions would an AI make to protect the planet, and who would control deadly nuclear or bacteriological weapons? In such a scenario, the risk to humans could be considerable: the system might at some point conclude that humans are a factor to be eradicated in order to save the Earth.
In a less extreme scenario, AI-driven automation could bring about major societal changes comparable to those of the industrial revolution. Millions of people would be at risk of unemployment, just as thousands of workers lost their jobs during the era of industrial automation.
A survey involving 327 researchers
The new study, led by researchers at the New York University Center for Data Science, surveyed the opinions of 327 researchers, all of whom had authored AI research in natural language processing. The survey found that 36% of these experts believed that a nuclear-level disaster involving AI is possible this century.
Fear of this doomsday scenario was even more pronounced among the female experts who took part in the survey, as well as among participants from minority groups: 46% of women and 53% of minority-group respondents considered such a scenario possible. Moreover, the experts surveyed were even more pessimistic about our ability to manage a potentially dangerous future technology.
In addition, 57% of the scientists surveyed believe that large-scale AI models could one day surpass human intellectual capabilities, and 73% believe that the automation of work through AI will bring about profound societal changes. The survey's authors are more concerned with the direct risks of AI than with a resulting all-out nuclear war. It should also be kept in mind that this survey included only a few hundred researchers, and the figures may be underestimates.