FLI Condemns the Development of Devices with Killer Artificial Intelligence

Posted by Dayam Ali Aslam on March 22nd, 2020

The Future of Life Institute (FLI) has taken a public stand on the development of lethal autonomous weapons through a press release. With this statement, the institute hopes to persuade the world's governments to avoid this type of weapon, which could end in catastrophe.

According to Max Tegmark, president of the FLI, artificial intelligence has enormous potential to help the world, but its abuse must be prevented. He added that AI weapons that decide on their own to kill people are as destabilizing and disgusting as biological weapons, and should be treated the same way.

The pledge against AI weapons has been signed by 170 organizations and many individuals, including the founder of Skype, the founders of Google DeepMind, and the founder of OpenAI. On the other hand, countries such as the United States, Russia, and the United Kingdom have not signed it.

Slaughterbots: the short film about a near future with killer AI drones

Is AI Dangerous? What Notable Figures Think About Artificial Intelligence (AI)

Bill Gates, co-founder of Microsoft

"I am in the camp that is concerned about superintelligence. First, the machines will do a lot of jobs for us and won't be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence will be strong enough to be a concern. I agree with Elon Musk and others on this and don't understand why some people are not concerned." -Bill Gates.

The businessman made it clear that he believes artificial intelligence puts the entire human race at serious risk.

Elon Musk, CEO of Tesla, Inc.
"I am constantly warning people about the problem, but until we see robots on the streets killing people, they won't know how to react, because it seems so ethereal." -Elon Musk.

Musk believes that AI is one of the greatest dangers facing humanity today. Accordingly, he hopes that regulations and restrictions will be placed on the development of weapons with artificial intelligence.

Eric Horvitz, Research Scientist at Microsoft

"We have to be sure that the systems will behave safely and according to our goals, even in unforeseen situations, keeping an eye on their evolution and potential risks at all times." -Eric Horvitz.

Horvitz believes that we should not fear this technology, since it represents a great advance in every field, but that it must be regulated in order to keep it under control and avoid possible risks.

Mark Zuckerberg, founder of Facebook

Zuckerberg sees a positive future for AI; in fact, he says that within a few years we could enjoy a better quality of life thanks to these technologies. His opposing views have even led him to clash with Elon Musk on social media several times.

Stephen Hawking, the most renowned astrophysicist of the century

Hawking believed that artificial intelligence could become a real danger to humanity. His proposal? A world government capable of controlling the power this technology holds.

Fabio Gandour, Head of Research at IBM

Gandour says he is not afraid of AI, since it is programmable: if it bothers you, you simply switch it off and that's it. He is more afraid of people, who can act unpredictably.

What do you think? Can artificial intelligence overcome its current barriers and spiral out of control?