FLI Condemns the Development of Devices with Killer Artificial Intelligence

Posted by Dayam Ali Aslam on March 22nd, 2020

The Future of Life Institute (FLI) has spoken out against the development of lethal autonomous weapons through a press release. With this statement, the institute hopes to make the world's governments see reason and avoid this type of weapon, which could end in catastrophe. According to Max Tegmark, president of the FLI, artificial intelligence has enormous potential to help the world, but its abuse must be avoided. He added that AI weapons that decide to kill people on their own are as destabilizing and disgusting as biological weapons, so they must be treated the same way.

The pledge against AI weapons has been signed by 170 organizations and many individuals, including the founder of Skype, the founders of Google DeepMind, and the founder of OpenAI. On the other hand, countries such as the United States, Russia, and the United Kingdom have not signed it.

Slaughterbots, a short film about a near future with killer AI drones
The FLI has warned before about what could happen if autonomous weapons, or killer robots, are not properly controlled. At the end of 2017, the institute uploaded a rather terrifying short film to the internet in which a company called StratoEnergetics develops a drone capable of ending a person's life and attacking in swarms. The drone can also locate its target regardless of distance: to eliminate a target, only a little data about them is needed, and the drone takes care of the rest. But not everything is rosy. As often happens, weapons end up in the wrong hands and are used to attack important public figures. Although this technology has not yet been developed, the short film gives a glimpse of the near future if AI were misused. Be aware that the video is violent and could be disturbing to many.

Is AI dangerous? What do great personalities think about Artificial Intelligence (AI)?
Some of the most important personalities have spoken about it, others have not; some in favor, others against. Let's see what the great personalities of the world think about artificial intelligence.

Bill Gates, co-founder of Microsoft
Gates is one of the most important figures of today and undoubtedly a brilliant mind. During a question-and-answer session on Reddit, Bill Gates gave his position on artificial intelligence:

"I am in the camp that is concerned about superintelligence. First, the machines will do a lot of work for us and won't be super intelligent. That should be positive if we manage it well. However, a few decades after that, the intelligence will be strong enough to be a concern. I agree with Elon Musk and others on this and don't understand why some people are not concerned." -Bill Gates.

The businessman made it clear that artificial intelligence puts the entire human race at great risk.

Elon Musk, CEO of Tesla, Inc.
Elon Musk is one of the most important entrepreneurs of the moment, known for his ambitious projects and missions, among which is colonizing Mars.

"I am constantly warning people of the problem, but until we see robots on the streets murdering the population, they will not know what to do because they consider it very ethereal" - Elon Musk.

Musk believes that AI is one of the greatest dangers facing humanity today. He hopes that regulations and restrictions will be put in place regarding the development of weapons with artificial intelligence.

Eric Horvitz, Research Scientist at Microsoft
Another opinion on the matter comes from within Microsoft itself, from Eric Horvitz, the head of Microsoft's research area. Horvitz does not completely agree with Bill Gates's words.

"We have to be sure that the systems will behave safely and according to our goals, even in unforeseen situations, keeping an eye on their evolution and potential risks at all times." -Eric Horvitz.

Horvitz believes we should not be afraid of this technology, since it represents a great technological advance in all areas, but that it must be regulated in order to control it and avoid possible risks.

Mark Zuckerberg, founder of Facebook
If you use Facebook, like a large part of the world's population, you know his name and know that he is one of the richest people in the world despite his young age.

Zuckerberg sees a positive future for AI; in fact, he says that in a few years we could have a better quality of life thanks to these technologies.

In fact, because of these opposing views, he has clashed with Elon Musk on social media several times.

Stephen Hawking, the most important astrophysicist of the century
Hawking was one of the most important personalities of the century; his great intelligence revolutionized the world of physics and how we see the universe. Before his death in 2018 from his degenerative disease, Hawking had spoken about AI.

In this regard, the astrophysicist believed that artificial intelligence could become a real danger to humanity.

His proposal? A world government capable of controlling all the power this technology holds.

Fabio Gandour, Head of Research at IBM
Finally, we have the opinion of Fabio Gandour, another important personality in the technology world.

Gandour says he is not afraid of AI, since it is programmable: if it misbehaves, you just disconnect it and that's it.

Instead, he is more afraid of people, who can act unpredictably.

What do you think? Can artificial intelligence overcome current barriers and spiral out of control?
