Google rules out artificial intelligence for weapons

Google has pledged that it will no longer use artificial intelligence to develop ‘intelligent’ weapons systems or systems that violate human rights.

CEO Sundar Pichai, presenting Google’s policy and principles on artificial intelligence technologies, said “we will continue to work with governments and the military in many other areas” such as cyber-security, search and rescue, and military recruitment.

The American company recently faced strong criticism from some 4,600 of its own employees, who signed a protest against a contract concluded last year with the US Pentagon concerning Project Maven, which incorporates artificial intelligence into military drones to help them better recognize their ground targets. That $10 million contract will not be renewed after it expires in March 2019.

The company’s seven principles for deploying artificial intelligence include using it “to help people deal with urgent problems” such as predicting fires and diagnosing and treating diseases.

“How artificial intelligence is developed and used will have a significant impact on society for many years. As leaders in this field, we feel a deep responsibility to get it right,” Pichai said, according to AFP and Reuters.

He said Google’s artificial intelligence programs will be designed to be “socially beneficial” and to “avoid creating or reinforcing unfair bias” or causing harm, which rules out “weapons or other technologies that can cause harm to people.”

A Google representative, however, acknowledged that applying these principles in practice will not be easy, because the company cannot monitor how everyone uses its software tools; some of them can be downloaded for free from the internet.