Google Promises Not To Use A.I. For Military Purposes After Employees Quit To Protest Project Maven


Google has seemingly buckled under the pressure of employee protests and has issued a statement saying that it will not use its A.I. technology for military weapons, detailed the New York Times. However, Google did not sever all ties with the military, saying that it would continue its A.I. work with governments and the military in areas such as cybersecurity, training, and military recruitment.

Previously, Google employees quit their jobs in protest of Project Maven, a highly controversial partnership between Google and the U.S. military. The purpose of the project was to enable military drones to automatically classify the objects they see using A.I. and machine learning, reported Inquisitr. Among the fears surrounding the technology was that it could give the military the ability to launch drone strikes on specific people or objects without any human oversight.

In addition to the employees who quit, a petition protesting the partnership with the military circulated among Google employees and eventually garnered over 4,000 signatures. The Tech Workers Coalition also submitted a letter of demands signed by over 90 academics. However, the company's chief executive, Sundar Pichai, did not mention the protests when discussing Google's new promises regarding the use of A.I. technology. Pichai also did not mention Project Maven by name.


Pichai also underscored, "We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," according to Reuters. Google also said that it would not "pursue AI applications intended to cause physical injury, that tie into surveillance 'violating internationally accepted norms of human rights.'"

Even so, Google intends to continue taking government contracts, saying that “these collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”

As technology becomes more sophisticated, it becomes more difficult for ordinary citizens and lawmakers to understand the implications of new technologies. This is especially true of A.I., which has many potentially dangerous applications that could harm humanity.

However, thanks to the Cambridge Analytica scandal, consumers are becoming increasingly aware of the implications of technology in their day-to-day lives. Google has reiterated its commitment to privacy safeguards in its A.I. work, and also said that it would work to create A.I. systems that do not discriminate based on gender, race, or sexual orientation.


A Google employee responded to the new promises, saying that it would be difficult to know whether Google is following its own guidelines without external oversight. Currently, Google is bidding on a multi-billion-dollar Pentagon project called JEDI, which aims to build a cloud computing system, reported Wired.