Google pledges not to use AI for weapons or surveillance


Google's principles say it will not pursue AI applications meant to cause physical injury, that tie into surveillance "violating internationally accepted norms of human rights", or that present greater "material risk of harm" than countervailing benefits. Google also reportedly told staff it would not bid to renew its contract for the Pentagon's Project Maven after it expires in 2019.

The principles, spelled out by Google chief executive Sundar Pichai in a blog post, commit the company to building AI applications that are "socially beneficial", that avoid creating or reinforcing bias and that are accountable to people.

The AI principles represent a reversal for Google, which initially defended its involvement in Project Maven by noting that the project relied on open-source software that was not being used for explicitly offensive purposes.

The principles were plainly drafted in response to the controversy over Google's involvement in Project Maven, a US Department of Defense program aimed at improving the analysis of drone imagery. The new guidelines come after weeks of internal turmoil in which employees threatened to resign over the agreements Google had made with the federal government to apply its AI capabilities to US military work, and dozens eventually did resign in protest. In the blog post, Google says its principles "are not theoretical concepts" but "concrete standards" that will "actively govern" its future AI work.

It is notable that Google invokes internationally accepted human rights norms here: the United Nations' Special Rapporteur recently called on technology companies to build global human rights law into their products and services by default, rather than applying their own filtering and censorship rules, or those of local governments. The employees' internal letter had gone further: "Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology". Google's principles, for their part, state that its technologies should only be made available for purposes that fall in line with them.

Google said it will not pursue AI development whose use would violate international law.

Academics and students in the fields of computer science and artificial intelligence joined Google employees in voicing concerns about Project Maven, arguing that Google was unethically paving the way for the creation of fully autonomous weapons.

However, Google went on to confirm that it will continue to work with governments and the military.

"At Google, we use AI to make products more useful-from email that's spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy".

Yet Google's cloud-computing unit, where the company is investing heavily, wants to keep working with the government and the Department of Defense, which are spending billions of dollars on cloud services.

"Ultimately, how the company enacts these principles is what will matter more than statements such as this", Asaro said. In response, they circulated an internal letter, arguing that "Google should not be in the business of war".
