After fierce internal debate, Google has promised not to provide artificial intelligence for weapons systems.
At the same time, the company will continue to work with the military and governments, the Internet company said on Thursday. Over the past few months, Google has faced massive criticism over its cooperation with a US Department of Defense drone project. The project used TensorFlow machine-learning software from Alphabet subsidiary Google to detect objects in footage from aircraft cameras.
A number of Google employees criticized this, even though the company emphasized that the work concerned the evaluation of surveillance images, not attacks. There were internal petitions calling for a withdrawal from "Project Maven", and according to media reports several employees resigned in protest. Google has since announced that its involvement in the project will end when the contract expires in 2019.
Google has now published principles for its work on artificial intelligence. Under these, Google will not develop or deploy technologies that could cause harm. Besides weapons, surveillance "that violates internationally accepted norms" is also ruled out. Likewise, Google will not develop artificial intelligence that violates international law or human rights.
The company also pledges to take particular care that its software contains no "unfair biases" and does not discriminate on the basis of skin color, gender, sexual orientation, or income. It will also ensure that its AI systems remain under human control.
"How artificial intelligence is developed and used will massively influence our society for many years to come," wrote Google CEO Sundar Pichai. As one of the pioneers in the field, Google feels a responsibility to get it right.
The company will also continue to work with governments and the military in areas such as cybersecurity, search and rescue, and training, Pichai said.
Google is considered one of the candidates for a billion-dollar contract to provide cloud services to the US Department of Defense.