Published: Sat, June 09, 2018
Markets | By Erika Turner

Google's Sundar Pichai to stop AI weapon project

Google pledged Thursday that it will not use artificial intelligence in applications related to weapons, in surveillance that violates international norms, or in ways that go against human rights.

It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the U.S. military to use its AI technology to analyse drone footage.

But the potential of AI systems to pinpoint drone strikes better than military specialists or identify dissidents from mass collection of online communications has sparked concerns among academic ethicists and Google employees.

Google promotes the benefits of artificial intelligence for tasks like early diagnosis of diseases and the reduction of spam in email. The company also said it will continue to thoughtfully evaluate when to make its technologies available on a non-commercial basis.

The principles were clearly drawn up in response to the controversy surrounding Google's involvement in Project Maven, a U.S. Department of Defense program run out of the Pentagon that was aimed at improving the analysis of drone imagery.


Google is banning the development of artificial-intelligence software that can be used in weapons, chief executive Sundar Pichai said Thursday, setting strict new ethical guidelines for how the tech giant should conduct business in an age of increasingly powerful AI. "So today, we're announcing seven principles to guide our work going forward", he wrote. The principles state that the company will work to avoid "unjust impacts" in its AI algorithms, such as those created by injecting racial, sexual or political bias into automated decision-making. The announcement follows an internal letter circulated by Google employees arguing that "Google should not be in the business of war". Critics have said, however, that Google could go further by adding more public transparency and by working with the United Nations to reject autonomous weapons.

The company also recommended that developers avoid launching AI programs likely to cause significant damage if attacked by hackers, because existing security mechanisms are unreliable. America, however, does not have a great track record when it comes to adhering to "widely accepted principles of international law and human rights" or keeping its word.

CNBC also noted that Pichai's vow to "work to limit potentially harmful or abusive applications" is less explicit than previous Google guidelines on AI.

"We will incorporate our privacy principles in the development and use of our AI technologies", the principles state. Google pointed to cybersecurity, healthcare, and search and rescue as examples of areas where it will keep working with governments.

"While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas", Pichai wrote. "These collaborations are important and we'll actively look for more ways to augment the critical work of these organisations and keep service members and civilians safe".
