Google is banning the development of artificial-intelligence software that can be used in weapons, setting strict new ethical guidelines for how the tech giant should conduct business in an age of increasingly powerful AI.
The new rules could set the tone for the deployment of AI far beyond Google, as rivals in Silicon Valley and around the world compete for supremacy in self-driving cars, automated assistants, robotics, military AI and other industries.
“We recognize that such powerful technology raises equally powerful questions about its use,” chief executive Sundar Pichai wrote in a blog post. “As a leader in AI, we feel a special responsibility to get this right.”
The ethical principles are a response to a firestorm of employee resignations and public criticism over a Google contract with the Defense Department for software that could help analyze drone video, which critics argued had nudged the company one step closer to the “business of war.” Google executives said last week that they would not renew the deal for the military’s AI endeavor, known as Project Maven, when it expires next year.
The company will, however, continue to work with governments and the military in cybersecurity, training, veterans’ health care, search and rescue, and military recruitment, Pichai said. The Web giant — famous for its past “Don’t be evil” mantra — is in the running for two multibillion-dollar Defense Department contracts for office and cloud services.
Google’s $800 billion parent company, Alphabet, is considered one of the world’s leading authorities on AI and employs some of the field’s top talent, including at its London-based subsidiary DeepMind. But the company is locked in fierce competition for researchers, engineers and technologies with Chinese AI firms and domestic competitors, such as Facebook and Amazon, which could contend for the kinds of lucrative contracts Google says it will give up.