Google revises AI guidelines and opens door to military applications

Google updated its ethical guidelines on Tuesday, removing its commitments not to apply artificial intelligence (AI) technology to weapons or surveillance. The previous version of the company’s AI principles included prohibitions on pursuing technologies likely to cause overall harm, including those used for weaponry and surveillance.
In a blog post by Demis Hassabis, Google’s head of AI, and James Manyika, senior vice president for technology and society, the executives explained that growth in AI technology necessitated adjustments to the principles. They emphasized the importance of leading democratic countries in AI development and the need to serve government and national security clients. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” they stated. Additionally, they argued for collaboration among companies and governments sharing these values to create AI that protects people and supports national security.
The revised AI principles now include elements of human oversight and feedback mechanisms to ensure compliance with international law and human rights standards. They also highlight a commitment to testing technology to mitigate unintended harmful effects.
This change marks a departure from Google’s earlier stance, which had made it an outlier among major AI developers. Companies such as OpenAI and Anthropic have formed partnerships with defense firms to work on military technology, while Google had previously opted out of similar work amid employee protests. Google first introduced its AI principles in 2018 after staff opposition to its involvement in the Pentagon’s Project Maven.
In a broader context, the relationship between the U.S. technology sector and the Department of Defense is tightening, with AI increasingly important to military operations. Michael Horowitz, a political science professor at the University of Pennsylvania, remarked that it is sensible for Google to update its policies to reflect this evolving landscape. Concurrently, Lilly Irani, a professor at the University of California, San Diego, suggested that the executive statements reflect ongoing patterns that have emerged over time.
Former Google CEO Eric Schmidt has previously expressed concerns over the United States being outpaced in AI by China, highlighting the geopolitical dynamics around AI development. The recent update comes amid tensions, as the U.S.-China technological rivalry intensifies. Following the announcement of new tariffs from the Trump administration, Beijing initiated an antitrust inquiry into Google.
Internal protests over Google’s cloud computing contracts with Israel have raised ethical concerns among employees, who cite potential harms to the Palestinian population. Reports indicate that Google granted Israel’s Defense Ministry expanded access to its AI tools following the Hamas attacks of October 7, 2023.
Featured image credit: Kerem Gülen/Ideogram