Google OKs its AI for use in weapons, surveillance
Google has quietly changed its policies, removing language that prohibited the use of its AI for weapons or surveillance.
Wired reports:
“The company removed language promising not to pursue ‘technologies that cause or are likely to cause overall harm,’ ‘weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,’ ‘technologies that gather or use information for surveillance violating internationally accepted norms,’ and ‘technologies whose purpose contravenes widely accepted principles of international law and human rights.’”
The shift follows similar changes at Meta, Anthropic, and OpenAI, which have led those companies to pursue federal contracts for the use of their technology in defense, law enforcement, and national security applications.
In a blog post describing the policy changes, Google DeepMind CEO (and Nobel Prize winner) Demis Hassabis and James Manyika, SVP of technology and society, said:
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”