DeepMind staff raise concerns over military contracts

In an internal letter dated 16 May, staff argued that the company's involvement in military contracts contradicts Google's ethical stance and mission statement on responsible AI.


Tensions are rising within Google’s AI research division, DeepMind, as over 200 employees have expressed concerns regarding the company’s involvement in defence contracts. The discontent stems from Google’s reported agreements to provide AI and cloud computing services to military organisations, including the Israeli military.

In an internal letter circulated in May, the employees voiced their unease, emphasising that such contracts contradict Google’s mission to lead in ethical AI development. The letter argues that any association with military activities could undermine the company’s commitment to responsible AI, as stated in Google’s AI Principles.

Why does this matter?

The dissent highlights a growing cultural divide between DeepMind and Google, particularly regarding the ethical implications of their technologies. DeepMind, acquired by Google in 2014, had previously been assured that its AI developments would not be used for military or surveillance purposes, a promise now seemingly in jeopardy.

The situation underscores the ongoing ethical debates within tech companies about the application of AI in military contexts, raising questions about the balance between innovation and ethical responsibility.