In a blog post published by CEO Sundar Pichai, Google set out a series of principles to guide the company's work on artificial intelligence (AI). Presenting the principles, Pichai noted that they 'are not theoretical concepts' but 'concrete standards that will actively govern [the company's] research and product development and will impact [its] business decisions'. In Google's vision, AI should: (1) be socially beneficial; (2) avoid creating or reinforcing unfair bias; (3) be built and tested for safety; (4) be accountable to people; (5) incorporate privacy design principles; (6) uphold high standards of scientific excellence; and (7) be made available for uses that accord with these principles.

The company also commits not to design or deploy AI in the following areas: technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance violating internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

The announcement of these principles comes in the context of protests by Google employees over the company's partnership with the US Department of Defense on the use of AI to analyse drone footage. After some employees resigned in protest, Google reportedly committed not to renew the contract upon its expiration in 2019.