Google Employees Leaving Over Project Maven

Google's Project Maven

Do no harm.

This is a creed that Google has said it would live by since its inception. Times have changed, and expectations for the company may have shifted, but the belief that Google should put this mantra into practice in everything it does is still prized by many, including the dozen employees who resigned and the roughly 4,000 who signed a petition urging Google CEO Sundar Pichai to pull out of the military AI project known as Maven.

Project Maven is a US Department of Defense initiative aimed at harnessing the power of artificial intelligence within the US military. Its immediate objective is to fast-track the analysis of military footage captured by drones, using AI to distinguish between humans and other objects.
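Maven's actual software has not been made public, but the task described is standard object detection. As a rough, illustrative sketch of what "distinguish between humans and other objects" looks like in practice, here is a minimal example using an off-the-shelf, COCO-pretrained detector; the model choice and the frame filename are assumptions for illustration only:

```python
# Illustrative only: a generic object-detection pass over a single video
# frame, using torchvision's COCO-pretrained Faster R-CNN. Nothing here
# reflects Maven's actual (non-public) software; the frame path is made up.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Hypothetical frame extracted from drone footage.
frame = convert_image_dtype(read_image("frame_0001.png"), torch.float)

with torch.no_grad():
    detections = model([frame])[0]  # boxes, labels, scores for one image

# COCO class index 1 is "person"; everything else counts as "other object".
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:
        kind = "human" if label.item() == 1 else "other object"
        print(f"{kind} detected (confidence {score:.2f})")
```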

The project seems innocent enough, but it’s not that simple.

Google Employee Pushback to Project Maven

Employees at Google worry that being involved in the business of war is a stepping stone to bigger, badder, and darker things. As mentioned earlier, roughly 4,000 of the company's 85,000 employees signed a petition in the hope of persuading Pichai to think twice about moving the company in this seemingly belligerent direction.

The petition gained further momentum after a dozen employees resigned upon learning that Google was crossing the threshold into war technology. It calls for the company to back out of Project Maven and to draft a new policy publicly committing Google and its contractors to never build technology that encourages or supports any kind of war effort. The signatories point to the fact that Google has never before involved itself in activities that might support a war, a precedent they do not want the company to break.

This isn't the first time employee and public pressure has changed the company's course: in 2015, employees and bloggers successfully pushed Google to reverse a policy banning sexually explicit content on its Blogger platform. It is, however, the first time employees have actually resigned in response to company actions they oppose.

Google’s employees aren’t the only ones hoping that Pichai and the company terminate their relationship with the US military and Project Maven. Academics and researchers, along with top executives at other companies, wrote an open letter to Google CEO Sundar Pichai, Alphabet CEO Larry Page, and Google Cloud CEO Diane Greene expressing their support for the employees who resigned and for those who drafted and signed the petition. The letter urged Google to join others in the field in calling for an international treaty prohibiting autonomous weapon systems.

Google’s Response to Project Maven Protests

Though Google has yet to stop work on the current Project Maven contract, Google Cloud CEO Diane Greene announced in a closed-door meeting with employees in late May that the company would not renew the contract with the Department of Defense. Greene confirmed this officially in a post on June 7th.

That same day, Pichai published new principles to guide Google’s AI work. The guidelines are a step in the right direction, but they are worryingly subjective. The first principle, “Be socially beneficial,” for example, states that “we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.” That sounds well and good, but the cost-benefit analysis rests in Google’s hands alone. Devin Coldewey of TechCrunch has done a solid job outlining the concerns with Google’s new AI guidelines.

In the end, the employees’ protests have brought about some change, but Google’s response hasn’t been as decisive as the employee-activists and the academic community had hoped. The company’s concessions fall short of the request to halt work on Project Maven, and the principles Pichai outlined don’t explicitly bar Google from engaging in military projects going forward.

Why Is Regulating Artificial Intelligence in the Military Important?

The thing about AI is that, at least at this point, the responsibility falls entirely on us. We create the technology, design the algorithms, and even tell the programs what to focus on so that they can learn. If all the government wants to do with the technology is expedite its image-recognition process, that’s all it will do. The fear, however, is that this is simply the first step down a dangerous and slippery path toward weaponized AI, and toward Google becoming a name synonymous with violence. This is why the employees have decided to push back against the company and its policies: once the process begins, it’s much harder to pull out.

In 1899, the world’s most powerful states signed a treaty banning the military use of aircraft, because their leaders recognized the destructive potential of such weapons. After five years, though, the treaty was allowed to expire, and war as we know it began: weaponized aircraft fed the slaughter that became World War I.

Experts see a similar story forming in regard to artificial intelligence.

If we don’t put permanent safeguards in place, there is a very real chance that AI will change the face of war as we know it, and not necessarily for the better. AI could be as transformative as nuclear weapons. When it comes to war, we need to consider not only the influence AI might have now but, more importantly, the impact it may have in the future.

Should AI be banned from military use? What nuances matter most here, in your opinion?