Artificial Intelligence: Let’s Get Ethical, [and Technical]
Last week, artificial intelligence technologist Liz O’Sullivan wrote an article for the New York-based American Civil Liberties Union (ACLU) outlining why she quit her job at AI company Clarifai in moral protest.
The company’s CEO, Matt Zeiler, she said, had declined to sign a pledge, backed by many other AI researchers and scientists worldwide, promising not to do research that contributed to the development of autonomous weapons systems.
O’Sullivan had written to Zeiler asking him to sign the pledge — he instead “called a companywide meeting and announced that he was totally willing to sell autonomous weapons technology to our government”.
Before we get into O’Sullivan’s objections, it’s worth noting that Zeiler has written a blog post addressing some of the concerns and criticisms that the company has faced. First, he says that the company is there to ‘do good’ through AI — something that O’Sullivan herself acknowledges in her article for the ACLU.
Specifically as it concerns Project Maven — the Pentagon military program focussed on the use of autonomous technology in warfare — Zeiler had this to say: “After careful consideration, we determined that the goal for our contribution to Project Maven — to save the lives of soldiers and civilians alike — is unequivocally aligned with our mission.
“For this project, we’re using the same widely-available version of the Clarifai technology that any developer or business can access today. The capabilities developed here also have important civilian applications such as disaster response and search and rescue. We are a leading AI company and with Responsibility being a core value of ours, we believe in putting our resources toward society’s best interests, and that includes America’s security.”
That’s a sentiment that’s got some pretty serious backing, too. Michael Bloomberg, the founder of Bloomberg and former mayor of New York (as well as rumoured presidential candidate), wrote in an op-ed that Google’s decision to back out of the project (more on that later) was a “grave error”. Continuing to work on the project would have allowed Google to save lives, he said.
O’Sullivan, clearly, does not share that viewpoint. “I could not abide being part of that, so I quit,” she wrote. “My objections were not based just on a fear that killer robots are ‘just around the corner.’ They already exist and are used in combat today.”
She reassured readers that artificial general intelligence is still a long way off; we aren’t going to see the Terminator any time soon. The threats, she said, are far more banal, but just as dangerous.
“The core issue is whether a robot should be able to select and acquire its own target from a list of potential ones and attack that target without a human approving each kill. One example of a fully autonomous weapon that’s in use today is the Israeli Harpy 2 drone (or Harop), which seeks out enemy radar signals on its own. If it finds a signal, the drone goes into a kamikaze dive and blows up its target,” she wrote.
And that issue can be boiled down to one phrase: “the human in the loop”. Having human oversight of these decisions is crucial, O’Sullivan said. She identified six key reasons why this is important.
Those reasons are: accidents, hacking, the black box problem, morality and context, war at machine speed, and escalation.
Those points are important, she says, but ultimately, it is the ‘dual use’ of technologies like face recognition and object localisation that concerns O’Sullivan.
What that means is that even though there are companies out there looking to ‘democratise’ those technologies, that doesn’t stop them from being used for “targeting people with killer drones”, she writes.
Maven and people power
She noted that Google employees managed to force the company to cut its ties with the government project, and she calls on other tech workers to do the same.
Project Maven, she says, “might just be about ‘counting things’ as the Pentagon claims. It might also be a targeting system for autonomous killer drones, and there is absolutely no way for a tech worker to tell”.
What it goes to show is that tech workers no longer think only about technology. Regardless of your view on the Maven issue, it is clear that, in so many ways, technologists continue to hold the key to a company’s fortunes.