As concerns mount over the uses of data, some in the field are trying to forge ethical guidelines.

The tech industry is having a moment of reflection. Even Mark Zuckerberg and Tim Cook are talking openly about the downsides of software and algorithms mediating our lives. And while calls for regulation have been met with increased lobbying to block or shape any rules, some people around the industry are entertaining forms of self-regulation. One idea swirling around: Should the programmers and data scientists massaging our data sign a kind of digital Hippocratic oath?

Microsoft released a 151-page book last month on the effects of artificial intelligence on society that argued “it could make sense” to bind coders to a pledge like that taken by physicians to “first do no harm.” In San Francisco Tuesday, dozens of data scientists from tech companies, governments, and nonprofits gathered to start drafting an ethics code for their profession.

The general feeling at the gathering was that it’s about time the people whose powers of statistical analysis target ads, advise on criminal sentencing, and accidentally enable Russian disinformation campaigns woke up to their power and used it for the greater good.

“We have to empower the people working on technology to say ‘Hold on, this isn’t right,’” DJ Patil, chief data scientist for the United States under President Obama, told WIRED. (His former White House post is currently vacant.) Patil kicked off the event, called Data For Good Exchange. The attendee list included employees of Microsoft, Pinterest, and Google.

Patil envisages data scientists armed with an ethics code throwing themselves against corporate and institutional gears to prevent things like the deployment of biased algorithms in criminal justice.

It's a vision that appeals to some who analyze data for a living. “We’re in our infancy as a discipline and it falls to us, more than anyone, to shepherd society through the opportunities and challenges of the petabyte world of AI,” Dave Goodsmith, of enterprise software startup DataScience.com, wrote in the busy Slack group for Tuesday’s effort.

Others are less sure. Schaun Wheeler, a senior data scientist at marketing company Valassis, followed Tuesday’s discussions via Slack and a live video stream. He arrived skeptical and left more so. The draft code looks like a list of general principles no one would disagree with, he says, and is being launched into a field that lacks authorities or legislation to enforce rules of practice anyway. Although the number of formal training programs for data scientists is growing, many at work today, including Wheeler, are self-taught.

Tuesday’s discussions yielded a list of 20 principles that will be reviewed and released for wider feedback in coming weeks. They include “Bias will exist. Measure it. Plan for it,” “Respecting human dignity,” and “Exercising ethical imagination.” The project's organizers hope to see 100,000 people sign the final version of the pledge.

“The tech industry has been criticized recently, and I think rightfully so, for its naive belief that it can fix the world,” says Wheeler. “The idea that you can fix an entire complex problem like data breaches through some kind of ethical code is to engage in that same kind of hubris.”

One topic of debate Tuesday was whether a nonbinding, voluntary code would really protect data scientists who dared to raise ethical concerns in the workplace. Another was whether such a code would do much to change how companies actually behave.

Rishiraj Pravahan, a data scientist at AT&T, said he is supportive of the effort to draft an ethics pledge. He described how, after he and a colleague declined to work on a project involving another company they didn’t think was ethical, their wishes were respected. But other workers were swapped in, and the project went ahead anyway.

Available evidence suggests that tech companies typically take ethical questions to heart only when they sense a direct threat to their balance sheet. Zuckerberg may be showing contrition about his company’s control over the distribution of information, but that contrition came only after political pressure over Facebook’s role in Russian interference in the 2016 US election.

Tech companies that make money by providing platforms for others have additional reason not to be too prescriptive about ethics: anything that might scare customers away from building on their platforms is a risk to revenue.

Microsoft’s manifesto on AI and society discussed a Hippocratic oath for coders and an ethical review process for new uses of AI. But Microsoft President Brad Smith suggests that the company wouldn’t expect customers building AI systems using Microsoft’s cloud services to necessarily meet the same standards. “That’s a tremendously important question and one we have not yet answered ourselves,” he says. “We create Microsoft Word and know people can use it to write good things or horrendous things.”

Privacy activist Aral Balkan argues that an ethics code like that drafted this week could actually worsen societal harms caused by technology. He fears it will be used by corporations as a signal of virtue, while they continue business as usual. “What we should be exploring is how we can stop this mass farming of human data for profit,” he says. He points to the European Union’s General Data Protection Regulation coming into force this year as a better model for preventing algorithmic harms.

Patil was once chief scientist at LinkedIn, but, somewhat like Balkan, he is skeptical of tech companies’ ability to think carefully about the effects of their own personal-data-fueled products. “I don’t think we as a society can rely on that right now, because of what we’ve seen around social platforms and the actions of tech companies motivated only by profits,” he says.

Longer term, Patil says one of his hopes for the draft ethics code thrashed out Tuesday is that it helps motivate policy makers to set firmer, but well-considered, limits. “I would like to see what happens here start to define what policy looks like,” he says.