Arvind Narayanan, a computer scientist at Princeton University, has helped define and redefine the concepts of privacy and fairness in machine learning. Caroline Gutman for Quanta Magazine

Introduction

Once in a while, a person can take an abstract concept that’s seemingly too vague for formal study and offer an elegant formal definition. Claude Shannon did it with information, and Andrey Kolmogorov did it with randomness. For the past few years, researchers have been trying to do the same for the concept of fairness in machine learning. Unfortunately, this has been trickier. Not only is the concept harder to define, but it’s also impossible for any single definition to satisfy every desirable fairness criterion at once. Arvind Narayanan, a computer scientist at Princeton University, has been instrumental in contextualizing different views and helping this new field establish itself.

His career has spanned all levels of abstraction, from theory to policy, but the journey that eventually led to his current work began in 2006. That year, Netflix sponsored a competition that would award $1 million to whoever improved the accuracy of their recommendation system by 10%. Netflix provided a purportedly anonymous data set of users and their ratings, with personally identifiable information removed. But Narayanan showed that with a sophisticated statistical technique, you need only a few data points to reveal the identity of an “anonymous” user in the data set.
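To make the idea concrete, here is a minimal sketch in Python of the kind of linkage such an attack relies on. The records, movie titles, ratings and matching threshold below are all hypothetical, and the scoring is far cruder than the published method; the point is only that a few (movie, rating, approximate date) observations can single out one record in a large “anonymized” release.

```python
# Minimal, illustrative sketch of a sparsity-based linkage attack. All records,
# ratings and the threshold are hypothetical; the scoring is a crude stand-in
# for the published Narayanan-Shmatikov method.

from datetime import date

# Hypothetical "anonymized" release: record ID -> {movie: (rating, date)}.
released = {
    "user_0041": {"Heat": (5, date(2005, 3, 2)), "Memento": (4, date(2005, 3, 9))},
    "user_0042": {"Heat": (3, date(2004, 1, 5)), "Alien": (5, date(2004, 2, 1))},
}

# What an attacker might know about the target from public sources
# (say, reviews posted under the target's real name elsewhere).
aux = {"Heat": (5, date(2005, 3, 1)), "Memento": (4, date(2005, 3, 10))}

def score(record, aux_info, day_tolerance=14):
    """Count auxiliary observations that match a record on rating and rough date."""
    hits = 0
    for movie, (rating, when) in aux_info.items():
        if movie in record:
            r, d = record[movie]
            if r == rating and abs((d - when).days) <= day_tolerance:
                hits += 1
    return hits

scores = {rid: score(rec, aux) for rid, rec in released.items()}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
best_id, best_score = ranked[0]
runner_up = ranked[1][1] if len(ranked) > 1 else 0

# Declare a match only if the best candidate clearly stands out from the rest,
# a crude version of the "eccentricity" test in the original paper.
if best_score - runner_up >= 2:
    print(f"Record {best_id} is likely the target")
```

The higher-dimensional the data, the more likely it is that a handful of observations is unique to a single record, which is what makes such releases so hard to anonymize.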

Since then, Narayanan has focused on other areas where theory meets practice. Through the Princeton Web Transparency and Accountability Project, his team uncovered surreptitious ways that websites track users and extract sensitive data. His team found out that a group like the National Security Agency could use web browsing data (specifically, cookies placed by third parties) not only to discover the user’s real-world identity, but also to reconstruct 62% to 73% of their browsing history. They showed that — to riff on the famous New Yorker cartoon — on the internet, websites now know you’re a dog.

In recent years, Narayanan has turned specifically to machine learning — an application of artificial intelligence that gives machines the ability to learn from data. While he welcomes advances in AI, he points out how such systems can fail even with good intentions, and how these otherwise useful technologies can become tools to justify discrimination. In this light, the seemingly unconnected dots that have defined Narayanan’s research trajectory form a kind of constellation.

Quanta spoke with Narayanan about his work on de-anonymization, the importance of statistical intuition, and the many pitfalls of AI systems. The interview has been condensed and edited for clarity.


Video: Narayanan discusses his work on de-anonymization and fairness and why it matters. Christopher Webb Young/Quanta Magazine; Rick Cook for Quanta Magazine


Did you always want to do math and science research?

I grew up very interested in both, but primarily in math. I was good at solving puzzles and even had some success at the International Mathematical Olympiad. But I had a huge misconception about the difference between puzzle-solving and research math.

And so early on, I focused my research on cryptography, especially theoretical cryptography, because I was still laboring under the delusion that I was very good at math. And then the rest of my career has been a journey of realizing that’s actually not my strength at all.

That must have served as a good background for your de-anonymization work.

You’re right. What allowed the de-anonymization research is the skill I call statistical intuition. It’s not actually formal mathematical knowledge. It’s being able to have an intuition in your head like: “If I take this complex data set and apply this transformation to it, what is a plausible outcome?”

Intuition might often be wrong, and that’s OK. But it’s important to have intuition because it can guide you toward paths that might be fruitful.

Narayanan’s work has shown the importance, and difficulty, of maintaining privacy online. Caroline Gutman for Quanta Magazine


How did statistical intuition help with your work on the Netflix data?

I had been trying to devise an anonymization scheme for high-dimensional data. It completely failed, but in the process of failing I’d developed the intuition that high-dimensional data cannot be effectively anonymized. Of course, Netflix, with its competition, claimed to have done exactly that.

I had my natural skepticism of companies’ marketing statements, so I was motivated to prove them wrong. My adviser, Vitaly Shmatikov, and I worked on it for a few intense weeks. Once we realized that the work was really having an impact, I started doing more.

What was the overall impact? Did you hear back from Netflix and other companies whose data turned out to be not quite so anonymous?

Well, one positive impact is that it spurred the science of differential privacy. As for how companies reacted, there have been a few different responses. In many cases, companies that would otherwise have released data sets to the public are no longer doing so; they’re weaponizing privacy as a way to fight transparency efforts.

Facebook is known for doing this. When researchers go to Facebook and say, “We need access to some of this data to study how information is propagating on the platform,” Facebook can now say, “No, we can’t give you that. That will compromise the privacy of our users.”

You once wrote a paper arguing that the term “personally identifiable information” can be misleading. How so?

I think there is confusion among policymakers arising from two different ways in which the term is used. One meaning is information about you that is very sensitive, like your Social Security number. The other is information that can be used to index into other data sets and thereby to find more information about you.

These two have different meanings. I have no beef with the concept of PII in the first sense. Certain pieces of information about people are very sensitive, and we should treat them more carefully. But while your email address is not necessarily very sensitive for most people, it’s still a unique identifier that can be used to find you in other data sets. As long as some combination of attributes about a person is available anywhere else in the world, that’s all you need for de-anonymization.
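A short sketch with made-up records illustrates the point; every field name and value below is hypothetical. None of the linking attributes is especially sensitive on its own, yet their combination is enough to tie a sensitive record back to a name.

```python
# Illustrative sketch with made-up records: attributes that are not sensitive
# on their own can, in combination, act as a join key between a "de-identified"
# data set and a public one that carries real names.

health_records = [  # "anonymized": names removed, other attributes kept
    {"zip": "08540", "birth": "1975-04-02", "sex": "F", "diagnosis": "asthma"},
    {"zip": "08542", "birth": "1980-09-17", "sex": "M", "diagnosis": "diabetes"},
]

public_directory = [  # publicly available, with names
    {"zip": "08540", "birth": "1975-04-02", "sex": "F", "name": "J. Smith"},
]

def quasi_id(record):
    """Combination of individually 'non-sensitive' attributes used as a join key."""
    return (record["zip"], record["birth"], record["sex"])

names_by_key = {quasi_id(p): p["name"] for p in public_directory}

for rec in health_records:
    name = names_by_key.get(quasi_id(rec))
    if name:
        # The sensitive attribute has just been re-linked to a named individual.
        print(f'{name}: {rec["diagnosis"]}')
```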

Narayanan on the Princeton campus. Caroline Gutman for Quanta Magazine


How did you eventually come to study fairness?

I taught a fairness and machine learning course in 2017. That gave me a good idea of the open problems in the field. And together with that, I gave a talk called “21 Fairness Definitions and Their Politics.” I explained that the proliferation of technical definitions was not because of technical reasons, but because there are genuine moral questions at the heart of all this. There’s no way you can have one single statistical criterion that captures all normative desiderata — all the things you want. The talk was well received, so those two together convinced me that I should start to get into this topic.
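A toy calculation makes the tension concrete. The numbers below are invented purely for illustration, and the setup is a drastically simplified version of the well-known impossibility results in this area: when two groups differ in their base rates for an outcome, common statistical fairness criteria cannot all hold at once, so choosing among them is unavoidably a normative decision.

```python
# Toy numbers, chosen only for illustration, showing why no single statistical
# criterion can capture every fairness desideratum. When two groups have
# different base rates for the outcome, even a perfect predictor that equalizes
# error rates across groups cannot also equalize positive-prediction rates.

groups = {
    "group_a": {"actual_pos": 50, "actual_neg": 50},   # base rate 50%
    "group_b": {"actual_pos": 25, "actual_neg": 75},   # base rate 25%
}

for name, g in groups.items():
    n = g["actual_pos"] + g["actual_neg"]
    # Assume a perfect predictor: it flags exactly the people who are positive.
    predicted_pos_rate = g["actual_pos"] / n   # demographic-parity metric
    true_positive_rate = 1.0                   # equal across groups
    false_positive_rate = 0.0                  # equal across groups
    print(f"{name}: P(pred=1) = {predicted_pos_rate:.2f}, "
          f"TPR = {true_positive_rate:.2f}, FPR = {false_positive_rate:.2f}")

# The error-rate criteria match across groups, but P(pred=1) is 0.50 vs. 0.25,
# so demographic parity fails. Forcing P(pred=1) to match instead would require
# misclassifying people in one group, breaking the error-rate criteria. Which
# criterion to prioritize is a normative choice, not a technical one.
```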

You also gave a talk on detecting AI snake oil, which was also well received. How does that relate to fairness in machine learning?

So the motivation for this was that there’s clearly a lot of genuine technical innovation happening in AI, like the text-to-image program DALL·E 2 or the chess program AlphaZero. It’s really amazing that this progress has been so rapid. A lot of that innovation deserves to be celebrated.

The problem comes when we use this very loose and broad umbrella term “AI” for things like that as well as more fraught applications, such as statistical methods for criminal risk prediction. In that context, the type of technology involved is very different. These are two very different kinds of applications, and the potential benefits and harms are also very different. There is almost no connection at all between them, so using the same term for both is thoroughly confusing.

People are misled into thinking that all this progress they’re seeing with image generation would actually translate into progress toward social tasks like predicting criminal risk or predicting which kids are going to drop out of school. But that’s not the case at all. First of all, we can only do slightly better than random chance at predicting who might be arrested for a crime. And that accuracy is achieved with really simple classifiers. It’s not getting better over time, and it’s not getting better as we collect more data sets. So all of these observations are in contrast to the use of deep learning for image generation, for instance.

How would you distinguish different types of machine learning problems?

This is not an exhaustive list, but there are three common categories. The first category is perception, which includes tasks like describing the content of an image. The second category is what I call “automating judgment,” such as when Facebook wants to use algorithms to determine which speech is too toxic to remain on the platform. And the third one is predicting future social outcomes among people — whether someone would be arrested for a crime, or if a kid is going to drop out of school.

In all three cases, the achievable accuracies are very different, the potential dangers of inaccurate AI are very different, and the ethical implications that follow are very different.

For instance, face recognition, in my classification, is a perception problem. A lot of people talk about face recognition being inaccurate, and sometimes they’re right. But I don’t think that’s because there are fundamental limits to the accuracy of face recognition. That technology has been improving, and it’s going to get better. That’s precisely why we should be concerned about it from an ethical perspective, especially when it’s put into the hands of the police, who might be unaccountable, or of states that are not transparent about its use.

Narayanan’s research often includes working with sociologists, philosophers, political scientists, lawyers and other outside experts. “Interdisciplinary collaborations have been some of the most enjoyable collaborations,” he said. Caroline Gutman for Quanta Magazine


What makes social prediction problems so much harder than perception problems?

Perception problems have a couple of characteristics. First, there’s no ambiguity about whether there’s a cat in an image, so you have the ground truth. Second, you have essentially unlimited training data, because you can use all the images on the web. And if you’re Google or Facebook, you can use all the images that people have uploaded to your app. Those two factors, the lack of ambiguity and the availability of data, allow classifiers to perform really well.

That’s different from prediction problems, which don’t have those two characteristics. There’s a third difference I should mention, which in some sense is the most important one: The moral consequences of putting these prediction models into action are very different from using a language translation tool on your phone, or an image labeling tool.

But a mistake there is not of the same seriousness as a mistake by a tool used to determine whether someone should be, say, detained pretrial. Those decisions have consequences for people’s freedom. So the irony is that the area where AI works most poorly, hasn’t really been improving over time, and is unlikely to improve in the future is the area with all of these incredibly important consequences.

Much of your work has required talking to experts outside your field. What’s it like to collaborate with others like this?

Interdisciplinary collaborations have been some of the most enjoyable collaborations. I think any such collaboration will have its frustrating moments because people don’t speak the same language.

My prescription for that is: culture, then language, then substance. If you don’t understand their culture — such as what kind of scholarship they value — it’s going to be really hard. What’s valuable to one person may seem irrelevant to another. So the cultural aspects have to be navigated first. Then you can start establishing a common language and vocabulary and finally get to the substance of the collaboration.

How optimistic are you about whether we can safely and wisely adopt new technology?

Part of the issue is a knowledge gap. Decision-makers, government agencies, companies and other people who are buying these AI tools might not recognize the serious limits to predictive accuracy.

But ultimately I think it’s a political problem. Some people want to cut costs, so they want an automated tool that eliminates jobs. That creates very strong pressure to believe whatever these vendors are saying about their predictive tools.

Those are two different problems. People like me can perhaps help address the knowledge gap. But addressing the political problem requires activism. It requires us to take advantage of the democratic process. It’s good to see that a lot of people are doing that. And in the long run, I think we can push back against the harmful and abusive applications of AI. I don’t think it’s going to change in an instant, but through a long, drawn-out process of activism that has already been going on for a decade or more. I’m sure it’ll continue for a long time.