In the tug of war between data science and privacy, deidentification is meant to be a compromise. When datasets containing personal information are shared for research or used by companies, the data is disguised – by removing the final one or two digits of a zip code, for example – while still preserving its utility for insight. But while deidentification is often intended to satisfy legal requirements for data privacy, the most commonly used methods stand on shaky technical ground.

In a new paper that received a Distinguished Paper Award at the USENIX Security Symposium in August 2022, UChicago CS Assistant Professor Aloni Cohen deals the latest decisive blow against the most popular deidentification techniques. By describing a new kind of attack called “downcoding” and demonstrating the vulnerability of a deidentified data set from an online education platform, Cohen sends a warning that these data transformations should not be considered sufficient to protect individuals’ privacy.

For years, computer science security and privacy researchers have sounded the alarm about the methods most often used to deidentify data, finding new attacks that can reidentify seemingly anonymized data points and proposing fixes. Yet these methods remain in common use and are held up as legally sufficient for fulfilling privacy-protection regulations such as HIPAA and GDPR.

“Policymakers care about real world risks instead of hypothetical risks,” Cohen said. “So people have argued that the risks security and privacy researchers pointed out were hypothetical or very contrived.”

While pursuing his PhD at MIT, Cohen set out to disprove this argument. The most common deidentification methods stem from an approach called k-anonymity, which transforms data just enough to make each individual indistinguishable from a certain number of other individuals in the data set. Cohen’s idea was that the very goal of this deidentification method left it open to attack.

“The goal when you’re doing that sort of technique is to redact as little as you need to guarantee a target level of anonymity,” Cohen said. “But if you achieve that goal of redacting just as little as you need, then the fact that that’s the minimum might tell you something about what was redacted.”
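
For intuition, here is a minimal sketch of how a k-anonymity transformation and check might look in Python; the dataset, column names, and choice of k are hypothetical and not drawn from Cohen’s paper.

```python
# A minimal sketch of k-anonymity via generalization; the records, column
# names, and value of K are hypothetical, for illustration only.
import pandas as pd

records = pd.DataFrame({
    "zip": ["60637", "60615", "60649", "60637", "60615"],
    "age": [24, 27, 21, 25, 29],
    "grade": ["A", "B", "C", "B", "A"],  # sensitive attribute, left untouched
})

K = 3  # each record should be indistinguishable from at least K-1 others

# Generalize the quasi-identifiers just enough: truncate zip codes, bucket ages.
anon = records.copy()
anon["zip"] = anon["zip"].str[:3] + "**"                  # "60637" -> "606**"
anon["age"] = (anon["age"] // 10 * 10).astype(str) + "s"  # 27 -> "20s"

# The table is k-anonymous if every quasi-identifier combination occurs >= K times.
group_sizes = anon.groupby(["zip", "age"]).size()
print(anon)
print(f"{K}-anonymous:", bool((group_sizes >= K).all()))
```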

Deidentification works by redacting quasi-identifiers – information that can be put together with data from a second source to de-anonymize a data subject. Failing to account for all possible quasi-identifiers can lead to disclosures. In one famous example, researchers took deidentified Netflix viewing data and combined it with data from the online movie review site IMDB, identifying users in the first data set by when they logged reviews of the movies they had recently watched.
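
To make that linkage mechanism concrete, here is a minimal sketch of such an attack in Python; the records and columns are invented and far simpler than the actual Netflix study.

```python
# A minimal sketch of a linkage attack in the spirit of the Netflix/IMDB
# example; all names, records, and columns here are hypothetical.
import pandas as pd

# "Deidentified" release: real names removed, viewing records kept.
released = pd.DataFrame({
    "user_id": [101, 102, 103],
    "movie": ["Movie A", "Movie B", "Movie A"],
    "rated_on": ["2006-03-01", "2006-03-02", "2006-04-10"],
})

# Public auxiliary data: reviews posted under real names on another site.
public = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "movie": ["Movie A", "Movie B"],
    "rated_on": ["2006-03-01", "2006-03-02"],
})

# Joining on the overlooked quasi-identifiers (movie, date) re-links
# pseudonymous IDs back to real names.
linked = released.merge(public, on=["movie", "rated_on"])
print(linked[["user_id", "name"]])
```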

Since those discoveries in the 2000s, policymakers have relied on experts to determine which attributes of a dataset count as quasi-identifiers, setting the bar for anonymity. Cohen tested the extreme – if every attribute is considered a quasi-identifier, do k-anonymity and its derivative techniques still work?

“If deidentification works at all, it should work when everything is quasi-identifying,” Cohen said. “That’s part of what makes this work powerful. It also means that the attacks work against almost all the techniques related to k-anonymity instead of any one specifically. The Netflix attack showed that it’s hard to say what is and isn’t a quasi-identifier. The downcoding attacks show that, in certain settings, it doesn’t much matter.”

The paper describes two theoretical attacks and one real-world example that undermine the argument for these protections. The first, downcoding, reverse-engineers the transformations performed on the data, such as the zip code truncation mentioned earlier. The second uses downcoding to mount a predicate singling-out (PSO) attack, a specific type of attack against data anonymization standards under the European Union’s privacy law, GDPR. That proof was important to show policymakers that k-anonymity is not sufficient for “publish-and-forget” anonymization under GDPR, Cohen said.

“The argument we’re making is against the idea that any of those techniques are sufficient to meet the legal bar of anonymization,” Cohen said. “We’re directly pushing back on that claim. Even by the regulatory standards, there’s a problem here.”

Cohen illustrated this insufficiency with a separate real-world demonstration on deidentified data from edX, the popular massive open online course (MOOC) platform. By combining the dataset with data scraped from resumes posted to LinkedIn – information that would be trivially available to potential employers – Cohen could identify people who started but failed to complete edX courses, a potential violation of FERPA, the Family Educational Rights and Privacy Act. (edX was alerted to the flaw and has changed its data protections.)

The takeaway message, Cohen said, is that these deidentification methods are not a magic wand for waving away privacy concerns when sharing potentially sensitive data. He hopes regulators will recognize that a layered approach would be far more effective at achieving their goals.

“If what you want to do is take data, sanitize it, and then forget about it – put it on the web or give it to some outside researchers and decide that all your privacy obligations are done – you can’t do that using these techniques,” Cohen said. “They should not free you of your obligations to think about and protect the privacy of that data.”
