Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law and consumers’ thinking about privacy.
As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may also prevent thousands of people from getting loans when, for instance, an underwriting algorithm is biased against certain users.
I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.
“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.
“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock of Europe’s GDPR, and of the patchwork of laws in the U.S. that protect privacy. It’s based on the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.
The rise of “predictive analytics,” though, has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but it essentially means interpreting raw data and drawing new conclusions through inference. This is the story of Target’s famous pregnancy-prediction episode, in which the retailer recommended pregnancy-related goods to women whose purchases fit certain patterns. As Charles Duhigg explained at the time:
Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to their delivery date.
What predictive analytics will infer is itself difficult to predict. Hirsch says, “I don’t think any of us are going to be intelligent enough to understand predictive analytics.” Of customers, he said: “They give up their surface items — like cotton balls and unscented body lotion — they know they are sharing that, but they don’t know they are giving up their pregnancy status. … People are not going to know how to protect themselves because they can’t know what can be inferred from their surface data.”
In other words, the scale of those predictions completely undermines notice and consent.
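To make that inference mechanism concrete, here is a minimal sketch of how a retailer’s model might turn “surface” purchases into a sensitive prediction. The item weights and threshold below are invented for illustration; Target’s actual model has never been published.

```python
# Illustrative only: hypothetical item weights, not Target's actual model.
# A simple linear score converts innocuous "surface" purchases into a
# sensitive inference the shopper never chose to disclose.

WEIGHTS = {
    "unscented_lotion": 1.4,
    "extra_big_cotton_balls": 1.1,
    "hand_sanitizer": 0.7,
    "washcloths": 0.6,
    "scented_candles": -0.5,  # negatively associated in this toy model
}
THRESHOLD = 2.5  # flag the shopper if the score exceeds this

def pregnancy_score(basket: set[str]) -> float:
    """Sum the learned weights of the items present in a shopper's basket."""
    return sum(weight for item, weight in WEIGHTS.items() if item in basket)

basket = {"unscented_lotion", "extra_big_cotton_balls", "hand_sanitizer"}
score = pregnancy_score(basket)
print(score, score > THRESHOLD)  # 3.2 True: the inference, not the raw data, is the exposure
```

The point of the sketch is that no single purchase reveals anything; only the learned combination does, which is exactly why notice-and-consent regimes built around individual data points fail here.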
Even though the law hasn’t caught up to this exponentially more challenging problem, companies themselves seem to be responding in the wake of Target and Facebook’s very public scandals. “What we are hearing is that we don’t want to put our customers at risk,” Hirsch explained. “They understand that this predictive technology gives them really awesome power and they can do a lot of good with it, but they can also hurt people with it.” The key actors here are corporate chief privacy officers, a role that has cropped up in recent years to mitigate some of these challenges.
Hirsch is spending significant time trying to build new governance strategies that allow companies to use predictive analytics ethically, so that “we can achieve and enjoy its benefits without having to bear these costs from it.” He’s focused on four areas: privacy, manipulation, bias and procedural unfairness. “We are going to set out principles on what is ethical and what is not,” he said.
Much of that focus has been on helping regulators build policies that can manage predictive analytics. Because people can’t understand the extent to which inferences can be drawn from their data, “I think a much better regulatory approach is to have someone who does understand, ideally some sort of regulator, who can draw some lines.” Hirsch has been researching how the FTC’s Unfairness Authority may be a path forward for getting such policies into practice.
He analogized this to the Food and Drug Administration. “We have no ability to assess the risks of a given drug [so] we give it to an expert agency and allow them to assess it,” he said. “That’s the kind of regulation that we need.”
Hirsch has, overall, a balanced perspective on the risks and rewards here. He wants analytics to become “more socially acceptable,” but at the same time he sees the need for careful scrutiny and oversight to ensure that consumers are protected. Ultimately, he sees that as a boon for companies as well: they can extract the value of this technology without provoking consumer ire.