The Future Legitimacy of Big Data Will Depend Upon Protecting the Least Advantaged in an Online World

December 1, 2016

Cory Robinson, Senior Lecturer/Assistant Professor in Communication Design
Linköping University


An October report that AT&T’s Hemisphere project, which provides call records and cellular data to the U.S. Drug Enforcement Administration (DEA), is a profit-making enterprise has raised concerns about organisations’ financial incentives to ignore human rights. The Electronic Frontier Foundation (EFF), the U.S. digital civil liberties group, has expressed concern both that AT&T has sought to hide its role as a provider of information, and that police may be using Hemisphere to find evidence and then seeking court orders after the fact to “discover” that same evidence through other channels, an allegedly retrospective procedure known as “parallel construction”.

The ongoing Hemisphere story, in which the EFF is waiting for a federal court to decide whether the Department of Justice should be forced to provide information about the programme, highlights the lack of clarity about citizens’ rights when it comes to the collection of their personal data by both governments and companies. Without doubt, big data can improve many areas of our lives. But benefitting from big data, whether by improving law enforcement or by delivering better goods and services, should not come at the cost of citizens’ privacy. It is time for organisations, public and private, to rethink how they can protect citizens’ privacy as their power to invade it grows ever greater.

Data mining, while powerful and capable of delivering better product offerings and personal recommendations for consumers, opens significant risks, including identity theft, social sorting, and potential discrimination. A framework for establishing the right to privacy and anonymity can be found in the work of John Rawls, the American philosopher, who died in 2002. In A Theory of Justice, Rawls proposed an “Equal Liberty Principle”, which gives everyone equal access to basic freedoms, and a “Difference Principle”, which requires that social and economic inequalities, in so far as they are permitted, be arranged so as to be of greatest benefit to the least advantaged in society.

This Rawlsian view takes those without Internet access as the least advantaged and would build their protection into Internet marketing practices from the outset. How, given the newest consumer data-gathering and marketing technologies, does this idea contrast with the experience of people on low incomes today? And, more generally, should we accept that the economic disadvantages of Rawls’ “least advantaged” will be recreated on the Internet as the processing of personal information by government and private organisations excludes certain groups while privileging others?

Critical to Rawls’ idea of the least advantaged is his definition of primary goods as “what free and equal persons need as citizens”. The Internet is so pervasive and necessary in everyday life that it can now be seen, in Rawls’ terms, as a primary good. Broadband Internet in particular is increasingly viewed this way: it is the lowest tier of access recommended by many government bodies, including the U.S. Federal Communications Commission, and by the United Nations. Internet access is also increasingly regarded as a basic human right by organisations such as the UN and the Council of Europe, and by countries such as Estonia.

Privacy, too, can be seen as a primary good in this sense, since it allows for anonymity and other principles crucial to self-development. Privacy as a primary good should include the possibility of an anonymous existence online, subject of course to the terms of use of sites such as LinkedIn, and to sites where legal and contractual obligations, such as filing tax returns, are increasingly met. Rawls assumed that primary goods were scarce. In this sense, anonymity can be treated as a scarce good and, therefore, as having value in the online realm. Without anonymity, individuals can be identified, and identification may ultimately lead to discrimination based on those identifiers. Consequently, the least advantaged should be among the main considerations when respecting anonymity. Protected in this way, these individuals would be able, through anonymity, to lead a private existence and not suffer social sorting, weblining, or other forms of discrimination. Anonymity could therefore become a form of self-protection and self-development for all Internet users, and more importantly for the least advantaged among them.

Rapid developments in digital technology and in data-processing power make it hard to develop ethical guidelines that are up to the task of protecting those who may be disadvantaged by the big data strategies of governments and corporations. A combination of measures will likely be of most benefit to the least advantaged: practical steps, such as educating consumers to be careful about whom they give their information to; ethical guidelines embedded into organisations’ procedures and into software design; and hard legislation, such as rules to protect personal information. By supporting anonymity, under certain conditions, and by adopting principles that protect the least advantaged, companies and law enforcement agencies can not only better fulfil their ethical duty to citizens but also legitimise the use of digital technology and big data as means of reaching their goals.
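To make the idea of privacy protection embedded in software design more concrete, the short sketch below shows one possible approach: pseudonymising a direct identifier and discarding fields an analysis does not need before anything is stored. It is only an illustration under assumed field names (phone, region, service_tier), not a description of any particular organisation’s practice or of the Hemisphere programme.

    import hashlib
    import os

    # Illustrative sketch only: one way privacy-by-design could look in practice.
    # Field names and salt handling are assumptions for the example.
    SALT = os.urandom(16)  # secret salt, kept separately from the stored data

    def pseudonymise(identifier: str) -> str:
        """Replace a direct identifier (e.g. a phone number) with a salted hash,
        so records can still be linked for analysis without exposing the person."""
        return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

    def minimise(record: dict) -> dict:
        """Keep only the fields the analysis actually needs and drop the rest."""
        allowed = {"region", "service_tier"}  # assumed analytic fields
        return {k: v for k, v in record.items() if k in allowed}

    raw = {"phone": "+46701234567", "name": "A. Person",
           "region": "SE", "service_tier": "basic"}
    stored = {"subscriber_id": pseudonymise(raw["phone"]), **minimise(raw)}
    print(stored)  # name and phone number never reach storage

Such measures do not make re-identification impossible, but they show how anonymity for the least advantaged can be treated as a design requirement rather than an afterthought.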

END

Views expressed in this article are those of the author and not those of the Global Digital Foundation which does not hold corporate views.