The Future Legitimacy of Big Data will Depend Upon Protecting the Least Advantaged in an Online World
December 1, 2016
Cory Robinson, Senior Lecturer/Assistant Professor in Communication Design
A report in October that AT&T’s Hemisphere project, which provides call records and cellular data to the U.S. Drug Enforcement Administration (DEA), is a profit-making enterprise has raised concerns about organisations’ financial incentives to ignore human rights. The EFF, the U.S. digital civil liberties group, has expressed concern both that AT&T has sought to hide its role as provider of the information, and that police may be using Hemisphere to find evidence first and only afterwards seek court orders for its “discovery”—an allegedly retrospective procedure known as “parallel construction”.
The ongoing Hemisphere story—the EFF is waiting for a Federal court to decide whether the Department of Justice should be forced to provide information about Hemisphere—highlights the lack of clarity about the rights of citizens when it comes to the collection of their personal data by both governments and companies. Without doubt, big data can improve many areas of our lives. But benefitting from big data, whether making improvements in law enforcement, or delivering better goods and services, should not come at the cost of citizens’ privacy. It is time for organisations, public and private, to rethink how they can protect citizens’ privacy as their power to invade it grows ever greater.
Data mining, while powerful in enabling better product offerings and personal recommendations for consumers, opens significant risks including identity theft, social sorting, and potential discrimination. A framework for establishing the right to privacy and anonymity can be found in the work of John Rawls, the American philosopher, who died in 2002. In his A Theory of Justice, Rawls proposed an “Equal Liberty Principle” that gives everyone equal access to basic freedoms, and a “Difference Principle”, which holds that social and economic inequalities, in so far as they are permitted, should be arranged so as to be of greatest benefit to the least advantaged in society.
This Rawlsian view takes those without Internet access as the least advantaged and would build their protection into Internet marketing practices from the outset. How, given the newest consumer data gathering and marketing technologies, does this idea contrast with the experience of people on low incomes today? And, more generally, should we accept that the economic disadvantages of Rawls’s “least advantaged” will be recreated on the Internet as the processing of personal information by government and private organisations excludes certain groups while privileging others?
Rapid developments in digital technology and in data processing power make it hard to develop ethical guidelines that are up to the task of protecting those who may be disadvantaged by the big data strategies of governments and corporations. A combination of practical measures—educating consumers to be careful about whom they provide information to, embedding ethical guidelines in organisations’ procedures and in software design—together with hard legislation, such as rules to protect personal information, will likely be of most benefit to the least advantaged. By supporting anonymity under certain conditions, and by adopting principles to protect the least advantaged, companies and law enforcement agencies can not only better fulfil their ethical duty to citizens, they will legitimise the use of digital technology and big data as ways to reach their goals.
Views expressed in this article are those of the author and not those of the Global Digital Foundation which does not hold corporate views.