The Digital Services Act: three things to consider

Paul MacDonnell

April 6, 2022

The Digital Services Act (DSA) will lay down measures so that digital platforms can “ensure consumer trust in the digital economy, while respecting users’ fundamental rights”. While these aims are worthwhile (who doesn’t want trust in platforms and respect for human rights?), some of the DSA’s proposals are not practical. Others could lead to de facto censorship of online media.

Here are three key issues. 

EXPLAINING EVERY MODERATION DECISION WILL LEAD TO NOTIFICATION SPAM: ARTICLE 15

Platforms can’t rely on human eyes and ears alone to moderate the huge quantity of content that pours onto them every hour of the day. Instead, they use algorithms as a first line of content evaluation. These algorithms make billions of decisions about what gets promoted or demoted (i.e. shown to more or fewer users), flagged, or deleted. Because they will by no means always get this right, the DSA rightly aims to protect the right to appeal decisions to reject, remove, or disable access to content.

Article 15’s requirement that creators be notified of every evaluation decision that reduces the ranking of their content, including automatic decisions, is impractical and ignores the fact that platforms’ moderation systems must work at scale. Its full implementation would result in a torrent of notification spam of no functional use to creators. This requirement should be replaced with: a) a broad explanation of how moderation systems work in principle (platforms already provide these); and b) a requirement to provide, only on request from a creator, an explanation of how the platform’s moderation, both human and algorithmic, has affected a particular item of their content.

A blanket requirement to notify creators about every decision is likely to harm consumers’ and creators’ interests by burying recognizable patterns of moderation decision-making within never-ending notifications. Furthermore, within a torrent of notifications it will be extremely difficult, or even impossible, for creators to know whether a platform’s moderation has prejudiced their content based on poorly executed policies to counter misinformation, disinformation, and malinformation. If platforms are curating content to minimize these, then there needs to be some avenue of review as to whether they are getting it right. Otherwise, the online world could become subject to effective content censorship with no counterpart in the offline world. Other parts of the DSA already propose to extend an apparatus of moderation within civil society through the appointment of ‘trusted flaggers’. Both this and the opacity of platform decision-making could harm freedom of expression over the long term.

Notwithstanding this, platforms must retain the right to withhold explanations of moderation decisions from bad actors, who can be assumed likely to use such explanations as opportunities to game moderation policies in the future. Bad actors should include: hostile governments and organizations owned by or beholden to them; purveyors of spam; and organizations or individuals seeking to commit fraud.


THERE IS NO NEED FOR AN EXPANDED LIST OF ITEMS UNDER THE REQUIREMENT TO ANALYZE AND REPORT ON ‘SYSTEMIC RISK’: ARTICLE 26

The European Parliament (EP) proposes a long list of risks, going beyond ‘significant systemic risk’, against which platforms must assess their services at least once per year and whenever any new service is introduced. These include risks to fundamental rights, the protection of personal data, and freedom of the press, as well as the risk of their service malfunctioning.

Requiring platforms to conduct risk assessments against this extended list, and to do so for each country where their services are provided, is unnecessary and will not enhance consumer protection. Rather, it could push platforms into pursuing risk-averse strategies whereby they increasingly exclude or demote content they consider to be in a grey area in order to simplify the risk-reporting process. Risk assessments should be limited to ‘significant systemic risk’, as stated in the European Commission’s proposal.

THERE SHOULD BE NO RIGHT OF APPEAL AGAINST A PLATFORM’S REFUSAL TO TAKE DOWN CONTENT: ARTICLES 17 AND 18

The current draft text of the DSA proposes to grant a right of appeal against a platform’s refusal to take down content.

The DSA proposes to erect a parallel system of publishing regulation that will uniquely apply to online platforms. In particular, it will subject content published on them to the claims of ‘trusted flaggers’, whose views on what should and should not be allowed online will be granted special privilege.

Granting a right of appeal against a decision not to remove content will give trusted flaggers even greater powers. As we can assume that many of them will be politically motivated, well organised, and well funded, the proposal will, in effect, give special rights to political activists, allowing them to game the appeals system and seek the suppression of the speech of individuals and groups with whom they disagree.

In democracies, laws around speech have, for very good reasons, always leaned more heavily in favour of the rights of citizens to read, see, and hear what they want and against the claims of those who wish, for political reasons, to suppress these rights. Allowing an appeal against platforms’ refusal to remove content would strike the wrong balance between the right to free expression and the right not to be offended. Giving activists, whether operating under the banner of trusted flaggers or not, such a right of appeal will encourage them down the path of hostile litigation against the speech rights of Europeans.