Facebook (still) lacks good governance

Written by Farzaneh Badiei


The following is a commentary from Farzaneh Badiei, Director of the Justice Collaboratory's Social Media Governance Initiative.

This week, Facebook made two key announcements: one about combating hate and extremism online, and one about establishing an independent oversight board. The announcements were timely and strategic: on Wednesday, September 18, Facebook and other tech giants appeared before the US Senate at a hearing on “Mass Violence, Extremism, and Digital Responsibility.” At the beginning of the week, there was also a side meeting during the UN General Assembly about the Christchurch Call, a call to eradicate terrorist and violent extremist content online.

Facebook’s efforts to address the issue of violent extremism through various initiatives, rather than through concrete social media governance solutions, are unlikely to achieve their goals. Social media governance solutions have to cover all three areas through which Facebook asserts authority: policymaking, adjudication, and enforcement. Currently, such governance arrangements are either inadequate or non-existent.

Procedural justice is necessary to maintain the legitimacy of the decision-maker, yet Facebook does not address this key theory of governance in most of the arrangements it proposes to launch. The main components of procedural justice, how fairly Facebook treats its users, whether it gives them a voice, treats them with neutrality, and explains the reasons for its decisions, are rarely at the center of its governance initiatives.

At Facebook, the policymaking processes around content takedown are inconsistent with procedural justice because they are opaque, top-down, and reactive. The enforcement mechanisms do not go much beyond content moderation and removal of accounts. The dispute resolution mechanisms give users little chance to be heard, and the outcomes do not fully explain why a decision was made.

The inadequacy or absence of governance mechanisms that deal with extremist content is apparent from Facebook’s reaction to governments’ requests to combat extremism online. In its announcement, Facebook cites the Christchurch Call as one of the main reasons (though not the only reason) for the changes to its policy on terrorism and dangerous content. The Christchurch Call is an initiative of the New Zealand and French governments. It was formed in the aftermath of the Christchurch attack to eradicate violent, extremist content online. The two governments negotiated a set of commitments with a few large tech corporations (including Facebook). The negotiations took place in a bilateral fashion, and the governments of New Zealand and France issued the Call without considering feedback from civil society and other stakeholders. Only certain tech corporations were in the room; civil society, the technical community, and academics were not consulted until after the commitments were agreed upon. It is worth noting that the New Zealand government has been trying hard to include civil society in the implementation process.

During hearings and negotiations with governments, companies make promises that are mostly about tactics and techniques for taking content down (most of the time promising to remove content automatically, using Artificial Intelligence). They are rarely about reforming the policy processes through which Facebook sets its community standards on content moderation and defines the conditions under which users are permitted to use, or prohibited from using, its services.

Perhaps Facebook leaders believe content removal is a more efficient solution than having an elaborate decision-making system that embodies procedural justice from the beginning. It is true that content removal can give companies some impressive and tangible Key Performance Indicators (KPIs). In 2018, Zuckerberg announced that Facebook would proactively handle content with Artificial Intelligence. He also stated that Facebook proactively flags 99% of “terrorist” content and that it has more than 200 people working on counter-terrorism. In its recent announcement, Facebook stated that it has expanded that team to 350 people with a much broader focus on “people and organizations that proclaim or are engaged in violence leading to real-world harm.”

Presenting a large volume of content and account removals might provide some temporary relief for governments. However, removal of accounts and content on its own is not really a governance mechanism. While some content needs to be removed urgently, the decision to remove content or to ban certain organizations and individuals from the platform should be made through a governance mechanism that is procedurally just. Investing in a system that issues fair decisions and upholds procedural justice will yield better results in the long term: there will be fewer violations of the rules, users will perceive the process as legitimate, and they will self-enforce dispute resolution outcomes. Good governance demands a social infrastructure that can shape decisions from the front end.

Convening the oversight board was a step towards addressing the governance issue. Facebook invested a lot in designing the board, in consultation with various organizations around the world. Such efforts are indeed commendable, but not sufficient. The oversight board is only tasked with resolving cases of content removal that are especially complex and are disputed either by Facebook or by users. The volume of takedowns is very large, and the board will only handle a limited number of cases. Moreover, the oversight board is in charge of applying Facebook’s top-down policies. Thus, it is not clear how it can serve as a tool for holding Facebook accountable to users.

An example of a top-down policy decision is Facebook’s recent expansion of its definition of terrorism. During the Christchurch Call discussions, civil society and academics emphasized that we do not have a globally applicable definition of “terrorism.” Facebook acknowledges that this is a problem; however, since no clear governance framework sets a process for making policy decisions, it has discarded such feedback, come up with its own global definition of terrorism, and recently broadened that definition.

Setting a definition of “terrorism,” applying it globally, and expanding it at the request of governments or in the aftermath of a crisis illustrates that Facebook does not have an adequate governance structure in place to respond to these requests through legitimate processes.

Policymaking is a major part of social media governance. Having an independent board to resolve disputes does not solve the problem of top-down policies and opaque policymaking processes. Social media platforms are in need of governance, and increasing the number of content takedowns is not the best measure of progress in combating extremism.
