Today, let’s talk about the highest-profile conflict to date between Meta and its Oversight Board, an independent organization the company established to help it navigate the toughest questions around content policy and moderation.
Since before the board was created, it has faced criticism that it primarily performs a public relations function for the company formerly known as Facebook. The board relies on funding from Meta, has a contractual relationship with Meta that governs the use of user data, and its founding members were personally selected by the company.
Aiding the perception that this is primarily a public relations project is the fact that, to date, Meta and the board have rarely been in conflict. In the first quarter of its existence, Meta implemented 14 of the 18 recommendations the board made. And while the board often rules against Facebook's content moderators, ordering deleted posts to be reinstated, none of those reversals has generated significant controversy. (Also, from Facebook's perspective, the more authority the company invests in the board, the more credible the board becomes, and thus the more blame it can absorb for any unpopular calls.)
That is what made this week’s statements, released by both sides, so remarkable.
After the Russian invasion of Ukraine in February, Meta had asked the board to issue an advisory opinion on how it should moderate content during the war. The conflict had raised a number of difficult questions, including under what circumstances users can post photos of dead bodies or videos of prisoners of war criticizing the conflict.
And in the most prominent content moderation issue of the invasion to date, Meta decided to temporarily allow calls for violence against Russian soldiers, Vladimir Putin, and others.
All of which raised important questions about the balance between freedom of expression and user safety. But after asking the board for its opinion, Meta changed its mind and asked the board not to say anything at all.
From the company’s blog post:
Late last month, Meta withdrew a request for a policy advisory opinion (PAO) related to Russia's invasion of Ukraine that had previously been referred to the Oversight Board. This decision was not made lightly: the PAO was withdrawn due to ongoing safety and security concerns.
While the PAO has been withdrawn, we continue our efforts related to the Russian invasion of Ukraine and believe we are taking appropriate steps to protect speech while balancing ongoing security concerns on the ground.
In response, the board said in a statement that it is “disappointed” by the move:
While the Board understands these concerns, we believe the application raises significant issues and we are disappointed by the company’s decision to withdraw it. The Board also notes that the withdrawal of this request does not lessen Meta’s responsibility to carefully consider the ongoing content moderation issues that have arisen from this war, which the Board continues to follow. In fact, the importance for the company of defending freedom of expression and human rights has only increased.
Both statements were extremely vague, so I spent a day talking to people familiar with the matter who could fill me in on what happened. Here is what I learned.
One of the most disturbing trends of the past year has been the way authoritarian governments in general, and Russia in particular, have used intimidation of employees on the ground to force platforms to do their bidding. Last fall, Apple and Google removed an app from their respective stores that had allowed anti-Putin forces to organize ahead of the election. Later, we learned that Russian agents had threatened the companies' employees, in person, with jail time or worse.
Life for those employees, and their families, has only become more difficult since Putin’s invasion. The country passed draconian laws prohibiting truthful discussion of the war, and the combination of those laws and US and European sanctions has forced many platforms to withdraw services from Russia altogether.
In the wake of Meta's decision to allow calls for violence against the invaders, Russia declared that Meta had engaged in "extremist" activities. That potentially put hundreds of Meta employees at risk of jail time. And while the company has now successfully moved its employees out of the country, the extremist designation could mean they will never be allowed back into Russia so long as they work for Meta. It could also mean that employees' family members who remain in Russia are still subject to persecution.
There is precedent for both outcomes under Russia’s extremism laws.
So what does the Oversight Board have to do with this?
Meta had asked for a fairly broad opinion on its approach to content moderation and Russia. The board has already shown a willingness to make expansive policy recommendations, even in narrower cases brought forward by users. After making the request, the company's legal and security teams became concerned that anything the board said could be used in some way against employees or their families in Russia, either now or in the future.
Technically, the Oversight Board is a separate entity from Meta. But many people even in the West refuse to recognize that distinction, and the company's lawyers feared that Russia would not recognize it, either.
All of this is compounded by the fact that technology platforms have received little to no support to date, either from the United States or the European Union, in their struggles to keep key communication services running in Russia and Ukraine. It’s not obvious to me what Western democracies could do to reduce the platforms’ fears about how Russia might treat employees and their families. But conversations with executives at several big tech companies over the past year have made it clear that they all feel they are in danger.
All that said, the news still represents a significant blow to the Oversight Board's already fragile credibility, and arguably reduces its value to Facebook. The company spent several years and $130 million to create an independent body to advise it on policy issues. Asking that body for its advice, advice that would not even be binding on the company, and then belatedly deciding that such advice could be dangerous calls the purpose of the entire project into question. If the only role of the Oversight Board is to handle the easy questions, why bother with it at all?
Facebook and the board declined to comment to me beyond their statements. It's fair to note that despite the setback here, the company has stood up to Russia in some big ways, including deciding to allow Ukrainians to call for Putin's death. Meta could have reversed course on that under Russian pressure, and decided not to.
At the same time, we once again find that at a crucial moment, Facebook executives failed to adequately gauge risk and public perception. Russia has been threatening platform employees since at least last September. Whatever danger there was to employees and their families existed long before Facebook sought input from its board. Figuring that out only a few weeks after making the request... well, talk about an oversight.
I am aware that the Oversight Board has changed Facebook for the better. And when it comes to rogue states threatening platform employees, tech companies have very few options available to them. Russia, here as in so many other situations, presented a true no-win scenario.
But that doesn't mean there won't be collateral damage to both Meta and its board. Critics always feared that if the stakes were high enough, Facebook would blink and decide to make all the relevant decisions on its own. And then Vladimir Putin went and invaded his neighbor, and the critics were proven right.