Meta's Oversight Board sounds pretty blindsided by Zuckerberg's fact-check-ending decree

The board decried the fact that the changes were "announced hastily, in a departure from regular procedure."

At least some of Meta’s own were as appalled as the rest of us when Mark Zuckerberg announced that the company was doing away with fact-checking in an asinine attempt to “restore free expression” back in January. While it’s not quite “Sorry, my Prada’s at the cleaners,” the company’s Oversight Board—the body in charge of issuing decisions concerning the removal of Meta content—derided Zuckerberg’s decree as being “announced hastily, in a departure from regular procedure, with no public information shared as to what, if any, prior human rights due diligence the company performed” in a lengthy blog post today. 

It’s a pretty stunning stance for the board to take, considering how many companies have rolled over and capitulated to the same sort of pressure in recent months. Meta has poured more than $200 million into funding the 21-member body since its formation in 2020, TheWrap reports, so the board presumably has some real sway. It also intimately understands just how far Zuckerberg has strayed from his previously stated goals. “The Board calls on Meta to live up to its public commitment to uphold the UN Guiding Principles on Business and Human Rights,” it continued in its post, specifically calling on the company to assess the updated policy’s “potential adverse effects on Global Majority countries, LGBTQIA+ people, including minors, and immigrants, updating the Board on its progress every six months, and reporting publicly on this soon.”

That’s only one of 17 total recommendations, which also include improving enforcement of the company’s bullying and harassment policies, continually assessing the effectiveness of Community Notes compared to third-party fact-checkers “particularly in situations where the rapid spread of false information creates risks to public safety,” and “clarify[ing] the references to hateful ideologies not permitted under the Dangerous Organizations and Individuals policy.”

Some of the board’s strongest language came in condemning Meta’s handling of posts relating to the U.K.’s anti-immigration riots last year. Each of the three posts related to the riots that Meta decided to leave on the platform “created the risk of likely and imminent harm. They should have been taken down,” the board wrote.

“The content was posted during a period of contagious anger and growing violence, fueled by misinformation and disinformation on social media… Meta activated the Crisis Policy Protocol (CPP) in response to the riots and subsequently identified the UK as a High-Risk Location on August 6. These actions were too late,” it continued. “The Board is concerned about Meta being too slow to deploy crisis measures, noting this should have happened promptly to interrupt the amplification of harmful content.”

In a statement in response (via The Guardian), a Meta spokesperson wrote, “We regularly seek input from experts outside of Meta, including the oversight board, and will act to comply with the board’s decision [to remove the offending posts].” The company will reportedly respond to the board’s wider recommendations within 60 days.

 