As X’s algorithm is open source, meaning its code is published for anyone to use, modify, and build on (or attack, apparently), it’s perfectly legal for Meta to just swipe it and reconfigure it for its own purposes. (Kind of like big tech is doing to some other things in the world right now.) As the program, which the company will begin testing next week, rolls out to the wider public, it will supposedly be tweaked to better serve Meta-specific platforms like Facebook, Instagram, and Threads.
This is all to protect against “bias,” which seems to have become Zuckerberg’s personal bogeyman. “It’s time to get back to our roots around free expression,” the founder said in his initial announcement, condemning “governments and legacy media [which] have pushed to censor more and more.” The company now plans to fight back by not publishing notes “unless contributors with a range of viewpoints broadly agree on them.” It’s unclear how that will actually work in practice, but the company promises that “this isn’t majority rules” and a note won’t be made public “unless people who normally disagree decide that it provides helpful context.” While this stipulation seems like it could bar any notes from going up at all, Meta somehow predicts that this will “allow more people with more perspectives to add context to more types of content.” The company is initially launching the program with a pool of 200,000 potential contributors (with more open spots on the waitlist), so we’ll see how that goes.
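How might “people who normally disagree” actually gatekeep a note? X’s open-sourced ranker does it with a matrix-factorization model that scores a note on whether raters across the viewpoint spectrum found it helpful, not on raw vote counts. As a rough, hypothetical sketch of that idea (the viewpoint groupings, function name, and threshold below are invented for illustration, not pulled from Meta’s or X’s code), it amounts to requiring every viewpoint cluster to sign off:

```python
# A toy "cross-viewpoint agreement" gate -- not Meta's (or X's) actual code.
# X's open-source ranker infers viewpoints from rating history with a
# matrix-factorization model; here the viewpoint groups, function name, and
# threshold are all made up to show the basic idea.
from collections import defaultdict

def note_clears_bar(ratings, viewpoint_of, min_helpful_share=0.6):
    """ratings: list of (rater_id, rated_helpful) pairs for one note.
    viewpoint_of: dict mapping rater_id -> viewpoint cluster.
    The note passes only if EVERY cluster finds it helpful at or above
    min_helpful_share -- a simple sitewide majority is not enough."""
    tallies = defaultdict(lambda: [0, 0])  # cluster -> [helpful votes, total votes]
    for rater, rated_helpful in ratings:
        cluster = viewpoint_of[rater]
        tallies[cluster][0] += int(rated_helpful)
        tallies[cluster][1] += 1
    return all(helpful / total >= min_helpful_share
               for helpful, total in tallies.values())

raters = {"a": "camp_1", "b": "camp_1", "c": "camp_2", "d": "camp_2"}

# Both camps call the note helpful -> it would be published.
print(note_clears_bar([("a", True), ("b", True), ("c", True), ("d", True)], raters))   # True
# Three of four raters call it helpful, but camp_2 is split -> held back,
# which is the "this isn't majority rules" part.
print(note_clears_bar([("a", True), ("b", True), ("c", True), ("d", False)], raters))  # False
```

The second example is the catch: a note most raters like can still be shelved if one camp balks, which is exactly why it’s hard to predict how many notes will ever see daylight.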
If any community notes do make it through whatever vetting process the company lays out, posts they’re appended to “won’t have penalties associated with them the way fact checks did.” That’s a fancy way to say that claims will remain on the platform even if they’re blatantly false. In February, Meta announced that it was revamping a program to monetize viral posts, and today, it reasserted that notes “won’t impact who can see the content or how widely it can be shared.” Community note: that’s not good.