Facebook, Twitter release 2022 midterm policies to fight the "big lie"


Opinion

For months, activists have urged tech companies to fight false claims that the 2020 presidential election was rigged, misinformation that threatens to delegitimize the 2022 midterms, in which all House seats and more than one-third of the Senate are up for grabs.

Even as false rumors that the last presidential election was rigged continue to plague their platforms, the social media giants are largely sticking with the playbook they used to police misinformation last election cycle.

Facebook has again chosen not to remove some false claims of voter fraud, instead using labels to redirect users to accurate information about the polls. Twitter said it will label or remove posts that make unverified claims about the election process, as in the 2020 race, when the misinformation violates its rules. (The company didn't say when it would remove offending tweets rather than label them, but said labeling reduces their visibility.)

That stands in contrast to platforms such as YouTube and TikTok, which have banned false claims that the 2020 election was rigged, according to their recently released election plans.

The strictness of the companies' policies, and how well they enforce them, could mean the difference between a peaceful transfer of power and an electoral crisis, experts warn.

"The 'big lie' has entered our political discourse, and it has become commonplace for election deniers to preemptively claim that the midterm elections are going to be rigged or filled with voter fraud," said Yosef Getachew, media and democracy program director at Common Cause, a liberal-leaning government watchdog. "What we've seen is that Facebook and Twitter aren't doing the best job, or any job at all, of dispelling and combating disinformation around the 'big lie.'"

The political stakes of these content moderation decisions are high, and the most effective way forward is unclear, especially as the companies balance their desire to support free expression against their desire to keep harmful content on their networks from endangering people or the democratic process.

Election deniers are marching toward power in key 2024 battlegrounds

In the 41 states that have held nominating contests this year, more than half of GOP winners so far (about 250 candidates in 469 contests) have embraced former President Donald Trump's false claims about his defeat two years ago, according to a recent Washington Post analysis. In 2020 battleground states, candidates who deny the legitimacy of that election have claimed two-thirds of GOP nominations for state and federal offices with jurisdiction over elections, according to the analysis.

And those candidates have turned to social media to spread their election-related lies. A recent report by Advance Democracy, a nonprofit organization that studies disinformation, found that candidates who embrace Trump's false claims of election fraud and the QAnon conspiracy theory have posted such claims by the hundreds on Facebook and Twitter, drawing hundreds of thousands of interactions and retweets.

Those findings follow months of revelations about the role social media companies played in facilitating the "Stop the Steal" movement that culminated in the January 6 attack on the U.S. Capitol. Earlier this year, a study from The Washington Post and ProPublica found that Facebook was flooded with posts attacking the legitimacy of Joe Biden's victory, at a rate of 10,000 a day, between Election Day and the January 6 riot. Facebook groups in particular became incubators for Trump's baseless claims of election fraud before his supporters descended on the Capitol in a bid to secure him a second term.

"Candidates denying elections is nothing new," said Katie Harbath, a former public policy director at Facebook who now runs a technology policy consultancy. "... The risk is [higher now] because it comes with a [greater] threat of violence," though it's unclear whether that risk will match what it was during the 2020 race, when Trump was on the ballot.

Social media posts alleging election fraud face scrutiny

Facebook spokesman Corey Chambliss confirmed that the company will not outright remove posts from everyday users or candidates alleging widespread voter fraud, claiming that the 2020 election was rigged or asserting that the upcoming 2022 midterms will be rigged. Facebook, which renamed itself Meta last year, does ban content that violates its rules against inciting violence, including threats against election officials.

Social media companies like Facebook have often chosen to label questionable posts rather than remove them, in part to avoid having to make the tough call about which posts are true.

And while the platforms are often willing to ban posts that confuse voters about the election process, their decisions about whether to crack down on subtler forms of voter suppression, especially from politicians, are often politically fraught.

Civil rights groups have criticized them for not adopting policies to counter subtler messages designed to sow doubt in the electoral process: for example, claims that Black people don't benefit from voting, or that voting is pointless because of long lines.

Heading into the midterms, critics say Facebook is already falling behind.

During the 2020 election, civil rights activists pressured Facebook to expand its voter-suppression policies to address subtler, indirect attempts to discourage voting, and to toughen its rules on Trump's comments. For example, some groups argued that Trump's repeated posts questioning the legality of mail-in voting could discourage vulnerable populations from voting.

But when Twitter and Facebook attached labels to some of Trump's posts, they faced criticism from conservatives who said the policies were biased against right-leaning politicians.

Those decisions are further complicated by the fact that it is not entirely clear whether labels are effective at changing users' beliefs, experts said. According to Joshua Tucker, a professor at New York University, warnings that posts may be misleading can lead some users to question the content's accuracy, but can backfire among those who already believe the conspiracies.

One user might look at the label and say, "'Oh, I should [question] this information,'" Tucker said. Another might see the same warning label and say, "'Oh, this is more evidence that Facebook is biased against conservatives.'"

Tech's blind spots: sharing data with researchers and listening to users

And while labels may work on one platform, they may not work on another, or they may drive users who resent them toward platforms with laxer content moderation.

According to Nick Clegg, Meta's president of global affairs, users have complained that election-related labels were overused, and the company is experimenting with a more tailored approach this cycle. Twitter, for its part, said it saw promising results last year when it tested redesigned misinformation labels that steered people toward accurate information, according to a company blog post.

Still, the specific policies the social media giants adopt may matter less than the resources they devote to catching and addressing posts that violate their rules, experts say.

"There are still a lot of unanswered questions about how these policies will be enforced," Harbath said. "How does this all actually work in practice?"




