Facebook unveils multi-pronged effort to protect 2020 election process
With almost a year until the 2020 presidential election, Facebook is unveiling new safety and transparency protocols to better safeguard the U.S. election process and ensure the misinformation campaign that rocked the 2016 election is not repeated.
Facebook was just one of the key players in the Russian effort to sow discord and dissension in the U.S. during the 2016 race. In the lead-up to the 2016 elections, Russian troll farms were allowed to operate and spread misinformation on Facebook's platforms, with various actors using the site to target vulnerable populations, discourage voting and stir white nationalism. The company received widespread criticism for not doing more to prevent foreign influence in the democratic process.
According to testimony by top intelligence chiefs, U.S. officials believed the Kremlin sought to directly interfere in the U.S. election and pave the way for a Trump presidency, and one tactic of the multi-pronged attack was a wave of misinformation.
The tech giant rolled out three key initiatives Monday as part of its effort to prevent interference in the 2020 election:
Fighting foreign interference
- Protecting the accounts of candidates, elected officials, their teams and others through Facebook Protect, a new program that provides increased safety measures like two-factor authentication and monitoring for hacking attempts for accounts that are at greater risk of being targeted by "bad actors"
- Combating inauthentic behavior, including an updated policy
Increasing transparency
- Making Pages more transparent, including showing the confirmed owner of a Page
- Labeling state-controlled media outlets on their Pages
- Making it easier to understand political ads, including a new presidential candidate spend tracker so voters can see how much campaigns are spending
Reducing misinformation
- Preventing the spread of misinformation, including clearer fact-checking labels
- Fighting voter suppression and interference, including banning paid ads that suggest voting is "useless" or advise people not to vote at all
- Helping people better understand the information they see online, including an initial investment of $2 million to support media literacy projects
In the aftermath of the 2016 election, Facebook repeatedly vowed to be more transparent and proactive about removing questionable and potentially harmful posts as a means to regain the trust of its millions of users. But just this month, Facebook once again came under fire for allowing political ads that contained unsubstantiated claims. The company recently ran an ad from the Trump reelection campaign that accused former vice president Joe Biden of corruption — but the accusation hasn't been proven, and the Biden campaign asked for the ad to be taken down.
Facebook, however, declined to do so. CEO Mark Zuckerberg concluded that banning political ads altogether would be comparable to censoring free speech.
"I know many people disagree, but in general I don't think it's right for a private company to censor politicians or the news in a democracy," he said in a speech earlier this month, adding that other major tech platforms have run similarly misleading ads.
Hate speech has been an issue for the social media platform in an increasingly divisive America. Such speech has even rocked the federal government, after a ProPublica report revealed the existence of a Facebook group in which U.S. Border Patrol agents made lewd and incendiary comments about elected officials, including freshman Congresswoman Alexandria Ocasio-Cortez.
To be more effective, the company vowed this past summer to dedicate some content moderators to hate speech alone, drawing them from the more than 20,000 outsourced content moderators who screen the platform.