Facebook, for the first time, acknowledges election manipulation
Without saying the words "Russia," "Hillary Clinton," or "Donald Trump," Facebook acknowledged Thursday for the first time what others have been saying for months.
In a paper released by its security division, the company said "malicious actors" used the platform during the 2016 presidential election as part of a campaign "with the intent of harming the reputation of specific political targets."
"Facebook is not in a position to make definitive attribution to the actors sponsoring this activity....however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017," the report's authors wrote, referring to the U.S. Intelligence Community's assessment that Russia waged an information campaign with the goal of harming Clinton and helping Trump.
The paper, by two members of Facebook's threat intelligence team and its chief security officer, noted that "fake news" has been incorrectly applied as a catch-all for a variety of techniques used to influence users of the platform. The company now divides these techniques into four specific groups:
- "Information (or Influence) Operations - Actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment."
- "False News - News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive."
- "False Amplifiers - Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others)."
- "Disinformation - Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information."
While the paper notes that the company now considers this issue to be a security problem on par with "account hacking, malware, spam and financial scams," it stops short of laying out a plan to combat it.
But acknowledging the problem is a good start, says Jen Golbeck, an associate professor at the University of Maryland's Cybersecurity Center and director of the college's Social Intelligence Lab.
"Overall, I found it heartening. It's such a complicated issue and I think they did a fairly nuanced look at it," Golbeck said. "I don't think before this they had done a good job of really taking this seriously, all the way through the election."
Reviewing the election as a case study, the report's authors note that one aspect of the information campaign involved leveraging "information stolen from other sources, such as email accounts," likely a reference to the hack of the Democratic National Committee that intelligence agencies have said was Russian-sponsored.
The January intelligence assessment concluded that Russia used a combination of covert and overt online strategies to influence the election.
"Moscow's influence campaign followed a Russian messaging strategy that blends covert intelligence operations-such as cyber activity-with overt efforts by Russian Government agencies, state-funded media, third-party intermediaries, and paid social media users or "trolls," the Intelligence Community reported.
The company said in the paper that it is working on new products to combat false news, but Golbeck said stopping a foreign state's influence campaign might be harder than Facebook realizes. She noted that in some countries it's illegal to call out government propaganda for its falsehoods.
"Are they actually going to be fighting against governments that are trying to exploit the platform? What happens if China or Russia decides to ban Facebook because they are aggressively fighting their operations?" Golbeck asked.