Facebook to bar users from livestreaming if they violate community rules

Facebook said Tuesday it would ban users from its Live streaming feature for a set period of time if they violate certain community guidelines.

The move is a response to the March massacre at mosques in Christchurch, New Zealand, in which a gunman livestreamed himself gunning down 50 people.

"Starting today, people who have broken certain rules on Facebook -- including our Dangerous Organizations and Individuals policy -- will be restricted from using Facebook Live," Guy Rosen, Facebook's vice president of integrity, wrote in a Tuesday blog post.

Facebook didn't include a comprehensive list of offenses that would get a user barred from Live, although the examples it cited all involved circulating terrorist-related content. The restriction is one part of a two-pronged attack on malicious livestreaming: Rosen also announced in the blog post that Facebook is investing $7.5 million in research to develop better video detection technology.

Rosen explained that Facebook has historically banned rule-breaking users from its entire platform, but that its new policy seeks to set rules that would specifically bar people from the Live service.

"Today we are tightening the rules that apply specifically to Live," Rosen wrote. "We will now apply a 'one strike' policy to Live in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time -- for example 30 days -- starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time."

He added that a user banned from Live will "over the coming weeks" also be restricted from other services on the platform, such as creating ads.

Weeks after the massacre, Facebook said the 17-minute video wasn't reported while it was live, and that the first user report came 12 minutes after the livestream ended. In other words, the original video sat on Facebook for a full 29 minutes before anyone flagged it. Users then re-uploaded the video more than a million times; Facebook purged 1.5 million uploads, 1.2 million of which were blocked before they ever went live on the platform.

To assist with such purges, the company is investing $7.5 million in research across the University of Maryland, Cornell University and the University of California, Berkeley, to improve video detection software.

Specifically, the company wants to get better at detecting edited versions of banned clips -- say, a clip whose audio and colors have been distorted -- and at distinguishing posters who are innocently sharing manipulated media from those who are intentionally manipulating videos and photos to bypass Facebook's systems.

"Dealing with the rise of manipulated media will require deep research and collaboration between industry and academia," Rosen wrote. "In the months to come, we will partner more so we can all move as quickly as possible to innovate in the face of this threat." 

This piece originally appeared on CNET

