New Challenge for Social Media: Policing Violent Live Videos

A YouTube video shows the aftermath of the Dallas shootings. PHOTO: YOUTUBE/MARCUS SYKES VIA STORYFUL

By DEEPA SEETHARAMAN

Inconsistency in handling clips like the fatal shooting of Dallas officers highlights monitoring difficulties

Two and a half hours after a Minnesota woman live-streamed the bloodied body of her boyfriend after he was fatally shot by police during a traffic stop on Wednesday, Facebook Inc. took down the footage. Last month, however, a live French video of an alleged terrorist holding a child hostage remained online for 11 hours before Facebook removed it.

Tech companies, especially Facebook, have been pouring resources into live video this year, giving users the ability to broadcast their lives in real time on Facebook and Twitter Inc.’s Periscope.

Facebook, where users already watch more than 100 million hours of video daily in their news feeds, is betting that live videos will get people to come to its site more often and stay there longer, which would help it boost ad rates.

In addition to clips posted by users, it is paying partners to produce live video. It has signed nearly 140 deals worth more than $50 million with media companies and video creators, The Wall Street Journal reported last month.

Live video, however, is uncharted territory for social-media sites. In the past year, there have been at least 18 violent acts—rapes, killings, suicides—disseminated on live video. This material can shine a light on events normally hidden from view, but also can shock or disturb viewers who have no way of knowing what is coming.

Facebook’s response to such images was tested twice last week: by the Minnesota video, which was reinstated more than an hour after being taken down, and the fatal shooting of Dallas police officers the following day, which was captured on Facebook Live by a witness.

Facebook added a “graphic content” warning to both videos, which have generated 5.6 million views each.

“There doesn’t seem to be any limit to what can be captured and what can be shared,” said Albert Gidari, director of privacy at Stanford Law School’s Center for Internet & Society. “There’s a lot of good that can come of that, and a lot of bad.”

The inconsistency in how Facebook and other sites are dealing with violent videos shows the perils of rolling the service out without the technology or manpower to police it. Facebook said the Minnesota video was removed because of a “technical glitch,” which it didn’t explain. The video was reinstated after users complained that it showed an important news event.


Facebook and Twitter both have standards that limit the violent content users can post. Both ban content that mocks or praises violence but allow it when the material is newsworthy.

The way social-media sites have censored content in the past, relying mostly on users to flag objectionable posts that are then screened by computer programs and human beings, isn’t always sophisticated enough for live video, experts say. They add that no software exists that can identify violence on streams without human intervention.
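As a purely illustrative sketch of that flag-first flow, the Python below queues user reports, runs them through a stand-in automated screen, and escalates uncertain cases to a human reviewer. Every identifier and threshold here (automated_screen, AUTO_REMOVE_THRESHOLD, HUMAN_REVIEW_THRESHOLD) is a hypothetical assumption, not a description of any company’s actual system.

```python
from collections import deque

# Hypothetical flag-then-review pipeline. The thresholds and the scoring
# stub are invented for illustration; real systems use trained classifiers.
AUTO_REMOVE_THRESHOLD = 0.95   # assumed score above which software acts alone
HUMAN_REVIEW_THRESHOLD = 0.50  # assumed score that escalates to a human

def automated_screen(video_id):
    """Stand-in for a model scoring how likely a video violates policy."""
    return 0.7  # a real classifier would inspect the actual footage

review_queue = deque()  # videos awaiting a human decision

def handle_user_flag(video_id):
    """Route a user report: auto-remove, escalate to a human, or leave up."""
    score = automated_screen(video_id)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"
    if score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.append(video_id)
        return "escalated to human review"
    return "left up"

print(handle_user_flag("stream-123"))  # -> escalated to human review
```

The key limitation the experts describe is visible in the stub: without a reliable automated score, everything depends on the human review queue keeping pace with the flags.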

Facebook says it has a team working around the clock to review videos flagged by users. Twitter’s Periscope asks randomly selected viewers whether comments on live broadcasts that are flagged by others should be censored.
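Twitter hasn’t published the mechanics of that process, but a minimal sketch of a “flash jury” of randomly selected viewers might look like the following; the sample size, majority rule, and function names are all assumptions made for illustration.

```python
import random

def flash_jury_hides_comment(viewers, collect_vote, jury_size=5):
    """Poll a random sample of current viewers; hide on a majority vote."""
    jury = random.sample(viewers, min(jury_size, len(viewers)))
    votes_to_hide = sum(1 for juror in jury if collect_vote(juror))
    return votes_to_hide > len(jury) / 2

# Example: simulate 200 viewers who each vote to hide 80% of the time.
viewers = [f"viewer-{i}" for i in range(200)]
simulated_vote = lambda juror: random.random() < 0.8
print("hide comment:", flash_jury_hides_comment(viewers, simulated_vote))
```

Sampling jurors at random, rather than relying only on the person who flagged the comment, spreads the judgment across viewers with no stake in the dispute.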

On Friday, Facebook acknowledged it hasn’t mastered monitoring live video. “Live video on Facebook is a new and growing format,” the company said in a post. “We’ve learned a lot over the past few months, and will continue to make improvements to this experience wherever we can.” One improvement has been its ability to interrupt a flagged live stream if it violates the company’s rules.

Live video, however, poses a particularly tough challenge. It is hard to tell if a video is going to run afoul of a site’s standards when it is unfolding in real time.

In the French video posted last month, the attacker made incendiary comments but the video didn’t contain any violent images, said David Thomson, a Paris-based journalist who writes about jihadism in France. He was able to view the video because he followed the alleged terrorist on Facebook. He wrote about it in a series of tweets, after which Facebook took it down. He didn’t flag it directly to Facebook.

Tech companies are starting to test a more proactive approach to handling such content. For the past few months, Facebook has been running an experiment in which it reviews publicly shared live broadcasts once they have reached a certain number of views or gone viral, even if there are no complaints. Twitter says Periscope is working on a tool to automatically monitor live-streamed video clips for offensive actions or harassment.
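A view-count trigger of that kind could be approximated as below; the thresholds, the growth-rate heuristic, and all names are invented for illustration and shouldn’t be read as Facebook’s actual criteria.

```python
import time

# Illustrative proactive review trigger: a public live stream is queued for
# review once it crosses an audience threshold or a rough "going viral"
# growth rate, even if nobody has flagged it. All values are assumptions.
VIEW_THRESHOLD = 50_000         # assumed absolute audience trigger
VIRAL_VIEWS_PER_MINUTE = 5_000  # assumed growth-rate trigger

class LiveStream:
    def __init__(self, stream_id, started_at):
        self.stream_id = stream_id
        self.started_at = started_at
        self.views = 0
        self.queued_for_review = False

def maybe_queue_for_review(stream, review_queue, now=None):
    """Queue a stream for human review based on audience size, not flags."""
    now = now or time.time()
    minutes_live = max((now - stream.started_at) / 60, 1e-9)
    going_viral = stream.views / minutes_live >= VIRAL_VIEWS_PER_MINUTE
    if not stream.queued_for_review and (stream.views >= VIEW_THRESHOLD or going_viral):
        stream.queued_for_review = True
        review_queue.append(stream.stream_id)  # reviewed with zero complaints

# Example: a stream that racks up 12,000 views in two minutes gets queued.
queue = []
stream = LiveStream("live-42", started_at=time.time() - 120)
stream.views = 12_000
maybe_queue_for_review(stream, queue)
print(queue)  # ['live-42']
```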

The launch of live-video streaming before grasping all the challenges reflects a Silicon Valley ethos: ship it out and work out the kinks later.

In May, a few months after Facebook rolled out Facebook Live to its 1.65 billion monthly users, a 30-year-old Florida man live-streamed his standoff with police over a court order to hospitalize him. In a series of nine videos over more than three hours, the man made threats to police.

“Shoot me,” the man, Adam Mayo, screamed at the Tampa, Fla., SWAT team in one video. Later, he said, “You’re going to witness a death right now, but I won’t be the only one.” In the end, he was hospitalized.

Despite their disturbing nature and the threats, Facebook determined the videos didn’t violate its rules and allowed them to remain on the site without a warning.

After the Minnesota video appeared, people immediately seized upon the value of showing the violence. “This kind of real-time availability of information activated the movement,” said Michelle Gross, president of Communities United Against Police Brutality in Minneapolis.


SOURCE: WSJ
