Babysitting of the Internet – Periscope Introduces Increased Comment Moderation


In an effort to curb spam and abuse on its live streaming video app, Periscope recently announced its newest feature: comment moderation.

Launched in 2015, Periscope is a live video streaming platform for both iOS and Android devices. The app allows users to ‘go live’ from their mobile devices and talk to an audience that spans the globe. The videos are raw, uncensored and in real time.

Periscope users watching live videos can share their thoughts during the live broadcast; comments appear on a scrolling feed at the bottom of the video. Previously, Periscope offered comment moderation tools similar to those of its parent company, Twitter. “Users could report abuse via in-app mechanisms or block individual users. You could also restrict comments only to people you know, but this is less desirable for those interested in engaging with a wider, more public community on the app,” according to TechCrunch.

The new system, however, kicks the moderation up a notch.

What’s New?

Periscope’s goal was to develop a system that remains transparent, community-led and live. With the new comment moderation system, users can vote on comments they consider inappropriate. Here’s how it works (via periscope.com), with a rough code sketch of the flow after the list:

  1. During a broadcast, viewers can report comments as spam or abuse. The viewer who reports the comment will no longer see messages from that commenter for the remainder of the broadcast. The system may also identify commonly reported phrases.
  2. When a comment is reported, a few viewers are randomly selected to vote on whether they think the comment is spam, abuse or looks okay.
  3. The result of the vote is shown to voters. If the majority votes that the comment is spam or abuse, the commenter will be notified that their ability to chat in the broadcast has been temporarily disabled. Repeat offenses will result in chat being disabled for that commenter for the remainder of the broadcast.
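For readers who want to see the mechanics, here is a minimal sketch of that report-and-vote flow in Python. Everything here – the class and method names, the jury size of three, the two-strike threshold – is an assumption for illustration only; Periscope has not published its actual implementation.

```python
import random
from collections import defaultdict

# Assumed values for illustration; Periscope's real parameters are not public.
JURY_SIZE = 3        # "a few viewers are randomly selected"
MAX_STRIKES = 2      # a repeat offense disables chat for the broadcast

class BroadcastModerator:
    def __init__(self, viewers):
        self.viewers = set(viewers)
        self.strikes = defaultdict(int)   # commenter -> upheld reports
        self.muted = set()                # (reporter, commenter) pairs
        self.banned = set()               # chat disabled for the broadcast

    def ask_vote(self, juror, comment):
        # Placeholder: in the real app, each juror taps
        # "spam", "abuse" or "looks okay" on their screen.
        return random.choice(["spam", "abuse", "looks okay"])

    def report(self, reporter, commenter, comment):
        # Step 1: the reporter stops seeing this commenter's messages.
        self.muted.add((reporter, commenter))

        # Step 2: randomly select a small jury of other viewers to vote.
        pool = list(self.viewers - {reporter, commenter})
        jury = random.sample(pool, min(JURY_SIZE, len(pool)))
        votes = [self.ask_vote(juror, comment) for juror in jury]

        # Step 3: a majority of spam/abuse votes penalizes the commenter.
        if sum(v != "looks okay" for v in votes) > len(votes) / 2:
            self.strikes[commenter] += 1
            if self.strikes[commenter] >= MAX_STRIKES:
                self.banned.add(commenter)
                return "chat disabled for remainder of broadcast"
            return "chat temporarily disabled"
        return "comment looks okay"

# Example: one viewer reports a spammy comment during a broadcast.
mod = BroadcastModerator(viewers=["ana", "ben", "cal", "dee", "eve"])
print(mod.report(reporter="ana", commenter="eve", comment="BUY FOLLOWERS NOW"))
```

In the real app, of course, each “vote” comes from a human tapping a button, and how Periscope actually selects jurors and counts strikes has not been disclosed.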

Why Added Moderation?

According to the company’s announcement, comment moderation was created to reduce spam and abuse. But what do they mean by “spam” and “abuse”?

Periscope users are no strangers to comments and content deemed inappropriate during videos. Two recent instances include the assault of a man by two teens in France and the livestream of a teen’s rape in Ohio. Twitter has faced similar challenges, including the release of images of a gang rape victim in Brazil.

Given these events, one can understand why Periscope is taking action to increase moderation. Will the preventative measures work? We’d all like to think social media comments can be moderated. But is that realistic? As of August 2015, Periscope had a whopping 10 million accounts, two million of which were active. Those users had created 200 million broadcasts as of March 28, 2016.

Although great in theory, a few issues could arise from the new feature, including:

  1. Trolls – users who purposefully post inappropriate comments, whether out of spite, for enjoyment or simply to annoy other users. Imagine how many trolls could lurk in a pool of 10 million users, making comment moderation increasingly difficult.
  2. Flag-Happy Users – Conversely, users who simply disagree with a comment can flag it, and a flood of frivolous reports can be just as overwhelming.

“We want our community to feel comfortable when broadcasting,” explained Kayvon Beykpour, Periscope CEO and co-founder. “One of the unique things about Periscope is that you’re often interacting with people you don’t know; that immediate intimacy is what makes it such a captivating experience. But that intimacy can also be a vulnerability if strangers post abusive comments. Broadcasters have always been able to moderate commenters in their broadcast, but we’ve now created a transparent tool that allows the collective actions of viewers to help moderate bad actors as well.”

Periscope and Twitter are not the only apps that have people raising their eyebrows when it comes to sensitive or inappropriate content.

The List Continues to Grow

Example of Periscope’s comment moderation feature

A similar video sharing app, YouNow, poses related risks. Although the app is intended for users 13 and older, anyone with a valid email address can create an account. Users as young as 10 years old have been harassed on the app and encouraged to attempt dangerous dares, such as eating a spoonful of cinnamon without water, which can cause severe choking. In addition, commenters have asked young users to perform inappropriate, sexually explicit acts.

Users of all ages willingly share personal information, especially when broadcasting from their bedrooms and homes. As with Periscope, a live YouNow audience can span the globe, consisting of strangers of any age or gender.

Other apps raising similar concerns include Yik Yak, Kik, Whisper and After School. The list continues to grow.

You may also be interested in: A Risk You Need to Know Now: YouNow

Babysitters of the Internet?

Harassment is all too common online. According to a study conducted by the Pew Research Center:

  • 60% of internet users said they had witnessed someone being called offensive names
  • 53% said they had seen efforts to purposefully embarrass someone
  • 25% said they had seen someone being physically threatened
  • 24% said they had witnessed someone being harassed for a sustained period of time
  • 19% said they had witnessed someone being sexually harassed
  • 18% said they had seen someone be stalked

Online dangers are real, and the risks are significant. Crowdsourcing standards for “acceptable community behavior” may work in part for Periscope; Reddit practices a variation of this approach with user-defined subreddits, and it requires NSFW content to be labeled as such. Illegal acts committed in real time, however, may be impossible to police effectively.

The challenge with live broadcasting is just that – it’s live. Moderating after the fact doesn’t help your child when they are being dared or encouraged to do something dangerous or illegal; it doesn’t help when your employees make a mistake and unwittingly release confidential information.

Appropriate-use training is an excellent first step. Firestorm works with companies, schools and organizations of all types to review appropriate-use plans for social and new media. Because new applications and tools are created, released and adopted daily, plan reviews must keep pace with that rapid rate of change.

To learn more on hosting training for your workplace, school, or other organization, contact us at 770-643-1114.

Related: Brand and Reputation – Memo – The Yik Yak of the Office

