YouTube reveals it removed 8.3m videos from site in three months

YouTube said the quarterly report would ‘help show the progress’ the company was making. Photograph: Lionel Bonaventure/AFP/Getty Images

YouTube says it removed 8.3m videos for breaching its community guidelines between October and December last year as it tries to address criticism of violent and offensive content on its site.

The company’s first quarterly moderation report has been published amid growing complaints about its perceived inability to tackle extremist and abusive content.

YouTube, a subsidiary of Google’s parent company, Alphabet, is one of several internet companies under pressure from national governments and the EU to remove such videos.

It said the report was an important first step in dealing with the problem and would “help show the progress we’re making in removing violative content from our platform”.


In a blogpost, YouTube said it removed more than 8m videos between October and December 2017. “The majority of these 8m videos were spam or people attempting to upload adult content and represent a fraction of a percent of YouTube’s total views during this time period,” the post said.

YouTube said 6.7m were first flagged for review by machines rather than humans; of those, 76% were removed before they received a single view.

YouTube has also been criticised over content it allows. Days after the mass murder at Marjory Stoneman Douglas high school in the US in February, videos were promoted that claimed the survivors were “crisis actors” planted to build fake opposition to guns.

One clip briefly became the number one trending video on the site before it was removed for violating policies on harassment and bullying. YouTube’s community guidelines do not specifically ban misinformation or hoaxes, although the company has announced plans to link to Wikipedia pages for the most obvious conspiracy theories.

Google has promised to have more than 10,000 people working on enforcing its community guidelines by the end of 2018, up from “thousands” doing the job last year. Most will be human reviewers working on YouTube, but the figure also includes engineers working on systems such as spam detection, machine learning and video hashing.

The current removal process requires suspect content to be flagged first, then watched to see whether it breaches community guidelines, before a decision is made on its removal.

The vast majority of videos taken down – more than 80% – were flagged as suspect by one of Google’s automatic systems, the company said, rather than an individual.

Those systems broadly work in one of three ways: some use an algorithm to fingerprint inappropriate footage, and then match it to future uploads; others track suspicious patterns of uploads, which is particularly useful for spam detection.

A third set of systems use the company’s machine learning technology to identify videos that breach guidelines based on their similarity to previous videos. The machine learning system used to identify violent extremist content, for instance, was trained on 2 million hand-reviewed videos.
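The article does not describe YouTube’s internal tooling, but the first of those approaches – fingerprinting footage and matching it against future uploads – can be illustrated in principle. The sketch below is hypothetical: the function names and the use of a plain cryptographic hash are illustrative assumptions, not YouTube’s actual system, which would rely on perceptual hashing so that re-encoded or lightly edited copies still match.

```python
# Hypothetical sketch of hash-based fingerprint matching, in the spirit of the
# "fingerprint and match to future uploads" approach described above.
# Not YouTube's real system; names and logic are illustrative only.

import hashlib

# Fingerprints taken from footage already found to breach guidelines.
known_violative_fingerprints: set[str] = set()


def fingerprint(frames: list[bytes]) -> str:
    """Derive a fingerprint from a video's sampled frames.

    A production system would use perceptual hashing so near-duplicates
    still match; SHA-256 is used here only to keep the sketch self-contained.
    """
    digest = hashlib.sha256()
    for frame in frames:
        digest.update(frame)
    return digest.hexdigest()


def flag_known_content(frames: list[bytes]) -> bool:
    """Return True if an upload matches previously removed footage,
    so it can be queued for review before it receives any views."""
    return fingerprint(frames) in known_violative_fingerprints


# Example: register removed footage, then check a re-upload of the same frames.
removed_frames = [b"frame-1", b"frame-2", b"frame-3"]
known_violative_fingerprints.add(fingerprint(removed_frames))

print(flag_known_content(removed_frames))        # True  -> flagged automatically
print(flag_known_content([b"different-frame"]))  # False -> falls to other checks
```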

YouTube said that automatic flagging helped the company achieve a goal of removing more videos earlier in their lifespan.

While machine learning catches many videos, YouTube still lets individuals flag videos. Members of the public can mark any video as breaching community guidelines. There is also a group of individuals and 150 organisations who are “trusted flaggers” – experts in various areas of contested content who are given special tools to highlight problematic videos.

Regular users flag 95% of the videos that aren’t caught by the automatic detection, while trusted flaggers provide the other 5%. But the success rates are reversed, with reports from trusted flaggers leading to 14% of the removals on the site, and regular users just 5%.

Human flaggers also spot a very different breakdown of videos to those reported by machines: more than half the reports from humans were for either spam or sexually explicit content.