Facebook is testing pop-up messages telling people to read a link before they share it

Image Credits: Bryce Durbin / TechCrunch

Years after popping open a Pandora's box of bad behavior, social media companies are trying to figure out subtle ways to reshape how people use their platforms.

Following Twitter's lead, Facebook is trying out a new feature designed to prompt users to read a link before sharing it. The test will reach 6% of Facebook's Android users globally in a gradual rollout aimed at encouraging "informed sharing" of news stories on the platform.

Users can still easily click through to share a given story, but the idea is that by adding friction to the experience, people might rethink their original impulse to share the kind of inflammatory content that currently dominates the platform.

Twitter introduced similar prompts last June, urging users to read a link before retweeting it. The company quickly deemed the test feature a success and expanded it to more users.

Facebook began trying out prompts like this last year. Last June, the company rolled out pop-up messages warning users before they share any content that's more than 90 days old, in an effort to cut down on misleading stories taken out of their original context.

At the time, Facebook said it was looking at other pop-up prompts to cut down on some kinds of misinformation. A few months later, Facebook rolled out similar pop-up messages noting the date and source of any COVID-19-related links users share.

The approach demonstrates Facebook's preference for passively nudging people away from misinformation and toward its own verified resources on hot-button issues like COVID-19 and the 2020 election.

The jury is still out on how much of an impact this kind of gentle behavioral shaping can have on the misinformation epidemic, but both Twitter and Facebook have also explored prompts that discourage users from posting abusive comments.

Pop-up messages that give users a sense that their bad behavior is being observed might be where more automated moderation is headed on social platforms. While users would probably be far better served by social media companies scrapping their misinformation- and abuse-ridden platforms and rebuilding them more thoughtfully from the ground up, small behavioral nudges will have to do.