Edward Snowden: Researchers Should Train AI to Be ‘Better Than Us’

AUSTIN, Texas — Artificial intelligence (AI) models might soon surpass humans’ capabilities, but only if we stop teaching them to think like us and allow them to “be better than us,” Freedom of the Press Foundation President Edward Snowden said at Consensus 2023.

The former National Security Agency whistleblower, who joined the conference virtually from Russia, shared his cautiously optimistic views on the future of AI, the technology that exploded into the mainstream after OpenAI’s ChatGPT rollout last fall. While Snowden at times echoed some experts’ warnings that AI technologies might empower bad actors, he also considered positive use cases for the emerging technology.

Snowden argued AI models could obstruct government surveillance rather than fuel invasive intelligence programs.

"Maybe they could stop spying on the public and start spying for the public,” Snowden said. “That'd be a net good."


Read full coverage of Consensus 2023 here.

However, he also warned that the launch of ChatGPT and other increasingly sophisticated AI models could fuel big tech and government-driven initiatives to encroach upon users’ privacy.

To prevent bad actors from co-opting AI technologies, people must fight to keep open AI models open, Snowden told CoinDesk.

“People are going to be raising the red flag of 'software communism,' where we need to declare the models must be open,” Snowden said.

He aimed his criticism at emerging AI models that are becoming less and less open, calling out OpenAI in particular.

"It's a poor joke, right? They refused to provide public access to their trading data, their models, the weights and so on – but they're a leader in the space. They're being rewarded. They're being rewarded for antisocial behavior."

How the technology is used, he argued, boils down to how researchers train AI engines. Much AI training currently entails spoon-feeding the AI large amounts of online content, including social media comments, which Snowden says is not ideal.

“They’re training [AI models] on Reddit threads,” Snowden said. “It’s like the internet equivalent of YouTube comments. But you want to create something decent, good, that’s creative and useful.”

Current training methods, he also observed, revolve around teaching AI models to “think like us,” which could limit the technology’s potential to better humanity.

“As with children, we don't need machines to be like us,” Snowden said. “We need them to be better than us. And if they aren't better than us, we did a terrible job.”

Read more: What’s the Relationship Between Crypto and AI? Is There Any?