Security issues plague OpenAI as scrutiny increases

ANDREW CABALLERO-REYNOLDS—AFP/Getty Images

OpenAI is in an unwelcome spotlight this week after a significant security flaw was discovered in its ChatGPT Mac app and a previously undisclosed attack came to light.

The security flaw was first exposed on July 2, when engineer Pedro José Pereira Vieito pointed out on Twitter/X that the Mac version of ChatGPT was storing users' conversations in plain text rather than encrypting them, meaning anyone with access to the machine could read them with no effort. The app is available only as a direct download from OpenAI's website, so it does not have to go through Apple's security protocols, which would have prevented this.

The company has since patched the app, encrypting the conversations.

Meanwhile, on Thursday, the New York Times reported that the company had been the victim of a hack in which the attackers accessed OpenAI's internal messaging systems and obtained details about the company's technologies. OpenAI had not previously disclosed the breach publicly, though it did inform employees in April 2023.


The hacker was not thought to be associated with a foreign government, and the company did not alert the FBI or other law enforcement officials.

That breach led Leopold Aschenbrenner, an OpenAI technical program manager, to send a memo to the board of directors expressing concern that the company was not doing enough to prevent foreign government adversaries from stealing its secrets. Aschenbrenner was fired from OpenAI this spring.

The company told the Times the dismissal was not connected to that memo; according to The Information, he was fired for allegedly leaking information to journalists.

The revelation of the security gaps comes as OpenAI's dominance in the world of AI continues to grow. The company is heavily backed by Microsoft and has signed a growing number of deals with media companies to incorporate their content into its large language models.

This story was originally featured on Fortune.com