OpenAI sets up safety committee as it starts training new model

By Arsheeya Bajwa

(Reuters) - OpenAI has formed a Safety and Security Committee that will be led by board members, including CEO Sam Altman, as it begins training its next artificial intelligence model, the AI startup said on Tuesday.

Directors Bret Taylor, Adam D'Angelo and Nicole Seligman will also lead the committee, OpenAI said on a company blog.

Microsoft-backed OpenAI's chatbots, with generative AI capabilities such as engaging in human-like conversations and creating images from text prompts, have stirred safety concerns as AI models grow more powerful.

The new committee will be responsible for making recommendations to the board on safety and security decisions for OpenAI's projects and operations.

"A new safety committee signifies OpenAI completing a move to becoming a commercial entity, from a more undefined non-profit-like entity," said D.A. Davidson managing director Gil Luria.

"That should help streamline product development while maintaining accountability."

Former Chief Scientist Ilya Sutskever and Jan Leike, who led OpenAI's Superalignment team, which worked to ensure AI systems stay aligned with their intended objectives, left the firm earlier this month.

OpenAI disbanded the Superalignment team earlier in May, less than a year after creating it, with some team members reassigned to other groups, CNBC reported days after the high-profile departures.

The committee's first task will be to evaluate and further develop OpenAI’s existing safety practices over the next 90 days, following which it will share recommendations with the board.

After the board's review, OpenAI will publicly share an update on adopted recommendations, the company said.

Others on the committee include newly appointed Chief Scientist Jakub Pachocki and Matt Knight, head of security.

The company will also consult other experts, including Rob Joyce, a former U.S. National Security Agency cybersecurity director, and John Carlin, a former Department of Justice official.

OpenAI did not provide further details on the new "frontier" model it is training, except that it would bring its systems to the "next level of capabilities on our path to AGI."

Earlier in May, it announced a new AI model capable of realistic voice conversation and interaction across text and image.

(Reporting by Arsheeya Bajwa and Akash Sriram in Bengaluru; Editing by Muhammad Tasim Zahid and Vijay Kishore)