Britain’s AI safety institute to open US office

By Martin Coulter

LONDON (Reuters) - Britain’s artificial intelligence (AI) safety institute will open an office in the United States, hoping to foster greater international collaboration on the regulation of a fast-moving technology.

Government officials said the institute’s new office in San Francisco would open this summer, recruiting a team of technical staff to complement the organisation’s work in London and strengthen ties with its U.S. counterpart.

WHY IT’S IMPORTANT

Some experts have warned AI could pose an existential threat to humanity comparable to nuclear weapons or climate change, underscoring the need for greater international coordination on the technology’s regulation.

The institute's announcement comes days before the second global AI safety summit, to be co-hosted by the British and South Korean governments in Seoul this week.

CONTEXT

Shortly after Microsoft-backed OpenAI released ChatGPT to the public in November 2022, thousands of concerned onlookers – including Tesla mogul Elon Musk – signed an open letter calling for a six-month pause in the development of such systems, warning they posed unpredictable threats.

A year later, the first AI safety summit was held at Britain’s Bletchley Park, where world leaders and high-ranking business executives – including U.S. Vice President Kamala Harris and OpenAI’s Sam Altman – joined academics to discuss how best to regulate AI.

Tech leaders exchanged views with some of their sharpest critics, while China co-signed the “Bletchley Declaration” alongside the U.S. and others, signalling a willingness to work together despite mounting tensions with the West.

KEY QUOTE

Britain’s technology minister Michelle Donelan said: “Opening our doors overseas and building on our alliance with the US is central to my plan to set new, international standards on AI safety, which we will discuss at the Seoul summit this week.”

(Reporting by Martin Coulter. Editing by Gerry Doyle)