On AI regulation, the EU and the U.K. set wildly divergent courses


Hello and welcome to Eye on AI.

It’s been another busy week for AI regulation. And we can see some pretty stark divisions emerging in the way different countries and regions are approaching the technology.

The European Union reached a deal late on Friday on the final text of its landmark AI Act. For more on the elaborate final negotiations between the EU's member states, the European Commission, and the European Parliament, I recommend this Politico story.

The final law is extraordinarily complex. If you want to get a sense of just how complicated it will be for companies to comply, check out this “one page” flow chart that Tom Whittaker and his colleagues at the law firm Burges Salmon put together to help clients navigate the act’s many requirements. (I guess it’s a one-pager if you’re using a papyrus scroll or something.)

Needless to say, this will not be easy for businesses to navigate. And remember the EU AI Act doesn’t just apply to companies headquartered in the EU, but to international firms that have European customers and employees, too.

It is hard to look at Burges Salmon’s flow chart and disagree with critics who say the EU’s approach is too heavy-handed. There’s already a yawning gap between companies’ stated AI ambitions and what they are putting into practice. Companies are being held back by fears about cost, ethics, hallucinations, security risks, and more. And this law will further lengthen the time it will take to bridge that gap. It may also mean that some companies decide, as OpenAI has at times threatened, to simply not sell AI technology in Europe.

Companies “should start to think about compliance now,” Whittaker says, especially as AI systems may soon be fundamental to a company’s business model or, at least, the delivery of specific products and services. Adapting to the new law may be difficult and take time. And failing to comply, he cautions, could mean big fines and the possibility regulators will force your business to stop using or selling the AI in question.

That said, the EU’s approach does have some components that the U.S. and other countries could borrow. Calibrating regulatory requirements to different risk categories is a particularly worthwhile framework. The final text also helpfully clarifies which AI uses are unlikely to be high-risk, which should give businesses some peace of mind.

On the controversial issue of how to regulate general-purpose AI models, such as OpenAI’s GPT-4 and Alphabet’s Gemini models, the final text divides these models into two buckets. There are run-of-the-mill general-purpose models, which it defines as pretty much every LLM and chatbot currently on the market. These are mostly subject to transparency requirements: Tech companies have to tell the EU’s new AI Office, which was created as part of the final negotiation last week, how their models work. They also have to make public a summary of their training data and show that they have policies in place to respect copyright laws. That last bit may be hard for many AI companies to comply with. Most models to date have been built using copyrighted data taken without consent from across the internet. In a sop to France, free and open-source general-purpose models, such as those offered by Parisian AI darling Mistral, are subject to lighter risk assessments. (But only up to a point. More on that in a minute.)

Then the act creates a whole separate category for general-purpose models that pose a “systemic risk.” This would seem to apply only to models more powerful than those that currently exist. These will have to comply with a set of testing and safety procedures that the AI Office will develop. The software companies creating them will have to show the AI Office that they’ve thought hard about wider risks, from bioterrorism to the sort of killer AI scenarios that are a staple of science fiction, and developed ways to mitigate those threats. They also will have tighter cybersecurity obligations to protect these models.

The big question, then, is what happens to open source? The trend has been that open-source companies come close to replicating the performance of the most capable proprietary models, but with roughly a six-month lag. They have also been doing so with models that are smaller, making them easier and cheaper to use. At some point soon, the best proprietary models will cross into this systemic-risk category, and with the AI Act written as it is, it isn’t clear that open-source companies will be allowed to follow them. There’s no real way for an open-source model to comply with some of what the EU seems to be envisioning: the whole point of open-source software is that users can do whatever they want with it (and while companies have licensing terms to try to prevent misuse, a lawsuit over license violations is not about to stop a bioterrorist or The Terminator).

Another intriguing thing about the final text of the AI Act is some of the powers it gives to the AI Office. The new department will be able to rule on whether partnerships between AI startups and Big Tech giants, such as OpenAI’s relationship with Microsoft, or vertical integrations, such as Alphabet’s ownership of DeepMind, impede competition. So it is taking on a competition-enforcement role usually left to the EU’s antitrust regulator or national competition authorities.

Meanwhile, across the English Channel, the U.K. government published its policy approach to AI today. It is strikingly different from the EU’s. The British government has ruled out updating any of the country’s laws to address novel issues raised by AI. It is instead planning to let existing regulators figure out how to apply existing laws to the new technology. In many cases, this will give businesses the freedom to implement AI faster. But in others, such as the government’s decision earlier this week to rule out changing U.K. copyright law to create an exception for AI, it may slow tech companies down.

In talking about its approach, the U.K. is championing “ease of compliance” as a major selling point. “Regulation must work for innovators,” the government says, and it has even created a service where tech companies and startups can ask regulators for advice on compliance before they launch a product. The government says it realizes new, binding rules for AI development may be needed at some point—but, it insists, that time is not now. (Never mind that the U.K. and the U.S. took a similar approach with the internet and social media, and look where that got us.)

Unlike the EU, which, despite an AI Act text that runs to nearly 900 pages, left most of the specific standards and procedures companies will need to implement to be defined at a later date, the U.K. has already published some voluntary technical standards and assurance techniques. It also announced it is creating a new cross-government body on AI so that sector- and industry-specific regulators can share expertise. And it’s spending £10 million to help regulators boost their AI knowledge and capabilities, as well as £9 million on a new partnership with the U.S. on responsible AI. But given that many top AI researchers now command multimillion-dollar pay packages at the likes of OpenAI and Google, that £10 million may not buy much expertise.

Ironically, the only area where the British government has broken with this mostly laissez-faire approach is when it comes to the AI risk that remains most theoretical: the idea, as Sam Altman says, that AI could mean “lights out for all of us.” Here, the U.K. has led the world, spending £100 million on an AI Safety Institute that is going to come up with guardrails and safety tests for the most advanced general-purpose models. That Institute has managed to hire some impressive experts from academia and leading AI companies, including Google DeepMind.

Another irony: It is usually the U.K. that stakes out a position midway between the American and European approaches. But this time, it may be the U.S. that finds the middle ground, emulating aspects of both London’s and Brussels’s AI policy ideas.

For more on how AI policy is shaping up, I encourage you all to read my colleague Vivienne Walt's look at the global battle lines over AI regulation in the latest edition of Fortune magazine. You can check it out here.

With that, here’s the rest of this week’s AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction, Feb. 6: An earlier version of this story misspelled the name of law firm Burges Salmon.

This story was originally featured on Fortune.com