The key to effective AI regulation: Collaboration

Tackling the topic of artificial intelligence regulation often ends with more questions than answers. So how can we go about it?

Regulating technology – especially internet-based technology – has been a slow, gruelling, and complex uphill battle. Though strides have been made in protecting children and personal data, enforcing copyright, ensuring net neutrality, and adjusting existing laws to cover crimes committed in cyberspace, regulating the ever-evolving ways people use technology remains a major challenge – especially when bad actors are involved.

It may seem cliché to compare the internet to the Wild West, but the comparison is apt. Advances are made at breakneck speed, and information travels even faster. This unmatched ability to create and innovate at scale can work both to the benefit and to the detriment of society – and because of the benefits the internet and technology offer, regulation is a tricky topic. How do governments, companies, and organisations regulate technologies that can do so much good and spread critical information, without stifling them?

With the rise of generative AI – an artificial intelligence technology able to produce text, images, videos, audio, and synthetic data at scale – harmful disinformation and misinformation campaigns are being unleashed across social media platforms with an unprecedented power to influence global public opinion. Now is the time to take regulation seriously. And it is heartening to see that many governments are doing just that.

However, the task is a difficult one that cannot be tackled hastily, despite the need for it to happen as quickly as possible.

The complexity of AI regulation

While discussions about AI regulation are happening worldwide, regulators in Southeast Asia are hoping to move as quickly as possible to create a framework as well as tools that will help the region use AI responsibly moving forward.

In February, ministers from the Association of Southeast Asian Nations (ASEAN) prioritised the development of a regional “AI guide”, which they hope to have drafted by the end of 2023. And in Singapore, the Infocomm Media Development Authority (IMDA) recently established the AI Verify Foundation, which aims to harness contributions from the global open-source community to develop a testing tool that enables the responsible use of AI, in the hopes of boosting AI-testing capabilities to meet the needs of companies and regulators worldwide.

However, whether such a tool will be effective in fighting ill-intentioned uses of generative AI – such as the undermining of elections through widespread, AI-powered misinformation and disinformation campaigns, a concern raised by OpenAI CEO Sam Altman – remains to be seen.

It is at least a step forward – and in the right direction.

Though regulating technology is always a challenge, tackling the topic of artificial intelligence regulation often ends with more questions than answers. Three major questions stand out: attribution, cross-border enforcement, and the pace of development.

The attribution question

The mayor of a town in Australia is considering a defamation suit against OpenAI and its genAI-powered tool, ChatGPT, after the bot made false claims about his involvement in exposing a bribery scandal, painting him as the perpetrator rather than the whistleblower. Whatever the operators' intentions, more lawsuits against AI providers are likely to follow.

Perhaps the biggest question when discussing regulations around generative AI is this: who is responsible when AI is used maliciously? Different people will give different immediate answers, pointing variously to the model's creators, its operators, the training data, or the system as a whole. This difficulty in assigning blame makes it a massive challenge even to determine who should be held accountable, let alone how to regulate the technology.

The cross-border question

Aside from attribution, regulation becomes even trickier when considering the cross-border nature of technology. Content created or manipulated by someone using AI tools in one nation can have a harmful impact on another – especially when exploiting high-profile political matters.

In 2020, during clashes between soldiers at the border of India and China, misleading videos and images went viral in swathes: an out-of-context video of soldiers crying, an older video of Indian soldiers dancing to Punjabi music that was claimed to have been played over Chinese military speakers, and an image of a large Chinese speaker said to be so loud it was bursting Indian soldiers' eardrums. These pieces of misinformation hugely influenced public opinion, amplifying the discord between the two nations.

Content that moves across borders massively complicates regulation, let alone enforcement.

The development question

Perhaps the biggest concern of all when it comes to regulating AI is the pace of development. Generative AI technologies advance and evolve rapidly, putting cybersecurity and cyber-risk-management companies, as well as regulatory bodies, on the back foot – rather than creating progressive, forward-looking products and regulations, they can only react to new models and uses.

This concern is serious enough that many tech leaders have signed an open letter calling for a pause in AI development, alongside calls for transparency in AI systems, especially those used in healthcare and criminal justice.

Despite industry concerns, developers will not hit the pause button on all things AI so the rest of the industry and regulators can catch up. Because of this, more organisations need to empower themselves by adopting risk management technology that can monitor, analyse, and mitigate the impact of AI-generated content. By tracking how far and how fast narratives containing manipulated images spread, organisations can tackle this new generation of threats and stay ahead of fast-evolving technology designed to shift human perception.
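As a rough illustration of what such tracking could involve, here is a minimal sketch of a spread-velocity check over timestamped posts. The Post structure, its reach field, and the windowing thresholds are hypothetical simplifications for illustration, not any particular vendor's product or API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    """One social media post referencing the tracked narrative (hypothetical schema)."""
    timestamp: datetime
    reach: int  # e.g. the author's follower count when the post went out

def spread_metrics(posts: list[Post], window: timedelta) -> dict:
    """Estimate how far and how fast a narrative is spreading.

    Compares posting velocity in the most recent window against the
    window before it -- a crude acceleration signal for early warning.
    """
    if not posts:
        return {"total_reach": 0, "velocity": 0.0, "accelerating": False}

    posts = sorted(posts, key=lambda p: p.timestamp)
    end = posts[-1].timestamp
    recent = [p for p in posts if p.timestamp > end - window]
    prior = [p for p in posts
             if end - 2 * window < p.timestamp <= end - window]

    hours = window.total_seconds() / 3600
    return {
        "total_reach": sum(p.reach for p in posts),  # how far it has spread
        "velocity": len(recent) / hours,             # posts per hour, recent window
        "accelerating": len(recent) > len(prior),    # is it speeding up?
    }

# Example: flag a narrative whose posting rate jumped in the last six hours.
# metrics = spread_metrics(collected_posts, timedelta(hours=6))
# if metrics["accelerating"] and metrics["velocity"] > 50:
#     alert_analysts(metrics)  # hypothetical downstream hook
```

A real monitoring system would, of course, ingest posts from platform APIs and track many narratives at once; the point is that even simple velocity and acceleration signals can flag a campaign while there is still time to respond.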

The key to effective regulation

The pace at which AI solutions develop means that, to be effective, regulations will likely need to be broad, flexible, and easily adaptable in order to keep up with technological innovation.

But the real key to crafting and enforcing effective regulation is very likely collaborative governance. Regional governing bodies like ASEAN face unique vulnerabilities and considerations – an enormous regional population with varying levels of digital literacy, as well as different political systems, cultures, and languages – but crafting effective region-wide regulations that address the real threat of AI-generated malicious content can help solve cross-border challenges in enforcement.

By working together, either regionally or even globally, governments can create regulations that help to enact real consequences for those using AI maliciously, demand transparency and legitimacy, and foster trust and accountability.

Brice Chambraud is the VP Global Operations and Managing Director APAC at Blackbird.AI
