Building cultural context is key for successful generative AI initiatives in Asia

There is a need to ensure seamless integration with the diverse cultural contexts prevalent in many Asia-based organisations.

The widespread adoption of Generative AI (GenAI) has been nothing short of remarkable, capturing the imagination of users worldwide and propelling AI into the mainstream. According to a recent IDC report, about 32% of Asia/Pacific organisations surveyed have committed to investing in GenAI technologies, while 38% of respondents are exploring use cases to implement with GenAI.

However, leveraging the foundation models behind GenAI presents unique challenges, such as a lack of domain-specific knowledge, knowledge that is frozen at the time of training, and the risk of biased information. This last point is especially relevant for a region as culturally complex as Asia Pacific and Japan (APJ). There is a need to ensure seamless integration with the diverse cultural contexts prevalent in many Asia-based organisations, all while addressing other priorities such as securing data, driving higher ROI and delivering value for emerging use cases.


Integrating cultural context with larger models

Pressing enterprise challenges in developing GenAI include organising mountains of training data and provisioning large-scale compute infrastructure for training and inferencing, which can easily exceed US$10 million in cloud costs alone.

While organisations can also adapt models trained by larger companies, fine-tuning and optimising them for their specific needs, there remains a risk of inheriting biases from the original content, which has been mined from multiple sources. Additionally, navigating complex algorithms on large-scale infrastructure further complicates the development process.

Large language models can help contextualise content. Because they are trained on a wide range of sources, these models develop a broad understanding of how language is used across different cultural settings and can generate content that aligns with them. Larger models can also translate content into a specific cultural context by accounting for nuances of language and cultural references.

Continuous evaluation and trusted infrastructure foundation

However, it's important to note that while language models can incorporate cultural and industry-specific contexts, they are not infallible. Continuously evaluating and improving the training processes and data sets is crucial to minimise potential biases and ensure fair representation across different cultures.

In addition, we need to build a trusted infrastructure foundation and activate trusted methods that refine responses with guardrails and tuning procedures to deliver the right AI outcomes. BloombergGPT™, for example, is one of the first industry examples of the successful verticalisation and deployment of a foundational language model for a specific purpose.

By fine-tuning its language generation capabilities to understand and generate content relevant to the financial industry, BloombergGPT™ empowers professionals with timely and industry-specific insights while adhering to the highest standards of accuracy, compliance, and trustworthiness within the financial domain.

Building responsible AI by design

As the adoption of GenAI becomes prevalent, regulations are catching up. The G7 initiated the "Hiroshima AI Process", an effort by the bloc to determine a way forward on regulating AI. Meanwhile, Singapore is leading the way in APJ on AI regulation, creating toolkits to guide government agencies in deploying AI applications responsibly, based on explainable AI and data governance principles.

Public-private collaboration plays a pivotal role in promoting and implementing best practices and standards for responsible AI. We work with Singapore’s AI Verify Foundation, alongside other industry players, to harness the collective power and contributions of the global open-source community to develop AI testing tools that enable responsible AI.

Besides enabling the responsible deployment of AI, we need to consider how we can ensure responsibility by design. By protecting data on-premises, companies can reduce inherent risks and meet regulatory requirements.

This is where Project Helix comes in. It simplifies enterprise deployments with a tested combination of optimised hardware and software. It ensures trust, security, and privacy of sensitive and proprietary company data, as well as compliance with government regulations, all while converting enterprise data into valuable business outcomes.

Overcoming challenges and seizing opportunities

The democratisation of AI, particularly GenAI, is reshaping the region and unlocking immense market potential. Overcoming challenges related to data integrity, compute infrastructure, technical expertise, and bias is essential for the successful development and deployment of GenAI, especially in a diverse region such as APJ.

Continuous evaluation and improvement of training processes are necessary to address biases and ensure fair representation across cultures. As GenAI continues to advance, driving innovation, efficiency, and transformative change, we need to continue collaborating to harness the full potential of GenAI and shape a brighter, more inclusive future.

 Peter Marrs is the president for APJ at Dell Technologies
