
Building cultural context is key for successful generative AI initiatives in Asia

There is a need to ensure seamless integration with the diverse cultural contexts prevalent in many Asia-based organisations.

The widespread adoption of Generative AI (GenAI) has been nothing short of remarkable, capturing the imagination of users worldwide and propelling AI into the mainstream. According to a recent IDC report, about 32% of the Asia/Pacific organisations surveyed expressed their commitment to invest in GenAI technologies, with 38% of respondents exploring use cases to implement using GenAI.

However, leveraging the foundation models of GenAI presents unique challenges, such as a lack of domain-specific knowledge, knowledge that is frozen at the point of training, and the risk of biased information. This last point becomes especially relevant for a region as culturally complex as Asia Pacific and Japan (APJ). There is a need to ensure seamless integration with the diverse cultural contexts prevalent in many Asia-based organisations, all while addressing other priorities such as securing data, driving higher ROI and delivering value for emerging use cases.


Integrating cultural context with larger models

Some pressing enterprise challenges of developing GenAI include organising mountains of training data and provisioning large-scale compute infrastructure for training and inferencing, which can easily exceed US$10 million in cloud costs alone.

While organisations can also adapt models trained by larger companies, fine-tuning and optimising them for their specific needs, there remains a risk of inheriting biases from the original content which has been mined from multiple sources. Additionally, navigating complex algorithms on large-scale infrastructure further complicates the development process.

Large language models can help contextualise content. Because they learn from sources spanning diverse cultures, these models develop a deep understanding of how language is used in different cultural contexts and can generate content that aligns with them. Larger models can also translate content into a specific cultural context by considering the nuances of language and cultural references.
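As a toy illustration of how cultural context might be injected at inference time, the sketch below prepends locale-specific guidance to a prompt before it is sent to a model. The guidance strings, locales and helper function are all hypothetical, not part of any particular product or API:

```python
# Illustrative sketch: wrapping a user prompt with guidance for a target
# cultural context before passing it to a large language model.
# The locales and guidance text here are invented examples.

CULTURAL_GUIDANCE = {
    "ja-JP": "Use formal keigo register and avoid direct refusals.",
    "id-ID": "Prefer Bahasa Indonesia and locally relevant examples.",
    "en-SG": "Use Singapore English conventions and local terminology.",
}

def contextualise_prompt(user_prompt: str, locale: str) -> str:
    """Prepend locale-specific guidance; fall back to a neutral tone."""
    guidance = CULTURAL_GUIDANCE.get(locale, "Use a neutral, inclusive tone.")
    return f"[Context: {guidance}]\n{user_prompt}"

prompt = contextualise_prompt("Draft a customer apology email.", "ja-JP")
```

The same pattern generalises to retrieval-augmented setups, where the guidance would be fetched from a curated store of regional style guides rather than a hard-coded table.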

Continuous evaluation and trusted infrastructure foundation

However, it's important to note that while language models can incorporate cultural and industry-specific contexts, they are not infallible. Continuously evaluating and improving the training processes and data sets is crucial to minimise potential biases and ensure fair representation across different cultures.
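One simple form this continuous evaluation can take is auditing how well different cultural groups are represented in a training sample. The sketch below is a minimal, assumed data layout (records tagged with a `culture` field) and an illustrative 10% threshold, not a prescribed methodology:

```python
# Illustrative sketch: flag cultural groups whose share of a training
# sample falls below a minimum threshold. The field name "culture" and
# the 10% threshold are assumptions for the example.
from collections import Counter

def representation_report(samples, min_share=0.10):
    counts = Counter(s["culture"] for s in samples)
    total = sum(counts.values())
    return {
        culture: {"share": n / total, "underrepresented": n / total < min_share}
        for culture, n in counts.items()
    }

data = [{"culture": "ja"}] * 70 + [{"culture": "id"}] * 25 + [{"culture": "th"}] * 5
report = representation_report(data)
```

A real evaluation pipeline would go further, measuring downstream model behaviour per group rather than raw counts, but even a check like this surfaces obvious gaps before training begins.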

In addition, we need to build a trusted infrastructure foundation and apply trusted methods that refine responses with guardrails and tuning procedures to deliver the right AI outcomes. BloombergGPT, for example, is one of the first industry examples of the successful verticalisation and deployment of a foundational language model for a specific purpose.

By fine-tuning its language generation capabilities to understand and generate content relevant to the financial industry, BloombergGPT empowers professionals with timely and industry-specific insights while adhering to the highest standards of accuracy, compliance, and trustworthiness within the financial domain.
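BloombergGPT's actual guardrail mechanisms are not public, but the general idea of a post-generation guardrail can be sketched in a few lines: a model's response is checked against restricted patterns before it reaches the user. The topic list and refusal message below are purely illustrative:

```python
# Illustrative sketch of a post-generation guardrail: block responses
# that match restricted patterns before returning them to the user.
# The patterns and refusal text are invented examples, not any
# vendor's actual mechanism.
import re

RESTRICTED_PATTERNS = [r"\binsider information\b", r"\bguaranteed returns\b"]

def apply_guardrail(response: str) -> str:
    for pattern in RESTRICTED_PATTERNS:
        if re.search(pattern, response, flags=re.IGNORECASE):
            return "I can't help with that request."
    return response  # response passed the guardrail unchanged
```

Production guardrails typically layer classifier models and human review on top of pattern checks, but the control point is the same: filtering sits between generation and delivery.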

Building responsible AI by design

As the adoption of GenAI becomes prevalent, regulations are catching up. The G7 initiated the "Hiroshima AI Process", an effort by the bloc to determine a way forward to regulate AI. Meanwhile, Singapore is leading the way in APJ on AI regulation, creating toolkits to guide government agencies in deploying AI applications responsibly, based on explainable AI and data governance principles.

Public-private collaboration plays a pivotal role in promoting and implementing best practices and standards for responsible AI. We work with Singapore’s AI Verify Foundation, alongside other industry players, to harness the collective power and contributions of the global open-source community to develop AI testing tools that enable responsible AI.

Besides enabling the responsible deployment of AI, we need to consider how we can ensure it by design. By protecting data on-premises, companies can reduce inherent risks and meet regulatory requirements.

This is where Project Helix comes in. It simplifies enterprise deployments with a tested combination of optimised hardware and software, ensuring the trust, security and privacy of sensitive and proprietary company data, as well as compliance with government regulations, all while converting enterprise data into valuable business outcomes.

Overcoming challenges and seizing opportunities

The democratisation of AI, particularly GenAI, is reshaping the region and unlocking immense market potential. Overcoming challenges related to data integrity, compute infrastructure, technical expertise, and bias is essential for the successful development and deployment of GenAI – especially in a diverse region such as APJ.

Continuous evaluation and improvement of training processes are necessary to address biases and ensure fair representation across cultures. As GenAI continues to advance, driving innovation, efficiency, and transformative change, we need to continue collaborating to harness the full potential of GenAI and shape a brighter, more inclusive future.

Peter Marrs is the president for APJ at Dell Technologies.
