Bringing AI everywhere with open ecosystems and heterogeneous compute

AI requires a foundation that supports the various facets of model design, development and deployment across different compute platforms

In every conversation we have had with customers these past few months, artificial intelligence (AI) has been front and centre, dominating the technology ‘mindshare’ as companies look to digitise and modernise their operations.

Despite all the hype, AI is not new. AI has been working alongside humans for decades, from reducing manufacturing errors to helping users take better pictures on a smartphone. The technology has revolutionised many industries in myriad exciting ways. Healthcare professionals in Singapore, for example, are using AI in multiple forms to diagnose diabetic retinopathy (SELENA+), and as part of SingHealth and the National University of Singapore’s (NUS) programme to prevent diabetes, hypertension and hyperlipidemia (JARVISDHL). KK Women’s and Children’s Hospital and NUS also developed uSINE, a world-first AI-powered ultrasound-guided automated spinal landmark identification system that improves the accuracy and success rate of first-attempt needle insertion during spinal anaesthesia.


Yet, AI has dominated global headlines in recent months, thanks to the rise of ChatGPT, a generative AI application that draws on the vast amounts of data it ingests to mimic human-created content. ChatGPT has made the power of AI easy for people to grasp, and has put it at everyone’s fingertips to explore.

This new wave of AI has propelled many businesses to look for ways to boost their AI capabilities. Companies now understand that AI is no longer just about voice assistants or cameras. The technology has become vastly more engaging and can help people in a range of ways, from coding a website to generating copy and visuals.

However, what is often not addressed is the complexity of the compute required to successfully deploy AI. From consumer electronics to the edge and cloud, compute demand will continue to soar as AI takes off.

Democratisation of AI starts with compute

In the rush for more advanced AI adoption, many companies are diving into the technology without setting enough foundational principles to guide its integration. To start off on the right foot, organisations need to take a step back and ask themselves what business challenge they are trying to solve or what outcome they want to achieve, and how AI can be used in an efficient and cost-effective way with the right compute and software to enable it.

All of this is only meaningful if AI delivers accurate results in real time, and compute is essential to providing the speed and performance needed to train models, make decisions or predictions, perform image and speech recognition, and scale AI systems. Think of compute as the “brains” that help machines make sense of the world and decide the actions they take next.

This is why, as AI and its algorithms advance, so must the “brains” that power them. They not only need to accelerate the performance of AI, but also do so in a more efficient, secure, scalable, and sustainable manner. To achieve this and democratise AI, heterogeneous compute and an open ecosystem for different AI scenarios are crucial.

AI needs heterogeneous compute for better performance, cost, and energy efficiency

Ever-faster speed and performance will be the default expectation users have of AI in the future. While this means the demand for compute power will grow exponentially, it would not make sense for businesses to simply add more Central Processing Units (CPUs) and Graphics Processing Units (GPUs) or build more data centres to enable AI.

To power AI’s proliferation, there are two key areas that organisations must consider. First, identify the type of AI workload needed. Is it an AI chatbot to interface with customers, a large generative AI model like ChatGPT to create new content, an image recognition solution to find defects, or some other type of AI workload? Second, cost is an important consideration in determining whether the AI solution can be easily accessed by all.

Contrary to the conventional belief that all AI workloads require GPUs, the reality is that a more efficient way to get some AI-powered tasks done is often with general-purpose CPUs, the same ones that are already powering many of today’s data centres.

Take, for example, the workload of training a language model such as GPT-3. Training such large language models can cost millions of US dollars for a single model, yet most organisations will likely never need that scale and will instead train much smaller models. In fact, most organisations will only need to take pre-trained models and fine-tune them with their own smaller curated data sets, and this fine-tuning can be accomplished in minutes using Intel AI software and other industry-standard open-source software, running on general-purpose CPUs.
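As a rough sketch of what that CPU-based fine-tuning workflow can look like in practice, the example below uses the open-source Hugging Face Transformers library to fine-tune a small pre-trained model on a handful of labelled examples. The model choice, toy data set and hyperparameters are illustrative assumptions, not specifics from this article; a real project would substitute its own curated data.

```python
# A minimal CPU fine-tuning sketch using the open-source Hugging Face
# Transformers library. The model, toy data and hyperparameters below are
# illustrative assumptions, not specifics from the article.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"  # a small pre-trained model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# A tiny curated data set standing in for an organisation's own labelled data.
texts = ["The rollout went smoothly.", "The system keeps crashing."]
labels = [1, 0]  # 1 = positive, 0 = negative

class TinyDataset(torch.utils.data.Dataset):
    """Wraps tokenised texts and labels in the format Trainer expects."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# With no GPU present, training simply runs on the general-purpose CPU.
args = TrainingArguments(output_dir="finetuned-model",
                         num_train_epochs=1,
                         per_device_train_batch_size=2)

Trainer(model=model, args=args, train_dataset=TinyDataset(texts, labels)).train()
```

Nothing in this code assumes specialised accelerators; pointing it at a realistically sized data set scales the same few lines up without changes.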

Where there is indeed a need to train a large language model, dedicated AI accelerators such as Intel’s Gaudi2 present an alternative to traditional GPUs. Gaudi2 offers customers competitive advantages in both server and system costs, and the accelerator’s MLPerf-validated performance plus upcoming software advances make it an extremely compelling price/performance alternative to GPUs like Nvidia’s H100.

Hence, solving the AI challenge requires a holistic approach that accounts for vastly different use cases, workloads, and power requirements. Different AI applications will require different, purpose-built compute configurations, which could comprise a diverse mix of architectures and hardware running the gamut of CPUs, GPUs, Field Programmable Gate Arrays (FPGAs), and other accelerators.

In short, there is no one-size-fits-all when it comes to compute, and it is more important than ever that the compute platform be flexible and scalable enough to meet changing workload requirements if AI is to become practical.

AI needs an open ecosystem

On the other hand, AI is also a software problem. To democratise AI, we need an open ecosystem, and software is key to unleashing the power and scalability of AI. Without an optimised range of software frameworks and toolkits to support the hardware running AI workloads, performance will fall short of business requirements.

Developers need a build-once-and-deploy-everywhere approach with flexible, open and energy-efficient solutions that support all forms of AI. One such offering is Intel’s oneAPI Toolkits, which enable businesses to write code once and run it on a variety of hardware platforms.
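oneAPI itself is centred on C++ and SYCL, but the write-once principle it embodies can be sketched in Python as well. In the hedged example below, the same model code runs unchanged on whichever backend happens to be available; the device-probing order is an assumption chosen for illustration, not oneAPI’s actual mechanism.

```python
# A minimal sketch of the build-once-and-deploy-everywhere idea, using
# PyTorch's device abstraction. This only illustrates the principle;
# oneAPI itself is a C++/SYCL-based programming model.
import torch

def pick_device() -> torch.device:
    """Probe for the best available backend; the order here is an assumption."""
    if torch.cuda.is_available():                            # CUDA GPUs
        return torch.device("cuda")
    if hasattr(torch, "xpu") and torch.xpu.is_available():   # Intel GPUs
        return torch.device("xpu")
    return torch.device("cpu")                               # general-purpose CPUs

device = pick_device()

# The same model and inference code run unchanged on any of the backends.
model = torch.nn.Linear(16, 4).to(device)
batch = torch.randn(8, 16, device=device)
print(model(batch).shape, "computed on", device)
```

The point is not the specific framework but the separation of concerns: the application logic is written once, and the decision about which hardware executes it is deferred to deployment time.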

Tools such as this help businesses maximise the performance of their AI workloads while minimising the cost and complexity of managing multiple hardware platforms. Building AI on an open ecosystem makes it more broadly accessible and cost-effective. It removes roadblocks that limit progress and enables developers to build and deploy AI everywhere while prioritising power, price, and performance, using the hardware and software that best suit each job.

Investing in the future of AI

Without a doubt, AI is becoming more powerful and has unlocked new possibilities that many businesses are seeing for the first time. Whether businesses are running their AI in the cloud or creating their own on-premises solutions, they should be ready for a future where compute demand will skyrocket. AI technology will also require a foundation to support the different facets of model design, development, and deployment across different compute platforms as they continue to evolve.

Reaping the transformational benefits of AI depends on how businesses invest in the capabilities required to get the best out of AI – and a heterogeneous compute environment and an open ecosystem that protect and future-proof current investments will be important as businesses prepare for what is next with AI.

Alexis Crowell is the vice president of Sales, Marketing & Communications Group and general manager of Technology Solutions, Software & Services for Asia Pacific Japan at Intel
