AI has ‘taken the world by storm,’ America’s Frontier Fund CEO says

America’s Frontier Fund CEO Gilman Louie joins Yahoo Finance Live to discuss generative AI, making AI safer and reliable, and the outlook for the development of technologies.

Video transcript

[AUDIO LOGO]

- The tech war between the US and China is heating up on a new frontier-- artificial intelligence. But pushing full steam ahead with the technology we know so little about has some concern. A group of tech leaders recently calling for a pause in AI use. Joining us now is Gilman Louie, who is America's Frontier Fund CEO with a look at where AI development should head in the US.

Gilman, thanks so much for joining us here today. As we've heard even more from some of the leading CEOs around major tech companies that, for themselves, could probably benefit monetarily in their own business model from generative AI, some of them still are sounding the alarm. Why do you think that is?

GILMAN LOUIE: Yeah, I think generative AI is transformative, and it's moving very, very quickly. And it's taking the world by storm. It's clear that major United States companies, OpenAI and Google, are leading in that particular space right now.

I think it's really important to understand that this technology is very early in its development. And so the issue is, how do we go forward with a safe, ethical framework in which to use and develop these technologies? And some may advocate for a pause.

But I think pausing is the wrong framework. I think we actually have to speed up the train, but speed it up in the areas around safety, around making sure that we have an ethical framework, and that companies are putting the effort into making these AIs more responsible. And if we do that, I think the United States will continue its leadership role and, quite frankly, provide a framework that the rest of the world can follow.

- Gilman, in a recent op-ed, you talk about a framework that would involve the different constituents, so talking about the companies that are developing it, perhaps some kind of governmental support as well. Are there any other instances that you can look to where that worked? I mean, I think in particular, of the struggle that the US is having right now in getting its arms around social media, for example. And this-- we're much later in that cycle than we are in the AI cycle. But nonetheless, when have we done this right? What examples can you look to?

GILMAN LOUIE: I mean, a great example is the development of our ability to go to the moon in our space program. That was clearly a collaborative effort between industry, science and academia, and government. The same thing happened with the internet. The internet was originally ARPANET, which the government invested in. Academia quickly adopted it, and then it spread quickly to industry.

I think the lesson that we can learn from social media is that waiting is not a solution. Government involvement-- which is different than government regulation-- government involvement early with academics, with scientists, with companies can shape a much more orderly ecosystem. I think that's what we all want. We want to have an ecosystem that has all the advantages of fast growth and the excitement of discovery, and the use of these tools to push out the frontiers of technology and science and their positive uses, but we also don't want it to run amok. We don't want the Wild West.

And that's why I think even the largest companies' CEOs are saying, no, it's time for all of us to be very thoughtful in how we build these technologies, and there is a role for government to play in this.

- It's amazing as we talk about so many different elements of artificial intelligence, because this is clearly not our grandfather's AI that we're talking about, given the fact that AI has been around for decades now. Just ask IBM, if you will. But at the end of the day, it really comes down to where some of the new-frontier, most investable parts of AI might be for someone who wants to get AI in their portfolio early on. What would you be looking for there?

GILMAN LOUIE: Well, there are the foundational technologies that are clear [INAUDIBLE]. So when you hear about AI, I would distinguish between AI and generative AI, or sometimes you hear the term large language models, which is how ChatGPT, Bard, and many of the other technologies are implemented today. To build that foundational technology, you need processors. You need fast chips, chips that Nvidia is working on. You need to have the right kind of cloud infrastructure to actually do this compute.

And on the other hand, there are some amazing applications that can be built off of this technology, everything from tools to help write computer code, to uses in scientific discovery, like medicine and pharmaceuticals, to technologies, quite frankly, that empower everyday things like writing a better essay or generating the presentations that are used every single day. This AI will affect us across the board in every industry. And so, what I would say to investors is to look not just at the AI companies, but at how existing companies are planning to leverage it, what their strategic plans are, and, quite frankly, what their investment is to make sure that they don't fall behind.

- And Gilman, finally, to circle back to the beginning, when we talked about the fact that there have been some calls to pause development in some ways, what are the risks of AI that you're worried about? In other words, what are the things we need to control for as this industry is developed?

GILMAN LOUIE: I think we need to have a safety framework. We don't want AI to be applied before people are thoughtful about how that technology is actually used, particularly when you apply AI to the physical world. So we need to come up with a safety regime, and that's a responsibility of both industry and government, right?

So some of us are asking, how do our federal agencies apply standards? Where are the safety frameworks? What's our version of the NTSB, the National Transportation Safety Board, for AI, if AI is actually going to be used robustly across our industrial and consumer base?

So I think there's really good thinking happening right now between industry and government. That dialogue is already beginning, but I also think there is a risk if you do it wrong. If you do it wrong, people don't understand how these algorithms work, they don't know where the data is coming from, and they aren't mindful of the things that could cause these AIs to become biased. Those are the negative impacts of poorly implemented AI.

And so, it's in all of our interests, both the companies that are producing this AI as well as the consumers and industries who are dependent on these AIs, to do it in a way where it's responsible, where it's safe, and where it's productive, all at the same time.

- All right. Wise words there. Gilman Louie, who is America's Frontier Fund CEO. Gilman, thanks so much for taking the time today.