Investing in the next phase of AI race after Nvidia earnings

The AI trade is still alive and well! Chip powerhouse Nvidia (NVDA) beat fiscal first-quarter earnings estimates in results reported on Wednesday, and CEO Jensen Huang believes artificial intelligence will continue to be a "giant market opportunity" for the company.

Creative Strategies CEO and Principal Analyst Ben Bajarin and Harvest Portfolio Management Co-CIO and Wall Street Beats Partner Paul Meeks sit down with Yahoo Finance's Market Domination to share their perspectives on Nvidia's positive quarter and where the company can go in artificial intelligence inferencing, the process of running new data through an already-trained AI model to produce predictions or responses.
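To make that training-versus-inference distinction concrete, here is a minimal sketch in plain Python, a toy linear model unrelated to Nvidia's actual stack: training repeatedly adjusts parameters against labeled examples, while inference is a single cheap pass of new data through the finished parameters.

```python
# Toy model: y = w*x + b. All numbers and names are illustrative only.

def predict(w, b, x):
    # Inference: run new data through fixed, already-learned parameters.
    return w * x + b

def train(data, lr=0.01, epochs=500):
    # Training: repeatedly nudge parameters to fit known (x, y) examples.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x   # gradient step for the weight
            b -= lr * err       # gradient step for the bias
    return w, b

# Fit y = 2x + 1 from a few labeled points (the "learning stuff" phase)...
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])

# ...then serve predictions on unseen input (the "doing stuff" phase).
print(f"prediction for x=10: {predict(w, b, 10):.2f}")  # close to 21
```

The asymmetry matters commercially: training is an occasional, compute-heavy project, while inference runs continuously against user traffic, which is why the panel below keeps coming back to cost per query.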

"This is where the works really gotta come in to really understand where these workloads are going to go. Looking at where a lot of big businesses and enterprises are going to want to run their workloads, either in the cloud or on PRAM, and I do believe there's something to this," Bajarin lays out. "AI factories, sovereign AI, that's going to happen. But does that mean that's totally Nvidia systems? I think that's that's still an open question. It's really going to come down to cost."

Bajarin and Meeks also weigh in on where other hyperscaler cloud service providers and PC companies fit into the greater AI race.

For more of everything Nvidia, catch Yahoo Finance's exclusive interview with CEO Jensen Huang.

For more expert insight and the latest market action, click here to watch this full episode of Market Domination.

This post was written by Luke Carberry Mogan.

Video transcript

AI demand isn't going anywhere.

That's the clear message from Nvidia's stellar first-quarter results and robust second-quarter guidance.

Even as AI evolves past the training stage, Nvidia founder and CEO Jensen Huang told us he still believes the company is well positioned.

Here's what he said: "We have a great position in inference because inference is just a really complicated problem, you know, and the software stack is complicated.

The type of models that people use is complicated.

There's so many different types.

It's just gonna be a giant market opportunity for us."

We're looking at how to navigate the future of AI with the Yahoo Finance playbook. Here to discuss: Ben Bajarin, CEO and principal analyst at Creative Strategies, and Paul Meeks, co-CIO at Harvest Portfolio Management and partner at Wall Street Beats.

Thanks guys for being here, really appreciate it.

Ben, I wanna start with you on what Jensen was sort of referring to there. There has been some questioning from the investor community about this move from training to inference, right?

From when AI models are learning stuff to when they're doing stuff, right?

When they're analyzing and forecasting, and whether Nvidia chips would be best poised for that, especially given their cost. Did this quarter and forecast put that to bed, or do we still have questions about the future growth rate?

Yeah, I mean, I don't think it put it to bed, right.

This is a highly competitive market.

You've got Intel and AMD, you've got the hyperscalers doing their own inference chips and AI accelerators.

So clearly, there's a battle for these workloads.

I think that's the best way to really think about it.

That said, I think the argument that he's making is really about the Nvidia systems, right?

And it's not just GPUs, it's the whole system: it's the CPUs that are on board, it's the networking, it's the memory. Everything that they build is going to be the best total cost of ownership for these workloads.

And so there's a good argument for that.

I think you can make that case.

I really don't think we know the answers to that yet, but I do think this is becoming the most interesting thing to track. One, what are those workloads that are inference? We want Nvidia to give us more details on that. And then secondarily, does that number continue to grow each quarter as they run more inference workloads on their DGX systems?

And Paul, let's bring you in here as well and kind of start big picture. Interested just to get your take on that Nvidia print.

You know, they reported, they beat, Paul. Investors love it.

What did you make of the report, Paul?

And what did it teach you about the bigger, broader AI theme and trade?

Well, the thing that I was looking for was not any metric in particular financially.

It was: tell me the vision, and make me feel more comfortable coming out of the conference call that there is a longer runway for Nvidia in all kinds of facets of the AI infrastructure build than we thought. Because the issue with Nvidia, with the stock up more than threefold last year and doubled already this year, is: do they ever run into a growth wall?

Because if they do: only two fiscal years ago, this company earned just a couple of dollars per share; then it goes to $28 this year, probably $35 to $40 next year.

But if they actually see, at some point, and I'm not saying when, any sort of slowdown in spending, the stock used to be much lower because its earnings power was $3, not $45 or $50.

So that is the absolute key.

And I think Jensen Huang did a nice job yesterday articulating the vision. Only he can do that, to make us feel that they've got their bases covered and they're not going to peak, as Ben said, necessarily with large language model building.

Well, and the interesting thing is, even if spending doesn't go down, if it doesn't keep up at the same rate, and it doesn't look like it necessarily will, at least if you look at the revenue projections, then Nvidia might not be in the same position. Ben, as you said, there's a lot of competition out there.

So what does that mean for investors at this point? Is it too early to look to the competition to invest, or to look to the bigger ecosystem to invest?

Yeah, I think this is where the work's really got to come in, to really understand where these workloads are going to go. You know, looking at where a lot of big businesses and enterprises are going to want to run their workloads, either in the cloud or on-prem, I do believe there's something to this: AI factories, sovereign AI, that's going to happen.

But does that mean that's totally NVIDIA systems?

I think that's still an open question.

It's really going to come down to cost, right?

We know that running inference is a big part of the workload, and one of the most important things about inference, what you need to do technically, is be able to handle millions upon millions of people querying at the same time, getting predictive responses and doing all the AI things that we talk about.

So can every other product handle those concurrent inference workloads?

Jensen brought this up on the call.

It's something that we hear regularly as people are exploring where to put those workloads in the cloud.
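To make that concurrency point concrete, here is a toy sketch, plain Python asyncio with invented names and a stand-in model function rather than any real serving framework, of how an inference service can batch many simultaneous queries so one accelerator pass serves many users:

```python
import asyncio

async def model_forward(batch):
    # Stand-in for one batched accelerator pass (hypothetical model).
    await asyncio.sleep(0.05)
    return [f"answer for {q}" for q in batch]

async def batcher(queue, max_batch=8, max_wait=0.01):
    # Collect concurrent requests into batches so one pass serves many users.
    while True:
        batch = [await queue.get()]
        loop = asyncio.get_running_loop()
        deadline = loop.time() + max_wait
        while len(batch) < max_batch:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        results = await model_forward([q for q, _ in batch])
        for (_, fut), result in zip(batch, results):
            fut.set_result(result)

async def infer(queue, query):
    # What each "user" awaits: enqueue the query, wait for its answer.
    fut = asyncio.get_running_loop().create_future()
    await queue.put((query, fut))
    return await fut

async def main():
    queue = asyncio.Queue()
    worker = asyncio.create_task(batcher(queue))
    # 20 simultaneous queries get served in ~3 batched passes, not 20.
    answers = await asyncio.gather(*(infer(queue, f"query-{i}") for i in range(20)))
    print(len(answers), "answers; first:", answers[0])
    worker.cancel()

asyncio.run(main())
```

Real serving systems layer far more on top, but the trade-off between latency (max_wait) and throughput (max_batch) is the core of the concurrent-inference problem Bajarin describes.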

So again, I do think that there is an argument to be made for Nvidia.

But again, as we said, there's a huge set of competition out there.

And this point, this inference point, is the sole reason why Google is doing TPUs, why Microsoft is doing Maia, which is their accelerator, and why Amazon has Inferentia, their inference engine. They're doing it to build for these workloads.

So it's very competitive, but I just don't think we have an answer yet to where all these workloads are going to go.

Paul, beyond Nvidia, let's talk about another name you like, which is Dell.

That's another name, Paul, an AI play, another name investors have piled into. Why is Dell a buy here, Paul?

So I do think that there will be continued strength in server demand.

I've played Super Micro very successfully over the last couple of years.

However, that stock seems to be stretched to me.

So I'm looking to carry on with the same theme, and when I take a look at the other players, I much admire Dell.

I don't necessarily admire HP's management team.

I don't have confidence, or at least enough confidence, in their execution.

So Dell is my way to play it, because Super Micro has already been too good of an investment for me.

And Paul, what about the hyperscalers themselves?

Right?

Not necessarily the infrastructure as much, although I guess you could call the hyperscalers infrastructure.

But I'm just curious if you think that they're also a good way to play this.

Oh yeah, I own the hyperscalers.

I will say, or take, you know, some offense to a point made just now that there's competition coming.

I think a lot of these AI chip companies, yes, they've been talking about stuff on the podium.

They have it in press releases, they have it in powerpoint presentations, but frankly, I'll believe it when I see it.

So I think the threat on the AI chips is less for Nvidia than people are making it out to be.

I really think AMD will get there before too long; the company is super well managed under Dr. Lisa Su. But here is another competitor, Intel. Intel has sucked for a couple of decades.

And sure, they're taking the company in the right direction.

But come on, when they talk about their manufacturing process, let's see it first.

But I do think the hyperscalers are a way to play it.

And of course, another way to play it is some of the other companies in the AI ecosystem.

I mentioned Dell, we've got Super Micro. Arista Networks looks pretty interesting today, down 5% overnight.

So those are some of the plays instead of just piling into Nvidia, which I've already done.

Well, Ben, respond to that if you would. I mean, you guys are smarter on the technical stuff than I am, to be sure.

So Ben, I'm curious. You know, Jensen Huang would probably agree with Paul and say that this is very difficult to do at the advanced level that we do it, so it's really hard to come up with a new chip to compete. But exactly how hard is it?

I guess, I think the best way to think about this, and this sort of ties a little bit to the segment that you had before about what's going on with AI PCs.

You know, I was at the Monday launch with Microsoft and Copilot+ PCs, and they made a big deal of this new thing called the NPU.

So why are all these AI workloads moving to the NPU, and why are they making such a big deal about wanting to run these things on it?

And basically, what it comes down to is the sheer efficiency of running AI workloads to save power, right?

So for giving you battery life, the NPU is the best way to do that.

Everybody that runs these workloads in the cloud knows that GPUs are amongst the largest energy hogs when it comes to these things.
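As a back-of-envelope illustration of that performance-per-watt argument, with numbers invented purely for the arithmetic rather than measured from any real chip:

```python
# Hypothetical throughput and power figures, chosen only to show the math:
# a chip can be slower in absolute terms yet far better per watt.
devices = {
    "big general-purpose GPU": {"inferences_per_sec": 1000, "watts": 700},
    "dedicated NPU/accelerator": {"inferences_per_sec": 600, "watts": 150},
}

for name, d in devices.items():
    perf_per_watt = d["inferences_per_sec"] / d["watts"]
    print(f"{name}: {perf_per_watt:.2f} inferences/sec per watt")

# big general-purpose GPU: 1.43 inferences/sec per watt
# dedicated NPU/accelerator: 4.00 inferences/sec per watt
```

At data-center scale that ratio translates directly into power and cooling bills, which is the economic story Bajarin goes on to describe.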

And that's really where this inference debate goes.

So you can think of what Microsoft is doing with Maia, what Amazon is doing with Inferentia, and what Google is doing with TPUs, which already have a tremendous amount of customer success, and they're already proving this in the market.

They have a whole host of customer success stories: economic stories, money-saving stories, performance-per-watt stories.

And the way I think of those accelerators is that they're basically NPUs, that's really what they are, and there's a whole roadmap for these companies.

This is not stopping here.

They've got a long, long roadmap, and the bottom line is they're optimizing their software so that when a company comes to them and says, here's my AI workload, their software is deciding where it goes. And this is just the ultimate trend of the hyperscaler.

So yes, people can choose to run those on the GPUs if they want, or a whole host of businesses are just going to say, I'm bringing my workload to your cloud.

I'm running it in your software, and that abstraction layer at a Google, at Amazon, at Microsoft will then decide where those workloads go.

So there's a portion of this that is actually outside of Nvidia's hands.

They don't want to talk about that, because the hyperscalers are going to be in a position to control those workloads.

And this is why they're investing in this strategy: it just saves money and it saves energy.

And we know that's a huge deal in the hyperscaler world.

And that's why I think, over the course of the next few years, you will see far more diversification of workloads, because it makes economic sense and it's better from an energy-saving standpoint as well.
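Here is a toy sketch of that abstraction-layer idea; the backend names, prices, and selection rule are invented for illustration, since real cloud schedulers weigh far more factors:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    cost_per_hour: float       # hypothetical $/hour
    supports_training: bool

# Invented menu of silicon the cloud could route a workload onto.
BACKENDS = [
    Backend("gpu-cluster", cost_per_hour=4.00, supports_training=True),
    Backend("inference-asic", cost_per_hour=1.20, supports_training=False),
]

def route(workload_kind: str) -> Backend:
    # The platform, not the customer, picks the cheapest capable backend.
    eligible = [b for b in BACKENDS
                if workload_kind != "training" or b.supports_training]
    return min(eligible, key=lambda b: b.cost_per_hour)

print(route("training").name)    # gpu-cluster: the only backend that can train
print(route("inference").name)   # inference-asic: cheaper for serving
```

The point of the sketch is who holds the decision: once the customer hands the workload to the platform's software layer, the routing, and the portion of spend that lands on any one chip vendor, is out of that vendor's hands.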

Ben, Paul, that was a great discussion, great debate.

Thank you guys for joining us.

Thank you.

Thanks for having me.