Nvidia: The Dominant AI Player

Posted by Jacob Radke
August 26, 2023

Nvidia reported second quarter financial results on Wednesday evening. Here are the highlights:

  • Revenue: $13.5 billion vs $7.19 billion last quarter and $11.2 billion forecasted.
  • Earnings: $2.50 per share vs $0.83 per share last quarter.
  • Gross profit nearly tripled from a year ago.
  • Operating income more than quadrupled from a year ago.
  • Data center revenue doubled.

Needless to say, Nvidia had a great quarter.

What’s crazy to me is that Nvidia was able to go from a roughly $1 trillion valuation down to about $250 billion, then back above $1 trillion, and keep climbing.

What’s also crazy is that they raised expectations by nearly 100% last quarter and still managed to surpass them and raise expectations again.

The problem is that investors pay for growth like this. And by pay, I don’t mean price per share.

This is the hard part about investing - growth doesn’t always come at a reasonable price.

What Does Nvidia Do, and Why Is It Integral to AI?

Nvidia creates GPUs, or graphics processing units: think CPUs, but built to do many calculations at once.

A CPU (central processing unit) is the brain of a computer that performs general-purpose computing tasks.

A GPU is a specialized processor designed to accelerate the creation of images, videos, and animations.

Nvidia’s GPUs are composed of thousands of cores that can handle thousands of threads simultaneously.

They deliver the once-esoteric technology of parallel computing.

Architecturally, the CPU is composed of just a few cores with lots of cache memory that can handle a few software threads at a time.

AI workloads are often both data and compute-intensive.

Because of the massive amount of data and computing resources needed, data centers are the go-to solution.

GPUs are preferred over CPUs for AI and in data centers because they can handle thousands of threads simultaneously.
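To make the contrast concrete, here is a minimal Python sketch of the data-parallel idea: the same operation applied to many chunks of data at once. It is only a CPU-side analogy (real GPU work would launch a CUDA kernel across thousands of hardware threads), and all names and numbers here are invented for illustration:

```python
# Conceptual sketch: GPUs apply the same operation to many pieces of data
# at once. A thread pool plays that role here on the CPU -- illustrative
# only; real GPU code would use thousands of hardware threads.
from concurrent.futures import ThreadPoolExecutor

def brighten(chunk):
    """The same simple operation, applied to one chunk of pixel values."""
    return [min(p + 50, 255) for p in chunk]

def parallel_brighten(chunks, workers=4):
    # Every chunk is processed independently -- no chunk waits on another.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(brighten, chunks))

image = [[0, 100, 200], [50, 150, 250], [240, 250, 255]]
print(parallel_brighten(image))
# [[50, 150, 250], [100, 200, 255], [255, 255, 255]]
```

The key property is that no chunk depends on any other, so adding more workers (or, on a GPU, more cores) directly shortens the job.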

The Ever-Growing Presence of Data

The amount of data created, consumed, and stored in 2022 was 98 TRILLION gigabytes.

In 2010 that was only 2 trillion.

Since then, total data created, consumed, and stored has grown 49x.
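As a quick sanity check on those figures (taking the 2-trillion and 98-trillion-gigabyte numbers above as given), the growth factor and the implied average annual growth rate work out as follows:

```python
# Back-of-the-envelope check of the data-growth figures above:
# 2 trillion GB in 2010 to 98 trillion GB in 2022.
data_2010 = 2_000_000_000_000      # gigabytes (2 trillion)
data_2022 = 98_000_000_000_000     # gigabytes (98 trillion)

growth_factor = data_2022 / data_2010            # total growth, 2010-2022
annual_growth = growth_factor ** (1 / 12) - 1    # 12 years in between

print(growth_factor)             # 49.0
print(round(annual_growth, 2))   # 0.38 -- roughly 38% growth per year
```

In other words, a 49x increase over 12 years implies data volumes compounding at close to 40% per year.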

That number is expected to keep growing exponentially over the next 15 years.

That data needs a resting place, and that resting place is in data centers.

Those data centers, again, prefer GPUs because they can handle thousands of threads simultaneously.

The other complication is that even though we created 98 trillion gigabytes of new data, we have very limited resources to make sense of all of that information.

Sure, you can create software that shows information to you, but you have nothing that lets you know what you should do with that information.

AI embedded in the data centers themselves offers massive amounts of informational leverage.

If your data is already being stored in data centers, it should be returned to you in a better format than it was sent in.

For businesses like Coca-Cola, this can help C-suite executives make better operational decisions, like how much Classic Coke to send to Minnesota versus Alabama, or where to build new distribution centers.

This blog is a form of data, and your reading it prompts Google, LinkedIn, Facebook, etc. to build a profile of you, trying to understand who you are and to identify other people like you.

When they do that, they create more data. They then use that data to identify other content you may like and to show people like you the content that you like.

What Is Parallel Processing?

The reason Nvidia gets so much credit for “powering” AI is because of this term.

AI takes massive amounts of data and attempts to make sense of all of it at once.

When you prompt ChatGPT with "how do I create an Excel function that combines two cells together?", it runs that prompt through a model trained on an enormous corpus of sources and generates the best answer it can.

With regular CPUs this would take forever, but with GPUs and parallel processing it takes far less time (a few seconds).

And with it all stored in the cloud (in a data center), there is no need for all of that information to be stored on your local device.

Back to the Coca-Cola example. When Coca-Cola wants to identify where to send more cans of a particular product, it needs to pull information from every place Coke is sold, all at once, and identify where the product sells best.

That is no simple computation. It requires parallel processing.
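A toy version of that computation, with made-up regions and sales numbers, might look like this in Python: each region's records are reduced independently, which is exactly the kind of work that parallelizes well:

```python
# Toy version of the aggregation described above: sales records from many
# regions are summed in parallel to find where a product sells best.
# All region names and unit counts are made up for illustration.
from concurrent.futures import ThreadPoolExecutor

sales = {
    "Minnesota": [120, 95, 140],
    "Alabama":   [200, 180, 210],
    "Texas":     [160, 150, 170],
}

def region_total(item):
    region, units = item
    return region, sum(units)

# Each region's records are summed independently of the others --
# an "embarrassingly parallel" reduction.
with ThreadPoolExecutor() as pool:
    totals = dict(pool.map(region_total, sales.items()))

best = max(totals, key=totals.get)
print(totals)   # {'Minnesota': 355, 'Alabama': 590, 'Texas': 480}
print(best)     # Alabama
```

At three regions this is trivial, but the same shape of computation over every store and vending machine in the world is where parallel hardware earns its keep.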

The Problem with Good Ideas

Just because something is a good idea, it doesn’t always mean it is a good investment.

The good idea can hold enough steam for some time to make it a good investment for early investors, but the further from "early investor" you get, the more you need to justify your purchase.

Right now, a purchase of Nvidia would require a lot of conviction that AI product creation will not slow down for the next few years.

A simple recession that prompts companies to pull back on capital expenditures could send that conviction to the grave.

Compare Nvidia to Cisco in 2001. Cisco was the Nvidia of the dot-com bubble, and its price-to-sales ratio looked almost identical to Nvidia's today: well above peers, then skyrocketing before collapsing back to "normal" levels.

Buying companies and assets is easy. It’s knowing when to sell that is hard.

It requires massive amounts of knowledge and conviction.

Scale your financial life with Fjell Capital - get a dedicated team, 3 meetings a year, unlimited phone calls, texts, and emails, an annual progress report, meetings designed around our 29 foundations, and professional asset management.