What Investors Don't Understand About Nvidia's Recurring Revenue Model Via DGX Cloud - Oracle's Bombshell ER - Acres of Nvidia GPUs and AWS

Oracle's September 9th Q1 FY2025 earnings report:

"Oracle has 162 cloud datacenters in operation and under construction around the world," said Oracle Chairman and CTO, Larry Ellison. "The largest of these datacenters is 800 megawatts and will contain acres of NVIDIA GPU Clusters for training large scale AI models. In Q1, 42 additional cloud GPU contracts were signed for a total of $3 billion. Our database business growth rate is increasing as a result of our MultiCloud agreements with Microsoft and Google. At the end of Q1, 7 Oracle Cloud regions were live at Microsoft with 24 more being built, and 4 Oracle Cloud regions were live at Google with 14 more being built. Our recently signed AWS contract was a milestone in the MultiCloud Era. Soon customers will be able to use the latest Oracle database technology from within every Hyperscaler's cloud."
...
"As Cloud Services became Oracle's largest business, both our operating income and earnings per share growth accelerated," said Oracle CEO, Safra Catz. "Non-GAAP operating income was up 14% in constant currency to $5.7 billion, and non-GAAP EPS was up 18% in constant currency to $1.39 in Q1. RPO was up 53% from last year to a record $99 billion. That strong contract backlog will increase revenue growth throughout FY25. But the biggest news of all was signing a MultiCloud agreement with AWS—including our latest technology Exadata hardware and Version 23ai of our database software—embedded into AWS cloud datacenters. AWS customers will get easy and convenient access to the Oracle database when we go live in December later this year."

Acres and acres of Nvidia GPUs. Can you imagine a field of multi-million-dollar Nvidia GPU racks powering the next wave of super-advanced AI? That's what Oracle just reported in a bombshell earnings report. I can't remember many earnings reports where a company literally flexes another company's technology as the reason it is accelerating and generating record-breaking revenue, but here we are.

But here is the thing people don't get about Nvidia's data center revenue model, and I think it needs repeating. It came up on Nvidia's last earnings call, when an analyst asked Jensen why Nvidia doesn't distribute its own chips directly. Jensen said no: "we work directly through our OEMs/ODMs to fulfill our distribution and that's how it will always be".

Another analyst question was even more peculiar, because it showed the analyst doesn't realize how Nvidia's data center business model works. When major cloud providers, including Oracle, purchase Nvidia GPU hardware, they have two options for what to do with those GPUs.

  1. They use the hardware for their own compute needs and produce an output through an offering such as an LLM, gaming, or other accelerated compute services. An end product, if you will.
  2. They lease the GPUs out as actual hardware to enterprises and businesses, such as startups, that want to do their own accelerated compute. The offering here is called DGX Cloud.

The second delivery method is in fact recurring revenue in many cases. A startup doesn't have to worry about buying a datacenter and installing on-prem Nvidia hardware when it can just lease a node directly from any major cloud provider. And here's the thing: when using Nvidia hardware, you are purchasing the underlying DGX platform capabilities, including CUDA.

HPCWire, March 2023:

Renting the GPU company’s DGX Cloud, which is an all-inclusive AI supercomputer in the cloud, starts at $36,999 per instance for a month.
...

The DGX Cloud starting price is close to double that of $20,000 charged by Microsoft Azure for a fully-loaded A100 instance with 96 CPU cores, 900GB of storage and eight A100 GPUs per month.
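A quick back-of-the-envelope check on the pricing gap described above. Both monthly figures come from the HPCWire quote; the computed ratio is just illustrative arithmetic, not an official number:

```python
# Monthly instance pricing cited by HPCWire (March 2023)
dgx_cloud_monthly = 36_999   # Nvidia DGX Cloud, per instance per month
azure_a100_monthly = 20_000  # Azure 8x A100 instance (96 CPU cores, 900GB storage)

premium = dgx_cloud_monthly / azure_a100_monthly
print(f"DGX Cloud premium over Azure: {premium:.2f}x")  # ~1.85x, i.e. "close to double"
```

So the "close to double" framing checks out: customers pay roughly an 85% premium for the all-inclusive DGX platform, and that premium is the software-and-integration piece of Nvidia's recurring take.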

From the same article, circa 2023, look at how this exact sentiment hit home via today's Oracle earnings report!

Oracle is hosting DGX Cloud infrastructure in its RDMA Supercluster, which scales to 32,000 GPUs. Microsoft will launch DGX Cloud next quarter, with Google Cloud’s implementation coming after that.

Customers will have to pay a premium for the latest hardware, but the integration of software libraries and tools may appeal to enterprises and data scientists.

But Nvidia’s proprietary hardware and software is like using the Apple iPhone – you are getting the best hardware, but once you are locked in, it will be hard to get out, and it will cost a lot of money in its lifetime.

Mind you, all of this to date has been done on the H100 platform. We haven't even started into the Blackwell GB200 Superchip offering. Blackwell will literally amplify this model, probably 10X. Nvidia will offer complete DGX B200 SuperPOD systems and a much more powerful Superchip offering, the DGX GB200 SuperPOD, with configurations starting from $3 million or more depending on the setup and the number of GPUs included.

So while the Superchips themselves could cost $70,000 each, a fully-equipped server rack that is leased out will cost astronomically more per month than $36,999 per instance. The pricing hasn't been released, but it could be in the $500k to $1 million per month range [speculation here]. Remember, the leasing covers DGX Cloud and CUDA software licensing.
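To see how a range like that could arise, here is a rough scaling sketch. The only published input is the $36,999 DGX Cloud starting price for an 8-GPU instance; the 72-GPU rack size and the 2x generational premium are assumptions for illustration, matching the speculative tone above:

```python
# Hypothetical scaling of DGX Cloud pricing from an 8-GPU H100 instance
# to a 72-GPU GB200-class rack. Assumptions are speculative, not list prices.
h100_instance_monthly = 36_999          # published DGX Cloud starting price (8 GPUs)
per_gpu_monthly = h100_instance_monthly / 8

gb200_gpus_per_rack = 72                # GB200 NVL72-style rack configuration
gen_premium = 2.0                       # assumed generational price premium (speculative)

rack_monthly = per_gpu_monthly * gb200_gpus_per_rack * gen_premium
print(f"Speculative GB200 rack lease: ${rack_monthly:,.0f}/month")  # ~$666k/month
```

Even this crude linear scaling lands inside the $500k to $1 million speculative range, before accounting for networking, storage, and software bundling.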

Remember, Jensen has said they will be more inclined to sell the full server systems.

Another thing to consider about Nvidia's B200 is that the company may not really be inclined to sell B200 modules or cards. It may be much more inclined to sell DGX B200 [GB200] servers with eight Blackwell GPUs or even DGX B200 SuperPODs with 576 B200 [GB200] GPUs inside for millions of dollars each. 

To learn more about the Nvidia DGX SuperPOD with DGX GB200 systems, see https://www.nvidia.com/en-us/data-center/dgx-superpod-gb200/

If you're not familiar with what GB200 Superchips are, here are some key highlights.

  1. GB200: Combines both GPU and CPU into a single superchip (the Grace Blackwell superchip), featuring a 72-core ARM-based Grace CPU along with Blackwell GPUs.
  2. GB200: Features higher memory bandwidth and capacity, with up to 624GB of total memory, leveraging HBM3e technology and LPDDR5X memory.
  3. GB200: Provides higher aggregate performance due to its CPU-GPU integration, making it suitable for both AI and HPC workloads in more unified environments.
  4. GB200: Typically offered as part of high-performance systems like the SuperPOD, providing a seamless combination of both CPU and GPU resources to meet large-scale AI and data science needs.

Why is the "G" along with the "B" so important? Imagine: all of the revenue that Nvidia has generated TO DATE is solely from the H100, not even the H200/GH200 AI factory systems. lol, think about that. ALL OF THESE BILLIONS and BILLIONS OF DOLLARS have come via the H100 chip alone. The H200/GH200 just recently came out, so while customers are buying H200s, the GH200 SuperPOD server systems (the real platform) probably have not even begun to take hold, with a lot of anticipation for the more powerful GB200 systems.

So you see, when Jensen told that analyst NO, they won't deliver directly as a cloud vendor, it's because they don't have to. They are already delivering as a cloud provider via stronger contractual agreements, while allowing others to also profit and eat from the hardware purchase, which is exactly what Oracle reported today.

Others buy the hardware, and Nvidia reaps the benefit of that plus the platform instance leasing for the entire stack, including software, which will always be recurring revenue.

In this way, Nvidia won't have a hard landing, and in fact it will be one of the largest companies the world has ever seen (it already is). People just don't realize that Nvidia is a cloud company in its own way. It's just doing it in a way where everyone eats at its table. It's really amazing when you think about it.

There you have it folks, acres and acres of recurring revenue through DGX Cloud and CUDA software licensing. Nvidia already IS a Cloud Provider and a very good one at that.
