Forget the Bubble Talk: NVDA, MSFT, and GOOGL Are Playing Completely Different AI Games
There has been so much noise lately on X about AI circularity that it is almost becoming its own cottage industry. Every chart gets passed around as proof that the whole system is one giant loop: NVDA finances CRVW, CRVW sells compute to OAI, OAI signs a quarter-trillion-dollar RPO with MSFT, MSFT eats multi-billion-dollar equity losses on its OAI stake, NVDA reinvests back into the same ecosystem, and ORCL shows up with its own OAI-heavy RPO that looks more like a structured product than a cloud business. People stare at all of this and decide it must be a bubble.
I get the instinct, but it is the wrong conclusion. What is happening is not some fake-it-til-you-make-it capital merry-go-round. It is much more interesting, and honestly a lot more rational. The circularity is just the visible surface of a deeper strategic fight in which the major players are all trying to lock in power before the economics of AI shift. What makes the next phase even more fascinating is that each of the giants is trying to win an entirely different game.
NVDA is trying to entrench itself before inference slips away from GPUs. MSFT is trying to become the AI operating system, the place where agents live, where workflows happen, where distribution is controlled. GOOGL is setting itself up for the moment when inference gets cheap, small, and distributed, and when TCO economics start to matter more than who has the biggest model. META, for all the talk about efficiency, is trying to brute-force its way into relevance by pushing a billion people into AI-powered surfaces. And the neo-clouds, CRVW and ORCL, are essentially derivatives of OAI's revenue curve whether they admit it or not. You cannot understand the circularity without understanding what each company is really trying to become. And weirdly enough, the clearest window into that came not from an earnings call but from Satya's long conversation on a recent podcast.
When he starts talking about MSFT's AI architecture, you realize quickly that he is not thinking about AI as a product or even a platform. He is talking like someone building a national power grid. He describes petabit-scale networks stretched across regions and stitched together so that a failure zone in Utah can leak its workload to Arizona without missing a beat. He talks about building AI datacenters in a modular format so they can pivot between GPU generations without locking themselves into any one vendor's thermal, electrical, or interconnect constraints. He breaks down model parallelism and memory locality inside a datacenter. You can almost hear the frustration when he mentions that everything, from transformer architectures to inference graphs, changes so fast that you cannot optimize for the chip you think is coming next. He is clearly thinking about building the digital interstate of the next decade.
The part that really stands out is when he starts talking about agents. He does not use the usual assistant language. He talks about agents becoming actual users of MSFT products, not metaphors but literal usage. An agent that opens Excel, fetches a table, manipulates it, writes back to SharePoint, then triggers a Power Automate workflow. A world where these entities generate the same kind of telemetry, licensing, and workflow gravity that human users do. It is a straight line to MSFT owning the distribution layer of AI, the same way Windows owned the distribution of PC applications. You can suddenly see why MSFT is willing to take multi-billion-dollar equity losses on OAI every quarter. It is not about the model. It is about the operating system underneath the agents that use the model. This is the part the bubble callers completely miss. MSFT is not paying for OAI's revenue stream. It is paying for gravitational pull.
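To make the agent-as-user idea concrete, here is a minimal sketch of that Excel-to-SharePoint-to-Power-Automate loop, written against the Microsoft Graph workbook API. The endpoint shapes follow Graph's documented range API, but the token, the drive item ID, and the flow trigger URL are placeholders I am assuming for illustration, not a working integration:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<oauth-token-for-the-agent-identity>"  # the agent authenticates like any user
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Hypothetical IDs: a workbook stored in SharePoint/OneDrive, plus the URL of a
# Power Automate flow that uses an HTTP-request trigger.
ITEM_ID = "<drive-item-id-of-workbook>"
FLOW_URL = "<power-automate-http-trigger-url>"

def read_range(sheet: str, address: str) -> list[list]:
    """Fetch a cell range from an Excel workbook via the Graph workbook API."""
    url = f"{GRAPH}/me/drive/items/{ITEM_ID}/workbook/worksheets/{sheet}/range(address='{address}')"
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["values"]

def write_range(sheet: str, address: str, values: list[list]) -> None:
    """Write transformed values back to the same workbook (PATCH on the range)."""
    url = f"{GRAPH}/me/drive/items/{ITEM_ID}/workbook/worksheets/{sheet}/range(address='{address}')"
    requests.patch(url, headers=HEADERS, json={"values": values}).raise_for_status()

# The agent's "session": read, manipulate, write back, then kick off a workflow.
rows = read_range("Sheet1", "A1:C10")
totals = [[sum(x for x in row if isinstance(x, (int, float)))] for row in rows]
write_range("Sheet1", "D1:D10", totals)
requests.post(FLOW_URL, json={"status": "updated", "rows": len(rows)})
```

Every one of those calls would flow through the same audit, licensing, and telemetry pipelines a human user feeds, which is exactly the workflow gravity he is describing.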
Now compare that to GOOGL, because the contrast could not be sharper. One of the funny ironies of this whole cycle is that people keep declaring GOOGL behind in AI, when in reality GOOGL built exactly the infrastructure the next phase of AI needs. The recent earnings commentary basically screams the same thing. GOOGL's capex is exploding not because they are scrambling to catch up, but because they are aggressively scaling a vertical stack, TPU v5e and v5p, that is optimized for inference at a cost structure no GPU cluster can match. The market has been hyper-fixated on model size and training throughput. But the real money, the real long-term economic engine, is inference, and inference is heading toward smaller, domain-specific, low-precision workloads. If you believe even half the SLM research, the marginal AI query four years from now is going to run at the edge, not in a datacenter, and it is going to run on FP4, not FP16. That entire world plays into GOOGL's strengths. Training is flashy. Inference is profitable. And inference at scale rewards vertical silicon, not merchant GPUs. So while everyone else is still wrestling with GPU thermals and rack densities because they are tied to a merchant chip roadmap they do not control, GOOGL is building the world's only production-scale inference ASIC supply chain. The fact that GOOGL Cloud's backlog grew 82 percent year over year tells you how much of this demand is not speculative. It is contractually locked.
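The FP4-versus-FP16 point is easy to make concrete. At low batch sizes, LLM decoding is memory-bandwidth bound: every generated token streams roughly the full set of weights through memory. A toy calculation, with illustrative numbers I am assuming rather than quoting from any vendor:

```python
# Toy model: decode throughput for a memory-bandwidth-bound LLM.
# All numbers below are illustrative assumptions, not vendor specs.
params = 8e9        # an 8B-parameter model
bandwidth = 1.0e12  # 1 TB/s of usable memory bandwidth per accelerator

for name, bytes_per_param in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    bytes_per_token = params * bytes_per_param  # weights streamed per decoded token
    tokens_per_sec = bandwidth / bytes_per_token
    print(f"{name}: ~{tokens_per_sec:,.0f} tokens/s per accelerator")

# FP16: ~62 tokens/s, FP8: ~125, FP4: ~250 -- quantization alone roughly
# quadruples tokens served per dollar of hardware.
```

That 4x gap from precision alone, before any compute-side or architectural advantage, is the TCO bend the whole inference argument rests on.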
NVDA sees this. It is one reason their behavior looks increasingly aggressive. When they subsidize cloud providers through vendor financing, or guarantee to repurchase unsold CRVW capacity, which still feels surreal to type out, they are not doing it because they want to help. They are doing it because losing inference means giving up the largest-volume workload in AI. That is why they are investing in OAI contingent on OAI deploying 10 gigawatts of NVDA systems. It is why they are locking down the CUDA ecosystem harder. It is why they are renting CRVW capacity while also financing its expansion, practically blurring the line between supplier and customer.
Now here is the uncomfortable part. The entire infrastructure cycle is alarmingly tied to one company's monetization curve. It is not that enterprise AI demand is fake. It is definitely growing, and the hyperscaler results prove that clearly. AWS is stabilizing and re-accelerating, Azure's AI contribution is showing up in the numbers, and GOOGL Cloud is finally scaling the way people expected years ago. But downstream compute demand from OAI in particular is propping up entire sub-sectors of the AI economy.
Think about ORCL. Two-thirds of ORCL's cloud RPO is tied to OAI. That is not a customer. That is counterparty risk. CRVW has roughly 40 percent revenue exposure to OAI, and NVDA owns more than 5 percent of CRVW. And MSFT, which can absolutely afford the risk, is still taking multi-billion-dollar paper losses because its equity in OAI moves like a biotech option: high upside, wild volatility, zero near-term profits. The exposures are real, but they are manageable for the incumbents. The neo-clouds are the weak link. Their entire revenue base depends on OAI not just growing, but growing at a speed that justifies the datacenter buildouts NVDA financed for them. If OAI's monetization slips even slightly, the hyperscalers shrug. The neo-clouds get squeezed.
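The concentration math is worth spelling out. Using the exposure figures above as rough inputs, here is a sketch of how a slip in one anchor customer's spending growth propagates through a blended book; the growth rates themselves are hypothetical:

```python
def blended_growth(customer_share: float, customer_growth: float, rest_growth: float) -> float:
    """Overall revenue growth when one customer dominates the mix."""
    return customer_share * customer_growth + (1 - customer_share) * rest_growth

# Illustrative scenario: the anchor customer's spend growth slips from 100% to 40%
# while the rest of the book grows a steady 20%. Shares are the rough exposures above.
for name, share in [("hyperscaler-like (5% exposure)", 0.05),
                    ("CRVW-like (40% exposure)", 0.40),
                    ("ORCL-cloud-like (67% exposure)", 0.67)]:
    before = blended_growth(share, 1.00, 0.20)
    after = blended_growth(share, 0.40, 0.20)
    print(f"{name}: growth {before:.0%} -> {after:.0%}")

# 5% exposure barely notices the slip (24% -> 21%); at 67% exposure the
# growth story collapses (74% -> 33%).
```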
Layer on the rise of SLMs and the economics bend even further. SLMs pull the marginal inference call off the cloud and onto devices or low-power ASICs. Not enough to crush cloud growth, but enough to matter if your entire architecture depends on high utilization of GPU clusters. Every on-device SLM erodes the long tail of GPU inference demand. And the erosion is not evenly distributed. It hits the companies that depend most on GPU-centric inference architectures. This is exactly the scenario NVDA is trying to get ahead of. It is the scenario GOOGL is directly optimized for. It is the scenario MSFT will steer into with agents. It is the scenario META will try to monetize through usage scale. And it is the scenario that creates the most fragility for CRVW and ORCL's AI cloud ambitions.
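That utilization dependence is also quantifiable. A GPU cloud's costs are mostly fixed, depreciation, power commitments, debt service, so margin is a steep function of how busy the fleet stays. A toy model with unit economics I am assuming purely for illustration:

```python
# Toy unit economics for one rented GPU, per hour. Illustrative numbers only.
price_per_hour = 2.50       # what the cloud charges when the GPU is busy
fixed_cost_per_hour = 1.40  # depreciation + power + debt service, paid busy or idle

for utilization in (0.90, 0.75, 0.60, 0.50):
    revenue = price_per_hour * utilization
    margin = (revenue - fixed_cost_per_hour) / revenue
    print(f"utilization {utilization:.0%}: margin {margin:+.0%}")

# 90% -> +38%, 75% -> +25%, 60% -> +7%, 50% -> -12%: a modest shift of
# inference onto devices is enough to swing a leveraged GPU cloud from
# profitable to underwater.
```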
So when people say AI is in a bubble, what they are reacting to is the weirdness of seeing an industry build out infrastructure ahead of monetization. But that is what every platform transition looks like. The internet looked like that. Smartphones looked like that. Cloud looked like that. These systems require front-loaded capex and back-loaded returns. The real fragility is not the hype. It is the dependency. OAI is carrying a lot on its back right now. If monetization keeps scaling, this whole cycle will look brilliant in hindsight, an aggressive early investment push into a technology that ends up embedded everywhere. If monetization slows, the cracks show first at the edges, not at MSFT, GOOGL, AWS, or META, but in the clouds built entirely on one customer's shoulders.
Circularity is not the red flag. Dependency is. And the companies that understand this, especially GOOGL, AMZN, and MSFT, are the ones setting themselves up for the next phase, not reacting to it after it arrives.