ORIGINATION: CoreWeave VP discusses preference for grid power, expansion of asset-light strategy through NVIDIA partnership

This story was originally published exclusively for NPM subscribers.


CoreWeave prefers to build data centers connected to the electric grid rather than relying primarily on behind-the-meter generation, a strategy executives say improves reliability and efficiency for large AI computing facilities.

Speaking at the Cantor Fitzgerald Global Technology & Industrial Growth Conference, Nicholas Robbins, the company’s vice president of corporate development, said grid connectivity provides stronger uptime and reduces the need for redundant power infrastructure.

Behind-the-meter setups often require building far more generation capacity than the computing load itself to ensure redundancy, he said.

“If you’re building exclusively behind the meter, you might need to build 180 to 200 MW of power just to deliver 100 MW of usable capacity,” Robbins said, explaining that grid-connected facilities typically only need to solve for backup generation instead.
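The sizing arithmetic Robbins describes can be sketched as follows. The ratios here are illustrative assumptions drawn from his 180–200 MW example, not CoreWeave figures:

```python
# Illustrative sketch of the redundancy sizing Robbins describes.
# The redundancy_factor and backup_fraction values are assumptions
# for illustration, not CoreWeave or industry-standard figures.

def behind_the_meter_generation(load_mw: float, redundancy_factor: float = 1.9) -> float:
    """Behind the meter, on-site generation must exceed the compute load
    so the facility stays up when units are offline for maintenance or
    failure; Robbins cited roughly 1.8x-2.0x the usable load."""
    return load_mw * redundancy_factor

def grid_connected_backup(load_mw: float, backup_fraction: float = 1.0) -> float:
    """Grid-connected, the utility carries primary supply, so the site
    only needs to size backup generation against the load itself."""
    return load_mw * backup_fraction

load = 100.0  # MW of usable compute capacity
print(behind_the_meter_generation(load))  # 190.0 -- built to deliver 100 MW
print(grid_connected_backup(load))        # 100.0 -- backup only, grid as primary
```

Under these assumed ratios, a behind-the-meter site builds nearly twice the generation it can sell as compute, while the grid-connected site sizes only its backup against the load.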

He also argued that the power bottleneck facing data centers is often misunderstood. In many regions, the issue is not a lack of electricity but rather the infrastructure needed to deliver it.

The constraint “is not that there aren’t enough electrons in the grid,” Robbins said, but rather shortages of transmission lines, transformers, and other equipment required to deliver power to data center campuses.

The comments come as CoreWeave rapidly expands its global AI infrastructure footprint to meet surging demand from artificial intelligence developers.

NVIDIA deal

Robbins also highlighted the company’s evolving relationship with NVIDIA, which extends beyond chip supply into a broader infrastructure partnership.

Under the recently announced arrangement, NVIDIA plans to take CoreWeave’s reference architecture and cloud software and validate them for wider use across NVIDIA’s cloud partner ecosystem and sovereign AI customers.

That approach could allow CoreWeave to monetize its software and operational expertise even when it does not own the underlying infrastructure.

“NVIDIA’s intention is to take our reference architecture and software, validate them and ultimately make them available to other NVIDIA Cloud Partner and sovereign customers,” Robbins said, describing it as “the asset-light monetization of CoreWeave cloud infrastructure in other people’s data centers where we can make money on other people’s GPUs.”

The partnership also includes collaboration on securing land, power and data center shells to accelerate new deployments.

Robbins said demand for AI computing capacity remains extremely strong, noting the company added nearly 2 GW of infrastructure capacity last year alone.

CoreWeave’s expansion is increasingly global as well. The company has entered markets including Canada, the United Kingdom, Spain, Norway and Sweden, largely at the request of existing customers seeking additional compute capacity outside the United States.

That international growth is tied to the firm’s broader asset-light strategy, which includes licensing its cloud stack and operating software to partners abroad, enabling CoreWeave to expand its footprint without necessarily owning every data center facility.

Robbins said the model allows the company to scale alongside customer demand while maintaining a disciplined approach to infrastructure investment.
