At the White House this year, OpenAI outlined its lofty ‘Stargate’ infrastructure project that would cost half a trillion US dollars and be developed with partners including SoftBank and Oracle.
Now, after some fits and starts, OpenAI executives say the joint venture is far more expansive than previously outlined and involves almost everything OpenAI does related to artificial intelligence chips and data centres.
Stargate was initially conceived as a new company that would invest US$500 billion (A$756 billion) in AI infrastructure. Now OpenAI executives say the parameters have expanded to include data centre projects launched months before Stargate was announced.
OpenAI, best known for its ChatGPT chatbot, argues that the AI revolution needs computing systems like Stargate to deliver on its immense promise.
OpenAI will pursue different creative financing options, some of which have only emerged within the last year, to secure chips for the data centres, the executives added.
Chief executive Sam Altman has repeatedly said that building data centres is the key to progress, writing in a blog this week that he eventually aimed to reach the point of adding a gigawatt of new AI infrastructure every week. Expanding chip availability was also the central idea behind the announcement of Stargate in January.

The initial vision for Stargate, however, ran into delays, executives said. Protracted negotiations with other parties and decisions on locations have bogged down the process, SoftBank’s chief financial officer Yoshimitsu Goto said last month.
The sweep of projects in OpenAI’s expanded vision all have the same goal: to help meet significant demand for its AI tools.
“We cannot fall behind in the need to put the infrastructure together to make this revolution happen,” Mr Altman said on Tuesday at a briefing with reporters, tech executives and politicians, including US Senator Ted Cruz and newly named Oracle co-CEO Clay Magouyrk. The briefing was held in Abilene, Texas, where OpenAI and its partners are rapidly building a massive data centre.
Despite widespread expectations that AI will fundamentally change the world, investors have voiced substantial concern about a potential bubble from building too quickly. Altman acknowledged those concerns while remaining optimistic.
“There will be a lot of short-term ups and downs, day to day, quarter to quarter, whatever,” he said. “You zoom out enough and the charts look like this,” he added, gesturing with his hands sloping upwards.
A new partnership of up to US$100 billion with Nvidia, announced on Monday, is part of the project. OpenAI plans to use an initial US$10 billion in cash from the chipmaker to help secure additional financing for the use of Nvidia’s products. OpenAI estimates that leasing chips instead of buying them could save the company 10-15 per cent, a person familiar with the matter said.
Executives familiar with Stargate said it would help OpenAI tap debt markets for future sites.
Stargate’s projects will not include some companies, including longtime backer Microsoft, executives said. OpenAI negotiated terms with Microsoft to enable working with multiple partners.
The industry’s lifeblood
Computational resources, or “compute” in industry parlance, are the lifeblood of the AI industry. OpenAI executives have said for years that the company is significantly short on compute required to power its services, especially ChatGPT, and develop new tools.
Just this week, OpenAI decided to delay launching a product outside the United States due to a lack of compute, people close to Stargate said. OpenAI would like to minimise such trade-offs quickly, they added.
On Tuesday, OpenAI, Oracle and SoftBank unveiled plans for five new US AI data centres for Stargate. These include three sites with Oracle, two affiliated with SoftBank and expansion of an Oracle site in Abilene, Texas.
Altogether, OpenAI’s projects account for nearly 7 gigawatts of the 10 gigawatts of compute initially envisioned for Stargate.
Abilene, dubbed the flagship Stargate project, has been under construction for more than a year by Oracle and AI startup Crusoe. The site spans 1,100 acres (445 hectares) and employs thousands of construction workers. Cranes and hydraulic platforms are spread across the campus, with some hoisting American and Texan flags. The facility also includes fibre cable long enough to stretch from the earth to the moon and back.
Stargate projects also include initiatives for OpenAI to build data centres on its own or with partners.
Debt financing and Nvidia backing
After announcing Stargate in January, OpenAI held hundreds of meetings across North America with potential partners that could provide land, power and other resources. “It was a flood of people,” one executive said.
The expanded Stargate plan now includes self-built data centres and third-party cloud capacity. The new Nvidia deal is part of this broader strategy that allows OpenAI to pay for its chips over time, rather than purchasing them outright.
Of the roughly US$50 billion estimated value of a new data centre, about US$15 billion covers land, buildings and standard equipment. Financing the GPU chips is more challenging due to shortages and uncertainty over the life of the chips in a fast-changing new industry.
Meta tapped US investment manager PIMCO and alternative asset manager Blue Owl Capital to lead a US$29 billion financing for its data centre expansion in rural Louisiana, Reuters reported earlier this month. This reflects a broader trend of massive data centres operated by cloud service providers, called hyperscalers, turning to outside financing to help cover the rising costs of building and powering centres for generative AI.
Companies rated below investment grade normally face higher costs when raising debt. Nvidia’s equity backing gives lenders confidence in OpenAI, executives said. OpenAI has not yet drawn on debt financing but plans to for future builds.
The buildout pace is limited by supply chain bottlenecks and GPU availability, executives familiar with Stargate said. Site configurations vary, and most workloads currently focus on reasoning tasks, so low-latency optimisation is less critical.
– Reuters