- The race for capacity, fueled by AI’s ascension, could lead to more bartering agreements in the cloud.
- In a pair of deals announced last week, Oracle wove its cloud infrastructure even more into growing services from OpenAI and Google Cloud.
OUR TAKE
With technological advances and the rise of distributed computing platforms, enterprises can deploy each phase of an AI workload wherever it is best suited and most cost-effective. This strategy not only optimises resource use, but also reduces operating costs and improves overall efficiency, giving businesses flexibility and an edge in a competitive market.
–Revel Cheng, BTW reporter
OpenAI CEO Sam Altman said OCI would enable OpenAI to continue to scale, running some of its workloads on that infrastructure.
What happened
Last Tuesday, Oracle, Microsoft, and OpenAI announced a partnership that extends Microsoft’s Azure AI to Oracle Cloud Infrastructure (OCI) to give OpenAI more capacity. Oracle also announced a partnership with Google Cloud to combine OCI and Google Cloud technology to accelerate application modernisation and migrations.
According to the statement on the partnership with Google Cloud, the agreement is meant to offer, among other resources, Oracle’s database and applications in tandem with Google Cloud’s platform and AI.
Sid Nag, vice president in the technology and service provider group at Gartner, says Oracle’s partnership with Google, comparable to a prior arrangement with Microsoft Azure, will let workloads in the Google environment communicate with Oracle’s database if needed.
“It kind of promotes the whole notion of multicloud,” he says. “A lot of cloud providers talk about multicloud, but they don’t really put their money where their mouth is. This is an example where they are actually doing something about it.”
Also read: Oracle expects revenue growth for fiscal 2025 on strong AI demand
Also read: Microsoft restructures Azure team, lays off hundreds
Why it’s important
“AI is going to drive a massive demand for capacity,” Sid Nag says. “Large language models (LLMs) are going to balloon in size.”
Another driver of AI’s expansion across the cloud may be the desire to spread compute workloads across providers beyond the hyperscalers. “This is obviously a canny move for Oracle in terms of having some of that capacity in their cloud,” says Spencer Kimball, CEO of Cockroach Labs. “Definitely Oracle’s OCI is less expensive than the hyperscalers.”
There is also flexibility and optimisation potential for AI workloads in a distributed computing environment. While large enterprises may hold advantages in data processing and production, the training phase of an AI model does not necessarily require their large-scale computing resources; other cloud regions or specialised AI service providers may be able to meet model training needs more efficiently.






