The Loop Nobody Designed But Everyone Is Paying For

The product layer is camouflage

People like talking about AI as if it is a product category.

Smarter search. A better assistant. Faster autocomplete.

That is the layer you are supposed to see.

But if you follow the economics of frontier AI and the politics of military spending, another picture appears.

The consumer story is shaky. The enterprise story is still mostly experimental. The infrastructure bill is enormous. And when commercial demand cannot justify that pace of investment, the most attractive buyer left is not the average user or even the average company.

It is the state.

More specifically, the national security state.

If you read my earlier piece on how consumer AI money flows upstream, this is the next layer of the same story. The loop does not end at the chatbot. It keeps going until it reaches budgets, procurement, and war.

The commercial math is still ugly

Start with the basic numbers.

OpenAI projected roughly $5 billion in losses on $3.7 billion in revenue for 2024.[1] By the first half of 2025, The Information reported that OpenAI had generated about $4.3 billion in revenue while posting a $7.8 billion operating loss.[2]

Anthropic told investors it expected to burn about $3 billion in 2025.[3]

Meanwhile, the companies building the underlying infrastructure started borrowing like a sector that had already found its monetization story. A Mellon analysis estimated that hyperscalers issued roughly $121 billion in new debt in 2025, including about $30 billion from Meta, $25 billion from Alphabet, and $18 billion from Oracle.[4][5][6][7]

That would be less alarming if the revenue horizon were equally large. It is not.

Bain estimated that sustaining the current AI buildout by 2030 would require around $2 trillion in new annual revenue.[8] Gartner put worldwide public cloud spending in 2024 at $678.8 billion.[9]
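A quick back-of-envelope sketch makes the mismatch concrete. It uses only the two headline figures above, and it treats Gartner's public cloud number as a rough proxy for the revenue base AI would have to grow from, which is a simplifying assumption rather than an equivalence:

```python
# Back-of-envelope arithmetic on the two cited figures (billions of USD).
BAIN_REQUIRED_ANNUAL_REVENUE = 2_000.0  # Bain: ~$2T/yr in new revenue by 2030
GARTNER_PUBLIC_CLOUD_2024 = 678.8       # Gartner: worldwide public cloud, 2024

gap = BAIN_REQUIRED_ANNUAL_REVENUE - GARTNER_PUBLIC_CLOUD_2024
multiple = BAIN_REQUIRED_ANNUAL_REVENUE / GARTNER_PUBLIC_CLOUD_2024

# Prints: Gap: ~$1,321B per year (2.9x all 2024 public cloud spend)
print(f"Gap: ~${gap:,.0f}B per year ({multiple:.1f}x all 2024 public cloud spend)")
```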

That gap matters.

It tells you this is not just a story about product adoption. It is a story about finding someone, somewhere, willing to absorb a level of spending the normal software market cannot currently justify.

When the market hesitates, the state steps in

This is where defense enters the picture.

The U.S. administration’s FY2026 national defense budget request topped $1 trillion for the first time.[10] At the June 26, 2025 briefing that accompanied the release, Pentagon officials said the department was, for the first time, breaking out autonomy and autonomous systems as their own $13.4 billion budget line.[11]

A year and a half earlier, in January 2024, OpenAI removed explicit language banning “military and warfare” use from its policies.[12]

Then the partnerships started moving into the open.

In November 2024, Anthropic partnered with Palantir and AWS to make Claude available to U.S. intelligence and defense agencies.[13] In December 2024, OpenAI partnered with Anduril on counter-drone systems for national security missions.[14]

By July 14, 2025, the Pentagon had awarded contracts with ceilings of up to $200 million each to Anthropic, Google, OpenAI, and xAI for military AI work.[15]

So no, this is not some speculative future where AI companies might flirt with defense.

That line has already been crossed.

War is the cleanest product demo

Wars do something normal markets cannot.

They suspend price sensitivity. They accelerate procurement. They turn “move carefully” into “deploy now.”

And they provide the one thing military AI vendors desperately need: live operational proof.

Ukraine has become one version of that proof. In March 2026, Ukraine opened its battlefield data to partners so that AI models for unmanned systems could be trained on real combat conditions.[16] This is not a lab benchmark. It is a wartime dataset.

Gaza is another.

A 2024 investigation by +972 Magazine and Local Call reported that the Israeli military used a system called Lavender that marked as many as 37,000 Palestinians as potential targets, with human review often reduced to a narrow procedural check.[17]

That is what matters here. The debate over autonomous and AI-assisted warfare is happening after deployment has already begun.

In December 2024, the U.N. General Assembly adopted a resolution on lethal autonomous weapons systems by a vote of 166 in favor and 3 against.[18] The resolution is non-binding. There is still no treaty with teeth behind it.

So the sequence looks like this:

Technology companies need revenue. Defense bureaucracies need speed and strategic advantage. Active conflicts provide urgency, funding, and legitimacy.

And once systems are fielded, they become harder to challenge because they are no longer proposals. They are doctrine.

This does not require a conspiracy

This part matters, because it is easy to make a sloppy version of this argument.

I am not saying there is a room where tech executives, generals, investors, and politicians sit around a table and plan the century.

Power usually works in a less cinematic way.

It works through overlapping incentives.

A venture-funded AI lab needs enormous capital expenditure to make sense. A defense contractor needs better software, autonomy, and data infrastructure. A government wants to avoid looking slow against China, Russia, or whoever is next. Large investors benefit when both sides of that transaction grow.

That system does not need perfect coordination. It only needs alignment.

That is why “follow the money” can sound conspiratorial when the real problem is more banal and, in some ways, worse. Nobody has to control the whole machine for the machine to keep rewarding the same outcome.

Who actually pays for the loop

The costs do not stay at the top of the stack.

Your subscription helps validate the growth story. Your taxes finance military procurement. Your prompts, documents, and interactions improve the systems being commercialized and licensed. Your labor absorbs the transition as companies cut elsewhere to keep AI capex politically and financially defensible.

And if the buildout disappoints, the losses do not vanish. They spill outward into public markets, pension exposure, sovereign debt, and government backstops.

That is what makes the MIT NANDA finding so unsettling. In 2025, MIT researchers reported that roughly 95% of enterprise generative AI implementations had no measurable impact on profit and loss.[19] Yet U.S. AI startups still raised $104.3 billion in the first half of 2025 alone, nearly matching the full-year 2024 total, according to PitchBook data reported by CNBC.[20]
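One crude way to feel the tension between those two numbers is to put them side by side. This is a toy juxtaposition, not a claim either source makes: the 95% figure describes enterprise implementations, not the startups raising the capital, so mapping one onto the other is purely illustrative:

```python
# Toy juxtaposition of the two cited figures. The MIT failure rate applies to
# enterprise GenAI implementations; applying it to venture dollars is an
# illustrative assumption, not something either source claims.
H1_2025_AI_VC_FUNDING = 104.3  # PitchBook via CNBC, billions of USD
NO_PNL_IMPACT_RATE = 0.95      # MIT NANDA: ~95% show no measurable P&L impact

implied_at_risk = H1_2025_AI_VC_FUNDING * NO_PNL_IMPACT_RATE
# Prints: ~$99B of one half-year's funding chasing unproven P&L impact
print(f"~${implied_at_risk:.0f}B of one half-year's funding chasing unproven P&L impact")
```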

At every level, the pattern is the same.

Risk is socialized. Upside is concentrated. Complexity hides the transfer.

The strongest version of the counterargument

There is a serious counterargument here, and it deserves a fair hearing.

Military funding has produced civilian spillovers before. DARPA helped fund the internet. GPS began as military infrastructure. Governments do absorb research risk that private markets often will not.

There is also a real geopolitical argument. If advanced AI capabilities affect cyber operations, targeting, logistics, intelligence, and autonomy, then states are not irrational for treating them as strategic assets.

So the honest version of my argument is not that all military AI spending is fake, or that every defense use case is illegitimate.

It is more specific than that.

When frontier AI remains structurally unprofitable, when capital requirements keep growing, and when geopolitics offers a moral and political rationale for unlimited spending, a self-reinforcing loop forms.

The technology justifies the budget. The budget justifies the technology. The conflict justifies the speed. The speed weakens the possibility of restraint.

Nobody fully designed that loop. Nobody fully controls it. But everyone below it funds it.

Why it matters now

These systems are being normalized before the public has really decided what the boundary should be.

Once autonomous and AI-assisted systems are embedded in procurement pipelines, command software, intelligence workflows, and military doctrine, they stop feeling exceptional. They start feeling inevitable.

That is the real danger.

Not that one company is evil. Not that one government is uniquely reckless. But that an entire stack of incentives is drifting toward the same destination while most people still encounter it as a shiny app icon on a phone.

Complexity is one of power’s best disguises.

It lets the bond issuance disappear behind the product launch. It lets the procurement contract disappear behind the chatbot. It lets the battlefield disappear behind the phrase “innovation.”

That is why this question has to be asked now, while the system still looks reversible from the outside.

Because once it no longer looks like a choice, it usually is not.

References

1. Axios - Why OpenAI burns through billions
2. MarketWatch - OpenAI is AI’s leading indicator. Does that make it too big to fail?
3. Reuters - Anthropic sees revenue potentially soaring to $34.5 billion in 2027, The Information reports
4. Mellon - Hyper-Scalers Hit the Bond Market to Finance AI Growth
5. Reuters - Meta to raise $30 billion in its biggest bond sale as AI expansion costs rack up
6. Wealth Professional - Alphabet raises $25B in bonds to fund AI and cloud infrastructure
7. Reuters - Oracle looks to raise $18 billion in debt
8. Bain & Company - Can AI Live Up to the Hype? Silicon Valley Has a Trillion-Dollar Problem
9. Gartner - Gartner Forecasts Worldwide Public Cloud End-User Spending to Reach $679 Billion in 2024
10. U.S. Department of Defense - Department of Defense Releases the President’s Fiscal Year 2026 Defense Budget
11. U.S. Department of Defense - Fiscal Year 2026 Department of Defense Budget Press Briefing (transcript)
12. CNBC - OpenAI deletes ban on using ChatGPT for military and warfare
13. Anthropic - Anthropic partners with Palantir and AWS to provide Claude models for U.S. intelligence and defense agencies
14. OpenAI - OpenAI and Anduril partner to deploy advanced AI for national security missions
15. U.S. Department of Defense - DOD Announces $200M Awards to Leading Frontier AI Companies to Accelerate Department of Defense AI Adoption
16. Reuters - Ukraine opens battlefield data access to allies’ AI models
17. +972 Magazine - ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
18. United Nations Digital Library - A/RES/79/62, Lethal autonomous weapons systems
19. MIT Initiative on the Digital Economy - The GenAI Divide: State of AI in Business 2025
20. CNBC - AI startups raised $104 billion in the first half of 2025 and nearly matched all of 2024


About Nischal Skanda

Nischal is a technology enthusiast and designer passionate about the intersection of AI, cognitive science, and human-computer interaction. He explores how emerging technologies impact our daily lives and shares insights on building better digital experiences.