Letters: Technological progress did not start with AI; it is merely the latest update for AI enthusiasts and professionals
Why treating artificial intelligence like a beginning rather than a milestone changes what companies should buy, build, and regulate.
A senior engineer leans back in a conference room and watches marketing decks try to make a chatbot sound like a platform shift rather than an incremental tool upgrade. Outside the window a factory still hums with robots that learned their choreography in 1990s control labs, not last month’s model release. The tension is between breathless newness and the stubbornly long arc of technological change.
Most coverage frames AI as epochal, a rupture that resets industry maps and talent markets overnight. The overlooked and more useful view is that AI is an accelerating layer on top of decades of general purpose technology progress, one that shapes who wins and how quickly the industry professionalizes, scales, and monetizes value for buyers and sellers.
Why the “everything changes today” story is seductive and wrong
A single viral demo, a spectacular funding round, or a dominant cloud provider can make a technology feel like destiny. That story sells headlines and investor memos. It obscures the fact that structural shifts require the infrastructure, standards, and slow institutional change that precede market-wide transformation.
That matters because companies that react as if AI were a sudden new era spend on shiny point solutions instead of upstream fundamentals such as data hygiene, compute contracts, and supply chain resilience. Those choices determine whether AI becomes a productivity multiplier or an expensive distraction.
How economists classify revolutions that actually move economies
Economists call technologies that reshape whole economies general purpose technologies. Electricity, the steam engine, and the microprocessor fit that description. Researchers have been applying the same lens to generative AI because its spillovers are broad and still unfolding. This framing helps explain why adoption curves can be long and why early winners are often infrastructure providers rather than flashy end user startups. (arxiv.org)
Why now feels louder than before
Three forces make AI feel immediate. First, the raw compute behind large models has exploded since 2012, growing faster than prior hardware trends and enabling capabilities that were previously theoretical. Second, a handful of hardware and cloud incumbents concentrated capacity and distribution channels. Third, venture and procurement dollars flowed rapidly into tooling, making the market look reconfigurable overnight. The compute trend is central to this story. (openai.com)
The infrastructure winners and the competitive landscape
Companies that sell chips, datacenter systems, and model tooling have captured the bulk of AI economics to date. Enterprise buyers increasingly rent models rather than train them from scratch, shifting margins toward infrastructure vendors and hyperscalers. Nvidia’s recent filings show how much of the industry’s revenue pool moved into data center AI sales, which reshapes where value accrues. (fintel.io)
The cost structure people forget to model
Moore’s Law is slowing and manufacturing at advanced nodes is increasingly expensive, which changes the math of custom silicon and multicore strategies. Higher fab costs and complex packaging mean engineering bets are larger and harder to reverse, benefiting firms that can amortize infrastructure at scale. That reality makes capital intensity a more important barrier to entry than the latest model architecture. (congress.gov)
The numbers that actually move boardrooms
Training compute trends followed an extraordinary exponential path for headline models, with per-run compute increasing rapidly over a decade. That pace explains why a small class of cloud customers and chip buyers dominate deployment decisions. Companies that assume every AI use case will be cheap to run at scale may be surprised by the underlying investments required. (openai.com)
AI is not a new genesis story for technology; it is an intensive update that plugs into century-old economic plumbing.
What this means in practice for businesses doing procurement
A realistic procurement scenario starts with compute and storage line items. A midmarket company choosing to fine tune a 100 billion parameter model should budget for multi-month GPU rentals, engineering time to build productionized pipelines, and ongoing inference costs that scale with user volume. Buy or rent decisions change with scale: buying bespoke hardware pays off only when utilization exceeds a high threshold; otherwise renting from a cloud vendor is cheaper and faster. A single mispriced GPU cluster can turn a project from viable to vaporware, so treat procurement as a commitment for the next three to five years, not this quarter's demo day.
For most firms the concrete choice is between paying 2 to 3 times more for control or accepting vendor lock in and predictable unit economics. That tradeoff is exactly the reason infrastructure companies keep widening moats while the rest of the market chases use cases.
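The buy-versus-rent threshold described above can be sketched as a simple break-even calculation. The dollar figures in the example are illustrative assumptions, not quotes from any vendor:

```python
def breakeven_utilization(purchase_cost, annual_opex, rent_rate_per_hour, years=3):
    """Utilization fraction above which owning hardware beats renting.

    All inputs are illustrative assumptions: purchase_cost and annual_opex
    describe the owned cluster; rent_rate_per_hour is the cloud price for
    equivalent capacity over the same horizon.
    """
    total_ownership_cost = purchase_cost + annual_opex * years
    rentable_hours = 24 * 365 * years
    return total_ownership_cost / (rent_rate_per_hour * rentable_hours)

# Example: a $300k GPU node with $50k/year in opex, versus $25/hour cloud
# rental, breaks even over three years only above roughly 68% utilization.
threshold = breakeven_utilization(300_000, 50_000, 25.0)
```

Below that utilization, renting wins on both cost and time to first deployment, which is why the high-threshold framing above pushes most midmarket buyers toward renting.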
The cost nobody is calculating in public decks
Marketing often omits the hidden line items of compliance, model monitoring, and model refresh cadence. These are recurring costs that compound as models age and regulatory scrutiny increases. Treating AI as a one-time software library purchase rather than as an ongoing product lifecycle expense is a mistake that surfaces later as an expensive surprise.
A useful quick calculation for a CFO is to compare annualized inference cost per active user against the expected revenue per user. If inference costs approach half of gross margin and model retraining is required every three months, the unit economics may not work unless pricing or automation changes.
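That back-of-envelope check can be written down directly. The thresholds and example figures below are hypothetical illustrations of the rule of thumb, not benchmarks:

```python
def ai_unit_economics_ok(revenue_per_user, gross_margin_rate,
                         inference_cost_per_user, annual_retrain_cost,
                         active_users, max_cost_share=0.5):
    """True if annual AI cost per user stays under a set share of gross margin.

    All figures are annualized and hypothetical; max_cost_share=0.5 encodes
    the rule of thumb that inference costs approaching half of gross margin
    are a red flag for the unit economics.
    """
    gross_margin_per_user = revenue_per_user * gross_margin_rate
    ai_cost_per_user = inference_cost_per_user + annual_retrain_cost / active_users
    return ai_cost_per_user <= max_cost_share * gross_margin_per_user

# Example: $120 revenue per user at 70% gross margin gives $84 of margin;
# $30 of inference plus a $200k retraining bill spread over 10,000 users
# totals $50 per user, which exceeds the $42 ceiling.
viable = ai_unit_economics_ok(120, 0.70, 30, 200_000, 10_000)
```

In this hypothetical the project fails the test, so pricing, automation, or retraining cadence would have to change before it could ship profitably.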
Risks that stress-test the “AI as update” claim
Concentration risk in compute supply can create geopolitical bottlenecks, and rapid hardware advances may trigger sudden shifts in pricing that destabilize business models. There is also the human risk: skills needed to operate at scale are scarce and expensive, which favors firms that can attract and retain specialized talent. Finally, regulatory changes around data and model safety could suddenly raise compliance costs to levels that reshape the competitive field.
A cynical aside is that when regulators move faster than product teams, press releases become a form of therapy.
Why small teams should watch this closely
Small teams benefit by focusing on composability and integrations rather than on reengineering core models. That strategy preserves optionality and lets them migrate to better infrastructure when prices or capabilities change. For founders, being adaptable to vendor APIs is less glamorous than inventing a proprietary stack, but more survivable.
A practical closing prescription for prudent leaders
Treat AI like an expensive platform upgrade: invest first in data, monitoring, and vendor relationships; delay heavy hardware bets until utilization justifies them; and price offerings to cover ongoing inference and governance costs.
Key Takeaways
- AI builds on decades of general purpose technology development and should be budgeted as a recurring platform investment.
- Compute growth enabled current model capabilities but also concentrated power in infrastructure providers.
- Semiconductor economics and manufacturing cost increases raise the bar for new hardware entrants.
- Practical success depends on data hygiene, monitoring, and realistic unit economics rather than chasing the newest model.
Frequently Asked Questions
How should a small business decide whether to build or buy an AI model?
Calculate projected user volume, inference cost per user, and engineering hours to maintain the model. If projected utilization is low and talent is scarce, renting a managed model is usually cheaper and faster to monetize.
What are the hidden ongoing costs of deploying AI in production?
Expect monitoring, retraining, data labeling, and governance to require continuous spending and personnel. These costs often exceed the initial proof of concept expense within 12 to 24 months.
Will new chips make existing AI strategies obsolete overnight?
New hardware improves economics but rarely invalidates software investments immediately because of integration and validation time. That means architecture choices should prioritize modularity and migratability.
Can an enterprise avoid vendor lock in while still using cloud AI services?
Yes, by designing data and model interfaces to be portable and using standard formats for inputs and outputs. Portability reduces switching cost but may require accepting some short term inefficiencies.
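One common way to get that portability in practice is a thin adapter layer, so application code depends on an interface rather than on any vendor SDK. The sketch below uses Python's typing.Protocol; all class and function names are hypothetical:

```python
from typing import Protocol


class TextModel(Protocol):
    """Portable interface: app code depends on this, not on a vendor SDK."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str: ...


class EchoStubModel:
    """Stand-in used for tests and local development.

    A real adapter would wrap one vendor's SDK behind the same method
    signature, so swapping vendors means swapping only this class.
    """

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return prompt[:max_tokens]


def summarize(model: TextModel, text: str) -> str:
    # Application logic calls the interface; it never imports a vendor SDK.
    return model.complete(f"Summarize: {text}", max_tokens=64)
```

The short-term inefficiency mentioned above is visible here: the adapter cannot expose every vendor-specific feature, which is the price of a low switching cost.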
Is AI growth likely to slow because of hardware limits?
Hardware limits change the shape of growth rather than stop it; software efficiency gains and new architectures continue to create room for improvement even as node scaling becomes harder.
Related Coverage
Readers interested in corporate AI strategy might explore stories about enterprise data governance, cloud procurement strategies, and semiconductor supply chain dynamics. Practical guides on model monitoring and cost optimization are also essential reading for teams moving from pilot to production.
SOURCES:
- https://openai.com/research/ai-and-compute
- https://mitsloan.mit.edu/ideas-made-to-matter/impact-generative-ai-a-general-purpose-technology
- https://fintel.io/doc/sec-nvidia-corp-1045810-10k-2024-february-21-19774-1729
- https://www.congress.gov/crs-product/R47508
- https://arxiv.org/abs/2106.04338