Artificial intelligence is widely proclaimed as a generator of new wealth. The suggestion is that models trained on large datasets, coupled with computational scale, will deliver prosperity by sheer force of information processing. Yet this claim collapses under even modest scrutiny. Wealth cannot be summoned into being by wordplay, nor does it arise automatically from predictive outputs.
Wealth has always meant durable command over real capacities: energy, materials, skilled labour, ecological stability, and institutional trust. Paper valuations, speculative capitalisation, or inflated expectations are not equivalent. Economists have long noted that wealth must be net of externalised costs—pollution, displacement, and resource depletion—if it is to be meaningful (Kapp, 1950; Georgescu-Roegen, 1971; Daly, 1996). Artificial intelligence, by this standard, cannot conjure wealth from tokens alone.
It is true that AI can raise productivity in narrow tasks. Evidence suggests that customer support, translation, and document drafting can be accelerated by generative models, with larger relative gains for less experienced workers (Brynjolfsson, Li and Raymond, 2025). But an increase in task-level output does not automatically propagate into system-wide wealth. As Bresnahan and Trajtenberg (1995) showed, general-purpose technologies only deliver large gains when complemented by costly investments, reorganisation, and institutional adaptation. Declaring “AI creates wealth” without these complements is not analysis but incantation.
Energy and material constraints sharpen this point. Data centres are already driving a surge in electricity demand worldwide (IEA, 2025; EIA, 2025). Whatever symbolic value AI systems generate must be set against the physical costs of computation: vast power requirements, hardware production chains, and cooling infrastructures. When measured properly, some of the supposed “wealth” is simply a cost shifted onto overstressed grids, supply chains, and ecological systems.
Nor are distributional consequences incidental. As Piketty (2014) and Milanovic (2016) demonstrate, capital income tends to outpace general growth, concentrating returns at the top while hollowing the middle and neglecting the base. “AI wealth,” absent corrective institutions, flows into the hands of capital holders and platform operators. At the same time, deprivation grows more visible. In the United States, official counts in 2023 recorded more than 650,000 people experiencing homelessness, the highest figure since national point-in-time counts began in 2007 (HUD, 2023). This juxtaposition is not accidental. Wealth and poverty are structurally linked. The value of extreme wealth depends upon relative deprivation, much as Hirsch (1976) and Veblen (1899) observed of positional goods and conspicuous consumption. To sustain its status, wealth must have poverty as its foil.
This is the unity of the system: wealth and poverty are not opposites but co-dependencies. To pretend that AI breaks this relation is to ignore the fundamentals of economic and social life. The rhetoric of AI-generated wealth obscures that what appears as gain is often redistribution, displacement, or speculation.
Norbert Wiener’s (1948) cybernetics reminds us that control and communication are achieved through feedback, not fiat. W. Ross Ashby (1956) demonstrated that any regulator must match or exceed the variety of the system it seeks to govern. Yet those directing AI platforms claim mastery of complexity without possessing such variety. Their models cannot absorb the unpredictability of the world into which they are deployed. To call the results “wealth” is to mistake the map for the territory, signals for substance.
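Ashby’s law of requisite variety can be stated compactly. In a standard entropy formulation (a sketch following Ashby’s own treatment, with symbols introduced here for illustration), let $H(D)$ be the variety of disturbances a system faces, $H(R)$ the variety of responses available to its regulator, and $H(O)$ the residual variety of outcomes. Then:

```latex
% Law of Requisite Variety (Ashby, 1956), entropy form.
% H(D): variety (log-count of distinguishable states) of disturbances
% H(R): variety of the regulator's available responses
% H(O): residual variety of outcomes after regulation
H(O) \;\geq\; H(D) - H(R)
```

Outcome variety can be reduced only by matching it with regulatory variety: a regulator poorer in states than the disturbances it faces cannot, even in principle, hold outcomes within bounds. This is the formal sense in which platform operators who lack the variety of the world they deploy into cannot claim mastery of it.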
The logical conclusion is simple. If AI expands capacities net of energy costs, funds its complements, reduces rather than exacerbates scarcity, and meets the demands of regulation and feedback, then it produces wealth. If not, it produces only appearances: paper gains, speculative bubbles, or intensified inequality. At present, the balance lies squarely with the latter.
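The test stated above can be written as a rough accounting condition (an illustrative sketch, with all symbols introduced here rather than drawn from any cited model):

```latex
% Illustrative net-wealth condition for AI deployment.
% \Delta Y: gross task-level productivity gains
% C_E: energy and material costs of computation
% C_K: required complementary investment (reorganisation, skills, institutions)
% X:   externalised costs (pollution, displacement, depletion)
\Delta W \;=\; \Delta Y - C_E - C_K - X \;>\; 0
```

Only when $\Delta W$ is positive, with every cost term counted rather than shifted off the ledger, does the claim “AI creates wealth” survive the standard set out here.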
References
Ashby, W.R. (1956) An Introduction to Cybernetics. London: Chapman & Hall.
Bresnahan, T.F. and Trajtenberg, M. (1995) ‘General purpose technologies “Engines of growth”?’, Journal of Econometrics, 65(1), pp. 83–108.
Brynjolfsson, E., Li, D. and Raymond, L. (2025) ‘Generative AI at Work’, The Quarterly Journal of Economics, 140(2), pp. 889–942.
Daly, H.E. (1996) Beyond Growth: The Economics of Sustainable Development. Boston, MA: Beacon Press.
Georgescu-Roegen, N. (1971) The Entropy Law and the Economic Process. Cambridge, MA: Harvard University Press.
Hirsch, F. (1976) Social Limits to Growth. Cambridge, MA: Harvard University Press.
International Energy Agency (IEA) (2025) AI is set to drive surging electricity demand from data centres while offering the potential to transform how the energy sector works. Paris: IEA.
Kapp, K.W. (1950) The Social Costs of Private Enterprise. Cambridge, MA: Harvard University Press.
Milanovic, B. (2016) Global Inequality: A New Approach for the Age of Globalization. Cambridge, MA: Harvard University Press.
Piketty, T. (2014) Capital in the Twenty-First Century. Cambridge, MA: Harvard University Press.
U.S. Department of Housing and Urban Development (HUD) (2023) Annual Homeless Assessment Report (AHAR), Part 1: Point-in-Time Estimates of Homelessness. Washington, DC: HUD.
Veblen, T. (1899) The Theory of the Leisure Class. New York: Macmillan.
Wiener, N. (1948) Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.