In recent years, large language models have moved along a gradient from research artefact to everyday infrastructure: search, email, design tools, call centres, legal drafting, medical triage. They operate by predicting the next token in a sequence, trained on vast corpora of text and code. Their fluency comes from compression, not comprehension. They do not possess semantics. And semantics is quicksilver: slippery, generative, resistant to containment, as explored in Language Leads. They model how signs follow signs, not how meaning binds to things. This is why a triage chatbot can escalate a patient because of the tone of their phrasing rather than the facts, or a legal assistant can draft a memo citing a case that never existed, with the same untroubled composure. Hallucinated references, smooth incoherence, context slip, semantic inversion, and rupture are called “errors,” treated as technical faults to be engineered away, even though they echo the structural enigmas discussed in Mirror, Mirror: Enigmas in Language. Institutions respond as if language were now just another optimisation problem. Policy, funding, education, and corporate practice reorganise around a narrowed image of intelligence that prizes regular output and speed, while demoting ambiguity and lived reference to background noise.
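To make the mechanism concrete, here is a minimal sketch of a single decoding step, assuming a toy six-word vocabulary and hand-picked logits in place of a trained network; only the selection mechanics, not the scale, match a real model:

```python
import numpy as np

# Toy vocabulary and hypothetical scores standing in for a trained model's
# output at one step; a real LLM produces these from billions of learned
# parameters, but the commitment to a next sign works the same way.
vocab = ["the", "patient", "case", "tone", "escalate", "<eos>"]
logits = np.array([1.2, 2.8, 0.4, 2.5, 1.9, -0.3])

def softmax(x):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

probs = softmax(logits)

# Sample the next token from the distribution: the model commits to
# whichever sign the scores favour, with no check that the choice
# binds to anything in the world.
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Nothing in this loop consults the world; the only question it can ask is which sign is likely to follow, which is exactly the gap through which a confident but non-existent citation arrives.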
But these are not simply machine failures. They expose what language already is: a regime of symbolic turbulence, endemic, distributed, entropic across the entire surface of its field. Ambiguity is constitutive. Coherence is local and temporary, produced against a background of drift it never overcomes. Stability does not arise by eliminating contradiction, but by cycling it—differences oscillating across time and frequency, generating the manifold they never resolve. What appears as malfunction is often a magnification of language’s native condition: semantic diffusion, contextual bleed, meaning in constant phase-shift, in line with the analysis in Semantics Follows Frequency: Language in the Spectral Domain. This is not a flaw imported by machines. It is a property made visible through automation. Like two radio stations slightly out of tune, meaning does not hold because one signal wins, but because both persist, interfering, never quite aligning, yet sustaining the field through their unresolved tension. (See: The Physics of Nothing: How Missing Information Holds Systems Together.)
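The radio metaphor can be run as a literal calculation. A minimal sketch, assuming two sinusoids 1 Hz apart as stand-ins for the detuned stations: their superposition never resolves to either tone, but swells and fades at the beat frequency, sustained by the very mismatch that keeps the signals from aligning.

```python
import numpy as np

# Two "stations" slightly out of tune; 440 Hz and 441 Hz are arbitrary
# illustrative choices, as is the two-second window.
f1, f2 = 440.0, 441.0
t = np.linspace(0.0, 2.0, 16000, endpoint=False)

# Neither signal wins: the field is their unresolved sum.
field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Standard identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2),
# so the sum oscillates near the mean frequency (f1+f2)/2 inside an
# envelope that cycles at the difference |f1 - f2|: the beat.
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)
assert np.all(np.abs(field) <= np.abs(envelope) + 1e-9)

beat = abs(f1 - f2)
print(f"beat frequency: {beat:.1f} Hz, one full swell every {1 / beat:.1f} s")
```

The coherence here is exactly the kind described above: local, temporary, and produced by the interference rather than despite it.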