Context: "Three ideas from linguistics that everyone in AI should know"
I do wonder if we fundamentally misrepresent our relationship to these systems of symbolic communication, treating as defects what may be irreducible functions of language itself. The persistent problems of LLMs with bias, truth values, and drift into effectively meaningless abstraction are as much intrinsic properties of linguistic communication, and of its downstream cultural and psychological reflexivity, as they are failures of the models; language orbits a hollow core.
It’s an unpopular position, and one that tends on many levels to alienate. But if linguistic systems maximally self-propagate through their sentient human transmission media precisely as a function of endemic uncertainty, ambiguity, semantic drift, and irreducible logical indeterminacy, we should not be surprised that these language models reflect the kernel inconsistency of a dissipative communications system.
Human sentience is able to intuitively leverage this endemic dissonance, but it remains a blind spot that mechanical, algorithmic aspirations to control are unlikely to isolate, identify, or reproduce. The enigma: the system through which we seek to mirror ourselves remains substantively unable to do so, as a function of the very core discontinuities by and as which it maximally self-replicates.
