Is “does the AI understand meaning?” an inadvertently deceptive question? We are intent on chasing and catching butterflies with a jar made of butterflies! Yes, that may simply be the nature of any expansive philosophy of language, but the least we might do is acknowledge the core intractability of seeking explanatory closure in this manner.
Debates on machine intelligence and comprehension are fascinating, if ultimately (and arguably) unresolvable. Meaning being pretty much whatever it is defined as, we find ourselves trying to assert certainties about language, with language, while possessing nothing resembling an objective Archimedean point or inalienable anchor of semantic certainty.
We ask whether a neural network understands meaning, even while we ourselves seem quite content to use language without ever once fully understanding it. Endemic relational (as semantic) ambiguity and uncertainty in language are a core mechanism by and through which yet more language and communicative indeterminacy are invoked, assured, evolved.