Context: “Can computers understand complex words and concepts? Yes, according to research.”
Is understanding a “simple” matter of inhabiting language and adaptively observing or shuffling its relational properties?
This raises the question of whether meaning is only the suspended tension wire of inductive resonance across a logical frame of gestalt linguistic self-reference, without anchor or endpoint in any self-validating substance or reflexive subjectivity.
Why does this matter?
Because if meaning (and therefore understanding) is a function of purely relative (or relativistic?) frames of reference, each with only Other, similarly free-floating frames of reference upon which to ground itself, then there is NO ultimate meaning, NO understanding, and NO sentience to boot.
There’s the rub: as soon as we reach consensus that plausibly sentient entities inhabit language in these still relatively deterministic ways (complex algebraic dimensionality notwithstanding), we have an explanation for language understanding that excludes the need for sentience.
If we prove sentience in machines, we abstract it from ourselves (as aspirationally meaningful entities).
The assertion of sentience in (or as) logic has a way, by tautology, of extinguishing the subjectivity of those who assert it.