The technologies of Artificial Intelligence have had stunning success through a hyper-inflating speciation of statistical, logical and inferential complexity: the savant-like Narrow AI that Gary Marcus references. Recognising the need for an ontological bootstrap, and the conspicuous absence of “common sense”, is an optimistic step towards General (and at least minimally sentient) machine intelligence; yet this context is problematised by prevailing institutional and commercial cultures as much as by the normative mechanisms of hyper-inflating logical depth through which leverage is sought.
An effectively unwinnable race, unbounded, unresolvable and asymptotic, towards closure in quantitative SOTA metrics and benchmarks shapes conceptual vocabularies and technical expectations in ways that bias and self-validate salient artefacts, entities and systems. That is, the ascendant language and logic is oriented towards a teleological certainty and closure that can never perceive or define the General Intelligence it seeks; the field has been built upon this kernel blindspot quite axiomatically and foundationally, as though wrapped around and infused or compelled by it.
The self-opening door of logical self-containment is very likely already wide open, if only we might recognise it.
Context: The biggest problem in AI? Machines have no common sense.