AI is big. Huge. Consequential beyond anything we might previously have known was even on the technological radar of accelerating sociotechnical self-propagation. But it does not, and never could, provide teleological closure.
This is a reflection on foundational incompleteness and logical indeterminism.
I think it is instructive to remark that the aspirational trajectories we inhabit (as individuals, communities, nations and planetary civilisations) tend to be shaped and incentivised by often unacknowledged psychological and linguistic biases towards foundational control and closure. Such control and closure do not exist in nature, nor in the adaptive complexity of which both we and our environmental contexts are instances.
Why does this matter?
Because the missing link in Artificial General Intelligence, and in the plausible levelling-up of disembodied algorithmic sentience, lies not in the closure or brute-forced encapsulation of any reflexively subjective commercial endpoint, nor in what is now the lowest common denominator of statistical probabilities. It lies somewhere in the distributed information- and energy-processing (i.e. computational) self-gravitational field, in the warping, twisting vortices of endlessly extensible logical metamorphosis.
Some interesting work in this area is going on under the auspices of DARPA and will undoubtedly achieve interesting benchmarks, but my gut feeling is that they are abseiling down the wrong rabbit-hole. Common sense, whatever that word might actually parse and decompress out to, is a function of meaning, and meaning itself is a function of the non-locally distributed (and emergent) properties of language, information and energy-processing. This is best left fully undefined here, as the ontological tesseract of validating taxonomies on blogs can be a real time-waster, but some of the associated resources (here) are instructive, if not definitive, in this particular technological domain.