
Artificial General Intelligence

Oh yes, but which documentation, which axioms and assumptions?

Context: Artificial general intelligence: Are we close, and does it even make sense to try?

It seems to me that there is an unbridgeable discontinuity here between a belief in the ascendant utility of more data processed faster and the (at least superficially) antithetical systemic self-containment and enigmatic ontological bootstrap of a sentience that I may just still be naive enough to intuitively assert as foundational to (any instance of) General Intelligence.

The accelerating technical systems and commercial cultures developing around AI have certainly identified a rich seam of constructive utility, and they successfully generate their own hyper-inflating weather systems of self-validation and insight. But is this because they have inadvertently unearthed a particular kind of useful adaptive, abstract complexity – as one among very, very many – or because they have unveiled the kernel that encapsulates and manifests in embodied intelligence as we know it?

Personally, I think there is something important missing here but that it is – quite counter-intuitively – the presence of a conspicuous logical absence and self-inflected antinomy that will get us across this final hurdle.

I see it as an emergent, bootstrapped General Intelligence that is not “in” systems but is “of” them; a subtle conceptual sidestep.
