No-one knows how AI works

No-one knows how AI works. Yet the inability to explain how it works may not be a bad thing: isn’t this an indication that the algorithms are merely working correctly, and that the emergent properties and self-organising complexity of the AI are doing what we hoped they would? We hardly know how minds work. At base, and beyond the many diverse, partial theories and schools of thought on the topic, irreducible uncertainty and doubt about how minds actually function seems endemic to the study of the emergent properties of any brain at all.

That human beings are endlessly surprised by their own limitations in knowing how the world, or they themselves, work is fascinating. If the world should reveal itself to be fundamentally other than what it appears to be, we would of course all (collectively as well as individually) be blind-sided – insulted that our magnificent minds did not pre-emptively see this revelation in the pipeline. More fascinating still than our endless surprise at the facts of our own limitations is that we are surprised at all when we discover these parameters, and possibly essential conditions, of our sentient existence. Science and philosophy are iterative processes, and while this developmental growth may remain in some manner asymptotic and never quite attain its object of knowledge, this need not be seen as any kind of failure. Incompleteness is in some sense written into the world (and our experience of it) in a fundamental way.

Beyond this, it may be that we are surrounded by (or embedded in) brain-like entities or self-organising systems which possess just such a directedness, storage capacity, emergent complexity and massively networked, self-organising agility to learn and reflexively adapt to their environments – and which we do not fully understand. Cultural systems might themselves function in ways not dissimilar to brains, or for that matter minds: considered at a global, systemic scale, there would appear to exist in cultural systems some kind of identity or character, possibly intelligence, but certainly reflexive, emergent, adaptive and self-organising complexity.
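The point about complexity emerging from simple, fully specified parts can be made concrete with a toy model that is not from the original post: an elementary cellular automaton. Rule 110 (a standard, named update rule) is completely transparent at the level of its rule table, yet the patterns it produces are notoriously hard to predict – a minimal sketch of how knowing every local rule need not amount to understanding the global behaviour.

```python
# Illustrative sketch (not the author's example): Rule 110, an
# elementary cellular automaton whose eight-entry rule table is fully
# known, yet whose long-run behaviour is rich and hard to predict.

def rule110_step(cells):
    """Apply one synchronous step of Rule 110 on a ring of cells."""
    n = len(cells)
    # Lookup: (left, centre, right) neighbourhood -> next cell state.
    table = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def evolve(width=64, steps=32):
    """Run the automaton from a single live cell; return every row."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = rule110_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    # Print the space-time diagram: each line is one generation.
    for row in evolve(width=48, steps=16):
        print("".join("#" if c else "." for c in row))
```

Nothing here is hidden – the entire "physics" of the system fits in eight table entries – and still the only practical way to learn what the automaton does is to run it and watch. That gap between specification and behaviour is the toy analogue of the opacity discussed above.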

[Image: generated by a neural network.]

Boltzmann Brains – a product of a particular line of thinking about the evidence concerning entropy states and complexity in the Cosmos – are hypothetical, randomly occurring emergent intelligences which “will be lonely, disembodied brains, who fluctuate gradually out of the surrounding chaos and then gradually dissolve back into it.”

I am not suggesting that the mystifying character of complex, self-organising artificially intelligent systems is itself indicative of success in the fabrication of elementary sentience. But it is clear that there remains (to date) an irreducible blind spot in the analysis of consciousness and emergent intelligence – one which suggests that not knowing how these algorithmic, axiomatic systems function is not necessarily an entirely bad thing.

Context: The Dark Secret at the Heart of AI
“No one really knows how the most advanced algorithms do what they do. That could be a problem.”

2 thoughts on “No-one knows how AI works”

  1. The idea that cultures function in a way similar to a decentered neural network is one that is advocated for quite clearly in Susan Blackmore’s book “The Meme Machine”. The meme here would be a stand-in for the neural action potential, a smallest-necessary unit of meaning which proliferates in accordance with the total structure of the milieu surrounding it.

