A philosophical perspective: I accept that conscious machines are plausible, but I have trouble believing that the diverse algorithmic and networked approaches under development are anything more than sophisticated yet limited, essentially mechanical and incomplete axiomatic models.  It offers a certain reflexive psychological comfort to assert (and believe) that the hard problem of consciousness is explicable via a reductive algebra and a recombinatory taxonomy, scaled by ever-greater orders of magnitude in algorithmic (and therefore mechanical) complexity and processing power.

From a necessarily subjective viewpoint, consciousness presents a certain unity or gestalt, a combination of consistency and completeness that we have known (since Gödel, 1931) to be impossible within any sufficiently expressive axiomatic, essentially mechanical system.  My intuition is that the discontinuous and indefinitely extensible nature of material, logical, and information- or energy-processing systems provides a clue as to how to proceed.  A little like Alan Turing’s approach to the Halting Problem, there is probably some logical sense in which the absence of a bootstrapped systemic unity can be recursively threaded through itself.  Everything else, perhaps, must remain mere approximation.
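As an illustrative aside (my own sketch, not anything from the article): the self-referential move Turing used against the Halting Problem can be written out in a few lines. Assuming a hypothetical decider `halts(program, argument)` and a hypothetical `contrarian` program, feeding the program its own source yields the familiar contradiction, which is the kind of "threading through itself" I have in mind.

```python
def halts(program, argument):
    """Hypothetical perfect halting decider: True if program(argument)
    eventually stops, False if it runs forever. No such decider can
    actually exist; that impossibility is the point of the sketch."""
    raise NotImplementedError("Assumed for the sake of contradiction.")


def contrarian(program):
    """Do the opposite of whatever the decider predicts about program(program)."""
    if halts(program, program):
        # Predicted to halt -> loop forever instead.
        while True:
            pass
    else:
        # Predicted to loop forever -> halt immediately instead.
        return


# Applying contrarian to itself:
#   - if halts(contrarian, contrarian) returns True, contrarian(contrarian) loops forever;
#   - if it returns False, contrarian(contrarian) halts.
# Either way the decider is wrong about contrarian(contrarian), so no such
# halts() can exist. The system cannot fully account for itself from within.
```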

Context: Scientists are trying to build a conscious machine — here’s why it will never work

“I’m sorry Dave, I’m afraid I can’t do that.”
