AI Digital Philosophy Psychology

Artificial Intelligence: Simulating a “Theory of Mind”

What does an attempt to generate an artificially intelligent Theory of Mind (the attribution of intentions and experiences to other entities, and the inference of their probable future actions) reveal about our own existential circumstances and the fundamental uncertainties of our own experience?

Context: Artificial intelligence has learned to probe the minds of other computers

The short of it:

°° Attempts are being made, with some limited success, to allow Artificially Intelligent Machine Learning systems to possess, cultivate, inhabit (pick a verb) a “Theory of Mind”. °°

◇◆□■○●▪{ ◇◆□■○●▪[◇◆□■○●▪▪●○■□◆◇]▪●○■□◆◇ }▪●○■□◆◇

The somewhat longer of it:

◆ A demonstrated set of behaviours designed to mimic a theory of mind is, at base, only a set of behaviours designed to mimic a theory of mind. If an AI is cultivated upon the probabilistic machine-learning paths which lead it to be able to predict behaviour probabilistically, it is able to extract useful inference from observed or measured data but is not really anywhere near placing itself into another person’s or machine’s frame of reference, nor grasping what a “Theory of Mind” should really have to mean: the ability to project one’s self into another’s context as a gestalt, perhaps – a second-order, simulated, emulated, virtual (again – pick an adjective) experience. My Theory of Mind allows me to notionally and partially interpret, translate and strategically adopt (for, let’s be frank, overt or unconsciously-motivated self-interested exploitation) another person’s point of view, not merely to predict probable outcomes from it. Blind-shuffling the rule-sets and symbols which indicate the existence of such an internal abstraction and cognitive insight may be just that: a blind-shuffling of the rules and symbols required to indicate or simulate cognitive insight; a sophisticated Turing Test of sorts might be problematised here.
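The distinction drawn above – behavioural prediction versus genuine perspective-taking – can be made concrete with a toy sketch. The class and its names below are purely illustrative assumptions, not the architecture of any published system: an “observer” that merely counts an agent’s past actions in each state and predicts the most frequent one. It extracts useful inference from observed data while containing nothing that could count as a model of the agent’s point of view.

```python
from collections import Counter, defaultdict

class BehaviouralPredictor:
    """A toy 'observer' (hypothetical, for illustration only).

    It predicts an agent's next action purely from observed
    state -> action frequencies. There is no representation of the
    agent's beliefs, goals or frame of reference, only surface
    statistics over its past behaviour.
    """

    def __init__(self):
        # For each observed state, a tally of the actions seen there.
        self.observations = defaultdict(Counter)

    def observe(self, state, action):
        """Record one observed (state, action) pair."""
        self.observations[state][action] += 1

    def predict(self, state):
        """Return the most frequently observed action for this state,
        or None if the state has never been observed."""
        if state not in self.observations:
            return None
        return self.observations[state].most_common(1)[0][0]

# A short observed trajectory of a hypothetical agent:
predictor = BehaviouralPredictor()
for state, action in [("door", "open"), ("door", "open"),
                      ("door", "knock"), ("wall", "turn")]:
    predictor.observe(state, action)

print(predictor.predict("door"))    # -> "open" (most frequent observation)
print(predictor.predict("window"))  # -> None (unseen state, no basis at all)
```

The point of the sketch is deliberately negative: such a predictor can succeed behaviourally in familiar states, and fail silently in novel ones, without ever “adopting” the agent’s perspective in any sense.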

◆ Intelligence, consciousness, awareness and an internally-mapped experience of reality (and of those apparent Others within it) are not a necessary guarantee of the existence of any personal Theory of Mind, or indeed of tangible or consequential sentience. The appearance of a thing as being of a particular kind (i.e. of sentience – in Self or Other), or its exhibiting behavioural symptoms which indicate even high probability under consensus-validated determinations of what it is, or may be, for an entity or system to be recognised as “intelligent” or as having a mind – none of these are necessarily or causally linked beyond a perhaps ineradicable anthropo-psychologically reflexive aspiration to self-importance. We cannot assert the existence of Other minds without doubt, so how can we abstract in this context a second-order digital cognition of such an awareness? It is true that machines may not, beyond certain (current but not necessarily intractable) boundary-issues of logic and axiomatic self-reference, be limited by the ontological constraints or organic and psychological biases innate to human beings (and minds). However, we most certainly are limited in these ways, and as such almost everything we assert is entangled in the snares and razor-wires of an intransigent referential circularity: we cannot be certain that machines have a Theory of Mind because we cannot be certain that we ourselves possess anything more than a compelling simulation of the behavioural (or even internal, experiential) consciousness and sentience we lay claim to.

◆ The possession of an internal (experience of a) Theory of Mind, and of its constitutively self-reflective set-of-self – rendered, characterised, imagined (pick a description, or choose your own) as a set containing itself – may just be that singular inflection upon which both consciousness-as-awareness and Artificial General Intelligence are suspended in mid-air by their own metaphorical bootstraps. The successful internalisation of a projected Other is, in a foundationally psychological or psychotherapeutic sense, that kernel upon, around and through which Self is grown; it is in the internal logical spaces of self-inflationary, self-propagating referential circularity and endlessly extensible symbolic metamorphosis that Self becomes not only intelligible or plausible but quite possibly also probable.

■ The apparent existence of Other minds can never be proven, regardless of evidence or observed data. Such proof is, at base, logically impossible beyond a leap of faith grounded in a critical suspension of disbelief.

■ The apparent existence of Self can never be proven, regardless of evidence or observed data. The problem of Self is actually identical to the problem of Other Minds in that (among other things) the circularity of all definitions precludes all certainty or rational closure on the matter. It may make some degree of sense to allow the existence of Others into our individual ontological taxonomies and frames of reference, just as for fairly obvious reasons it seems sensible to admit ourselves into this system of belief.

◇ The experimental systems (and results) referenced are brilliant and not without computer-science or philosophical consequence, but they remain partial and perhaps aspirationally over-inflated. This work is certainly heading in what appears to be the right direction towards the Holy Grail of Artificial General Intelligence.


Rabinowitz, N. C. et al. (2018), “Machine Theory of Mind”, Proceedings of ICML 2018 (arXiv:1802.07740).
