On Moral Machines and Embodied Biases

The image in the mirror (of technology) is the machine, and its assumptive or aspirational perfection disturbs us.

Context: The Appearance of Robots Affects Our Perception of the Morality of Their Decisions

Curious: “People consider moral decisions made by humanoid robots to be less ethically sound than when another human or traditional-looking robot makes the same decision.” The uncanny valley rides again.

This is a particular instance of a general principle. The apperception of ethical fallibility is (here) grounded upon an overt, aesthetic difference. This bias is common enough in tribal in-groups and in (resurgent) racial or nationalist insecurity: a belief in the superiority of the logic and structural integrity of the belief system a person inhabits, one that is, unacknowledged, anchored upon nothing more than its own recursively tautological self-definitions.

We can also invoke (a ghost in the machine of) psychoanalysis. The extent to which ethical fallibility becomes a prominent point of cognitive leverage regarding a corporeally self-similar machine suggests a distrust that may be little more than the second-order self-inflection of (a) primary narcissism. Human-like robots amplify an essential and ineradicable uncertainty, the subjective “less than perfection” that this technologically-mediated “image in the mirror” suggests.

It is always (and at least superficially) less costly to project one’s own fallibilities outwards.
