The mystery here is that what makes AI so eminently useful is its ability to do what we cannot, in ways we do not always need to understand. There is something of a resonant psychological pathology at work: we seek, at all costs, to invoke generally intelligent autonomous systems that significantly surpass our own abilities, while hesitating on the threshold of ever actually creating them. The distance and difference by and through which aspirations to epistemological certainty validate themselves as distinct from their Objects are simultaneously the points of inertial constraint that limit us.

Ethics are important, certainly; safety and security are important, absolutely. But the unacknowledged elephant in the room seems to be that we remain content to entertain fictions of objective certainty and ontological closure, fictions which all but guarantee that we will probably not even recognise general intelligence when it arrives, because it remains quite concretely located in a reflexive cognitive blindspot through which we seek to protect our hubris and our insecure ego-selves.
Categories
Fear of fully autonomous AI
