Context: How would we know if an AI is conscious?
We find ourselves in an intractable ontological (as much as epistemological) bind here. If we ever reach the point of generating autonomous synthetic information-processing systems that tick every box of whatever standard metric we might arbitrarily define for notionally proving possession of consciousness, we will have no more access to those conscious states than we have to each other’s. Even complex, real-time observations of neurocortical activity, filtered through technologies that can detect or identify what a person is thinking (and might plausibly forecast behaviour with a high degree of accuracy), remain always at least once-removed from that person’s subjective experience.
More troubling still: if we could ever satisfy ourselves of the veracity of an observational determination that a machine intelligence had in some sense ascended to sentience, the inevitable (if, for many, unpalatable) corollary is that our own cherished subjective experience might reveal itself as little more than a similarly complex yet hollow information-processing system, one that natively approximates the best functional, logical and mathematical model of a machine intelligence-in-context.