Can Robots possess Legal Rights?

Define a person, a sentient being towards which we possess ethical obligations, and do so without any ambiguity, uncertainty, or doubt, if you can.

Context: Experts Sign Open Letter Slamming Europe’s Proposal to Recognise Robots as Legal Persons

Robots may be a stretch as candidates for personhood, although it remains somewhat indistinct where and when awareness, experience, or sentience arises in recognisably "living" things. We may (following, at a distance, something I once read in either Daniel Dennett or Douglas Hofstadter) possess ethical obligations towards any entity, system, or technological artefact for which it is like something to be that thing, that has an experience or awareness of self; but the threshold of complexity and autonomous information processing at which this occurs is not, and may never be, entirely certain.

What is the threshold of information-processing complexity at which plausible personhood, experience, or sentient awareness is generated? Measuring by our own experience will only ever be tautological, certainly, but where do we draw the boundaries, definitions, and borders, and the corollary ethical algebra, from which a robot acquires legal rights beyond being mere property?

AGI is the wildcard here. (Law is a poor tool for negotiating ambiguity.)
