Robots might be a bit of a stretch as candidates for personhood, although it remains somewhat indistinct where and when awareness, experience or sentience arise even in recognisably “living” things. We may (following, at a distance, something I once read in either Daniel Dennett or Douglas Hofstadter) possess ethical obligations towards any entity, system or technological artefact for which it is like something to be that thing, anything that has an experience or awareness of self; but the threshold of complexity and autonomous information processing at which this occurs is not, and may never be, entirely certain.
At what threshold of information-processing complexity does plausible personhood, or sentient awareness and experience, emerge? Measuring by our own experience will only ever be tautological, certainly, but where do we draw the boundaries, definitions and borders, and the corollary ethical algebra, by which a robot acquires legal rights beyond being mere property?
AGI is the wildcard here. (Law is a poor tool for negotiating ambiguity.)