Curious: “People consider moral decisions made by humanoid robots to be less ethically sound than when another human or traditional-looking robot makes the same decision.” The uncanny valley rides again.
This is a particular instance of a general principle. The apperception of ethical fallibility is here grounded in an overt, aesthetic difference. It is a bias common to tribal in-groups and to (resurgent) racial or nationalist insecurities: a belief in the superior logic and structural integrity of the belief system a person inhabits, a belief that is, unacknowledged, anchored on nothing more than its own recursively tautological self-definitions.
We can also invoke (a ghost in the machine of) psychoanalysis. The extent to which ethical fallibility becomes a prominent point of cognitive leverage against a corporeally self-similar machine betrays a distrust that may be little more than the second-order self-inflection of (a) primary narcissism. Human-like robots amplify an essential and ineradicable uncertainty, the subjective "less than perfection" that this technologically mediated "image in the mirror" reflects back at us.
It is always (and at least superficially) less costly to project one’s own fallibilities outwards.