The contemporary ethics of artificial intelligence is dominated by institutional reports, advisory panels, and compliance frameworks that focus on bias mitigation, transparency checklists, and downstream harm reduction. These efforts are not meaningless, but they are constrained by the same political, legal, and economic structures that fund them and define their remit. As a result, the questions asked are cautious by design, bounded by acceptable language, and framed to preserve institutional legitimacy rather than to interrogate the systems that generate the problems in the first place. Ethics becomes a managed activity: measurable, auditable, and safely absorbed into existing power arrangements without disturbing their foundations.
What is missing is not another guideline but a willingness to examine the deeper conditions that make these systems behave as they do. Bias, ambiguity, uncertainty, and distortion are not bugs that can be patched out of intelligent systems; they are intrinsic to language, cognition, and meaning itself. Any apparatus built from symbols will inherit these properties, amplify them at scale, and then feed them back into the world as if they were neutral outputs. Ethical discourse that refuses to confront this ends up performing a ritual: a game of careful words about power that never questions how power is constituted, stabilised, and reproduced through language and technology together. The problem is not insufficient ethics but an ethical posture that cannot look beneath its own operating assumptions, and so remains busy, respectable, and ultimately inconsequential.