Biased AI & Inequitable Algorithmic Justice

Context: Can you make AI fairer than a judge? Play our courtroom algorithm game

Bias in AI only ever concentrates and amplifies existing inequality, because it operates as a retrospective, combinatorial abbreviation of historical data. This is no great or new revelation: we build automated decision-making systems that recursively reinforce the aggregate cultural dependencies and biases of the data they are trained on.
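
A minimal sketch of that recursive reinforcement, using invented data: assume hypothetical historical bail records in which one group was granted bail less often for reasons unrelated to risk. A decision rule estimated from those records simply reproduces the disparity when it scores new cases; the groups, rates and threshold below are illustrative assumptions, not a real system.

```python
# Illustrative sketch only: a rule "learned" from biased historical outcomes
# reproduces that bias at decision time, because group membership ends up
# acting as a proxy for the historical label.
import random

random.seed(0)

# Hypothetical historical records: (group, granted_bail). Group "b" was
# historically granted bail less often, independent of actual risk.
history = [("a", random.random() < 0.7) for _ in range(1000)] + \
          [("b", random.random() < 0.4) for _ in range(1000)]

# "Training": estimate the historical grant rate per group.
rates = {}
for group in ("a", "b"):
    outcomes = [granted for g, granted in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

# "Deployment": score a new case by its group's learned rate and threshold it.
def decide(group, threshold=0.5):
    return rates[group] >= threshold

print(rates)                     # roughly {'a': 0.7, 'b': 0.4}
print(decide("a"), decide("b"))  # True, False -- the historical disparity persists
```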

This is already how sociotechnical systems self-propagate through a transmission medium of (inefficient) cognitive grammars and normative behavioural or institutional orthodoxy: as an autonomous bias towards gestalt system reproduction, regardless of truth or equitable outcomes, and partially detached from the facts those systems claim to address.

A generalised bias towards inequitable outcomes in justice and assorted other systems of governance or administration is a function of psychological, cultural and organisational (i.e. information-processing) complexity; that is, integrated organisational systems reproduce precisely the constructive dissonance by and through which their own sustainable continuity and contextual tenure are assured.

Which aspects and assumptions can we remove without completely disassembling institutional competency? Biases are endemic, metastatic; solutions must reflect this.
