AI’s Poisoned Data

Context: DeepMind tells Google it has no idea how to make AI less toxic

The raw data is indeed endemically, if intermittently, toxic. Filtering venomous misanthropy out of the dataset is one solution, but doing so reliably requires a language model that does something substantively more than generate probabilistic sequences of linguistic artefacts without comprehension.
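To make the filtering point concrete, here is a minimal sketch of blocklist-based corpus filtering. Everything in it is an illustrative assumption (the placeholder tokens, the crude tokenisation); it shows exactly the kind of shallow pattern-matching that works without comprehension, and therefore misses anything toxic that isn't spelled in the expected way.

```python
# Minimal sketch of corpus toxicity filtering via a keyword blocklist.
# The tokens and logic are illustrative placeholders, not a real filter.
BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholder terms

def is_toxic(text: str, blocklist: set = BLOCKLIST) -> bool:
    """Flag a document if any blocklisted token appears in it."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & blocklist)

def filter_corpus(docs: list) -> list:
    """Keep only documents the blocklist does not flag."""
    return [d for d in docs if not is_toxic(d)]

corpus = ["a perfectly pleasant sentence", "this contains slur1 sadly"]
clean = filter_corpus(corpus)
```

A filter like this removes surface forms, not meanings: paraphrased or coded hostility passes straight through, which is the gap the paragraph above points at.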

John Searle’s Chinese Room argument clearly illuminates the inability of AI to understand anything. Successfully manipulating the grammatical rules of a language is not tantamount to understanding what anything within that language actually means.

It is probably worth noting that the combinatorial sophistication of language, as a semantic property of experience and understanding, is many orders of magnitude more complex than the probabilistic frameworks with which it is modelled.
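The scale gap can be made vivid with some back-of-envelope arithmetic. The figures below are illustrative assumptions (a rough subword vocabulary size and a 2020-era parameter count), not numbers from the post:

```python
# Back-of-envelope comparison: the space of possible token sequences
# versus the capacity of a large probabilistic model.
vocab = 50_000                 # rough order of a modern subword vocabulary
length = 20                    # a short sentence, in tokens
sequences = vocab ** length    # possible token sequences of that length

params = 175_000_000_000       # parameter count of a large 2020-era model

# sequences is astronomically larger than params: the model cannot
# enumerate, let alone observe, more than a vanishing sliver of the space.
print(f"{sequences:.2e} possible sequences vs {params:.2e} parameters")
```

Even for twenty-token sentences the sequence space is on the order of 10^93, so any model is interpolating over a vanishingly sparse sample of language, never surveying it.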

It is also worth reflecting that the core biases of cognition, culture and communication represent patterns of dissonance, a kind of constructive entropy, that are (for better and for worse) irreducible: they are the facts and heuristics by and through which information systems and data symmetries in our world recursively self-propagate.

Solving the data problems requires solving problems in the world, and solving problems in the world is problematised by the enigma that semantic dissonance is endemic.
