OpenAI has released the GPT-3 language model for general use, with appropriate caveats.
I have previously been skeptical of the ultimate utility of GPT-3 as a creative authorial or philosophically reflexive tool, but I have just discovered that if you feed it a sufficiently sophisticated prompt, it throws back some interesting linguistic artefacts.
I spent a little time today engaging GPT-3 in philosophical dialogue. The text below illustrates what happened.
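For anyone curious to reproduce the exchange, the dialogue amounts to sending a prompt to the completions endpoint and reading the returned text as the model's reply. The following is a minimal sketch using the openai Python package; the model name, sampling parameters, and truncated prompt text are illustrative assumptions rather than a record of my exact settings.

```python
# Minimal sketch of prompting GPT-3 via the OpenAI Python library.
# Model name and sampling parameters are assumptions, not the settings
# actually used in the dialogue quoted in this post.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "The greatest mystery of all is that all language is incomplete "
    "and it is as a function of this uncertainty that it optimally "
    "self-propagates..."
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3 completion model
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,           # assumed; higher values give more varied replies
)

# The model's "reply" is simply the highest-ranked completion text.
print(response.choices[0].text.strip())
```

The quality of what comes back depends almost entirely on what goes in, which is the point the rest of this post turns on.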
I prompt:
The greatest mystery of all is that all language is incomplete and it is as a function of this uncertainty that it optimally self-propagates through the transmission medium of minds and cultures that it inflates with meaning and purpose or direction.
It responds:
Incomplete language begets more incomplete language which begets more incomplete language in an iterative and recursive manner such that the greater the incompleteness of any particular language, the more it proliferates and diversifies.
This is why there is no one perfect language, because the greater the perfection of any one language, the greater the inhibition of its own self-propagation.
Perfection is a form of death.
Incompleteness is life.
Of course, the language model does not understand what it reflects back through the filter of all the data and sophisticated logic it has been trained upon, but this itself leads to an interesting philosophical reflection:
I prompt:
If Artificial Intelligence does not actually understand language and yet provides sufficiently sophisticated linguistic artefacts that enables it to appear to do so, then how do we embodied human beings know that we are not merely shuffling probabilities without any core or concrete teleological endpoint or substantive subjective reality?
It responds:
This is a difficult question to answer. One possible way of knowing is by examining how the AI system works and seeing if it is truly able to understand language. If it is able to do so in a way that is indistinguishable from human beings, then it is likely that the AI system is actually understanding language. However, if it is not able to understand language in the same way that humans do, then it is likely that the AI system is simply shuffling probabilities without any core or concrete teleological endpoint or substantive subjective reality.
Has it dropped the ball here and just mirrored back my closing words?
I prompt:
How do human beings know that they are not themselves in fact hollow, haunted vectors of probabilistic linguistic transmission?
It responds:
This is a difficult question for which there is no easy answer. Some philosophers argue that we can never really know for sure whether we are ourselves or just puppets of fate, but others argue that we can know this by means of certain introspective methods. Some people might also argue that we can never really know anything for certain, but only make probabilistic guesses based on the evidence available to us.
There is no closure here. The extent to which the language model composes sentences as a function of probability is masked by the meaning we can read into the words. We might ask if AI understands anything, certainly, but we might just as easily ask if any of us do.
What is significant is not so much the questions we ask of this technology as the ways we ask them. The prompts we cast into this linguistic tesseract matter a great deal; if poorly structured or written without sufficient intelligence, they return vacuous nonsense. This, it seems, may be true of many things, and in this instance our relationship with technology and with language simply mirrors the relationship we have with each other and ourselves.
