An enduring curiosity of human psychology and culture is that, from about the time – perhaps of the Industrial Revolution (?) – when we acquired a degree of distributed and self-conscious awareness of the extent and consequence of our own procedural and developmental technological progress, each significant iteration has been regarded as the end of progress, an inevitable teleological resolution, and the final piece of the puzzle of power, politics and technological civilisation. In the late 19th Century, around the time of the discovery of the electron, it was widely believed that physics was all but complete. In retrospect this belief seems somewhat comical – perhaps forgivable, but it should stand as a resounding admonition against any faith in epistemological closure.
While we might now be at least partially cured of our relentlessly optimistic (or is it tragic?) attachment to a psychologically-reflexive belief in even the possibility of material, technical or logical closure – it still bubbles and percolates up from under the many, essentially linear, narratives we inhabit. AI is powerful and, marketing hype and pop-cultural misdirection notwithstanding, is well on its way to irrevocably changing our world.
The indefinitely extensible nature of logic, of mathematics and (plausibly) also of physics implies that any salient feature on the technological landscape is only ever a transient waypoint on a path to something more sophisticated, clever and constitutively complex. Take the turbulent complexity of cyber security as a powerful case in point – the indefinitely extensible nature of these information and communications systems (which, ironically, is also the source of their power and utility) obliges us to accept an endless tide of innovation, of measure and counter-measure.
Artificial Intelligence as a technical and cultural artefact may shape and influence the most probable recombinatory paths of future technological, cultural and sociopolitical developments in profound ways, but doubling-down on a fantasy of closure is to (continue to) be ill-prepared for the inevitable arrival of unexpected future technological inflections.
We should be equally concerned that, while there is really no single AI methodology or solution, the technologies through which we simulate or aspirationally recreate intelligence are recursively shaping and redefining ourselves in ways which may on the whole be positive, but which contain sinister seeds of authoritarianism, of limiting biases in mechanistic self-representation, and of – again – an aspirational closure which is neither realistic nor logical, nor reflected back to us from the actual complexity of the world.
My initial response to the article linked above:
There is only one thing worse than judging an article on the content of its title (which I have just done), and this lower [f]act lies in creating a Fallacy of Closure which attempts to ascribe ascendancy and teleological completion, however partial and transient, to a technology in this way. There is not one AI; there is no sense in which logical or mathematical closure exists beyond a very narrow and superficial comprehension of information and energy-processing systems (of which we are each and all an instance); and there is no possible way that anyone on this Earth can predict what either the technological or geopolitical circumstances of this troubled Earth of ours will be in 2100.
What is in effect a title seeking to stamp authority and interest upon a domain of knowledge reads more like conceptually fuzzy twaddle in search of advertising revenue. I really should read the article to see how close I was, but we would all do well to keep an eye on the facts, on the probabilistic drift of technological applications and innovation into unexpected configurations, and on the extent to which commercial value is cultivated from narratives of certainty in domains in which there can be none.