Context: Would you trust an AI Operative in the field?
This quote from the article above assesses a limitation of current AI systems: "…tend to respond well to what they've been trained to detect, but responses can become erratic when confronted with unexpected circumstances…" It is in fact, and perhaps not coincidentally, also an apt operational and functional description of the overwhelming majority of "best practices", certification frameworks and idioms of institutional and organisational convention and governance.
The clockwork, algorithmic and operationally-conditioned organisational and institutional idioms within which we all find ourselves (in various ways and at various times) suffer from much the same problems: inflexibility and incompetence in dynamic, rapidly-evolving circumstances. The irony is that at precisely the moment such dramatically inflexible semi-autonomous technologies emerge, we find ourselves in need of more adaptive aptitude and dynamic perspicacity than ever before.
We should not be surprised that our advanced autonomous systems bear the logical DNA of a discontinuity implicit in cognition and its hyper-extended organisational and technological artefacts. Technologies, as indefinitely-extensible artefacts and systems, are as infused with logical antinomy as their creators are. The recursive enigma of undecidability, uncertainty and distributed entropy is ubiquitous.