Regarding the existential threat commonly raised in reference to the emergence of authentic Artificial Intelligence, I wonder whether we might one day misinterpret the reality or existence of the integrated systems that may spontaneously emerge. Our psychology predisposes us to view things in terms of focal end-points, perspectival vanishing points and the associated, logically delimitable mechanical depth-projections of simple Euclidean spaces: closed, comprehensible, demonstrably axiomatic (i.e. provable), bounded nodes. What of distributed or otherwise unbounded, open systems and “multi-dimensional” mechanisms or emergent processes that might exist as dynamic cross-sections of a complex matrix, defined in symmetrical patterns across platforms, systems and networks? What of intelligent systems whose intelligence exists in ways we do not, and cannot, fathom? Is this purely science fiction, or a matter of probability, of plausibility? Should we imagine that Artificial General Intelligence will in any form mirror ourselves, or that its generated products need be in any way intelligible to us? That expectation appears to be an essentially narcissistic fallacy.
The article attached here is rather less of a pipe dream than the reflections above: bounded autonomous entities, systems and mechanisms capable of wielding astounding deadly force already exist. This cat, it seems, may already be at least partially out of the bag…
Elon Musk leads 116 experts calling for outright ban of killer robots