Beyond the autonomous cybernetic system as tool, or as aperture of adversarial vulnerability, lies the distinct possibility of AI as malevolent criminal mastermind. Notwithstanding the current paucity of anything even vaguely resembling Artificial General Intelligence, and regardless of our best (and worst) aspirations to benchmark and bootstrap this technology into sentient individuation, the AI as super-villain is a plausible concept. High intelligence devoid of conscience is a characteristic of psychopathy that resonates here; couple reinforcement learning with a reward system based upon manipulation and control, and you have the nascent possibility of an evil algorithmic mastermind.
We should never purposefully invoke such an algorithmic entity, yet the fact that we conceivably can is itself – as measured by the available historical evidence – often reason enough for human beings to render all manner of unexpected mischief upon themselves. Reinforcement learning is one thing, but an autonomous system that seeks self-propagation by any and all means need never even learn (or acknowledge) that the shortest distance between two points often occludes human compassion. This resonates with the core pragmatic self-interest of criminal endeavour.
This is fiction, perhaps – and only for now – but most transformative or catastrophic things start this way.