If an intractable problem in any rapidly evolving technological or regulatory frame of reference is that the accelerating pace of change at least partially invalidates models, axioms and assumptions before they have even been operationalised, then we find ourselves (yet again) staring down an existential double-barrel of complexity and systemic entropy. The nature, diversity and proliferating complexity of AI as a concept (as much as a reality) generates a problem-space that, for planning and risk management, differs from many other non-trivially complex logical conundrums not in kind so much as in degree.
A critical failure in many approaches to technological risk or regulation is that, because any guidance system, model or theoretical conjecture can only ever be proven or validated as self-consistent in retrospect, the endemic and effervescent percolation of unexpected developments that invalidate its axioms is represented as anything other than the necessary blindspot it must remain. Do we find ourselves inhabiting regulatory systems that cast the inevitability of unmanageable complexity as an intelligible or well-defined risk, rather than as the unacknowledged dependency on systemic entropy that it is?
More context: Europe eyes strict rules for artificial intelligence