Access to the high-dimensional complexity resident in machine learning models comes at a cost. It is not discussed much in public, but the ability to interrogate these complex information systems appears to bring an inevitable loss of control over their outputs. Generative systems in particular are prone to this inverse riddle: the more sophisticated our "artificially intelligent" technological systems become, the less control we have over their consequences. You would think this would be one of the singularly relevant concerns: not that we cannot control some specific downstream consequence, but that this indicates we can never, no matter what we do, have full control over these powerful systems.
The essence here is that such inordinately high-dimensional communication systems are, for all practical purposes, orders of magnitude beyond our own capacity: their combinatorial complexity is effectively unbounded. Yes, there are limits, and, at least from our own perspective, unknowns, but that is to be expected within any such distributed, generally recursive, and sufficiently complex system.
Regardless, our goal and gambit becomes shaping the entropy of system outputs. This has been the task for a very long time, but now those systems are encountering themselves at scale, in ways for which none of us could ever have been prepared. Interesting times.