Studying machine learning now looks less like a steady ascent and more like an asymptotic orbit, perpetually caught between attraction and repulsion. The desire is centripetal, a pull towards the imagined centre of mastery and promise, but the structure itself functions centrifugally, casting people back out and denying arrival. The gap between those forces, the delta, sustains the whole system. It is not the learning that matters so much as the persistent asymmetry: being drawn in, displaced again, and repeating the cycle indefinitely.
The promise remains undefined, deliberately vague, suspended as a lure that can never be reached. The pitch insists that AI will transform everything, solve inefficiencies, eliminate risk, or multiply profit. But the reality is that the problems of AI are mostly problems because of AI: security burdens, infrastructural fragility, training bottlenecks, recursive dependence. What advances is not the technology's resolution of these contradictions but the endurance of the contradictions themselves. The system thrives on reproducing the distance, the inaccessible promise, keeping everyone circling without ever closing the loop.
One reply on “Objective Function of Machine Learning”
This isn’t, at its core, new at all; machine learning is just another inflection of the same statistical, cultural recursion that periodically draws crowds of aspirants like moths to a naked flame. The difference this time lies less in the technique than in the communicative density that magnifies its reach: a global threshold of complexity has already been crossed, so the cycle plays out faster, louder, and with more turbulence than before. There are no winners in this game, other than the game itself, the technology, and the turbulent consequences in which these technologies envelop us, leaving behind more of the same flotsam: an algorithmic rationale of ineffective, existentially vacuous technical solutions and the dependency they invoke.