The Sociocognitive Dangers of AI

We are building machines that can talk, draft, diagnose, summarise, and imitate. Each month the interface becomes smoother; each week the latency shrinks. The friction that once shaped our thinking — the tiny deferrals through which thought organises itself — is being polished away. It feels like progress because the response arrives quickly and looks like intelligence. But what we are really refining is not thought so much as throughput, trading the internal room in which understanding forms for the appearance of instant clarity.

Modern artificial intelligence rests on a tight bet about what intelligence is. Take vast amounts of recorded behaviour, compress it, and learn which patterns tend to follow which. Press a button and let the model predict the next plausible move, the next token, the next click. The mathematics is impressive, the engineering intricate, the scale unprecedented. Yet the underlying definition is narrow. Intelligence becomes whatever can be captured as statistical regularity over past data, under constraints that make it cheap to run and profitable to deploy.
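To make that narrowness concrete, here is a deliberately minimal sketch: a toy next-token predictor built from nothing but counted co-occurrences in a tiny corpus. The corpus and names are illustrative, not drawn from any real system; everything the "model" knows is a frequency table over past data, and prediction is just a weighted lookup over what tended to follow what.

```python
from collections import defaultdict, Counter
import random

# Toy corpus standing in for "vast amounts of recorded behaviour".
corpus = "the model predicts the next token and the next click".split()

# Count which token tends to follow which: a bigram frequency table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    options = follows[token]
    if not options:
        return "<unknown>"  # nothing in the recorded past to fall back on
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts, k=1)[0]

print(predict_next("the"))  # e.g. "model" or "next": whatever the past recorded
```

Scaled up by many orders of magnitude and wrapped in learned representations, the bet remains the same: intelligence as statistical regularity over what has already been written down.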

Once that definition is wired into products and platforms, it stops looking like a definition and starts looking like reality. Search becomes “what the model can retrieve.” Understanding becomes “what the model can paraphrase.” Creativity becomes “what the model can remix into novelty that still feels familiar.” The space of possible minds collapses into the subset that can be rendered as a probability distribution over tokens and logged at scale.

The cost of this collapse is not just philosophical. Human intelligence is not only memory and pattern recognition. It is also the awkward, slow, relational work of making sense with other people, in real time, when nothing fits the script. That fragile work depends on unscripted conversations, on silences, on misunderstandings that have to be repaired, on the long arc of trust. None of this converts neatly into training data or engagement metrics. So it is quietly sidelined.

If you narrow the channels through which people communicate, you narrow the kinds of intelligence that can survive. Interfaces that reward speed, certainty, and surface coherence train us away from forms of thought that cannot keep up. Reflection begins to feel like latency. Doubt looks like an error state. Ambiguity is reclassified as noise. A culture can be dumbed down not by making individuals less capable, but by progressively starving the relational environment in which their capabilities matter.

The trap is that we will not notice the loss clearly. When you grow up inside a bandwidth-limited environment, its constraints feel like the shape of reality. We will keep being shown machines that do sparkly, astonishing things. We will say, as many already do, “look how much smarter they are than us.” It will feel true because we have quietly accepted their operating definition of intelligence, a definition that excludes the parts of experience that do not fit on the wire.

Meanwhile, the real work of artificial intelligence is drifting out of sight. The headlines fixate on AI as assistant, co-pilot, or synthetic companion. But the bulk of the effort will sit elsewhere: in the submerged infrastructure that keeps these systems from collapsing under the weight of their own side effects. Models to filter the spam generated by other models. Algorithms to police the misinformation amplified by earlier algorithms. Risk engines to triage the harms produced by automated decisions. Stacks of code designed to manage the turbulence created by other stacks of code.

You can call this productive. The dashboards will. The graphs will curve upwards. New industries will form around making unwieldy complexity slightly more manageable through more of the same mechanisms that produced it. But there is a sense in which this is just monetising the smoke from an engine no one fully understands, selling fire extinguishers to the people living downwind and calling it innovation.

This pattern is not new. Finance built exotic instruments that outgrew comprehension, then built more instruments to hedge against the first set. Social media created feedback loops no one could steer, then sold “safety” and “trust” products to mitigate the worst outcomes. Artificial intelligence extends this logic. It turns society into a control system that is always chasing its own tail, deploying ever more automation to patch the holes left by earlier waves of automation.

At the centre of this chase sits a basic asymmetry. Corporations do not need a full theory of intelligence to profit from a narrow one. They need a definition that is cheap to compute, easy to quantify, and hard to argue with if you do not own the servers. Metrics that fit those criteria become the new common sense. The more we accept them, the more the rest of our experience is forced to justify itself against a benchmark that never deserved that authority.

The comparison will not be neutral. If a system can write a competent email in two seconds, what does it mean for a human to spend two hours on a difficult letter that reshapes a relationship? If a model can answer exam questions flawlessly, what happens to the kind of learning that involves wrestling with confusion over days and weeks? If a chatbot can pretend to care, how do institutions justify the time and expense of real human care that does not scale and cannot be logged neatly?

In each case, the slow, relational, context-heavy forms of intelligence risk being recast as inefficient, indulgent, or obsolete. The human is not measured against an ideal of human flourishing but against the throughput of a machine optimised for a different task entirely. That is not competition. It is misclassification. It is like calling a spotlight superior to daylight because the beam is narrower and more intense.

The most worrying part is not that these systems exist, but that we are rearranging our lives to suit their limitations. Classrooms train students to perform in ways that look good to automated assessors. Workflows are redesigned around what software can easily track. Social interactions are mediated by platforms that interpret every gesture as a potential data point. The grain of the world is subtly recut so that it aligns with the ways our machines see.

Yet delay is not a defect to be engineered away; it is constitutive. Every signal is smeared across time. Light itself — the baseline of what we call information — is not an object but a deferral, a structured phase lag that makes experience possible. Space, in this sense, is just organised delay: the interval between emission and reception, between cause and trace. Meaning forms inside these offsets — between what happens and how it is perceived, between what is said and what is understood. Remove the lag and you do not get perfect knowledge. You lose the field in which understanding stabilises.

Artificial intelligence will continue to grow more capable. It will be woven into the infrastructure of everything from medicine to logistics to governance. But its real expansion is recursive and inward. The technology grows by circling its own consequences, pulling new dependencies toward the centre and generating fresh layers to manage the turbulence left by the last. Much of its future work will be spent sustaining this inward cascade: stabilising failures, extending infrastructure, converting each new gap into another point of sale. Whether this settles into a workable equilibrium or tips into an accelerating trap depends on what survives the pull — the slower, relational forms of intelligence that keep the whole system from folding in on itself.

There is nothing inevitable about surrendering those forms. Artificial intelligence could remain one tool among many, useful precisely because of its constraints and valuable because it does not experience the world as we do. That would require constant, explicit reminders that its definition of intelligence is contingent and partial. Every metric would have to be treated as a hypothesis rather than a verdict. Every automated judgement would need a surrounding space where human beings can say, simply, “this misses something important.”

If we forget that intelligence is larger than what can be harvested from data, then the narrowing will feel like clarity. We will mistake a shrinking horizon for a better view. The task, while we still feel the difference, is to keep open the parts of our shared life that resist compression: the conversations that do not trend, the questions that admit no quick answer, the forms of care that leave no log file. These are not luxuries. They are the stabilising mass of the system — the dark, relational substrate that keeps the orbit open and prevents the whole field from collapsing into its own centre.

One reply on “The Sociocognitive Dangers of AI”

I used to be good at navigation until GPS proliferated. I’m not sure what exactly AI will take away from me, but based on precedent it seems inevitable. Still, I am certifiably better at getting places, even if the route has lost its idiosyncrasy. Dependent on tech, sure. But I am materially better off. Maybe AI is a net gain. Maybe it is not.
