Australia Bets the Future on Big Tech’s “Trust Us” Doctrine

Context: Artificial intelligence to be managed through existing laws as Australia unveils national AI plan

Australia has chosen to manage artificial intelligence through “existing laws” and “industry-led standards,” a position repeated across today’s public messaging: flexible oversight, voluntary guardrails, and a promise that the newly announced AI Safety Institute will advise, not constrain. Ministers point to productivity, innovation, and the fear of stifling growth as reasons to avoid a dedicated AI Act. Yet this stance mirrors a global pattern where governments — not through corruption, but through dependence — defer to the very corporations whose systems now structure communication, labour, governance, and public life. Big tech firms have become quasi-institutions in their own right, shaping policy through the expertise they supply and the infrastructures they control. What looks like cautious pragmatism is also a quiet admission: technology companies now wield administrative power at a scale that rivals the state, and voluntary guardrails are a concession to that reality rather than a safeguard against it.
