AI: Managing an Emerging Power

Chapter 41.

“The future of politics lies not in fighting technological progress, but in controlling it.” Jamie Susskind

The emergence of Artificial Intelligence represents the most significant structural change in power since the Industrial Revolution, forcing us to update our political understanding to account for a new, potent form of social control that operates not through coercion but through code. This is not merely a new technology; it is, as scholar Shoshana Zuboff defines it, a “rogue mutation of capitalism” named Surveillance Capitalism, in which private human experience is unilaterally claimed as free raw material, translated into predictive behavioural data, and sold on “futures markets” to modify our actions for profit. This new system, which we can call the Digitally All-Seeing, functions with an extreme concentration of knowledge and power entirely free from democratic oversight, following a Big AI playbook that echoes Big Oil: a handful of entities, not the state, wield the control. The concentration of this power—the resources, the data, and the hardware—makes the market vulnerable to the same winner-take-all dynamics that characterised the old monopolies, demanding a structural analysis akin to the study of the great corporations in the previous part of this book.

This journey, which requires the individual to confront external manipulation, brings us back to the psychological and philosophical principles established in Part 2, Human Agency. It is here we revisit the fundamental work of authors like Viktor Frankl, whose observation that the ultimate human freedom lies in choosing one’s attitude and response, even to unimaginable suffering, provides the philosophical mandate for Digital Sovereignty in a world designed for passivity. The imperative of self-authorship, central to reclaiming one’s life, remains the last human freedom.

This new power is primarily exercised through Algorithmic Governance, a concept articulated by political theorist Jamie Susskind, who argues that code is power: those who control the algorithms are increasingly able to control the rest of us by setting the limits of our liberty and defining what is just. The Digitally All-Seeing system operates through constant, opaque surveillance, creating an environment in which the individual is perpetually logged and judged without knowing which variables are being used, and ensuring that people behave according to prediction and profit rather than personal will. The entire system mounts a profound attack on Human Agency, not only because our actions become predictable, but because the system is actively designed to manipulate our attention and intent. The algorithm achieves this by engineering attention through a hyperpalatable information diet, feeding us custom-engineered empty content—low-value digital output like cat memes and repetitive, low-context filler that provides negligible social or intellectual return.

This process hijacks our primal need for information, turning us into passive consumers whose highest engagement is often outrage or excitement, perfectly captured by the analogy of highly engineered junk food that satisfies craving without providing nutrition. This relentless pursuit of short-term engagement, and the mental fatigue that accompanies it, is why Johann Hari argues our ability to focus is being systematically stolen by the design of these platforms. The digital landscape, therefore, is not a neutral space; it is a manufactured fog that actively obscures the truth, compelling our fast, intuitive System 1 thinking—a concept previously explored in the work of Daniel Kahneman—to drive our most important civic decisions, while preventing the deeper, deliberate analysis of System 2 from engaging with the complexity that democracy demands.

The consequences of this surrender of agency are catastrophic for a democracy. The algorithms are not objective; as mathematician and data scientist Cathy O’Neil demonstrates in Weapons of Math Destruction, they are “opinions embedded in mathematics” that reinforce and amplify pre-existing societal inequalities. They are opaque, scalable, and difficult to contest, creating the self-perpetuating, destructive feedback loops—O’Neil’s eponymous WMDs—that systematically harm the vulnerable in areas like credit, housing, and hiring. This structural failure is reflected in the sociological phenomena of our time: the Posting Zero trend, in which people retreat from social spaces because they are tired of performing for an audience of bots and marketers, and the chilling rise of the Dead Internet Theory, in which algorithmic activity overtakes genuine human interaction. These phenomena show that a system designed for human connection has instead manufactured the illusion of connection built on shallow, public performance: someone may have hundreds of online friends yet struggle to find a single person to turn to when feeling vulnerable.
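To make O’Neil’s feedback-loop mechanism concrete, consider the following toy simulation, a minimal sketch with entirely hypothetical numbers rather than figures drawn from her case studies. A scoring model gates access to a resource such as credit; the denial of access then feeds back into the next round of scoring as apparent evidence, so an initial marginal judgment hardens into permanent exclusion:

```python
# Toy simulation of a WMD-style feedback loop (hypothetical numbers).
# A score gates access to credit; lack of access then lowers the next
# score, so the model partly grades its own prior decision.

def update_score(score: float, access: float) -> float:
    """Blend the old score with the observed outcome of the gatekeeping."""
    return 0.7 * score + 0.3 * access

score = 0.45  # starts just below the approval threshold of 0.5
for round_num in range(5):
    access = 1.0 if score >= 0.5 else 0.0  # binary approve/deny decision
    score = update_score(score, access)
    print(f"round {round_num}: access={access:.0f}, score={score:.3f}")

# The score decays toward zero: no new information about the person
# ever enters the loop, yet the exclusion becomes self-confirming.
```

The loop is destructive precisely because it is closed: the model never observes the person again, only the consequences of its own earlier verdict.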

It is at this point we revisit the work of Brené Brown, whose extensive research into shame and vulnerability provides the necessary psychological diagnosis: authentic connection is only possible when we allow ourselves to be seen, really seen, despite the risk of rejection. The Digitally All-Seeing system encourages the opposite: it rewards performance and perfectionism, cultivating the fear of disconnection—the core definition of shame—which drives people to hide their true selves and leaves them feeling unworthy of belonging. By rewarding this performance, algorithms destroy the vulnerability necessary for genuine trust. The economic logic of the Big AI playbook is equally destructive: it justifies consuming immense resources merely to maintain this architecture of attention. Monumentally expensive resources are deployed on the trivial—the equivalent of turning on all the stadium lights just to find a pair of keys—while the technology consumes as much energy as a small city. This contradicts sound fiscal responsibility, which requires that monumental resources be reserved for tasks that yield indispensable social and economic value.

The ethical imperative to act is clear: the most significant threat posed by the Digitally All-Seeing system is the erosion of Human Agency and of Causal Responsibility—the ability to be the author of one’s own life. The emergence of a “Thinking Class” who retain control over their attention and data, set against the unwittingly governed, is a structural injustice. The ultimate solution, and the core theme of this chapter, is Digital Sovereignty: the citizen’s supreme authority over their own data, attention, and online life. It is maintained not by rejecting technology, but by understanding the new mechanisms of power and practising digital self-governance, holding these systems accountable to the people. This act of reclaiming the mind aligns with the philosophy of Viktor Frankl, whose observation that the last of the human freedoms is the power to choose one’s response provides the philosophical mandate for self-authorship in a constrained world. To be sovereign in the digital age is to consciously assert this freedom.

The impact of AI also forces a critical philosophical reflection on the limits of the binary thinking that underpins our technological and societal systems. The fundamental digital logic of computers is binary—True or False, Zero or One. This binary constraint often mirrors the reductionist, either/or thinking that pervades our politics and problem-solving, producing the false dichotomies that the entire Forward Futures section seeks to dismantle. Yet the work of logicians like Jan Łukasiewicz, who formalised ternary logic (True, False, or Unknown/Possible), offers a more nuanced model of reality. Reality is often indeterminate, characterised by uncertainty and possibility that simple binary choices fail to capture. The ternary system, which accommodates a middle ground, serves as a powerful metaphor for the sophistication required of human decision-making and ethical systems. AI’s true enabling power, therefore, lies not just in its speed, but in its capacity to process the vast, complex, non-binary data sets that human systems routinely oversimplify. By allowing AI to handle this complex, non-binary representation of reality, humans can dedicate their energy to the higher-order reasoning, ethics, and values that define true competence and wisdom. The integration of AI thus compels us to adopt a higher form of rationality—one that embraces uncertainty, possibility, and complexity—so that our tools do not simply automate the failures inherent in binary thinking.
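To ground the contrast, here is a minimal sketch in Python of Łukasiewicz’s three-valued connectives, encoding True as 1, Unknown/Possible as 0.5, and False as 0. The function names and the numeric encoding are illustrative conventions for this chapter, not a standard library:

```python
# Lukasiewicz three-valued logic (L3): True (1.0), Unknown (0.5), False (0.0).

TRUE, UNKNOWN, FALSE = 1.0, 0.5, 0.0

def l3_not(a: float) -> float:
    """Negation: NOT Unknown remains Unknown."""
    return 1.0 - a

def l3_and(a: float, b: float) -> float:
    """Conjunction: the less-true operand dominates."""
    return min(a, b)

def l3_or(a: float, b: float) -> float:
    """Disjunction: the more-true operand dominates."""
    return max(a, b)

def l3_implies(a: float, b: float) -> float:
    """Lukasiewicz implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

# Binary logic must force a verdict; ternary logic can defer one.
print(l3_and(TRUE, UNKNOWN))         # 0.5 -- the conjunction stays open
print(l3_implies(UNKNOWN, UNKNOWN))  # 1.0 -- L3 accepts "unknown implies unknown"
```

The point of the metaphor survives the code: where a binary system is compelled to return True or False, the ternary system has a legitimate way of saying “not yet determined.”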

The path to achieving Digital Sovereignty is long and multifaceted, moving from diagnosis to control. The first step is to challenge the logic of the machine itself. As ternary logic suggests, human wisdom and ethical decision-making thrive in nuance and paradox, while the digital realm is often constrained by the binary. Digital Sovereignty is the act of refusing to let our capacity for complex, messy reality be reduced to the simple, binary choices that suit the algorithm. This demands Sovereignty of Knowledge: citizens must insist on radical transparency regarding resource use (AI energy scorecards) and algorithmic design, refusing to be governed by opaque, black-box systems. Such transparency is essential for protecting the integrity of the vote and the ability of citizens to make informed, rational choices. The solution is partially technological: supporting the development of Small LLMs and local-first AI models that run on the citizen’s own device, thereby breaking the centralised data monopolies and returning power over data privacy to the individual. Finally, the emerging power is managed by reasserting Human Agency in daily life through deliberate acts of cognitive resistance.
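As one concrete illustration of the local-first idea, the sketch below uses the open-source llama-cpp-python bindings to run a small quantized model entirely on-device. The model filename is a placeholder, not a real published model, and any similarly small GGUF model would serve; the essential property is architectural, namely that the prompt and the response never leave the citizen’s machine:

```python
# A minimal local-first inference sketch using llama-cpp-python.
# Assumes a small quantized GGUF model has already been downloaded;
# the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model.gguf",  # hypothetical local file
    n_ctx=2048,       # modest context window suits consumer hardware
    verbose=False,
)

# No API key, no remote endpoint: the data never leaves the device.
result = llm(
    "Summarise the case for on-device AI in one sentence.",
    max_tokens=64,
    temperature=0.2,
)
print(result["choices"][0]["text"].strip())
```

Whatever the specific toolchain, the design choice is the same: inference happens where the data lives, so the centralised collection that powers the Digitally All-Seeing simply never occurs.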

We resist the lure of the Digitally All-Seeing by creating “space” between the digital stimulus and our response, a principle Frankl identifies as the last human freedom. The strategic habits and principles that follow are not new concepts, but a necessary repetition and reinforcement of the critical lessons established in Part 2, Human Agency, now redeployed as weapons against digital manipulation. The entire system is designed to eliminate this space; resistance therefore requires disciplined, conscious action, which we find by revisiting the power of attitude and transcendence. We must intentionally program our habits to automate beneficial resistance. This begins by enforcing a New Digital Etiquette: choosing to manage digital platforms rather than being managed by them. The conscious choice to text someone to arrange a convenient time for a phone call, rather than interrupting them instantly, is not mere courtesy; it is an act of digital boundary-setting that reclaims the recipient’s agency and cognitive focus from the tyranny of the immediate. Likewise, checking email only twice a day or muting notifications on platforms like WhatsApp are essential micro-habits that liberate mental energy for deep work and slow thinking. This systematic reprogramming of digital life ensures that your technology serves your values, not the algorithm’s profit motive.

This resistance must extend into the cognitive realm. We must actively choose nuance over the binary; the system is optimized for the easy, high-engagement binary that fuels outrage and division, preventing the consensus needed for collective progress. By practising ternary thought—embracing possibility, complexity, and uncertainty—we deny the platform the emotional simplicity it requires to function. Resistance also means engaging in activities that are the antithesis of passive, empty consumption: consciously choosing intrinsically motivated play, the kind of joyful, serious engagement Stuart Brown argues is vital for cognitive health, over passively consumed, custom-engineered empty content.

As we explored in the chapter Transcendent: New Habits, New You, we need acts of physical defiance that actively rewire the brain for depth. These include the disciplined practice of reading physical books, which trains the brain for sustained, analytical thought and counteracts the fragmentation caused by infinite scrolling. The physical act of handwriting and journaling—an external cognitive workout—engages broader neural networks, producing deeper memory traces and promoting the self-awareness required to resist manipulation. By consistently applying these habits of resistance, we move beyond passive spectatorship and become the self-governing agents capable of managing the emergent digital power.

Next Chapter: Charity: The Agile Gift

Bibliography

Brown, Brené. Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead. Gotham Books. 2012

Frankl, Viktor. Man’s Search for Meaning. Beacon Press. 2006

Hari, Johann. Stolen Focus: Why You Can’t Pay Attention—and How to Think Deeply Again. Crown. 2022

Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux. 2011

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown. 2016

Susskind, Jamie. Future Politics: Living Together in a World Transformed by Tech. Oxford University Press. 2018

Wrangham, Richard. Catching Fire: How Cooking Made Us Human. Profile Books. 2009

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. 2019