AI: Ethics and Environmental Questions

Chapter 41.

“If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner.” — Omar Bradley

As technology rapidly advances, understanding humanity’s pivotal role in shaping its future is paramount. This includes exploring how human creativity can symbiotically collaborate with artificial intelligence, delving into new frontiers of innovation while rigorously ensuring technology genuinely serves human flourishing and ethical progress. We stand at a critical juncture where AI is not merely a tool for efficiency, but a force that will shape how humanity creates, decides, and defines value.

This vision of AI, however, must be tempered with a critical awareness of its growing environmental footprint. While AI can be a powerful tool for good, its voracious energy appetite presents a fundamental contradiction. The training of a single large AI model can require hundreds of thousands of gallons of water for cooling and consume the same amount of electricity as over 100 homes in a year. This soaring demand from data centres could jeopardise climate targets, threatening to lock us into a cycle of unsustainable power generation. This outcome is not inevitable, but it requires a fundamental shift in our approach—moving beyond simply consuming power to strategically designing infrastructure and implementing new policies that incentivise the reuse of waste heat and require data centres to be powered by truly clean energy. This is not about rejecting technology; it is about demanding that it be powered responsibly so that innovation does not come at the expense of our forests, our air, or our communities.

The environmental contradiction posed by Artificial Intelligence is clearest when comparing its industrial resource demand against the unparalleled efficiency of the human brain. While the 20-watt human brain excels at creativity, ethical reasoning, and learning from minimal data, AI justifies its massive resource demands as the ultimate engine of speed, scale, and pattern recognition. Training a single large model, which can consume on the order of 270,000 kWh of electricity and hundreds of thousands of litres of clean water in a short burst, is only environmentally justifiable for tasks where the quantity and velocity of data overwhelm human capacity.
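The scale of this contradiction can be made concrete with a back-of-the-envelope calculation using the chapter’s own illustrative figures (a 20-watt brain and roughly 270,000 kWh per training run); the numbers are indicative rather than precise measurements of any particular model.

```python
# Back-of-the-envelope comparison: one large AI training run versus
# the continuous energy budget of a 20-watt human brain.
BRAIN_POWER_W = 20            # approximate power draw of the human brain
TRAINING_KWH = 270_000        # illustrative energy cost of one training run
HOURS_PER_YEAR = 24 * 365     # 8,760 hours

# Energy a brain uses in a year, in kWh.
brain_kwh_per_year = BRAIN_POWER_W * HOURS_PER_YEAR / 1000  # 175.2 kWh

# How many "brain-years" of energy one training run consumes.
brain_years = TRAINING_KWH / brain_kwh_per_year
print(round(brain_years))  # → 1541
```

On these figures, a single training run consumes the energy equivalent of roughly one and a half millennia of continuous human-brain operation, which is precisely why the text argues such runs must be reserved for tasks of commensurate value.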

AI’s value is found in high-stakes, data-intensive tasks: monitoring global financial fraud, autonomously navigating transport systems, sifting through millions of medical images for early diagnostics, and accelerating complex scientific research. These applications are essential multipliers for human intellectual output. However, the energy and water consumed to train these models are often squandered by the human tendency to apply this powerful, expensive technology to the trivial. Historically, humans have a track record of misdirecting their own inventions: from using the internet’s global, instantaneous communication network primarily for cat memes, outrage, and digital “workslop” (repetitive, low-value output that serves little purpose beyond filling time or satisfying an overly bureaucratic system) to exploiting the sheer processing power of modern AI to generate repetitive, low-context content.

This pattern of trivialising technology means AI’s high resource cost is frequently expended on outputs that provide negligible social or intellectual return. The ultimate environmental and ethical demand on society is not just to power AI efficiently, but to ensure that the monumental resources invested in its development are reserved for the tasks, such as scientific discovery and resource optimisation, where its unique, massive capacity delivers indispensable value. This aligns with the principle of fiscal responsibility: expensive resources should be deployed only where they yield the maximum social and economic value.

The relationship between human and artificial intelligence should be one of partnership, not a struggle for replacement, fundamentally shifting our perception of authorship and creation. Think of AI as the ultimate crafting tool, like a master artist’s sophisticated paintbrush, a sculptor’s precision chisel, or a writer’s most intuitive word processor. The human remains the undisputed primary author, the visionary, the wellspring of intent, emotion, and conceptual creativity. AI, in this collaboration, becomes the powerful means through which human ideas can be realised with unprecedented speed, scale, and precision. This allows us to transcend previous technical limitations, freeing up human ingenuity and channelling it towards higher-order thinking, complex problem-solving, and the deep conceptualisation that defines true innovation. This symbiotic approach facilitates a powerful amplification of human capabilities, pushing the very boundaries of what’s creatively possible and allowing our imaginations to truly soar. This shift reimagines the very definition of “authorship.”

When a human uses a tool, whether a pen or a digital editor, the creation is unequivocally attributed to the human. In the context of AI, the human, by providing the intent, the prompts, the direction, and the refinement, remains the author. The AI acts as a sophisticated, responsive instrument for execution, like a highly skilled artisan working under the direction of a master. This perspective ensures that human agency and intellectual contribution remain paramount, preventing the erosion of human value in creative pursuits. It invites us to consider a future where complex creative projects, previously constrained by time or technical skill, become vastly more accessible to a broader spectrum of individuals, fostering an explosion of diverse human expression and artistic exploration. Critically, by ensuring human intent and vision are always at the helm, we can cultivate an environment where AI elevates human artistry rather than simply automating it. The discussion of AI as a collaborator often raises questions about where the line of authorship is drawn. However, this is not a new dilemma; it echoes debates from past technological revolutions.

This collaboration also necessitates a robust ethical framework, particularly concerning critical issues like data use, potential biases embedded in AI models, and the rapidly evolving landscape of intellectual property. Ensuring AI tools are developed and used ethically means proactively addressing fundamental questions of fairness, transparency, and accountability. It also involves meticulously safeguarding the originality and ownership of human-led creations, even when AI contributes to their crafting. In legal terms, the US Copyright Office in Copyright and Artificial Intelligence, Part 2: Copyrightability clarifies that copyright only protects works of human authorship, setting the stage for discussions like that in Copyright Protection for AI-Generated Works by BitLaw, which stresses that the key to copyrightability lies in the degree of human creative control, not the tool itself. Firms like Dentons and DLA Piper, in their work on AI and authorship, consistently advise that human contribution must involve the selection, arrangement, or modification of the AI-generated output to qualify for protection. This view is echoed by Foley & Lardner LLP in Clarifying the Copyrightability of AI-Assisted Works and Marks & Clerk, who ask the crucial question of Who owns the content generated by AI?

The ethical debate extends to academia, where the Committee on Publication Ethics states that AI cannot meet the criteria for authorship, a position that informs the Authors Guild’s AI Best Practices for Authors, which urges transparency and caution. Yet, The Use of Controlled Artificial Intelligence as a Co-Author in Academic Article Writing by Akın Sayğın, D., & Aydın Kabakçı, A. D., suggests a controlled co-authorship model is possible, contrasting with the challenges noted by Ryan, S., et al., in Can AI Be a Co-Author?, which explores the complex boundaries in educational settings. Enago, a specialised author support service provider, discusses the risks in AI as Unintentional Co-Author: Risks & Ethics, noting that the human remains ultimately responsible for the output, a consideration central to Lewis Silkin LLP’s advice on navigating the rights, risks, and rewards of Gen AI creative work.

By consciously designing, regulating, and deploying AI to serve explicit human intent and to align seamlessly with deeply held human values, we possess the power to steer its evolution towards outcomes that genuinely enhance, rather than diminish, our collective creative and intellectual output. The overarching goal is to avoid scenarios where AI merely automates existing processes or generates superficial, uninspired content; instead, it should act as a powerful catalyst, propelling human creative thought to new, previously unexplored territories of innovation and meaning. This framework must ensure that the human creator retains the ultimate rights and recognition for their work. It also means actively combating the perpetuation of societal biases that might be unknowingly encoded into AI systems, ensuring these tools are developed responsibly and inclusively.

The impact of AI also forces a critical philosophical reflection on the limitations of the binary thinking that underpins our technological and societal systems. The fundamental digital logic of computers is binary: True or False, Zero or One. This binary constraint often mirrors the reductionist, either/or thinking that pervades our politics and problem-solving, leading to the false dichotomies that the entire Forward Futures section seeks to dismantle. Yet the work of philosophers like Jan Łukasiewicz, in formalising ternary logic (True, False, or Unknown/Possible) in On three-valued logic (O logice trójwartościowej), offers a more nuanced philosophical model for reality.

Reality is often indeterminate, characterised by uncertainty and possibility, which simple binary choices fail to capture. The ternary system, which accommodates a middle ground, serves as a powerful metaphor for the level of sophistication required in human decision-making and ethical systems. AI’s true enabling power, therefore, lies not just in its speed, but in its capacity to process the vast, complex, and non-binary data sets that human systems often oversimplify. By allowing AI to handle this complex, non-binary representation of reality, humans can dedicate their energy to the higher-order reasoning, ethics, and values that define true competence and wisdom. The integration of AI thus compels us to adopt a higher form of rationality—one that embraces uncertainty, possibility, and complexity—to ensure that our tools are not simply automating the failures inherent in binary thinking.
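Łukasiewicz’s three-valued system can be made concrete with a small sketch. One standard encoding maps True, Unknown, and False to the numbers 1, ½, and 0, with conjunction as the minimum, disjunction as the maximum, negation as 1 − x, and Łukasiewicz’s implication as min(1, 1 − a + b):

```python
# A minimal sketch of Łukasiewicz three-valued logic, encoding the
# truth values True, Unknown/Possible, and False as 1.0, 0.5, and 0.0.
TRUE, UNKNOWN, FALSE = 1.0, 0.5, 0.0

def l_not(a):
    return 1 - a              # negation: 1 − a

def l_and(a, b):
    return min(a, b)          # conjunction: minimum of the two values

def l_or(a, b):
    return max(a, b)          # disjunction: maximum of the two values

def l_implies(a, b):
    return min(1.0, 1 - a + b)  # Łukasiewicz implication

# Where binary logic forces a choice, the middle value propagates:
print(l_and(TRUE, UNKNOWN))          # → 0.5 (possibly true)
print(l_not(UNKNOWN))                # → 0.5 (the unknown stays unknown)
print(l_implies(UNKNOWN, UNKNOWN))   # → 1.0 (vacuously holds)
```

The point of the sketch is the middle value: an indeterminate input yields an indeterminate output rather than being forced into True or False, which is exactly the nuance the chapter argues binary systems lack.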

Next Chapter: AI as an Enabler: Redefining Human Potential

Bibliography

Akın Sayğın, D., & Aydın Kabakçı, A. D. The Use of Controlled Artificial Intelligence as a Co-Author in Academic Article Writing. European Journal of Therapeutics, 29(4), 990–991. 2023

Authors Guild. AI Best Practices for Authors. Authors Guild. 2025

BitLaw. Copyright Protection for AI-Generated Works. BitLaw. 2025

Committee on Publication Ethics (COPE). Artificial intelligence and authorship. COPE. 2023

Copyright Office, Library of Congress. Copyright and Artificial Intelligence, Part 2: Copyrightability. Copyright Office. 2025

Dentons. AI and intellectual property rights. Dentons. 2025

DLA Piper. AI and authorship: Navigating copyright in the age of generative AI. DLA Piper. 2025

Enago. AI as Unintentional Co-Author: Risks & Ethics. Enago. 2025

Foley & Lardner LLP. Clarifying the Copyrightability of AI-Assisted Works. Foley & Lardner LLP. 2025

Lewis Silkin LLP. Gen AI and creative work: rights, risks, and rewards. Lewis Silkin LLP. 2025

Marks & Clerk. Who owns the content generated by AI?. Marks & Clerk. 2025

Ryan, S., et al. Can AI Be a Co-Author?: How Generative AI Challenges the Boundaries of Authorship in a General Education Writing Class. 2025

Łukasiewicz, Jan. On three-valued logic (O logice trójwartościowej). Ruch Filozoficzny. 1920