To understand the moment we face with Artificial Intelligence (AI), it is useful to step back and examine the historical arc that shaped our technological civilization.
The modern technological worldview arose from the intellectual revolutions of the 15th to 17th centuries, when European thinkers began to redefine humanity’s relationship with nature.
Philosopher René Descartes introduced a powerful idea that would shape modern science: the separation between mind and matter - a philosophy often referred to as Cartesian dualism. In this view, nature became a mechanism that could be analyzed, broken apart and understood through rational inquiry.
Around the same period, Francis Bacon - one of the founders of the scientific method - argued that science should enable humanity to “conquer” and “subdue” nature for human benefit. Knowledge, he suggested, should be power - the power to control the natural world.
This intellectual shift helped launch the Scientific Revolution, followed by the Industrial Revolution. These movements unleashed extraordinary advances in science, engineering, and technology.
The promise was profound: technology would liberate humanity from scarcity, suffering, and ignorance. The Enlightenment vision held that rational progress would produce greater prosperity, happiness and wellbeing for all.
This promise seemed within reach.
Yet over the centuries, another force began shaping the trajectory of technological development: the rise of industrial capitalism and, later, the global economic ideology described as Neoliberalism.
By the late twentieth century, technological innovation increasingly served the imperatives of markets, efficiency and profit maximization. Instead of asking primarily how technology might advance human flourishing, socio-political systems driven by corporate interests asked how it might accelerate growth, increase consumption and generate returns on capital.
The result is a world of remarkable technological capability - but also one characterized by ecological strain, an economy driven heavily by consumption, and social fragmentation leading to competition, conflict and war.
Today, AI represents the newest and perhaps most powerful expression of this long historical trajectory.
The predicament we find ourselves in is both technological and civilizational, carrying an existential threat to life on Earth.
Is it a watershed moment, or are we at a crossroads?
Has the AI horse bolted, and if so, can we rein it back in?
Will AI deepen the same patterns that have placed profit, efficiency and control at the center of our systems?
Can we consciously redirect this technology toward a more humane, ethical and sustainable future?
When I presented “AI Is Not Your Master: Reclaiming Human Power in the Age of Machines” on 11th June 2025 to the Canadian Association for the Club of Rome (CACOR)[i], I did so with curiosity about our history and where we are heading, and with a deep commitment - forged through decades of work in ethical business, sustainability, leadership and mindful living - to preserving what makes us truly human.
As a trained engineer fascinated by our history of technological and scientific evolution, I am not a technophobe. This presentation was not about fear-mongering but about provoking thought, asking questions and sharing my own thoughts and ideas.
Are We at a Crossroads, or Is This a Watershed Moment?
I believe we are at a watershed moment: we can allow Artificial Intelligence to become a subtle form of dominance, or we can choose to integrate it consciously - in a way that amplifies our humanity instead of eroding it.
Curiously, philosopher, neuroscientist and psychiatrist Iain McGilchrist calls it “Artificial Information” - arguing that AI mimics the functions of the brain’s left hemisphere, manipulating symbols and tokens without possessing true understanding, consciousness or connection to the lived world[ii].
The CACOR YouTube video presentation is here, and these are the PowerPoint slides.
What is AI?
Today, what exists is largely what we call Artificial Narrow Intelligence (ANI): systems built for narrow tasks - facial recognition, language translation, pattern recognition, data sorting, automation.
The talk of stronger AIs - Artificial General Intelligence (AGI) or perhaps one day Artificial Superintelligence (ASI) - hovers in the realm of speculation.
That distinction matters: the AI in play today is ANI. For all its convenience and power, it remains a tool - not yet, and not necessarily, a being.
This gives us a window of responsibility, a choice: to shape the future of AI not through passivity but through active stewardship.
The Promise - Where AI May Serve Humanity
AI, properly harnessed, carries genuine promise:
- Help solve complex, global-scale problems - from poverty, war and climate change to supply‑chain logistics, from disease diagnosis to resource distribution.
- Automate repetitive or dangerous work, freeing human beings to focus on what machines cannot - creativity, compassion, relationship, meaning.
- Help us make informed decisions more quickly, reduce human error, and extend our capacities to see patterns beyond our natural sensory and cognitive limits.
In that sense, AI could become - as some suggest - an amplifier of human potential: a co‑laborer in addressing the wicked problems of our time.
The Danger - What Happens If We Lose Ourselves to the Machine
This potential comes with deep risks - especially when we surrender our agency, our values, our sense of purpose to machines or the systems built around them.
In my talk I outlined several interrelated dangers.
- Loss of control. If we build AI systems that can operate autonomously - and especially if we allow their goals to diverge from human values - we risk ceding too much. A machine does not possess empathy, wisdom, moral imagination. Automated decision‑making could become blind to context, justice, human dignity.
- Weaponization & concentration of power. AI under the control of a few - whether corporate giants or powerful states - can become a tool of surveillance, control, inequality and coercion, and even an existential danger when used in war.
- Erosion of what makes us human. Over-relying on AI risks turning us into mere consumers, data points, cogs in algorithmic systems. We may trade empathy for efficiency, compassion for convenience, relationships for transactions.
- The existential gamble. The more capable AI becomes, the less certain we can be that its trajectory aligns with human flourishing. The hypothetical ASI raises moral and existential questions - are we designing for a partner or inadvertently creating a master or a monster?
A Technical Issue, or a Moral, Spiritual and Existential One?
In my work as a mindfulness practitioner, leadership coach and long-term student of human history and purpose, I have come to see that the conversation about AI cannot remain confined to technology or economics where consumption, convenience or optimization are central.
Human life gains its deepest meaning through connection, service, inner growth and shared purpose. For centuries, spiritual, philosophical and ethical traditions have emphasized virtues such as compassion, empathy, integrity and responsibility. These are the first casualties of the modern techno‑industrial AI era.
If the industrial‑tech worldview - market fundamentalism, consumerism, commodification - has already eroded many dimensions of our humanity, AI risks accelerating that trend.
Therefore, we must decide:
Do we want AI to shape human life around algorithms, efficiency and output, or do we want to embed technology within a framework of values, dignity and human flourishing?
Toward a Conscious Integration - Principles for Using AI Wisely
Based on the reflection behind my presentation, here are guiding principles I believe we must hold as we navigate this pivotal moment:
- Human-centered purpose: Always ask - “For whom, for what?” AI should serve human flourishing. That calls for moderating corporate profit-seeking, abstract optimization and blind automation.
- Ethical foresight & accountability: Design and deploy AI systems with humility, transparency, and responsibility. Guard against concentration of power, opacity, socio-economic inequality and erosion of rights.
- Cognitive and moral self-awareness: Resist surrendering our judgment, creativity, empathy - the intangible capacities that define our humanity. Recognize that machines may simulate certain functions but cannot embody human conscience.
- Solidarity & equity: Ensure that the benefits of AI are shared broadly. Use AI to reduce inequality, amplify collective welfare and protect vulnerable communities. The benefits should not be concentrated in elite enclaves alone. This may require embedding values like equity, inclusion and solidarity into AI ethics[iii].
- Integration with deeper values & purpose: Anchor technological use in a larger vision - of human dignity, spiritual and ecological well-being and the common good. AI should be a servant to our highest aspirations.
We Must Act Now to Reclaim Our Power
The question is: who controls that power, and to what ends?
If we remain passive - accepting narratives that depict AI as inevitable master - we risk losing our humanity - and our very capacity for meaning, moral agency, empathy and connection.
If we act consciously - with courage, wisdom and compassion - we can shape a future where AI becomes a tool for human and planetary awakening: a future in which technology heals the soul rather than diminishes it.
It is a call to responsibility - to reclaim our humanity, to remember our deepest purpose, to steer through uncertainty with steady values, compassion and collective care.
A Shared Responsibility for the Future
If we choose to be the master of AI and not its slave, then we must ensure it is a tool shaped by human choices.
The deeper question is: what kind of civilization do we wish to build?
If we continue to design technology primarily around profit, competition and efficiency alone, we risk amplifying the very forces that are already destabilizing our world.
If we anchor technological progress in ethics, wisdom and human wellbeing, AI could become one of the most powerful tools humanity has ever created for collective flourishing.
That responsibility belongs to all of us. Here are my humble suggestions:
To Young People and Students
Do not surrender your creativity, curiosity or moral imagination to machines. Use AI as a tool for learning and exploration.
Cultivate the deeper capacities that technology cannot replace: human interaction (eyeball to eyeball), critical thinking, empathy, wisdom and courage through mindful reflection and wise action.
To Parents and Educators
Help the next generation develop inner strength as well as technical skill. Complement teaching STEM with ethics, compassion, emotional intelligence and a sense of responsibility for the wider human community and the natural world.
Promote the study of the humanities and engagement in creative pursuits - music, poetry, theater and dance - as the arts cultivate character and personality, using metaphor and storytelling to reveal the subtle nuances of life.
To Academics and Universities
Education should not be reduced to a pipeline producing workers for technological systems or driving GDP growth. Its deeper purpose is to cultivate thoughtful, responsible individuals who can ask the larger questions: What is knowledge for? What kind of society are we building?
By fostering critical thinking and creativity, education equips students with the resilience and imagination needed to navigate the existential challenges of the future.
To Organizational Leaders
The challenge is to act responsibly - to ensure that AI is not adopted solely for efficiency and profit. Today’s leaders must balance innovation with wisdom, efficiency with psychological safety and human dignity, and profit with purpose.
To Policy Makers and Economists
Economic systems must evolve beyond narrow measures of success defined only by growth and financial return. Public policy should ensure that technological progress contributes to human wellbeing, equity and ecological sustainability.
To Political Leaders
Governance in the age of AI requires courage and foresight. Regulation must ensure that powerful technologies serve the common good rather than concentrating power in the hands of a few.
And to the Technology Elite
Those who design and control the most powerful AI systems wield unprecedented influence over the future of humanity. “With great power comes great responsibility” (sage advice from Spider-Man’s Uncle Ben).
The choices you make today will shape not only markets and technology, but the wellbeing, dignity and flourishing of generations to come. Let ethical reflection, humility and accountability guide your innovations - tempering the relentless pursuit of profit so that human welfare, and the future of our shared world, remains at the center of every decision.
Rediscovering the Middle Path
Many wisdom traditions remind us that human flourishing arises from the middle path of mindful, wise and ethical action by understanding the nature of life.
The future of AI - and perhaps of civilization itself - depends on our ability to find that middle ground: a path where technological progress is guided by values and a deeper understanding of our interdependence with one another and with nature.
Our actions generate consequences.
In the language of Eastern philosophy, this principle is often expressed as karma - the law of cause and effect. The technologies we create, the systems we design and the values we embed within them will shape the world our children inherit.
If we act with wisdom, compassion and responsibility, AI can become a servant to human awakening rather than a master of human destiny.
The choice remains ours.
References
- Merchant, C. (1980). The Death of Nature: Women, Ecology and the Scientific Revolution. Harper & Row.
- Leiss, W. (1972). The Domination of Nature. McGill-Queen’s University Press.
- Latour, B. (1993). We Have Never Been Modern. Harvard University Press.
- Mohamed, S., Png, M., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. arXiv:2007.04068.
- Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
- Maslow, A.H. (1966). Critique of Self-Actualization Theory. Journal of Humanistic Psychology.
- Carney, M. (2020). Value(s): Building a Better World for All. PublicAffairs.
- Smith, A. (1759). The Theory of Moral Sentiments. London: A. Millar.
- Smith, A. (1776). The Wealth of Nations. London: W. Strahan.
- Gunaratne, L.A. (2025). AI Is Not Your Master: Reclaiming Human Power in the Age of Machines. CACOR Live, 11 June 2025.
[ii] https://firstthings.com/resist-the-machine-apocalypse/
[iii] https://arxiv.org/pdf/1910.12583