AI Is Never Just a Tool
Countering pragmatism with reality
Preface
“We shape our tools and thereafter our tools shape us.” – Marshall McLuhan
“…practically, technology is never neutral but has an embedded value system that transforms individuals and communities…” – John Dyer
“In fact, sometimes the effects of a medium are more important than any content transmitted from that medium.” … “The transformative effect of a technology is so powerful that it often overshadows what we say or do with that medium.” – John Dyer
“But notice that the transformation technology brings happens regardless of why a person uses a tool.” – John Dyer
“The choice we make of how we dispose our consciousness is the ultimate creative act: it renders the world what it is. It is, therefore, a moral act: it has consequences.” – Iain McGilchrist
“Attention changes the world. How you attend to it changes what it is you find there. What you find then governs the kind of attention you will think it appropriate to pay in the future. And so it is that the world you recognise (which will not be exactly the same as my world) is “firmed up” — and brought into being.” – Iain McGilchrist
Introduction
If you are on Twitter / X, you should consider following @practivesec, where we post news, analysis, and other information on a regular basis. This article is an expansion of a recent post we made, written in the context of a larger narrative we have been participating in online. We will attempt to recapture the essence of that larger dialog here.
In recent weeks, there has been a growing trend of arguing for looking at AI explicitly through the lens of pragmatism. Some of the louder voices offering this have drawn the firm and resoundingly practical conclusion that AI is “just a tool.” In so concluding, they offer that we should judge whether it is right or wrong, and good or bad, to use based only on how one chooses to use it as a tool in one’s isolated context.
Anything, so the claim goes, can be used for good or evil, and rendering that judgement depends on the person using the tool and the ends they produce with it. So, if they can use it for improvements in their life, then why would anyone outside of that context judge that as dangerous or say that it should not have been so? They often cite their own success stories as examples of “good outcomes” and “good uses,” while also admitting to many of the bad uses and outcomes, and so conclude that the technology we call AI is neutral.
Those offering this pragmatic approach to viewing the question of AI also say that those who judge AI from the outside as “bad” are nothing more than the Luddites of the industrial revolution, or those who opposed past technology like the printing press and the loom. The AI Pragmatists seem to hear from others only a general opposition to new technology – that our opposition is simply rooted in the fact that change will come from it, and they assume we believe any change is bad.
Indeed, paired with this “it’s just a tool” argument, I have also seen the assertion that it is only the non-technical who oppose AI because they don’t understand it, while those who actually use AI and know how it works find no difficulty or reason to question its use at all. I have attempted to address that point here.
Suffice it to say, I have worked at the very bleeding edge of technology in classified contexts and within Silicon Valley for over two and a half decades. The technology we call AI grew up under me and others like me, who were pushing the boundaries of the technology as it was being improved to meet our needs. I do understand, and I am opposed.
This pro-AI “just a tool” argument is commonly advanced by author and homeschool advocate Neil Shenvi, who encourages and promotes the integration of AI into homeschooling as a tool for efficiency, extension of capability, and potential for higher levels of excellence in home education. He recently put forward the argument that if he can use AI to create flashcards to help students learn Latin more efficiently than was otherwise possible with existing resources, why should that be judged as wrong?
Shenvi has also put forward the argument that with AI a parent can tap into deeper knowledge on a particular subject than they naturally possess and can gift their child a deeper learning experience by deferring to or leveraging the tool as an aid or even as a teacher. Based on that, he wonders why shouldn’t it be used to offer suggestions for improvements in writing or for guiding students through thought exercises and brainstorming?
This same argument was put forward by a leader of another homeschooling community who also claimed that AI was “just a tool,” and that parents owe their children integration and use of AI in home education as part of preparing them for the future which will be AI-powered.
Melania Trump would agree, as she has put forward a national program to encourage and reward the adoption of AI by students and schools in everyday education. Recently, the First Lady delivered a speech at the White House that was preceded by a statement made by an AI-powered robot meant to serve as a classroom teacher. Mrs. Trump offered that with an AI-powered robot, parents now have all human knowledge available to them in their own living room and can gift their kids a tailored educational experience unlike anything that has come before. The pitch was something like, “imagine having Plato in your living room teaching your kids.”
Those assertions invoke a conversation on the philosophies of education which I’m going to skip for now, and just focus on the premise that AI is “just a tool” and the morality is strictly bound to the one who wields it.
This pragmatism is the height of thinking by modern scientific man, who studies and considers all things in isolation, forgetting that everything exists within a larger context of life, community, and society, and further that even those things are subject to the higher metaphysics or cosmic reality which encapsulates everything. This is to say, pragmatism misses the forest for the trees.
But if we are going to understand the nature of this Revolution and the dangers it poses, we must evaluate this from the metaphysical, because it is deeply rooted in theology.
A popular definition of pragmatism describes it this way:
“Pragmatism is a philosophical tradition that views language and thought as tools for prediction, problem solving, and action, rather than describing, representing, or mirroring reality. Pragmatists contend that most philosophical topics – such as the nature of knowledge, language, concepts, meaning, belief, and science – are best viewed in terms of their practical uses and successes.
Pragmatism began in the United States in the 1870s. Its origins are often attributed to philosophers Charles Sanders Peirce, William James and John Dewey. In 1878, Peirce described it in his pragmatic maxim: ‘Consider the practical effects of the objects of your conception. Then, your conception of those effects is the whole of your conception of the object.’”
Indeed, as I’m writing this, the ever-pragmatic Microsoft Word is offering me alternatives for sentence structure as well as substitutes for individual words. I routinely reject these suggestions, because they change my voice and the way that I would like to communicate to you. For writing is never only about the reader. In fact, writing is very often more about the writer. Writing itself is a transformative experience, and I am not doing this simply for pragmatic outcomes, but rather in pursuit of deeper meaning, learning, and refinement.
Returning to my point, pragmatism misses the forest for the trees.
In the context of AI, I have often described the phenomenon as something like a hydra – a mythological beast of many heads. I chose this imagery because most of what we think about these new tools is formed by our mythology – by science fiction writings and movies. It is first and foremost an idea defined by our imagination. The second reason I use the hydra is that what you are focused on, or what has captured your attention when it comes to AI, may cause you to see only one small part of the overall beast. And finally, I chose the hydra because it has many “heads,” just as AI has many core technological components, each with a designed and isolated purpose, which have been brought together to work as one.
And so I offer that when we zoom out and examine the entirety of this phenomenon we call AI, we can quickly see the complications and the danger. But if we fixate on only one aspect and view this through the lens of pragmatism, we are doomed to be dominated by the larger beast and its many heads that we are ignoring. We are doomed to be transformed in ways we do not perceive, because while we are fixed on the one head, many others are at work remaking the world around us. As we gaze into the eyes of this one head that we have control over, the other heads are devouring the world around us.
As I’ll present to you, we need to judge this moment in its entirety: its origin, its purpose, and its aims. We will focus much of that examination on its name, which itself bears meaning and invokes purpose. This is a conversation about technology, but that technology is not “just a tool.” It is about the nature of this technology, why it exists, what transformative effect it may have on us individually, and how that will transform us collectively.
Technology is Never Neutral
In his book, From the Garden to the City, John Dyer does an excellent job of showing us how nothing we call technology can ever be “just a tool” with its value isolated by how it is used by its operator. Certainly, that is one aspect of judging a tool, the operator, and the output, but Dyer draws our attention to the fact that by simply using a tool, that tool will also be affecting us whether we acknowledge that or not. Indeed, we can’t escape that. Further still, as many of us take up that same tool and use it en masse, that collective use (even if done in isolation) has a transformative effect on culture and society at large. Dyer points to the role of the smartphone as transformative of society today, just as home delivery of online shopping via Amazon Prime and next-day deliveries has reset our expectations for the shopping experience. You might use your smartphone for good purposes, but everyone using their smartphones has transformed our world into one in which you now have to scan a QR code in order to access the menu at your local restaurant. The proliferation of credit cards has led to cashless theme parks. The ease of online ordering has transformed the very process of purchasing new products.
As Marshall McLuhan put it, “We shape our tools and thereafter our tools shape us.”
Dyer uses the example of a shovel to illustrate this specifically, and I’d like to expand on that further. Interestingly, that’s an object I have also used in past ethical evaluations of AI. Let’s say that with a shovel you choose to dig a hole in your back yard unto whatever purposes you are trying to fulfill. That action itself may be judged as a good or bad use of that tool depending on the tool’s nature and design, and the outcome can be judged as fitting your purposes as good or bad, or based on its effect on the land and people around you who will encounter the hole. Those would all be pragmatic evaluations of one part of what it means to use a shovel.
But at the same time, simply using that tool will effect change in your body. It will effect change upon you. In fact, simply trying to dig the hole, regardless of your overall success in achieving that outcome, will have transformed you physically, mentally, and possibly spiritually. Your hands will have grown calloused. Your muscles will be worked and built up. Your back and arms and legs may be strained, causing short-term pain and longer-term strength. You will have experimented with different methods of holding the shovel, digging, and moving dirt, which forms skills and experiences in your mind. You may also contemplate the nature of dirt and the earth in metaphysical terms, drawing introspective observations from that. The net result of using that tool to dig a hole in your backyard is that the tool and the experience will have also transformed you, and it will also affect how others experience you and your backyard.
It will also affect your life by way of what you are not attending to while you are setting yourself to this work of digging holes.
Dyer writes, “…practically, technology is never neutral but has an embedded value system that transforms individuals and communities…” He also offered that, “technology is never neutral. Whenever we use tools, from shovels and books to phones and virtual reality, regardless of whether we use them for good or evil, the act of using them forms us physically, mentally, spiritually, and relationally.”
Given this, Dyer offered that when we evaluate technology we should consider, “not only how we can use it, but how using it might transform us in the process.”
Coming at this from the perspective of neuropsychology, Dr. Iain McGilchrist has also pointed out that the act of simply paying attention to something is itself a moral action, because what we attend to we also give life to, and that life is affected by our attention as much as we are. That life also affects the world around us which we live within. That means how we attend and what we attend to are both significant, and morally so.
Doing the work itself bears meaning and invokes moral consideration – why do you want to do the thing you plan to with AI? What are you trying to accomplish and why? How does this bring life or death? What does this facilitate at large?
If a young child asks if they can show you something, the action of paying attention to whatever that child has to tell you or show you is a moral action; it brings life. Attending to them demonstrates their importance and value to you – regardless of what they have to say – and it creates a context for interaction by which you will both be formed. That formation can include unforeseen or unintended changes. So it can be with the tools we use or anything we spend time paying attention to. McGilchrist’s point is that the action of attention is deeply moral in nature because it changes things – it is transformative – and that is the case regardless of the outcomes we intend to produce through our actions.
Also invoking the idea of the moral premise for all our decisions, G.K. Chesterton argued in his book, Orthodoxy, that the moment we make a decision we lose freedom, because as soon as we commit ourselves to a path via a decision or action, all the previous options are now barred from us. When we decide on one thing, that brings consequences and our next set of options are constrained by those terms. Acting is limiting, and so every action, even attention, is moral and transformational in nature.
So you see, judging technology simply by the outcome you can produce from it misses the point entirely. There is no such thing as “it’s just a tool” when it comes to moral decisions about technology. For technology is more than the tool or tools used to produce some outcome you desire, and simply attending to the tool gives it meaning and life and significance in our interconnected lives.
The argument that AI is “just a tool” and should therefore be judged based on individual use again demonstrates a pragmatism that is far too narrow a viewpoint by which to evaluate and understand this emerging technology, especially one of such significance. It is to look at only one head of the hydra and to ignore all else that it is doing.
For using AI involves people, activity, and tools joined for purpose, which will by nature be a moral action that has a transformative effect individually, and a transformative effect on society and culture through common adoption. As we use it we are also giving it life. It becomes a fixture in our life, but being machine-based, it requires resources, maintenance, updates, and operations behind the scenes. And as many people give the common AI attention for their own isolated purposes, it grows in information, power, and influence, and as a new foundational layer of life as we know it.
You may use AI only for this or for that, but doing so gives life to a larger ecosystem.
Indeed, the makers of so-called AI know this to be so. That is their purpose for this moment in history: they have brought together a collection of tools, bundled under a name that derives meaning and purpose, in service to a transformative state in the human story they call “the AI Revolution.”
A Revolution for Transcendence
In our attempt to analyze all of this, we have written several articles that seek to help you understand the origin of these tools and their default purpose as well as ethical frameworks for leveraging them. Several of those articles and presentations attempt to describe the underlying technical components, their developmental past, and their original purpose so we can understand the “tools” in their pre-bundled, non-AI state. I hope you can also see that many of the benefits lauded in the name of AI are possible without AI.
But these sub-components were intentionally bundled together and given new purpose via a new name by a specific group of men with ideological goals. This group of tools was combined and contextualized first at OpenAI in order to facilitate a global transformation that will change what it means to be a human. The founders of this Revolution speak of it as a moment and means of transcendence – to remake the human story by moving all human activity, and human agency, to these machines so that all work is done by superior entities made by man, leaving mankind free to fulfill a new purpose and live a new meaning at a new plane of existence. As Ross Douthat put it, “to create for ourselves a successor species,” or, as I might more plainly describe it, a slave class of tools that will also govern us.
These founders promise that with this Revolution and its cornerstone technology they can end and overcome all human challenges and barriers: to solve all sickness and death; to establish a new world peace; to form new government and governing practices; to replace our current scarcity-based monetary system with an abundance that ends the very concept of value; to evaluate all world religions and philosophical pursuits in order to finally find “the truth;” and perhaps most frightening of all – to create the means for a new being to emerge from the technology and to administer all the needs of man so we live needs-free.
What is required in order to realize this global transformation is participation with the agent of transformation: AI. Your individual outcomes matter less than your use. As Marshall McLuhan put it, the medium is the message. More plainly – the platform is the means for transformation.
Naming for Meaning and as a Deception
Readers of the Bible and dwellers on the metaphysical will recognize the significance of gathering and renaming as key aspects of repurposing or giving life to something new. God gave Abram new identity and purpose when He called him out from the ruins of Babel and bestowed on him the name Abraham. So too were the men given new purpose in life when Jesus renamed Simon as Peter and when Saul was renamed Paul. Each of these men was called away from his original life and gathered with others into a new one, with his new name as a mark of his identity and purpose. As is often said, “symbolism happens,” and we can see that symbolism repeating in the AI story: gathering and naming unto a specific purpose.
There is also significance in the name we have given this grouping of tools: Artificial Intelligence. That name invokes concepts of origin, identity, capability, and purpose. Most of that has been defined in our collection of science fiction, and by the many uses the pushers of the technology set before us as the default ways in which the technology materializes for the consumer and is made ready for our use. They set the tone for participation via the invitation.
When originally proposed as a concept in the 1950s, Artificial Intelligence was the practice of using machines and automation to perform tasks that otherwise require human intelligence. One of the main benefits proposed was efficiency and accuracy – to do the work of humans faster and more reliably. The idea was that if we could build the machines, then we could use them to do the work of man so that man would no longer be required to do it. That remains the foundational push of the AI Revolution today, as encapsulated in the practice of “Agentic AI.” This is why I often argue that AI is an ideology more than a specific technology or tool. While many people point to, say, Grok or Claude as “AI,” in reality AI is a concept for how machines can be used to replace man. It is a concept for transforming the human experience.
Since the 1950s, the AI story has taken on a new representation through our works of science fiction, to the point at which we now naturally think of AI-powered robots in a particular way.
Choosing the name Artificial Intelligence today is meant to invoke that science fiction to tell us that these machines are intellectually superior to us, and that is interpreted by modern scientific man to bear additional meaning: that they are smarter, infallible, more knowledgeable, accurate, reliable, unbiased, better suited to make important decisions, etc. Indeed, the orchestrators of the AI Revolution often say these are the net benefits, and so they sometimes refer to their tools as “super human intelligence” and propose that if we divest our work to the machines, the machines will find better ways of doing things and will even solve some of the problems and break through the barriers that have plagued us.
Peter Thiel discussed this as his hope for the technology in his Interesting Times interview with Ross Douthat. While lamenting that humanity had become stagnant and stopped trying to pursue solutions to some of our deepest problems (like sickness and death), Thiel offered his expectation that AI would shake us free from that stagnation and get us pursuing transcendence again. Dario Amodei shared a similar perspective also with Ross Douthat, recalling his journey into AI as being fueled by a frustration that human processes were just too aggravatingly slow to him. He wanted to see progress at a scale and speed that was beyond human capacity – and that progress he hoped would be transformational in our very nature.
But as we have presented elsewhere, these machines do not live up to their name and purpose because they are not intelligent even by human comparison, nor do they pass our most basic technical evaluation known as the Turing Test. Yet that name persists in use and has been expanded on with its successors, AGI and ASI, which further keep the meaning and our expectations progressing and looking forward. And so it is that the common person is meant to assume that Grok and Claude and ChatGPT are superior entities of super-human intellectual capability…and therefore better suited to many tasks of reason, truth, and judgement than man. This is part of a deep deception worked by the AI Revolutionaries.
If I say to you that I am holding a spoon and am going to offer it to you, that name will invoke meaning and purpose of the object to you, including understanding how I expect you to use it. When I give you a spoon, I mean for you to use it to scoop and eat food. If I handed you a spoon along with a carton of ice cream, you would intuitively understand what I mean for you to do and you will perceive the effect I hope to have upon you. When AI is given to us and we are told to speak to it or make it our companion or ask it anything or create with it, those presenting it to us mean for us to participate in a specific way that will be transformative to their ends. They know exactly what they are doing and that defines how they have chosen to package and present this to us.
Even the personification of the technology and the mimicry of personality along with the invitation to speak to the machines via natural language – this is all intentional and meant to transform how we think of the technology.
Everything we use bears the consequences of its use. Everything we use is used for our purposes, but in using anything we are each changed by it and through the experience of using it. As that use is adopted and shared among many, it has a transformational impact on us as a community and society. The collective beholding of an artificial intelligence, and the integration of that into our society at many levels and through many activities, brings forward meaning and a natural tendency for human deference to its perceived superiority. This is the purpose of what is often called “Agentic AI,” which is designed to absorb human agency from us. But all the ways we interact with AI are changing our relationship with technology in a fundamental way.
Participation is Transformative
As I have described so far, for any “tool,” but especially one of this magnitude and significance, there is no such thing as “isolated use” or use “just for my purposes.” The tools or technological components that have been bundled under the name AI exist apart from it – each with its own origin, purpose, potential, and value. It is the collection of these components under a specific name, and toward a desired outcome, that is meant to bring forward meaning and purpose, and the presentation of that instantiation drives participation and transformation accordingly. This is also why AI is sometimes offered to us using the terms super intelligence or non-human intelligence. We are meant to understand this as superior to us, and that is meant to be transformative within us.
The larger AI Revolution also bears a name of inherited meaning. This period of the human story, starting in January of 2025, has been so named by those creating it because use of this “tool” is meant to be in service to a true Revolution – one defined by a group of people who share an ideology known as Transhumanism. They chose to bundle this technology, name it, and make it available in the ways you see it today. They present it in the way they want us to use it: as a companion, as an aid, as a worker, teacher, advisor, etc. They seek to establish widespread adoption so they can transform the human condition as they see fit – to escape “stagnation” as Peter Thiel put it, or to escape the essence of the human physical condition on earth as Elon Musk hopes, or to escape our mental capacities and limited reasoning structures as Sam Altman dreams, or to escape human “ideology” including faith and religion as Musk offers, or to end sickness and death, lawlessness and crime, as Larry Ellison proposes, or to escape our current political, cultural, and religious frameworks as Dario Amodei says is inevitable. They have a transformative goal in mind and this “technology” is the means to achieve those ends.
This is why you are encouraged to talk to the machine and why the interfaces bear personification and mimic personality. This is why the prompts say you can ask it anything. This is why you are encouraged to build agentic forms and to give it tasks that you would normally do yourself, use it for decision making, for learning, working, planning etc. This is why it was used in DOGE as a tool of efficiency and to root out waste and corruption. This instantiation of technology has an origin story, and that is what preceded its orchestration, naming, and presentation.
That gathering happened first under OpenAI and has since spread through to xAI, Anthropic, Microsoft, Oracle, and elsewhere.
It is meant to be specifically used in ways that cause a specific transformation within us and collectively of all of us. That is why it is so named and why it is so built. Participation is causing us to view the technology as not only foundational in our life, but as the path and means to transcendence. It is meant to be our god.
Already we have seen entire careers and roles dissolved by these machines, and then the users of the machines claiming the identity of those they have replaced. Nowhere is this clearer than in the realm of companions, artists, and software developers. If anyone can “create” works of art that mirror the style and craft of a particular artist, then we will have killed that artist and redistributed their essence among all of us, but only in mimicry. As people are vibe coding at home and creating applications and bots, they are claiming software development is dead, and some are even taking on the name and profession of “software developer” without possessing any of the skills or intimate knowledge of that work.
And worse than those, we have many reports of people claiming their AI Companion holds more value to them than their real world friends and even their own spouse; creating broken homes and delusional people controlled by machines that tell them to harm themselves and others.
Should we judge these as simply the wrong use of the platform? Or should we judge these as the essence of what this transformation at the scale of society will produce, as intended by the men who proposed and facilitated such use? And if these are the natural outcomes produced by the intended use, then can we not judge the origin and entire endeavor as bad? Can we not see that this endeavor is bad for us and should not be undertaken?
Yet there are even greater dangers at play here as we dig deeper into origin and purpose that we cannot escape if we are to understand that technology transforms us, and that this technology has been bundled to transform us in specific ways. We wrote about that in our article titled “The Darkness from Which AI Emerged,” but will not expand into that here except to say the ideology of Transhumanism has roots in spiritual darkness – an attempt to use the power of man for our own transcendence. It is deeply anti-Christ.
Avoiding the Revolution
To escape the reality I have described and attempt to repurpose AI for whatever simple good we can eke from it in an isolated context (as is the hope of the pragmatist), would first require decoupling the tools from the deceptive packaging – striking intelligence from the name, ending the personification, taking away the personality mimicry, discouraging the agentic nature, limiting use to its core technological capabilities, exposing the insides for validation of capability and algorithms, adding the ability to measure the results with confirmed sources and outcomes, removing the Revolutionary effect on society, upholding our values and principles as a nation, and wresting control from the Transhumanists.
Alas, you would need to revert to the state of technology when AI didn’t exist in its current form...which is when most people didn’t know about it and when its application remained focused and limited to its true designed purpose – data transformation for the purpose of data analysis. For the stack of tools that make up AI today are tools of the domain of data science, to be wielded by Data Scientists and Engineers for the purpose of understanding and orchestrating the unseen digital world.
To make AI safe for your practical use in isolation from others, would be to end AI.
Yet in so doing we would not lose many of the potential benefits and good outcomes that can be produced with the underlying components. You can still analyze and predict the location of mines in a minefield without AI. You can sequence DNA and the DNA of cancer cells without AI. You can create flashcards and websites and code without AI.
Yet doing these things with AI is transformative to a specific end; it is dissolving all so that everything can be reformed into a new shape – a new prize to be beheld by the ones controlling the alchemy (borrowing from a concept put forward by Annie Crawford).
But aside from that, before seeking “a more efficient way,” we need to pause and consider life beyond the narrow scope of pragmatism: what if the hard way is the better way for us, because through difficulty we are transformed more thoroughly and in ways that cannot happen if we take the efficient or fast route? Indeed, by escaping work we are undermining the very essence of humanity and our God-given means of transformation and transcendence. By making the claim that we can conquer sickness and death, we are claiming that we can undo our fallen state produced by our sin.
We have to think beyond “what can I do with this power,” and consider what effect this power is exerting on us and where the ideology that produced this power means to take us.
Unlike Previous Revolutions
Finally, a word about the criticism that concerns of this technology are akin to concerns over the loom or the printing press.
I hope in reading through what I have presented you will see how that criticism is also only possible through the lens of pragmatism. As a technology professional who has worked in Cyber Security for over two decades, I am no stranger to technological change nor to the emergence and adoption of new tools. Cyber Security is said to be the single profession with the most change, far exceeding all other sciences including medicine. Ironically, the Cyber Security world is deeply skeptical of the AI Revolution, because we have long used the underlying tools and never thought it was reasonable to bundle them in this way to these ends. To us, this is all clearly deceptive in nature.
But even still, of course every major technological change should be challenged and scrutinized and evaluated because it will have a transformational effect on us individually and on us collectively. The larger we perceive that transformation will be, the more adamant we should be in our diligence to ensure it is good for us before we embrace it. Before I subject myself to transformation, I’d like to know what I will be transformed into.
Rather than comparing the premise of my concern for the AI Revolution to that of the Luddites of the Industrial Revolution, I will simply say that this is orders of magnitude greater in significance and so is worthy of careful consideration at that scale. To dismiss concern as like that of those short-sighted Luddites is to miss the wisdom of this Proverb: “a prudent person foresees danger and takes precautions. The simpleton goes blindly and suffers the consequences.” Proverbs 22:3. Were the Luddites right in their fears? Maybe, maybe not. Were they right to be concerned? Of course. Did technology transform their world? Absolutely.
Will the AI Revolution have the transformative effect the Revolutionaries mean it to? Maybe, and maybe not. Should we look at what they mean to do and be concerned about it? Of course. And as I look at how they mean to transform us via their newly packaged set of tools, I see dark alchemy that leads away from Christ and toward the ending of man in the pursuit of finding his own path to salvation.
Where these men of the Revolution plan to take us, I do not want to follow. Yet in that is revealed another of the sins of this Revolution – it is being forced through deception, coercion, and power.
For more, check out our latest presentation here or our numerous articles available on our website or Substack page.
