Blog
Purism and emancipation of technology and power
4/1/2026
The impact of new technologies
It is highly debatable, and certainly debated, whether the impact of large language models (LLMs) on the world so far has been net positive or negative. While the technology is extremely powerful, and full of potential in specific areas, its downsides are also tremendous. Training these models has massively infringed copyright, their energy usage is enormous, and hallucination and bias are inherent to the technology. Whether we like AI or not, we should take these downsides seriously.
But a given technology's environmental footprint or societal risks have rarely halted its adoption. Consider mobile phones, social media, or, a bit less recently: airplanes, cars and factory automation. As humanity, we are not good at mitigating the negative impacts of our innovations, however large their potential for good. If the economic promise of a technology is strong enough, our existing institutions are simply not built to weigh energy costs seriously. All of the above technologies are major features of our society today, and I expect LLMs to join them, for better or for worse.
This fact alone prompts us to take a stance on them.
Labour/Resources - Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Take a stance
Proponents and detractors (AI Boosters and AI Doomers) often make it seem like we have to choose: go all-in, or refuse to use the technology wholesale.
"There is no ethical consumption under capitalism."
- internet wisdom
Though seemingly impossible to attribute to a specific person, this saying is widely echoed in online discourse. I've always found it fascinating, but this particular take tied it back to situations closer to my own work:
"There is no ethical software development under capitalism."
- FasterThanLime, GitHub Actions Feels bad
Even with our best personal intentions, the (capitalist) systems we live in will directly or indirectly force us to make choices that adversely affect our own situation, that of others, or society and the environment at large. The quote may be cheeky, but it captures a lot of the problems with our current reality. However, in itself the sentence is descriptive, not prescriptive. It does not tell us what to do: whether we are acquitted of moral accountability (as it is sometimes used), whether we should reject the system completely, or something in between.
Resistance against technology
Though all of the above technologies are major features of our society, we do not have to look far to find resistance. For mobile phones, consider "dumb phones" or even going phoneless. Social media is now widely known to harm mental health, leading many to delete their accounts. Many environmentalists don't travel by plane, or don't own cars. Ultimately I am sympathetic to most of these choices, because they are people answering real problems with real action.
There are varying levels of resistance, on which Luke Munn published an excellent piece in The Conversation. The scale runs through varying degrees of moderation, and at the far end lies complete abstinence. The latter is the most interesting to me, because it is so pronounced. I call it technology purism.
Technology purism
Technology purism: the categorical refusal to use a technology for principled reasons
If we take a stance like this, does that make us better than someone who does engage with the technology, using it for good while being - as one could argue - caught up in some of the bad?
On the one hand, it would seem like technology purism keeps our conscience clear. Our hands aren't dirtied, and we can keep up a certain innocence. One could take pride in such a stance, as I can imagine, having been a vegetarian for over 15 years. However, abstinence leaves us outsiders to the technology. It does not necessarily make our actions good; it is not an active act of resistance. The non-participation of the parties who are supposedly the most critical thinkers likely increases the divide between supporters, users and perhaps even neutral parties (if those even exist).
It should also be said that being able to avoid LLMs entirely is a privileged position. Many are forced to use the technology, through harder (coercion by policy) or softer (manipulation, nudging, peer pressure) means. I strongly believe we should not judge others too harshly for being part of systems where many are simply making a living. Not everyone has the opportunity to refuse.
A PSA about general human decency. It's weird to have to spell this out, but it seems like we need to be reminded.
Constructive criticism is fine, but the fact that you don't like AI-generated images doesn't justify personal attacks on someone who does use them. Similarly, the fact that you do not engage with this medium for moral reasons doesn't mean that someone who uses it does so despite those moral concerns, or is even aware of them.
Don't assume everyone has equal knowledge about the implications of the technology. These implications aren't universally known or understood, far from it!
Personal attacks will never convince someone to change their behaviour for the right reasons, but they will likely convince them not to talk to you again (because it is bullying).
Technology or power
Let's take a step back though. Is LLM technology the real problem? This quote from Munn got me thinking.
The point here is not whether AI models are racist or historically inaccurate or “woke”, but that models are political and never disinterested.
LLMs are the latest (and perhaps the most powerful) in a line of technological innovations used to establish a stronger cultural and economic hold. The company training the model holds the power to weigh in heavily on what counts as the truth of the world. These are not purely technical decisions; they are decisions about value and knowledge. Let me name three concrete areas where we can see this today.
Power/Profit - Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
1. Labour
LLM adoption seems to conveniently correspond to patterns of job displacement. Are AI agents replacing human jobs? Usually, not really. LLMs are very good at some human tasks, but are very far from being as generally intelligent as humans. The more likely explanation is that, driven by cost reduction, executives were already looking for opportunities to cut labour before LLMs arrived to provide convenient cover. The disruptive power is immense, and the use of this technology under the promise of job displacement is happening right now and is very much not neutral.
2. Arts & culture
In illustration, design and other art forms, we see a similar pattern where economic logic drives a race to the bottom. Platforms optimising for engagement and cost-per-unit systematically crowd out work that is slower, stranger, or more demanding of its audience. These art forms were long able to resist commodification, but now fall to the same industrial production logic that came for crafts and tradesmen over the past two centuries.
There is an interesting thought that by "freeing the hands" of artists, this technology will liberate art and create things so distinctly human that an LLM could never come up with them, akin to how painters were "liberated" by the invention of the camera. Though interesting and even plausible, we should still acknowledge the damage being done right now.
3. Surveillance
Right up the alley of the companies bringing you spying smart glasses and personalised ads while opening backdoors for the United States government, LLMs offer new angles on surveillance. Because of their excellent ability to pose as conversationalists, therapists, even romantic partners, their ability to extract sensitive information is far greater. LLM technology also makes it much easier to analyse unstructured data (images, video, combined data sources), opening up a wealth of data - which was already an important instrument of power (think of the Panopticon), and becomes even more important as training data for new models.
The more things change, the more they stay the same. Munn:
Technology, in this sense, is a shapeshifter: the outward form constantly changes, yet the inner logic remains the same. It exploits labour and nature, extracts value, centralises wealth, and protects the power and status of the already-powerful.
User/Chimera - Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Technology emancipation
I am convinced, in general, that there is virtue and meaning to be found in fighting against oppression, or perhaps even Camusian absurdity. Perhaps a consequence of this position is that I do not care whether you are a purist or not as much as I care that you make an active effort to prevent the harm these systems do, and help shape and guide them, whether through mitigation, legislation or education. Abstinence from a technology does not help fight the power. It does not clear our conscience either.
LLM technology is not going back in the box, and I do believe it can be harnessed for good. But I also believe that it isn't inherently neutral, and it needs the voice and active participation of all. Munn: "The risk of AI is not potential doom in the future, à la the nuclear threat during the Cold War, but the quieter and more significant harm to real people in the present." We need to recognize the abuse of power, and resist the surveillance, commodification of culture and job displacements instead of the AI that is covering for them.
It's not technology purism we should strive for, but technology emancipation.