AI, that promised god of the techno-elite, will ruin everything, and maybe not in the way that you think.
Ah, the classic rebuke of the (mad?) scientist:
“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
Jurassic Park seems like cartoonish hyperbole, with its dinosaurs running amok due to the inevitability of human failures, but there is real wisdom in that little line of dialog, namely that aspirational projects are vanity if they are not directed toward a good purpose, and vanity causes tragedy.
In the real world, the tragedies of scientism are not as obvious as being gored by large proto-avians; they are the slow destruction of things that are useful and beautiful in service to big humanistic ideas. The uglifying of our world through brutalist architecture and post-modern art is a loss measured not in human life but in deprivations of the spirit. Modern people are not eaten by dinosaurs but by piles of psychotropic pills developed by scientists to cure the malaise of sin.
But these things are still different from what AI has done and will do. The great fear of the 20th century—a self-aware computer hyper-intelligence like James Cameron’s Skynet—is a boogeyman that may never appear. Even if it does, long before it shows up to take over, AI will have cost us much and ruined virtually everything we have enjoyed since the tech revolution. For some who are thoroughly Varg-pilled, maybe this is a good thing. For the rest of us, it is an unwanted collapse.
Let’s look at things realistically, as I am writing in mid-2024.
Let’s focus on generative AI, which is the big hot topic and the one most relevant to me. What has it improved?
As near as I can tell, generative AI has improved absolutely nothing.
The only tangible benefit I have seen for things like Midjourney is to cut down the time it takes for me to make a book cover. If AI never existed, I would still be photoshopping covers for my cheaper or more esoteric books, and they would sell the same as always, if not better. If AI tools have made covers look better, the end result is the same as if they had never existed. The cover does not improve the reading experience; it merely changes how marketers compete with one another. The same goes for YouTube thumbnails, etc.
Has AI art improved art overall? No. I’ve seen interesting AI art but nothing that has the depth or meaning of real art, and I haven’t seen real artists improving their work in any way since the rise of AI art. Maybe some artists are working more quickly, but that makes neither the individual works nor the arts as a whole better. It just makes more art, and we had a lot already.
Has it made art cheaper? Yes, but that is neither good nor bad in itself. It is good for people using art for marketing; it is bad for artists. Meanwhile, there is a constant and unending controversy over using AI art, to the point where artists get accused of using AI if their style happens to resemble a generator’s output (a generator which, ironically, may have been trained on their art).
Going deeper, we have to ask ourselves whether we really want artists to be replaced because that is the point of automation—to save labor. Is art the sort of labor we really want to be saved? Few people object to replacing a person installing a steering wheel with a robot since there is nothing particularly spiritual or humanizing about manufacturing work. Yes, the worker wants a job, but other jobs with similar pay will serve the same purpose for him. However, art is communicative. It exists between people, not on its own. Art requires a person to comprehend it and to assign it meaning, and it is the goal of the artist to reach across this gap.
Suppose you could make a robot capable of writing something in the style of Lord of the Rings. Would you really want that? Tolkien’s novels are as much about the story of the creator as they are about the creation itself. Creation was a kind of divine act for him. Would you really be satisfied reading a robotic synthesis of high fantasy? It’s a bit like the arrogance of Aulë, making automatons out of a desire to have rather than to receive. If you could have a robot write and sell the book (for you, presumably), what does that say about you? You can’t create.
Moreover, where is the shortage that generative AI will fix? Before ChatGPT and similar large language models (LLMs) grew to prominence, a million books were already being published on Amazon each year, all written by humans. Isn’t that enough? The same goes for art. Art websites were packed to the gills with visual art from all over the world. Now you have an infinity of both of those things in the form of computer-generated product, and most of it is bad or disposable, so it is that much more difficult to find anything you can trust to be good. Why do we need a robot to make uncanny-valley portraits of big-titty anime girls when you can download a terabyte of hentai whenever you want? There was never a “shortage” to be solved by increased efficiency.
The AI bros were so interested in seeing whether they could get a robot to make art that they never asked whether they should, not because of special dangers, but because nobody ever really wanted or needed it. The only exception I can think of is people who want to be authors but can’t write, or people who want to be artists but can’t paint. But these people should not be indulged, and it is not charitable to make them think they are good at something when they are not.
The aspiration of generative AI is grasping toward nothing.
Meanwhile, the existence of these tools has wrecked the already floundering internet in countless ways. Searches are no longer as useful as they used to be, as results are now flooded with AI articles written to exploit SEO (search engine optimization) keywords, so that a typical consumer can’t find anything written by a human. Does that matter? Yes, because information is not just a pile of factoids; it has a human context in which trust matters. People listen to what I have to say about writing or music because I am experienced with those things. Do you trust a robot to properly extract all my lessons and give them to you without error or prejudice? Why would you think its advice works when it hasn’t actually experienced anything?
What most people do now is type in their search, then jump down to Reddit to see what humans are saying on the topic. However, that is just a stopgap; eventually, the chatbots will overrun all sites, and you won’t be able to trust anything that a person didn’t tell you directly. The internet really will be like a new, uninhabited planet (“It’s inhabited by robots!”) as we flee to old books to find some modicum of trustworthy information.
Seriously, why would you trust a chatbot, given how much brainwashing and interference governments already do on social media? Back in 2017, we called Twitter the land of robots because virtually every author my brother-in-law and I came across was an automated account. All they did was spam links to books and retweet other robots. Noise was already overwhelming social media, which was so enticing originally because it allowed us to connect with other people, not interact with robots. The natural interaction between bots and algorithms is an amplifying entropic feedback loop that turns everything into unreadable, unverifiable, digital goop. Puddles of stochastic interference are now becoming oceans of unnavigable noise.
Did we really need AI bots on Twitter? Did we want them? No! We wanted less of all of that.
I’ll tell you what people actually wanted intelligent robots for: doing the tasks we don’t like, which is why our homes are filled with robots (of a kind) already. Washing machines, dishwashers, even cute little vacuums—what we want is a robot to do the laundry so we can spend more time playing Legos with our kids.
The AI solution is like selling you a robot to auto-build new Legos and play with your kids so you can do the important work of mowing your lawn.
All of this gets worse when you consider the cost of AI in more concrete terms. Energy is not free, and GPU farms use lots of it; huge amounts of electricity have to be generated to fuel these complex language models. Research money is being poured into AI, and we don’t know where those resources might have flowed in its absence. Maybe to something useful. Demand for the expensive hardware needed to run LLMs squeezes consumers who have alternative uses for it and steers future designs, so that, again, consumers get something other than what they really wanted and pay more for it. A current-gen video card now costs over one thousand dollars.
The research focus on AI really seems to boil down to imagining uses for a product that doesn’t yet exist (and that we don’t know can ever be created) rather than identifying problems and finding ways to solve or lessen them. Maybe an AI could help me do my taxes… but maybe the tax code should be simplified so we don’t need expensive LLMs to calculate how much money the government should get so I can avoid jail. Maybe an AI can do things humans can’t, but do we need those things? To what end are we working? I haven’t seen one yet.
Microsoft is dead set on including Copilot with new versions of Windows, only for every user to immediately disable it because they have absolutely no use for it. Truly, large language models are a solution looking for a problem in every attempted use I can think of so far. More effort has been spent finding uses for AI than effort has been saved by using it.
One idea I have seen recently is that AI might have already matured or even peaked, which would mean that improving the LLMs will be vastly more expensive than in the past and won’t yield much increase in power or usefulness (if we can find any real use for them). There isn’t any more good data to give them to improve them, and tail events necessitate real people supervising and checking everything they do. At that point, the energy used to power them must really be brought into question. Check out these videos for more (the first half is the relevant part):
So why the aspirational push toward AI?
Well, first, I think lots of engineers are obsessive and imaginative, and they would want to develop these things anyway. Passionate amateurs have created huge leaps, and even large companies like Apple Computer began as small garage projects.
Secondly, AI is a catch-all term for lots of technologies, and most of the useful ones (like the voice assistant on your phone) have already matured. AI built to a purpose has been around for a while, while nerds try to make general-intelligence golems for no good reason.
More importantly, I cynically think AI is a way of continuing the tech industry’s resource consumption. AI requires lots of chips of higher density and complexity, while most consumers’ computing power needs have been flat for a decade. Smartphones, the biggest tech revolution since the personal computer, were a mature technology seven or eight years ago. VR hasn’t caught on (and I still think it will only catch on when enforced as a corporate tool). The best-selling game console is the most anemic available (the Switch), and lots of Sony users just don’t see a reason to buy a PS5. The PS4 is good enough. Ray tracing isn’t worth it. How do you keep the R&D machine going? AI!
So tech is desperate to put LLMs into everything because it points toward some area of growth that isn’t driven by lagging consumer demands. Most people don’t need an RTX card for anything they do on their PC. Bitcoin miners would scoop them up and flush massive energy into that tulip, but the bottom fell out of that speculation some time ago. But AI… AI can make use of all that complex technology… perhaps if we build it, they will come. People will want AI once they figure out how to use it. That’s the hope, anyway.
For now, it’s more like a nifty toy. AI music, art, and books won’t fix anything besides scratching the itch to hear a vulgar song sung by a fictional 1950s band.
If you want to entertain yourself, read my book on demonic AI. I love thinking about the “what ifs” and the weirdness of AI art, which can certainly inspire lots of things, but while we consider those, we should also consider whether there is any real use for it or if anyone actually asked for it.
I am an independent artist and musician. You can get my books by joining my Patreon or Ko-Fi, and you can listen to my current music on YouTube or buy my albums at BandCamp.
The book below takes a different view of technological overlordship: people forget how to do things without the internet and become generalized drones that exist to serve a non-progressing internet super organism. But that’s in the background! This book is a Star Trek-style space adventure.
For my entire life, creators and publishers/producers have been looking for ways to streamline content creation and forge beltline product that would infinitely pump dollars into their money bins. The natural endpoint of this thought process of sanding off your edges and producing lowest-common-denominator "art" is to just have a list of simple clichés and formulas fulfilled by a machine. This is where excusing autotune, Flash animation, Save the Cat, and everything else created to "make things easier" for artists was always going to lead. It was done to create a product beltline, not to make better art. And we know it wasn't done for better art, because it hasn't made better art. It has made for faster product.
I really don't think anything short of a full collapse and reexamination of what art means and is supposed to be will lead to any sort of positive change beyond more of this. We will just end up in this same spot again.
The problem as I see it is that AI, unlike the locomotive, automobiles, and auto-manufacturing robots, aims to replace the one thing uniquely human: thinking. Somebody uses AI to generate an e-mail and sends it. The recipient reads it and uses AI to generate a response. At some point, the people involved forget how to communicate.
I hear a lot of people talk about ChatGPT coding. OK, so you got some code that does something. What did you LEARN? I know, I know: calculators didn't break math. But calculators are purpose-built. AI is not. What I see is two classes of people emerging: those who think and learn, and those who let AI do all the heavy lifting.
The problem there is that AI cannot teach itself. It's like making a copy of a copy: it's degenerative. This has already been discussed at length in the AI community. So you'll have a bunch of people stuck with all the answers from 2019. But what about innovation? That's where the rest of us come in.
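If you want to see that copy-of-a-copy effect in miniature, here's a toy sketch in Python. It stands in for a model with a simple Gaussian and refits it to its own samples each generation; the distribution, sample size, and seed are arbitrary choices for illustration, not a claim about how any real LLM behaves.

    import random
    import statistics

    # Toy "copy of a copy" loop: fit a Gaussian to samples drawn from
    # the previous generation's fit, then repeat. A cartoon of model
    # degradation, not a simulation of any real system.
    random.seed(0)
    mu, sigma = 0.0, 1.0   # generation 0: the "real" distribution
    N = 10                 # deliberately small sample per generation

    for gen in range(201):
        if gen % 25 == 0:
            print(f"gen {gen:3d}: mu={mu:+.4f}  sigma={sigma:.6f}")
        samples = [random.gauss(mu, sigma) for _ in range(N)]
        mu = statistics.fmean(samples)      # refit on our own output...
        sigma = statistics.stdev(samples)   # ...losing tail information

    # On a typical run, sigma decays by orders of magnitude: each
    # generation trained on the last one's output forgets the tails
    # and never gets them back without fresh real data.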
And that leads us to intellectual persecution. Those who are still thinking will be subjugated by the AI crowd. Free thinking won't be a problem. Retaining your thoughts will.
The innovation curve will ramp down as fewer people actually innovate. They'll have reams of Python code, but they won't question the inane design of a programming language that ships without a real multi-dimensional array type. (Seriously, Python, what's up with that?)
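To unpack that parenthetical for non-programmers: core Python has no built-in N-dimensional array type. The usual stand-in is a nested list, which carries a well-known aliasing trap, and true multi-dimensional arrays come from a third-party library (NumPy) rather than the language itself. A minimal illustration:

    # Core Python's stand-in for a 2-D array is a nested list,
    # which has a classic aliasing trap:
    grid = [[0] * 3] * 3       # looks like a 3x3 grid...
    grid[0][0] = 1
    print(grid)                # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]
                               # all three rows are the SAME list object

    # The non-aliased construction builds each row separately:
    grid = [[0] * 3 for _ in range(3)]
    grid[0][0] = 1
    print(grid)                # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]

    # In practice everyone reaches for numpy.ndarray instead, which
    # is the point: real N-dimensional arrays aren't in the language.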
Stagnation.
I'm all for automation and making things easier. But when we replace the human action of thinking, learning and creating, we've gone a little too far.
And yeah, I'm already sick and tired of the AI articles that all have the same three paragraphs that don't say anything. SEO is dead. Time for bio-verified publishing. (That will be a thing; you watch.)