
How AI might get us to read more books

If AI makes the internet boring, people might stop using it.

One of my central concerns with the diffusion of artificial intelligence in our media, online spaces, and published works is that AI-generated slop will become the overwhelming type of content on the internet. Human-architected content and art will either be drowned out or pushed to exclusive spaces: platforms that curate for, or cater to, a sort of anti-AI agenda. Meanwhile, mainstream platforms like YouTube, Meta, and Google Search will be infected with endless AI-generated, algorithm-oriented content that is neither artful nor intellectually stimulating. Even before large language models, it was commonly known that the algorithms large content platforms employ incentivize creators to fall in line in order to grow their reach, engagement, and revenue. Even so, there are creators who play a different tune. When their content catches some wind in its sails, they give audiences a much-needed break from being force-fed their interests.

What will happen when a majority of “creators” on mainstream platforms can create content that precisely caters to the whims of the algorithm? My guess would be an onslaught of bare-minimum, AI-generated content that, being the most profitable for those algorithms, is also highly unoriginal.

Let’s take a look at how human-LLM relationships looked just two years ago. When GPT-3.5 came out, users had to do a lot more work to take raw outputs from that model and transform them into something usable for their purpose. That work came in the form of additional re-prompting, rewriting the responses, or manually reformatting the plain-text response into something digestible (CSV, markdown tables, JSON, etc.). Yet as models improved drastically month over month, the distance between initial prompt and usable response shortened. As this distance continues to close, the need and willingness for users to be involved in the process will become non-existent. We’re seeing this a lot more with new agentic user experiences. In programming, for example, engineers two years ago would have had to use a chat interface to ask questions about a copied-and-pasted part of their codebase, wait for a response, check whether the response made sense or contained hallucinations, manually paste it back into their text editor, and fix any bugs the copied-in code introduced (renaming variables, linting, etc.). With the advent of agents like Claude Code, Gemini CLI, and OpenAI Codex, however, these LLM experiences have full access to the codebase, the terminal, and tools like web search. Users need only act as reviewers after prompting. This doesn’t mean hallucinations are a solved problem. They still occur. Nor does it mean that LLMs can’t code themselves into a wall. They certainly can. The difference between now and two years ago is that the distance between initial prompt and usable output is at an all-time low, and as a result, the need and willingness for user participation is at an all-time low. It’s no wonder “vibe-coding” has grown as large as it has; when people don’t have to do anything to achieve an outcome, they won’t. Humans are creatures that crave the path of least resistance.

AI has also affected slices of the online content market that care only about generating revenue. Not that long ago, LLMs broke past the limitations of plain text, and models were created and fine-tuned to generate other mediums like voices, images, videos, and music. For music, full songs can be generated from just an inkling of a thought or a description of a genre, style, or artist. While I’m sure such a technology could be used to aid the creative process somehow (though I highly doubt it), the more common use case was to pump out hours and hours of instrumental music for genres like lofi and upload it to YouTube and Spotify. These songs conquered the YouTube homepage, Spotify playlists, and the eyes and ears of an audience that may not have been able to notice the difference. The choice of lofi and similar genres was driven by two things: 1) these genres are mostly instrumental, making them easier for generative AI to mimic, and 2) YouTube’s algorithm loves lofi music. The content was low-effort for generative AI and high-interest for the algorithm, resulting in high-volume output from the AI maestros trying to cash in on the current path of least resistance. Outside of music, “faceless channels” are highly automated YouTube channels that scrape Reddit or similar sources for content ideas and use generative AI for voice-overs and b-roll footage. Channels of this sort use this low-effort medium to quickly put out videos the algorithm will pick up, even though they add no value and steal content ideas. Interestingly enough, before the advent of AI, these types of channels hired overseas contractors for voice-overs, animations, and script writing. What’s changed is that generative AI has made it easier, faster, and cheaper to pump out ‘content’. So much so that you’ll find loads of ‘gurus’ teaching others how to make hundreds of thousands of dollars a day pumping out AI-generated music or building faceless channels.
As the quality of generative AI improves and the barrier to entry continues to decrease, slop will be pushed out en masse.

Turning away

I’m finding that a lot of my peers, myself included, see the negative ramifications AI is having on various domains and are making conscious decisions to unlock themselves from it. These are anecdotes, sure, but the fact that they exist means something. In software engineering, I’m reading of engineers giving up LLMs for personal projects or scheduling “no AI days” into their weeks to give themselves an opportunity to think again and exercise their brain, that big muscle that will atrophy if not used. Programming has only recently become a career for me; up until I graduated from Iowa State a year ago, it had been one of my most cherished forms of self-expression. Like everyone else, I crave the path of least resistance, and so I too have bitten the apple, especially at work when tackling tasks I have zero previous knowledge of. Yet LLMs have made it difficult to actually write code and enjoy this hobby I’ve had since middle school. I turned off GitHub Copilot 18 months ago, and this month I cancelled Anthropic’s $200-a-month plan for Claude Code after using it religiously for two months on personal projects. This sort of protest is by far a minority position in my industry, but it is still a position. My view is certainly biased and my sources fit my own perspective, but it is still interesting to find people choosing to take the path of more resistance: to reject algorithms and curate their own feeds, to buy records, iPod classics, and film cameras, to still buy thick paperback books and read them and talk about them, to think and to use their hands to express their condition to each other.

My hypothesis, and my hope, is that when the internet becomes a majority-AI-generated venue and most of the content users find in their feeds isn’t crafted by other humans, a cultural shift might ensue in which people choose to detach and “go back” to slow, original, human-made art like books, long-form podcasts, movies, serial television shows, and maybe even a revival of physical newspapers (they’re quite nice to hold). Content will be made by bots, for bots, critiqued by bots, responded to by bots, and humans will be somewhere else: hopefully offline at a bookstore or a coffee shop or some other third place we build to maintain our culture. AI-generated content is not art, and most of us know this instinctively. Art is one of those “basic functions” of mankind. We eat, we sleep, we communicate, we move, and we make art. We’ve made art since we could and we will continue to make art until we can’t. I’d like to strongly believe that, as a species, we will continue to cherish it and seek it out like treasure amongst the filth that sophisticated algorithms can create.
