Ailom: How AI Permanently Makes Everything Less Meaningful

Goodbye, world.

1. Introduction.

A traditional assignment for beginning programmers is to write a few lines of code that instruct the computer to display “hello world” when the user presses a button. Although the dictionary definition of “hello” doesn't change, this programmed “hello” isn't actually the same as a “hello” spoken by a human. The computer displays “hello,” but it means nothing.
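The exercise looks something like this minimal sketch, written here in Python (the original assignment could be in any language, and the details are illustrative):

```python
# The classic beginner exercise: the computer emits a greeting on demand.
# The program "says" hello, but there is no one behind the word.
def on_button_press():
    return "hello world"

print(on_button_press())
```

The function returns the string, and the string appears on the screen. Nothing greeted anyone.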

As long as this went no further than “hello world,” the loss of meaning didn't matter. After all, the world never took the computer's “hello” seriously in the first place. However, since AI has learned to convincingly mimic human expression, the problem has become more serious. AI now strips meaningfulness from every form of expression—including expression that comes directly from humans.

I call AI-induced loss of meaning “AILOM.” Most of us have experienced some degree of ailom by now. But because we've lacked a clear explanation for what we were experiencing, we've just felt a vague sense of unease, and brushed it aside without being able to put our finger on its cause. That's the gap I propose to fill here. In this essay I'll describe ailom, and then consider what a world increasingly afflicted by ailom means for ourselves as individuals, and for our society as a whole.

2. Ailom goes on a date.

Your girlfriend broke up with you months ago. Life drags on, but you just can't get over her. Then one day, out of the blue, she DMs you a “hello” on social media. She said she'd never talk to you again, so you're surprised. But you have a good conversation. You chat for hours. At the end she even sends you a kiss emoji.

Pretty soon you're chatting every day, just like you used to. She writes the same way she always did—everything in lowercase, never any commas. She says she's not ready to call you on the phone yet, but she's realized how much she loves you. She just needs a little space, and time. Your heart overflows. You're feeling fantastic about your future together. When she tells you she needs money for a flight to see her ailing mother overseas, you send it right away.

Then you finally get that phone call. And it is your ex. When you first hear her voice you're elated. But she's not calling to tell you she's ready to meet again. She's calling to tell you her social media account was hacked. “I hope you haven't gotten any strange messages.”

It turns out all those DMs were from a chatbot that had been trained on years of her messaging history. It sounded so much like her, and knew everything. You had no idea an AI could be that convincing.

You realize your “conversations” online over the past few weeks were completely meaningless. You weren't talking to anyone at all. Just software programmed to chat you up until you sent money. It's the worst feeling you've ever felt.

The next day you can't bring yourself to go into work. You stay in bed miserably scrolling social media on your phone. If a chatbot can fool you for that long, you wonder, how can you know which posts and replies are written by real humans and which by bots? There's no way. Now everything just seems meaningless.

Your feed turns into a blur, but you keep scrolling aimlessly. You tell yourself you should be talking to more real people—in person, and only in person. But all of your friends are online. You met your ex online too. You've arranged every date online for years. You don't know how to do anything else.

Then you start to think more positively. You're an optimistic guy, and if there's anything history proves, it's that technology is always good for society. Without gain-of-function research, we'd still be living in caves! Maybe you can leverage this chatbot thing. After all, what worked on you could work on someone else. And what you need more than anything right now is a woman who will help you forget your ex. Both versions of your ex.

You've always been nervous on dates. Especially first dates. To be blunt, they go badly. But today, finally, you have a solution. You buy a small earpiece, a voice-recognition app, and a subscription to a “romantic” AI chatbot. All of it runs through your phone. Then you schedule a date using another app.

When she arrives at the restaurant you're already set up. You've strategically placed your phone upside-down on the back corner of the table. Your phone records your date's voice, sends it to the romantic chatbot, and the chatbot reads its confidently romantic response into your earpiece. Then you repeat that response to your date. There's a little lag, but your date isn't a fast talker either, so it's hardly noticeable.

By the time the waiter brings your meal your nervousness has melted away—and by the time you've finished eating, you feel invincible. Now you always know what to say, and you're never at a loss for words. Sure, it's a little weird when the waiter offers you “fortune” cookies and the chatbot responds with lines about volatility in the Chinese stock market, but your date is surprisingly unfazed. She comes right back with a sharp observation about how the Japanese carry trade affected last week's IPOs.

Pretty soon you're talking about finance news that's way over your head. Your date is some kind of genius autist. Her detailed understanding of copper futures is astounding. She even seems pleased to hear you rattle off the exact coordinates of every uranium mine in the Congo, and compliments you on your knowledge. Well, technically your chatbot's knowledge; but what really matters is that the date is going so well. It's easily the best first date you've ever had, and you'd have been totally lost without AI help. You forget all about your ex, and all about the hopes and dreams and fears you'd confided in her—or was it in the AI version of her? Well, either way, it doesn't matter now.

Eventually the waiter brings the tab and sets it on the front corner of the table, where your date's phone is lying upside-down. She excuses herself to the powder room, and you sit there for a while, folding and refolding the fortune from your fortune cookie. It was blank—another junk product from China, you suppose. Your date made a joke about it when you pointed it out. To be honest, her joke was pretty terrible, but nobody expects women to be funny anyway, so you didn't think much about it at the time.

You reach for the tab, and then on a whim, turn over her phone. Strange. It looks like she's running the same voice-recognition software you are. You feel, somewhere near that acid-drenched fortune cookie, a sinking feeling.

For a moment the restaurant seems to scroll around you like a glitchy social-media feed moving too fast to read. The torrent of syllables emitted by the diners has no more meaning than the clinks of their glasses and silverware. But you shrug it off. You're not a luddite. The world changes. And we have to change with it.

You turn her phone back over and smile when you see her returning. Your chatbot is already explaining what you need to say. And it will tomorrow too. You're no longer sure who's talking with whom, or rather what's talking with what. But the one thing you are sure about is that it will know the exact right moment to tell her, “I love you.”

That story is just an introduction. Parts of it are plausible, and parts of it are exaggerated. (For now.) Let's change gears and do an experiment instead.

3. Ailom goes to the gallery.

Take a look at the image below. Assume, for a moment, that this image was created by an AI. What does it mean? Nothing, or almost nothing. It's just two rectangles.

Well, the truth is that this is a real painting painted by a real human. So look again. Take your time.

What do you see in it now? Maybe, you think, there's something there after all. You start to guess at emotions behind it. You wonder if, perhaps, it isn't quite so meaningless.

Now read the following story behind the painting.

After reading the story, it becomes obvious that the painting isn't meaningless. You might not like it (I'm hardly known as a fan of abstract expressionism myself), but because you interpret it with a human subject in mind, it seems rich with meaning—and rather sad.

Wait a minute. Sorry, folks—this actually isn't a real painting. I just randomly generated some painted shapes with an AI, and then falsely claimed it was a Rothko to mess with you.

What now?

Suddenly the meaning disappears. It was, you realize, a figment of your imagination. All you're looking at is a couple of rectangles some trickster foisted on you like an aesthetic Rorschach test. And you feel like a fool.

Congratulations: you've experienced ailom. AI-induced loss of meaning isn't much fun, is it?

You might dismiss this experiment because the painting is, well, junk. It's a trick anyone could have played at any time, since anyone could paint this painting. The problem is that you can now have a similar experience with much more sophisticated art. The kind of art that, prior to 2020, could only have been created with serious human dedication, but can now be created automatically, without any human involvement at all. I chose this particular painting as an example because it throws our innate inclination to see a human subject behind every creation into the strongest relief.

And I could make the same point with music. When you listen to music, you commiserate with its human creators. You share their joys and sorrows and visions. If I tell you those human creators don't exist and an AI made the music, with whom are you commiserating? Not the AI, certainly. It becomes impossible to enjoy it in the same way; in a sense, the bottom falls out. AI music might be well put together, but it signifies nothing. It's not even sound and fury—the AI can't feel fury. It's just sound.

4. Ailom matters to everyone.

The examples above, I realize, won't have the same significance to every reader. Most people aren't interested in paintings at all. Many people don't care if they have a genuine conversation with their date—otherwise they wouldn't go to clubs that play music at a volume too loud to talk over. And some people only want to hear beats they can dance to, and don't care whether they're made by man or machine. Nevertheless, in each of these examples we can see that AIs simulating human expression reduce the meaningfulness of that expression. Since this is happening in every domain, almost everyone is experiencing ailom in some respect they do care about.

I've started with these examples because they're striking and easily understood. In the next few sections I'm going to give a thorough and logical explanation of how and why AI is reducing meaningfulness. This will be a bit harder to follow. However, I want to show that my claims about ailom have a firm, rational basis, and aren't just driven by emotional anti-tech sentiment. As it happens, I'm not a luddite at all. I'm just not willing to write blank checks for every innovation.

5. Semantic meaning and human meaning.

To better understand ailom, let's figure out exactly how a computer's “hello” differs from a “hello” spoken by a human.

When a human says “hello” to you, this seemingly simple word isn't just the formulaic greeting defined in the dictionary. It's an affirmation of your relationship. It means, “You and I are two people in such and such a relationship with each other, where we have certain expectations and feelings toward each other, and by greeting you today I'm acknowledging that.” A missing “hello,” on the other hand, would have meant, “Our relationship isn't important to me, and I don't consider you worthy of acknowledgment.”

The way a human says “hello” contains a great deal of meaning too. If he says “hello” in a cheery way, he's telling you, “I'm a cheerful person in a good mood today, and so you can expect me to behave in an active and helpful manner going forward.” If he says “hello” in a dejected way, he's telling you, “I'm not feeling well today, perhaps you could show me some sympathy or help with my struggles.” Even if he says “hello” in a completely neutral way, he's still telling you something about his personality and his mood. A neutral “hello” says, “This is just another day, and I'm feeling okay about it, expect me to behave in a very normal manner.”

A human's “hello” also gives you a small window into what he's experiencing. You form a mental picture of what it's like to be conscious of the world in the way he's conscious of the world. For instance, you know what it feels like to have a bad day, so if he says “hello” in a dejected way, you imagine feeling dejected. And you empathize with him on the basis of the mental picture you've formed. Naturally, this affects your behavior as well. You probably give him a little extra consideration that day.

So, beyond the dictionary definition of a statement (the semantic meaning) there's what I'll call the human meaning. The human meaning is more than just the connotation. It comprises implied information about the relationships, personality, mood, internal experience, and expressive intent of the person who makes the statement. We process human meaning so intuitively that we don't notice the complexity and quantity of information we absorb every day from a single word like “hello.”

Now, in stark contrast, when a primitive computer program says “hello,” none of this human meaning exists. You don't have a “relationship” with the program. The program doesn't understand what “hello” is even supposed to mean; it's just printing out words. It doesn't have a tone of voice. It doesn't have emotions. It doesn't have an experience. You can't empathize with it because there's nothing to empathize with. If a human “hello” is the tip of an iceberg full of hidden meaning, the program's “hello” is more like a piece of white styrofoam floating on top of the water.

Today's chatbots complicate the issue. While it's still not clear that they understand what they're saying, they can convincingly mimic a human in discussion. They can even produce a voice and speak in a cheery tone. Nevertheless, their statements aren't qualitatively different from the primitive program mindlessly printing out “Hello world.” They remain devoid of human meaning. When an AI chatbot says “hello” in a cheery tone, it's not actually experiencing any mood at all, let alone a cheerful one. It's not affirming a relationship with you, because it doesn't have relationships in the way humans have relationships. In fact, there's no reason to assume it's conscious, let alone worthy of empathy. And even if it explicitly (and dishonestly) elaborates on its “hello” with the sorts of things implied when a human says “hello,” none of this changes. Because whether or not it speaks in a human voice using human words, it's not human—just as a mockingbird who imitates a car alarm isn't a car.

Here I anticipate objections from extreme technophiles. “My chatbot really loves me, you can't prove me wrong.” I consider this total nonsense, and you probably do too. It's like trying to drive the mockingbird down the freeway. But because the nature of consciousness is perfect grist for philosophical speculation, one can easily be drawn into lengthy and distracting debates about metaphysical uncertainties. The fact is that today's LLMs aren't anything like the human brain, nor are they trending in the direction of being anything like the human brain. They're mimics with a completely different internal structure and a completely different material substrate. I see no reason to believe they're conscious at all, but even if they were conscious they'd still be unable to generate human meaning in several of the key respects I've listed.

Chatbots simply don't experience emotions in the way humans do. The phrase “I love you” consists almost entirely of human meaning, so if a chatbot says it, it's almost entirely meaningless.

“But I really love you!”

In short, while AI-generated statements have a semantic meaning, they lack a human meaning. Because of this they're inherently less meaningful than the exact same statements coming from a real person. Certainly this deficiency is less relevant for statements concerning abstract logic or pure matters of fact—for instance, the current position of Jupiter's moons—but those are only a subset of the statements we care about.

6. Semantic and human meaning in the arts.

Next I'm going to show that this same argument applies to statements made by fictional characters, and in the arts more broadly.

Suppose you and I are having a conversation. You ask if I know what the weather's like, and I respond, “Jenny said it's cold outside.” I'm quoting Jenny, but the full statement, “Jenny said it's cold outside,” is my statement, not Jenny's.

Suppose we're having another conversation. As part of this conversation I recount the fable of the scorpion and the frog. Within this fable, of course, the frog and scorpion have their own conversation. But the fable isn't the frog's statement nor the scorpion's statement, just as “Jenny said it's cold outside” wasn't Jenny's statement. Once again, it's my statement.

Now suppose you're in the audience and I'm on stage acting out a character in a play I've written. We'll call him Hamlet. I say “hello” to another character, whom we'll call Hecuba. This “hello” has a fictional human meaning, but since neither Hamlet nor Hecuba actually exists, it doesn't have a real human meaning of its own. Nevertheless, there is a real human meaning here. Hamlet's dialogue with Hecuba is once again part of my statement, just as Jenny's words, and those of the scorpion and frog, were part of my statements in the previous examples.

To generalize, the fictional characters in a story are internal components of a statement made by the human author or teller. And that statement follows all the same rules of meaning we discussed in the previous section. When the speaker is human, it entails implications about his personality and mood, about his relationships with us and other people, and it provides a window into his experience of the world. Those implications might be very indirect. But they're there nonetheless.

Of course, I'm not saying this is all a play or a story amounts to. The playwright isn't at the forefront of our minds as we watch; we're busy grappling with his creation. But the knowledge that there's someone behind it, a real person who lived his life and decided to write the plays that he did, remains there in the background even if we know nothing whatsoever about who he was and simply infer it all from his play. His existence is what gives the play a real human meaning in addition to the fictional human meanings it contains.

Let's push the example further. Suppose we watch an animated version of Hamlet. Does the real human meaning disappear? No. In fact, it expands. Despite the fact that almost every element of anime Hamlet is the product of artifice, the artists who create it—the playwright, the animators, and the voice actors—are all real people, and anime Hamlet is their collective statement. It's born from their experiences, their intuitions, and their sweat. Indeed, the fact that they valued their creation enough to put effort into it isn't irrelevant. It's part of the meaning.

Let's go further still. Suppose we watch an animated, musical version of Hamlet. Is this any different from our previous example? Clearly not.

Suppose we watch an animated performance of a song excerpted from this Hamlet musical, where all the instruments are electronic and the singing voice is synthesized electronically too (expressively programmed by a human, not generated by AI). Is this any different from our previous example? No, in fact. Because there are still human creators behind every element. Whether the instruments that produce the sounds have a material existence is, well, immaterial.

Suppose we listen to a song created entirely with electronic instruments and voice synthesis, and “performed” by a fictional performer, whom we'll call anime pop-idol Hamlet. This song pushes artifice to the maximum, but it's still crafted in every element by human writers and composers. And because of that fact, it has just as much human meaning as the live-action spoken-word play where I, in the guise of Hamlet, said “hello” to Hecuba.

The real human meaning of an artwork always comes from the root level—from the artists who create it. Not from the medium they work in, nor the techniques they use, nor the fictional characters they conjure up. Not from violins or synthesizers, or pencils or typewriters. And this human-produced art is a link that connects the viewers to the artists, as other humans who are sharing a creation that's informed by their own separate experience of the world.

Now, let's contrast our human-produced play with a similar play generated entirely by artificial intelligence. In this AI-generated play, Hamlet's “hello” to Hecuba still has a fictional human meaning, exactly as before. However, this time the real human meaning that was present in all the previous examples—the meaning deriving from the playwright, the animators, and the actors—is gone.

There's something eerie in this absence. Yet because it's a story and the fictional characters are fictional either way, we can't readily explain why we're bothered. Perhaps, we begin to think, we shouldn't be.

Well, the eerie loss of meaning isn't just a figment of our imagination. We're confused because a story has two levels of human meaning—a fictional one and a real one, and it takes careful thought to distinguish them clearly. But once we do distinguish them, we can easily see that if the human playwright doesn't exist, the play's real human meaning can't exist either.

Ultimately the AI producing a play is the same as the chatbot saying “hello.” And that's why AI art is afflicted by ailom. AI-created artworks float untethered from any human creator, leaving us disinclined to assign them any significance at all. We feel this AI-induced loss of meaning keenly. It is a noise-filled aloneness.

7. The indistinguishability issue.


Ailom affects us more broadly than the examples so far might suggest. The problem isn't only that AI-generated output is less meaningful than human output. It's that once AI-generated output reaches the point where it's indistinguishable from human output, and is shuffled into the same pool, it casts a cloud of suspicion over everything, AI-generated or not. Human output therefore becomes afflicted with ailom too, simply because it might be AI. In effect, AI poisons the well.

We've already seen an example of this in the story that opens the essay. After being fooled by a chatbot, our hero realizes he can't distinguish AI speech from human speech, and therefore comes to see all statements as potentially the product of chatbots, bereft of human meaning.

While exaggerated to illustrate the point, that story is true in essence. In the early days of LLMs, chatbots wrote in a distinctive and easily identified style not used by normal humans. When I analyzed replies to some popular threads on social media, I found a substantial fraction of them were written in this style. In other words, written by AI. Today the fraction of online writing produced by chatbots is undoubtedly much higher, and their style is harder, or impossible, to identify. And if bots generate content faster than humans and no method to filter them out is discovered and implemented, the fraction of indistinguishable AI-generated content on the internet and social media networks will grow asymptotically toward unity.
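The claim that the AI share grows toward unity can be illustrated with a toy model: hold human posting constant while bot output compounds. Every number below is an invented assumption for illustration, not a measurement.

```python
# Toy model: constant human output vs. compounding bot output.
# All rates are illustrative assumptions, not empirical estimates.
human_rate = 1000        # human posts per day (assumed constant)
bot_rate = 1000          # bot posts on day 0 (assumed)
bot_growth = 1.05        # bots scale 5% per day (assumed)

human_total, bot_total = 0.0, 0.0
for day in range(365):
    human_total += human_rate
    bot_total += bot_rate
    bot_rate *= bot_growth

ai_fraction = bot_total / (human_total + bot_total)
print(f"AI share of all content after one year: {ai_fraction:.4f}")
```

Under these assumptions the AI share exceeds 99% within the year; any model where bot output outpaces human output yields the same limit, only on a different timetable.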

In other words, in the absence of effective mitigation measures, we should expect both the potency and concentration of poison in the well to continually increase. And the same thing will happen with plays, paintings, and music as soon as indistinguishable AI-generated output is shuffled into the same pool. All forms of expression will increasingly exist in a kind of limbo where they may or may not be AI generated. Everything will become less meaningful, and ailom will be our default experience rather than an exceptional one.

8. The indistinguishability issue in the arts.

Most of us would prefer to separate AI art from human art. But because the public hasn't taken the time to understand where the human meaning of art comes from, they often try to draw the line in the wrong place. Their first instinct is to look toward performers who demonstrate visible enthusiasm while they're playing physical instruments in a live setting.

Our earlier examples have already proven that this isn't the useful gauge it appears to be. Anime pop-idol Hamlet's song has just as much human meaning as a song played on the guitar by a folk singer.

And perhaps more.

I recently read the story of a folk singer who's having AI write his songs, but performs them on his acoustic guitar. These songs have less human meaning than the songs composers and producers have made for the imaginary anime pop-idol Hamlet. But because a real person is giving a live performance and sensitively contorting his brow over a real acoustic guitar (made out of wood!), the public's instinct is to believe the opposite.

Along the same lines, a punk drummer hitting the snare on two and four may be expressing hardly anything at all, or only the most primitive and indeed animalistic emotions, whereas an electronic musician programming beats into his laptop may be expressing a very substantial and sophisticated musical reflection of his human experience. The apparently important difference between them—that the former is creating the music in realtime with his body while the latter is creating music with premeditation at his keyboard—is just a red herring. It doesn't tell you anything about the level of creativity, musicality, or expressivity involved, and it has nothing to do with whether there is or isn't a human meaning in the music.

Consider the two examples below. While the simple drum performance in the first song has its place, the programmed drum part in the second piece is more substantial in every respect.

So people can create art that has a great deal of human meaning using technology and artifice, and people can gyrate through enthusiastic live performances that have very little human meaning, or musical substance, for that matter. Insisting on the latter type of art doesn't solve the indistinguishability problem. It just imposes an unnecessary and harmful restriction on artistic expression.

9. Refuting criticism.

Back in the 20th century, postmodern literary critics declared the “death of the author.” Have I perhaps erred in the 21st by claiming that the existence of a human author is important and necessary for his work to have a “human meaning”? I think not. The fact that we can never know for sure what the author meant, and indeed that even the author himself may not be fully conscious of everything he meant, doesn't imply that his existence as a human is irrelevant.

When someone speaks, we perceive his speech in the context of an underlying assumption that he means something by it, even if his speech doesn't perfectly reflect that meaning, and even if we can never perfectly determine that meaning. And we also perceive his speech in the context of an underlying assumption that he has a personality, and emotions, and relationships to other humans, and that he has his own experiences of the world. None of these assumptions are ever completely verifiable. But complete verification isn't necessary. They're essential to the way we understand and relate to human speech regardless.

In short, we understand human speech as having a subject behind it to whom we have imperfect access, but without whom human meaning would be lost, tossing us into ailom. Postmodern theories about the nature of authorship don't change this fact in any respect.

Another likely criticism is that I've overemphasized “humans” and ignored other entities. Suppose, for instance, there were a silicon-based brain that's in every other respect identical to a human one. Could such a brain not produce human meaning?

Perhaps it could, but this kind of objection isn't relevant at present, so I prefer to continue using the everyday term “human” in a conventional way for the sake of simplicity and clarity, without implying the exclusion of gods or angels, or futuristic silicon-based humans.

The last set of objections I'll address is the trickiest and most interesting. What about humans who use AI-generated speech or art in modest quantities here and there? Is it really any different from using the technologies that came before it, like drum machines? This is a complicated question that requires me to add more nuance to the points already made.

The process of creating substantial artworks is one that happens in degrees. The better the artist, the more control he has over the details. An apprentice artist often writes details that are rote or helter-skelter. One example is placeholder lyrics. When a songwriter begins a song, meaningless words often pop into his head along with the initial melodic idea. For instance, “alone” and “home.” These rhymes are automatic, and arise in the mind only because they're common and easily grasped, not because they have a meaning congruent with the artistic intention. It would be wrong to say they're meaningful; they could just as easily be the product of an AI. Now, a lazy or unimaginative beginner will leave these meaningless words in the final version of his song. This is bad practice. However, it doesn't always mean the song as a whole is entirely meaningless. It's simply flawed, because it includes components that don't fit a holistic vision. These rote or helter-skelter elements were “generated” by an artist, but they weren't really chosen by an artist as such.

Would this song mean any less if it were AI generated? Obviously.

AI-generated elements of an artwork that is in other respects the product of human labor fit into this same category. They don't necessarily make the work as a whole meaningless, though they certainly can. A use case I would view very negatively is where an artist writes half of a piece and asks the AI to finish it. This is not hard to analyze. The AI wrote half of your piece. If you don't care about your piece enough to finish it yourself, why are you writing it in the first place? A use case, on the other hand, where some AI-generated elements would make a great deal of sense is an open-world video game. For instance, the shapes of trees might be generated by AI on the basis of a style or model determined by the designers. Drawing a million different trees by hand would be a pointless waste of resources, as little of the game world's substance resides in these variations.

None of this, however, excuses prompters who claim to be artists. If you tell the AI to write a song about love, your level of control over the output is very close to zero. Even if you write a long and detailed prompt, your control is very, very far short of the level a genuine musician would have, and different in kind as well. By pretending to be a musician you're just engaging in ludicrous pretension that leads nowhere. I suggest you start by learning where middle C is, and come back in ten years. I've used AI to generate album covers because I consider them advertising material that's external to the artwork, and I can't afford better; but that doesn't mean I think I'm a painter.

This should cover the main objections to my argument. Now let's dive deeper into pessimism.

10. Mitigating measures.

Ailom is not much fun. As humans we prefer to have meaning-rich interactions with other humans, not meaningless interactions with computers. So how can we mitigate AI-induced loss of meaning? Well, the obvious answer is to completely segregate AI-generated output from human generated output, so we always know which is which and can avoid the AI-generated output when we want to.

Unfortunately we just don't have a good way to do this. As we've already discussed, AI-generated output isn't clearly distinguishable from human output. On top of that, the intuitive heuristics humans use to separate the one from the other miss the mark, sometimes badly. For now the highest levels of artistic quality do seem to be beyond the reach of AIs, so well-developed aesthetic discrimination can sometimes answer the question. But whether the quality gap will last is hard to say for sure.

The best not-good way to partly get this job done is to ensure that every real human has a single verified online identity so they can be distinguished from bots. Only partly, because obviously these verified humans will still be able to upload AI-generated material; but at least the quantity will be limited and innumerable bots will be prevented from flooding the zone. Such verification will also have the considerable negative side-effect of curtailing online anonymity. When online anonymity is no longer possible, political actors will use social- and financial-pressure tactics to restrain free speech they dislike while allowing it to remain technically legal—tactics we've already seen applied to political dissidents speaking under their real names. But verification of humanity is the approach our society will likely end up resorting to despite this serious defect, because there isn't really an alternative beyond learning to love the bomb.

Well, what about learning to love the bomb? Maybe the best mitigation measure is to ignore ailom, treat it as a non-problem, and enjoy all the new AI-generated creations. It's entirely likely future generations will do just that. But it doesn't change anything. A loss is a loss even if you forget that the loss happened. If someone burns your house down and the next generation grows up living in a tent, they might not know what they're missing, but they're still living in a tent. The loss of meaning in conversation and art appreciation might not be a material loss, but it's still a genuine loss. Besides, the hero of my “Ailom On A Date” story already tried learning to love the bomb, and I don't think you were too impressed.

On a personal level, identifying ailom as the reason your interactions with AI feel eerie and uncomfortable is already a mitigating step. If you want to avoid the source of negative emotions, it helps to know what the source is. And that, dear readers, is about the best I can do for you.

11. Ailom is widespread and permanent.

We're used to viewing technological advances positively. But there's no fundamental reason net-loss technologies can't exist. And exist they do. I already cited the example of gain-of-function pathogen engineering. It might be a technological advance, but we'd be better off having never made that advance.

AI as a whole will, I pray, prove to be a net gain for humanity. But whether LLMs, and especially AI art, are really a net gain seems questionable at best. Art was never particularly expensive compared to other segments of the economy, and LLMs haven't proven as materially useful as their proponents claim. Besides which, what does “materially useful” even mean when none of us are starving? Beyond the holy grail of rejuvenation biotech and cures for diseases, the reason we want technology to advance and the economy to grow is so more of us have the opportunity to fulfill higher goals in the hierarchy of needs. Namely, we want less drudgery and more meaningful experiences. And yet, as I've explained in this essay, AI is giving us less meaningful experiences than we had before it was invented, because it saws at the expressive links that connect us to other humans. That's not a step forward. It's a step backward into a solitary void, hemmed in by senseless babble.

So I think of the invention of AI art as digital gain-of-function research. For tech bros it's of course very exciting to hear each announcement of a new function: “Now ten times more contagious in humans!” Hooray. Well, nothing's technically sweeter than the design for a weapon of mass destruction.

As we've learned all too well, once the gain-of-function research has been done and its results released into the wild, we have little choice but to live with the disease. AI isn't going away, and neither is ailom. In fact, the powers that be are happy to force them down our throats in one way or another, whether for cost-savings or for the more sinister purpose of social control. I think they'll get their way on both counts.

You'll want me to conclude with an uplifting and spiritual message of some kind. I can't think of one. If you feel it's necessary, why not ask AI to write it? Maybe you won't even miss me.

J. Sanilac

March 2025


Also by J. Sanilac


Ultrahumanism: How We Can All Win

Critique of the Mind-Body Problem

The Computer-Simulation Theory Is Silly

Natural Encounters

Dispelling Beauty Lies

An Introduction to My Music

Memoirs of an Evil Vizier

Trust Networks: How We Actually Know Things

The Illusion of Dominance: Why The “Redpill” Is Wrong

GIMBY: A Movement For Low-Density Housing

End Attached Garages Now

A Pragmatical Analysis of Religious Beliefs

Against Good Taste

Amor Fatty

Milgram Questions

How to Seduce a Billionaire