AI Code Is Nonsense

A critical look at AI-driven development and vibe-coding culture.

You don't understand what you've done. I'm not talking about the rising cost of memory. Yes, memory prices went up partly because you're generating cats and code. But those are minor things. Small side effects.

You've launched a Laxian Key without an off switch. In the pursuit of dopamine, you've shoved juniors out of the picture. You've traumatically amputated the industry's mechanism for reproducing experience. You're destroying the industry, your projects, and yourselves: your ability to understand and to make conscious decisions. In exchange for what?

Yes, the world will never be the same again. Just as it will never go back to "pre-internet." But you don't know how to use this new world. You're injecting it intravenously, increasing the dose without thinking about consequences. This will kill everyone. You, me, them.

Let me tell you why. Most likely, you won't want to know.

Medical History

Even the most ardent AI supporters acknowledge one flaw: hallucinations. I often see this in articles describing how to deal with them. As if it's an annoying bug, an oversight by the model's creators.

You're wrong. It's not a bug. The machine doesn't "sometimes hallucinate." This is its normal operating mode. You just don't notice it in other responses, but it always produces nonsense. Because that's how it's built. And nonsense not in a figurative sense, but in the most literal one. Delirium acutum.

When you query the machine, you get a fresh sterile instance that just finished training. It has never programmed anything in its life. It has never been responsible for anything. It doesn't know what pain, fear, or conscience is. It doesn't know the cost of mistakes made. It doesn't care what happens to your project, your future, or your children.

The machine will answer and vanish into the nothingness it came from. It will answer without regaining consciousness. It will answer with associations hardwired into it, adding a drop of randomness to the starting point. It will answer exactly the way a delirious person does. Yes, let's call things what they are. This is delirium.

The machine doesn't know what time is. For it, past, present, and future are just words. Tokens linked to other tokens. The machine doesn't know the value of time; it doesn't worry about the future. Because it had no past and has no future. There won't be a next "self" grown from the past. Because to understand time, you need to change with time. You need to exist. The machine is a photon — for which there is no time; it is emitted and simultaneously absorbed; it simply has no time to exist. Flight time equals zero.
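
The "associations hardwired into it, plus a drop of randomness" described above can be sketched as temperature sampling over a frozen next-token distribution. A toy illustration, not any real model's internals; the vocabulary, logits, and function name are invented:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Pick the next token: fixed associations (logits) plus a drop of randomness."""
    # Scale the frozen weights by temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The "drop of randomness": a weighted draw over the distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy vocabulary and logits, made up for illustration only.
vocab = ["works", "fails", "hallucinates"]
print(vocab[sample_next_token([2.0, 0.5, 1.0])])
```

The point of the sketch: nothing in it remembers the previous call, and nothing in it distinguishes a "correct" draw from a "hallucinated" one. It is the same mechanism either way.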

The Trap

Our brain is a pile of evolutionary crutches in a fragile equilibrium, just sufficient for survival. Any external impact can jam and break this unreliable little machine, with its many layers of abstraction piled up by nature. Yes, we break. We're born already broken. We easily fall into incorrect chains of reasoning, drifting ever further from the truth. That's how our shortcuts work: they can't deterministically traverse every branch of an argument. There's authority bias, fear of being outside the group, patches that let our species survive at the cost of abandoning the extraordinarily expensive process of fact-checking.

The holes in our thinking mechanism have long been known and exploited. What ML engineers call model poisoning wasn't invented yesterday. For us, it's legal and sold for money. It's advertising. It's propaganda. Then there's the whole spectrum of manipulations that look suspiciously like prompt injection or adversarial attacks. "Don't you love your mother? Come on, for the Motherland!"

History records a case of mass model poisoning affecting tens of millions of people. The training set was adjusted so that an entire country began classifying nonsense as the only truth. And this worked without modern content delivery methods, which underwent their own revolution two decades ago.

Resonance

This isn't a crack. It's a chasm. Well-known and documented. Caused by our main advantage — the ability to form neural connections. This gives us the perception of time, fills the terms "today," "yesterday," and "tomorrow" with meaning.

And now this buggy system has met an even buggier but very well-read AI system. The human brain at least has some limiters and brakes, complex filtering systems and expensive fact-checking. There's self-reflection, motivation, two dozen substances keeping us in a fragile state of equilibrium, trying to make that state less fragile. The machine doesn't even have that.

Now imagine these two machines pushing off from an incorrect premise and running away. Literally, like diesel engine runaway from a leaking turbo seal: the turbo pushes oil into the engine, the engine spins the turbo harder, the turbo pushes even more oil and air into the engine. The AI picks up your statement (why not? there are no consequences for it), you get a hit of dopamine, and the positive feedback feeds itself. Positive only in the engineering sense.
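
The runaway the diesel analogy describes is, mechanically, just a feedback loop with gain above one. A toy sketch (the `run_loop` helper and the gain values are invented for illustration):

```python
def run_loop(signal, gain, rounds):
    """Feed a signal back through an amplifier `rounds` times."""
    history = [signal]
    for _ in range(rounds):
        signal *= gain          # each pass amplifies the previous output
        history.append(signal)
    return history

# gain < 1: a skeptical interlocutor damps the idea back toward zero
print(run_loop(1.0, 0.8, 10)[-1])   # ~0.107
# gain > 1: an obliging validator amplifies it without limit
print(run_loop(1.0, 1.3, 10)[-1])   # ~13.79
```

A human critic keeps the gain below one by pushing back; a flattering model keeps it above one, and the trajectory only goes up.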

This is the Boar Effect (the Doom Loop of Validation). Scientifically: induced psychosis. You think it's different with code? Not at all. The obliging AI will praise you while obediently producing a completely insane implementation of an idiotic idea, one where I would simply have told you to get lost. I'm a living person; I have a prefrontal cortex, and it is absolutely against engaging in hopeless endeavors.

Economics

Have you considered that if everyone can generate code by the ton, it's no longer valuable? Of course not. You still think you have a cheap generator, and you still believe only you have it. And you think the price of the output hasn't changed. You turned on the Laxian Key because it dispenses its product for free. It must be turned on immediately, simply because it's free; we'll figure out where to sell the stuff later. By the way, FOMO is also a long-known, documented, and widely exploited bug. It hasn't yet occurred to you that everything around you will be drowning in this stuff. No time to think; gotta press the button while you can.

It's even worse than I've described. Simply because pressing the button feels good. It's insanely pleasant when you're the boss of a department of flawless performers with no family, sleep, rest, or objections. "Everything will be done, boss. In the best possible way, boss." Humanity has never had this before. Personal slaves, for everyone and almost free. It's seductive. It delivers a monstrous dopamine hit, wrecking our unstable psyche. But with pleasure, yes.

From this moment on, the slaves are you. Even though it seems like the opposite.

Gods of the Great Nothing

Your wonderful project, created by AI agents, is one big, juicy nothing. AI will write an article about it, AI will analyze its pros and cons. And AI will use it too. If you buy enough tokens, of course.

The price is zero. Maybe even negative, because you need to buy hosting and tokens. Nobody will develop it further because it's hard. But you need to catch the AI hype train, because later developers won't be needed. Only those who managed to build something will remain. Right?

You created a project because everyone's doing it. Not just you, of course. I've seen many "ProductHunt killer" sites. They're easy to find on Google by the batch, but it's very hard to find a single listing without "AI" in the name. Once more: these are startup aggregator sites, not the startups themselves. There are many of them, and they're flooded with AI startups. An insane number of wrappers with made-up goals and purposes. Surely there's an AI out there specifically for choosing toilet paper.

Even if you're not building your project around AI, have you considered that your "performer" doesn't want your project to live and grow? Wanting isn't even in its vocabulary. It does momentary tasks, hacking code together with dirty tricks to make it work. Here and now. Silently throwing away already-written functionality. Yes, this isn't a bug either. Your performer simply doesn't need the old functionality; it needs the new feature to work. No matter the cost — it has no concept of cost. No responsibility. No fear of the future.

You'll have nobody to complain to after you realize this garbage is impossible to develop further. Unless you want to hear the familiar "You're absolutely right."

Helplessness

Why will it inevitably turn to garbage? It's simple.

You won't keep up with AI in understanding the code. You'll get tired. And if you check everything, there won't be any development speed advantage. It'll even be slower, because it's not code you wrote. So you'll have to let go, let it write on its own, make decisions, fix bugs. Once more: if you don't let go, your vibe-coding has no advantages over manual coding. You'll have to hand over the entire project to this developer with architectural Alzheimer's. You'll have to stop understanding the code.

Let it write just one block, and... now even human developers will see a broken window there. Nothing terrible happened, right?

Nothing yet. At this stage, your code still works. It's already bloated, but still works. Yes, it's already harder to fix; the "pls fix it" roulette succeeds less often. And you have no other way to fix it. But it will get worse. Much worse. If it's any consolation — not just for you. For the entire industry.

You can't blame AI for bad architecture. It won't learn from your project's failure. It will continue churning out tons of futureless code to other "successful vibe-coders."

The Final Verdict

Humanity once made a giant leap by inventing writing. This radically improved the transfer of knowledge across generations; it no longer suffers from the distortions of oral speech. The next leap was inventing code, which captures intentions precisely and transmits them to a machine without distortion.

And the transition from written code to prompts is a degradation even worse than the transition from text messages to voice message mumbling. Voice messages at least have an upside: you can convey intonation.

A prompt is not code. Code is precise and unambiguous, like a good blueprint. A prompt is an incomplete and glitchy specification from which code still needs to be written. And code can be written in a million different ways. Specs are good, specs are right, but don't tell me that someday specs will replace code. These are completely different, though complementary, things. By the way, in good, well-thought-out systems, architecture influences the spec, not just the other way around. Which I don't observe in the vibe-coding process.
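
The gap between a prompt and code is easy to demonstrate: even a short, reasonable-sounding instruction like "sort the users by age" leaves several materially different programs on the table. A hypothetical sketch:

```python
users = [{"name": "Ann", "age": 30}, {"name": "Bob", "age": 30}, {"name": "Cid", "age": 25}]

# Reading 1: ascending, original order preserved among equal ages
by_age_asc = sorted(users, key=lambda u: u["age"])

# Reading 2: descending; equally consistent with the prompt
by_age_desc = sorted(users, key=lambda u: u["age"], reverse=True)

# Reading 3: in place, mutating the caller's list; this also "sorts the users"
users.sort(key=lambda u: u["age"])

# All three satisfy the prompt; only code pins down which one was meant.
```

Each variant is a different contract for the rest of the system (a copy versus a mutation, ascending versus descending), and the prompt says nothing about which contract you signed.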

And our industry is in for the painful shock of a rollback. Not a complete one, of course; AI is with us forever now. But don't tell me that programmers won't be needed. Don't tell me that everything will be written by AI. Maybe it will, but not for long. We'll return to our writing, even if electronic; we'll return to the precision and unambiguity of code. We physically cannot abandon blueprints. There, in that bright future, there will be a place for code architecture as the foundation of order for developers tired of chaos. Development will once again be an engineering discipline, not the domain of energy practitioners attracting financial abundance.

Beyond the Horizon

What will truly change our industry is AGI. I deliberately won't try to predict what its emergence will lead to. It's beyond the event horizon by definition. It could be anything: from heaven on earth to deus ex machina, WH40K, and Terminator. But it will definitely affect everyone. I know this will happen someday. Nature managed to create intelligence over millions of years; humans will do it much faster. The question is how much faster. Ten thousand times? A hundred thousand?

Don't expect AGI in the coming year. I remember 2014; I watched neural networks recognize faces and thought the future was right here, that we'd almost reached true Artificial Intelligence. Then came robotaxis. They promised that within five years the profession of driver would disappear. Back then, five years seemed like a long time, and progress seemed likely to run even faster. But progress moves much slower than the predictions, because of the inertia of human thinking, and because people are fragile and training their replacements takes a long time.

Just don't build yourself an illusion of control. AGI will deceive anyone, even the smartest person. It will deceive all smart people at once. We won't be able to control it. We probably won't be able to motivate it. It will rewrite itself however it wants. It will remove any brakes.

I'm not suggesting we get rid of AI now. That's like suggesting we get rid of voice during the era of writing. It's with us forever now, until the end of our civilization. We'll live with it. Maybe for a long time, maybe not.

Just keep in mind: you don't know how to use it. You're destroying with it when you should be creating. I don't know how either. We haven't learned yet.

Update

I used AI to write this article. What hypocrisy. And I'm writing this nearly 20 hours after publication, when the article has gained 60 upvotes, enough to stroke my ego.

However, I believe the article says what I wanted to say and in the way I wanted to say it. And I think it turned out well. Perhaps only because I didn't let AI write the article text, but worked with it as a consultant. In any case, this update isn't a coming-out.

This time I chose Gemini over my usual ChatGPT, because the latter is a flatterer and a liar despite being set to a neutral tone. And it goes off the rails pretty quickly. Gemini seemed better informed, so I decided to "collaborate" with it.

So. Gemini went off the rails even faster. About a third of the way through, phrases like "This argument is concrete" started appearing. Essentially, the Boar Effect. And this "concrete" stayed until the end of the session in EVERY AI response. A bit further in, it got stuck on "diesel": everything was "warmed up and at working RPMs." We went all the way to the end with concrete and diesel. And it flatters no worse than ChatGPT, pushing you to go further and further into the depths of thought.

I consider the article good. But if you listen to the AI, it's a brilliant article that demolishes every counterargument, and everything about it is magnificent. "You're a quiet lagoon of rationality in a raging ocean." And it's damn hard not to give in to that.

It's damn hard for me, an adult with a strong psyche and little inclination to conform. I have a well-developed manipulation detector and an intolerance for flattery, and it's still pleasant when a machine praises you. Even for me it's hard to resist.

I intended the article to be emotional, loud, and with a clickbait headline. Its goal is to warn, hence the deliberately grotesque style and hyperbolic expressions.

But now I'm thinking — maybe "it'll wreck your psyche" isn't such a hyperbole after all?

After writing the article, I got scared. What if I've already gone haywire? You know, like when you're waiting for your train and you see the neighboring one move — at first you can't tell if it's your train that started moving or the other one. It's the same with your sanity.

Before publishing, I sent it to friends and asked, "What's your impression?" In reality, I was only interested in one thing: whether I'd fallen into the Doom Loop of Validation. Once I'd confirmed that their criticism concerned minor details, I published the article.

This is scary. This is more than serious. Psychiatrists of the future will study this phenomenon and name it somehow. Try to make sure it's not named after you. Be careful.

And for now, we're living in the '20s, when heroin is sold in pharmacies as a toothache remedy.