Returning to Thinking After Months of Coding with LLMs

A senior developer describes how months of heavy reliance on AI coding tools led to cognitive atrophy and an incoherent codebase, and explains the balanced approach he adopted to reclaim his engineering judgment.

TL;DR: LLMs are decent at coding, but in large projects they produce a tangled mess. I reduced my use of AI for coding and went back to using my brain, pen, and paper.

Several months ago, I needed to build new infrastructure for my SaaS, because the PHP+MySQL stack no longer met our requirements. I was eager to seize this opportunity to make maximum use of all the new LLMs I had been experimenting with. So I temporarily stepped down from my role as a software developer, becoming my own product manager. I discussed technologies with Claude, conducted my own research, and after many iterations, put together a plan. In the end, I decided to use Go+ClickHouse.

When it was time to start coding, I asked Claude to generate a large, detailed markdown file describing my old infrastructure and the desired new one, listing what I wanted to achieve, why I needed it, and so on.

Then I threw it all into Cursor Notepads and started writing prompts. Cursor writes code, I build and test it. I was fairly happy with what was happening — the codebase wasn't the cleanest, but it seemed to work. Speed of development mattered more to me than code cleanliness — my SaaS business clients said they needed certain data, and this new infrastructure was the only way to deliver it. I had several more potential clients waiting for me to say everything was ready so they could purchase a plan. Until it's ready, I'm literally losing money every day.

But a few more weeks passed, and the first signs of trouble began to appear. The process started to frustrate me. Every day I felt like I was very close to the final product, but then I'd find another issue that pushed the release back by a few more days. I justified it by telling myself I'd never worked with Go and ClickHouse before, so it made sense that fixing problems would take a bit longer than I was used to. But the problems kept coming, and my frustration kept growing. Cursor wasn't really helping me anymore. I'd copy-paste error messages and get a fix in return, but then something would break somewhere else. The more detailed the problem, the harder it was for the LLM to provide an actual solution. So I started examining the code more carefully myself, trying to understand it.

I've been a software developer for fifteen years, and I studied C++ and Java in college, so I roughly understood what was going on in those Go files. But I didn't have a grasp of the optimal patterns for Go and ClickHouse.

I started studying them more deeply. I read documentation, articles, watched YouTube videos about ClickHouse. I became more probing in my questioning of Claude, asking detailed questions and challenging its answers.

One morning, I decided to carefully review all the code that Cursor had written. I don't mean to say I was blindly writing prompts without looking at the output, but I was optimizing for speed, so I wasn't spending time on code review. I just kept building, building, and building.

So I conducted a "code review session." And I was horrified.

Two service files in the same folder with similar names, obviously performing very similar tasks. But the method names differed and the properties were inconsistent: one was called WebAPIprovider, the other webApi, and both referred to the exact same parameter. The same method was declared multiple times across different files. The same config file went by different names and was processed by different methods.

No consistency, no overarching plan. It was as if I'd asked a dozen juniors or mid-level developers to work on this codebase without access to Git, locked each one in a separate office, and forbidden them from communicating.

And yes, I gave the LLM context — a whole heap of context. I mostly used Gemini specifically because of its large context window. Every time I needed a new iteration of a particular type of file, I gave specific instructions for the LLM to use it as an example. But it wasn't enough.

Going Back

By this point, it was clear that the approach needed to change. I'm first and foremost a software developer, so it would be stupid not to use the bulk of my skills. I started studying Go and ClickHouse more seriously and stopped chasing development speed. I opened file after file and rewrote code. Not everything — just the stuff that made me feel nauseous. The language may be different, but I understood how things should look and what the structure should be.

Since I went back to basics, debugging became easier. I wasn't moving as fast, but I no longer had that strange feeling of having supposedly written this code while having no idea what was inside it. I still use LLMs, but for dumber tasks: "rename all occurrences of this parameter" or "here's pseudocode, generate the Go equivalent."

The hardest part was resisting the temptation to use AI. I have this amazing tool that can write ten files in a minute. I'm wasting time by not using it! And then it hit me: I've been using my brain less and less. I subconsciously gravitate toward using AI for everything code-related.

I've been writing less on paper. When I need to plan a new feature, my first thought is to ask o4-mini-high about it rather than engage my neurons. This infuriates me. And I need to change it.

So yes, I'm concerned about the impact of AI — not because of the risk of losing my job, but because of the loss of mental clarity, the ability to plan features, to write beautiful and functional code.

So I took a big step back, severely limiting my AI usage. By default, I started using pen and paper, writing the first draft of a function myself. And if I'm unsure, I ask the LLM to check whether it's a good solution, whether the names are well-chosen, and I ask how to finish the last part.

But I don't ask it to write something new from scratch, come up with ideas, or write an entirely new plan. I write the plan. I'm the senior developer. The LLM is the assistant.

The Golden Mean

After I changed my approach, LLMs stopped frustrating me. My expectations of them became very low again, so when they do something right, it pleasantly surprises me. I try to use them wisely — for example, they're a very good learning tool. I use them to study Go, to improve my skills, and then I apply that new knowledge in coding.

But I'm worried about the no-coders. I'm almost certain that things are now worse for them than in the pre-AI "no code" era. No-code tools, at least, were written by people with common sense, and despite their limited features they imposed some structure.

"Vibe coding" or whatever they're calling AI-assisted coding without knowing how to code — it's a path to disaster if you're building anything even slightly more complex than a small prototype.

I can't imagine what coding with Cursor looks like for someone who doesn't know how to write code. Although, maybe I can. Walls of code that you can't understand, error after error that you paste into the chat window, getting back even more code that further tangles and complicates the task, and then becomes outright wrong. And none of it can be fixed.

A Message to the AI Enthusiasts

I can almost hear the crowd of AI coding specialists yelling at me: you should have used <the-latest-model>! You should have used Cursor rules! You should have implemented this fifteen-step process I recently found on Reddit!

I did. I actually tried. You just have to accept that today, AI simply cannot do certain things.

And although I haven't tried every combination of tools, agentic workflows, and so on, I'm still confident in my opinion. If you don't believe me, try asking an LLM — without knowing ClickHouse — to write a complex query touching multiple tables with over a hundred million rows without causing memory errors on a server with limited RAM.

It simply won't manage, even if you give it the full SQL schema, even if you point it to the latest ClickHouse documentation, even if you describe the business requirements and infrastructure constraints in meticulous detail. The latest Gemini won't manage, o4-mini-high won't manage, o3 won't manage, Sonnet 3.7 won't manage.
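To make the challenge concrete: on a memory-constrained server, a large aggregation in ClickHouse usually needs its memory behavior spelled out explicitly in the query, and that was exactly the kind of knowledge the models kept missing. Here is a sketch of what I mean, with invented table and column names; the SETTINGS names themselves are real ClickHouse settings that let the query spill to disk instead of exhausting RAM:

```sql
-- Hypothetical schema: 'events' and 'users' each hold 100M+ rows.
-- Without the SETTINGS clause, the aggregation state for a join this
-- size can exceed a small server's RAM and abort with a memory error.
SELECT
    u.country,
    count() AS events,
    uniqExact(e.user_id) AS unique_users
FROM events AS e
INNER JOIN users AS u ON u.id = e.user_id
GROUP BY u.country
SETTINGS
    max_memory_usage = 4000000000,                   -- hard per-query cap (~4 GB)
    max_bytes_before_external_group_by = 2000000000, -- spill GROUP BY state to disk past ~2 GB
    max_bytes_before_external_sort = 2000000000,     -- same for sort stages
    join_algorithm = 'partial_merge';                -- merge join instead of an in-memory hash join
```

Even then, whether a query like this runs reliably depends on the table engines, the sorting keys, and which side of the join fits in memory — details a model cannot infer from a schema dump alone.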

Besides, if I need to spend many hours setting up an elaborate system just so AI doesn't build a house of cards instead of the application I need, is it even worth it? Especially when model accuracy is inconsistent. Even if you find the perfect workflow, it won't stay error-free for long. Or it'll stop working as soon as you need something slightly different.

Let me be clear: I'm writing this as someone who is extremely enthusiastic about new technologies, who loves being an early adopter, and who is still excited about AI. I think and hope that sooner or later it will become smart enough, but right now the situation is very strange — the tools look amazing, everyone says they're magnificent, but in reality they're good but not perfect; and on top of that, they might be making us dumber.

A strange situation. It's as if I need to get somewhere, and I can either walk or fly in a spaceship at 1,000 km/h, but the labels on the controls are half in Hungarian, half in Ancient Greek. Through trial and error, I can probably fly to where I need to go, but that's a serious effort in itself, and in the end I'll be left wondering whether it would have been better to just walk.

Also, I'm constantly nagged by the feeling that we're being deceived by benchmarks, by influencers who can now sell gold-diggers fancy new magic shovels, and by a bunch of companies trying to convince us that this is an "agent" and not just another cron job.

Sometimes I think we're being deceived by the LLM developers themselves. Go to any AI subreddit, and you'll find people with completely opposite experiences despite using the same model with the same prompt on the same day. If you've been coding with AI long enough, you know this. One day it's brilliant, the next day it's mind-numbingly stupid.

Are they throttling GPUs? Are these tools simply uncontrollable? What the hell is going on?
