Has AI coding gone too far? I feel like I'm losing control of my own projects
I wanted to share some thoughts on AI coding assistants that have been bothering me for a while, and I think the analogy of "a kid with a credit card" perfectly captures the danger of what some call "vibecoding." At least until we have true AGI, this feels like a serious issue.
After using Cursor intensively for the better part of a year, I'm stunned by how fast it is. It can scaffold entire features, wire up components, and write complex logic in seconds. The feeling is like the difference between driving a car with a manual versus an automatic transmission. Or maybe, more accurately, like the difference between reading detailed documentation versus just watching a summary video.
It's brought me back to when I first started using GitHub Copilot in 2023. Back then, it was mostly for autocompleting methods and providing in-context suggestions. That level of assistance felt just right. For more complex problems, I'd consciously switch contexts and ask a web-based AI like ChatGPT. I was still the one driving.
But tools like Cursor have changed the dynamic entirely. They are so proactive that they're stripping me of the habit of thinking deeply about the business logic. It's not that I've lost the ability to think, but I'm losing the ingrained, subconscious behavior of doing it. I'm no longer forced to hold the entire architecture in my head.
This is leading to a progressively weaker sense of ownership over the project. The workflow becomes:
1. Tell the AI to write a function.
2. Debug and test it.
3. Tell the AI to write the next function that connects to it.
Rinse and repeat. It's fast, but I end up with a series of black boxes I've prompted into existence. My role shifts from "I know what I'm building" to "I know what I want." There's a subtle but crucial difference. I'm becoming a project manager directing an AI intern, not an engineer crafting a solution.
This is detrimental for both the individual developer and the long-term health of a project. If everyone on the team adopts this workflow, who truly understands the full picture?
Here’s a concrete example that illustrates my point perfectly: writing git commit messages.
Every time I commit, I have a personal rule to review all changed files and write the commit message myself, in my own words. This forces me to synthesize the changes and solidifies my understanding of the project's state at that specific point in time. It keeps my sense of control strong.
If I were to let an AI auto-generate the commit message from the diff, I might save a few minutes. But a month later, looking back, I’d have no real memory or context for that commit. It would just be a technically accurate but soulless log entry.
I worry that by optimizing for short-term speed, we're sacrificing long-term understanding and control.
Is anyone else feeling this tension? How are you balancing the incredible power of these tools with the need to remain the master of your own codebase?
"But a month later, looking back, I’d have no real memory or context for that commit."
This is the exact same feeling I got when I was coding things before AI. There's a meme we had at the office where someone runs git blame on the shitty code and realizes they wrote it. We tried to have people do tech talks about the hardest things they had to build, but most people don't even remember them after six months.
I think when people are in flow, they act possessed. It's not even a matter of muscle memory fading; they make the same mistakes. They add a line that fixes Bug A but causes Bug B, then remove the line causing Bug B, and Bug A regresses. Then they scratch their heads wondering why Bug A is back.
I have the opposite experience with vibe coding. I know where the models are. I know what's in the DB, the tables, every migration, even though I focus on FE. I can tell the AI where the files are before it even loads the first prompt.
I know when the AI is creating memory leaks, whereas I usually miss my own, because I'm deep in the code and forget obvious things like destroying a thread or noticing a coroutine running in the wrong scope.
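To make that last point concrete, here's a minimal Kotlin sketch of the kind of leak I mean (the PriceScreen class and its price flow are hypothetical, and it assumes kotlinx.coroutines): a collector launched in GlobalScope is never tied to the screen's lifecycle, so it quietly keeps running after the screen is closed, while one launched in a screen-scoped CoroutineScope is cancelled with it.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Hypothetical screen that watches a stream of prices.
class PriceScreen(private val prices: Flow<Double>) {

    // Scope tied to this screen; everything launched here is cancelled in close().
    private val screenScope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    @OptIn(DelicateCoroutinesApi::class)
    fun startLeaky() {
        // Wrong scope: GlobalScope is never cancelled, so this collector keeps
        // running (and keeps the screen reachable) long after the screen is gone.
        GlobalScope.launch {
            prices.collect { render(it) }
        }
    }

    fun startScoped() {
        // Right scope: torn down together with the screen in close().
        screenScope.launch {
            prices.collect { render(it) }
        }
    }

    fun close() = screenScope.cancel()

    private fun render(price: Double) = println("price=$price")
}
```

Reviewing a diff from the outside, it's easy to flag the GlobalScope.launch; when you're the one in the middle of the feature, it's exactly the kind of thing you stop seeing.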
Exactly.
Plus, these are the kinds of AI posts that always make me laugh. It's like saying calculators are bad because if you only punch in calculations and never think about them, you never learn how to do math.
If you don’t know what your code does when you use AI, that's a poor reflection on you as a developer: you committed code to the project without understanding it.
What makes it more humorous is that I'd guarantee most of those same people don’t have a clue what's in the packages they pull in and use, but suddenly the AI is to blame for their own willful failure to understand their code.
You make a fair point about git blame—we've always forgotten our old code.
But there's a crucial difference between forgetting the "what" (the specific lines of code) and forgetting the "why" (the architectural trade-offs and business reasons). My concern is that AI-driven development accelerates the forgetting of the "why."
Yes, I think many people are starting to feel this exact tension.
The key, I believe, is to mentally reframe the AI: it's not the driver, it's your assistant — a helper, a debugger, maybe even a silent teacher. But you're still the architect. You're still the creator.
The problem begins when we forget that. When we let the AI lead the design, the structure, the reasoning. That’s when we start losing ownership — and understanding.
Ironically, AI was built to help us — not to replace our thinking. But without solid fundamentals, it's easy to let it take over. And then we're just directing prompts, not building things we truly understand.
> The key, I believe, is to mentally reframe the AI: it's not the driver, it's your assistant — a helper, a debugger, maybe even a silent teacher. But you're still the architect. You're still the creator.
we won't be deciding how to do things anymore, but what to do. case in point: "Mr. La Forge, run level 3 diagnostics on the warp drive." La Forge clicks a button on his iPad
at some point even this will go away, and AI will decide what we want to do by analyzing our brains through a neural link
at some point in the far future even this will go away too, as parts of our brains are gradually, and eventually completely, replaced by synthetic AI brains
Did you use an LLM to edit your comment? I am not casting aspersions, just trying to figure out if I am intuiting it correctly or not.
Haha, nope — just me. I guess that's my inner ex-humanities student showing through.
Sometimes I write in bursts, get carried away with the rhythm, and then end up editing like crazy to make it all make sense.
Em-dashes are just my way of thinking out loud — but with structure.
It's the em-dashes — they were rare before 2021 and now they're in every paragraph.
I would say the problem is twofold:
- current technological limitations of LLMs
- how industry uses LLMs
we should not be working with code at all; the new level of abstraction is requirements and conversations with AI. that is what should be committed into git, and then you would simply "compile" your conversation and requirements into code
and if you focused on requirements and prompts as the new "code", maybe you would feel like you were getting some of that power and understanding back?
but obviously, for many reasons such as context windows and hallucinations, this is not possible... yet
> "But a month later, looking back, I’d have no real memory or context for that commit."
If I work on something else for a month I basically feel like I'm looking at an old project for the first time when I return to it anyway.
I’ve defined how I want commit messages to be written.
In many ways it’s way better than what I was writing myself, because I sometimes like the verbosity.