Escaping the GSD Communication Trap: Why I Built an "Architect-Executor" AI Loop
Or: How I stopped negotiating with a talkative parrot and learned to split the brain

Let’s get one thing straight right out of the gate: I am not here to trash the GSD (Get Shit Done) movement. If you’ve been on Twitter recently, you’ve seen the hype. “I built a fully functional SaaS in 14 minutes!” “My AI wrote a marketing platform while I was making an espresso!” It’s wild, it’s exciting, and honestly? It’s exactly the kick in the pants our industry needed.
GSD proves that the initial barrier to entry has evaporated. Ideas are king again. If you have a decent prompt and a strong vision, you can brute-force a prototype into existence before your coffee gets cold.
But... (and you knew there was a but coming).
If you’ve actually spent the last month trying to push a non-trivial application past the “cute prototype” phase using just ChatGPT or Claude in a single massive prompt window, you’ve probably hit the exact same brick wall I did.
It’s not that the AI can’t code. The AI is a savant. The problem is the crushing communication overhead.
The “Junior Dev on Adderall” Scenario
Here’s an anecdote for you. A couple of weeks ago, I was building a fitness app dashboard. It was going beautifully. I told the model, “Hey, redesign the home screen to look like a premium dark-mode tracker.” It did. It looked amazing.
Then, a few feature requests came in. We needed to add a historical data view to the workout sets. Easy, right?
I pull up my IDE, write a carefully crafted prompt to the model: “Hey, add previous weight and reps below the input fields in the active workout component.”
The model spits out the code instantly. I paste it in, hit save, and the entire global state management crashes. Why? Because while I was focusing on the UI in my prompt, the model “helpfully” noticed that my Redux store looked a bit old-school to its training data, so it silently refactored my entire state shape under the hood without telling me.
Thanks, buddy. Now the user’s “water intake” is somehow linked to their “bicep curl” max.
Now, you might think this is just a skill issue. And sure, partly it is. But the GSD “one giant prompt” approach has a few fundamental problems that go well beyond a single bad refactor. First, there’s Context Window Degradation - the technical one. But honestly? That’s just the tip of the iceberg. The bigger issues are the ones nobody talks about: you have zero visibility into what’s actually being built, and you’re so deep in the weeds micromanaging every prompt that you’ve stopped engineering and started babysitting.
Think about how LLMs actually work under the hood. It’s not a hyper-intelligent junior developer - it’s essentially a very talkative parrot with Alzheimer’s. When you maintain a single, massive conversation thread, the parrot is forced to drag thousands of tokens of historical baggage along with every new prompt. It vaguely remembers that three hours ago you asked it to try a different color scheme; it remembers the broken code it generated before the fix; and it’s trying to balance all that contradictory memory with the new business logic you just requested.
Eventually, the context window gets muddy. The parrot starts hallucinating constraints that don’t exist anymore, or worse, completely forgets the core architecture you established at the start of the chat. Every time you hit an obstacle or the plan changes, you have to stop, write a new novel explaining the context, negotiate the changes, and manually remind the parrot of its own name. You spend less time engineering and more time acting as a Jira ticket translator.
I realized the issue wasn’t the AI. The issue was threefold. First, yes, trying to force one brain to do both high-level system architecture and low-level syntax implementation in the same context window is asking for trouble. But the other two problems were sneakier: I had absolutely no documentation of what had been built, no living roadmap, no source of truth beyond a 47-message chat history that I’d have to re-read every morning just to remember where I left off. And worst of all, I was so deep in the prompt-tweak-paste-pray cycle that I wasn’t actually thinking about the product anymore. I was just reacting. I had no helicopter view. I was the helicopter.
This led me to an experiment I’ve been running for the past few weeks, and frankly, it feels like the missing link. I call it the CEO Loop (or the Architect-Executor pattern). It doesn’t just fix the context window problem. It gives you back the thing you actually lost: visibility, control, and the mental space to think about what you’re building instead of how you’re prompting. It separates the “thinking” from the “doing” and puts you firmly in the driver’s seat.
Role 1: The Live Architect (e.g., Gemini 3.1 Pro / Antigravity)
This is your CTO. Your technical partner. The Architect does not write code.
Instead, the Architect has a live, “Helicopter View” of your entire file tree. It lives in your environment. You don’t ask it to write a React component; you throw business problems at it.
“Hey, we need to add a dynamic ‘Add Set’ button to the workout screen, but it needs to tie into the local Zustand store without breaking the existing mocks.”
The Architect reads your files, figures out the dependencies, and writes a living, breathing blueprint (I usually use a literal implementation_plan.md and a devlog file). The Architect is the keeper of the state. It documents the progress, notes the technical debt, and maintains the high-level roadmap.
Here’s the killer feature: It adapts on the fly. If the Architect looks at your store and realizes, “Oh wait, the user’s MOCK_DATA is structured as an array, not a dictionary,” it adjusts the plan dynamically before any code is written.
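To make this concrete, here’s roughly what the plan file can look like. The implementation_plan.md filename is from my actual setup; the section structure and the file/store names below are just an illustrative sketch, not a prescribed format:

```markdown
<!-- implementation_plan.md — illustrative sketch; names are hypothetical -->
# Feature: Dynamic "Add Set" button (workout screen)

## Current state
- `WorkoutScreen.tsx` renders a static list of sets
- Zustand store `useWorkoutStore` — note: MOCK_DATA is an **array**, not a dictionary

## Plan
1. Add an `addSet(exerciseId)` action to the store (append to the array)
2. Render an "Add Set" button below the last set row
3. Keep the existing mocks untouched — extend the shape, don't reshape it

## Out of scope
- No refactor of the store shape (logged as tech debt in the devlog)
```

The point isn’t the exact headings; it’s that the plan lives on disk, not in a chat scrollback, so any agent (or you, tomorrow morning) can pick it up cold.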
Role 2: The Executor (e.g., Claude 4.6 Opus)
Here is where the real magic of this pattern shines: You get to wipe the Executor’s context clean every single time.
Because the Architect is maintaining the high-level documentation and the project state in Markdown files, you don’t need Claude to remember yesterday’s conversation.
Once the Architect (with your blessing) has documented exactly what needs to be done, it spits out a single, perfectly parameterized, atomic prompt. An impeccable instruction set.
You hand that perfect spec to your Executor in a brand new, clean context window. Think of Claude in this scenario as the ultimate Craftsman. He doesn’t ask questions. He receives the Architect’s instructions, sees only the pure, relevant context needed for that specific feature, writes the exact code required, commits it to Git, and hands back a neat little report. Zero context degradation. Zero hallucinated dependencies from three hours ago.
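For illustration, here’s the shape such an atomic prompt can take (the file paths, store names, and commit message are hypothetical examples, not from a real project):

```markdown
## Task
In `src/stores/workoutStore.ts`, add an `addSet(exerciseId: string)` action
that appends `{ reps: 0, weight: 0 }` to the matching exercise's `sets` array.

## Context you need (and nothing else)
- Store shape: `exercises` is an **array** — do NOT convert it to a dictionary
- Pattern to follow: the existing `updateSet` action

## Constraints
- Do not touch any other file
- Do not refactor the store shape
- Commit as: "feat(workout): add addSet action"

## Report back
- Files changed, lines added/removed, any deviations from this spec
```

Notice what’s missing: the entire history of the project. The Executor sees one task, one shape, one set of constraints.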
Role 3: Closing the Loop (You)
You take the Executor’s report, pass it back to the Architect to run type-check and verify the Git tree, and if it passes, you move to the next feature.
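The verification pass is mostly mechanical, which is exactly why the loop is cheap to close every time. A minimal sketch of what “verify the Git tree” can look like - simulated here in a throwaway repo so it’s self-contained; in a real project you’d skip the setup and also run your type-checker (e.g. `npx tsc --noEmit`):

```shell
set -e

# Simulate the post-Executor check in a throwaway repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "dev@example.com"   # throwaway identity for the demo
git config user.name "Dev"
git commit -q --allow-empty -m "baseline"

# Pretend the Executor just delivered its commit:
echo "export const addSet = (id: string) => id;" > workoutStore.ts
git add workoutStore.ts
git commit -q -m "feat(workout): add addSet action"

# The actual verification pass:
git log --oneline -2     # did the Executor's commit land, with the agreed message?
git diff HEAD~1 --stat   # is the change scoped to the files the spec allowed?
# npx tsc --noEmit       # and does the whole project still type-check?
```

If the diff touches files the spec forbade, you don’t argue with the Executor - you hand the report back to the Architect and let it tighten the next prompt.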
You are no longer micromanaging the syntax. You have become the Chief Engineering Officer.
Code Less, Engineer More
The hype around autonomous bots generating thousands of lines of code is real. But if you want to build maintainable, production-ready software without losing your mind, you need more than just a fast code generator. You need structure. You need visibility. You need to actually know what’s happening in your own project without re-reading a 200-message chat thread.
By splitting the task into a Live Agent (Architect) that dynamically adapts to reality, and a relentless Executor that just builds to spec, you stop fighting the LLM’s limitations and start leveraging its superpowers.
Give this dual-loop setup a try. Your mental health (and your global state management) will thank you. Now, if you’ll excuse me, my Architect just told my Executor to replace a sparkly AI icon with a picture of the Terminator, and I need to go see if they accidentally triggered Skynet.
Keep building, friends. 🚀

