This post isn’t about the usual “how to prompt AI” advice.
It’s about lesser-known habits that can seriously upgrade the quality of code you get from LLMs.
If the generated code isn’t what you expected, say it clearly and early.
Why this matters:
The second attempt is often more thoughtful than the first.
Even if the solution works perfectly, try this:
“Improve this code.”
You’d be surprised how often the AI still finds something to improve, even in code that already works.
Iterative refinement with AI is like having a senior reviewer on demand.
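To make this concrete, here is a hypothetical before/after pair (the function and data are invented for illustration): the kind of first draft an AI often produces, and the tightened version a simple “Improve this code” follow-up can yield.

```python
# First attempt an AI might produce: correct, but membership checks on a
# list make it O(n^2), and the intent is buried in mechanics.
def unique_emails_v1(users):
    result = []
    for user in users:
        email = user["email"].strip().lower()
        if email not in result:
            result.append(email)
    return result

# After "Improve this code": a set gives O(1) membership checks while a
# separate list preserves first-seen order.
def unique_emails_v2(users):
    seen = set()
    unique = []
    for user in users:
        email = user["email"].strip().lower()
        if email not in seen:
            seen.add(email)
            unique.append(email)
    return unique
```

Both return the same output; the second is simply what a reviewer would have asked for anyway.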
AI performs dramatically better when it understands your architecture and code structure.
Instead of only saying:
“Write a method that does X”
Try:
“This belongs to the service layer. We follow a layered architecture: Controller → Service → Repository. Where should this logic live?”
Now the AI can place the logic in the correct layer and follow your existing conventions instead of guessing.
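As a minimal sketch of the layering that prompt describes (the class names `UserController`, `UserService`, and `UserRepository` are illustrative, not from the post):

```python
class UserRepository:
    """Data access only: no business rules in this layer."""

    def __init__(self):
        self._users = {1: {"id": 1, "name": "Ada", "active": False}}

    def find(self, user_id):
        return self._users.get(user_id)

    def save(self, user):
        self._users[user["id"]] = user


class UserService:
    """Business logic lives here, as the prompt specifies."""

    def __init__(self, repo):
        self._repo = repo

    def activate_user(self, user_id):
        user = self._repo.find(user_id)
        if user is None:
            raise ValueError(f"unknown user {user_id}")
        user["active"] = True
        self._repo.save(user)
        return user


class UserController:
    """Translates requests into service calls; no business rules."""

    def __init__(self, service):
        self._service = service

    def post_activate(self, user_id):
        user = self._service.activate_user(user_id)
        return {"status": 200, "body": user}
```

With the architecture stated up front, the AI can be asked to extend `UserService` rather than dumping business logic into the controller.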
But here’s where things become powerful.
Instead of manually describing the system, you can use an AI Prompt Builder that automatically enriches your request with real project context extracted from your codebase.
For example, the CppDepend AI prompt builder can provide structural information such as dependency relationships and code-quality metrics drawn from your codebase.
This context can be injected into the AI prompt so the model doesn’t just guess — it understands the structure of your system.
Now AI is no longer just a code generator.
It becomes an architecture-aware assistant working with real data from your project.
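The idea can be sketched in a few lines. This is not CppDepend’s actual API; `build_prompt` and the context keys below are assumptions, showing only the general shape of injecting extracted project context into a request.

```python
def build_prompt(task, context):
    """Prepend machine-extracted project context to a task prompt."""
    lines = [
        "You are working inside an existing codebase.",
        "Project context (extracted automatically):",
    ]
    for key, value in context.items():
        lines.append(f"- {key}: {value}")
    lines.append(f"Task: {task}")
    return "\n".join(lines)


# Hypothetical context a tool might extract from the codebase.
context = {
    "architecture": "Controller -> Service -> Repository",
    "target layer": "Service",
    "related types": "UserService, UserRepository",
}
prompt = build_prompt("Add a method to deactivate a user.", context)
```

The model now answers against your real structure instead of a generic mental model of “a web app.”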
After generating multiple AI-driven methods, pause.
AI optimizes locally, not globally. You must step back and review the whole: look for duplication, inconsistent naming, and logic that landed in the wrong layer.
AI accelerates coding — but you are still the architect.
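One cheap way to start that global review, sketched here with the standard library (the two snippets are invented examples of helpers generated in separate sessions):

```python
import difflib

# Two helpers an AI might generate in separate sessions: same logic,
# different names. Locally each looks fine; globally they are duplicates.
SNIPPET_A = """
def load_config(path):
    with open(path) as f:
        return json.load(f)
"""

SNIPPET_B = """
def read_settings(path):
    with open(path) as f:
        return json.load(f)
"""

# A rough textual similarity score is enough to flag candidates for merging.
ratio = difflib.SequenceMatcher(None, SNIPPET_A, SNIPPET_B).ratio()
if ratio > 0.8:
    print(f"Near-duplicate helpers (similarity {ratio:.0%}): consider merging.")
```

A real codebase would use proper tooling, but even this crude check makes the point: the duplication is invisible unless someone looks across the methods, and that someone is you.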
This is one of the most powerful and underrated tricks.
After the code is generated, ask:
“Review this code as if you were a senior engineer. What would you change?”
Now the AI switches roles: from creator to reviewer.
This often reveals edge cases, hidden assumptions, and design weaknesses you would otherwise miss.
You essentially get a free design review.
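If you do this often, it is worth keeping the critic prompt as a reusable template. The wording below is an assumption, not a fixed formula:

```python
# Illustrative review-prompt template; adjust the wording to taste.
REVIEW_PROMPT = """\
You wrote the code below. Now act as a senior reviewer.
List concrete problems: edge cases, naming, coupling, performance.
Do not defend the code; critique it.

{code}
"""


def make_review_prompt(code):
    """Wrap generated code in the reviewer-role prompt."""
    return REVIEW_PROMPT.format(code=code)
```

Telling the model explicitly not to defend its own output is the key: without that, it tends to rationalize rather than review.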
AI doesn’t replace engineering judgment — it amplifies it.
The best results come when you use AI as:
Generator → Improver → Critic → Assistant,
while you stay the architect.