The most underused capability of an AI coding assistant isn't code generation — it's explanation. Ask it to walk through how an unfamiliar piece of code works. Ask it to explain why a dependency was designed the way it was. Ask it to describe the tradeoffs between two approaches you're considering. Ask it what the code you just wrote will do with edge case input X. These uses produce understanding rather than output, and understanding is more durable.
This matters because the production pressure is always toward generating. You have tickets to close, features to ship, bugs to fix. The assistant is fast at generating code, and it's easy to fall into a pattern where every interaction is a request for output. But a codebase full of code you don't fully understand is a liability — and AI-assisted development can accumulate that liability faster than traditional development, because the generation is so fast and the temptation to accept without comprehending is so strong.
Using the assistant to understand what it just wrote is not a sign of weakness. It's the appropriate response to working in a medium where comprehension doesn't automatically accompany production. A senior developer reviewing a junior's pull request doesn't accept code they don't understand. The same standard applies when the junior is an AI.
There's also a learning dimension. Asking the assistant to explain an approach you haven't used before — and then asking follow-up questions until you genuinely understand it — is one of the fastest ways to build knowledge in an unfamiliar domain. The assistant is patient, available at any hour, and won't make you feel bad for asking the same question three different ways until it clicks.
The output is the visible part of the work. Understanding is what makes the output maintainable.