The highest-value use of an agent is freeing up human attention for the work that actually requires it. The lowest-value use is automating judgment calls that deserve careful human thought, saving a few minutes while introducing errors that cost hours to find and fix. Telling the two apart means deciding which parts of the work are boring and which are interesting — and that decision requires honest self-assessment.

Boring parts have some recognizable properties. The task is repetitive. The criteria for success are clear and consistent. Getting it wrong is recoverable. The domain is well-understood and the edge cases are enumerable. Reformatting data. Generating boilerplate. Summarizing routine documents. Checking for common errors in well-understood categories. These are tasks where agents add value without adding risk, because the judgment required is low and the feedback loop is short.

Interesting parts have different properties. The criteria for success involve tradeoffs that depend on context you understand and the agent doesn't. Getting it wrong has consequences that compound. The domain involves institutional knowledge that isn't in the context window. The work requires synthesis of information that the agent has but you're not sure it's weighing correctly. Architecture decisions. Consequential user communications. Anything where the cost of a plausible-sounding wrong answer exceeds the cost of thinking it through yourself.
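The two property lists above can be read as a triage checklist. Here is a minimal sketch in Python — the field names, example tasks, and the veto rule are illustrative assumptions, not a prescribed implementation — showing the conservative shape the text implies: every "boring" property must hold, and a single "interesting" property keeps the human in the loop.

```python
from dataclasses import dataclass

@dataclass
class Task:
    # "Boring" properties from the first list (hypothetical names):
    repetitive: bool              # same shape every time
    clear_criteria: bool          # success is unambiguous and consistent
    recoverable: bool             # a wrong answer is cheap to catch and fix
    edge_cases_enumerable: bool   # the domain is well understood
    # "Interesting" properties from the second list:
    needs_institutional_knowledge: bool  # context outside the context window
    compounding_consequences: bool       # errors propagate downstream

def delegable(task: Task) -> bool:
    """Delegate only when every boring property holds and no
    interesting property does; one miss vetoes delegation."""
    boring = (task.repetitive and task.clear_criteria
              and task.recoverable and task.edge_cases_enumerable)
    interesting = (task.needs_institutional_knowledge
                   or task.compounding_consequences)
    return boring and not interesting

# Illustrative instances matching the text's examples:
reformatting_data = Task(True, True, True, True, False, False)
architecture_call = Task(False, False, False, False, True, True)
```

The asymmetry is the point of the rule: a task must earn delegation on every axis, while a single context-dependent property is enough to keep it with a human.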

The practical discipline is to notice when you're delegating the interesting parts to avoid the cognitive effort they require. This happens more than people admit. The agent produces something that's probably fine, and you pass it on because checking it carefully would take as long as doing it yourself, and you're busy. This is the path toward a system where you've nominally automated a task but actually just deferred accountability for it.

Stay close to the work that matters. The agent's job is to give you more of that, not less.