I got a question from a reader after Issue #4 dropped.
"How did you know the receipt workflow was ready to automate?"
Good question. Most people skip it entirely.
They see a repetitive task and their first instinct is: automate it.
Sometimes that's right. Sometimes that's how you automate a broken process and get broken results faster.
Here's what I actually did, and the framework I've been studying since.
The Assessment Came First
Before I touched a single automation in that receipt workflow, I asked one question:
Does this require judgment, or does this require execution?
Execution: The task follows a predictable pattern. The inputs are consistent. The right output is definable. A wrong output is immediately recognizable.
Judgment: The task depends on context. The "right" answer shifts based on factors that aren't always visible. Experience changes the outcome.
For the receipt workflow, every step was execution: categorizing, logging, routing. Repetitive. Consistent. Definable. No ambiguity about what correct looked like.
I assessed it. I confirmed it. Then I automated it.
That assessment step is what most people skip.
The Three Modes You're Actually Choosing Between
I've been studying a framework lately that puts language to something I've been doing without thinking about it. It breaks human-AI collaboration into three modes:
Automation: You define the task. AI executes it. You're not in the loop once it's running.
Augmentation: You and AI work together. Back and forth. You bring domain expertise; AI brings speed and pattern recognition. Both contribute to the outcome.
Agency: You configure AI to act on your behalf, independently. It's not waiting for your input. It's operating from the judgment you've already built into it.
Three modes. Very different use cases. Very different readiness requirements.
The mistake I see constantly: people jump from zero to Agency. They want the AI to operate independently before they've documented what good looks like. Before they've even tested it in Augmentation mode.
That's not delegation. That's giving up control.
The Sorting Decision
Here's the three-question framework I use before deciding which mode a task belongs in:
Question 1: Is the output verifiable without expertise?
If someone with no background in your field can look at the output and tell you whether it's right or wrong, that's a strong signal the task is ready for Automation.
If verifying the output requires your specific knowledge, experience, or judgment, stay in Augmentation. You need to be in the loop to catch what AI can't.
Question 2: Have you documented the judgment behind this task?
This is the one that gets people.
If you can write down exactly what a correct output looks like (the criteria, the edge cases, the "when X happens, do Y" rules), then you have what AI needs to execute reliably.
If your answer is "I know it when I see it"?
That's not documentation. That's expertise still living in your head.
You can't delegate what you can't define. You haven't defined it yet. Stay in Augmentation until you can.
Question 3: Can you recover if AI gets it wrong?
Some tasks have reversible errors. Miscategorized receipt? Fix it. Wrong draft subject line? Rewrite it.
Some tasks don't. Client-facing communication sent without review. Financial decisions executed without oversight. Anything where a wrong output creates consequences you can't walk back.
If you can't recover easily, human in the loop. Always. Regardless of how confident you are in the automation.
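If you like thinking in code, the three questions collapse into a simple decision rule. Here's a minimal sketch; the function name and mode labels are mine, just a way to make the ordering explicit (recoverability is checked first, because it overrides everything else):

```python
def sorting_decision(verifiable_without_expertise: bool,
                     judgment_documented: bool,
                     errors_recoverable: bool) -> str:
    """Map the three questions to a collaboration mode."""
    # Question 3 first: unrecoverable errors mean human in the loop, always.
    if not errors_recoverable:
        return "Augmentation (human in the loop, always)"
    # Question 2: undocumented judgment can't be delegated yet.
    if not judgment_documented:
        return "Augmentation (document the judgment first)"
    # Question 1: if only you can verify it, you stay in the loop.
    if not verifiable_without_expertise:
        return "Augmentation (you need to verify outputs)"
    # All three pass: ready to automate.
    return "Automation"
```

Notice that every failing answer routes you back to Augmentation, just for different reasons. Automation is the narrow gate at the end, not the default.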
What This Looked Like in Practice
The receipt workflow from Issue #4 passed all three:
The outputs were verifiable. Anyone could confirm a receipt was categorized correctly.
The judgment was documented. The rules were clear: this vendor goes to this category, this spend threshold gets flagged, this format gets logged this way.
The errors were recoverable. A miscategorized receipt is an inconvenience, not a crisis.
Green across the board. Automation was the right call.
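"Documented judgment" sounds abstract until you see how small it actually is. A toy sketch of what those receipt rules might look like written down, with hypothetical vendors and a made-up flag threshold (not my actual rules):

```python
# Hypothetical examples only: your vendor list and threshold will differ.
VENDOR_CATEGORY = {
    "AWS": "Cloud infrastructure",
    "Staples": "Office supplies",
}
FLAG_THRESHOLD = 500.00  # spend above this gets flagged for review


def route_receipt(vendor: str, amount: float) -> dict:
    """Apply the documented rules: category by vendor, flag by threshold."""
    # Unknown vendors fall through to manual review instead of a guess.
    category = VENDOR_CATEGORY.get(vendor, "Needs review")
    return {
        "category": category,
        "flagged": amount > FLAG_THRESHOLD,
    }
```

If you can fill in a table like that for your task, the judgment is documented. If you can't, you're still in "I know it when I see it" territory.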
But here's what that assessment also did: it forced me to document the judgment before I built the automation. Not as an afterthought. As the thing that had to come first.
That's the sequence most people reverse.
They build the automation first, then wonder why it keeps producing outputs they have to fix.
The documentation isn't the boring part you do after. It's the work that makes automation possible.
Where Most People Are Actually Stuck
They're not stuck on the technology. They're stuck in the middle, running everything through Augmentation because they've never made the sorting decision.
Every task gets the same treatment: send it to AI, review the output, tweak it, move on. Repeat forever.
That's not a system. That's a more expensive way to do what you were already doing.
The sorting decision changes that. It gives you a path forward:
Tasks that pass the three questions: move to Automation. Build the rule once. Stop reviewing.
Tasks that fail question 2: stay in Augmentation, but use that time to document. You're not just getting help. You're extracting the judgment you'll need to eventually automate.
Tasks that fail question 3: keep human in the loop, full stop. Protect the things that matter.
Over time, your Automation column grows. Your Augmentation column gets more intentional. Your Agency column, AI operating independently on your behalf, becomes something you actually trust because you built the foundation for it.
Your Action This Week
Pick one task you're currently running through AI in Augmentation mode.
Ask the three questions.
If it passes all three, document the rules and move it to Automation. Stop reviewing outputs you don't need to review.
If it fails question 2, use your next three Augmentation sessions to document the judgment you're applying. Write down what you're correcting and why. Those are the rules you're still building.
The sorting decision isn't a one-time exercise. It's a habit.
And it's how you stop treating AI like a better search engine and start treating it like the system it's capable of being.
Simple. Not easy. Worth it.
If you're working through the sorting decision and want a second set of eyes on what's ready to automate vs. what needs documentation first, that's exactly what we do.
— Jay
Founder, Clarity2Scale Consulting
Process-First AI Strategist

