Time to Ditch the AI Binary
Beep, boop, bot
AI runs on 1s and 0s. Binary is clean. It’s precise. Yet how that binary gets interpreted, through the layers of code that run today’s AI and automation technology, is anything but precise.
So when I hear HR or marketing leaders ask, “Should we use AI for this?”, I know we’re stuck in the wrong kind of binary thinking. That’s not anyone’s fault, though.
The way most teams have been taught to think about AI is incredibly reductive: automate or don’t. Human or machine. Yes or no. That might’ve been fine when automation meant a glorified macro. But now we’re building systems that talk back, generate ideas, even “reason.” The stakes are higher and the question needs to grow up.
We need to start asking different questions. Stop asking whether AI belongs. Start asking what kind of partnership you’re designing between AI and your people.
Let me explain.
The Question That Gets You Nowhere
Almost every AI conversation I have starts with some version of:
“We’re thinking of using AI to do [X]. Have you used this before? Should we?”
That’s like asking, “Should we use electricity?” It’s too broad, too vague, and completely divorced from the actual work. Some tasks are ripe for full automation. Others? You’d be out of your mind to take the human out of the loop.
There’s a Better Question and a Smarter Framework
Stanford researchers recently published a framework called the Human Agency Scale (HAS), and although it’s dense, as academic research tends to be, it’s also surprisingly pragmatic.
HAS maps AI-human collaboration across five levels, from full AI control (H1) to fully human-led tasks (H5). It’s not flashy, but it’s honest. And honestly, that’s what most teams need right now. We should talk about how AI should show up in the work, not just whether it should.
Here’s the shorthand from the research:
H1: AI runs the show. Humans aren’t needed.
H2: AI leads, humans keep an eye on it.
H3: It’s a true partnership where both contribute.
H4: Human leads, AI supports in the background.
H5: AI stays out of it. Human-only zone.
The intent is simple: Start by intentionally choosing the right level of AI for the task, not simply saying yes or no to it.
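If your team keeps any kind of task inventory, the five levels are easy to encode and audit. Here’s a minimal Python sketch; the task names and level assignments below are illustrative examples, not findings from the Stanford research:

```python
from enum import Enum

class HumanAgency(Enum):
    """The five Human Agency Scale (HAS) levels, paraphrased."""
    H1 = "AI runs the show; humans aren't needed"
    H2 = "AI leads; humans keep an eye on it"
    H3 = "True partnership; both contribute"
    H4 = "Human leads; AI supports in the background"
    H5 = "Human-only zone; AI stays out"

# Hypothetical task audit -- these assignments are my examples,
# chosen only to show the idea of mapping tasks to levels.
task_audit = {
    "inbox triage": HumanAgency.H2,
    "summarizing claims notes": HumanAgency.H3,
    "breaking bad news to a customer": HumanAgency.H5,
}

for task, level in task_audit.items():
    print(f"{task}: {level.name} ({level.value})")
```

The point of writing it down, even this crudely, is that every task gets an explicit level rather than an implicit yes/no.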
It’s intuitive enough that it might not seem like a revolutionary line of thinking. But I still get the binary question over and over, so clearly the idea needs reinforcing even when it feels obvious.
Design for Trust
Another finding from the research is that AI that erodes human agency tends to backfire. AI that respects and amplifies human skill? That’s what ends up working best for everyone. The HAS isn’t just for leaders making decisions; it’s also a framework employees can use to think and talk about where they want help.
The best use cases aren’t typically the flashiest that you might hear about (or that science fiction writers have dreamed up). Instead, they’re the ones that remove the grind without stripping the soul out of the job. Inbox triage, not creative direction. Summarizing claims notes, not breaking bad news to a customer.
Next time you’re evaluating AI for a workflow, don’t ask whether it should be automated. Ask:
What’s the ideal mix of human and machine here?
Who needs to stay in control?
Will this make the work more meaningful—or more miserable?
Want to dig into this more? I wrote way more words about it for Reworked in one of my monthly columns.
What else did I see this week?
Your Prize for Saving Time at Work With AI: More Work via WSJ
$21 for an InMail? You’ve got to be kidding me via Shannon Pritchett
When It Comes to AI Talent, $100 Million Can’t Buy What Matters via Reworked
That’s it for this week!