The Turn: AI Agents as Employees? Not So Fast
New research confirms the unintended consequences of treating machines like people
Back in the summer of 2024, Lattice rolled out digital workers on the org chart, complete with onboarding and (of course) performance management. I wrote on Reworked at the time that the move was both bold and misguided. The program was quickly shelved and forgotten by all but a handful of folks.
It turns out they were ahead of their time, both in embracing AI agents as kinda-employees and in turning away from the idea.
Welcome back to The Turn. This week, we’re covering digital workers and unintended consequences.
New research covered in HBR shows what happens when we humanize the machines at work rather than treat them as what they are: brittle, lacking accountability, and ultimately deployed by humans who should be monitoring their digital lackeys more closely.
First, let’s talk about accountability. The research showed that when an AI “employee” made a mistake, people blamed the AI. One participant even said, “The blame isn’t on a person; it’s on the technology.”
But who created, deployed, and monitored the agent? What happens when an agent delivers substandard work? What if it does something against the law? Certainly a person, not a machine, will be held to account.
When an AI “employee” creates work, not only does the burden of review shift to a person (which has been well covered); the framing also undermines reviewers’ confidence. Of note from the article: “Several participants explicitly referenced questioning their own skills, doubting whether they had identified all issues, or feeling the need to re-verify work they would typically accept at face value.”
That shaken confidence also led reviewers to catch fewer errors in work framed as done by an AI “employee” rather than an AI tool: nearly 20% fewer errors caught on average.
Framing AI tools as employees also made people worry about their own identities, fueling concerns about job security and lowering trust in how the organization would use AI. As one person said, “If you want people to feel like they will lose their job to AI, or can be easily replaced by AI, then put it on the org chart.”
For organizations, all of that may not matter. We can deride people who reject AI “employees” as horse-buggy enthusiasts, self-interested, or simply and stubbornly human. Yet adoption shows no meaningful improvement when AI is positioned as an “employee” versus a tool. Instead, adoption is driven the old-fashioned way: human dynamics, manager encouragement, and clear expectations.
So, all of those downsides, and for what? You get to pretend you’re ahead of the curve while actively harming the work and your employees, with no improvement in outcomes.
The article is worth a read to understand how to avoid all of this, but the biggest lesson is simple: words and positioning matter.
What else is happening this week?
Life, Agency, and the Pursuit of Ozempic. I missed this one, but Max Bayram ties Kyla Scanlon’s Ozempicization essay to Palantir’s manifesto to AI tools and lands on the same mechanism underneath all of them: the appearance of agency is the product, and actual agency is what gets extracted in exchange for it.
The Engineer Who Won’t Use AI with Andrew Norcross. Laurie Ruettimann interviews Andrew Norcross, who built the architecture for NASA.gov, the New York Times, and Disney, and now paints fences in Florida while waiting for the AI bubble correction he’s certain is coming. He won’t ship code he doesn’t understand, and “quadruple your AI costs,” he says, “and tell me it’s saving money.”
Ten Years Later, I Still Believe in Better Technology. Shannon Pritchett is back with a new newsletter, and her reread of a 2016 sourcing article lands on the same indictment: the industry confused automation with innovation, and the original assignment (giving recruiters visibility into people they couldn’t see before) still isn’t done.
AI and Hiring Alignment Report 2026. Tim Sackett shared this, and Metaview’s data on AI in recruiting is worth a read for anyone who wants numbers behind the conference talk.
Research: Why You Shouldn’t Treat AI Agents Like Employees. BCG and Boston University put it on paper: giving AI agents employee-style roles drops individual accountability, lowers review quality, and increases escalation, all without improving adoption.
I Built My AI Brain in 3 Hours. Jess Von Bank keeps building in public, and her “Judgment Stack” framework explains precisely why most people’s AI output sounds like everyone else’s: they stopped at identity and never got to reasoning.
Agentic AI in Talent Acquisition. Madeline Laurano’s hosting a webinar on what’s changing in TA. This one is worth getting on the calendar (tomorrow!).
OK Doomer: AI Isn’t Killing Jobs, Brought to You by AI Investors. Wait, What? Steve Smith on Galloway and a16z both publishing “relax, AI won’t kill jobs” essays the same week Freshworks laid off 11% and credited AI: “That’s not analysis. That’s asset protection with a Substack account.”
White House’s Hassett: AI Isn’t Costing Anybody Their Job Right Now. Kevin Hassett says there’s no sign in the data that AI is displacing workers, joining a chorus that keeps getting harder to hear over the layoff announcements.
Most HR Leaders Think Their Job Is to Support the Business. That’s a Ceiling. Jennifer McClure puts it plainly: the seat-at-the-table conversation is over, and the question now is whether you’re shaping the agenda or still asking for a chair.
Job Descriptions Are Quietly Becoming the Dividing Line. Jason Lauritsen and Lisa Sterling on the podcast this week: job descriptions were built for factories a hundred years ago, and the best work on your team is probably happening outside the box you put people in.
The Talent Assessment Market Has a Risk Problem. Charles Handler built an AI to track 400-plus assessment vendors and the finding that should stop you cold: at the highest AI risk levels, only 9% have an I/O psychologist anywhere in the process.
The Workday-A16z Debate Has Five Smart Takes. Here’s What They’re All Missing. George LaRocque adds the one angle nobody brought: $24.5 billion into Workday-competitive categories since 2017, and the enterprise ceiling has held every single time.
The AI Bill Is Coming Due, and CHROs Need to Be Ready. George is on a roll. Consumption costs are moving through vendor P&Ls into your renewals, and the average enterprise AI budget went from $1.2 million to $7 million in two years. Your CFO has entered the building.
Hundred Years, All New People. Tyler Weeks on the time his boss couldn’t fire him so she announced his reassignment to the entire HR department, and what he learned from surviving it: the only real license to lead comes from discovering that the worst case was finite.
The How #25, May 7, 2026. Kate Achille, newly promoted to CEO of The Devon Group, lost her dog Ruby Sue and went back to work the next day, and uses that to make the case that grief policies shouldn’t require anyone to justify who they loved.
I SIOPed So Hard This Year. Alexis Fink on the Hot Ones session at SIOP: engagement surveys without follow-through are worse than doing nothing, and most analytics teams maintain dashboards nobody visits. Both true, both will be ignored.
TechWolf, Deep Tech Meets Work Tech, Context Graphs, Transforming Work. Thomas Otter has money in TechWolf and says so, then makes the case that the context graph layer won’t be owned by the frontier model providers or the ERP incumbents, and explains why that’s the most important open question in work tech right now.
Workplace Misconduct Hits Near 7-Year High as Reporting Confidence Rises. 55% of employees experienced or witnessed misconduct in 2025, nearly back to the 2019 peak; reporting confidence is at record levels; and the question of whether AI caused any of this, or will fix any of it, remains unanswered.
Have a great rest of your week!


