The Turn: Why Failure Was on Stage at Unleash
AI failures are inevitable, but they don’t have to be fatal
Welcome back to another issue of The Turn.
I made it back to an Unleash conference, my first in more than a decade. I came in the day before and plowed through meetings, sessions, and conversations with vendors.
But what got me was the keynote. It wasn’t on AI (as one would expect) but on failure.
Failure, by the way, isn’t a popular conference topic. Shocking, I know. Vendors don’t build booths around it. Nobody wants to tell stories about failures.
Amy Edmondson is different, though. She’s spent 30 years studying failure, and her argument cuts against the dominant mood of every AI conversation happening in organizations right now. “Our intuition about failure fails us,” she said. We learned failure was bad before we could reason about it.
Her framework separates failures into three types. Basic failures are preventable. They are often single-cause, in familiar territory, and better systems can drive them toward zero. Complex failures are the perfect storms where multiple factors converge into a disaster that hinges on no single point.
Intelligent failures, on the other hand, are what you get when you run a real experiment in new territory: the outcome wasn’t what you wanted, but the information was exactly what you needed.
Intelligent failure has specific requirements: a credible hypothesis, a goal the organization actually cares about, and a scope sized to generate knowledge rather than to generate a report. Those requirements exist because the failure is supposed to teach you something. “An intelligent failure in your organization happening a second time,” she said, “is no longer intelligent.”
Edmondson described a conversation with senior executives at a financial services company who told her that failure might be acceptable in good times, but right now, facing uncertainty, “it’s important that everything go well.” Her response? “When you implicitly or explicitly declare failure off limits, you should expect to be kept in the dark.”
An AI story without the AI mention
That’s the AI implementation story playing out right now across most large organizations.
The pilots are staffed with enthusiastic early adopters under controlled conditions. The goal, even when nobody says it out loud, is to produce a result that justifies the next stage of investment. Yet the next stage inevitably fails to deliver value (like most AI initiatives to date). As she says, “the pilot didn’t prevent the fiasco, because the pilot was a success.”
A real pilot looks different. It runs under representative operating conditions. Its explicit goal is to learn, not to demonstrate.
What makes that hard isn’t process design, though. It’s what Edmondson describes as the anxiety zone: the environment where someone on the ground thinks “I’m afraid I need help. I can’t ask for it. I will look foolish.”
Organizations willing to say publicly where AI is working and where it isn’t, to treat the honest account as more valuable than the polished one, are going to learn faster than everyone performing confidence. The ones that can’t will find out what they missed when it’s expensive.
Education, early career professionals, and HR’s role
I was also invited to a discussion about how education is struggling to keep up with the changing needs of industries. It was a diverse group of workforce leaders, researchers, consultants, and folks like me who really wanted to hear what people saw as an opportunity.
Alexandra Levit led the discussion and it was enlightening. She’s the author of many books, including Make School Work, a focused look at work-based learning.
We covered reducing the stigma around trades, introducing AI training earlier, and the challenge of creating a national policy when so much of what we’re doing is run state by state, or even at the level of an individual school.
I came away with two takeaways.
Many trades have already figured out the high-school-to-career pipeline. It’s one of the meaningful ways that districts have kept graduation rates steady while giving students the opportunity to jump-start their careers. White-collar work could (and should) spend time learning from them.
The second takeaway is a little more troubling. As we dive head first into AI in education, I continue to wonder how we build up skills like judgement. One of the things I heard from Alexandra is that she can’t use anything out of Claude as is. How does she know it’s not ready for prime time? Years of experience doing the work by hand. Seeing how words and numbers come together not only gives you judgement, it gives you taste.
Building taste and judgement has always just been a repetition game. Do things enough times and you figure out what good should look like and what it takes to get there. But if AI is doing the heavy lifting, how can you judge something as good or complete?
AI boosts developer performance (with a catch)
I’ve strung together a number of posts recently that are critical of AI, so much so that I got asked at Unleash if I was anti-AI. My latest on Reworked talks about AI actually coming through on its promises: freeing people up to do work. And, while AI has been lauded as an accelerant for mid- to senior-level developers, it’s actually doing numbers with junior developers.
Of course, that comes with a catch.
AI has killed a lot of the collaboration within development teams. Instead of asking a coworker how to solve a problem, developers are solving it with AI.
Which is good and bad. Good in that senior devs can focus instead of fielding questions. But knowledge transfer? Judgement? The why behind certain decisions? Those things are missing.
That’s what I worry about. We don’t understand the long-term consequences of this, either. Maybe things will turn out fine. But instead of wishing and hoping, I wonder what we can do to intentionally build the skills necessary to be a good partner with AI if and when we use it. How do I know what to look for? What are my own tendencies as a human being with this very agreeable technology?
I am a huge fan of failure. It's the only way to try stuff and learn. It's also the only path to better questions.