When the Pressure to Use AI Doesn't Match What You're Feeling
You're not broken. The situation is just genuinely hard. Here's what you can actually do.
I want to talk to the engineers and engineering managers who are carrying something around right now that they haven't quite figured out how to say out loud.
Maybe it sounds like this: your leadership is pushing hard on AI adoption. Every sprint planning, every all-hands, every performance review cycle — the message is clear. Use AI. Use it more. Use it faster. Show us the metrics. Show us the token consumption. Show us how many agents you're running in parallel. Show us what you built this week that you couldn't have built last month.
And you're doing it. You're using the tools. You're generating code. You're shipping things. But something doesn't feel right, and you can't quite put your finger on what it is.
Maybe it's that the code going out the door doesn't feel like yours anymore. You're less confident in what's being shipped because you didn't write it, not really — you described it, and something else produced it, and you reviewed it as best you could, but the honest truth is that you're not entirely sure what's in there.
Maybe it's that the pace feels unsustainable. Not because the work is hard — you've done hard work before — but because the work feels unmoored. You're moving fast, but you're not sure where you're going. The definition of "done" keeps shifting. The quality bar feels like it's being renegotiated in real time, and nobody's saying it explicitly, but you can feel it.
Maybe it's that your performance is now being measured by something you don't fully control or understand. How much AI are you using? How clever are your prompts? How many lines of code did the model generate this week? These are input metrics dressed up as performance indicators, and some part of you knows they don't actually measure what matters — but you also know that pushing back on them carries risk.
If any of this resonates, I want you to know something: you're not behind. You're not doing it wrong. You're not the problem.
You're in the middle of a disruption, and disruptions are genuinely disorienting. Even for the people driving them.
What's Actually Happening
Let me try to describe what I think is going on, because naming it helps.
Executives across the industry are under enormous pressure right now. Not just to adopt AI, but to demonstrate that they've adopted AI. There's a very real fear — and it's not irrational — that a competitor will find the strategic advantage first. The board will ask why the company isn't moving faster. The market will punish companies perceived as behind.
So the directive comes down: move fast. Apply AI everywhere. Show results. And that directive isn't wrong in principle — there is real value in these tools, and organizations that figure out how to use them well will have a genuine advantage.
But the distance between "figure out how to use them well" and "use them as much as possible as fast as possible" is enormous. One is a strategy. The other is a velocity metric. And when the pressure to show velocity outpaces the organization's ability to define what "well" means, the people in the middle — the engineers, the tech leads, the engineering managers — absorb the gap.
You absorb it as stress. As confusion about what good work looks like now. As a nagging sense that you're building something you wouldn't have approved a year ago. As a disconnection from work that used to give you satisfaction, because the craftsperson in you is being asked to operate at a speed that doesn't leave room for craft.
I've felt all of this. I've spent more than two decades in this industry, and I've felt every one of these things in the last 24 months. The difference is that I've been in enough disruptions to recognize the pattern — and the pattern has a shape that you can navigate.
What's Driving the Pressure
It helps to understand where the push is coming from — not to excuse it, but to navigate it. And the honest answer is: it depends on who's pushing.
Some executives are genuinely navigating uncertainty. They don't have a secret playbook. They're watching the same demos, reading the same headlines, and hearing the same board questions you are. They've landed on a strategic imperative — make sure the company isn't caught flat-footed — and they're applying the tools they know: goals, metrics, deadlines, performance incentives. When they say "use AI more," they're often trying to express: "I need to know we're seriously engaging with this so we don't miss the thing everyone else finds first." It's a bet on exploration, expressed in the language of execution. There's a real translation gap there, and some of the friction you're feeling stems from it.
Some executives are performing adoption. The board wants an AI story. The investor deck needs a slide. The industry conference needs a talking point. In these cases, what gets measured isn't whether AI is producing better outcomes — it's whether the company looks like it is. Token consumption gets celebrated. Lines of generated code get reported. Trophies are awarded for usage volume. And the engineers absorb the cost of that performance in the form of technical debt, quality erosion, and the quiet stress of shipping things they don't believe in.
And some executives are pushing velocity because they genuinely don't understand what they're asking for. They see the demo — describe something, it appears, it works — and they conclude that the hard parts of software engineering were always unnecessary friction. Testing, architecture review, security validation, performance profiling — these look like overhead when you've never experienced what happens without them. The gap between "it compiles" and "it works reliably in production at scale" is invisible until it isn't, and by then the cost is real.
Most organizations have a mix of all three. The thoughtful strategist, the performative adopter, and the well-meaning executive who doesn't know what they don't know — they might even be the same person on different days. The point isn't to sort them into categories. The point is that understanding the source of the pressure helps you decide how to respond to it. Genuine strategic exploration deserves your engagement. Performance metrics deserve your skepticism. Uninformed velocity deserves your expertise — offered constructively, even when it's not being asked for.
What You Can Actually Do
So you're sitting with a project, a directive, or a direction that doesn't feel right. Maybe you're being pushed to ship AI-generated code that you're not confident in. Maybe you're being asked to build something in a way that your experience tells you will lead to problems. Maybe the pressure to demonstrate AI use is distorting priorities in ways no one is willing to name.
Here's what I've learned about navigating these moments — not just with AI, but across every disruption I've lived through in this career.
Move with the direction, not against it — but steer. If leadership has committed to an AI-first approach, standing in front of that train doesn't help anyone, including you. But you can shape how it lands. Instead of resisting the experiment, guide it toward a safe landing. Propose the quality gates. Suggest the measurement framework. Volunteer to run the retrospective. The person who helps an experiment succeed responsibly has more influence than the person who predicted it would fail.
Make the learning visible. Disruptions produce information. Every project you ship, every AI-generated codebase you wrestle with, every deployment that surprises you — these contain lessons that the organization needs. Document what's working and what isn't. Share it. Not as a complaint — as data. Write the internal post. Give the brown bag talk. Be the person who turns organizational confusion into organizational knowledge. That's leadership, and it's more valuable right now than clean code.
Reframe your expertise as leverage, not resistance. When you see a problem with an AI-generated architecture, you're not being a Luddite. You're applying years of pattern recognition to a new situation. The question isn't "should we use AI?" — that ship has sailed. The question is "how do we use AI well?" And the people best positioned to answer that are the ones who understand what good software looks like. That's you. Lean into that.
Protect the things that actually matter. You can't fight every battle. But you can identify the two or three things that would cause real damage if they went wrong — security, data integrity, customer trust — and make those your hill. Be specific about the risk. Quantify it if you can. Frame it in terms the business cares about: liability, customer impact, operational cost. Most executives will listen to concrete, quantified risk even when they're pushing for speed.
Take care of yourself. This is the one nobody says in a professional context, but it's the one that matters most. If you're carrying stress because the work doesn't feel right, that weight accumulates. Talk to someone — a peer, a mentor, a therapist if you have access to one. The disruption is real, but it's temporary in the way all disruptions are: the acute phase passes, the new normal emerges, and the people who navigate it best are the ones who didn't burn themselves out getting through it.
The Permission I Want to Give You
If you're a senior engineer or a tech lead sitting in a meeting where someone is presenting an AI-generated system and your instinct is telling you something is wrong, your instinct is probably right. You've spent years developing that instinct. It's not obsolete. It's more relevant than ever.
The challenge isn't whether your expertise matters. The challenge is figuring out how to apply it in a moment when the organization is moving faster than its own understanding. And that's a leadership problem, not a technical one.
Sometimes we find ourselves in the presence of other people's learning. An executive pushes a direction that turns out to be premature. A product team ships something that needs to come back for rework. A metric gets celebrated that doesn't measure what anyone thought it measured. Sometimes these are honest mistakes made in good faith. Sometimes they're the predictable result of decisions made without consulting the people who would have flagged the problems. Either way, the outcome lands on you.
That's not fair. It is, however, real. And you get to decide whether you engage with it or withdraw from it. The engineers who withdraw protect themselves in the short term but lose influence over time. The engineers who engage — who document the lessons, surface the risks, and help the organization learn from what just happened — end up shaping what comes next.
The engineers who will come out of this disruption with the most influence, credibility, and career capital are not the ones who generated the most lines of AI-assisted code. They're the ones who helped their organizations figure out what this technology is actually good for — and what it isn't. The ones who turned confusion into clarity. The ones who protected what mattered while staying open to what was possible.
That can be you. It might not feel like it right now, in the middle of the pressure, the confusion, the sprint deadline, and the metric that doesn't measure anything real. But disruptions end. And when this one does, the question people will ask isn't "How many tokens did you consume?" It's "What did you build that lasted?"
Find a way to make your answer to that question something you're proud of. Even if it's messy right now. Even if it's imperfect. Even if you're learning as you go.
We all are.
Cheers,
~ John