What If the Lights Go Out?

I've been building software for over twenty years. In that time, I've watched frameworks come and go, survived rewrites that should have killed companies, and developed a pretty reliable instinct for when something is hype and when something is real.

That instinct is confused right now.

Two Worldviews

If you're paying attention to the software development landscape in 2026, you're watching two very different belief systems collide.

The first one is familiar. Humans write the code. Humans review the code. Humans debug, refactor, architect, and ship. AI might help — autocomplete a function, suggest a test, speed up a migration — but the human is in the chair, hands on the keyboard, making the decisions. This is the world I've lived in my entire career. I know how it works. I trust it. I've built teams inside of it.

The second worldview is newer, louder, and frankly a little unsettling. In this version, humans don't write code at all. They don't even read it. Instead, they describe what should exist — the outcome, the behavior, the constraints — and a system of AI agents produces the software. Tests are generated. Builds are run. Deployments happen. The humans monitor outcomes and adjust specifications.

The manufacturing world already has a name for this: the dark factory. Factories where the lights are off because there are no humans inside. Robots don't need to see.

In software, the term has been borrowed by folks like Dan Shapiro, who put it at Level 5 of his AI-assisted programming taxonomy. StrongDM's engineering team published what might be the first real manifesto for it back in February — three engineers, no human-written code, no human-reviewed code. Specifications in, working software out.

When I first heard about it, the twenty-year veteran in me had a pretty immediate reaction.

That's insane. You can't not look at the code.

And honestly? I'm not fully over that reaction. But I've gotten curious enough to sit with it.

My Bias, On the Table

I want to be transparent about something. I carry bias here. Two decades of shipping software have taught me that the gap between "it compiles" and "it works in production at scale" is big. I've watched teams get burned by abstractions they didn't understand. I've inherited systems where nobody knew why a particular decision was made — and that was with humans making all the decisions.

So when someone tells me a model can produce production software without a human ever looking at what it wrote, every instinct I have says no. Not because it's impossible in theory, but because I've seen how many ways software fails that have nothing to do with whether the code runs.

Architecture matters. Context matters. The weird edge case your biggest customer hits on a Tuesday matters. The decision you made three sprints ago that constrains what you can do today — that matters. And I don't yet see how a model that operates without persistent memory or deep organizational context can reliably navigate those things.

That's my bias. And I'm choosing to put it aside for a while.

Why I'm Choosing to Explore This Anyway

Here's what changed for me. In the last three months, something shifted. Not one big breakthrough — more like a series of moments where I caught myself thinking, "Okay, that's actually interesting."

Models got better at holding context over long sessions. Agentic workflows began producing output that wasn't only syntactically correct but also architecturally coherent. Teams I respect — people with twenty years of experience building high-reliability systems — started talking about dark factory patterns not as thought experiments but as things they were actually doing.

And something happened to me personally that I didn't expect. I started prototyping again. Not because I had to — I manage teams, I'm not supposed to be the one writing code anymore — but because the conversations I was having about these ideas were so energizing that I wanted to get my hands dirty. I wanted to see where the walls actually are instead of just assuming I already knew.

I found myself not just thinking about tools anymore, but thinking about systems. Not "what can I vibe code in an afternoon" but "if I were really going to deploy a software factory inside an enterprise that depends on its software to run the business — what would that path actually look like?"

That question has been a lot of fun to sit with. And I don't say that about many things in tech anymore.

What I'm Actually Trying to Figure Out

I'm not trying to prove that dark factories work. I'm not trying to prove they don't. What I'm trying to do is explore the boundary — to walk toward the edge and see where it actually is, not where I assume it is.

Specifically, I want to understand:

What conditions would need to be true for a software factory to work inside a real enterprise? Not a greenfield weekend project. Not a demo. A company that has existing systems, existing customers, existing commitments, and real consequences when things break.

What are the actual limitations of general-purpose models when they're operating independently? Not the theoretical limitations people argue about on Twitter, but the ones you hit when you try to build something real.

And maybe most importantly, what would the humans do in a dark factory? If they're not writing or reviewing code, what does their job become? Is it specification design? Quality strategy? System architecture at a higher level of abstraction? Or is it something we haven't named yet?

I expect to hit walls. That's part of the point. The walls are informative. They tell you what's missing. And knowing what's missing is the first step toward figuring out what a path forward might look like.

The Mistake I Don't Want to Make

There's a version of me that dismisses all of this. That version says: "The models are too unreliable, the security gaps are real, the architecture concerns are valid, and this is all hype that will burn itself out like every other hype cycle."

And maybe that version is right. But here's the thing — I've watched enough technology transitions to know that the people who get caught flat-footed aren't usually the ones who explored too aggressively. They're the ones who assumed nothing would change until it already had.

As much as we can criticize the models, the approach, the security gaps — there is something possible here. I don't know how far it goes. I don't know if it will reach the enterprise in any meaningful way in the next year or the next five years. But I think at this point, it's a mistake not to apply critical thinking to this possible future.

Not cheerleading. Not dismissing. Critical thinking.

I've been leaning into that, and I'm finding it rewarding in a way that reminds me why I got into this industry in the first place. The prototyping, the conversations, the thought experiments — it feels like the early days again, when everything was a question, and every question was worth chasing.

So that's where I am. Walking toward the boundary. I'll let you know what I find.
