The Middle Distance Project
Most AI commentary lives at one of two extremes.
One end is the Silicon Valley forecast: full automation is two to four years away, every knowledge worker role will be transformed, the pace of change is unprecedented. This is not dishonest — the underlying technology is moving fast and the direction is real. But the timeline, the assumed starting conditions, and the implementation path bear almost no resemblance to the environments that most large organisations actually operate in.
The other end is dismissal: the hype will pass, our industry is different, we’ve seen this before. This is also not dishonest as a day-to-day survival posture. It is, however, a dangerous long-term strategy. The future is arriving unevenly, but it is arriving.
The Middle Distance Project tries to occupy the honest middle.
The wrong rooms
The AI conversation is happening in the wrong rooms. The people generating the loudest claims are mostly building for environments where change is fast, regulation is light, and data is clean. The people who run large-scale engineering programmes, regulated manufacturing operations, or complex procurement environments hear those claims, recognise that their world looks nothing like that, and go back to the meeting where three departments are arguing about which spreadsheet has the correct version of the data.
Both responses are rational. The gap between them is the problem.
A real asymmetry
There is an underappreciated asymmetry between the environments where AI is being developed and those where it will eventually need to land.
Consumer platforms and SaaS businesses can integrate AI tools directly into their development environments. They can experiment freely, iterate quickly, and use cloud infrastructure without significant friction.
Large-scale industrial programmes, regulated manufacturers, and government-adjacent organisations cannot do this — not because they are unwilling, but because the constraints are real. Sensitive data cannot go to third-party cloud services. Procurement processes require validation steps that take months. Software changes in live systems carry compliance obligations.
The result is that the gap between “what AI can do” and “what AI can do inside our organisation” is genuinely wide in these sectors — and it is wider than most AI commentary acknowledges.
But the gap is not permanent and it is not impassable.
What this space is for
This is not a prediction engine. No article here will tell you that X will happen by Y date. It is not a technology review site. And it is not a place to be told what to do.
Instead: here are the real developments, here is what they could mean for organisations like yours, here are several ways this could unfold, and here are the questions worth paying attention to.
The reader is invited to think, not told what to conclude.
The organisations this is written for are not slow because they are behind. They are slow because they are managing real constraints: regulatory requirements, security classifications, legacy infrastructure, supply chain dependencies, audit trails, and the accumulated weight of processes that exist for reasons. That context is taken seriously here.
I have stood in both rooms. I am not sure I have answers. But I think the questions are worth writing down.