Build Outside the Wall
Most conversations about AI adoption in regulated organisations focus on technology. Model accuracy, data quality, tooling maturity. All real concerns, but there is a different kind of friction that gets less attention, and it may be the more interesting one.
The environment where AI-assisted software needs to be built and the environment where it needs to run have very different requirements. In most regulated organisations, both happen in the same place. They do not have to.
The constraint
Every regulated organisation has a perimeter. Not just a network boundary, but a trust boundary. Inside it sit classified data, production systems, compliance obligations, audit trails, change management boards, security clearances. The perimeter exists for good reasons and protects things that matter.
But it was designed for operations, not for exploratory development.
The conventional approach
[Diagram: the conventional approach. AI developers work inside the organisation under operational rules — same constraints, same approval cycles, same pace. The team inherits every constraint of the operational environment.]
When you build AI-assisted tools inside the operational perimeter, every constraint designed to protect live systems also applies to development work. Experimenting with a modern LLM API means a procurement review. Trying a new framework means a security assessment. Iterating on a prototype means waiting for the change advisory board.
None of these gates are unreasonable in their original context. They were built to protect production systems from uncontrolled change, and they do that well. But development work produces no direct change to production. It is not obvious that it needs to pass through the same gates at every step.
In practice, AI projects inside these environments tend to move slowly, produce cautious outputs, and sometimes stall entirely. BCG’s 2024 research on AI adoption found that only 26% of companies have moved beyond proofs of concept, and that 70% of the obstacles are people- and process-related rather than technical (BCG, 2024 ↗). The people involved are usually skilled. The environment is working against them.
One possible separation
An alternative: build outside the perimeter entirely. The idea is not new. Lockheed’s Skunk Works ↗ operated on exactly this principle for decades: a small team, separated from the parent organisation’s processes, with the autonomy to choose their own tools and pace. They built some of the most complex aerospace systems of the 20th century faster and cheaper than any conventional programme could.
[Diagram: a separated approach. The development team sits outside the parent organisation (its operational teams and leadership) and works beyond the perimeter; the security boundary is never touched during development.]
The idea is to create a separate development environment that mirrors the structure of the live organisation but runs on anonymised or synthetic data. No classified information. No production dependencies. No clearance requirements for the developers working in it.
Inside this environment, the team can use LLM APIs, AI-assisted coding tools, cloud infrastructure, and fast iteration cycles. The things that the modern development world takes for granted become available.
The output, once it reaches a state worth deploying, crosses into the live environment through the normal change management path. The same validation gates still apply, but they apply to finished, tested work rather than to every step of the development process.
Whether this separation is practical depends on the organisation. Some environments can mirror their infrastructure cleanly. Others have dependencies between development and live data that are harder to untangle.
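To make the synthetic-data idea concrete, the sketch below shows two complementary techniques a mirrored environment might use: one-way pseudonymisation of real identifiers, and fully synthetic record generation that preserves only the shape of a production table. The schema and field names here are invented for illustration, not drawn from any real system.

```python
import hashlib
import random
import string

# Hypothetical value pools for a made-up "customer" table.
FIRST_NAMES = ["Alex", "Sam", "Jo", "Chris", "Pat"]
REGIONS = ["north", "south", "east", "west"]

def pseudonymise(value: str, salt: str) -> str:
    """One-way pseudonym for a real identifier: deterministic, so joins
    across mirrored tables still work, but not reversible to the original."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def synthetic_customer(rng: random.Random) -> dict:
    """A fully synthetic record: same structure as production, no real data."""
    return {
        "account_id": "".join(rng.choices(string.digits, k=8)),
        "name": rng.choice(FIRST_NAMES),
        "region": rng.choice(REGIONS),
        "balance_pence": rng.randint(0, 1_000_000),
    }

# Seeded generator, so the synthetic dataset is reproducible across runs.
rng = random.Random(42)
dataset = [synthetic_customer(rng) for _ in range(1000)]
```

Pseudonymisation keeps referential integrity when mirrored data must line up across systems; pure synthesis avoids carrying anything derived from real records at all. Which mix is appropriate depends on the organisation's data classification rules.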
Who does the work
A related question that comes up with separated development environments: who should staff them?
The natural instinct is to assign experienced internal people. They know the systems, the processes, the history. That knowledge is valuable, and any separated team probably needs access to it. But there is a tension.
What the team might need

- Resources: direct access to compute and APIs, without the lead times that operational procurement usually requires.
- Room to explore: freedom to choose tools and define workflows, with enough distance from day-to-day operations to try approaches that might not work.
- People who ask different questions: people who have not yet learned to see existing processes as fixed. Someone who asks "why does this take four weeks?" is sometimes asking the most useful question in the room.
Someone with fifteen years inside a programme knows where every process boundary sits. They know which systems connect, which workarounds carry real weight, and which meetings actually produce decisions. That understanding matters for operating the existing system.
It can also make it harder to see the system differently. When you know the history behind a process, you are less likely to question whether the process still makes sense. Clayton Christensen made a version of this argument in The Innovator’s Dilemma ↗: successful organisations fail not from incompetence, but because the very processes that make them effective also define their blind spots.
A newer person does not carry that context. They might look at a four-week approval cycle and wonder why it takes that long. They have not yet absorbed the assumption that four weeks is normal. Sometimes that question leads nowhere. Sometimes it turns out to be the right one.
Most likely, these teams need both: people who understand the operational reality, and people who are not yet shaped by it. How you balance those perspectives, and whether the experienced people advise rather than gatekeep, matters more than the exact headcount.
Getting the work back in
If development happens outside the perimeter, the handover becomes important.
[Diagram: the handover interface. The development team builds, iterates, and experiments freely. Work passes through a validation gate — code review and security audit, integration testing on mirrored infrastructure, compliance documentation, standard change management — and only validated output reaches the parent organisation. The team decides when work is ready; the organisation decides whether it meets standards.]
The existing change management process, security review, and compliance checks still apply. What changes is what arrives at the gate. Instead of incremental patches from months of constrained internal development, the gate receives finished, tested software that was built at a pace the technology actually supports.
The validation process does not need to get faster. What improves is the ambition and completeness of the work that reaches it.
There are open questions. Does the separated team build to the same architectural standards the live environment expects? Who owns the integration testing? What happens when something works well against synthetic data but behaves differently against the real thing? These are solvable problems, but they need answers specific to each organisation.
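As a rough sketch of how such a gate could be mechanised, the snippet below models a release candidate passing through a fixed set of named checks. The check names follow the validation steps described above, but the predicates are placeholders — a real gate would invoke an organisation's actual review and testing tooling rather than boolean flags.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Release:
    """Illustrative release candidate with the outcome of each review step."""
    reviewed: bool
    security_audited: bool
    integration_tested: bool
    compliance_documented: bool

# Each gate check is a named predicate over the release candidate.
CHECKS: list[tuple[str, Callable[[Release], bool]]] = [
    ("code review", lambda r: r.reviewed),
    ("security audit", lambda r: r.security_audited),
    ("integration testing", lambda r: r.integration_tested),
    ("compliance documentation", lambda r: r.compliance_documented),
]

def gate(release: Release) -> tuple[bool, list[str]]:
    """Return (promoted?, names of failed checks).

    The team decides when to submit; the gate decides whether the
    submission meets the organisation's standards."""
    failed = [name for name, check in CHECKS if not check(release)]
    return (not failed, failed)
```

For example, `gate(Release(True, True, False, True))` rejects the release and names "integration testing" as the failing step — the gate applies the same standards regardless of how the work was produced.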
What this approach surfaces
A separated team working with modern tools and fewer constraints will sometimes produce outputs that challenge existing ways of working. They may find that certain manual processes could be handled differently with better tooling, or that certain team structures exist to manage complexity that might be reducible.
Whether an organisation wants to hear that, and what it does with the information, is a separate problem. The separated environment surfaces it. It does not resolve it.
Where it gets difficult
This model has real failure modes. A team working outside the operational perimeter can lose touch with the problems that actually matter inside it. They can build software that is technically interesting but solves the wrong problem, or that works well in a clean environment but does not survive contact with the messiness of real operations.
The people who understand the operational context need to be involved, but in what capacity matters. If they can veto work before it is tried, the separation loses most of its value. If they help the team understand what matters and why, the two perspectives can complement each other.
There are also organisational politics to consider. A separated team can be perceived as a threat, or as a judgement on how things have been done so far. Managing that perception takes deliberate effort and honest communication about what the team is trying to do.
The structural separation removes some common failure modes, but it introduces new ones. Whether the trade-off makes sense depends on the organisation, the problem, and how much room there is to try something different.
This is one way to think about how regulated organisations might close the gap between what AI development needs and what their environments currently allow. There are others. The gap itself is not going away.