Block Diagrams Are People Too


For systems with high levels of complexity, such as organizations, business models, and cross-domain business processes, the standard approach is to characterize the current state, identify the future state, and figure out how to close the gap. That’s how I was trained. Simple, elegant, and it no longer fits me.

The block diagram of the current state is neat and clean. Sure, there are interactions and feedback loops, but known inputs generate known outputs. For me, though, there are problems with the implicit assumptions. Implicit is the notion that the block diagram correctly represents the current state; that uncontrollable environmental elements won’t change the block diagram; and that a new box or two and new inputs (the changes meant to achieve the idealized future state) won’t cause the blocks to change their transfer functions, disconnect themselves from other blocks, or rewire themselves to new ones.

But what really tipped me over was the realization that the blocks aren’t blocks at all. The blocks are people (or people with a thin wrapper of process around them), and it’s the same for the inputs. When the blocks turn into people, the complexity of the current state becomes clear, and it becomes clear it’s impossible to predict how the system will respond when it’s prodded and cajoled toward the idealized future state. People don’t respond the same way to the same input, never mind respond predictably and repeatably to new input. When new people move into the neighborhood, the neighborhood behaves differently. People break relationships and form new ones at will. For me, the implicit assumptions no longer hold water.

For me, the only way to know how a complex system will respond to rewiring and new input is to make small changes and watch what happens. If the changes are desirable, do more of that. If the changes are undesirable, do less.

With this approach the work moves from postulation to experimentation and causation: many small changes running in parallel, with the ability to discern the implications of each. And the investigations are done in a way that captures causality and maintains system integrity. Generate learning, but don’t break the system.

It’s a low-risk way to go because the changes have already been validated before wide-scale implementation. Scaling will be beneficial, safe, and somewhat quantifiable. And the stuff that didn’t work will never see the light of day.

If someone has an idea and it’s coherent, it should be tested. And instead of arguing over whose idea will be tested, the work becomes a quest to reduce the cost of the experiments and test as many ideas as possible.
