Multiscale competency architecture in research labs?
Learning from biological organization
I was recently struck by an interview with Michael Levin (on Lex Fridman’s podcast) and by his work in Nature Reviews Bioengineering, in which he talks about living systems having a multiscale competency architecture. I had never heard of this concept before, and I find it very appealing. It appears to be nature’s way of avoiding micromanaging everything that goes on within an organism. What works for nature may also apply to research labs - at least to some degree.
Multiscale competency architecture refers to the hierarchical organization of living systems, where certain competencies reside at each level. Think of a frog, and then of its organs, tissues and cells. The higher levels of the biological hierarchy act by shaping behavior, not by controlling every little thing at the levels below, such as the cellular level or even further down. Among the modules at the lower levels there is competition and cooperation, and each level of the hierarchy has its own goals and drives. This architecture is nature’s way of building problem-solving machines.
A research lab is not a living system, so the metaphor will surely break down at some point. But can we still learn from it? I think we probably can.
First of all, it is my firm belief that micromanagement (besides being against my personal nature) cannot work for running a complex system such as a research lab (let alone a rather large one like ours). Cultivating a multiscale competency architecture, in which the different layers of the organization can be relied upon to have their own abilities and problem-solving competencies, is essential. There are substructures within the lab that cooperate to achieve a goal (for example, by pooling complementary expertise, or simply by pooling their combined work hours to achieve something bigger). At some abstract level the different modules within the lab may also compete for funding, but ideally without the negative connotation of direct competition among the actors, the people. For example, if grant funding does not come in for a certain research line, that line will not get the support to flourish further, while another research line may expand (with more people) upon the arrival of external funding.
It also seems fairly clear that this ‘problem-solving’ architecture is more resilient in the face of external disturbances: the various levels are empowered to do their thing and do not need micro-scale input to deal with a new situation. We wrote about lab resilience earlier in a paper in PLOS Computational Biology (see here), and this empowerment at the different levels is one component of that resilience. Two of the ten points from that paper come immediately to mind in this context: providing intellectual freedom and encouraging intensive within-lab cooperation; but also things like fostering flexible working times (basically relying on the various members to do what is best for them).
The metaphor breaks down in the sense that this kind of biological organization is ruthless and uncaring about the fate of the levels below: the whole organism may enjoy rock climbing and does not care about getting a scratch (meaning the death of cells; the example given in the podcast). This is of course unacceptable in human organizations, including research labs. Research labs should be caring communities.
What other parallels are there, what else can we learn from this? And in which other ways does the metaphor break down?
What do you think?