Beware of the new gap – between implementation science and implementation practice.

Dagfinn Mørkrid Thøgersen 

EIC Board Member

Clinical Psychologist / Implementation Specialist / PhD-Scholar

The Norwegian Center for Child Behavioral Development

Twitter: @damotho

Could I suggest three ambitions for the new decade of implementation science and practice? Firstly, let’s make sure that the ways in which we design, conduct and report implementation research generate findings that are applicable in practice. Secondly, let’s enhance our knowledge of how to influence the outer setting in implementation by actively collaborating with researchers from political science, economics, sociology and related disciplines. Thirdly, let’s aim for improved implementability of implementation science.


I worry that the growing science of implementation primarily focuses on activities that can be hard to translate into improved implementation practice. Take the measurement development race and its inherent call for ever more precise and valid research instruments. While some implementation factors and processes fit well within this predominantly quantitative frame, many highly important elements of implementation are more qualitative and simply hard to measure.

This applies in particular to the outer contexts of implementation, from where policies, system leaders, legislation and funding structures affect implementation in often surprising ways. An ongoing review of outer context measures presented at the 2019 SIRC conference found relatively little support for their appropriateness, leading its authors to suggest further measurement development. However, for those practicing implementation, it is far more interesting to know the malleability rather than the measurability of outer context factors – highlighting the importance of building knowledge on how these can be influenced and shaped.

The focus on creating a metric-based implementation science may therefore – paradoxically – contribute to a new “gap” between this science and its applied practice in health, education and social welfare. It risks becoming a resource-demanding dead end – at least if high-quality measures prove unusable for implementation practitioners. Some of the financial and human resources may be better spent on enhancing the applicability of current implementation knowledge. Therefore, let’s not develop measures, metrics or models that do not add value to implementation practice.


Our struggle to get a measurable grasp of the outer setting of an implementation also reveals how little we know about navigating system complexity, ever-changing policy agendas or the unpredictability of funding streams. These forces affect implementation, but we have hardly understood how. Other disciplines might be far better equipped to define, understand and analyze these influences.

While our field – implementation science – might be doing well in conceptualizing the microcosm of an implementation, broader, multidisciplinary expertise is needed to better understand factors at the macro level of our work. This is where political scientists, sociologists, anthropologists and economists come in – to name just a few.

In broadening our focus and learning from the perspectives of multiple disciplines, implementation science may be better able to understand the larger systems in which implementation occurs and the ways in which they can be influenced and changed. This would not only advance the science, but also the practice of implementation – with ideas on where and how to intervene to make further progress in our implementation efforts.


Comparable to the known difficulties of getting evidence-based interventions into practice, we experience persistent barriers to the use of implementation outcomes, strategies and frameworks in practice. Do people and agencies not want what they need?

We have to acknowledge a pretty simple fact: if people or agencies are not engaged, motivated and supported to make the changes they need to make, they will not follow through in the long run. And why might our science not motivate everybody on its own? Because implementation models, measures and interventions have been developed under the best possible conditions and with the highest possible ambitions – without thinking too much about their implementability or scalability.

This leaves it up to implementation practitioners and leaders to figure out how to translate this knowledge to routine settings and how to change the structures that characterize these contexts. Again, this is ironic, given that implementation science is about closing gaps. Our discipline should assist in creating the optimal conditions for the uptake of evidence-based interventions whenever possible. We should therefore be careful of simply pointing to inadequate contexts as the reason for a lack of implementation success. Couldn’t it also be that our models, measures and suggested interventions suffer from a lack of implementability and scalability?

Or that there is a dearth of approaches for building practitioners’ implementation literacy? Currently, far more people and resources appear to be dedicated to developing implementation researchers than to training and supporting implementation practitioners. Where are the European institutions, programs or funds that help to train and professionally develop implementation practitioners?

If we want to be serious about getting implementation science into practice, it will require a stronger focus on its applicability, multidisciplinarity and implementability in the decade ahead. While we have made tremendous progress in the past twenty years, we should aim to advance further – by reminding ourselves that implementation is an applied and interdisciplinary field that depends on implementable and scalable innovations for its continued relevance and success.