Agile Concepts in Academia, Part II: Evidence-Based Management

Team Science is a familiar phrase, and Strategic Doing is becoming more common in the academy as a way to initiate Team Science-based projects using Agile practices. Agile practices and Evidence-Based Management emphasize skills and methods translated from the business world into active, effective management of research projects, turning the ideals of Team Science into reality. This series of blog posts explores the similarities and differences between some of the tools and philosophies that broadly fall into the category of “Agile Management” and unpacks ways to apply them to your work.

Click here to read Part One of this series.

Evidence-Based Management: What the non-profit world learned from medicine (and what academia can learn from both)

While this post doesn’t directly discuss Agile methodology, you may notice that the concepts in Evidence-Based practices are strikingly similar to it. We’ll discuss that more in our third post.

The idea of Evidence-Based Medicine (EBMed, for our purposes) originated in the early 1980s with a group of epidemiologists led by Dr. David Sackett at McMaster University in Ontario. The group published a series of articles in the Canadian Medical Association Journal on how to read clinical journals critically.*

The term EBMed itself was coined a decade later by the same group. The concept has since expanded into Evidence-Based clinical practice, which takes the setting and circumstances of patient care into account.

Sackett’s simple definition of Evidence-Based practice is “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient.”

Compare this to a definition of Evidence-Based Management proposed by the Center for Evidence-Based Management (CEBMa): Evidence-Based Management is about “making decisions through the conscientious, explicit and judicious use of the best available evidence from multiple sources.”

Note the shared phrasing, “conscientious, explicit and judicious use” of evidence, which illustrates the similarity between the two practices. Both systems further break this down into specific steps that apply equally to clinical practice and business management, and many different versions of these steps have been published.

Very simply put, the steps in EBMed are as follows:

  • Generate your (clinical) question

  • Find the best evidence

  • Critically appraise the evidence

  • Apply the evidence

  • Evaluate

Compare this to the process in EBMgmt (we’ve sketched the combined loop in code below):

  • Form a strategic hypothesis for improvement (your question)

  • Run your experiments and gather your data (best evidence)

  • Inspect your results (critical appraisal)

  • Adapt your goals or approach based on what you learned (apply evidence, evaluate)
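
For readers who like to see a process as code, the sketch below maps the shared loop onto a few lines of Python. It is purely illustrative: the function names, sample numbers, and the 0.6 goal are all invented for this post, not drawn from Kovner’s work or any real tool.

```python
from dataclasses import dataclass

# Illustrative only: every name and number here is invented to show the
# shape of the loop, not a real tool, dataset, or published method.

@dataclass
class Outcome:
    metric: float       # the measured result of applying the evidence
    meets_goal: bool    # did we reach the improvement we hypothesized?

def gather_evidence(hypothesis: str) -> list[float]:
    # Stand-in for "find best evidence": experiments, literature review, etc.
    return [0.58, 0.66, 0.71]

def appraise(evidence: list[float]) -> float:
    # Stand-in for critical appraisal; here, simply averaging the results.
    return sum(evidence) / len(evidence)

def apply_and_evaluate(appraised: float, goal: float) -> Outcome:
    # Apply the evidence, then evaluate it against the stated goal.
    return Outcome(metric=appraised, meets_goal=appraised >= goal)

def evidence_based_cycle(hypothesis: str, goal: float, max_rounds: int = 3) -> Outcome:
    """Question -> evidence -> appraisal -> application -> evaluation -> adaptation."""
    outcome = Outcome(metric=0.0, meets_goal=False)
    for round_number in range(1, max_rounds + 1):
        evidence = gather_evidence(hypothesis)
        outcome = apply_and_evaluate(appraise(evidence), goal)
        if outcome.meets_goal:
            break
        # Adapt the goal or approach based on what was learned, then repeat.
        hypothesis = f"{hypothesis} (revised after round {round_number})"
    return outcome

print(evidence_based_cycle("A new review workflow shortens time-to-decision", goal=0.6))
```

The point of the sketch is simply that the loop terminates on evaluation, not on application: you are not done when you act on the evidence, only when you have measured whether acting on it worked.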

In both cases, a hypothesis is developed and outcomes are evaluated based on data. A major challenge in both contexts is where to find that data and how to assess it.

In this post, we focus on a 2014 article by Dr. Anthony R. Kovner, Professor Emeritus of Public and Health Management at NYU and a leader in the field of EBMgmt (as the industry has chosen to shorten it). He discusses the implications of EBMgmt for strategic decision making in non-profit organizations, addressing the difficulty of making well-informed decisions in complex systems and the reasons non-profits might be reluctant to adopt EBMgmt as a method for improving those decisions. In a non-profit setting, “evidence” is often a more qualitative measure, and decision making is often driven more by politics than by results. We noticed that many of the obstacles Kovner describes for non-profits adopting EBMgmt are the same ones faced by change advocates in the academy: it will cost too much, it will take too much time, things are working fine the way they are, results are often ambiguous, and politics are always in the mix.

Kovner suggests that when the upsides of change are less immediately tangible than the benefits of avoiding the risks, credible evidence is your friend, and the best way to accumulate it is through systematic literature review and analysis. This is challenging, of course, especially when published data is scarce, when the problem is too “messy” and complex, or when the researcher lacks specific skills in information analysis. He points to a simplified method called the Critically Appraised Topic (CAT), which provides less detail but is perhaps a more realistic approach when you are just beginning to define the type and quality of evidence required to test your hypothesis or answer your question.

Back to the issue of politics. In this article, Kovner cautions: “Evidence is not sufficient to change people’s behavior.” This is well known: in times of change, people tend to fall back on familiar patterns and networks. He suggests broadening the circle of constituents and making the entire process more transparent and inclusive, core values of both Agile and Team Science. But he cautions against too much inclusion: “Maximal inclusion should always be considered, but realistically it should be reserved for very important operational and strategic decisions.”

He further notes that how one tells the story of the evidence is often how one persuades decision makers. We were struck again by the parallels to academia, where the story you tell about your research is a driving force in both crafting grant proposals and reporting on results.

Kovner’s tightly written article includes a case example and is a brief but nuanced dive into this subject. If you’re interested in how to begin implementing Evidence-Based practices, this is a great place to start.

In a subsequent post, we’ll dive more deeply into the intersection of Agile Project Management and Evidence-Based Management.

* Sackett, D. L. How to read clinical journals: I. Why to read them and how to start reading them critically. Canadian Medical Association Journal, 1981; 124(5): 555–558.
