Approach to developing the Matching Q-M tool

The Matching policy and practice Questions to Methodological approaches (Matching Q-M) tool is an effort to connect two critical parts of evidence-support systems:

  1. Converting a decision-making need into a specific type of question; and
  2. Identifying the most suitable methodological approaches to address that type of question (a minimal sketch of this mapping follows the list).
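As a minimal sketch of that matching step, the tool's core mapping can be thought of as a lookup from a type of question to a ranked list of methodological approaches. The entries below are placeholders for illustration, not content from the actual tool.

```python
# Placeholder mapping, for illustration only; the actual tool covers
# 41 types of question, each with its own ranking of approaches.
MATCHING_QM: dict[str, list[str]] = {
    "How big is the problem? (placeholder)": [
        "Cross-sectional study",                      # ranked most suitable
        "Analysis of routinely collected data",
        "Systematic review of observational studies",
    ],
}

def suitable_approaches(question_type: str) -> list[str]:
    """Return methodological approaches ranked by suitability (best first)."""
    return MATCHING_QM.get(question_type, [])
```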

The tool was developed in two steps. First, a taxonomy of mutually exclusive and collectively exhaustive types of questions was built by critically and iteratively analyzing existing frameworks and the questions asked of evidence-support units around the world. Second, a Delphi study was conducted among methodological experts to rank the most suitable methodological approaches for each type of question in the taxonomy.

Step 1: Building a taxonomy of types of questions

A cross-sectional study was conducted among units around the world that provide some type of evidence support at the explicit request of decision-makers. These units were asked to share either the questions they had received or the evidence products they had produced in response to those questions.

An iterative conceptual analysis was then conducted to build a mutually exclusive and collectively exhaustive list of question types, structured around the four main stages of the policy cycle. As a complement, existing frameworks (the GRADE Evidence to Decision (EtD) framework and the Consolidated Framework for Implementation Research (CFIR)) were consulted to check whether any types of question had been missed.

Finally, the draft taxonomy was presented at the Global Symposium on Health Systems Research 2022, where the audience and panelists provided critical feedback to ensure the taxonomy's comprehensiveness.

The taxonomy comprises four decision-making stages; each stage includes one or more goals that may need to be achieved (14 in total), and each goal includes multiple types of question that may be addressed (41 in total).
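As an illustration of this stage-to-goal-to-question hierarchy, the sketch below models the taxonomy as a nested data structure. The stage, goal, and question names are placeholders, not entries from the actual taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    question_types: list[str] = field(default_factory=list)

@dataclass
class Stage:
    name: str
    goals: list[Goal] = field(default_factory=list)

# Placeholder content only; the real taxonomy has 4 stages,
# 14 goals, and 41 types of question.
taxonomy: list[Stage] = [
    Stage("Clarifying a problem (placeholder stage)", goals=[
        Goal("Understand the problem (placeholder goal)", question_types=[
            "How big is the problem? (placeholder question)",
            "What are the problem's causes? (placeholder question)",
        ]),
    ]),
    # ...remaining stages, goals, and types of question
]

total_goals = sum(len(stage.goals) for stage in taxonomy)
total_questions = sum(
    len(goal.question_types) for stage in taxonomy for goal in stage.goals
)
```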

Step 2: Matching questions to study designs

An online Delphi study was conducted with a sample of methodological experts to reach consensus on which study designs are most suitable for answering each type of question in the taxonomy described above. For each type of question, the experts ranked study designs by their suitability.

Experts were sampled based on their expertise across the eight forms of evidence and their geographical region. More than 40 methodological experts participated in at least one of the two rounds of the Delphi study, and consensus was reached for 28 types of question.
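Delphi studies that collect rankings often quantify agreement with a concordance statistic such as Kendall's W. The sketch below shows a minimal computation of W over expert rankings; it illustrates how ranking agreement can be measured, and is not necessarily the consensus criterion used in this study.

```python
def kendalls_w(rankings: list[list[int]]) -> float:
    """Kendall's coefficient of concordance for m raters ranking n items.

    rankings[i][j] is the rank that expert i gave to study design j
    (1 = most suitable). W ranges from 0 (no agreement) to 1 (perfect
    agreement). Illustrative only: not necessarily the consensus
    criterion used in the actual Delphi study.
    """
    m = len(rankings)        # number of experts
    n = len(rankings[0])     # number of study designs
    rank_sums = [sum(expert[j] for expert in rankings) for j in range(n)]
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking four study designs:
print(kendalls_w([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]]))  # ~0.78
```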

The results of this study were published in a scientific article available below:

Matching the right study design to decision-maker questions: Results from a Delphi study