Theory-Based Approaches to Evaluation: Concepts and Practices

Preface

Why Theory-Based Approaches?

Over the last 20 years, theory-based approaches — under various names — have increasingly moved into the mainstream of thinking and practice about how interventions (i.e., programs, policies, initiatives or projects) are designed, described, measured and evaluated. During that time, theory-based approaches have demonstrated promise in helping evaluators address a variety of challenges, such as coming to terms with the inherent complexity of certain types of interventions and overcoming the limitations of experimental evaluation designs.

To help ensure that evaluation continues to play a key role in providing a neutral, evidence-based assessment of the relevance and efficiency of federal government programs, the Treasury Board of Canada Secretariat recognizes the importance of expanding the “tool box” available to federal evaluators. Theory-based approaches to evaluation represent a potentially powerful tool.

Using This Document

This document introduces some of the key concepts of theory-based approaches to evaluation. It is hoped that readers will be encouraged by the information and advice provided in this document and will explore the use (e.g., through pilot evaluations) of theory-based approaches to evaluation in a federal setting. To support this, Sections 1.0 to 8.0 of the document describe the general application of theory-based approaches to evaluation, and Sections 9.0 and 10.0 discuss the potential application of theory-based approaches to a range of federal programs.

This document is neither an exhaustive training program in theory-based evaluation nor a step-by-step guide to undertaking a theory-based evaluation. Evaluators who wish to integrate theory-based approaches into their practices are encouraged to pursue additional readings (including those referenced in this document) and, as appropriate, to seek additional support in undertaking a theory-based evaluation.

1. Introduction

1.1 The Challenge of Experimental Evaluation Designs

As noted in Program evaluation methods: Measurement and attribution of program results (Treasury Board of Canada Secretariat, chap. 3), evaluators face two broad challenges: measuring the expected results (Footnote 1) from an intervention and attributing those results to the activities of the intervention.

Experimental evaluation designs aim to address both of these challenges (chap. 3). These designs typically measure both the baseline and the final results associated with an intervention and, by incorporating a counterfactual (e.g., a comparison group), can assess the causal link between the intervention and the observed results.

Experimental and quasi-experimental evaluation designs can be quite powerful and should be undertaken when appropriate (e.g., when interventions involve a new or different approach to addressing a problem and the goal of the evaluation is to test whether the intervention works). However, there are several shortcomings associated with these designs, in particular:

Practicality: In many contexts, experimental designs, especially the more sophisticated ones, cannot be implemented. Often the development of a counterfactual may be difficult or undesirable (e.g., for ethical reasons). In some cases, there may not be an opportunity to manipulate the delivery of the intervention as required in order to demonstrate attribution. In other cases, the resources or time required to undertake experimental designs may not be available.

Seeing interventions as black boxes: Experimental designs, even when feasible, are not aimed at understanding why and how the observed results occurred. These designs do not attempt to answer questions such as the following: “What was it about the intervention or the context that caused the results? Where the expected results were not observed, what was it about the intervention that didn’t work? Was the underlying theory of the intervention wrong, or was the problem a case of poor implementation?” Knowing the answer to these questions can be valuable for improving the intervention or implementing the intervention in a different location or manner. Because experimental designs do not ask these questions, they are often described as “black box studies”; they may assess whether the expected results occurred and whether the intervention played a role, but they do not explore the reasons why the intervention did or did not work.

A theory-based approach to evaluation can help address these shortcomings. In the absence of an overall experimental design, it provides a way to assess the extent to which an intervention has produced or influenced observed results. It also opens the black box, examining what role the intervention played in producing the observed results.

1.2 What are Theory-Based Approaches to Evaluation?

Theory-based approaches to evaluation use an explicit theory of change to draw conclusions about whether and how an intervention contributed to observed results. Theory-based approaches are a “logic of enquiry” that complements, and can be used in combination with, most of the evaluation designs and data collection techniques outlined in Program evaluation methods: Measurement and attribution of program results.

Theory-based evaluation is an approach to evaluation (i.e., a conceptual analytical model), not a specific method or technique. It is a way of structuring and undertaking analysis in an evaluation.

A theory of change explains how an intervention is expected to produce its results. The theory typically starts with a sequence of events and results (outputs, immediate outcomes, intermediate outcomes and ultimate outcomes) that are expected to occur owing to the intervention. This is commonly referred to as the “program logic” or “logic model.” However, the theory of change goes further by outlining the mechanisms of change, as well as the assumptions, risks and context that support or hinder the theory from being manifested as observed outcomes. This opens the black box of change and allows evaluators to better examine the causal link between the intervention outputs and the observed outcomes. The theory of change can then be tested against evidence, comparing the assumed causal chain of results with what is observed to have happened and checking each link and assumption along the way.

Theories of change can be thought of as the story of what should happen in the “arrows” that link the boxes in a traditional logic model.

Another way to think of a theory of change is as a logic model that has been described and explained, in particular in terms of the causal linkages between outputs and the different levels of outcome.

Theory-based approaches have been discussed in the evaluation literature for many years (Weiss, 1997; Rogers, 2007; Funnell & Rogers, 2011). While terminology and specific concepts vary considerably among authors, the main messages are consistent, and there is broad agreement on the value of theory-based approaches.

References

Chen, H.-T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage Publications Inc.
Chen, H.-T. (1994). Theory-driven evaluations: Need, difficulties and options. Evaluation Practice, 15(1), 79–82.
Funnell, S., & Rogers, P. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Jossey Bass.
Rogers, P. (2007). Theory-based evaluations: Reflections ten years on. New Directions for Evaluation, 114, 63–67.
Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21, 501–524.
Weiss, C. H. (1997). Theory-based evaluation: Past, present, and future. New Directions for Evaluation, 76, 41–55.
Weiss, C. H. (2000). Which links in which theories shall we evaluate? New Directions for Evaluation, 87, 35–45.
Weiss, C. H. (2003). On theory-based evaluation winning friends and influencing people. The Evaluation Exchange, 9(4), 1–5.

2. Context and Causation in Theory-Based Approaches

Two key ideas that distinguish theory-based approaches from traditional approaches are (1) the influence of context on program results, and (2) a mechanistic, rather than counterfactual, approach to determining causality.

Context Matters

Theory-based approaches, more than many other evaluation approaches, pay explicit attention to the context of the intervention. Contextual factors can help an intervention achieve its objectives or can work against it. For example, in an intervention aimed at reducing smoking in the population, contextual factors may include the enthusiasm of those required to enforce a ban, the social makeup of the targeted populations, and related supporting legislation. These factors are often essential in making causal inferences and need to be part of the evaluation design.

When developing a theory of change, the context may be explicitly identified (i.e., as a stand-alone description) and/or may be represented through a discussion of the assumptions underlying the program.

Mechanistic Causation

For most interventions, there are usually multiple causes for an observed outcome. A wide range of other economic and social factors, not to mention other government interventions, may also come into play. For example, the success of an allowance-based incentive program intended to encourage nurses to live and work in rural areas may be influenced not only by the existence and size of the allowance but also by employment rates in the nursing field, the specific rural settings in which nurses are placed, and the personal backgrounds of the nurses involved. In these situations, seeking a clear “one-to-one” causation that can be wholly attributed to one mechanism (finding the cause) is not possible. Rather, the relevant evaluation question is: In light of the multiple factors influencing a result, has the intervention made a noticeable contribution to an observed outcome and in what way? Understanding contribution, rather than proving attribution, becomes the goal.

Theory-based approaches to evaluation attempt to understand an intervention’s contribution to observed results through a mechanistic or process interpretation of causation, rather than determining causation through comparison to a counterfactual. In theory-based approaches, the specific steps in a causal chain, the specific causal mechanisms, are tested. If these can be validated by empirical evidence, then there is a basis for making a causal inference. At the same time, theory-based approaches seek to identify and assess any significant influencing factors (i.e., contextual factors) that may also play a role in the causal chain and thus affect the contribution claim.

3. Theories of Change and Logic Models

Managers of federal programs regularly use logic models to describe the results expected from an intervention (Footnote 2). Where program managers have not developed their own logic model, evaluators often do so to support their evaluation effort. The results chains embedded in logic models are key building blocks for developing theories of change. Theories of change expand on results chains to articulate why the sequence of results is expected to occur, whereas logic models tend to focus solely on the results intended by a program (i.e., on the boxes in a typical visual logic model). The theory-based approach argues that the “logic of the logic” is the important feature of logic models; it focuses on the connections (which can be thought of as the “short-cycle” logic) between the boxes in a visual logic model rather than the “long-cycle” logic of the results chain (see Figure 1). Simply put, theories of change explain how the intervention is expected to bring about the desired results rather than just describing the results.

Figure 1. Short-Cycle Logic versus Long-Cycle Logic (Theory of Change versus Program Logic)
Short-Cycle Logic versus Long-Cycle Logic (Theory of Change versus Program Logic). Text version below:
Figure 1 - Text version

This figure illustrates the difference between the short-cycle logic of theories of change (that is, the logic between adjacent levels of a results chain or logic model) and the long-cycle logic of results chains in logic models, which runs from activities to ultimate outcomes. In the centre of the figure, there is a set of five boxes arranged vertically, representing the results chain. The boxes represent the elements normally found in a results chain or logic model and are labelled, from bottom to top, activities, outputs, immediate outcomes, intermediate outcomes and ultimate outcomes. The boxes are connected with arrows that point upward, from the lower boxes to those above them, representing the movement in the results chain from activities, to outputs, to immediate outcomes, to intermediate outcomes, to ultimate outcomes.

On the right-hand side of the results chain, there is a large oval loop, which starts at the bottom box, labelled "activities," and ends at the top box, labelled "ultimate outcomes." The loop consists of two lines, each ending in an arrow, and represents a "feedback" process. The loop is labelled "long-cycle logic," also referred to as program logic.

On the left-hand side of the figure, there are four smaller loops arranged vertically, each consisting of two curved lines ending in an arrow. The loops represent feedback and are situated so that each aligns with one of the spaces between the five boxes in the results chain. The loops are labelled "short-cycle logic," also referred to as the theory of change.

Generally, a theory of change includes:

  • a logic model/results chain;
  • the assumptions, risks and, in some cases, the mechanisms associated with each link in the logic model/results chain;
  • the external factors that may influence the expected results; and
  • any empirical evidence supporting the assumptions, risks and external factors.

Theories of change are referred to by a variety of names, including program theories, impact pathways, and pathways of change.

Assumptions are key events or conditions that must occur for the causal link to happen. Risks are influences or events outside the intervention that may inhibit the causal link from happening. Mechanisms are the causal processes that enable the program to produce results. External factors are circumstances beyond the control of the program, such as social, political or economic context, which may affect the program’s ability to achieve an intended result.
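To make this structure concrete, the sketch below (in Python) models these elements as a simple data structure. It is purely illustrative: the class and field names are invented for this document and do not come from the evaluation literature, and the sample entries draw on the smoking cessation example used earlier.

```python
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """One 'arrow' in the results chain, e.g., outputs -> immediate outcomes."""
    from_result: str
    to_result: str
    assumptions: list[str] = field(default_factory=list)  # conditions that must occur for the link to happen
    risks: list[str] = field(default_factory=list)        # outside events that may inhibit the link
    mechanisms: list[str] = field(default_factory=list)   # causal processes that produce the change
    evidence: list[str] = field(default_factory=list)     # empirical support gathered so far

@dataclass
class TheoryOfChange:
    results_chain: list[str]                               # e.g., ["activities", "outputs", ...]
    links: list[CausalLink]                                # one entry per arrow in the chain
    external_factors: list[str] = field(default_factory=list)  # context beyond the program's control

# Hypothetical fragment for a smoking cessation intervention
toc = TheoryOfChange(
    results_chain=["outputs", "immediate outcomes", "intermediate outcomes"],
    links=[CausalLink("outputs", "immediate outcomes",
                      assumptions=["target smokers attend cessation sessions"],
                      risks=["low enthusiasm among those enforcing the ban"])],
    external_factors=["related supporting legislation"],
)
```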

In some cases, theories of change are subdivided into two components: the intervention theory, which outlines the underlying behavioural assumptions (the mechanisms) behind the intervention, and the implementation theory, which identifies how an intervention is expected to operate and trigger these mechanisms. These components can be developed separately, but are often merged into or developed as one theory of change. Unfortunately, there is no consistent terminology for theories of change; different authors may use the same term for different concepts. Blamey and Mackenzie (2007) discuss this issue.

References

Blamey, A., & Mackenzie, M. (2007). Theories of change and realistic evaluation: Peas in a pod or apples and oranges. Evaluation, 13(4), 439–455.

4. Theory-Based Approaches to Evaluation

There is no agreed classification of theory-based approaches; indeed, in recent years, there has been a proliferation of theory-based approaches and numerous variations within each approach. In this section, two prominent categories of theory-based evaluations, realistic evaluation and theory-of-change approaches, are discussed. These descriptions are generic and may not always apply. For more information on the similarities and differences in theory-based approaches, readers can consult Blamey and Mackenzie (2007) or Stame (2004).

Realistic Evaluation

Realistic evaluation is a form of theory-based evaluation developed by Pawson and Tilley (1997, 2006). They argue that whether interventions work depends on the underlying mechanisms at play in a specific context. For Pawson and Tilley,
outcome = mechanism + context

Mechanisms describe what it is about the intervention that triggers change. In a smoking cessation intervention, for example, mechanisms might include peer pressure to stop or not stop, fear of health risks, and economic considerations. For realistic evaluators, the key evaluation questions are: What works? For whom? In what circumstances? In what respects? How? Realistic evaluators are less interested in the outcome-level question of whether the intervention worked at a macro level.

Realistic evaluation develops and then empirically tests the hypotheses about what outcomes are produced by what mechanisms in what contexts. The realistic approach tends to be more research-oriented, focusing on the underlying intervention theory and its behavioural assumptions at work, and the conditions supporting the intervention. The focus is on the most promising context-mechanism-outcome configurations (CMOCs), which show how interventions are meant to work in which populations and under what conditions. These can be viewed as mini-theories of change or links in an overall theory of change of an intervention. Each CMOC is, in effect, the subject of an evaluation and is tested against the available evidence.
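As a rough illustration of how CMOCs can be organized for testing, the sketch below encodes each configuration as a record pairing a context with a mechanism and an expected outcome, using the smoking cessation example described in the steps below. The field names and data are assumptions made for illustration, not part of Pawson and Tilley's method.

```python
from dataclasses import dataclass

@dataclass
class CMOC:
    """A context-mechanism-outcome configuration: a testable mini-theory."""
    context: str    # for whom / in what circumstances
    mechanism: str  # what it is about the intervention that triggers change
    outcome: str    # the result expected when the mechanism fires in this context

# Hypothetical configurations for a smoking cessation intervention
cmocs = [
    CMOC(context="pregnant women with no children",
         mechanism="education on harm to babies in utero",
         outcome="higher quit rates"),
    CMOC(context="young, non-pregnant female smokers",
         mechanism="appeal to self-image",
         outcome="higher quit rates"),
]

# Each configuration is, in effect, the subject of its own evaluation
for c in cmocs:
    print(f"Test: does '{c.mechanism}' lead to '{c.outcome}' for {c.context}?")
```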

Blamey and Mackenzie (2007, p. 444) describe how to undertake a realistic evaluation, using a smoking cessation intervention as an example:

Step 1: The evaluator, through dialogue with program implementers, attempts to understand the nature of the program: What is the aim of our smoking cessation program? What is the nature of the target population at whom it is aimed? In what contexts and settings will it operate? What are the prevailing theories about why smoking cessation services will work for some people in some circumstances?

Step 2: The evaluator maps out a series of potential mini-theories that relate the various contexts of a program to the multiple mechanisms by which it might operate to produce different outcomes. For example, practitioner knowledge and the existing evidence might suggest that focusing the educational component of a midwife-led smoking cessation program on the potential negative effects on babies in utero will be most effective for pregnant women who have no children. However, young, non-pregnant female smokers may be less likely to respond to concerns about the threat of health effects on non-existent babies, but may be more likely to respond to anti-smoking interventions designed to appeal to their self-image.

Step 3: At this stage, the evaluator undertakes an outcome inquiry in relation to these mini-theories. This involves developing a quantitative and qualitative picture of the program in action. It might, for example, address how different types of smokers fare when it comes to breaking the habit following different types of cessation services delivered in a variety of ways. This picture includes an assessment of the extent to which different underlying psychological motivations and mechanisms have been triggered in particular smokers by specific services.

Step 4: By exploring how CMOCs play out within a program, the evaluator refines and develops tentative theories of what works for whom in what circumstances.

A key strength of the realistic approach is its focus on context. Context must be part of the evaluation framework, and specific contexts, whether within or outside the control of the intervention, can enhance or detract from how well the intervention works.

Examples of realistic evaluations are found in Byng, Norman and Redfern (2005); Leeuw, Gilse and Kreft (1999); and Leone (2008).

Theory of Change Approaches

These approaches involve developing a theory of change for the intervention showing how the specific intervention is intended to work and the assumptions behind the theory. They tend to address the traditional evaluation questions of whether and to what extent the intervention has worked (i.e., has made a difference to the desired outcome). The theory of change is usually developed on the basis of a range of stakeholders’ views and information sources. Approaches include theory-based evaluation (Weiss, 1995, 2000), theory-driven evaluation (Chen, 1990), and contribution analysis (Mayne, 2001). All develop a theory of change for the intervention and then verify the extent to which the theory matches what is observed.

One theory of change approach, contribution analysis, argues that if an evaluator can validate a theory of change with empirical evidence and account for major external influencing factors, then it is reasonable to conclude that the intervention has made a difference. The theory of change provides the basis for arguing that the intervention is making a difference and identifies weaknesses in the argument, thus identifying where evidence for strengthening such claims is most needed. Causality is inferred from the following evidence:

  • The intervention is based on a reasoned theory of change: the results chain and the underlying assumptions of why the intervention is expected to work are sound, plausible, and agreed to by key players.
  • The activities of the intervention were implemented.
  • The theory of change is verified by evidence: the chain of expected results occurred, the assumptions held, and the (final) outcomes were observed.
  • External factors (context) influencing the intervention were assessed and shown not to have made a significant contribution, or, if they did, their relative contribution was recognized.

In the end, a conclusion (a contribution claim) is made about whether the intervention made a difference. To summarize:

contribution claim = verified theory of change + other key influencing factors accounted for
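Read literally, the formula is a conjunction of two checks. The sketch below shows one way to tabulate them; the function, its inputs, and the example data are assumptions made for illustration, not part of any published contribution analysis method.

```python
def contribution_claim(verified_links: dict[str, bool],
                       factors_accounted: dict[str, bool]) -> bool:
    """A contribution claim is supported only if every causal link is
    verified by evidence AND every significant external factor has been
    assessed (shown insignificant, or its relative contribution recognized)."""
    return all(verified_links.values()) and all(factors_accounted.values())

# Hypothetical worked example
links = {"outputs -> immediate outcomes": True,
         "immediate -> intermediate outcomes": True,
         "intermediate -> ultimate outcomes": False}   # weak link
factors = {"economic conditions": True}

if not contribution_claim(links, factors):
    weak = [name for name, ok in links.items() if not ok]
    print("Claim not yet supported; strengthen evidence on:", weak)
```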

Examples of theory of change evaluations are found in Carvalho and White (2004), Weiss (1995), and White (2009). Patton (2008) provides an example of an evaluation using contribution analysis.

References

Blamey, A., & Mackenzie, M. (2007). Theories of change and realistic evaluation: Peas in a pod or apples and oranges. Evaluation, 13(4), 439–455.
Byng, R., Norman, I., & Redfern, S. (2005). Using realistic evaluation to evaluate a practice-level intervention to improve primary healthcare for patients with long-term mental illness. Evaluation, 11(1), 69–93.
Carvalho, S., & White, H. (2004). Theory-based evaluation: The case of social funds. American Journal of Evaluation, 25(2), 141–160.
Chen, H.-T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage Publications Inc.
Leeuw, F. L., Gilse, G. H., & Kreft, C. (1999). Evaluating anti-corruption initiatives: Underlying logic and mid-term impact of a World Bank Program. Evaluation, 5(2), 194–219.
Leone, L. (2008). Realistic evaluation of an illicit drug deterrence programme. Evaluation, 14(1), 9–28.
Mayne, J. (2001). Addressing attribution through contribution analysis: Using performance measures sensibly. Canadian Journal of Program Evaluation, 16(1), 1–24.
Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect (PDF – 130 KB) (ILAC Brief No. 16). Rome, Italy: The Institutional Learning and Change Initiative (ILAC).
Mayne, J. (2011). Contribution analysis: Addressing cause and effect. In K. Forss, M. Marra & R. Schwartz, (Eds.), Evaluating the complex: Attribution, contribution and beyond (pp. 53–96). New Brunswick, NJ: Transaction Publishers.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage Publications Inc.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. Thousand Oaks, CA: Sage Publications Inc.
Pawson, R., & Tilley, N. (2006). Realist evaluation. Development Policy Review Network Thematic Meeting, Report on Evaluation, Amsterdam, Netherlands.
Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2004). Realist synthesis: An introduction (PDF – 475 KB) (Research Methods Paper 2). Manchester, UK: University of Manchester, Economic and Social Research Council Research Methods Programme.
Stame, N. (2004). Theory-based evaluation and types of complexity. Evaluation, 10(1), 58–76.
Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Vol. 1, Concepts, methods, and contexts. Washington, DC: The Aspen Institute.
Weiss, C. H. (2000). Which links in which theories shall we evaluate? New Directions for Evaluation, 87, 35–45.
White, H. (2009). Theory-based impact evaluation: Principles and practice (Working Paper 3). International Initiative for Impact Evaluation (3ie).

5. Developing Theories of Change

There is considerable literature (see References) on developing theories of change. Much of it is written for and by evaluators, who often have to develop such theories when they undertake evaluations. A preferred and increasingly common scenario is for intervention designers and managers to develop a theory of change during the initial design of the intervention as an aid to intervention planning and planning for performance measurement. This is good practice for results-based management and is encouraged where appropriate. Section 10.0 discusses these uses of theories of change.

When evaluators are retroactively developing a theory of change, the following sources of information should, at a minimum, be consulted:

  • Key intervention documents (e.g., Memoranda to Cabinet, Intervention Terms and Conditions, and planning documents)
  • Relevant literature review (e.g., prior evaluations and social science research)
  • Intervention managers
  • Beneficiaries
  • Subject-matter experts
  • The program’s logic model

The development of a theory of change generally involves three steps:

  • Developing a logic model with clear results chains and explicit causal links (a basic theory of change)
  • Identifying assumptions and risks underlying the theory of change
  • Identifying other contextual factors associated with the results chain. The result is a refined theory of change.

Developing a Basic Theory of Change

The development of a basic theory of change involves identifying an intervention’s activities, outputs, and the sequence of outcomes needed for the expected results to occur. This often involves developing a logic model with reasonable results chains and describing what the arrows or other connections in the results chains imply.

For example, Figure 2 provides a basic theory of change for an intervention aimed at building effective results-based management (RBM) in an organization. The left-hand side of Figure 2 shows a results chain for the intervention: providing training and guidance on measuring and monitoring results leads to improved systems, capacity and institutionalization of RBM; this leads to the use of results information to inform decision making; which leads, ultimately, to improvements in organizational program performance. The right-hand side of Figure 2 identifies the causal links between the outputs and outcomes (i.e., what “happens” in the gray arrows that link the outputs and outcomes). An outline or summary of the intervention’s basic theory of change appears at the bottom of Figure 2.

The blue arrows, which flow backward from the ultimate outcome through to outputs, indicate that although the results chain is pictured as a linear process, there is normally feedback between the different stages in the results chain. For example, the availability of more and better results information can lead to the need for more training and guidance. Similarly, an increase in the use of results information can lead to an increase in demand for good results information.

In developing theories of change, it is also useful to identify the degree of influence the intervention has over each causal link, that is, the degree of control the intervention has in ensuring that the causal link is realized. Figure 2 illustrates this by labelling the causal links as being directly influenced [DI], influenced [I] or outside the influence [O] of the intervention.
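One lightweight way to record these labels is to attach them to each causal link as an enumeration. The sketch below simply mirrors the legend used in Figures 2 and 3 ([C] appears only in the refined theory of Figure 3); the link names and the mapping are the hypothetical RBM example, not prescribed values.

```python
from enum import Enum

class Influence(Enum):
    CONTROL = "C"            # the intervention can effectively control the condition
    DIRECT_INFLUENCE = "DI"  # directly influenced by the intervention
    INFLUENCE = "I"          # partially influenced by the intervention
    OUTSIDE = "O"            # outside the influence of the intervention

# Hypothetical labelling of the causal links in the RBM example (Figure 2)
link_influence = {
    "outputs -> immediate outcomes": Influence.DIRECT_INFLUENCE,
    "immediate outcomes -> first intermediate outcome": Influence.INFLUENCE,
    "first -> second intermediate outcome": Influence.INFLUENCE,
    "second intermediate -> ultimate outcomes": Influence.OUTSIDE,
}

for link, level in link_influence.items():
    print(f"[{level.value}] {link}")
```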

Figure 2: A Basic Theory of Change for Enhancing Results-Based Management in Organizations
A Basic Theory of Change for Enhancing Results-Based Management in Organizations. Text version below:
Figure 2 - Text version

Figure two illustrates a results chain from a logic model and its related basic theory of change.

On the left-hand side of the figure, there is a set of five boxes arranged vertically, representing the results chain. The boxes identify the elements normally found in a results chain or logic model and are, from bottom to top, outputs, immediate outcomes, first intermediate outcome, second intermediate outcome and ultimate outcomes. The boxes are connected with arrows that point upward, from the lower boxes to those above them, demonstrating the movement in the results chain, from outputs, to immediate outcomes, to the first intermediate outcome, to the second intermediate outcome, to the ultimate outcomes. The boxes are also connected with a second set of arrows that point downward, from the higher boxes to those below, demonstrating the iterative nature of the relationships between the levels of the results chain. In each box, there is a results statement appropriate to the link in the results chain.

On the right-hand side of the figure, there are four causal link boxes, representing the basic theory of change. These boxes are arranged vertically so that each aligns with one of the spaces between the five boxes in the results chain. Each causal link box is also connected to one of the upward-facing arrows between the boxes on the results chain, demonstrating that the causal links are the change that happens between the boxes or levels in the results chain. Each causal link box contains a description of what happens between the levels of the results chain. There is also a description of the causal links, labelled "The Outline of the Intervention Theory of Change."

Each of the causal links is labelled to demonstrate the degree of influence that the program has over it. The lowest causal link (between the outputs and immediate outcomes) is labelled with "DI," indicating that the program has direct influence over this causal link. The next two causal links (between the immediate outcomes and the first intermediate outcome, and between the first and second intermediate outcomes) are labelled with "I," indicating that the program has a degree of influence. The final causal link, between the second intermediate outcome and the ultimate outcomes, is labelled with "O," indicating that it is outside the influence of the program.

The outline of the intervention theory of change: By providing information, training and hands-on assistance in RBM, organizations will build up their RBM capabilities and related systems and use the results information to inform their decision making, leading to more effective and efficient programs.

Legend:
[C] - control
[DI] - direct influence
[I] - influence
[O] - outside of influence

Refined Theories of Change: Identifying Assumptions, Risks and External Factors

Refined theories of change go beyond the basic theory to identify the assumptions behind the various causal links in the results chain and the risks associated with those assumptions. This helps ensure that the theory of change explains what conditions have to exist for each causal link to be realized (i.e., for A to lead to B). Figure 3 sets out a refined theory of change for the RBM example, showing the assumptions and risks behind the causal links identified in the basic theory of change. Alternatively, instead of representing the assumptions as done in Figure 3, they can be presented as a list of premises that need to be tested. This is the approach that Leeuw, Gilse and Kreft (1999) used.

As with the basic theory of change, it is useful in refined theories of change to identify the degree of influence that an intervention has over these assumptions and risks. However, as noted in Figure 3, a new level of influence (control [C]) is introduced to denote areas where the intervention should be able to effectively control a particular condition (e.g., the production of outputs).

It is also important to identify the significant external factors, or contextual factors, that might influence the intended outcomes. These external factors are generally situations or events that the intervention cannot directly control, influence, manage or prevent. They can be illustrated as part of the theory of change. In Figure 3, for example, external influences, identified on the left-hand side of the results chain, may include the requirements of funding agencies, negative experiences with past management initiatives, or management trends among peers to improve intervention monitoring and evaluation.

Setting out the assumptions, risks and external influences helps describe both the intervention and the context in which it is operating. Defining assumptions, risks and external factors at the intervention design stage can help identify additional activities that the intervention may wish to undertake as part of its risk management. This may allow an identified external influencing factor over which the intervention previously had no influence to be converted to an internal risk that can, to some extent, be mitigated. This expanded results chain/theory of change can be useful as a framework both for evaluation purposes (where the theory of change would be tested with empirical data from the intervention to determine whether the theory is working) and for enhancing reporting on the performance of the intervention (see Section 10).

Discussions on developing theories of change and examples are available on the Theory of Change Community website.

Figure 3: A Refined Theory of Change for Enhancing Results-Based Management in Organizations
A Refined Theory of Change for Enhancing Results-Based Management in Organizations. Text version below:
Figure 3 - Text version

Figure three illustrates a results chain from a logic model and its related refined theory of change.

In the centre of the figure, there is a set of six boxes arranged vertically, representing the results chain. The boxes identify the elements normally found in a results chain or logic model and are, from bottom to top, the first output, the second output, immediate outcomes, first intermediate outcome, second intermediate outcome and ultimate outcomes. The boxes are connected with arrows that point upward, from the lower boxes to those above them, demonstrating the movement in the results chain, from the first output, to the second output, to an immediate outcome, to the first intermediate outcome, to the second intermediate outcome, to the ultimate outcomes. The boxes are also connected by a second set of arrows pointing downward, from the higher boxes to those below, demonstrating the iterative nature of the relationships between the levels of the results chain. In each box, there is a results statement (i.e., an output statement, an immediate outcome statement).

On the right-hand side of the figure, there are five causal link boxes, representing the refined theory of change for the program. These boxes are arranged vertically so that each aligns with one of the spaces between the six boxes in the results chain. Each causal link box is also connected to one of the upward-facing arrows between the boxes on the results chain, demonstrating that the causal links are the change that happens between the boxes (i.e., the levels) in the results chain. Each causal link box contains a description of the risks and assumptions that exist between the levels of the results chain.

On the left-hand side of the figure, there is a box labelled "external influences," representing the external factors or contextual issues that can affect the performance of a program. This box is situated so that it is centred between the six boxes that represent the results chain. On the right side of the box is an arrow that points toward the results chain. This arrow represents the impact that external influences can have on any of the elements of the results chain.

Each of the risks and assumptions in the causal links is labelled to demonstrate the degree of influence that the program has over it. These labels are "C," indicating control; "DI," indicating direct influence; "I," indicating a degree of influence; or "O," indicating that the risk or assumption is outside the influence of the program. While there is no clear dividing line, the risks and assumptions in the causal links between the lower levels of the results chain (e.g., between the output boxes or between the output and immediate outcome boxes) generally tend to be labelled with "C," "DI" or "I," indicating that at the lower levels of the results chain, programs tend to have a greater level of control or influence. Causal links at the higher levels of the results chain are generally labelled with "I" or "O," indicating that programs tend to have less influence at the higher levels of the results chain.

Legend:
[C] - control
[DI] - direct influence
[I] - influence
[O] - outside of influence

References

Chen, H.-T. (2003). Theory-driven approach for facilitation of planning health promotion or other programs. Canadian Journal of Program Evaluation, 18(2), 91–113.
Connell, J. P., & Kubisch, A. C. (1998). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects, and problems. In K. Fulbright-Anderson, A. C. Kubisch & J. P. Connell (Eds.), New approaches to evaluating community initiatives: Vol. 2, Theory, measurement, and analysis. Washington, DC: The Aspen Institute.
Funnell, S. (2000). Developing and using a program theory matrix for program evaluation and performance monitoring. New Directions for Evaluation, 87, 91–101.
Funnell, S., & Rogers, P. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Jossey Bass.
Goertzen, J. R., Hampton, M. R., & Jeffery, B. L. (2003). Creating logic models using grounded theory: A case example demonstrating a unique approach to logic model development. Canadian Journal of Program Evaluation, 18(2), 115–138.
Hatry, H. (2006). Performance measurement: Getting results (2nd ed.). Washington, DC: The Urban Institute Press.
Leeuw, F. L. (2003). Reconstructing program theories: Methods available and problems to be solved. American Journal of Evaluation, 24(1), 5–20.
Leeuw, F. L., Gilse, G. H., & Kreft, C. (1999). Evaluating anti-corruption initiatives: Underlying logic and mid-term impact of a World Bank Program. Evaluation, 5(2), 194–219.
Mason, P., & Barnes, M. (2007). Constructing theories of change. Evaluation, 13(2), 151–170.
Organizational Research Services. (2004). Theory of change: A practical tool for action, results and learning (PDF – 369 KB). Seattle, WA: Annie E. Casey Foundation.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage Publications Inc.
Rogers, P. (2008). Using program theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1), 29–48.
Weiss, C. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Vol. 1, Concepts, methods, and contexts (pp. 65–92). Washington, DC: The Aspen Institute.

6. Using Theory-Based Approaches to Make Causal Inferences

Causal inferences (claims about causes and effects) can be made by testing the theory of change for an intervention against what has been observed, and assessing the influence of other external factors. One such approach discussed in this section is contribution analysis.

Figure 4 sets out seven iterative steps in a contribution analysis. Each step in this process adds to the contribution claim and helps address the weaknesses identified at the previous step. The result of a contribution analysis should be a reasonably credible “contribution story” (i.e., the narrative description of the theory of change and its supporting evidence).

Figure 4: Contribution Analysis Process
Contribution Analysis Process. Text version below:
Figure 4 - Text version

Figure four illustrates a seven-step process for undertaking a contribution analysis.

The steps are as follows:

  • Step 1: Set out the cause-effect issue to be addressed.
  • Step 2: Develop the theory of change.
  • Step 3: Assess the resulting contribution story.
  • Step 4: Gather the existing evidence on the theory of change.
  • Step 5: Reassess the contribution story and challenges to it.
  • Step 6: Seek out additional empirical evidence.
  • Step 7: Revise and strengthen the contribution story.

At step 7, there is an arrow that points back toward the gap between Step 4 and Step 5. The arrow indicates that the process, even though it is laid out as seven steps, is not intended to be linear. Rather, some steps are iterative and will require the evaluator to go back to earlier steps, based on new information and understandings that arise.

It should be noted that Steps 1 to 5 are best undertaken during the design phase of the intervention (see Section 10.0), rather than at the evaluation planning stage. These first five steps comprise a contribution analysis framework.

The seven steps are presented and discussed in sequential order, but as noted below, these steps will usually be subject to constant review and revision.
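To underline the iterative character of Steps 5 to 7, the sketch below reduces the loop to a runnable outline: each round identifies unsupported links and stands in for new data collection by marking them as evidenced. Everything here (the data model, the three-round cap) is an assumption made for illustration, not part of contribution analysis itself.

```python
# Links in the theory of change, flagged by whether existing evidence
# (Step 4) supports them. Data are hypothetical.
links = {
    "outputs -> immediate outcomes": True,
    "immediate -> intermediate outcomes": False,
    "intermediate -> ultimate outcomes": False,
}

def weak_links(links: dict) -> list:
    """Step 5: identify links not yet supported by evidence."""
    return [name for name, supported in links.items() if not supported]

round_number = 1
while weak_links(links) and round_number <= 3:
    for name in weak_links(links):
        # Step 6: stand-in for new data collection (surveys, interviews,
        # case studies); here we simply mark the link as now evidenced.
        print(f"Round {round_number}: gathering evidence on '{name}'")
        links[name] = True
    round_number += 1  # Step 7: revise the story, then loop back to Step 5

print("Contribution story credible:", not weak_links(links))
```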

Step 1: Set Out the Cause-Effect Issue to Be Addressed

The first step in a contribution analysis is to clarify the scope of the evaluation and articulate the evaluation questions to be addressed:

  • Articulate clearly what cause-effect issue is being addressed, usually in the form of questions such as:
    • Has the intervention made a contribution to addressing the problem?
    • What aspects of the intervention or the context led to a contribution being made?
  • Considering the evaluation context and the nature of the decisions to be informed by the evaluation, determine the level of confidence required for the findings. For lower-risk interventions, the confidence level requirements are generally lower; accordingly, the scope of evidence and the depth of analysis required are generally lower.
  • Explore the nature and extent of the contribution expected from the intervention. Determine what evidence would help to confirm that the intervention made a noticeable contribution to the intended results.
  • Assess whether the expected contribution is generally plausible given the nature of the intervention (i.e., determine whether the problem can reasonably be addressed through the intervention). If a contribution is not plausible, the value of further analysis may be limited.

Step 2: Develop the Theory of Change

Developing a theory of change is the second step in a contribution analysis:

  • Build an initial and a refined theory of change for the intervention (see Section 5.0).
  • In building the theory of change, determine the level of detail needed. Contribution analysis is best done with reasonably straightforward and not overly detailed results chains, especially at the outset. At later steps, it may be decided that further detail and refinements are required to further explore some aspects of the theory of change, but these can be added later.

Step 3: Assess the Resulting Contribution Story

At this point it is useful to critically review the contribution story resulting from the developed theory of change:

  • Assess the logic of the links and test the plausibility of the assumptions in the theory of change: Are there significant gaps in the theory? Can they be filled by further refining the theory of change? If not, what other steps are needed to produce a credible evaluation?
  • Identify where evidence is required to strengthen the contribution story: Which links are supported by minimal or no evidence? Which external factors are not well understood?
  • Determine how much the theory of change is contested: Is it widely agreed to? Are specific aspects contested? Are there several theories of change at play?

Some of this thinking and analysis will have started with the development of the theory of change at Step 2. Accordingly, there will often be a process of iteration between these two steps. An independent and critical review of the theory of change (e.g., by an external expert) may be useful at this point in the process. The theory of change at this stage is the prior, hypothesized theory of how the designers of the intervention expected it to work. It is now ready to be tested against the evidence.

Step 4: Gather Existing Evidence on the Theory of Change

Before gathering new data, it is useful and cost-effective to look at relevant existing data and information related to the theory of change:

  • Gather existing evidence, such as prior evaluations and research, ongoing monitoring and environmental scans, and program reports, to provide empirical evidence for the contribution story that the intervention is claiming (e.g., evidence on activities implemented, observed results, assumptions, relevant external factors, and manifestations of risk).
  • Gather evidence on the identified external influences (i.e., data that help determine the plausibility of these influences on the theory of change). In some cases, evaluators may want to develop theories of how the external influences may be affecting the program.

At this stage in the analysis, a theory of change for the intervention has been developed and the available evidence supporting the theory of change has been gathered. The theory of change has, to some extent, been tested. As well, the significant external factors have been identified and any supporting evidence has been gathered.

Step 5: Reassess the Contribution Story and Challenges to It

The contribution story arising from the theory of change can now be critically assessed in light of the existing evidence:

  • To critically assess the contribution story:
    • Determine which links in the theory of change are strong (strong logic; good evidence available that the assumptions held; low risk and/or wide acceptance), and which are weak.
    • Assess the overall credibility of the story: Does the pattern of observed results and links validate the results chain?
    • Determine whether the stakeholders agree with the contribution story developed.
    • Assess the likelihood that any of the significant external factors have had a noteworthy influence on the observed results.
    • Identify the main weaknesses in the story. For example, links in the theory of change could be rated on the likelihood of their being realized (Footnote 3); a simple rating scheme is sketched after this list. Any weaknesses point to areas where additional data or information would be useful.
  • Adjust the theory of change, disaggregating elements if needed, and reassess the contribution story.
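The sketch below is a minimal illustration of the rating idea mentioned above: score each link on the likelihood of its being realized and direct new data collection at the weakest links first. The 3-point scale and the example data are invented for illustration.

```python
# Hypothetical ratings: 3 = strong logic and good evidence, 2 = plausible
# but thinly evidenced, 1 = contested or unsupported.
ratings = {
    "outputs -> immediate outcomes": 3,
    "immediate -> intermediate outcomes": 2,
    "intermediate -> ultimate outcomes": 1,
}

# Weakest links first: these are where additional data would be most useful
priorities = sorted(ratings, key=ratings.get)
print("Gather additional evidence on:", priorities[0])
```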

Thus far in the evaluation process, no new data have been gathered. Rather, the assessment process has relied on existing intervention documents, discussions with management and possibly experts, and a literature review. Step 5 identifies where additional evidence from evaluation is needed to support the contribution story. This evidence can serve as a key driver for the development/redevelopment of the evaluation framework and updates to the related data collection methods.

Step 6: Seek Out Additional Empirical Evidence

It is not until this step that the collection of new primary data for the evaluation begins, informed by the previous steps. At this stage:

  • Evaluators should gather the evidence needed to strengthen the contribution story, using appropriate data gathering techniques discussed in Program evaluation methods: Measurement and attribution of program results (e.g., surveys, interviews, reviews and analyses of administrative data). The goal is to seek data that provide evidence of results occurring; on the validity of the assumptions and risks in the theory of change; and about significant external factors that may have influenced the results achieved.
  • There may be quasi- or experimental designs involving comparison groups that could be used to explore elements of the theory of change. Weitzman, Silver and Dillman (2002) discuss an example of strengthening a theory of change approach using a quasi-experimental comparison group.
  • From a theory-based perspective, several standard data-gathering techniques can be used to strengthen the theory testing:
    • Key informant interviews can be used to test the theory of change, to elicit alternative theories of change which the key informants might have, and to discuss other influencing factors. When collecting responses from interviewees, it is helpful to clarify the evidence on which they are basing their views.
    • Focus groups and workshops are good methods to explore a theory of change, since these allow for discussion and debate about how different stakeholders see the intervention working. Alternative theories of change may emerge and other influencing factors may be identified. These can be used as a means to develop, or further develop, a theory of change, and as a way to determine and identify evidence on the extent to which the theory of change was realized in practice.
    • Case studies can be used in the same way as focus groups and workshops. If put in the context of a theory of change, case studies are more powerful as a data-gathering tool in helping to confirm or refute a theory of change, or the “micro steps” in a theory of change, showing that the theory of change is indeed plausible and not just based on unsupported beliefs.

Step 7: Revise and Strengthen the Contribution Story

Using the new evidence gathered above, evaluators can now build a more credible contribution story, with strengthened conclusions on the causal links in the theory of change. It bears repeating that theory-based approaches such as contribution analysis work best as an iterative process. Accordingly, at this point in the analysis, the evaluator may see a need to return to Step 5 or even earlier steps to reassess the strengths and weaknesses of the theory of change and the contribution story, and to decide whether further analysis would be useful or possible.

References

Weitzman, B. C., Silver, D., & Dillman, K.-N. (2002). Integrating a comparison group design into a theory of change evaluation: The case of the urban health initiative. American Journal of Evaluation, 23(4), 371–386.

7. Strengths and Weaknesses of Theory-Based Approaches to Evaluation

Theory-based approaches to evaluation are not a panacea for attributing results to programs. They do, however, present a number of positive features:

  • They can often be undertaken in circumstances where other approaches (e.g., experimental designs) cannot be used.
  • They allow evaluators and intervention managers to tell a contribution story that makes sense to those involved.
  • They open the black box of the intervention, allowing evaluators to arrive at findings on why interventions are working or are not working.
  • They allow conclusions to be drawn on the cause-effect elements of an intervention.
  • They can help leverage existing data to a greater extent and help focus new data collection on areas where there are significant gaps, resulting in more efficient and effective use of evaluation resources.

At the same time, there are clear challenges in using theory-based approaches:

  • They do not necessarily provide a quantitative measure of the size of the contribution an intervention is making. If this is required, there may still be a need for analysis that supports measurement of the size of observed results.
  • Developing a theory of change can be difficult because it involves synthesizing a range of views and information sources as well as obtaining the agreement of stakeholders.
  • In some situations, developing a theory of change can be time-consuming and/or require significant amounts of data. However, in some cases (e.g., low-risk programs or low-complexity programs where the tolerance for uncertainty in attribution is higher), there may be an opportunity to use a calibrated “lighter-touch” approach (e.g., a less detailed theory of change with less testing). In doing so, evaluators may be able to add rigour to, and enhance the credibility of, these evaluations, including evaluations involving small sample sizes (White and Phillips, 2011) or evaluations incorporating expert opinion as part of their methodology.
  • More than one theory of change may emerge. If multiple theories of change emerge and are strongly held, they may have to be tested against the evidence to see which theory best reflects reality. In such cases, evaluators may want to focus their efforts on where the theories differ, exploring the reasons for, and implications of, this difference.

Readers are referred to Weiss (1997) and Mackenzie and Blamey (2005) for more detailed discussions about the challenges of using theory-based evaluations.

When introducing theory-based evaluation approaches to their departmental evaluation tool box, federal evaluators may wish to start small, with relatively simple interventions, and build up experience, helping to overcome some of the challenges described above. The references in this document provide a number of examples and discussions of using theory-based approaches.

References

Global Environmental Facility Evaluation Office. (2009). Review of outcomes to impact (ROtI): Practitioner’s handbook (PDF – 1420 KB). Washington, DC: Global Environmental Facility.
Mackenzie, M., & Blamey, A. (2005). The practice and the theory: Lessons from the application of a theories of change approach. Evaluation, 11(2), 151–168.
Mayne, J. (2001). Addressing attribution through contribution analysis: Using performance measures sensibly. Canadian Journal of Program Evaluation, 16(1), 1–24.
Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect (PDF – 130 KB) (Brief No. 16). The Institutional Learning and Change (ILAC) Initiative.
Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501–524.
Weitzman, B. C., Silver, D., & Dillman, K.-N. (2002). Integrating a comparison group design into a theory of change evaluation: The case of the urban health initiative. American Journal of Evaluation, 23(4), 371–386.
White, H., & Phillips, D. (2011). Attribution of cause and effect in small n impact evaluations (PDF – 979 KB). International Initiative for Impact Evaluation (3ie), London Seminar Series, November 17, 2011.

8. Using Theory-Based Approaches to Evaluate Cause-Effect Issues in Different Types of Interventions

The federal evaluation community is responsible for evaluating various types of interventions and continually seeks innovations in how it does so. These interventions include policies, horizontal initiatives, low-risk programs, grants and contributions, and cluster groupings of interventions. This section discusses how theory-based approaches can be applied to these endeavours. In some cases, such as policy evaluation, there is experience to build on; in other cases, suggestions are made that need to be explored in practice.

Evaluating Policies

Evaluating government policies can be challenging. Government policies typically comprise a number of different program-level interventions (or other activities) for realizing the objectives of the policy. These program interventions may be of quite different types, reflecting the use of different policy instruments, such as direct spending, legislation, regulation or taxation, and the specific types of activities undertaken within these broad groups. There may also be interactions among the various program interventions, or between policies, which increase the challenge of evaluating such policies.

The application of theory-based approaches in these cases could take a number of forms, including but not limited to the following:

  • Develop a theory of change for the policy, outlining how it is expected to work, showing the suite of supporting program interventions and how they are expected to work in concert to achieve the expected results of the policy. This theory can then provide a basis for deciding how to go about evaluating the policy.
  • Develop theories of change for each program intervention, aimed at assessing the contribution each is making to the overall policy (e.g., through contribution analysis). The findings from the individual evaluations can then be aggregated and synthesized to arrive at findings and conclusions about the policy. In the individual theories of change developed for each program intervention, the other related program interventions (i.e., the other program interventions supporting the policy) can be treated as external factors. In this way, an indication of the interactions among the interventions can be obtained.
  • Rather than focusing on program interventions, it may be useful to identify the different types of strategiesFootnote 3 being used to implement the policy and to use these intervention strategies as the focus of the theory-based evaluation. This perspective seeks to answer questions such as: Which policy mechanisms work well? For whom? In what circumstances? It becomes useful when the various program interventions use different policy mechanisms. If each program intervention uses a different policy mechanism, each could be evaluated separately, as discussed above; however, an analysis of policy cohesion may then be needed to provide an integrated picture of the impact of the policy as a whole.

In all cases, it should be remembered that theories of change are abstractions aimed at developing an overview of how an intervention works. In certain circumstances, especially in more complex settings, it may also be useful to undertake detailed case studies to test theories, aspects of theories or the multiple theories of change that have been developed.

Stame (2004) and Vaessen and Todd (2007, 2008) discuss approaches to evaluating policy-type interventions.

References

Bemelmans-Videc, M.-L., Rist, R. C., & Vedung, E. (Eds.). (1998). Carrots, sticks and sermons: Policy instruments and their evaluation. Piscataway, NJ: Transaction Publishers.
Stame, N. (2004). Theory-based evaluation and types of complexity. Evaluation, 10(1), 58–76.
Vaessen, J., & Todd, D. (2007). Methodological challenges in impact evaluation: The case of the Global Environment Facility (GEF). Antwerp, Belgium: Institute of Development Policy and Management, University of Antwerp.
Vaessen, J., & Todd, D. (2008). Methodological challenges of evaluating the impact of the Global Environment Facility’s Biodiversity Program. Evaluation and Program Planning, 31(3), 231–240.

Horizontal Initiatives

From a methodological perspective, horizontal initiatives are quite similar to the policy evaluation case described above: the horizontal initiative can be seen as the overarching policy, and the individual program components as the equivalent of the program interventions being implemented to fulfill the policy goals. Horizontal initiatives present an additional challenge, however, in that their component interventions are managed by different departments. They thus require significant governance and coordination of activities, including program management and performance measurement. This need for governance and coordination extends to the planning and undertaking of evaluations of horizontal initiatives.

Similar to the policy evaluation case, a theory-based approach to evaluating a horizontal initiative may focus on the overarching intervention theory (i.e., the horizontal theory); on the theories of the program interventions being undertaken by partner departments (i.e., the different departmental strategies being used); or, ideally, on both. The last option allows for a contribution analysis based on the interconnected theories of change. As in the theory-based approach to policy evaluation discussed above, interventions in other departments could be treated as external factors affecting the theories of change at work in other parts of the horizontal initiative. At the same time, specific interventions may also contribute to other theories of change within a department.

In certain types of horizontal initiatives, departments may use different intervention mechanisms as part of the collective effort. This may provide a natural experiment for learning about and comparing experiences using theory-based approaches.

Low-Risk Interventions

In a low-risk intervention, the combination of risk factors (e.g., low materiality, low impact of program failure, evidence from a prior evaluation suggesting good program performance, no significant questions about program relevance, and other factors such as contextual stability) suggests either that an in-depth evaluation is not needed at this time or that the evaluation should place greater emphasis on some program elements and less on others. In these cases, the evaluation approach, scope, design and methods can be calibrated to the required level of confidence. A “light-touch” theory-driven approach may allow the evaluation team to make causal inferences at a higher level of rigour than other approaches would allow, while still using limited resources. This might entail:

  • using a simplified results chain with a basic theory of change;
  • identifying a few critical assumptions and risks;
  • confirming the reasonableness of the intervention design (i.e., that the theory of change makes sense);
  • confirming that the planned activities of the intervention were carried out, resulting in the planned outputs;
  • if monitoring data are available, confirming that at least some of the immediate and intermediate outcomes occurred;
  • undertaking selected interviews with key stakeholders to confirm that the theory of change is working as intended and that other influencing factors did not play a major role; and
  • drawing conclusions on the extent to which the program activities are making a difference.

Depending on the case, any of these steps could be enhanced to provide a greater level of confidence in the conclusions reached. The design would be tailored to the perceived risk and level of confidence needed.

Grants and Contributions Programs

The evaluation of grants and contributions programs (Gs & Cs) presents a number of challenges that a theory-based approach can help address. These include:

  • the limited capacity of organizations receiving the funds to gather data and assess their own services or projects;
  • operating contexts, or limits on the capacities of fund-receiving organizations, that prohibit the use of experimental or quasi-experimental evaluation designs; and
  • data collection from recipients that sometimes yields only positive feedback (“yes, things are working well”), offered on the assumption that it will help ensure approval of the next funding application.

In many Gs & Cs cases, it may be useful to develop two theories of change: one for the department and one for the recipient organization(s). The departmental theory of change would set out the assumptions and beliefs about why, with additional funds and perhaps additional capacity building, the recipient organization would be able to deliver, and report on, the expected enhanced services and benefits for Canadians. The recipient’s theory of change is the one behind the delivery of services to the intended beneficiaries (i.e., why the recipient believes that providing the expanded services will help the beneficiaries). These two theories of change would provide a framework to identify, clarify and separate the department’s direct interests and accountabilities from those of the recipient.

A recipient theory of change may also provide the basis for the recipient to tell its performance story in a credible manner. This may act as an incentive for the recipient to gather evidence for, and improve the telling of, its performance story. This could include gathering “Most Significant Change” stories (Dart & Davies, 2003) from its clientele to confirm the theory of change and to identify the key contextual factors and mechanisms of change that will help it deliver better services.

A theory-based evaluation could provide a means for the department to make more credible claims about the difference its funds are making by verifying the theories of change at play. In addition, with a theory of change as a framework, interviews, focus groups and case studies (see Section 7.0) could be used to provide better information on the robustness of the theories.

References

Dart, J., & Davies, R. (2003). A dialogical, story-based evaluation tool: The most significant change technique. American Journal of Evaluation, 24(2), 137–155.

9. Other Uses of Theory-Based Approaches

As previously suggested, theories of change can do more than facilitate an evaluation. In particular, program theories and theories of change can be used by:

  • intervention designers for planning and designing interventions;
  • intervention managers for designing Performance Information Profiles (PIPs); and
  • those responsible for reporting, to provide a framework for reporting on the performance of an intervention.

Planning and Designing Interventions

It is common for evaluators to find that a logic model, results chain or theory of change has not been developed, or has been only partially developed, for the intervention to be evaluated. Developing a basic or more detailed theory of change as part of the upfront intervention design, perhaps with help from evaluators, provides a number of benefits, including:

  • providing a means to reach agreement among stakeholders on just how the intervention is expected to contribute to its intended aims. In more complicated situations, it can help clarify the intended contributions of the various subcomponents of a broader intervention;
  • identifying aspects of the intervention design that need attention, in particular, ensuring that the key assumptions required for the intervention to work can be directly or indirectly influenced by the intervention;
  • identifying where research may be needed to better understand how the intervention works; and
  • communicating to others what the intervention is intending to achieve and how it will produce that result.

Designing Performance Information Profiles (PIPs)

The theory of change helps identify not only which results should be monitored but also which other factors should be followed so that progress can be tracked and the intervention kept on course. A review of the theory of change can help identify which results, assumptions, risks and external factors would be most useful for the manager to monitor through performance measures. Properly introduced, theories of change ought to be attractive to managers, as they contribute greatly to effective managing for results (Mayne, 2009). Finally, if good monitoring data are available, stronger and more efficient evaluations can be undertaken.

Reporting on Performance

The theory of change provides a framework for the intervention, and collectively for the department, to tell its performance story. Data and information that are collected can be reported against the theory of change to help show that the intervention is making a difference. The framework also provides a rational and transparent basis for selective reporting on performance.

References

Anderson, A. A. (2004). Theory of change as a tool for strategic planning: A report on early experiences. Washington, DC: The Aspen Institute Roundtable on Community Change.
Chen, H.-T. (2003). Theory-driven approach for facilitation of planning health promotion or other programs. Canadian Journal of Program Evaluation, 18(2), 91–113.
Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Jossey-Bass.
Mayne, J. (2009). Results management: Can results evidence gain a foothold in the public sector? In O. Rieper, F. Leeuw, & T. Ling (Eds.), The evidence book. Piscataway, NJ: Transaction Publishers.

10. From Concept to Practice

How a theory-based evaluation is carried out will vary from case to case. The material in this document stresses that theory-based approaches are ways of thinking about, and designing, evaluations that address cause-effect issues. This guide should be considered not as a how-to manual but as a set of principles and guidelines for making use of theory-based approaches.

Theory-based approaches were initially developed to help evaluate more complicated and complex interventions, and much of the literature discusses this experience. These approaches also apply to more straightforward evaluation cases, however, and are particularly useful when experimental designs are not practical. They can help not only strengthen otherwise weak evaluation designs but also provide evidence on the perennial evaluation question of attribution: Has the intervention made a difference?

The challenge to federal evaluators is to build a base of practical experience in using theory-based approaches.

11. For More Information

For more information on evaluations and related topics, please visit the Evaluation section of the Treasury Board of Canada Secretariat website.
Alternatively, please contact:
Results Division
Expenditure Management Sector
Treasury Board of Canada Secretariat
Email: results-resultats@tbs-sct.gc.ca
