
Evaluation Guidebook for Small Agencies



Section Four: Evaluation Cycle

This section

  • provides an overview of the evaluation life cycle;
  • outlines considerations for planning an evaluation at the program level;
  • describes the Results-based Management and Accountability Framework (RMAF) and its components;
  • outlines evaluation methods;
  • describes steps for carrying out an evaluation, including analyzing data; and
  • provides an overview of evaluation report writing.
Overview of Evaluation Life Cycle


This section provides an overview of evaluation from preparing an RMAF to collecting the information and writing the report. The graphic below illustrates the evaluation cycle.

The focus of the section will be on the first three steps: (1) planning; (2) implementing; and (3) reporting. More specifically, this section is presented as follows:

Drafting the plan at the program level (or RMAF)

  • describing the project
  • preparing a logic model
  • preparing the performance measurement strategy
  • developing an evaluation strategy

Carrying out the evaluation

  • collecting information
  • analyzing information

Writing the report

The fourth step in the overall evaluation life cycle, using the evaluation results, will be discussed in Section 6.

4.1  Planning the Evaluation

To ensure that the evaluation will be a useful product, the details need to be worked out early in the evaluation life cycle. At the outset, it is important to establish a basic understanding of why the evaluation is being carried out. Below is a checklist for planning your evaluation at the program level.

Checklist for Planning an Evaluation (Program Level)

Considerations

1.  Establish understanding of why evaluation is being carried out.

2.  Identify who will use the evaluation to make decisions (e.g., individual administrators, program staff, clients or consumers, legislators, senior management, other stakeholders); where the evaluation findings will be reported (DPR, annual report, departmental Web site); and what types of decisions might be made.

3.  Determine whether management responses/action plan will be required.

4.  Develop an evaluation strategy that includes the following:

  • a description of the program
  • scope and objectives of evaluation
  • evaluation issues and questions
  • data collection methods and sources of information

 

4.2  RMAF: Tool for Planning

Since the Results-based Management and Accountability Framework (RMAF) is a common tool for evaluation planning in the federal government, this subsection describes the RMAF and its components. It focuses on the profile, the logic model and the evaluation strategy, and provides only a high-level description of the ongoing performance measurement strategy.

While the previous section dealt with the overall agency evaluation plan, this subsection focusses on planning an evaluation for a specific program. The evaluation framework component of the RMAF should, however, link to the Agency Evaluation Plan.

Small Agency highlights: In small agencies with only one or two business lines, the evaluation may be organization-wide (e.g., an evaluation of an agency's core business). In many instances, RMAFs are being developed for entire small agencies rather than for individual business lines.

 

RMAFs are a management tool, somewhat like a guidepost or compass for an organization, program, policy or initiative.

 

4.2.1  Frequently Asked Questions on RMAFs

 

What is an RMAF?

An RMAF is a plan that describes how a program will be measured, evaluated and reported.

 

Why Use an RMAF?

It serves as a useful guide in helping managers to measure, evaluate and report on their programs. It is a good idea to develop an RMAF (or framework) when a program is being designed, both to establish reasonable links between the proposed activities and the expected results and to set out the data collection requirements. An RMAF also links to an agency's Management, Resources and Results Structure (MRRS).

 

What is the difference between an RMAF and an Evaluation Framework?

RMAFs have generally replaced evaluation frameworks in the federal government. RMAFs include plans for both performance measurement and evaluation activities in one document, all of which will build on the program theory/logic model as the cornerstone of the RMAF. Stand-alone evaluation frameworks do not necessarily include a performance measurement and reporting strategy.

 

When is an RMAF required?

An RMAF is mandatory for certain categories of programs with transfer payments. These include grant programs (class grants), individual contributions, and contribution programs.

RMAFs are considered to be a good management practice and their use is generally encouraged in the Evaluation Policy and the TBS RMAF Guidance document.

 

Who is Involved in Developing an RMAF?

There are two key parties involved in the development and implementation of an RMAF: program managers and evaluation managers. For RMAFs that involve Treasury Board submissions, analysts from the Treasury Board of Canada Secretariat may also be involved.

An RMAF is manager-led, with evaluators acting as facilitators. Managers hold the primary responsibility for the development and implementation of the RMAF. Managers are responsible for

  • ensuring that the content of the framework is accurate; and
  • implementing the RMAF.

The evaluation function is responsible for the "Evaluation Plan" section.

Key stakeholders should also be consulted in preparing elements of the RMAF. Their early buy-in helps to support the implementation process.

 

What are the Components of an RMAF?

Profile – contains a description of the program including context and need, stakeholders and beneficiaries, organizational and governance structures, and resource allocations.

Planned Results and Program Theory – includes a description (planned results and delivery strategy) and a graphical illustration (logic model) that shows how the activities of a program are expected to lead to the achievement of the planned results.

Monitoring and Evaluation Plan – a plan for ongoing performance measurement and evaluation activities. This component also includes a matrix of monitoring and evaluation reporting commitments.

 


An RMAF should be concise and focussed. This will help to support its implementation.

 

Checklist for Developing an RMAF

Required Element

1.  Establish RMAF working group

2.  Assess internal capacity

Prepare Description

3.  Profile

4.  Planned Results and Program Theory (planned results, delivery strategy and logic model)

Prepare Monitoring Plan (or Ongoing Performance Measurement Plan)

5.  Determine indicators (using logic model as guide)

6.  Determine data sources, data collection methods and timing

7.  Identify responsibility for data collection

8.  Estimate costs for monitoring activities

Prepare an Evaluation Plan

9.  Establish understanding of why evaluation is being carried out

10.  Determine issues and evaluation questions

11.  Cover issues of relevance, success and cost-effectiveness

12.  Consider Expenditure Review Committee questions

13.  Determine appropriate evaluation design, data collection methods, data source, and frequency

14.  Identify responsibility for data collection

15.  Estimate costs for evaluation activities

Prepare a Reporting Strategy

16.  Identify all monitoring and evaluation reports (include DPR, RPP, annual performance report, compliance audit and a summative evaluation)

17.  Indicate timeframe for reporting performance information

18.  Indicate responsibility for reporting the performance information and evaluation results

19.  Indicate who will use the report

4.2.2  Profiling the Program

A clear understanding of the organization and the program is needed to guide monitoring and evaluation activities. The profile typically includes a summary of the context, objectives, key stakeholders and beneficiaries, organization and governance structures, and resources. The profile should provide a clear understanding of what the program aims to achieve and how.

Typical Profile Components*

Context

Clearly outlines the need and rationale for the program.

Objectives

Clearly states the objectives of the program. Describes how the objectives link to the department's strategic outcomes as identified in its Program Activity Architecture.

Key Stakeholders and Beneficiaries

This section of the profile should provide the reader with a precise understanding of who is involved in the program. Programs may involve many stakeholders with different roles, perspectives, and management information needs. If information is available, identify targets in terms of reach to project beneficiaries.

Organization and Governance Structures

Describes the organization and governance structures. Identifies decision-making authority and main roles and responsibilities of all project stakeholders (including delivery partners).

Resources

Identify annual resources allocated to the agency and each delivery partner (where applicable). Specify costs for monitoring and evaluation activities.

*  In the past, profiles have typically included planned results and the delivery strategy. These components may now be included in the Planned Results and Program Theory (Logic Model) section.

 

Example:
The Canadian Forces Grievance Board (CFGB)

The following is an excerpt from the CFGB's Profile regarding governance structure:

The Board is presently made up of a Chairperson, a full-time Vice-Chairperson, a part-time Vice-Chairperson and three part-time Members. All are appointed by the Governor-in-Council, for terms that initially do not exceed four years.

Grievance Officers, working in the Grievance Analysis and Operations unit are responsible for analyzing grievances, conducting research, including the research of relevant jurisprudence, and drafting the initial findings and recommendations on grievances, in order to assist the Board Members in their work. Lawyers in the Legal Services unit are responsible for conducting a legal review of the findings and recommendations before submission to the Board Members, and the Board Members are accountable for the findings and recommendations that are submitted to the Chief of Defence Staff. The Chairperson of the Board is ultimately accountable for the work of the Board.

The Executive Director, who oversees the delivery of corporate support services, is accountable for the overall sound management of the Board, including its financial management. However, the Chairperson is ultimately accountable for all facets of Board management.


 

Checklist for Developing a Profile for an RMAF

Considerations

1.  Have you consulted appropriate strategic and descriptive documents?

2.  Have you consulted with appropriate stakeholders to obtain missing or additional information?

Does the profile…?

3.  Include the main components (context, objectives, key stakeholders and beneficiaries, organization and governance structures, and resources)?

4.  Provide a clear understanding of what the program intends to achieve as well as an appreciation for how it intends to do so?

5.  Clearly describe the context for the program?

6.  Explain need and relevance?

7.  Fully, but concisely, describe the program? (5-7 pages as a general rule)

8.  Use neutral language? (avoid cheerleading)

9.  Identify the scope and magnitude of the program?

10.  Describe how the objectives link to the agency strategic objectives as identified in its Program Activity Architecture?

11.  Provide a clear statement of the roles and responsibilities of the main stakeholders (including delivery partners)?

12.  Outline governance structure from the perspective of accountability?

4.2.3  Developing the Logic Model

 

What is a Logic Model?

A logic model is a diagram or picture that shows the causal links from activities, through outputs, to the final results. It is a visual way of expressing the rationale, thought process or theory behind an organization, program or initiative, and a representation of how the organization or initiative is expected to lead to the planned results.


A logic model can be applied to an organization, policy, program or initiative. It can be used for the purposes of planning, project management, evaluation and communication.


Logic models can help clarify objectives and focus the evaluation on results.

 

Components of a Logic Model

While logic models can vary considerably in terms of how they look, they typically have three main components – activities, outputs, and results.

Activities (what we do) – The main actions of the project. The description may begin with an action verb (e.g., market, provide, facilitate, deliver).

Outputs (what we produce) – The tangible products or services produced as a result of the activities. Outputs are usually expressed as nouns, typically without modifiers, and they are tangible and can be counted.

Results (why we do it) – The changes or differences that result from the project outputs. There can be up to three levels of results (immediate, intermediate, and ultimate or final). Results are usually modified (e.g., increased, decreased, enhanced, improved, maintained).

  • Immediate results – Those changes that result from the outputs. These results are most closely associated with or attributed to the project.
  • Intermediate results – Those changes that result from the immediate results and will lead to the ultimate results.
  • Ultimate results – Those changes that result from the intermediate results. Generally considered a change in overall "state" and can be similar to strategic objectives. Final results should be linked to the agency's strategic results as specified in the MRRS.

 

Some logic models also include other features, such as:

  • Reach – To which target groups/clients are the activities directed?
  • Inputs – What resources are used?
  • Internal/External Factors – The identification of factors within and outside control or influence.


 

Example: National Parole Board's (NPB) Logic Model
for the Aboriginal Corrections Component of the
Effective Corrections Initiative

Examples of short-term results

  • Communities are better informed about the NPB and conditional release.
  • Hearing processes for offenders in the Nunavut Territory are culturally appropriate.

Examples of long-term results

  • The conditional release decision-making process is responsive to the diversity within the Aboriginal offender population.
  • The NPB has better information for decision making, including information on the effects of their history, when conducting hearings.

 

Are there Different Types of Logic Models?

Logic models vary considerably in terms of how they look. They can flow horizontally or vertically. The logic model type you choose should be appropriate to your agency and to your stakeholders. Whatever type is chosen, the model should provide sufficient direction and clarity for your planning and evaluation purposes. Flow charts or tables are the most common formats used to illustrate logic models.

Note that the logic model, irrespective of the types described below, will help to focus the evaluation on the results of your program.

 

TYPE 1: Flow Chart or Classic Logic Model

The flow chart or classic logic model illustrates the sequence of results that flow from activities and outputs. It is a very flexible format as long as the three core components of the logic model are presented: activities, outputs, and results. You can include any number of result levels to ensure that your logic model accurately depicts the sequence of results.

The cause-effect linkages can be explained by using "if-then" statements. For example, if the activity is implemented, then these outputs will be produced. If the immediate result is achieved, then this leads to the intermediate result, and so on.
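
To make the if-then reading concrete, the sketch below (not part of the guidebook) represents a hypothetical logic model in Python and prints the corresponding if-then statements. All program content shown is invented for illustration only.

```python
# A minimal sketch of a flow chart logic model, assuming a hypothetical program.
# It chains activities, outputs and three result levels and reads the chain
# back as "if-then" statements.

from dataclasses import dataclass


@dataclass
class LogicModel:
    activities: list[str]
    outputs: list[str]
    immediate_results: list[str]
    intermediate_results: list[str]
    ultimate_results: list[str]

    def if_then_statements(self) -> list[str]:
        """Walk the causal chain and express each link as an if-then statement."""
        levels = [
            ("activities are carried out", self.activities),
            ("outputs are produced", self.outputs),
            ("immediate results are achieved", self.immediate_results),
            ("intermediate results are achieved", self.intermediate_results),
            ("ultimate results are achieved", self.ultimate_results),
        ]
        statements = []
        for (label_a, level_a), (label_b, level_b) in zip(levels, levels[1:]):
            statements.append(
                f"If the {label_a} ({'; '.join(level_a)}), "
                f"then the {label_b} ({'; '.join(level_b)})."
            )
        return statements


# Hypothetical outreach program, for illustration only.
model = LogicModel(
    activities=["deliver information sessions"],
    outputs=["sessions held", "information kits distributed"],
    immediate_results=["increased awareness of the program"],
    intermediate_results=["increased use of program services"],
    ultimate_results=["improved client outcomes"],
)

for statement in model.if_then_statements():
    print(statement)
```

Reading the printed statements aloud is a quick way to test whether each link in the model is plausible.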


 

 


The flow chart logic model makes you think carefully about the linkages between specific activities, outputs and outcomes. What outputs result from each activity? What outcomes result from each output?

 

TYPE 2: Results Chain Model

This type of model is also referred to as a performance chain. While it is similar to the flow chart model, it does not isolate the specific activities, outputs or results. The results chain, therefore, does not show the same detail with respect to the causal sequence of outputs and results.

Both types of logic models, however, are used as a structure for describing the expectations of a program and as a basis for reporting on performance. Like the flow chart model, the results chain is based on the rationale or theory of the program.

 


Source: Six Easy Steps to Managing for Results: A Guide for Managers, April 2003, Evaluation Division, Department of Foreign Affairs and International Trade.

Other Considerations

  • The results chain is less time-consuming to develop.
  • The flow chart logic model enhances understanding of how specific activities might lead to results.
  • You may develop one, two, or three result levels, depending on the relevance to your program or organization.

 



How Do I Build a Logic Model?


The following graphic presents an overview of the three steps for logic model development.

See Appendix E for more detailed information on building logic models.

 

Is my agency ready to build a logic model?

  • Is there sufficient time and commitment to develop the logic model internally?
  • Is there familiarity with respect to logic model development?
  • Are there sufficient planning and communication skills (key to building consensus and obtaining commitment)?
  • Is there sufficient objectivity or neutrality?
  • Does the program involve only my Agency in the federal government?

If you answered "yes" to these questions, you are probably ready to build a logic model. For details on how to build a logic model, please refer to Appendix E.

If you answered "no" to any of the first four questions, you may wish to contract out the development of the logic model.

If you answered "no" to the last question, then the initiative may be considered a "horizontal initiative." There are typically more challenges to developing a logic model for a horizontal initiative since you have to involve more stakeholders with different perspectives and opinions.

For further information on RMAFs, see Preparing and Using Results-based Management and Accountability Frameworks, April 2004.

 

4.2.4  Developing the Performance Measurement Monitoring Plan for an RMAF

While this is an evaluation guidebook, it is worth noting that RMAFs also include a monitoring plan. This plan should generate a timely flow of information to support decision making on an ongoing basis. It is important to note that some data required for evaluation purposes can be collected on an ongoing basis as part of the performance measurement system.

The indicators for the performance measurement strategy are developed from the logic model's outputs and results. For each indicator, the data source, collection method, timing and frequency of the data collection, and responsibility for measurement must be identified.
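
As an illustration only, the sketch below shows one way the elements listed above (indicator, data source, collection method, timing and frequency, responsibility) might be recorded for each indicator. The records, field names and program content are hypothetical, not a prescribed format.

```python
# A minimal sketch, assuming hypothetical indicators, of recording the
# monitoring-plan elements required for each indicator.

from dataclasses import dataclass


@dataclass
class MonitoringIndicator:
    logic_model_element: str   # output or result the indicator measures
    indicator: str
    data_source: str
    collection_method: str
    timing_frequency: str
    responsibility: str


plan = [
    MonitoringIndicator(
        logic_model_element="Output: information kits distributed",
        indicator="Number of kits distributed per quarter",
        data_source="Program administrative records",
        collection_method="Extract from tracking database",
        timing_frequency="Quarterly",
        responsibility="Program manager",
    ),
    MonitoringIndicator(
        logic_model_element="Immediate result: increased awareness",
        indicator="Per cent of clients aware of the program",
        data_source="Client survey",
        collection_method="Annual questionnaire",
        timing_frequency="Annually",
        responsibility="Evaluation manager",
    ),
]

for item in plan:
    print(f"{item.indicator}: {item.data_source}, "
          f"{item.timing_frequency}, {item.responsibility}")
```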


4.2.5  Developing the Evaluation Strategy

 

This subsection presents

  • an overview and description of an evaluation strategy at the program level;
  • the development of evaluation issues and questions;
  • the development of indicators;
  • an overview of evaluation designs; and
  • an overview of data collection methods.

 

Overview of an Evaluation Strategy

The evaluation strategy includes the following components:
  • evaluation issues and questions;
  • corresponding indicators;
  • sources of data (including performance measurement reports);
  • data collection method;
  • timing; and
  • estimated costs for evaluation activities.

The evaluation strategy may be presented in matrix format similar to the example below.

Evaluation Issue: Success
Evaluation Question: To what extent has the initiative improved staff evaluation capacity?
Indicator: Quality of evaluation reports
Data Source: Evaluation experts
Data Collection Method: Expert review
Timing: Year 2
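
Where the matrix is kept electronically, a simple completeness check can confirm that every evaluation question has an indicator, a data source, a collection method and a timing entry. The sketch below uses the sample row above; the field names and the check itself are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch, assuming the matrix is stored as plain records, of a
# completeness check over the evaluation strategy matrix.

REQUIRED_FIELDS = ("indicator", "data_source", "collection_method", "timing")

matrix = [
    {
        "issue": "Success",
        "question": "To what extent has the initiative improved staff evaluation capacity?",
        "indicator": "Quality of evaluation reports",
        "data_source": "Evaluation experts",
        "collection_method": "Expert review",
        "timing": "Year 2",
    },
]

for row in matrix:
    missing = [f for f in REQUIRED_FIELDS if not row.get(f)]
    if missing:
        print(f"Question '{row['question']}' is missing: {', '.join(missing)}")
    else:
        print(f"Question '{row['question']}' is fully specified.")
```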

 

Articulating the Strategy Efficiently and Effectively to Meet the Agency's Needs

An evaluation strategy should balance the need for timely and credible information with the need for practicality. Good linkages between evaluation and performance measurement will help to ensure that performance information is used as a source of information for evaluations. Linkages between planning and evaluation will help to ensure that the evaluation strategy is appropriately focussed and directed towards information needs.

Some things to think about when developing an evaluation strategy:

  • Consider a balanced, mixed methods approach to evaluation design. This helps to strengthen the evaluation design and enhance the credibility of the findings.
  • Place appropriate emphasis on information needs (i.e., process issues) and practicality to guide your development of an evaluation strategy.
  • Target evaluation questions to the most pressing evaluation concerns.
  • Only collect information relevant to those questions.
  • Where practicable, consider strategies for integrating information from performance measurement and evaluation activities with other management information.
  • Where applicable, consider using existing data as a possible source of information for the evaluation.

 

Sample Evaluation Matrix:
The Canadian Forces Grievance Board (CFGB)

Evaluation Issue/Question

ISSUE 2.5 – Cost-effectiveness: Can the quality of the CFGB's findings and recommendations (F&R) be maintained at a lower cost and in less time?

Indicators

  • Reduction of the average cost per grievance since the CFGB's establishment
  • Level of satisfaction among key stakeholders with the quality of the F&R

Data Sources/Methodology

  • Interviews with the CFGB's senior operational managers
  • Interviews with key stakeholders (CDS, DG-CFGA, ADM HR-Mil)
  • Case Management and Tracking Systems

 

 

Developing Evaluation Issues and Questions

Evaluation issues are the broad areas that need to be explored within an evaluation, while evaluation questions are the more specific research questions that must be answered in order to address each evaluation issue.

Identifying the evaluation issues and questions guides the development of the strategy and ensures that all essential issues will be addressed during the evaluation. The issues are used to elaborate a set of indicators and data collection strategies, which, once implemented, help to ensure that the information necessary for the evaluation is available when it is needed. The evaluation strategy therefore needs to be linked to the ongoing performance measurement strategy, as some evaluation data will be collected through ongoing performance measurement activities.

For information on how to develop a specific list of issues and questions, see Appendix E. For the main evaluation issue areas please refer to the following diagram.

Expenditure Review Questions

All evaluations should address the Expenditure Review Committee's Seven Areas to Question. In addition to the three traditional evaluation issue areas listed above, program spending will also be assessed against specific questions in relation to the following:

  • Public Interest
  • Role of Government
  • Federalism
  • Partnership
  • Value for Money
  • Efficiency
  • Affordability

See Appendix D for the specific tests.


Determining Appropriate Indicators

What are Performance Indicators?

Performance indicators are direct or indirect measures of an event or condition. An indicator is a measuring device that shows change over time. Indicators are often quantitative (i.e., based on numbers or objective information) but can also be qualitative (i.e., based on narrative or subjective information). An indicator is a means of comparing planned results with actual results. There are many ways to think about indicators.

  • Proxy indicators. Proxy indicators are sometimes used to provide information on results where direct information is not available. For example, the percentage of cases that are upheld on appeal could be a proxy indicator for the quality of decisions.
  • Quantitative indicators. Quantitative indicators are statistical measures such as numbers, frequencies, percentiles, ratios, and variances; for example, the percentage of Web site users who find and obtain what they are looking for.
  • Qualitative indicators. Qualitative indicators are judgment and perception measures of congruence with established standards, the presence or absence of specific conditions, the extent and quality of participation, or the level of beneficiary satisfaction, etc. An example would be opinions on the timeliness of services.
  • Output and result indicators. There are also output and result indicators. Output indicators are those indicators that measure the outputs (products and services). Result indicators measure the results or changes of a program.

 

An example of various indicators is illustrated in the table below. See Appendix E for a review of how to develop indicators for your agency.

Outputs

  • Quantity produced/delivered/served – Number of clients served per month. (If your agency produces policy papers or research studies, the output indicator might be the number or quality of policy papers/research studies.)
  • Quality of service – Achievement of standards for service delivery
  • Client satisfaction – Per cent of clients satisfied with product and service delivery
  • Efficiency – Average cost per unit delivered

Results

  • Immediate – Number of person-weeks of training and career placement projects completed. (If your output is quality of policy papers, then an immediate result might be "increased awareness of the policy" or "better incorporation of policy principles with other relevant programs/policies.")
  • Intermediate – Number of successful job placements resulting from training and career projects
  • Ultimate – Individuals' self-rated health status in terms of well-being and functional abilities

Source: Adapted from First Nation Self-Evaluation of Community Projects: A Guidebook on Performance Measurement.
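
As a worked illustration (all figures invented), the sketch below computes two of the quantitative indicators from the table above: per cent of clients satisfied and average cost per unit delivered.

```python
# A minimal sketch, using invented figures, of two quantitative indicators.

# Survey responses: True means the client reported being satisfied.
satisfaction_ratings = [True, True, False, True, True, False, True, True]
per_cent_satisfied = 100 * sum(satisfaction_ratings) / len(satisfaction_ratings)

total_program_cost = 250_000.00   # hypothetical annual delivery cost
units_delivered = 1_250           # e.g., clients served in the year
average_cost_per_unit = total_program_cost / units_delivered

print(f"Per cent of clients satisfied: {per_cent_satisfied:.1f}%")
print(f"Average cost per unit delivered: ${average_cost_per_unit:.2f}")
```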

 

Overview of Evaluation Designs

Evaluation design is the process of thinking about what you want to do and how you want to go about doing it. 

A good research design will:

  1. improve the reliability and consistency of your results;
  2. eliminate (or minimize) bias; and
  3. answer what you need to know.

 

The most practical approach to determining evaluation design is to consider your information needs (i.e., evaluation questions) and use them to guide your design. Key considerations for selecting an appropriate design are feasibility and practicality.

Evaluation designs are typically placed into the following three categories:

  • Experimental designs involve comparing clients with a control group to which participants are randomly assigned; these designs are rarely used in federal evaluations.
  • Quasi-experimental designs involve comparing clients with a comparison group, but without random assignment.
  • Implicit designs measure the effects of a program after it has been implemented; no control or comparison groups are used.

Implicit designs are the most frequently used evaluation design. In the public service context, an implicit design is often the only one that can be used, because pre-program measures do not exist, no obvious control group is available, or it is not reasonable to assign interventions on a random basis. This design type is flexible and practical to implement.

It should be noted, however, that there are considerably more challenges in attributing impacts to specific interventions as we move away from experimental designs (the strongest for attributing impacts) to implicit designs.

Checklist for Choosing an Evaluation Design

Considerations

Have you considered the following…?

1.  Information and decision-making needs

2.  Type of evaluation

3.  Practicality and costs

4.  Appropriate balance between information needs and costs

5.  Research concerns (i.e., related to the quality of evidence to be gathered)

6.  Other internal and external factors that may influence the program. How can the evaluation design minimize these factors?

7.  Targeted evaluation questions (i.e., those that take into account the most pressing evaluation concerns)

8.  Existing data, secondary data, and performance measurement information as potential sources of information for the evaluation

9.  Multiple lines of evidence to ensure the reliability of findings and conclusions

 

Overview of Data Collection Methods

 

Excerpt from the TB Evaluation Policy – REMEMBER...

Measurement and Analysis: Evaluation work must produce timely, pertinent and credible findings and conclusions that managers and other stakeholders can use with confidence, based on practical, cost-effective and objective data collection and analysis.

— TB Evaluation Policy, 2001

The table below provides an overview of various data collection methods available to evaluators. Note that these data collection methods involve either primary or secondary data. The investigator collects primary data directly. Secondary data have been collected and recorded by another person or organization, sometimes for different purposes.

In choosing appropriate data collection methods you can consider the following:

  • information and decision-making needs;
  • appropriate uses, pros and cons of the data collection methods;
  • costs and practicality of each method; and
  • balanced approach, a mix of quantitative and qualitative methods.

More information on choosing appropriate data collection methods is located in Appendix E.

 

Data Collection Method

When to Use

External Administrative Systems and Records: use of data collected by other institutions or agencies

  • Need information about context
  • Need historical information
  • When comparing program data to comparable data

Internal Administrative Data: data collected for management purposes

  • Need information on management practices, service delivery, clients' characteristics

Literature Review: review of past research and evaluation on a particular topic

  • To identify additional evaluation questions or issues and methodologies
  • Need information on conceptual and empirical background information
  • Need information on a specific issue
  • Need information about comparable programs, best practices

Interviews: a discussion covering a list of topics or specific questions, undertaken to gather information or views from an expert, stakeholder, and/or client; can be conducted face to face or by phone

  • Complex subject matter
  • Busy high-status respondents
  • Sensitive subject matter (in-person interviews)
  • Flexible, in-depth approach
  • Smaller populations

Focus groups: a group of people brought together to discuss a certain issue guided by a facilitator who notes the interaction and results of the discussion

  • Depth of understanding required
  • Weighted opinions
  • Testing ideas, products or services
  • Where there is a limited number of issues to cover
  • Where interaction of participants may stimulate richer responses (people consider their own views in the context of others)

Case studies: a way of collecting and organizing information on people, institutions, events, and beliefs pertaining to an individual situation

  • When detailed information about a program is required
  • To explore the consequences of a program
  • To add sensitivity to the context in which the program actions are taken
  • To identify relevant intervening variables

Questionnaire or Survey: a paper or electronic list of questions designed to collect information from respondents on their knowledge and perceptions of a program (See Appendix E.)

  • Useful for large target audiences
  • Can provide both qualitative and quantitative information

Expert panels: the considered opinion of a panel of knowledgeable outsiders

  • Experts can share lessons learned and best practices
  • Where outside validation is required
  • Where diversity of opinion is sought on complex issues
  • Where there is a need to draw on specialized knowledge and expertise

Comparative studies: a range of studies which collect comparative data (e.g., cohort studies, case-control studies, experimental studies)

  • For summative evaluations

 

Depth vs. Breadth

Some data collection methods provide more depth of information, while others provide more breadth.

  • Depth – understanding of impact of program on an individual person or case
  • Breadth – understanding of impact of program on large group of people, but in less detail

For example, case studies provide depth of information while surveys provide more breadth. Each type of information is important for an evaluation depending on the specific questions being asked, and the integration of the various methods. Many evaluators attempt to combine methods that will provide both depth and breadth to the findings.

 

4.3  Collecting the Information

4.3.1  Gathering Data

Data collection should follow the plans developed in the previous step. The individuals assigned to the various data collection tasks need to be thoroughly trained in the data collection requirements and procedures.

Appropriate quality control procedures should be implemented and maintained throughout the evaluation study. If you are managing an evaluation, you need to be aware of the progress of data collection and of any other issues of concern.


To facilitate analysis, those collecting information during an evaluation should

  • use appropriate methods to organize and record the information collected (e.g., frequency distributions, categories, tables); and

  • implement effective quality control procedures to ensure that recorded information is accurate and that original information is labelled and secure.

4.3.2  Analyzing Data

Once data are collected, they need to be analyzed and interpreted. Data analysis may take many forms from basic description to complex statistical analysis depending on the type of data and the complexity of the issues. For more detail as to how to analyze data, please refer to Appendix E.

Cause and Effect Inferences

The choice of analysis techniques is influenced by the evaluation questions and the evaluation design (i.e., experimental or implicit). For example, drawing inferences about causality is dependent upon the evaluation design rather than the analysis technique.

Generalizing the Findings

The only valid way of generalizing findings to an entire or target population (where you cannot survey or study everyone) is to use findings from a statistically representative random sample of the population you wish to study. Caution must therefore be exercised when analyzing data from non-randomized samples.

Qualitative and Quantitative Analysis

Analyzing qualitative data requires effective synthesis and interpretative skills. Qualitative information can be used, for example, to provide contextual information, explain how a program works, and to identify barriers to implementation. Qualitative data can be analyzed for patterns and themes that may be relevant to the evaluation questions. Qualitative material can be organized using categories and/or tables making it easier to find patterns, discrepancies, and themes.
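
For illustration only (the themes and coding below are invented), the sketch shows one simple way to tally coded interview themes so that patterns stand out.

```python
# A minimal sketch, assuming interview notes have already been coded against a
# small set of themes, of tallying those themes to reveal patterns.

from collections import Counter

# Each interview is listed with the hypothetical themes it touched on.
coded_interviews = [
    ["timeliness", "awareness of program"],
    ["timeliness", "barriers to access"],
    ["barriers to access"],
    ["timeliness", "awareness of program", "barriers to access"],
]

theme_counts = Counter(theme for interview in coded_interviews for theme in interview)

for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} of {len(coded_interviews)} interviews")
```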

Quantitative data analysis assigns numerical values to information. It can range from simple descriptive statistics (e.g., frequency, range, percentile, mean or average) to more complicated statistical analysis (e.g., t-test, analysis of variance). Computer software packages such as the Statistical Package for the Social Sciences (SPSS), Minitab, and Mystat can be used for more complicated analysis. Quantitative data analysis also requires interpretation skills. Quantitative findings should be considered within the context of the program.
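
A minimal sketch of such analysis is shown below: descriptive statistics with Python's standard library and, assuming SciPy is available, an independent-samples t-test comparing two groups of ratings. The data are invented for illustration.

```python
# A minimal sketch of simple quantitative analysis: descriptive statistics
# plus an independent-samples t-test (assumes SciPy is installed).

import statistics
from scipy import stats

group_a = [3.8, 4.1, 3.9, 4.4, 4.0, 3.7]  # e.g., ratings before a service change
group_b = [4.3, 4.6, 4.2, 4.8, 4.5, 4.4]  # e.g., ratings after the change

for name, data in (("Group A", group_a), ("Group B", group_b)):
    print(f"{name}: mean={statistics.mean(data):.2f}, "
          f"stdev={statistics.stdev(data):.2f}, "
          f"range={min(data)}-{max(data)}")

# Two-sided test of whether the group means differ.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```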


Checklist for Analyzing Data

Considerations

1.  Understand the problem before you analyze data (i.e., know what is being measured and why)

2.  Understand the program and how contextual factors link together

3.  Find out how the data were collected and how reliable they are

4.  Use your common sense; ask yourself if the analysis and interpretation seem appropriate

5.  Try to identify patterns, associations, and causal relationships

6.  Utilize statistical analyses when appropriate

7.  Are there any deviations in these patterns? Are there any factors that might explain these deviations?

8.  Compare findings to expected results (i.e., industry standards)

9.  Consider strengthening your analyses by combining evaluation data with risk data collected from periodic environmental scans

10.  The logic of each method of analysis should be made explicit (e.g., specify what constitutes reasonable evidence, identify underlying assumptions)

11.  Where possible, use several methods of analysis

12.  Use appropriate tests of significance whenever findings are generalized to the population from which samples were drawn*

13.  Use caution when generalizing evaluation results to other settings

*  Before generalizing findings, the evaluator needs to consider the probability that they are "accidental." With tests of significance, an evaluator can decide how strong the results must be in order to be reasonably confident that the results are not due to chance.
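
As a worked illustration of this footnote (figures invented), the sketch below computes a normal-approximation 95% confidence interval around a proportion estimated from a random sample before that finding is generalized to the population.

```python
# A minimal sketch, using invented figures, of a 95% confidence interval for a
# proportion estimated from a random sample (normal approximation).

import math

sample_size = 200          # hypothetical number of randomly sampled clients
satisfied = 150            # number reporting satisfaction
p = satisfied / sample_size

# Standard error of the proportion and the 95% margin of error (z = 1.96).
standard_error = math.sqrt(p * (1 - p) / sample_size)
margin = 1.96 * standard_error

print(f"Estimated proportion satisfied: {p:.2f}")
print(f"95% confidence interval: {p - margin:.2f} to {p + margin:.2f}")
```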


4.4  Writing the Evaluation Report

Excerpt from the TB Evaluation Policy – REMEMBER...

Evaluation reports must present the findings, conclusions and recommendations in a clear and objective manner.

— TB Evaluation Policy, 2001

A good evaluation report responds effectively to the evaluation questions. Recommendations and lessons should be conclusive, concise, and practical. The executive summary should summarize the overall report; because it is often the most widely read section, it should be detailed enough to give the reader a good sense of the highlights of the evaluation.

4.4.1  Table of Contents

An evaluation report typically contains the following sections:

  • Executive Summary;
  • Introduction and Background;
  • Scope and Objectives of Evaluation;
  • Approach and Methodology;
  • Findings;
  • Conclusions; and
  • Recommendations.

4.4.2  Linking Findings, Conclusions, and Recommendations

There needs to be a clear link between findings, analysis, conclusions and recommendations. Practically speaking, the findings may not answer specific evaluation questions conclusively. Conclusions are formulated by combining the best evidence. Gathering different types of evidence relating to the same evaluation question can enhance credibility. Recommendations should link to the analysis and the conclusions.

Checklist for Evaluation Report Writing

Considerations

1.  Have the audience(s) and required information needs been identified?

2.  Is the report clear and concise?

3.  Are the reasons for carrying out the evaluation logical and clear?

4.  Does the report identify evaluation issues in accordance with evaluation policy (i.e., relevance, success and cost-effectiveness)?

5.  Does the report start with the most important information? (Each chapter, subsection, or paragraph should begin with the key point.)

6.  Is the context adequately explained?

7.  Is there a description of the general approach used, main data sources, data collection methods?

8.  Does the report clearly articulate the limits of the evaluation in terms of scope, methods, and conclusions?

9.  Are the findings substantiated by the evidence, as described in the evaluation report?

10.  Do the findings provide a good understanding of what was learned from this evaluation?

11.  Is it clear how the subject program or project is really performing?

12.  Does the presentation of results facilitate informed decision making?

13.  Is only the information that is needed for a proper understanding of the findings, conclusions, and recommendations included?

14.  Do the conclusions address the evaluation questions and are they supported by the findings?

15.  Are recommendations realistic and doable? Is the number of recommendations limited based on significance and value?

16.  Does the report present the conclusions and recommendations so that they flow logically from evaluation findings?

17.  Is the report in accordance with external reporting requirements?

18.  Does the report provide an accurate assessment of the results that have been achieved?

19.  Does the report provide relevant analysis and explanation of the exposure to risks for any significant problems identified and in respect of key recommendations?

 

Key References

CIDA, Performance Review Branch. CIDA Evaluation Guide, 2004.

Canadian Evaluation Society. Evaluation Methods Sourcebook, 1991.

National Science Foundation. A User-friendly Guide to Mixed Method Evaluations, 1997.

Forest Research Extension Project. Conducting Program and Project Evaluations: A Primer for Natural Resource Program Managers in British Columbia, 2003.

Gray & Guppy. Successful Surveys: Research Methods and Practice, 2003.

McLaughlin and Jordan. Logic Models: A Tool for Telling Your Program's Performance Story, 1999.

Stufflebeam, D.L. Guidelines for Choosing and Applying Evaluation Checklists: A Checklist Organizer.

Treasury Board of Canada Secretariat. Evaluation Policy and Standards, 2001.

Treasury Board of Canada Secretariat. The Art and Architecture of Writing Evaluation Reports, 2004.

Treasury Board of Canada Secretariat. Guide for the Review of Evaluation Reports, January 2004.

Treasury Board of Canada Secretariat. Principles for the Evaluation of Programs by Federal Departments and Agencies, 1984.

Treasury Board of Canada Secretariat. Preparing and Using Results-based Management and Accountability Frameworks, April 2004.

Treasury Board of Canada Secretariat. Program Evaluation Methods: Measurement and Attribution of Program Results.

Treasury Board of Canada Secretariat. RBM E-Learning Tool.

 


