
Case Studies on the Uses and Drivers of Effective Evaluations in the Government of Canada


 

Final Report
August 15, 2005

Treasury Board Secretariat
Department of National Defence
Canadian International Development Agency
Correctional Service Canada
Indian and Northern Affairs Canada
Public Works and Government Services Canada

 

Table of Contents

1. Executive Summary

2. Introduction
2.1 Background
2.2 Approach/Methodology
2.3 Limitations of study
2.4 Key characteristics of the evaluations

3. Achievement of Evaluation Objectives
3.1 Achievement of objectives
3.2 Conclusion

4. The Usefulness of Evaluation Studies
4.1 The primary users of evaluations
4.2 Expenditure management/resource allocation decisions
4.3 Future policy directions
4.4 Program design & delivery improvements
4.5 Accountability and reporting
4.6 Additional impacts and benefits
4.7 Conclusion

5. Drivers of effective evaluations
5.1 Senior management support
5.2 Participatory relationship between Evaluation staff and Program staff
5.3 Quality of evaluation staff (external and internal)
5.4 Comprehensive performance data collection methodology
5.5 Independence/objectivity in evaluation findings & recommendations
5.6 Focused, well-balanced recommendations
5.7 Stakeholder buy-in/involvement
5.8 Conclusion

 

1. Executive Summary

This report presents a compendium of selected case studies aimed at demonstrating the key drivers and uses of evaluation studies in Government of Canada departments and agencies. It is one of a series of research studies that will form the foundation for future directions of the evaluation function within the federal government. Part of the debate about the future of the evaluation function revolves around how evaluations are currently used and how the function should be positioned to effectively serve diverse needs such as Expenditure Management decisions, the setting of policy directions, and the strengthening of accountability and reporting.

The focus of this report is to highlight best practices and drivers of effectiveness with regard to evaluation impacts. As such, the report does not present a statistically valid representation of the impacts of all federal government evaluations. A departmental survey on evaluation impacts based on randomly selected evaluations is currently being led by the Centre of Excellence for Evaluation (CEE) as a companion piece of work. It is expected that the results of the survey coupled with this study as well as others (see Section 2.1), will provide a holistic and objective assessment of the impacts of evaluation projects across the federal government.

A case study approach was used to document the effectiveness of sample evaluations in terms of impacts on departmental decision-making. Fifteen evaluations from across several federal departments and agencies have been documented as case studies. The majority of these evaluations:

  • Arose from Treasury Board requirements for renewal of program funding and/or terms and conditions 
  • Were undertaken by external consultants and completed in less than one year. The average cost of the evaluations involving external consultants was $181,000.
  • Resulted in formal action plans to address the results and recommendations.

All of the evaluations were deemed to be successful in achieving or meeting the objectives set out in their terms of reference. These included:

  • Assessing the relevance of the program to support continued operation;
  • Providing an objective assessment of the extent of achievement of program results; 
  • Supporting Treasury Board (TB) submissions and Memoranda to Cabinet;
  • Identifying areas of program improvement and/or alternative means of delivery; and
  • Providing an overall assessment of program cost-effectiveness.

The results of the 15 evaluation studies have been used by a variety of audiences including program managers, senior departmental managers, central agencies, Ministers and program stakeholders. A key usage has been to support expenditure and resource management decisions. In at least two cases, evaluation results have been used to support Expenditure Review Committee considerations. 

Evaluation results have been widely used by program management in the ongoing delivery of programs and operations to identify areas for program design, delivery and reporting improvements, support resource allocation decisions, and identify cost saving opportunities. They have also contributed to improved morale and motivation amongst staff responsible for program delivery, and to the transfer of knowledge from evaluation staff to program staff that helps them better manage the program.

The evaluations have also provided context for senior management and have informed decisions regarding future policy directions of programs and operations. They have highlighted key issues for consideration by senior management in determining the future policy directions of the programs evaluated. 

In the wider context of external stakeholders, the evaluations have served to increase awareness and understanding of the programs with clients, third party deliverers, and parliamentarians. In many cases, the evaluations have provided evidence for these stakeholders that the programs are achieving their intended results and that good value for money is being provided. One evaluation has increased Canada's profile on the international stage and at least three have become benchmarks for successful program delivery.

While each case study is unique, certain common themes were identified as key drivers contributing to the effectiveness of the evaluations. These included:

  • Senior management support of the process and evaluation results;
  • Participatory relationship between evaluation and program staff, including agreement on the evaluation terms of reference and objectives and open and rapid communication; 
  • Highly skilled and experienced evaluation staff (internal or external);
  • Comprehensive data collection methodology that includes multiple lines of evidence;
  • High level of independence/objectivity in the evaluation results;
  • Focused and well-balanced recommendations; and 
  • Stakeholder buy-in/involvement through participation in the evaluation governance mechanisms (e.g., Steering or Advisory Committee), consultation and sharing of results in a timely manner.

The value-added of these evaluations has served to reinforce the utility of the evaluation function and demonstrated how well-planned and executed evaluations can go beyond the terms of reference to create positive impacts across a broad range of areas that ultimately benefit not only the program that was evaluated but the department responsible for its delivery, the federal government and the Canadian public.

 

2. Introduction

2.1 Background
The objective of this study is to present examples of the impacts that Government of Canada evaluations are having on departmental decision-making and to identify the driving factors that contribute to the effectiveness of evaluations. It is one of a series of research studies aimed at laying the foundation for future direction of the evaluation function within the federal government. Other related studies include:

  • Impacts of RMAF 
  • Deputy Minister Views on Evaluation (Breen Report)
  • Evaluation of the Centre of Excellence for Evaluation (Sears Report)
  • Quality of evaluation studies
  • Health of the Evaluation Function
  • Improving the Professionalism of Evaluation (Gussman Report)

Part of the debate about the future of the function revolves around how evaluations are currently used and how the function should be positioned to effectively serve diverse needs such as Expenditure Management decisions, the setting of policy directions, effective program management and the strengthening of accountability and reporting.

A Steering Committee comprised of evaluation representatives from Treasury Board Secretariat's Centre of Excellence for Evaluation (CEE), National Defence, the Canadian International Development Agency (CIDA), Public Works and Government Services (PWGSC), Indian and Northern Affairs (INAC) and Correctional Service Canada (CSC) oversaw this project.

2.2 Approach/Methodology
A case study approach was used to document the impacts that sample evaluations have had on departmental decision-making. Fifteen evaluations from across several federal departments and agencies have been documented as case studies.

Evaluation reports included in this study:

  • HRSDC – Employment Benefits and Support Measures (1999-2002)
  • INAC – Community Economic Development Program (2003)
  • SDC – National Child Benefit Initiative (2005)
  • CIC – Canada War Crimes Program (2001)
  • National Defence – Vanguard/MCF Readiness and Sustainment (2004)
  • PWGSC – Alternative Forms of Delivery Initiative (2003)
  • AAFC – Rural Water Development Program (2003)
  • ACOA – Atlantic Innovation Fund (2004)
  • IC – SchoolNet Program (2004)
  • NRCan – New Housing Component of the Energy Efficient Housing Initiative (2005)
  • Transport Canada – Subsidized Ferry Services in Atlantic Canada (2003)
  • DFO – Experimental Lakes Area (2004)
  • CCOHS – Assessing the Canadian Centre for Occupational Health and Safety (2001)
  • NCC – Corporate Web Technology Strategy (2003)
  • SSHRC – Canada Research Chairs Program (2004)

In April, 2005, the CEE asked the Heads of Evaluation of all federal departments and agencies to nominate recent evaluations that they judged to have informed deputy or departmental decision-making and to have had demonstrable impacts. Fifteen evaluations were selected based on nominations received as well as the inventory of evaluations maintained by the CEE. The fifteen evaluations represent a mix of large departments and small agencies, and a range of portfolios. 

For the majority of the case studies, interviews were conducted with the evaluation manager, program manager(s) and a senior manager (e.g. ADM/Vice President, where feasible). In some instances the program manager(s) or senior manager were not available for an interview. However, in all cases, at least two interviews were conducted consisting of the evaluation manager and either the program manager(s) or senior manager. Results of these interviews were summarized in 4-5 page case studies. Interviewees were given the opportunity to validate the case study reports. 

The case studies are presented as appendices to this report. The remainder of this report presents a summary of the findings of the case studies.

2.3 Limitations of study
A conscious decision was made at the outset of the study to focus on best practices and drivers of effective evaluations. As a result, this study is not intended to present a statistically valid representation of the impacts of all federal government evaluations; rather it provides a sample of evaluations that have been perceived by program management to have been effective, and documents these evaluations using a case study approach. 

Other companion studies will incorporate statistically representative approaches to gauge the impacts of evaluations, such as the departmental survey on evaluation impacts related to randomly selected evaluations. It is expected that these and the remaining studies as listed in Section 2.1, will provide a comprehensive and objective assessment of the impacts of evaluation studies across the federal government.

2.4 Key characteristics of the evaluations
The administrative characteristics of the 15 evaluations are summarized below, to provide context for the remaining sections of this report.

  • Rationale for conducting the evaluations: The majority of the evaluations were conducted because of Treasury Board requirements for renewal of program funding and/or terms and conditions (seven summative evaluations, two formative evaluations). The remaining six evaluations were initiated at the request of program or senior management to address particular concerns with specific priority areas of the organization, such as the Web Technology Strategy at the NCC, CF troop readiness and sustainment within the DND/CF, and the Experimental Lakes Area of DFO. These evaluations were identified and included in the annual evaluation plan.
  • Use of external contractors: All but two of the evaluations involved consultants hired to conduct the studies. External resources were used either because of limited in-house resources to undertake the work, or a desire to use experts with specialized skills in the particular program/service area under evaluation. The two evaluations conducted in-house (DND and Transport Canada) were done internally because the specialized expertise existed within the departments' evaluation units. (At Transport Canada, only the survey portion of the evaluation was contracted out.)
  • Length of evaluation studies: Most of the evaluations (nine) were completed in less than a year, from planning to completion and approval of the final report. The remaining evaluations took 1½ to 3 years, due to a lengthy data collection phase (ranging from 10 to 13 months for the evaluations at INAC, TC and SSHRC) and/or a lengthy report review and approval phase (16 and 17 months for DND and TC). The evaluation at SDC took three years to complete due to the complex nature of the methodology and the involvement of 12 jurisdictions; however, interim results were available within 18 months.
  • Cost of studies: Costs to contract with external consultants varied widely depending on the complexity, scope and duration of the evaluation. The least costly evaluation, for the NCC Web Technology Strategy ($47,000), was completed in six months. The most expensive evaluation was the SDC evaluation of the National Child Benefit (NCB) Initiative ($600,000). The average cost for evaluations using external consultants was approximately $181,000 (excludes DFO).
  • Range of governance models: A wide variety of models were used across departments/agencies to provide oversight to the evaluations. In seven departments, no formal governance structure was used. There was, however, a rigorous process used in all these departments to review the evaluation terms of reference, data collection tools, and draft reports, typically involving senior management and/or program management. Four departments (ACOA, IC, SSHRC, and NCC) used formal Steering Committees to govern the evaluation process, comprised of senior departmental management, program management, and, for two departments, representatives from other government departments and key stakeholders. Two departments (SDC and HRSDC) used inter-jurisdictional working groups/management committees to oversee the evaluation process. Two departments made use of Advisory Committees comprised of both internal and external stakeholder representatives (NRCan and INAC). Both these departments stated that Advisory Committees are preferred over Steering Committees to reduce the risk of influencing evaluation results. In all cases, the final evaluation report was approved by the senior departmental evaluation committee (either a formal audit and evaluation committee or the senior management committee).
  • Implementation process: Most of the evaluations included some form of management response within the final evaluation report. All but two evaluations developed formal action plans to address the evaluation recommendations, with timeframes and responsibility centres. Most of the action plans are now being monitored on a regular basis (usually quarterly) by the evaluation unit. For the majority of the case studies, the evaluation recommendations have either been fully implemented or are in the process of being implemented. In a few instances (AAFC and TC), some of the recommendations were not implemented, or were only partially implemented, as a result of senior management decisions. In the case of DND, the decision was made to hold off on implementing the two principal recommendations in order to undertake further study.

 

3. Achievement of Evaluation Objectives

This section summarizes the extent to which the 15 evaluation studies achieved their intended objectives and supported departmental decision making.

3.1 Achievement of objectives
All of the departments reported that the respective evaluations were successful in achieving or meeting the objectives set out in the evaluation terms of reference. These include:

  • Assessing the relevance of the program to support continued operation. In almost all evaluation studies, a key finding dealt with whether or not there was sufficient rationale to support the continuation of the program, operation or initiative. This was of extreme value to program and senior management for making decisions regarding future operations. In the case of the CCOHS, where the survival of the organization hinged on the evaluation's results, the evaluation had a paramount impact. In many cases, the evaluations concluded that there was strong and compelling rationale to continue the program - albeit typically with some updates to address emerging issues and challenges. Notwithstanding, evaluation results were not positive in all instances. In a few cases, the evaluations concluded that some programs or portions of programs were no longer relevant, or the program could no longer continue as status quo if the department wanted to achieve its intended objectives, such as with NRCan and Transport Canada. This provided valuable empirical evidence to program managers in order to make significant, and possibly controversial, decisions.
  • Providing objective assessment of the extent to which program results are being achieved. Many of the evaluations have provided good evidence of the program results being achieved, and the areas where results are not yet evident. For program managers, this has helped either to confirm that the program is on the right track, or to identify areas where more focus needs to be placed. In some cases, the evaluations have highlighted the critical need for better collection and monitoring of performance information so that an informed decision about program results can be made on a regular basis. For the two formative evaluations (ACOA and HRSDC), where the focus was on implementation and management issues, the evaluations were able to provide early program results. This was particularly helpful to the program managers of the Atlantic Innovation Fund. The positive results coming out of the formative evaluation supported a decision by Treasury Board to extend the requirement for a summative evaluation beyond the originally planned five-year period.
  • Supporting TB submissions and Memoranda to Cabinet. Many of the evaluations served the basic function of meeting Treasury Board requirements (e.g. as set out in the Policy on Transfer Payments for G&C programs), for an evaluation of the program in order to support decisions for renewal of the programs and/or changes to objectives. This was a core requirement of the programs that was met by the evaluations.
  • Identifying areas of program improvement and/or alternative delivery means. All the evaluations have served to identify key areas within the programs, operations or initiatives that needed to be improved or changed so that the programs can meet their intended objectives. Many program managers cited that this had a strong impact on the way they view their programs – it was helpful to present a realistic and objective assessment of the program and allow managers to "take a step back" and look at the program/initiative in a new light.
  • Providing overall assessment of the cost-effectiveness of the program. Only six of the evaluations had cost-effectiveness as an objective: those from CCOHS, DFO, PWGSC, INAC, TC and SDC. In these cases, the evaluations provided a good analysis of whether or not the program, operation or initiative was being managed cost-effectively, and provided recommendations for how this could be improved. In the CCOHS, DFO and PWGSC evaluations in particular, assessing the overall cost-effectiveness of the program was a basic rationale for conducting the study. At PWGSC, the evaluation confirmed that the Alternative Forms of Delivery (AFD) initiative was achieving significant financial savings, and that further savings could be realized by expanding the scope of the initiative to include additional buildings. At DFO, the evaluation of the Experimental Lakes Area examined the operational costs associated with the facility and identified options with respect to delivery of the facility and collection of user fees that could improve operational efficiencies.

3.2 Conclusion
All 15 evaluation reports reviewed achieved the objectives stated in their terms of reference and provided substantive evidence regarding program relevance, results achieved, cost effectiveness as well as areas where changes were required to program design and/or delivery to better meet objectives and achieve desired outcomes.

 

4. The Usefulness of Evaluation Studies

Explicit recommendations were included in 12 of the 15 evaluations reviewed. This section describes how the studies' results and recommendations were used by the recipients of the evaluation reports to influence program decision-making.

4.1 The primary users of evaluations

The primary users of the evaluations, and the key uses they made of the results and recommendations, are summarized below.

Program managers
  • Implement program design and improvement activities
  • Increase understanding of the program
  • Support ongoing program management
  • Support TB submissions/Memoranda to Cabinet
Senior departmental management (ADMs, Vice-Presidents, DM)
  • Increase understanding of the program
  • Support policy decisions
  • Support visioning and strategic decision-making
  • Improve resource alignment
  • Compare program/operation results against objectives
Ministers and parliamentarians
  • Memoranda to Cabinet have included evaluation results to support decision-making on the future of programs
  • Validate the continued relevance of the program/operation/initiative
Central agencies
  • Basis for decisions on renewal of program terms and conditions as well as funding
Stakeholders
  • Increase awareness of the program
  • Increase buy-in/cooperation
  • Address areas of concern with program design or delivery
Third party deliverers, including provincial and territorial governments
  • Improved decision-making processes
  • Enhanced program delivery
  • Fulfilment of accountability requirements

4.2 Expenditure management/resource allocation decisions
The evaluations have been very useful to departments to support expenditure and resource management decisions such as ongoing program funding, Expenditure Review, allocation of resources, and savings opportunities. 

Program funding renewal: For several of the programs, including the Atlantic Innovation Fund (ACOA), the War Crimes Program (CIC), the Canada Research Chairs Program (SSHRC), the Community Economic Development Program (INAC), the National Child Benefit Initiative (SDC) and the Subsidized Ferry Program (TC), the evaluation was required to support requests to Central Agencies for the renewal of the terms and conditions of the programs. These evaluations provided the validation that the programs were sufficiently meeting their objectives to warrant renewal for an additional term. 

Resource allocation decisions: Departmental and program management have used evaluation results to inform strategic decision making regarding allocation of resources within the programs or operations. For example, at DND one recommendation brought to the attention of senior management the need to review resource allocation decisions with respect to the use of O&M funding. The evaluation of the RWDP led AAFC to a better alignment of program resources with real needs of clients. At Industry Canada, the evaluation of the SchoolNet program was one of the instruments used in the reallocation of resources when the program budget was unexpectedly reduced by 40%. 

Cost savings: In some departments, improvements being made to programs and operations on the basis of the evaluation recommendations will result in cost savings for the departments. For example, Transport Canada is reviewing the assumptions upon which ferry subsidies are calculated to improve cost-effectiveness. At NCC, the Web Strategy evaluation led to decisions not to invest in specific areas of improvement, because the demand for these services was not there. At INAC, the newly redesigned Community Economic Development Program incorporates cost-effective design changes such as encouraging tiered or graduated program funding to take advantage of economies of scale. At DFO, work is being done to establish new user per diems for the ELA that will increase the overall fees collected to better cover actual facility costs. Treasury Board used the Alternative Forms of Delivery evaluation results to develop the rationale for proceeding with the re-procurement of the AFD, and to increase the scope of the re-procurement by including additional buildings.

Expenditure Review: In two departments (DFO and TC) departmental senior management included proposed program/operational changes for Expenditure Review Committee considerations. The evaluation of the subsidized ferry services in Atlantic Canada concluded that one of the subsidy programs no longer reflected Transport Canada's policy objectives and should be cancelled, with potential savings of approximately $5M. The DFO Experimental Lakes Area evaluation concluded that the user fees charged by the facility were insufficient to cover operating costs, and an alternative delivery arrangement that charged higher user fees could relieve the department of this funding pressure.

4.3 Future policy directions
In over half the case study evaluations, findings supported the need to review policy or strategic direction for the program, operation or initiative. Results, recommendations and options for improvements have been very helpful to senior management to inform strategic decision-making in this area. 

For example, at DND the evaluation results, along with other key departmental documents and consultations, have helped to shape the new Defence Policy for Canada. At AAFC, the evaluation informed the development of the AAFC Water Strategy that will feed into a Federal Water Strategy being developed by Environment Canada. At Transport Canada, the evaluation highlighted the possible need to re-examine the department's marine policy to ensure that it is in line with overall government priorities. At the NCC, the evaluation led to the development of a new vision for the future and identified specific areas of focus. Within NRCan, program management plans to re-examine the future strategic direction for the New Housing component of the EEHI, so that it will be better positioned to achieve its climate change targets by 2010. At INAC, senior management is reviewing the longer-term role of INAC in economic development activities, to determine if it should move away from traditional programming activities to focus more on a policy and support role. Within SSHRC, the Steering Committee has authorized the program secretariat to consult with central agencies to explore the feasibility of changing one program objective's wording to better reflect the policy intent of the objective. At Industry Canada, program management has undertaken a broad issues scan as a result of evaluation recommendations; results of this scan will inform future policy directions. At the CCOHS, the evaluation results provided the basis for Treasury Board to increase the Agency's funding, resulting in a shift from managing emergency funding pressures to strategic planning aimed at improving operations.

4.4 Program design & delivery improvements
The evaluation results have been used widely by program and senior management to identify areas of the programs that were not working as effectively as possible, or not meeting objectives. To a large extent, departments are implementing the recommendations made with respect to the design and delivery of these programs. 

The evaluations have identified the need for significant program re-design within some departments. At INAC, design improvement recommendations for the CEDP have been further developed and implemented in a new re-designed program that was launched on April 1, 2005. At NRCan, program management will be undertaking a complete review of the strategic direction of the program; in the meantime management has begun to make improvements to the design of the R-2000 program to reduce overlaps with other energy efficiency programs and improve the consumer-related aspect of the program. 

The increased funding to the CCOHS that resulted from the positive evaluation has allowed the agency to put more focus on projects that expand the reach of services by strengthening product content and delivery; these are expected to ultimately result in improvements in occupational safety and health in Canada. At DND, work is underway across the department to implement practical measures for improving the policy and infrastructure required to support troop readiness and sustainment. Components of the Canada Research Chairs Program (SSHRC), the War Crimes Program (CIC), the Web Technology Strategy (NCC), the SchoolNet Program (IC) and the National Child Benefit (SDC) have either been or are currently being re-engineered or re-designed by program staff.

In other departments, the evaluations highlighted that program design was working well, but improvements could be made to program delivery. Improvements are being implemented within HRSDC, where the evaluations of the EBSM/LMDAs across provincial jurisdictions identified the need for improvements to the partnership relationships with the provinces/territories and access to programs and services for marginalized groups of Canadians. At Transport Canada, program management is working to implement technological environmental initiatives onboard ferries to reduce fuel consumption and GHG emissions. At ACOA, improvements are being made to the program review and approval process and to stakeholder communications.

4.5 Accountability and reporting
Several of the departments are using evaluation recommendations to strengthen accountability and reporting regimes. For example, the Community Economic Development Program's (INAC) reporting and accountability system has been overhauled based on the evaluation recommendations to ensure the program, and its delivery agents, are able to account for the funds allocated. Roles and responsibilities between the department and the delivery agents have been clarified, and improvements have been made to increase the consistency of reporting by delivery agents and the transparency of results. At the NCC, the evaluation was instrumental in clarifying the roles and responsibilities between two specific branches that were accountable for the NCC web sites. At DFO, improvements have been made to the reporting structure within the ELA to improve the effectiveness of the facility management structure. At PWGSC, the evaluation resulted in a new requirement for re-procurement that contractors provide greater detail in their financial reports. For the National Child Benefit initiative (SDC), the evaluation provided insight into the complex accountability arrangement with the provinces and territories, allowing for a better understanding by all parties.

Several programs, including the NWSEP (AAFC), the Canada Research Chairs Program (SSHRC), the Atlantic Innovation Fund (ACOA) and the Community Economic Development Program (INAC), are developing performance management strategies and improving data collection approaches. These are expected to contribute significantly to demonstrating the results achieved from the programs. For the EBSM/LMDAs at HRSDC, data integrity and reporting of management information has been an ongoing challenge because data must be exchanged between (often incompatible) data capture systems of the provinces/territories and the department. HRSDC and the provinces/territories are working together to manage and resolve these issues.

4.6 Additional impacts and benefits
The evaluations often had impacts on departments beyond the original objectives as set out in the evaluation terms of reference. These varied considerably by evaluation, and include:

  • Raising awareness of the program, both internally and externally. Several evaluations have helped to raise the evaluated program's profile within the department and with external stakeholders. For some programs, such as the AAFC Rural Water Development Program and the INAC Community Economic Development Program, the positive evaluation results helped to dispel misconceptions about the program and to promote buy-in of the program within the department. Other evaluations, such as the SSHRC Canada Research Chairs Program evaluation and the CIC evaluation of the Canada War Crimes Program, have been widely circulated among stakeholders, which has helped to market these programs and demonstrate their positive achievements.
  • Providing a benchmark for successful program delivery. The results of several evaluations have been used as "best practices" across jurisdictions. For example, IC's SchoolNet recommendations are viewed as best practices with applicability across a range of programs. The evaluation of the War Crimes Program (CIC) has increased Canada's profile on the international stage, and other countries are looking to Canada as a leader in these areas. The PWGSC evaluation of the Alternative Forms of Delivery initiative provided a degree of assurance with regard to outsourcing of government services at a time when there was little known success in these types of initiatives.
  • Improving relationships with key stakeholders: Some evaluations had a significant impact on the relationship between the departmental program areas and their key stakeholders. For example, INAC's participatory approach of using a 17-member Advisory Committee comprised primarily of Aboriginal representatives resulted in valuable synergies with these stakeholders and increased their level of trust in the department. The mere fact that the NRCan evaluation was commissioned and undertaken was a very positive signal to the program's key stakeholder that the department was serious about solving program issues. For the CCOHS evaluation, the positive results that led to increased TB funding to the Agency helped to renew engagement in the organization by key stakeholders.
  • Providing information and feedback on stakeholder needs: The comprehensive data collection exercises in many of the evaluations allowed program management, particularly within the programs and operations at INAC, CCOHS, and the NCC, to collect a wealth of client and stakeholder feedback. This has helped the departments to gather objective and empirical evidence on stakeholder preferences and focus on providing services and programs that clients want and need, rather than what the program management perceive clients to want and need.
  • Affirming the integrity of the role of public servants: The Transport Canada evaluation produced a recommendation that was factually sound but politically sensitive. Notwithstanding, the Ferry Services program staff brought the recommendation forward for consideration by senior management. This exercise affirmed for program management that their role was to present an unbiased assessment of whether or not the program was meeting its objectives, regardless of the controversy that this recommendation might cause in the public domain.
  • Demonstrating the value-added of the evaluation function. Having participated in the evaluation process, many program staff expressed their increased appreciation for the value that an independent, objective assessment can bring to their program. At Transport Canada, Ferry Services program staff now have an increased focus on due diligence, and there is more attention being paid to developing monitoring and data collection tools. At ACOA, there is increased recognition that having evaluation involvement in certain types of projects can be beneficial. At SDC, the evaluation raised important issues for further consideration by program staff and provided information to inform and advance the policy agenda on child poverty.
  • Confirming the feasibility of joint federal/provincial-territorial evaluations. The SDC and HRSDC evaluations are concrete examples demonstrating that joint, federal/provincial-territorial evaluations are feasible and acceptable to different orders of government. They provide a benchmark for a successful approach for future large evaluations involving multiple jurisdictions.
  • Providing a means for knowledge transfer. Some evaluation processes have supported the transfer of knowledge from evaluation staff and/or consultants to program staff. For example, at ACOA, the AIF program staff has taken more ownership of the program RMAF, and has worked on modifying it so that it is better tailored to the program. At CCOHS, based on their active participation with consultants engaged on the evaluation, managers now have the basis and methodology to undertake their own surveys of clients, and have begun to do so for information and data gathering purposes.
  • Improving morale and motivation amongst staff. Some evaluations have had a powerful motivational impact on staff and managers, by confirming the success and utility of the program, operation or initiative under evaluation. For example, the IC SchoolNet Program evaluation provided positive recognition to Program staff and resulted in elevated confidence levels. Staff at the CCOHS felt that the positive evaluation of the Centre was a validation that their work was useful and effective in improving Canadian workplace safety. After several years of funding cutbacks, the confirmation that increased program funding was required also provided a huge morale boost.
  • Enhancing internal working relationships. In some departments, the participative nature of the evaluations, involving oversight committees and/or program staff working together with evaluation staff, promoted cohesiveness among previously disparate Branches and groups.

4.7 Conclusion
Overall, the 15 evaluation studies have proven to be very useful to a wide range of users, including program management, senior departmental management, central agencies, Ministers and program stakeholders. 

A key usage of the evaluation reports has been to support expenditure and resource management decisions at many levels of departments and agencies. For program managers, the evaluations have supported ongoing program resource allocation and cost-savings opportunities; for senior management, the evaluations have been useful for resource alignment and program renewal decisions; and for Central agencies, Ministers & parliamentarians, evaluations have been critical to provide evidence to support ongoing program funding decisions. In at least two cases, evaluation results have been used to support Expenditure Review Committee considerations. 

Evaluation results have been widely used by program management in the ongoing delivery of programs and operations, to identify areas for program design, delivery and reporting improvements. The evaluations have proven to be constructive tools for program managers, giving them the independent, fact-based evidence they require to support changes to the program or operation. They have also contributed to improved morale and motivation amongst staff responsible for program delivery, and to the transfer of knowledge from evaluation staff to program staff that helps them better manage the program.

The evaluations have also provided context for senior management of departments and agencies and informed decisions regarding future policy directions of programs and operations. They have highlighted key issues for consideration by senior management in determining the future policy directions of the programs evaluated. 

In the wider context of external stakeholders, the evaluations have served to increase awareness and understanding of the programs with clients, third party deliverers, and parliamentarians. In many cases, the evaluations have provided evidence for these stakeholders that the programs are achieving their intended results and that good value for money is being provided. At least two evaluations have increased Canada's profile on the international stage and become benchmarks for successful program delivery. 

Lastly, the value-added of the evaluations confirmed the utility of the evaluation function and demonstrated how well-planned and executed evaluations can go beyond the terms of reference to create positive impacts across a broad range of areas that ultimately benefit not only the program that was evaluated but the department responsible for its delivery, the federal government and the Canadian Public.

 

5. Drivers of effective evaluations

The following is a summary of the 'best practices' gathered from the 15 evaluation studies. These are felt, by both the evaluation staff and the program staff, to have contributed significantly to making the evaluations useful and worthwhile.

5.1 Senior management support
Senior management support of the process and the evaluation results is extremely important. This can help in areas where processes are being stalled; relationships with clients and stakeholders are tenuous and require senior management involvement; disagreements exist on evaluation objectives, results or recommendations; or support is required to approve contentious recommendations.

5.2 Participatory relationship between Evaluation staff and Program staff
Evaluations where program staff were actively involved in the evaluation process contributed not only to a process that was focused, smooth and problem-free, but to producing results that were relevant, timely and defensible. Buy-in from program areas is critical to increasing the likelihood that results and recommendations will be accepted and ultimately implemented. Specific best practices include:

  • Program participation: Involvement of the Program management in the planning of the evaluation, including providing input to the evaluation Terms of Reference, interview lists, and data collection instruments. Involvement could be through membership on the evaluation governance body (e.g. Steering Committee), or frequent interaction and communication with the evaluation unit.
  • Mutually agreed-upon terms of reference and evaluation objectives: Mutual agreement on the objectives of the evaluation, including the measures of success, between the evaluation unit and the program staff will lessen the risk of the evaluation going off track and ensure that there are no last minute surprises. For example, using very specific evaluation terms of reference and meeting to discuss and document evaluation objectives and expectations have helped to ensure that all parties are working toward the same goal.
  • Open & rapid communication throughout process: Examples of methods that have been used include making regular presentations to program areas, steering committees and client groups; maintaining an open process throughout the evaluation; and engaging in internal consultations to ensure that the evaluation was addressing managers' concerns.
  • Engagement of program managers in the presentation of management response: Effective evaluation processes have included the program managers in developing and presenting the management response and action plan to the departmental senior management committee approving the evaluation. This provides an opportunity for program managers to be part of the process and promotes ownership of the action plan. It also ensures the development of a timely response to the evaluation by program management.

5.3 Quality of evaluation staff (external and internal)
The professionalism and experience of evaluation staff, both internal departmental staff and any external consultants used, are critical to the success and usefulness of evaluation studies. Specific skills and abilities mentioned as important for the evaluator (either the internal departmental evaluation unit or external consultants) include:

  • Deep knowledge and understanding of the evaluation function;
  • Good understanding and contextual knowledge of the program/organization, the pressures being faced, and the external environment within which the program/organization operates;
  • Past experience conducting similar evaluations of similar programs;
  • Good understanding of the objectives of the evaluation;
  • Relevant evaluation qualifications, particularly in developing evaluation frameworks and data collection methodologies, and strong project management skills;
  • Appropriate technical skills (e.g. particularly important for the conduct of surveys); and
  • A careful, rigorous contracting process, when external consultants were used.

5.4 Comprehensive performance data collection methodology
All the evaluations used multiple lines of evidence to gather information about the program/service being evaluated. This was felt to be extremely important to allow for a robust and objective assessment of the program. At a minimum, all evaluations gathered data through document review and interviews. Additional approaches used by many evaluations included case studies, reviews of program databases and focus groups. Specific best practices mentioned include:

  • Broad representation of interviewees: A data gathering process (through interviews, surveys or focus groups) that includes a wide and diverse range of key clients and stakeholders is required to obtain comprehensive feedback on the program's strengths and weaknesses. Interviewees/those surveyed should include good representation from both supporters and detractors of the program, as well as those impartial in nature. This would also include those who have used the program/service, those who have tried to use the program and been denied access, and those who have not tried to use the program. Evaluations have typically sought input and feedback from senior departmental management, program management, and other external stakeholders to develop interview lists.
  • Data integrity: Having solid, reliable data available with which to evaluate the program's results and outcomes is critical to substantiate evidence. An ongoing data collection strategy that addresses data integrity, completeness and ownership is important to maintain for future reviews of the program.
  • Use of peer review process: Some evaluations have made use of a peer review oversight process, either using internal evaluation peers or external technical specialists, to challenge and modify the evaluation approach and methodology, to review specific results for appropriateness, to defuse divisive situations and to promote consensus amongst governing bodies.

5.5 Independence/objectivity in evaluation findings & recommendations
Almost all evaluations were felt to be extremely objective and to have presented evaluation findings and recommendations that were fact-based and unbiased. The independence of the evaluations contributed to the credibility of the studies as well as the acceptance of the recommendations. Best practices that are felt to have contributed to the level of independence and objectivity of evaluations include:

  • Evaluation led by the evaluation unit, removed from program management. 
  • Use of external, impartial consultants contracted through a competitive bid process with rigorous proposal evaluation criteria.
  • An oversight committee (Steering Committee, Management Committee, Advisory Committee, etc.) with broad internal and external representation. 
  • Comprehensive data collection methodology involving multiple lines of evidence.
  • Presentation of fact-based results and recommendations that, in some cases, may not be complimentary to the program, or could be controversial or politically sensitive. 
  • Final approval of the evaluation by a departmental senior management committee or Audit and Evaluation Committee.

Where, despite a robust data collection approach, there may be concern about bias in survey or interview results (e.g., where a wide range of stakeholders have been surveyed but all belong to the same industry, union or association, and there is a risk of influence by the association or union), the evaluation report should document this risk and take steps to include other data collection measures that attempt to address any bias.

5.6 Focused, well-balanced recommendations
Evaluations that have contributed effectively to management decision-making have generally had clear, useful recommendations. Program areas cite that evaluations with practical, non-prescriptive recommendations that recognize the overall objectives of the program are the most acceptable and amenable to implementation. One particular evaluation unit specified that there was a purposeful decision made to present a limited number of focused recommendations that were directly linked to the study's conclusions, rather than a laundry list of improvements required. Evaluation reports that also presented options for implementation have been very well received by the program areas.

5.7 Stakeholder buy-in/involvement
The involvement of key stakeholders in evaluations, including clients, central agencies, service delivery agents, academia and provincial counterparts, has been instrumental in building engagement and buy-in from these groups. Some specific best practices include:

  • Involvement of stakeholders in the governing Advisory or Steering committee to participate in evaluation planning and monitoring process, as well as in the future decision-making of the program.
  • Stakeholder consultation during data collection phase to support well-rounded feedback as well as to build their engagement and interest in the results of the study.
  • Making the evaluation results available to stakeholders as early as possible, so that potentially controversial issues can be identified and worked through with supportable, fact-based findings.

5.8 Conclusion
The best practices identified in this study attest to the fact that effective evaluations require a combination of strong senior management support, highly qualified evaluation staff, robust methodologies, independence and objectivity coupled with balanced reporting, and active engagement of key internal and external stakeholders. Taken together, these drivers provide a robust framework for ensuring that evaluation results are relevant and credible and, perhaps most importantly, that they can effectively inform the decision-making process.
