
Interim Evaluation of the Treasury Board Evaluation Policy

Archived information

Archived information is provided for reference, research or recordkeeping purposes. It is not subject to the Government of Canada Web Standards and has not been altered or updated since it was archived. Please contact us to request a format other than those available.

Evaluation Report

Centre of Excellence for Evaluation
Treasury Board of Canada Secretariat

January 2003


Executive Summary

The purpose of the interim evaluation is to report on the progress made to date in implementing the new Evaluation Policy (the Policy). Concurrent with Policy approval, Treasury Board funded the first two years of a four-year program with funding for years three and four being contingent on the results of an interim evaluation. Timing of this interim evaluation falls in the early stages of the implementation process, with the field work occurring between 12 and 18 months after implementation. By most standards, this interim evaluation has occurred relatively early in the life of the revised Evaluation Policy. As a result, the evaluation necessarily focuses on issues of implementation rather than policy impacts.

The evaluation study was conducted by external consultants and was based on methods outlined in the Results-based Management and Accountability Framework (RMAF) for the Policy (i.e., survey of departments, case studies, and interviews with representatives from departments, external stakeholder organizations and TBS program sectors). The RMAF was developed jointly by the Centre of Excellence for Evaluation (CEE), the office within the Treasury Board Secretariat established to assist with policy implementation, and the broader evaluation community in the federal government, as represented by a working group of Heads of Evaluation.

The primary evaluation issues for the interim evaluation included:

  • Issue #1: How much progress has been made in implementation?
  • Issue #2: How much is left to do to achieve full implementation?
  • Issue #3: What are the specific challenges in implementing the Policy in the short- and long-term?
  • Issue #4: What is the most appropriate role(s) of the CEE in implementing the Policy?

Findings for Progress and Gaps in Implementation (Issues #1 & #2)

The results of the evaluation first indicate that the status of Policy implementation varies significantly between departments and agencies. Many of the smaller agencies report having a very limited or non-existent evaluation function. Among the medium and large departments, some are just starting to implement the new Policy, while others report being well beyond the half-way mark in implementation at present. As can be expected, greatest progress has been accomplished in departments that had a more mature function prior to the introduction of the revised Policy.

Evaluation findings further indicate that:

  • 39 (or 93%) out of the 42 departments/agencies with an evaluation function have an active Audit and Evaluation Committee, of which 33 (85%) are chaired by Deputy Heads or an equivalent.
  • Half of the departments and agencies that responded to the survey have an evaluation plan approved for 2002-03 (23% increase over 2001-02).
  • Over the last two years, 27% of all evaluation studies (involving 54% of large departments/agencies) have moved beyond the traditional program areas to cover broader government policies and initiatives.
  • Productivity in terms of RMAFs and evaluation studies has increased significantly within the past two years.
  • Performance measurement studies have been increasing. Because of RMAFs, departments anticipate a higher level of integration of performance measurement and evaluation than occurred previously.

Prior to the establishment of the Policy, there was an acknowledged capacity gap in the evaluation functions of many departments and agencies. The interim evaluation shows that departments have been investing in the evaluation function and that, while the number of FTEs has increased from 230 to 286, there is still a considerable shortfall (105 FTEs) in the professional evaluators needed to achieve a mature evaluation function in departments and to support the Evaluation Policy requirements.

Evaluation findings further indicate that:

  • There remains minimal evaluation capacity in some large departments, while small agencies generally have no evaluation resources.
  • There are issues around the timing and focus of evaluations, often preventing their use for decision-making during the TB submission process.
  • Given that the revised Evaluation Policy has only been in place since April 2001, it is too early to determine whether RMAFs are being adequately implemented and whether they are actually improving the quality of performance measurement and evaluation work.
  • Demands for evaluation work from program managers within departments are at times beyond what the evaluation function, as currently resourced, can deliver.
  • Medium and large departments report that it would take a substantial period of time to evaluate all programs, policies and initiatives covered under the Evaluation Policy. For example, while 11 out of 24 medium and large departments indicate that it would take under 6 years to complete evaluations of all programs, policies and initiatives covered by the Policy, 13 out of 24 report that it would take between 6 and 10 years.

Conclusions for Progress and Gaps in Implementation (Issues #1 & #2)

Given that the interim evaluation was conducted at this early stage in the life of the Evaluation Policy, reasonable progress has been made in implementing the Policy. This progress has been attained despite various challenges such as:

  • the required re-building and/or repositioning of the function within many departments; and
  • a significant shortfall in the number of qualified evaluators currently operating within the field in the federal government.

Moreover, progress in implementation is markedly lagging among the small agencies, most of which have little or no evaluation capacity. To date, most of the measured progress in implementing the Policy has occurred among the medium and large departments that had some evaluation capacity prior to the implementation of the Policy. Smaller agencies continue to lack the capacity to address the requirements of the new Policy. Further consideration should be given to ensuring that the smaller agencies are aware of the Policy, are familiar with the CEE and its services and products, and have the necessary support to implement the aspects of the Policy that are relevant to them, given their specific operating requirements.

Findings for Challenges to Full Implementation (Issue #3)

There are significant challenges to the implementation of the new Policy. These include:

  • Many departments are in the process of rebuilding or establishing an appropriately resourced evaluation function.
  • Demographics of current evaluation professionals within the federal government, and the demand for qualified evaluators, have had a major impact on efforts to recruit as well as build and maintain capacity.
  • Given the relatively small size of the evaluation community, traditional federal government approaches to recruitment, training and professional development may not be appropriate or effective.
  • Full implementation of the Evaluation Policy will require a "cultural shift" within many departments. This shift includes changing and evolving roles for program managers, the evaluation function, and senior management. Successfully creating the new culture will require time, resources and support from senior managers.
  • The "managing for results" environment, combined with the requirements of the Transfer Payments Policy, is creating large demands for the evaluation function to use its expertise in performance measurement and evaluation framework development to support program managers.

Conclusions for Challenges to Full Implementation (Issue #3)

The increased productivity of the function over the past 18 months documented in this interim evaluation may not be sustainable unless considerable effort is made to redress the budget and human resources gaps that currently exist.

Workloads are forecast to increase by approximately 200% over the coming two years, while the number of evaluation staff is expected to increase by only 20-30% over the same period. This increased demand on the function, driven by the implementation of various policies (primarily the Evaluation Policy itself and the Transfer Payments Policy) and of new approaches to measuring results, will require commensurate increases in both budgets and personnel if the productivity level is to be sustained. In building the capacity of the function across government, it will remain important that departmental support for the evaluation function be sustained beyond full implementation of the Policy.

Addressing community development issues such as retention, training and professional development will require creativity and a willingness to go beyond traditional federal government approaches to recruiting and training professionals. Given the relatively small size of the community, ongoing partnerships and networks will need to be developed and maintained with external professional evaluation associations and internal evaluation groups. While initially resource intensive, this approach is likely to address these issues more quickly than traditional federal government methods for the recruitment and training of professionals.

Findings for CEE Role and Responsibilities (Issue #4)

Given the number of challenges facing the evaluation community, there is a clearly identified need for a coordinating and advisory organization such as the CEE to support and guide policy implementation.

Full implementation of the Policy will require a cultural shift in many departments in order to create an environment where managers take the initiative to embed the practice of evaluation into their work. CEE can play a leadership role in assisting departments with implementing this cultural shift, recognizing that a fundamental change of this nature will require time, resources, and senior level support.

Roles identified by departments and agencies during the interim evaluation study included:

  • Providing leadership and being an advocate for the Evaluation Community within the Government of Canada.
  • Providing advice and guidance for Policy implementation.
  • Building evaluation capacity in departments and agencies.

Among stakeholders outside the evaluation groups within departments and agencies (e.g., TBS Program Sectors, Office of the Auditor General, evaluation professional associations), there is strong support for an increased involvement of the CEE in:

  • Monitoring and reporting on evaluation capacity and policy implementation within the federal government.
  • Reviewing evaluation reports supporting TB submissions.
  • Supporting the TBS budget office role.

Conclusions for CEE Role and Responsibilities (Issue #4)

There is a strong need for the CEE to continue to play a leadership role in assisting the evaluation community in implementing the Policy. To date, the CEE has been successful in this role, particularly in light of the number of challenges it has faced in the past 18 months. There are clear indications from the interim evaluation study that, for additional progress to be attained in implementing the Policy, the evaluation community is looking for centralized leadership similar to that provided by the CEE in the initial stages of Policy implementation. The main areas requiring ongoing leadership are advocating the importance of evaluation to senior managers; continuing to assist in system-wide capacity building; providing training, tools and guides to support policy implementation; and identifying best practices for dissemination within the community.

A role that the CEE must emphasize over the next two years is its support of the Secretariat in the use of evaluation results for decision-making purposes. Interviews conducted with key informants from the Secretariat Program Sectors highlight a need for increased support from the CEE in analysing the RMAFs and evaluations that support submissions. The need for this support will likely increase substantially within the next two years, given the anticipated significant increase in the number of evaluation studies during the period and the proposed development of a Budget Office role for the Secretariat.

1.0 Introduction

The purpose of the current report is to present a summary of the findings of the interim evaluation of the Treasury Board Evaluation Policy, which came into force in April 2001. This section provides background to the evaluation, including the scope, issues and methods used. Section 2.0 contains background to the Evaluation Policy. Findings from the interim evaluation are presented in Section 3.0. Finally, conclusions for consideration are provided in Section 4.0.

1.1 Scope of Interim Evaluation

The purpose of the interim evaluation is to report on the progress made to date in implementing the new Evaluation Policy (the Policy) in the 87 departments and agencies subject to the Policy. To assist in the implementation of the Policy, Treasury Board funded the first two years of a four-year program with funding for years three and four being contingent on the results of an interim evaluation. The Treasury Board Ministers also established a Centre of Excellence for Evaluation (CEE) to assist with policy implementation, particularly with respect to capacity building. Timing of this interim evaluation falls in the early stages of the implementation process. As a result, the evaluation necessarily focuses on issues of implementation rather than policy impacts. As stated in the Results-based Management and Accountability Framework (RMAF) developed for the Policy:

"18 months is insufficient time to fully implement the Evaluation Policy or assess its impacts. However, given the need to assess whether to extend Business Case funding to a third and fourth year, some preliminary assessments will need to be made"(1)

The scope of the interim evaluation is the assessment of the progress accomplished so far in terms of the implementation and early outcomes of the Policy.

1.2 Evaluation Issues

The main evaluation issues addressed by the interim evaluation are those that have been outlined in the RMAF for the Policy. Again, given the timing of the evaluation, the present study emphasizes the extent of the progress made to date on developmental or implementation objectives. The evaluation issues are presented in Table 1, below, with associated sub-issues.

Table 1 - Issues and Sub-Issues for Interim Evaluation

Issue #1: How much progress has been made in implementation?
  • What is the current status of implementation of the Evaluation Policy?
  • Have departments made progress in implementing ongoing performance measurement?
  • What are current departmental evaluation practices?
  • Have departments been able to provide adequate evaluation capacity?
  • Is the evaluation function producing information and results that are integrated with departmental decision-making?
  • Has the Policy contributed to improved design, implementation and resourcing of programs, policies and initiatives?
  • Are external agencies using evaluation and performance monitoring results for decision-making?

Issue #2: How much is left to do to achieve full implementation?

Issue #3: What are the specific challenges in implementing the Policy in the short- and long-term?

Issue #4: What is the most appropriate role(s) of the CEE in implementing the Policy?

1.3 Evaluation Methodology

The methodology used for the interim evaluation incorporated three lines of evidence as outlined in the RMAF for the Policy. These lines of evidence included a survey of the evaluation community, key informant interviews, and case studies. Each method was developed and implemented by an external consultant.

1.3.1 Survey of the Evaluation Community(2)

To collect detailed information on the current status of evaluation functions within federal departments and agencies, a survey of the evaluation community was conducted during June and July of 2002. A questionnaire addressing a wide range of issues related to individual organizations' evaluation functions and the CEE was mailed to each of the 87 federal government departments and agencies subject to the Policy. Two forms of the questionnaire were used based on the size of the organization and its evaluation capacity. A longer form of the questionnaire (referred to as the standard template) was sent to 44 large- and medium-sized departments known to have had some existing evaluation capacity at the time of survey administration. A shorter form (referred to as the smaller agency template) was sent to 43 smaller agencies that were believed to have little or no evaluation capacity at the time of the survey administration. Determination of the appropriate survey to administer was based on information about the evaluation community managed by the CEE. The shorter form was designed to capture and account for the smaller agencies' unique size, needs and environment, while limiting their reporting burden. In the case of departments and agencies receiving the longer form, Heads of Evaluation received and were requested to complete the survey. This was also the case in smaller agencies where a Head of Evaluation was clearly identified; where the position did not exist, the Senior Financial Officer received the form and was requested to complete it on behalf of the agency.

An overall response rate of 72% was achieved. This included a 93% response rate from the large and medium departments (41 returns), and a 51% response rate from small agencies (22 returns).
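
These rates follow directly from the counts just given; as an illustrative check, a minimal sketch in Python (all counts taken from the preceding paragraphs):

    # Survey response rates, computed from the counts reported above.
    returned = {"large and medium departments": (41, 44), "small agencies": (22, 43)}
    for group, (ret, sent) in returned.items():
        print(f"{group}: {ret / sent:.0%}")           # 93% and 51%
    total_ret = sum(r for r, _ in returned.values())  # 63 returns
    total_sent = sum(s for _, s in returned.values()) # 87 questionnaires
    print(f"overall: {total_ret / total_sent:.0%}")   # 72%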

1.3.2 Key Informant Interviews(3)

Key informant interviews were conducted with representatives from 16 departments and agencies, Program Sectors of Treasury Board Secretariat (TBS), the Office of the Auditor General (OAG), and the Canadian Evaluation Society (CES). This approach was deemed essential to obtain information on the implementation of the Policy and on the capacity of the evaluation function to support decision-making in the context of Results-Based Management. In total, 61 individuals participated in the interview process.

Interviews were conducted using a semi-structured interview guide based on the evaluation issues. Key informants were provided with a copy of the interview guide prior to the interview to review and prepare their responses. Interviews were conducted from July through October 2002.

Departments selected to be interviewed included a cross-section of large, medium and small departments and agencies with varying levels of capacity ranging from minimal through to strong capacity for evaluation. For each department or agency selected, the evaluation team attempted to interview the director of evaluation, the director general responsible for evaluation, and the deputy minister or assistant/associate deputy minister. In total, 44 departmental representatives participated in key informant interviews. This included 19 directors of evaluation or evaluation managers, 12 directors general responsible for the evaluation function, and 13 senior managers at the deputy, associate deputy or assistant deputy level.

Interviews were also conducted with representatives from the TBS Program Sectors, the OAG, and the CES.

1.3.3 Case Studies(4)

To provide more detailed qualitative information on issues such as use of evaluation by program managers and quality of evaluation reports, eight case studies were conducted with medium and large departments. The criteria used to select departments for inclusion in the case study portion of the interim evaluation included:

  • a function well-enough established to permit analysis of evaluation products produced throughout the department;
  • representation from policy-based as well as program-delivery organizations;
  • a cross-section of departments involved in scientific, social, administration of justice and cultural policy and programs; and,
  • to the extent possible, medium as well as large departments and agencies.

For each case study, the participating department was asked to provide up to three evaluation studies or RMAFs that had been completed within the past year. The case study team reviewed these reports, and conducted interviews with both program managers and evaluation managers involved with the selected studies.

2.0 The Evaluation Policy

On April 1, 2001, Treasury Board issued revised policies for the Evaluation and Internal Audit functions within the federal government. These revised policies replaced the TBS Review Policy, which had previously provided policy direction to both functions. In approving the revised Evaluation Policy (the Policy), Treasury Board Ministers identified a four-year implementation strategy and provided funding for the first two years. The Ministers also established a Centre of Excellence for Evaluation (CEE) within the Secretariat to assist in, and monitor the success of, government-wide implementation of the Policy. While a portion of the two-year funding ($2.8 million) supported the establishment of the CEE, the bulk of the resources ($8.6 million) were directed to departments and agencies subject to the Policy (see Table 2, below). Treasury Board Ministers also directed that both revised policies were to be assessed after 18 months (Autumn of 2002) and fully evaluated at five years (2005/2006). Decisions on funding for the third and fourth years of the implementation strategy were to be contingent on the findings of the 18-month assessment of the Policy.

Table 2 - Funding for initial two years of implementation of Policy

             Departments and Agencies   Centre of Excellence for Evaluation   Total
  2001-02    $3.2M                      $1.4M                                 $4.6M
  2002-03    $5.4M                      $1.4M                                 $6.8M
  Total      $8.6M                      $2.8M                                 $11.4M

2.1 Requirements of the Policy

The objective of the revised policy is "to ensure that the government has timely, strategically focused, objective and evidence based information on the performance of its policies, programs and initiatives to produce better results for Canadians."

In support of this objective, key requirements of the revised policy are:

  • A required capacity, integrated with senior management. Deputy heads are to establish an appropriate evaluation capacity. They should appoint a senior head of evaluation and establish an evaluation committee chaired by a senior departmental executive. Departments/agencies may still combine the audit and evaluation functions in one branch.
  • Increased Scope. Evaluations are now to cover policies, programs and initiatives, rather than just the traditional program areas. Further, they are to cover any of these delivered through partnership mechanisms (inter-departmental, inter-governmental, etc.).
  • Increased Emphasis on Performance Monitoring & Early Results. Departments are to embed the discipline of evaluation into the life-cycle of programs, policies and initiatives to:
    • Develop results-based management and accountability frameworks (RMAFs) for new or renewed policies, programs and initiatives.
    • Establish ongoing performance measurement practices.
    • Evaluate issues related to early implementation and administration of the policy, program, or initiative.
    • Evaluate issues related to relevance, results, and cost-effectiveness.

To this end, Heads of Evaluation are to work with managers to help them enhance design, delivery and performance measurement.

  • Strategic Planning for Evaluation. To ensure the most effective balance of evaluation work, and to best serve department and government priorities, departments/agencies should develop a strategically focused evaluation plan.
  • Integration with Management and Strategic Decision-making. The policy defines factors contributing to success of evaluation, including establishing a conducive environment where managers embed the practice of evaluation into their work; and where evaluation discipline is used in synergy with other management tools to improve decision-making. Heads of Evaluation are to work to ensure that evaluation in their organization is healthy with respect to these factors. To this end, there is an emphasis on objectivity of the evaluation function, not independence.
  • Revised Standards of Practice. The policy includes a simplified and consolidated set of guidelines for professional practice in evaluation.

2.2 Context

2.2.1 Background

The introduction of the new Evaluation Policy occurred in a climate of renewal for the Federal Public Service. "The revised policies on internal audit and evaluation are part of the Government's ongoing commitment to continuous management improvement and accountability" (Minister Robillard, Feb. 14 Press Release). The new policies were the result of parallel reviews of the state of the Evaluation and Internal Audit functions, conducted in 2000, that included public and private sector consultations.

2.2.2 Focus on Serving Results-based Management

Improved internal audit and evaluation support the overarching policy on modern management, as articulated in Results for Canadians: A Management Framework for the Government of Canada(5).

The revised policy situates the evaluation function as a key enabler to managing for results, as it supports program managers' efforts to track and report on actual performance and helps decision makers objectively assess the results of policies, programs and initiatives. This distinguishes evaluation from internal audit, which provides assurances on a department or agency's risk management strategy, management control framework and information, both financial and non-financial.

2.2.3 Acknowledged Capacity Gap

As with other functional communities within the Federal Public Service, the evaluation function is facing the challenges of renewal. The number of evaluators and the level of financial resources devoted to evaluation have declined significantly since the early 1990s. The TBS study conducted in 2000 that led to the introduction of the new policy suggested that capacity was as low as 56% of the level needed to fully implement the proposed policy requirements(6). In addition, there is the reality of an aging public service, reflected in the evaluation community, and the associated forecast of high departure rates among experienced evaluation staff.

Within the context of the recognized reduced capacity for evaluation, the revised policy additionally requires an increase in the scope of activity to be undertaken by the function.

2.2.4 Pressure from Related Policies

Another important environmental factor for implementation of the revised policy is the pressure on capacity created by other policies related to risk management and results-based management. Two policies in particular have had a significant impact.

Transfer Payments Policy

Treasury Board's recent Transfer Payments Policy (TPP) requires the development of Results-based Management and Accountability Frameworks (RMAFs) for all department/agency grants and contributions programs before they can be approved. Assisting in the development of RMAFs has become a primary focus of the evaluation function in many departments and agencies. Because each RMAF defines requirements for future results reporting, another consequence of the TPP is an increase in future workloads associated with the performance monitoring and evaluation of these programs.

Active Monitoring Policy

As a key component of "Results for Canadians", the Active Monitoring Policy requires departments to monitor the management practices and controls used to deliver programs. The intent is to ensure early warning and effective control, without returning to the command-and-control approach of years past. Implementing active monitoring means increased demands on the evaluation function in each department.

2.3 Responsibilities for Policy Implementation

Responsibility for implementing the Evaluation Policy is shared among all stakeholders. It is a partnership between the Treasury Board Secretariat and departments and agencies. Within departments, success is not the sole responsibility of Heads of Evaluation - departmental senior managers and program managers each have a role in ensuring a supportive environment "where managers embed the practice of evaluation into their work" and "where evaluation discipline is used in synergy with other management tools to improve decision-making." Within Treasury Board, the CEE and other parts of TBS must work together in supporting the evaluation community and in "using the products of evaluation to support decision making at the centre".(7) This partnership is also an integral part of the joint effort to implement Active Monitoring of programs, policies and initiatives by the federal government.

2.3.1 Centre of Excellence for Evaluation (CEE)

The CEE was established to: provide leadership for the evaluation function within the federal government; take initiative on shared challenges within the community, such as devising a human resources framework for long-term recruiting, training and development needs; and, provide support for capacity building, improved practices, and a stronger evaluation community within the federal Public Service.

The mandate and role of the CEE were derived from broad directional statements in the revised policy. The focus of CEE activities has been primarily external, building evaluation capacity across the system and repositioning the evaluation function. The CEE identified a number of activities for the first two years of policy implementation and defined these in its strategic and operational plan (November 2001). This plan also identified five core strategic areas to guide the Centre's operations: policy implementation, monitoring, capacity building, strategic advice, and communication and networking.

As with the overall evaluation, the assessment of the CEE looks at the Centre's progress to date against its longer-term implementation goals. This assessment covers the first eighteen months of operations.

3.0 Interim Evaluation Findings

The evaluation findings are presented in this section according to the primary issues addressed, namely:

  • progress in implementation of the Policy;
  • gaps that remain to full implementation;
  • challenges to implementation; and,
  • appropriate roles and responsibilities for the CEE.

3.1 Progress in Implementation

The focus of the interim evaluation is the progress made in the initial 18 months in implementing the Policy. Progress is assessed according to how departments and agencies have advanced in meeting the key requirements of the Policy such as: positioning the evaluation function within the organization so that it is viewed as a required capacity that is integrated with senior management; increasing the scope of evaluations; increasing the emphasis placed on performance monitoring and early results; strategic planning for evaluation; integrating evaluation with management and strategic decision-making; and, ensuring that standards of practice are maintained in evaluation work.

3.1.1 Current status of implementation of the Policy

The status of implementation varies significantly between departments and agencies. Many of the smaller agencies report having a very limited or non-existent evaluation function. Among the medium and large departments, some are just starting to implement the new Policy, while others report being well beyond the halfway mark in implementation at this point.

Departments and agencies were asked a number of questions on the survey, in the key informant interviews and during case studies, to gauge current evaluation infrastructure and capacity. Having a robust evaluation infrastructure requires a Treasury Board Evaluation Policy that is fully implemented and well understood, not only by professional communities of evaluators, but also by program managers across the federal government. It also requires having accountability structures in place within the department or agency such as an active Internal Audit and Evaluation Committee, as well as an approved Evaluation Plan that is relevant and implemented (regular updating on targets and objectives, and follow-up on approved recommendations in evaluation reports). Finally, it involves Deputy Heads making a commitment to appropriate levels of resourcing for the function to ensure that evaluation is embedded in management culture, and that functional staff are adequately trained to support policy implementation.

The results of the study show that approximately one-half of the departments and agencies subject to the policy have an active evaluation function. Based on the survey data, 42 out of 87 departments/agencies have an active evaluation function where there are current, dedicated evaluation resources in the form of an evaluation budget and dedicated FTEs.

Within those organizations that report an evaluation function, there is a wide range of strong, adequate and weak evaluation capacity. As illustrated in Figure 1 below, based on these indicators from the survey of the evaluation community, only one department can be classified as having strong evaluation capacity for full implementation of the Policy. Another 16 departments have moderate capacity, but indicate they still require significant additional resources to reach a steady evaluation state consistent with full implementation of the Policy. A further 25 organizations are starting from a weak or minimalist position; they are working towards building their capacity and will require considerable time and support to attain full implementation of the Policy. Small agency respondents (n=21) indicated little or no evaluation function. The remainder of the evaluation community (24 non-respondents among the small agencies) can be assumed to have little or no evaluation capacity.
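
These capacity categories partition the full population of 87 organizations subject to the Policy. As an illustrative tally, a minimal sketch in Python (all figures taken from the paragraph above):

    # Assessed evaluation capacity across the 87 organizations subject to the Policy.
    capacity = {
        "strong": 1,
        "moderate": 16,
        "weak or minimalist": 25,
        "little or none (small agency respondents)": 21,
        "little or none (assumed for non-respondents)": 24,
    }
    with_function = capacity["strong"] + capacity["moderate"] + capacity["weak or minimalist"]
    print(with_function)           # 42 organizations with an active evaluation function
    print(sum(capacity.values()))  # 87 organizations in total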

Figure 1 - Assessed Evaluation Capacity (source: survey data)


Note: "Little or no capacity" includes both respondents (n=21) and non-respondents (n=24)

Interview results indicate that the main distinguishing factor, with regard to which departments or agencies define themselves as being advanced in implementation, is the maturity of the evaluation function prior to the establishment of the new Policy. Those departments that reported having relatively mature evaluation functions prior to the establishment of the Policy report that their functions remain stable and are increasing in capacity under the implementation stages of the Policy. In contrast, departments that report that they are in the initial stages of implementation also tend to either report having had under-developed evaluation functions prior to the implementation of the Policy, or that the evaluation function within the department has been undergoing major changes within the past two years with activities such as restructuring and repositioning within the department.

Findings from the case studies generally supported the information collected during the key informant interviews and the survey. The case studies focused on departments that, for the most part, had relatively mature evaluation functions prior to the implementation of the Policy. Across the eight medium and large departments that participated, the case studies found that implementation was relatively advanced, with many of the objectives of the Policy being met in the recent evaluation work presented to the study team for examination. In each department, the evaluation function was found to be producing credible evaluation results that meet Treasury Board standards and that have assisted, or will assist, in managing for results. Based on the case studies, progress in meeting many of the objectives of the Policy has been significant among the eight departments.

3.1.2 Evaluation Scope

One requirement of the new Policy is to increase the scope of evaluation beyond traditional program areas to cover policies, programs and initiatives. Results from the survey and the key informant interviews demonstrate an increased scope of evaluation beyond the traditional areas of evaluation to cover policies, programs and initiatives. According to the survey, this component of the Policy has been implemented by approximately one-half of the medium and large departments. From the data collection templates, 54% of these departments have conducted evaluations of either policies or initiatives, accounting for 27% of the evaluations conducted within large departments. In the key informant interviews, some departments indicated that they are still at the stage of defining the evaluation domain and universe for their organizations. The case studies did not collect information directly on the changing scope of evaluation.

3.1.3 Capacity of the Evaluation Function

Prior to the establishment of the Policy, there was an acknowledged shortage of resources and professional evaluators in the evaluation functions of many departments and agencies. The resources allocated to assist in the implementation of the Policy were in part intended to address this shortfall, evident in many organizations. The establishment and support of sufficient evaluation budgets and an adequate evaluation workforce are necessary if evaluation functions are to meet key policy requirements. The interim evaluation found that departments are investing in the evaluation function above 2000 levels and that, while the number of FTEs has increased, there remains a considerable gap in the number of evaluators required. Despite these challenges, the production of evaluations and performance measurement studies has steadily increased, with plans for considerable additional increases within the next two years. However, FTEs are planned to increase at a significantly slower rate than the expected productivity of the function over the same period.

Evaluation Budgets

Based on survey responses, the total evaluation budget in 2002/03 across the evaluation community is $42.4M. This represents an increase of $9.4M (or 28%) over the earlier evaluation budget estimate of $33M from a 2000 study(8) conducted by the Secretariat, just prior to policy implementation. Large and medium departments account for the majority of this budget; 65% ($27.4M) of the total is concentrated in 11 organizations.
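
The growth and concentration figures can be reproduced directly from the reported amounts; a minimal sketch in Python (dollar figures taken from the text above):

    # Evaluation budget growth and concentration, figures from the text.
    budget_2000, budget_2002 = 33.0, 42.4         # $M
    growth = budget_2002 - budget_2000            # $9.4M
    print(f"increase: ${growth:.1f}M ({growth / budget_2000:.0%})")  # ~28%
    top_11 = 27.4                                 # $M held by 11 organizations
    print(f"concentration: {top_11 / budget_2002:.0%}")              # ~65%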

Small agencies represent only a small percentage of the total evaluation budget across government. Only three of the small agency respondents indicated a total evaluation budget of more than $50,000. Clearly, there is very little capacity, and few dedicated resources, for evaluation across small agencies.

In 2002/03, departments invested almost $4M in the function over and above the $5.4M invested by Treasury Board through the Evaluation Policy Implementation Strategy. This indicates that, while more could be done, departments have taken steps to rebuild the function. The distribution of the $4M investment is relatively even across the medium and large departments subject to the Policy, with 29 departments reporting an increase in investment in the current fiscal year.

Evaluation workforce

As demonstrated in Figure 2, FTEs have increased from the 230 estimated in a Secretariat-commissioned study conducted in 2000(9) to the approximately 286 evaluators identified by respondents to the interim evaluation. This represents an increase of 24% in evaluation personnel over two years. Yet, according to departments, there remains a significant gap of approximately 40%, or 105 FTEs, required to support full Policy implementation.
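
The workforce figures can be verified in the same way; a minimal sketch in Python (FTE counts taken from the text above):

    # Evaluation workforce growth and steady-state requirement, figures from the text.
    fte_2000, fte_2002 = 230, 286
    print(f"growth: {(fte_2002 - fte_2000) / fte_2000:.0%}")  # 24% over two years
    gap = 105                                      # additional FTEs reported as needed
    print(f"steady state: {fte_2002 + gap} FTEs")  # 391 for full implementation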

Figure 2 - Evaluation Workforce


Table 3 contains the breakdown of current evaluators by classification and level. Of the current complement of 286 evaluators, 62% are concentrated in ten large departments. Approximately 19% of the complement is at the junior analyst level, another 33% at the analyst level, 28% at the senior analyst level and 6% at the executive level.

Table 3 - Breakdown of current evaluator complement (FTEs)

                     FTEs   % Total FTEs   Classifications & Levels
  Junior Analyst     53.8           18.8   AS01-03, ES01-03
  Analyst            94.0           32.8   AS04-05, CO01-02, ES04-05
  Senior Analyst     80.7           28.2   AS06-08, CO03-04, ES06-07
  Executives         17.4            6.1   EX01-05
  Other              40.3           14.1   All other classifications
  Total             286.2          100.0

A significant number of departments and agencies indicate they plan to engage in hiring over the next three years, a clear signal that the community is taking steps to build capacity. As described in Table 4, survey respondents expect to hire approximately 84 evaluators over the next three years, of which 52 will be new hires and 32 will be replacement hires.

Table 4 - Staffing Actions (FTEs), 2002-2005

                 2002/03   2003/04   2004/05   Total
  Replacement       9.00     13.75      9.00   31.75
  New Hires        35.00     16.00      1.00   52.00
  Turnover         16.00     16.75     10.00   42.75
  Retirements       3.00      5.00      7.00   15.00

Despite this anticipated activity, the evaluation community is only expected to grow by an additional 26 evaluators over the next three years. The community will peak at 319 evaluators in 2003/04 and decrease slightly to 312 by 2004/05. This is primarily due to a large number of expected departures, including turnover (43) and retirements (15), over the next three years. It is not known what proportion of the turnover will involve moves between departments (i.e., the evaluator remains within the evaluation community) and what proportion will be exits from the community. Regardless, the expected turnover and retirements will counterbalance expected hiring over the same period.

Figure 3 demonstrates that the net result of this activity is a remaining gap of approximately 79 evaluators from the 391 required to reach the desired steady evaluation state indicated by survey respondents. Closing this gap will be an ongoing challenge, as demand for experienced and qualified evaluators outweighs the current supply.
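
The projected trend in Figure 3 follows from netting the staffing actions in Table 4 against the current complement of 286 evaluators; a minimal sketch in Python reproducing the reported figures:

    # Projected evaluation FTEs, 2002-2005, derived from Table 4 staffing actions.
    fte = 286.0
    actions = {  # year: (replacements, new hires, turnover, retirements)
        "2002/03": (9.00, 35.00, 16.00, 3.00),
        "2003/04": (13.75, 16.00, 16.75, 5.00),
        "2004/05": (9.00, 1.00, 10.00, 7.00),
    }
    for year, (repl, new, turn, ret) in actions.items():
        fte += repl + new - turn - ret
        print(year, round(fte))                        # 311, a peak of 319, then 312
    print("net growth:", round(fte - 286))             # approximately 26 evaluators
    print("gap to steady state:", round(391 - fte))    # approximately 79 FTEs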

Figure 3 - Projected FTE Trend 2002 - 2005


Most key informants at the manager, director or director general level of the evaluation function recognized that building evaluation capacity within departments will be a long process with significant challenges, given the current demographics of the evaluation community within the federal government and the high demand for evaluators.

Despite these capacity challenges that were identified in the survey and key informant interviews, the case studies found that the staff from the evaluation function played key roles in each of the evaluation studies and RMAFs. The roles ranged from taking the lead in projects, to providing assistance and support to program managers. In a few projects, the evaluation staff were able to step in and assume the lead in a project when the program managers either lacked sufficient expertise or had significant time constraints.

Productivity - Evaluation Studies, RMAFs and Performance Measurement Studies

As illustrated in Table 5, the productivity of the community is expected to increase significantly between 2001/02 and 2002/03. For example, RMAFs are scheduled to increase from 57 to 181 (a 218% increase)(10), and planned evaluation studies from 85 to 315 (a 271% increase)(11). Performance measurement studies are also increasing significantly, from 45 to 92 (a 104% increase)(12). Proportionally, this increase in productivity is substantially higher than the estimated increase in FTEs over the same period, which raises questions about the sustainability of this sharp increase in productivity.

Table 5 - Evaluation Work 2001/02 to 2002/03 (All respondents)

                                           2001/02   2002/03
  Programs                                     309       455
  Policies                                      30        47
  Initiatives                                   98       135
  Other                                         26        11
  Total                                        463       648

  RMAFs completed                              192       181
  Evaluation Reports (approved)                138   249(13)
  Evaluation Reports (not yet approved)         36       65*
  Performance Measurement Studies               58        92
  Other Substantive Studies                     39        61
  Total                                        463       648
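
The percentage increases cited above follow from the underlying counts; a minimal sketch in Python (baseline and planned counts taken from the preceding paragraph and its footnoted sources):

    # Year-over-year increases in planned evaluation work, figures from the text.
    work = {
        "RMAFs": (57, 181),
        "evaluation studies": (85, 315),
        "performance measurement studies": (45, 92),
    }
    for product, (before, after) in work.items():
        print(f"{product}: {(after - before) / before:.0%} increase")  # 218%, 271%, 104%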

3.1.4 Emphasis on Performance Measurement and Early Results

One requirement of the new Policy is an increased emphasis on performance measurement/monitoring and early analysis of program/policy/initiative design and implementation. All departments where key informants participated in interviews reported progress in this area. However, these key informants also indicated that their departmental performance measurement function is not yet sufficiently mature to provide validated, high-quality data that is linked to costs.

Most departments reported in key informant interviews that performance monitoring will be a priority area for development over the next few years. One example of the greater emphasis placed on performance measurement under the Policy is the substantial increase in the number of performance measurement studies, from a reported 45 in 2000-01 to 58 in 2001-02, with 92 studies planned for 2002-03.

Given the role of the evaluation function in assisting with the development of RMAFs in most departments, many departments anticipate a higher level of integration of performance measurement and evaluation than previously occurred. This is because the RMAF requires the development of an ongoing performance measurement approach, in addition to an evaluation and reporting strategy, to ensure that managers can account for program/policy/initiative results.

During interviews, many key informants reported that it is too early to determine the extent to which program managers have actually "bought into" this requirement of the Policy, particularly in regard to their responsibility for managing ongoing performance measurement. Moreover, key informants also indicated that insufficient time has passed to determine the extent to which RMAFs are being implemented. Most noted that the real assessment of the integration of performance measurement and evaluation will occur during the first round of evaluations resulting from RMAF requirements.

Information from the case studies indicated that, while the RMAFs studied appear to be implementable, there were questions as to whether implementation has actually occurred. In some instances there were considerable challenges in getting buy-in from the large number of stakeholders implicated in the RMAF; in others, it was a question of whether the RMAF contained sufficient detail on data collection and reporting strategies. As well, some program managers interviewed perceived RMAFs as more of an administrative exercise than a useful management tool.

Key informants mentioned that the link between performance measurement and evaluation may require increasing the capacity among program managers to see the integration as a useful management tool. In some departments this capacity is evident among managers. The case studies generally found that while participating program managers may not have been fully aware of the Policy, they viewed evaluation as a useful integrated management tool.

3.1.5 Evaluation Planning

Strategic planning for evaluation can assist in maintaining an effective balance of evaluation work and serve department and government priorities. It was found that overall, progress has been made among departments and agencies in implementing a regular evaluation planning process. However, this aspect of the new Policy's requirements is not yet fully implemented in many of the small agencies. Approximately one-half of all departments and agencies (49%) reported that they had an evaluation plan approved for 2002-03. This is a 23% increase over the number of plans that were approved in 2001-02.

Seventy-one percent (71%) of medium and large departments reported approved plans in 2000/01, rising to 88% in 2002/03. Small agencies reported no change in the number of approved plans, with two in each of 2000/01 and 2002/03. Overall, this indicates that the community is making progress in increasing strategic planning through the use of approved evaluation plans - but only in the large and medium departments.

The case studies found that the vast majority of the evaluation studies and RMAFs presented for examination were conducted as a result of TBS requirements rather than according to strategic planning and priorities for the department.

In the interviews, some key informants reported that they are linking the evaluation plan into the strategic planning and priorities for the department. In contrast, a few reported that, although they are developing evaluation plans, they tend to be more reactive to current and immediate demands rather than linked to the strategic planning and priorities of the department. However, for these few, the intention is to link plans into the departmental strategic planning within the next few years.

During the interviews, a few departments expressed concerns that the RMAF process will "prescribe" the evaluation planning, leaving less opportunity and resources to link evaluation planning into overall departmental strategic planning and priorities.

3.1.6 Integration with Senior Management Decision-Making

One requirement of the new Policy is that evaluation should be considered a required capacity that is integrated in the management "tool-kit" to support senior management decision-making. One way to achieve this integration is to establish an evaluation committee chaired by a senior departmental executive. By establishing a committee where senior managers can actively participate, they become aware of evaluation findings, with a higher likelihood that they will use them for strategic decision-making within their organizations.

Responses from the survey show that 39 (or 93%) of the 42 departments/agencies with an evaluation function (standard template respondents) have an active evaluation committee, of which 33 (85%) are chaired by agency heads. Most report that the committee is active and meets regularly. In contrast, only two of the small agencies reported having an active evaluation committee.

These results suggest that the majority of departments and agencies with current evaluation capacity have evaluation committees, and are making concerted efforts to use them regularly. However, many smaller agencies do not yet have this basic infrastructure in place. This may be due to the restrictions imposed by limited resources or to the size of these organizations not warranting a committee structure dedicated to the evaluation function.

Most key informants reported that evaluation results are used in strategic decision-making in some manner within their departments. Most frequently, interviewees pointed to the fact that senior-level involvement on evaluation committees has increased over the past two years, resulting in a heightened awareness among senior managers of evaluation findings. Similarly, the case studies found that senior management, through committees, has addressed evaluation findings and used them to inform program/policy/initiative decision-making. Key informants at the level of evaluation managers, directors or directors general identified another indicator of the importance of the function to senior management decision-making: the increased funding assigned to the function through internal reallocation. Furthermore, the senior managers interviewed reported that they are aware of evaluation findings and often use the results in making decisions, viewing evaluation as an essential component of a results-based environment.

Case studies indicated that the findings from the evaluation studies were presented to, and approved by, senior managers (ADM or DM level). RMAFs were not required to go through this approval process.

Representatives from the TBS Program Sectors and the OAG further indicated that the evaluation function is essential to working within the current results-based management environment. Some concerns were expressed during these interviews regarding the utility of evaluations that focus primarily on design and delivery issues, and less on the more difficult issues of relevance and program impacts; respondents viewed these types of evaluations as being of limited utility for strategic decision-making. This sub-set of key informants also indicated that the appropriate timing of evaluations is critical to their usefulness in decision-making within the TB submission process. Some key informants noted that there are occasions where the availability of evaluation results does not coincide with this decision-making process (e.g., evaluation results are not available to accompany the Treasury Board submission for program renewal or redesign).

3.2 Gaps to Full Implementation

Among the departments and agencies with an active evaluation function, over one-half (25/42) are attempting to implement the Policy from a weak or minimalist position. In many instances, considerable capacity building is required in an environment that is very competitive in attracting qualified evaluators. With these challenges in mind, a considerable amount of progress has been made, particularly in the area of productivity, with sharp increases in the number of evaluation studies, RMAFs and performance measurement studies produced in the past few years. Despite this increase in productivity, a number of gaps remain before implementation of the Policy can be considered fully achieved by the evaluation community.

Among the smaller agencies, there are numerous gaps with most having limited or non-existent evaluation functions.

3.2.1 Coverage of programs, policies and initiatives by RMAFs and evaluations

Coverage varies considerably by department according to key informant interviews with representatives from departments and with TBS Program Sector representatives. Survey respondents were asked a series of questions designed to provide a self-assessment of their capacity gap, to identify requirements to close this gap, and to identify any additional resources they would require.

As illustrated in Figure 4, only 9 large and medium departments indicated that they are covering all their current RMAF commitments. Furthermore, 10 indicated that at least 25% of their commitments were not covered, including 3 departments covering less than 50%.

Figure 4 - Proportion of RMAF commitments unable to cover


These findings suggest that the TBS Transfer Payments Policy is having an impact on a significant number of departments and agencies and may be widening the capacity gap by creating a significant backlog of evaluation work.

When asked about the length of time needed to evaluate all programs, policies and initiatives covered by the Evaluation Policy, large and medium departments report that they still require a substantial period of time (Figure 5). Ten respondents indicated that it would take at least 3 to 5 years from today, while another 13 respondents indicated it would take 6 to 10 years.

Figure 5 - Time required to complete evaluation of all Programs, Policies and Initiatives


These survey results are corroborated by the information collected during the key informant interviews. Representatives of departments and agencies interviewed indicate that the demand for evaluation work from managers within the department (e.g., evaluation studies, RMAF development) is at times beyond what the function can deliver with current resources. Interviews with representatives from TBS Program Sectors indicate that coverage of key programs/policies/initiatives is not yet complete, with issues arising around both the timing and the quality of evaluations and RMAFs as they relate to TB submissions.

These results strongly suggest that the evaluation community is not capable of meeting all its commitments under the Policy and is still a long way from reaching a mature and steady state. This situation may be related to several factors, including insufficient evaluation budgets, the current demographics of the evaluation community within the federal government, and the high demand on evaluators across the system created by the Evaluation Policy itself in conjunction with the Transfer Payments Policy.

When asked what additional resources would be required to bridge the capacity gap and fully implement the Evaluation Policy, respondents indicated that an additional $13M is required above current resource levels, which include $5.4M in temporary funding related to year two of the implementation strategy. Large and medium departments account for $11.9M (or 92%) of this total. The $13M represents an increase of 31% over the current $42.4M evaluation budget ($37M in A-Base + $5.4M in TBS funding). As mentioned earlier, respondents also indicate a need for an additional 105 FTEs. Interestingly, only 12 FTEs are identified by small agencies, even though they also indicate that an insufficient number of staff is their primary barrier to fully implementing the Policy.
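As a check, the reported percentages follow directly from the dollar figures above:

\[
\$37\text{M} + \$5.4\text{M} = \$42.4\text{M}, \qquad
\frac{\$13\text{M}}{\$42.4\text{M}} \approx 31\%, \qquad
\frac{\$11.9\text{M}}{\$13\text{M}} \approx 92\%
\]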

When asked where these additional resources would be used, 88% of respondents indicated a need to support performance measurement across the department/agency, 79% to conduct evaluation studies, and 58% to develop RMAFs. Other responses included follow-up and monitoring of evaluation results and recommendations, and providing strategic advice and research (for example, developing models and tools).

3.2.2 Gaps in Use of Evaluation for Decision-making and Accountability

During the interviews, many cautioned that any gap in how evaluation is currently used in decision-making is difficult to assess at this stage of Policy implementation. This gap can be better assessed once evaluation studies developed under the Policy have had sufficient time and opportunity to become available to management.

3.3 Challenges for Implementation

A number of challenges to implementation of the Policy were identified during the evaluation. These are summarized briefly below.

3.3.1 Implementing the Policy during a Rebuilding Phase for the Function

During the interviews, respondents in many departments indicated that one of their main challenges in implementing the Policy is the need to rebuild the evaluation function in order to respond to current demands. This fundamental challenge should be considered in light of the other challenges identified in this section.

3.3.2 Laying the Groundwork for Implementing the Policy

Part of this rebuilding has required concentrating on laying the groundwork for full implementation of the new Policy (e.g., hiring staff, planning, developing performance measurement). Many interviewees reported that full implementation of the Evaluation Policy will require a "cultural shift" within many departments, including changing and evolving roles for program managers, the evaluation function, and senior management. This shift will require time, resources and support from senior managers.

Case studies found that the level of involvement of program managers varied by department, ranging from significant involvement in all aspects and stages of the project to more limited involvement, primarily due to time constraints on the program managers. A few departments reported significant efforts to develop evaluation capacity among managers so that they can become more fully involved in the evaluation and RMAF development process.

3.3.3 Restrictions in Budget and Staff

As previously mentioned, there are currently significant gaps in the number of evaluators in departments, with estimates indicating a shortage of approximately 40% in trained evaluators across the system. Key informant interviews indicated that the demographics of current staff and the increasing demand for qualified evaluators have had major impacts on their success in recruiting and capacity building.

In response to the question about barriers to implementing the Evaluation Policy, survey respondents identified a wide range of issues and limitations (Figure 6). However, the primary barriers for most departments and agencies revolve around very basic infrastructure requirements.

Sixty-eight percent of large and medium respondents indicate an insufficient number of staff and lack of evaluation budget as their primary barriers to implementing the Policy. They further indicate that their current staff skill set (54%) and new priorities and issues within their organizations (51%) remain major challenges. Another important barrier indicated by large and medium departments is staff turnover.

These concerns are even more pronounced among small agencies, with 94% of respondents indicating an insufficient number of staff and 89% indicating a lack of budget. Current staff skill set (44%) is also identified as a major barrier to implementation.

Figure 6 - Identified Barriers to Implementation of Policy


3.3.4 Challenges in recruitment, training and development

Community development is particularly challenging for the evaluation community given its relatively small size. This means that some of the traditional approaches to professional recruitment within the federal government may not be as appropriate or effective in targeting this relatively small, specialized professional group.

Similar challenges are evident for the community with regard to issues of training and development. Training and professional development opportunities for small groups are relatively costly in comparison to similar opportunities designed for larger professional groups.

3.3.5 Transfer Payment Policy Requirements

For large and medium departments, the perceived impact of the RMAF workload is mixed. Survey results (Table 6) show that 25% (8/32) of respondents did "significantly less" of at least one area of evaluation work (performance measurement, evaluations, strategic advice or other) and an additional 44% (14/32) did "somewhat less" in 2001-02 to meet the requirements of the TBS Transfer Payments Policy (TPP). For 2002-03, only 7% (2/27) of respondents indicated doing "significantly less", while 41% (11/27) indicated doing "somewhat less".

Table 6 - Percentage of departments impacted by the TBS Transfer Payments Policy, 2001-02 and 2002-03

(Each row shows the percentage of respondents reporting doing less in that many areas of evaluation work - performance measurement, evaluations, strategic advice or other - to meet the requirements of the Policy on Transfer Payments.)

"Somewhat Less"           2001-02    2002-03
At least one area           43.7       40.7
At least two areas          18.7       25.9
At least three areas        12.5       11.1
All four areas               6.2        7.4

"Significantly Less"      2001-02    2002-03
At least one area           25.0        7.4
At least two areas           9.4        7.4
At least three areas         3.1        0
All four areas               0          0

In general, these results suggest that while the TPP has not greatly impacted the ability of some departments and agencies to provide services in other evaluation areas, a significant proportion of respondents (69% in 2001-02, combining the "somewhat less" and "significantly less" responses) are actually doing less work in at least one area as a result of the Policy.
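As an arithmetic check using the respondent counts reported above, the 69% figure for 2001-02 combines the two response categories:

\[
\frac{8}{32} + \frac{14}{32} = \frac{22}{32} \approx 69\%
\]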

One impact of the implementation of the TPP, according to many evaluation heads and TBS Program Sector representatives, has been managers' focus on process rather than content in many instances. This may lead to a "check-list" approach to performance measurement and evaluation rather than true buy-in to the rationale and utility of the functions. For example, many managers seem to be focusing on getting the RMAF completed (a TPP requirement) without extensive consideration of the benefits and challenges of its actual implementation.

This concern was echoed in some of the case study findings, where RMAFs in many instances had not yet been implemented. In a few instances, program managers reported viewing the performance measurement strategy outlined in RMAFs more as an administrative exercise than as a useful management tool.

Some interviewees cautioned against assuming that performance measurement that is planned in a department has actually been implemented. Representatives from both TBS Program Sectors and the evaluation functions in departments and agencies reported that a significant proportion of existing RMAFs may include planned performance measurement strategies that are not yet implemented.

3.3.6 Role of Program Managers in Evaluation

A challenge identified during the interviews was the role of the program manager in evaluation. Both TBS Program Sector representatives and departments raised concerns as to how useful evaluations will be for strategic decision-making if a significant level of responsibility for the function remains at the program manager level. A caution was raised that there may be a tendency to focus more on process or program delivery issues, and a hesitancy to ask certain questions that might produce negative findings. Emphasizing design and delivery issues is a crucial aspect of evaluation, especially if the evaluation is to be useful as a management tool for program managers; however, this emphasis, at the expense of other issues, may reduce the usefulness of evaluation results for strategic decision-making.

Within departments, it will be critical to balance broader strategic and corporate needs (such as managing to achieve strategic or higher-level outcomes) with the need for evaluation of individual programs. The findings from the case studies indicate that this balance is being achieved in some of the studies reviewed. In some cases, evaluations were deemed by the program manager to be very useful while, at the same time, the evaluation manager reported that evaluation results were being used by senior management.

According to the key informant interviews, knowledge of evaluation among program managers may also need to be given consideration, particularly in light of the new Policy's requirement that managers embed the practice of evaluation into their work. According to a few key informants, full implementation of the Policy will require program managers to gain a better understanding of evaluation, which may require some form of training.

The case studies identified that while program managers often did not have in-depth knowledge of the Policy, they were aware of the purpose of the evaluations. The case studies also found that program managers in the eight large departments were actively involved in the planning, implementation and reporting of evaluation studies. In addition, managers reported that they generally found the results useful and the recommendations implementable.

3.4 CEE Roles and Responsibilities

This section briefly describes the changes that have occurred in the mandate and role of the CEE within the initial 18 months, assesses the results achieved by the Centre, and presents client feedback collected during the evaluation study and other recent studies.

3.4.1 Changes in Mandate and Role

In the first 18 months of the implementation of the Policy, the CEE focused on helping to explain the new Policy and its requirements, and on facilitating policy implementation in departments and agencies. Over the years, the evaluation capacity in many organizations had been weakened by events such as Program Review. As a result, the CEE has engaged heavily in helping departments and agencies build evaluation capacity through a community development strategy and plan comprising a broad range of activities.

As the CEE moved through its first year of operation, the Centre clearly recognized an emerging and growing internal need within the TBS, driven by TBS Program Sectors seeking assistance and support in reviewing RMAFs, evaluations and active monitoring in general. As a result, more and more of the CEE's resources have been devoted to internal TBS responsibilities. This emerging internal need can be traced to the unanticipated influence of a number of factors, including:

  • The TBS Transfer Payments Policy imposed significant and immediate demands on the evaluation community and the CEE. The Centre is now playing a key role in providing guidance on RMAF issues.
  • The Active Monitoring Policy created an additional workload for the Centre and its resources (i.e., portfolio teams).

This change in mandate and role of the CEE has resulted in a new internal focus for the Centre that has grown to include assisting TBS decision-making around future funding vis-à-vis new programs. More specifically, the CEE plays a role in monitoring, reviewing and assessing individual evaluations that are provided in support of TB submissions.

3.4.3 CEE interim results

The CEE has focused its efforts both externally to departments and agencies, and internally to TB ministers and the TBS, with a goal of helping both ultimately achieve a number of key results (see Table 7). These results are generally longer-term in their attainment system-wide, which is an important consideration when assessing performance after 18 months.

Table 7 - CEE desired results

Departments:

  • Competent Workforce and Sufficient Capacity
  • Improved Evaluation Practices
  • Integration of Evaluation into Management Practices
  • Evidence-based, Timely and Credible Reporting for Decision-making and Accountability

TB Ministers & TBS:

  • Assess effectiveness of Programs
  • Inform Program Funding & Allocation Decisions
  • Report performance results to Parliament and Canadians

During this period, the CEE has come to define its plans and activities around four broad roles:

  • leadership and support to assist departments in repositioning evaluation;
  • community development to assist in capacity building;
  • integrating results into decision-making; and,
  • centre of expertise for evaluation.

3.4.3.1 Leadership and support to assist departments in repositioning evaluation

Within 18 months, the CEE has made progress through a number of achievements including:

  • allocation of $8.6M to departments to strengthen and increase evaluation capacity;
  • re-establishment of evaluation networks within the federal government, and use of this network to highlight the repositioning of the evaluation function in departments;
  • re-establishment of the evaluation web presence with current topics and guidance on evaluation practices;
  • guidance to evaluators and managers on RMAFs, including workshops, the RMAF Guide, strategic approach and guidance, and a compilation of good practices; and,
  • development of the RMAF for the Policy.

This has laid the initial groundwork that will be necessary for continued progress in implementation of the Policy within departments.

3.4.3.2 Community development to assist in capacity building

The CEE has made some progress in the community development area, laying groundwork upon which future community development and capacity building can take place. Results include:

  • establishment of a Community Development Strategy for the federal government evaluation community;
  • development of demographic and competency profiles for professional evaluators in the federal government;
  • coordination and support of community-wide recruiting efforts for professional evaluators;
  • development and implementation of an entry-level internship program;
  • development of training and development curriculum for in-career evaluators; and,
  • establishment of an Innovative Ideas Exchange to promote the sharing of good/best practices within the community.

3.4.3.3 Integrating Results into Decision-making

Results achieved by the CEE in the initial 18-months of implementation of the Policy include:

  • review, analysis and provision of advice to TBS analysts on RMAFs supporting the Transfer Payments Policy (over 360 RMAFs); and,
  • establishment of database capability to monitor health and performance of evaluation function in departments and agencies, and tracking of evaluation trends and issues across the system.

These results will assist in the eventual attainment of the end goals of improved design and performance of Transfer Payment Programs, and the increased use of evaluation results at TBS and by TB Ministers.

3.4.3.4 Centre of expertise for evaluation

The Centre has played an important role in providing evaluation expertise to a number of stakeholders outside departments and agencies. Foreign governments, for example, have sought advice on an ongoing basis about the practice of evaluation in Canada, given the recognition Canada has received from such international bodies as the OECD. Advice and counsel on the practice of evaluation is generally provided in relation to the following:

  • Visiting foreign delegations;
  • Evaluation questions from all parts of the TBS;
  • TB Ministers;
  • Provincial and other levels of government; and,
  • Responses to Public Accounts Committee, the Office of the Auditor General, and other external questions on evaluation.

3.4.4 Client feedback for CEE

Client feedback on the CEE is taken from a number of sources, including the survey of the evaluation community conducted for the current evaluation, key informant interviews, case studies, a survey to assess user needs that could be met through on-line products and services as part of the redevelopment of the CEE website, and a study of the current demographic and skill profile of the evaluation community. These findings are structured according to overall satisfaction with the CEE, familiarity with the CEE, its roles and responsibilities, and its products and services.

3.4.4.1 Overall Satisfaction with the CEE

Survey results show that those departments and agencies with an evaluation function (n=42) indicated overall satisfaction with the CEE. This finding is consistent with the information collected during the key informant interviews. As presented in Figure 7, 57% of survey respondents reported being satisfied with the Centre, another 20% reported being very satisfied, and only 6% reported being dissatisfied.

Figure 7 - Overall satisfaction with CEE


3.4.4.2 Client Familiarity with the CEE

The survey of departments and agencies indicates familiarity with the Centre and its products and services among the evaluation community. Seventy-seven percent (77%) of respondents indicate that they are either "somewhat familiar" or "very familiar" with the CEE. However, the responses of large and medium departments differ markedly from those of small agencies, where there is often limited or no existing evaluation function.

When compared with the smaller agencies, large and medium departments report being more familiar with the CEE: forty-six percent (46%) report being "somewhat familiar", while the remaining 54% report being "very familiar". Smaller agencies are significantly less familiar, with only 26% of respondents reporting being "somewhat familiar" and 69% reporting being "not at all familiar" with the Centre.

The case studies found limited familiarity with the CEE among program managers; however, it should be noted that the CEE's main contact with departments is with staff from the evaluation function. Representatives from the evaluation function in the departments indicated that the CEE had been supportive in an advisory role for many of the RMAF projects reviewed under the case studies.

3.4.4.3 CEE Roles and Responsibilities

In the survey, departments and agencies were also asked to rate the importance of, and their satisfaction with, current CEE roles and responsibilities. These categories included "Leadership role within the Evaluation Community," "Advocate for the Community within the Government of Canada" and "Building Department and Agency Capacity." Again, responses of large/medium departments and smaller agencies varied substantially, with ratings markedly lower in smaller agencies.

As illustrated in Table 8, importance ratings range from "neutral" to "important" in smaller agencies, and from "important" to "very important" in large and medium departments. This difference may be due to smaller agencies' lesser familiarity and interaction with the Centre. For departments and agencies familiar with the CEE, satisfaction levels range between "neutral" and "satisfied".

In some instances, a significant gap exists between the importance and satisfaction ratings for particular CEE roles and responsibilities; in these cases, respondents assign high importance but show only moderate satisfaction. For "Building department and agency capacity", respondents on average indicated high importance (4.3 rating) but only neutral satisfaction (3.4 rating). For large and medium departments, "Being an advocate for the community within the Government of Canada" was given very high importance (4.7 rating) but only moderate satisfaction (3.5 rating). Sustained focus and attention by the CEE would be required to raise satisfaction levels in these areas of greater importance to the community.

Table 8 - Average importance and satisfaction ratings

                                                      Small Agencies             Large/Medium Depts
CEE Roles & Responsibilities                      Importance  Satisfaction    Importance  Satisfaction
Leadership role within the Evaluation Community      3.6          4.0             4.5          3.8
Advocate for the Community within the
  Government of Canada                               3.1          N/A             4.7          3.5
Building department and agency capacity              3.8          3.0             4.4          3.5

Level of Importance: 1=very unimportant, 2=unimportant, 3=neutral, 4=important, 5=very important

Level of Satisfaction: 1=very dissatisfied, 2=dissatisfied, 3=neutral, 4=satisfied, 5=very satisfied
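One way to read Table 8 is as an importance-satisfaction gap, where a larger gap flags a role needing more attention. A simple illustration, using the large/medium ratings above:

\[
\text{gap} = \text{importance} - \text{satisfaction}; \qquad
4.7 - 3.5 = 1.2 \text{ (advocacy)}, \quad
4.4 - 3.5 = 0.9 \text{ (capacity building)}, \quad
4.5 - 3.8 = 0.7 \text{ (leadership)}
\]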

A number of findings on the CEE's roles and responsibilities also emerge from the key informant interviews. In light of the current challenges in implementing the Evaluation Policy, most interviewees within the evaluation community indicate a strong need for a coordinating and advisory organization like the CEE. Similar support for such an organization was expressed by external stakeholders such as the OAG and the professional association of evaluators (the Canadian Evaluation Society). In interviews conducted within the TBS, Program Sector representatives indicate that there is a role for the CEE in supporting their work on the review of evaluation reports and the coordination of evaluation work around horizontal initiatives.

3.4.4.4 CEE Products and Services

Survey respondents were also asked to provide importance and satisfaction ratings for a number of CEE products and services. Data from organizations with an evaluation function is presented in Table 9. Results show that "CV Circulation" and "Job Fairs" are relatively less important when compared to other CEE products and services, indicating these are low priority items for the community.

A number of products and services received high ratings for importance from the evaluation community. These included the following: "Heads of Evaluation meetings", "advice and guidance to support policy implementation", "strategic planning sessions", "guide for the development of RMAFs", "website", and "providing best practices in evaluation". These are areas the CEE may want to continue to support.

As with CEE roles and responsibilities, the gap between importance and satisfaction ratings indicates some products and services where the CEE should look to improve its performance. For large and medium departments, these include "Providing best practices in evaluation" and the "Website".

Table 9 - Importance and satisfaction with CEE products and services

CEE Products and Services                                Large/Medium Depts.
                                                       Importance   Satisfaction
Heads of Evaluation Meetings                              4.4           4.0
Advice and guidance to support Policy implementation     4.2           3.7
Strategic Planning Sessions - Heads of Evaluation        4.3           4.0
Guide for the development of RMAFs                       4.1           3.7
Website                                                  4.3           3.2
Providing best practices in evaluation                   4.2           3.0
Presentations on various evaluation topics               4.1           3.6
Strategic approach to RMAFs                              3.9           3.3
Community Development Forum                              4.1           3.7
Innovative Ideas Exchange                                3.9           3.4
E-mail Communications                                    3.9           3.8
Sponsoring developmental programs for evaluators
  (i.e., internship)                                     3.7           3.3
Senior Advisory Committee Meetings                       3.8           3.6
CV circulation                                           3.4           3.2
Job fairs                                                3.4           3.1

Level of Importance: 1=very unimportant, 2=unimportant, 3=neutral, 4=important, 5=very important

Level of Satisfaction: 1=very dissatisfied, 2=dissatisfied, 3=neutral, 4=satisfied, 5=very satisfied

The interviews with key informants indicate satisfaction with the products and services provided by the CEE, but also point to needs for additional products and services, as well as suggestions for improving current ones. These include the need for: more strategic and content advice in addition to process-oriented advice; development of best practices and examples; consistency of messages and advice to departments and TBS analysts; better timing and level of detail in reviews of evaluation reports for TBS analysts; communication of the CEE's mandate and roles to both the evaluation community and external stakeholders; and working more closely with TBS analysts.

3.4.4.5 Direction and Future Focus

A number of sources provide direction for the CEE and indicate where it should focus its efforts moving forward. In an earlier survey(14) of the Evaluation and Internal Audit communities, employees from 59 departments and agencies were asked where they thought the TBS Centres of Excellence should place their emphasis. They responded that the CEE should play a leadership role, "performing a centralized function in terms of developing the required capacity, the standards, policies and even in terms of the provision of training and development services". Respondents further expressed the need for this centralized function to include educating client departments and the functional community about their roles in the evaluation initiative.

As illustrated in Table 10, the survey of the evaluation community conducted in the context of the current evaluation study indicates that the primary focus for the CEE should be community development including recruitment and staffing assistance, providing best practices and training assistance, and offering workshops and tools.

Another key area where the community feels the CEE should play a role is in helping to facilitate communications, networking and coordination amongst the community and with other stakeholders such as TBS analysts. Other areas identified include providing guidance and strategic advice on key evaluation issues, promoting "advocacy" on behalf of the community and securing additional funding and resources for departments and agencies.

Table 10 - Identified areas for future focus

Based on your needs, where should the CEE focus future efforts?
(coded responses to open-ended question; multiple responses possible)

Area of focus                                                         Response frequency
Community development (recruitment, staffing, best practices,
  training assistance, workshops and tools)                                   17
Communications, networking and coordination amongst the community
  and with others (i.e., TBS analysts)                                        10
Guidance, strategic advice                                                     6
Advocacy                                                                       5
Build evaluation capacity                                                      5
Secure additional funding and resources for departments                        5
Website                                                                        3
Small agency assistance                                                        2
Secure additional funding for the CEE                                          1

The key informant interviews also identified a number of areas for increased CEE involvement. Key informants recognized the limited resources of the CEE, which led many respondents to hesitate to discuss expanded roles beyond current activities. Interviewees recognized that full implementation of the Policy would require a cultural shift to create an environment in which managers could embed the practice of evaluation into their work. Many indicated that the CEE can play a leadership role in assisting departments with this cultural shift, recognizing the longer time line and the resources and senior-level support that this would require.

Participants in the case studies also recommended areas for future CEE focus, including guidance on: costing evaluations, choosing the levels at which to evaluate, TBS expectations with regard to evaluation, areas of flexibility in evaluation, development of RMAFs, and sharing lessons learned across departments.

4.0 Conclusions

4.1 Progress in implementation

Given the early stage of implementation, it can be concluded that reasonable progress in the implementation of the Policy has been made. This progress has been attained despite various contextual challenges such as:

  • The required re-building and/or repositioning of the function within many departments; and,
  • A significant capacity gap in the number of qualified evaluators within the functions.

Progress in implementation is lagging among the small agencies, most of which have little or no evaluation capacity. To date, most of the measured progress in implementing the Policy has occurred among the medium and large departments that had some evaluation capacity prior to its introduction. Smaller agencies continue to lack the capacity to address the requirements of the new Policy. Additional effort should be made to ensure that smaller agencies are aware of the Policy, familiar with the CEE and its services and products, and have the necessary support to select and implement the aspects of the Policy that are relevant to their specific contexts.

4.2 Challenges and barriers to implementation

The level of increased productivity of the function over the past two years may not be sustainable unless considerable effort is made to redress budget and human resources gaps that currently exist for the function.

Projected workloads are forecast to increase by approximately 200% over the upcoming two years, while the number of evaluation staff is expected to increase by only 20-30% over the same period. Sustaining this increased demand on the function, which follows the implementation of various policies and approaches to measuring results, will require comparable increases in both budgets and personnel. In building the capacity of the function across government, it will remain important to ensure that departmental support for the evaluation function is sustained beyond full implementation of the Policy.

Advancing community development issues such as retention, training and professional development will require creativity and a willingness to go beyond traditional federal government approaches to recruiting and training professionals. The relatively small size of the community will require ongoing partnerships and networks to be developed and maintained with external professional evaluation associations and internal evaluation groups. While initially resource intensive, this approach is likely to address these issues more quickly than traditional federal government methods for the recruitment and training of professionals.

4.3 CEE Roles and Responsibilities

There is a strong need for the CEE to continue to play a leadership role in assisting the evaluation community to implement the Policy. To date, the CEE has been successful in this role, considering the challenges it has faced in the past 18 months. The interim evaluation clearly indicates that further progress in implementing the Policy will require a central group willing to play a leadership role similar to that assumed by the CEE in the initial 18 months. The main areas requiring ongoing leadership are advocating the importance of evaluation to senior managers, continuing to assist in system-wide capacity building, providing training, tools and guides to support policy implementation, and identifying best practices for dissemination within the community.

Another role that the CEE will need to continue to enhance over the next two years is its support of the TBS in using evaluation results in decision-making. Findings from the TBS Sector interviews identified a need for increased support from the CEE in analyzing RMAFs and evaluations supporting submissions. The need for this support will likely grow substantially over the next two years, given the anticipated significant increases in the number of evaluation studies during this period and beyond.

Footnotes:

(1)  Results-based Management and Accountability Framework: Evaluation Policy - Centre of Excellence for Evaluation, Treasury Board Secretariat.

(2)  The survey of the evaluation community was conducted by Trevor Bhupsingh Consulting.

(3)  Key informant interviews were conducted by Goss Gilroy Inc. Given the concurrent timing of the evaluation of the Audit Policy, some key informant interviews were conducted jointly with one interviewer from each evaluation team present. Key informants were given the choice as to whether they preferred joint or separate interviews.

(4)  Case studies were conducted by Consulting and Audit Canada.

(5)  Treasury Board Secretariat, 2000.

(6)  Internal communications.

(7)  Treasury Board Evaluation Policy, 2001.

(8)  Consulting and Audit Canada Study, 2000.

(9)  Consulting and Audit Canada, 2000 study estimate of evaluation FTEs.

(10)  The increase is 218% when compared to the 57 RMAFs produced in 2000/01.

(11)  The estimated increase is 193% when compared to the 85 evaluation studies produced in 2000/01.

(12)  The estimated increase is 112% when compared to the 45 performance measurement studies produced in 2000/01.

(13)  The actual number approved at the time of the data collection (July 2002) was 116 evaluation reports, with 198 not yet approved. Using the 2001/02 completion rate provides an estimated 249 completed evaluations by 2002/03 year-end, with 65 still to be approved.

(14)  Personnel Psychology Centre of the Public Service Commission of Canada, 2001.
