2012 Annual Report on the Health of the Evaluation Function

ISSN 1928-3474

Message From the Secretary of the Treasury Board

The 2012 Annual Report on the Health of the Evaluation Function is the third such annual report and provides the Treasury Board and Canadians with information on trends in the evaluation function within the Government of Canada from 2007–08 to 2011–12.

In 2009, the Policy on Evaluation was renewed. The policy included a four-year transition period to give departments time to build the capacity needed to fully implement its requirements as of April 1, 2013. This annual report provides a government-wide view of the first three years of policy implementation, during which time:

In 2013–14, there will be an evaluation of the Policy on Evaluation. Information from a range of sources, including feedback from departmental stakeholders and annual reports on the health of the evaluation function, will support the evaluation of the policy and subsequent recommendations.

Yaprak Baltacioğlu
Secretary of the Treasury Board

1. Introduction

1.1 Purpose of this report

The 2012 Annual Report on the Health of the Evaluation Function fulfills a key responsibility of the Treasury Board of Canada Secretariat (the Secretariat) to monitor and report annually on government-wide evaluation priorities and the health of the evaluation function.

1.2 Areas addressed in this report

This report continues to address the six areas covered in preceding annual reports: resources for evaluation, coverage, governance and support, quality of evaluations, use of evaluations, and support provided by the Secretariat.

1.3 Information sources used in preparing this report

Information presented in this report focuses primarily on the 35 large departments and agencies.

The report draws information from the Secretariat's ongoing monitoring of the function, including through its Capacity Assessment Survey of departmental evaluation functions, its annual monitoring of departmental evaluation quality and use, its regular interactions with departmental evaluation units, and departmental evaluation plans and evaluation reports submitted to the Secretariat.

2. Resources for Evaluation

2.1 Financial resources

Financial resources for evaluation have remained relatively stable across the function.

As shown in Table 1, financial resources for the federal government's evaluation function have been relatively stable since 2007–08, with a peak in 2009–10. In 2011–12, for large departments and agencies:

  • Annual resources for evaluation were approximately $60.2 million, representing no change from 2010–11. The amount spent on evaluation across the Government of Canada was 0.4 per cent of the combined dollar value of all the programs evaluated in 2011–12.
  • As in all previous years, salaries represented the largest component of total resources for the function, at 65 per cent.
  • Resources for salaries and for operating and maintenance grew slightly relative to 2010–11 levels; resources for professional services (footnote 1) decreased for the third consecutive year. (The sketch following Table 1 reproduces these calculations.)
  • The median amount that departments devoted to their evaluation functions was $1.3 million, representing a decline from $1.6 million in 2010–11.
Table 1. Financial Resources Expended in the Evaluation Functions of Large Departments and Agencies in the Government of Canada From 2007–08 to 2011–12

All figures are in $ millions, except the last row (annual change in per cent).

Resource Category                   2007–08*   2008–09   2009–10   2010–11   2011–12
Salary                                  28.4      32.3      37.1      38.2      39.0
Professional Services                   17.9      20.5      19.1      17.6      14.3
Operating and Maintenance (O&M)          4.2       4.4       5.0       4.3       4.6
Other†                                  6.7‡       3.7       5.8       0.3       2.2
Total Resources                         57.3      60.9      66.9      60.2      60.2
% Annual Change                          N/A       6.3       9.8     -10.0       0.0

Source: Capacity Assessment Survey. Totals may not add due to rounding. Includes organizations defined as large departments and agencies under the Policy on Evaluation, as determined each fiscal year; the list of large departments and agencies may vary slightly from one year to the next.

* 2007–08 figures for salary, professional services and O&M represent only ongoing resources. Figures for all other years represent combined ongoing and time-limited resources.
† For the years from 2008–09 to 2011–12, “Other” refers to other evaluation resources not managed by the head of evaluation.
‡ The 2007–08 figure for “Other” includes other evaluation resources not managed by the head of evaluation as well as time-limited resources for salary, professional services and O&M.
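
For readers who want to trace the percentages cited above, the following minimal sketch (illustrative Python, not part of the report) reproduces the salary share and year-over-year change from the published Table 1 figures. Small differences from the published percentages reflect rounding in the source data.

```python
# Illustrative sketch: reproduce Table 1's salary share and annual
# change from the published figures ($ millions). Not from the report;
# published percentages may differ slightly because of rounding.

salary = {"2007-08": 28.4, "2008-09": 32.3, "2009-10": 37.1,
          "2010-11": 38.2, "2011-12": 39.0}
totals = {"2007-08": 57.3, "2008-09": 60.9, "2009-10": 66.9,
          "2010-11": 60.2, "2011-12": 60.2}

previous = None
for year, total in totals.items():
    share = salary[year] / total * 100   # e.g. 39.0 / 60.2 -> ~65% in 2011-12
    change = (total / previous - 1) * 100 if previous else None
    label = f"{change:+.1f}%" if change is not None else "N/A"
    print(f"{year}: total ${total:.1f}M, salary share {share:.0f}%, change {label}")
    previous = total
```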

2.2 Human resources

Human resources for the evaluation function have increased since 2007–08.

As shown in Table 2, human resources for evaluation have fluctuated over the years but increased overall for large departments and agencies. In 2011–12:

  • An increase of 8.9 per cent over 2010–11 levels brought the number of full-time equivalents (FTEs) to 500.
  • The median number of FTEs was 13.3, which was almost unchanged from 13.4 in 2010–11.
Table 2. FTEs Working in the Evaluation Functions of Large Departments and Agencies in the Government of Canada From 2007–08 to 2011–12

                                 2007–08   2008–09   2009–10   2010–11   2011–12
Full-Time Equivalents (FTEs)         409       418       474       459       500
% Annual Change                      N/A       2.2      13.4      -3.2       8.9

Source: Capacity Assessment Survey. FTEs shown represent combined ongoing and time-limited resources, reported by departments after the end of each fiscal year. Includes organizations defined as large departments and agencies under the Policy on Evaluation, as determined each fiscal year; the list of large departments and agencies may vary each year.

2.3 Contracted resources

There was a decline in the use of contracted resources in 2011–12.

In large departments and agencies in 2011–12:

  • Ninety-four of 146 evaluations (64 per cent) involved contractors for at least part of the work, compared with 73 per cent of evaluations in 2010–11.
  • The total contracted cost of evaluation work was $12.4 million, just under 21 per cent of all evaluation resources expended over the fiscal year, down from $13.8 million (23 per cent) in 2010–11. (See the sketch after this list.)
  • According to large departments and agencies, the most common reasons for contracting were to address pre-identified capacity constraints or workload issues (reported by 68 per cent of organizations), to meet the need for specialized technical expertise that was not available within the department (50 per cent), and to access cost-effective data collection services (41 per cent). See Appendix A for more information on the reasons for contracting evaluation work.
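
As a quick arithmetic check (illustrative Python, not part of the report), the contracting shares above follow directly from the reported figures:

```python
# Illustrative check of the contracting figures reported above:
# share of evaluations involving contractors, and contracted dollars
# as a share of total evaluation resources (Table 1).

evaluations_with_contractors = 94
total_evaluations = 146
contracted_cost = 12.4   # $ millions, 2011-12
total_resources = 60.2   # $ millions, 2011-12, from Table 1

print(f"{evaluations_with_contractors / total_evaluations:.0%} of evaluations involved contractors")  # 64%
print(f"{contracted_cost / total_resources:.1%} of resources went to contracted work")                # 20.6%
```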

3. Coverage

3.1 Coverage of ongoing grants and contributions

Evaluation coverage of grants and contributions programs increased in 2011–12; however, not all departments fully met the requirement set out in the Financial Administration Act.

This 2012 Annual Report on the Health of the Evaluation Function is the first report that covers a five-year period since the advent of the legal requirement to evaluate all ongoing grants and contributions programs every five years. Across large departments and agencies:

  • The proportion of ongoing grants and contributions spending that was evaluated in 2011–12 was 18.5 per cent, compared with 7.4 per cent in 2010–11 (see Table 3).
  • Over the five-year period from 2007–08 to 2011–12, the average coverage of grants and contributions spending was approximately 16.5 per cent annually.
  • The cumulative government-wide evaluation coverage over the same five-year period was estimated by the Secretariat (footnote 2) at 82.4 per cent of spending on grants and contributions. (The sketch after Table 3 reproduces this arithmetic.)

Key coverage requirement for evaluation of ongoing grants and contributions: section 42.1 of the Financial Administration Act requires that all ongoing programs of grants and contributions be evaluated every five years.

Table 3. Evaluation of Grants and Contributions Programs (Gs&Cs) From 2007–08 to 2011–12

Fiscal Year    Gs&Cs Program Spending      Total Gs&Cs Program Spending    Annual Gs&Cs
               Covered by Evaluations      From Main Estimates             Coverage
               ($ millions)                ($ millions)                    (%)
2007–08                 1,579                       25,469                   6.2
2008–09                 4,662                       27,311                  17.1
2009–10*               10,167                       30,605                  33.2
2010–11*                2,903                       39,145                   7.4
2011–12*                6,190                       33,505                  18.5

Source: TBS monitoring of departmental evaluation plans and reports
* Includes only evaluations that reflect coverage requirements of section 6.1.8 of the 2009 Policy on Evaluation.
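
As referenced above, the following minimal sketch (illustrative Python, not part of the report) reproduces the coverage arithmetic in Table 3. It estimates cumulative coverage by summing the annual rates, which assumes that programs were not re-evaluated within the five-year window; the Secretariat's own estimation method (footnote 2) may differ.

```python
# Illustrative sketch of Table 3's coverage arithmetic. Annual coverage
# is evaluated Gs&Cs spending divided by total Gs&Cs spending from the
# Main Estimates. Cumulative coverage is estimated here as the sum of
# the annual rates (assumes no re-evaluation within the window).

covered = {"2007-08": 1_579, "2008-09": 4_662, "2009-10": 10_167,
           "2010-11": 2_903, "2011-12": 6_190}            # $ millions
total = {"2007-08": 25_469, "2008-09": 27_311, "2009-10": 30_605,
         "2010-11": 39_145, "2011-12": 33_505}            # $ millions

rates = {year: covered[year] / total[year] * 100 for year in covered}
for year, rate in rates.items():
    print(f"{year}: {rate:.1f}%")        # 6.2, 17.1, 33.2, 7.4, 18.5

print(f"Average annual coverage: {sum(rates.values()) / len(rates):.1f}%")   # 16.5%
print(f"Estimated cumulative coverage: {sum(rates.values()):.1f}%")          # 82.4%
```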

To further examine compliance with the Financial Administration Act coverage requirement, the Secretariat engaged large departments and agencies in a confirmation process after the end of the 2011–12 fiscal year (footnote 3). Results of that confirmation process showed that of the 27 departments that administered ongoing grants and contributions programs during the period from 2007–08 to 2011–12:

  • 16 had fully evaluated these programs;
  • 8 had partially evaluated these programs, with unevaluated programs representing less than 5 per cent of their total ongoing grants and contributions spending; and
  • 3 had partially evaluated these programs, with unevaluated programs representing more than 5 per cent of their total ongoing grants and contributions spending.

A new monitoring approach was piloted by the Secretariat, with the participation of departments, to confirm evaluation coverage of ongoing grants and contributions.

  • In 2012–13, the Secretariat conducted a review of its monitoring approaches for tracking the evaluation of ongoing grants and contributions programs and other direct program spending (footnote 4). As a result of the review, the Secretariat began to pilot an adjusted approach for tracking.
  • The adjusted approach entailed including in each departmental evaluation plan a complete inventory of grants and contributions programs administered by the department and a report on the evaluations completed during the previous five years that align to each program in the inventory.

3.2 Overall coverage of direct program spending

The annual rate of coverage of all direct program spending increased significantly in large departments and agencies in 2011–12.

An examination of evaluation coverage of all direct program spending (including both ongoing grants and contributions programs and all other direct program spending) among large departments and agencies showed the following:

  • More than twice the dollar value of spending was evaluated in 2011–12 compared with 2010–11 (see Table 4). Annual coverage of all types of direct program spending rose sharply, from 6.7 per cent in 2010–11 to 16.8 per cent in 2011–12.
  • Estimated cumulative coverage over the three years since the introduction of the Policy on Evaluation in 2009 was 37.7 per cent, and average annual coverage was 12.6 per cent. (The sketch after Table 4 reproduces this arithmetic.)
  • Looking ahead, 62 per cent of large departments and agencies that submitted departmental evaluation plans to the Secretariat projected full coverage of all types of ongoing direct program spending over the five-year period from 2012–13 to 2016–17. This proportion was unchanged from the previous year.
  • More than two thirds of those not projecting full coverage of direct program spending over the 2012–13 to 2016–17 period were planning to evaluate 90 per cent or more of their spending.
  • Although many were already planning full evaluation coverage, the first five-year period over which they will be required to achieve full coverage of this spending is from 2013–14 to 2017–18.

Key coverage requirements of the 2009 Policy on Evaluation:

  • All ongoing programs of grants and contributions are evaluated every five years, as required by section 42.1 of the Financial Administration Act (implementation began in the 2007–08 fiscal year).
  • The administrative aspect of major statutory spending is evaluated every five years (implementation began in 2009–10).
  • All direct program spending (excluding grants and contributions) is evaluated every five years (implementation begins in 2013–14).
Table 4. Evaluations of Direct Program Spending in Large Departments and Agencies From 2007–08 to 2011–12

Fiscal Year    Total Number      Direct Program Spending    Total Direct Program       Annual Evaluation
               of Evaluations    Covered by Evaluations     Spending* From Main        Coverage (%)
                                 ($ millions)               Estimates ($ millions)
2007–08             121                  5,041                     77,617                   6.5
2008–09             134                  5,879                     79,327                   7.4
2009–10†            164                 11,999                     84,665                  14.2
2010–11†            136                  6,607                     99,325                   6.7
2011–12†            146                 15,202                     90,710                  16.8

Source: TBS monitoring of departmental evaluation plans and reports
* Total direct program spending includes estimated spending on ongoing Gs&Cs programs; Gs&Cs program spending is one specific type of direct program spending.
† Includes only evaluations that reflect coverage requirements of section 6.1.8 of the 2009 Policy on Evaluation.
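
The same arithmetic applies to Table 4, as referenced above (again an illustrative Python sketch, not part of the report). Note that summing the unrounded annual rates gives approximately 37.6 per cent; the report's 37.7 per cent corresponds to summing the rounded rates (14.2 + 6.7 + 16.8).

```python
# Illustrative sketch of Table 4's coverage arithmetic for all direct
# program spending over the three years since the 2009 policy.
# Cumulative coverage is again the sum of annual rates (assumes no
# overlap between years).

covered = {"2009-10": 11_999, "2010-11": 6_607, "2011-12": 15_202}    # $ millions
total = {"2009-10": 84_665, "2010-11": 99_325, "2011-12": 90_710}     # $ millions

rates = {year: covered[year] / total[year] * 100 for year in covered}
for year, rate in rates.items():
    print(f"{year}: {rate:.1f}%")        # 14.2, 6.7, 16.8

cumulative = sum(rates.values())
print(f"Cumulative coverage since 2009-10: {cumulative:.1f}%")        # 37.6%
print(f"Average annual coverage: {cumulative / len(rates):.1f}%")     # 12.5%
```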

By 2010–11, many departments had made significant progress in evaluating their grants and contributions programs to meet the requirement of the Financial Administration Act. As shown in Figure 1, after this time there was a government-wide shift in focus from evaluating grants and contributions spending to evaluating non-grants and contributions spending.

Figure 1: Value of Direct Program Spending (DPS) Evaluated from 2007–08 to 2011–12, for Both Gs&Cs and Non-Gs&Cs

This bar chart shows, over the five years from 2007–08 to 2011–12, the relative dollar values of direct program spending evaluated in two categories, specifically, grants and contributions spending and other direct program spending (i.e., non-grants and contributions spending). The chart shows that in 2007–08, the amount of non-grants and contributions spending evaluated was more than twice the amount of grants and contributions spending evaluated. In 2008–09 and 2009–10, this pattern was reversed. The chart shows that in 2008–09, the amount of grants and contributions spending evaluated was almost four times the amount of non-grants and contributions spending evaluated. In 2009–10, this trend continued such that the amount of grants and contributions spending evaluated was more than five and a half times the amount of non-grants and contributions spending evaluated. In 2010–11, the pattern reversed again, with the amount of non-grants and contributions spending evaluated exceeding the amount of grants and contributions spending evaluated by more than 25 per cent. In 2011–12, the amount of non-grants and contributions spending evaluated exceeded the amount of grants and contributions spending evaluated by more than 45 per cent.

Source: TBS monitoring of departmental evaluation plans and reports

The Secretariat's monitoring of evaluation coverage includes the review and assessment (footnote 5) of departments' evaluation plans and the tracking of completed evaluations that align to these plans (see Appendix B for information on assessment criteria). In 2011–12, 90 per cent of departments achieved coverage ratings of “acceptable” or “strong” (see Figure 2).

Figure 2: 2011–12 Overall Assessment Ratings for Coverage (31 Large Departments and Agencies)

This bar chart shows the percentage of the 31 large departments and agencies that received each of the four possible assessment ratings for evaluation coverage in 2011–12. 41.9 per cent of large departments and agencies were rated “strong,” 48.4 per cent were rated “acceptable,” 3.2 per cent were rated “opportunity for improvement,” and 6.5 per cent were rated “attention required.”

Source: TBS annual monitoring

Some large departments chose to allocate a portion of their 2011–12 evaluation effort to work that was not required by the policy's coverage requirements but was undertaken for their own purposes. See Appendix C for more information.

4. Governance and Support

Most large departments and agencies have established the structures, roles and responsibilities for effectively governing the evaluation function and for planning the function's activities, with greater involvement from deputy heads across the function.

The Policy on Evaluation requires departments to put governance structures in place to ensure a neutral evaluation function. In 2011–12, all large departments and agencies had a departmental evaluation committee. These committees demonstrated the following characteristics:

The Policy on Evaluation also requires that heads of evaluation have direct and unencumbered access to the deputy head of their organization.

Most heads of evaluation (76 per cent) fulfilled more than one role within their respective organizations:

Ninety-one per cent of departmental evaluation functions were co-located (footnote 6) with one or more other functions, including, for example, internal audit (67 per cent), performance measurement (18 per cent), strategic planning (18 per cent) or finance (3 per cent).

Information collected by the Secretariat for monitoring governance structures and the support provided to evaluation by program performance measurement is analyzed and assessed annually (see Appendix B for information on assessment criteria). In 2011–12, 94 per cent of departments achieved governance and support ratings of “strong” or “acceptable” (see Figure 3).

Figure 3: 2011–12 Overall Assessment Ratings for Governance and Support (31 Large Departments and Agencies)

This bar chart shows the percentage of the 31 large departments and agencies that received each of the four possible assessment ratings for governance and support in 2011–12. 51.6 per cent of large departments and agencies were rated “strong,” 41.9 per cent were rated “acceptable,” 6.5 per cent were rated “opportunity for improvement,” and none (0 per cent) were rated “attention required.”

Source: TBS annual monitoring

The availability and quality of program-collected performance measurement data provided inconsistent support to evaluations.

The collection of performance data by programs is necessary to provide evaluators with the basis for examining two of the core evaluation issues (program efficiency and economy, and achievement of expected program outcomes) established by the Directive on the Evaluation Function.

The Secretariat's monitoring of departments in 2011–12 showed that 55 per cent of large departments and agencies reported that the availability and quality of performance information were sufficient to support evaluation. Twenty-nine per cent reported that performance information was partially sufficient to support evaluation, and 13 per cent indicated that it was insufficient.

5. Quality

Across large departments, the quality of evaluation reports was high overall in 2011–12.

The Secretariat assessed the quality of departments' evaluations according to defined criteria that had also been applied in the previous two annual reports. These criteria pertained primarily to the methodological quality of evaluations, including the adequacy of the evaluation methods used, the issues examined, the extent to which findings, conclusions and recommendations were supported by evaluation evidence, and the transparent reporting of limitations encountered during the evaluation (see Appendix B for further information).

In 2011–12, 84 per cent of departments achieved quality ratings of “acceptable” or “strong” (see Figure 4).

Figure 4: 2011–12 Overall Assessment Ratings for Quality (31 Large Departments and Agencies)

This bar chart shows the percentage of the 31 large departments and agencies that received each of the four assessment ratings for evaluation quality in 2011–12. 48.4 per cent of large departments and agencies were rated “strong,” 35.5 per cent were rated “acceptable,” 6.5 per cent were rated “opportunity for improvement,” and 9.7 per cent were rated “attention required.”

Source: TBS annual monitoring

The method used to assess the quality of evaluation reports for this annual report did not consider their relevance and usefulness in decision making; such criteria did not form part of quality assessments in 2011–12. Along with strengthening its criteria for assessing the methodological quality of evaluations, the Secretariat will consider developing additional criteria related to the relevance and usefulness of evaluations for decision making.

6. Use

Evaluations were used extensively and for a variety of purposes, including expenditure management and public reporting.

The Secretariat assessed departments' use of evaluations according to defined criteria (see Appendix B for further information). In 2011–12, 90 per cent of departments were rated “acceptable” or “strong” for their use of evaluation results (see Figure 5).

Figure 5: 2011–12 Overall Assessment Ratings for Use (31 Large Departments and Agencies)

This bar chart shows the percentage of the 31 large departments and agencies that received each of the four possible assessment ratings for evaluation use in 2011–12. 45.2 per cent of large departments and agencies were rated “strong,” 45.2 per cent were rated “acceptable,” 6.5 per cent were rated “opportunity for improvement,” and 3.2 per cent were rated “attention required.”

Source: TBS annual monitoring

Departments were asked to report how, and to what extent, evaluation results were used in 2011–12. Large departments and agencies reported the following frequencies of use (see Figure 6):

In some cases, departmental heads of evaluation reported that they were not aware of all uses of evaluation findings, particularly in the preparation of Memoranda to Cabinet and in the Strategic and Operating Review.

Figure 6: 2011–12 Uses of Evaluation as Reported by Large Departments and Agencies (33 Large Departments and Agencies)

This stacked bar chart shows the extent of evaluation use in 2011–12 in five different applications. “Almost all” means that 80 per cent or more of relevant evaluations had their findings considered; “several” means 50 to 79 per cent.

  • Treasury Board submissions: 81 per cent of large departments and agencies reported that almost all relevant evaluations had their findings considered; another 3 per cent reported several.
  • Memoranda to Cabinet: 62 per cent reported almost all; another 3 per cent reported several.
  • Reports on Plans and Priorities: 72 per cent reported almost all; another 6 per cent reported several.
  • Departmental Performance Reports: 88 per cent reported almost all.
  • 2011–12 Strategic and Operating Review: 78 per cent reported almost all.

Source: TBS annual monitoring
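
The “almost all” and “several” bands used in Figure 6 are simple threshold categories. As a hypothetical illustration (the function and the label for the lowest band are assumptions, not taken from the report), the mapping can be sketched as:

```python
# Hypothetical helper mapping the share of relevant evaluations whose
# findings were considered (0-100) to the category labels used in
# Figure 6: "almost all" is 80 per cent or more, "several" is 50 to 79.
# The label for the lowest band is assumed; the report does not name it.

def usage_category(share_considered: float) -> str:
    if share_considered >= 80:
        return "almost all"
    if share_considered >= 50:
        return "several"
    return "fewer than half"  # assumed label for the unnamed band

print(usage_category(85.0))  # almost all
print(usage_category(60.0))  # several
```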

Evaluation recommendations were generally implemented as planned.

With respect to the implementation of recommendations from evaluations:

7. Support Provided by the Secretariat

The Secretariat supported departmental evaluation functions during the first three years of policy implementation through a variety of means.

Prior to the policy's launch in 2009, the Secretariat set out a plan for successful policy implementation, focusing on the first four years. The plan outlined success factors and annual activities in the areas of leadership, capacity building and professional development.

Following the launch of the policy, the Secretariat developed a range of guidance materials to support policy implementation. Key guidance materials released since 2009, and those in development at the time of this report, are listed in Appendix D. In addition to published guidance materials, the Secretariat's Centre of Excellence for Evaluation monitored and liaised regularly with departmental evaluation units, maintained an in-person and online community of practice for heads of evaluation, hosted capacity-building workshops, and engaged in frequent outreach activities, for example through symposia and conferences.

8. Summary and Next Steps

8.1 Summary

Annual reports on the health of the evaluation function have tracked changes in the government-wide evaluation function since the introduction of the 2009 Policy on Evaluation.

Departments have made solid progress during the policy's four-year transition period, building the capacity and infrastructure to support full implementation of policy requirements starting in April 2013.

Although financial resources for the evaluation function have remained relatively stable, the number of federal evaluators has increased since 2009.

In 2011–12, evaluation coverage of all types of direct program spending, and of grants and contributions programs in particular, increased significantly. Not all large departments and agencies fully met the requirement set out in the Financial Administration Act for evaluating all ongoing programs of grants and contributions over five years. However, the proportion of unevaluated spending in these cases was generally small in relation to total departmental spending on grants and contributions programs.

In general, departments established structures, roles and responsibilities for governing the evaluation function and for planning its activities. The availability and quality of performance measurement data to support evaluations was an area identified as requiring improvement.

Evaluations were being used extensively for a variety of purposes in departments, and recommendations of evaluations were generally being implemented as planned.

8.2 Next steps

The Secretariat's Centre of Excellence for Evaluation will continue to support the evaluation function in 2012–13 and 2013–14, including by conducting an evaluation of the Policy on Evaluation in order to:

  • Assess the performance (i.e., the effectiveness, efficiency and economy) of the policy and develop a baseline of results;
  • Identify opportunities to better support departments in meeting their evaluation needs; and
  • Address the recommendation in the Spring 2013 Report of the Auditor General of Canada, Chapter 1, “Status Report on Evaluating the Effectiveness of Programs,” to review, in consultation with departments, the requirements to evaluate all direct program spending over a five-year cycle and to address all five core issues.

In addition, the Secretariat will:

  • Review its criteria for assessing the methodological quality of evaluations and consider additional criteria for assessing quality as it relates to the relevance and usefulness of evaluations for decision making;
  • Issue additional guidance in 2013–14 to support deputy heads in tracking whether they are meeting their accountabilities under the Financial Administration Act (section 42.1) for evaluating all ongoing programs of grants and contributions every five years; and
  • Continue to consult with departments to support their needs for capacity-building initiatives.

Appendix A: Main Reasons for Contracting by Departmental Evaluation Functions During 2011–12

Reason for Using Contractors in 2011–12           Percentage of Large Departments and Agencies*
Pre-identified capacity constraints                                    68
Specialized technical expertise                                        50
Cost-effective data collection                                         41
Usual practice for the evaluation unit                                 21
Unanticipated capacity constraints                                     18
Quality assurance                                                      12
Other                                                                  15

* Among departments that used contractors in 2011–12. Percentages sum to more than 100 because departments could report more than one reason.

Appendix B: Criteria Used by the Secretariat for Assessing Evaluation Coverage, Governance and Support, Quality and Use

Coverage

For assessing evaluation coverage by departments in 2011–12, the Secretariat examined the following criteria:

  • Departmental Evaluation Plan;
  • Evaluation coverage of direct program spending;
  • Evaluation coverage of grants and contributions; and
  • Assessment of evaluation needs.

Governance and Support

For assessing governance and support of departmental evaluation functions in 2011–12, the Secretariat examined the following criteria:

  • Independence;
  • Control of resources;
  • Adequacy of resources;
  • Departmental Evaluation Committee;
  • Performance measurement effectively supports evaluation; and
  • Tracking development and implementation of performance measurement strategies.

Quality

For assessing the quality of evaluation reports submitted by departments in 2011–12, the Secretariat applied the following criteria:

  • Addressing value-for-money issues pertaining to program relevance, effectiveness, efficiency and economy;
  • Quality of evaluation methodology;
  • Reporting of limitations that affect the conduct of evaluations and the impact of limitations on evaluation findings;
  • Quality and substantiation of evaluation findings and conclusions;
  • Quality of recommendations; and
  • Quality of the management response and action plan.

Use

To assess how well departments used evaluation results during 2011–12, the Secretariat used the following assessment criteria:

  • Extent to which results of relevant evaluations were brought for consideration in Treasury Board submissions, Memoranda to Cabinet, and the organization's Report on Plans and Priorities and Departmental Performance Report;
  • Extent of relevant evaluation results available to be brought for consideration in the organization's expenditure reviews;
  • Use of evaluation results to inform other departmental decision making;
  • Systematic tracking of management action plans arising from evaluation and regular reporting on the status of the implementation of the evaluation recommendations;
  • Extent to which management responses and action plans were implemented as planned; and
  • Extent to which completed evaluation reports were submitted to the Secretariat as well as posted on the departmental website in a timely manner.

In monitoring the function, the Secretariat applied the criteria above to assess and rate departments in each of these four areas. The criteria and ratings were also used in the Management Accountability Framework assessment process in 2011–12.

Appendix C: 2011–12 Evaluation Products Not Required Under the Policy on Evaluation

Some large departments chose to allocate a portion of their 2011–12 evaluation effort to work that was not required by the policy's coverage requirements but was undertaken for their own purposes. Specifically:

Figure 7: Number and Total Cost of Evaluation Products Not Included in Evaluation Coverage Calculations From 2009–10 to 2011–12

This combined bar chart and line graph shows that the number of evaluation products submitted to the Secretariat but not included in its calculations of evaluation coverage rose from 24 products in 2009–10 to 32 products in 2010–11 and then decreased to 20 products in 2011–12. In addition, the chart shows that total evaluation costs associated with these products rose from $2.5 million in 2009–10 to $5.5 million in 2010–11 and then decreased to $2.4 million in 2011–12.

The following table summarizes the types of evaluation products large departments reported producing in 2011–12 that were not required by the Policy on Evaluation:

Evaluation Products Produced by Large Departments and Agencies in 2011–12 That Were Not Required by the Policy on Evaluation

Type of Evaluation Product                                      Number of Products
Lessons-learned studies                                                  5
Evaluations of time-limited programs                                     4
Evaluability assessments                                                 2
Evaluations required by parliamentary or Cabinet committees              2
Other                                                                    7

Appendix D: Support to Policy Implementation From the Secretariat's Centre of Excellence for Evaluation

Key guidance materials developed by the Secretariat's Centre of Excellence for Evaluation and released since 2009 are as follows:

This guidance was well received by departments, but the federal evaluation community asked for additional guidance. Guidance documents in development by the Secretariat at the time of this report are as follows:

In addition to published guidance materials, the Secretariat's Centre of Excellence for Evaluation hosted a series of capacity-building workshops (including webcasts) and engaged in frequent outreach activities (e.g., speaking at symposia, conferences and training workshops hosted by other organizations, and maintaining an online community of practice for heads of evaluation). Between 2009–10 and 2012–13, the Centre of Excellence for Evaluation made 78 presentations to various audiences, reaching an estimated 2,348 participants.
