The Health of the Evaluation Function in the Government of Canada Report for Fiscal Year 2004-05

Archived information

Archived information is provided for reference, research or recordkeeping purposes. It is not subject to the Government of Canada Web Standards and has not been altered or updated since it was archived. Please contact us to request a format other than those available.

Highlights

Introduction

  1. The Federal Evaluation Function
  2. Evaluation Infrastructure
  3. Human and Financial Resources
  4. Evaluation Coverage
  5. Evaluation Quality
  6. Management Accountability Framework Review
  7. Evaluation Community Needs Assessment
  8. Leadership Provided by the Centre of Excellence for Evaluation

Conclusions

Appendix A: Funding for Evaluation

Appendix B: Methodology and Data Sources


Highlights

This report examines the health of the evaluation function in the Government of Canada during the fiscal year 2004–05. It provides heads of evaluation and senior officials with the findings of the Centre of Excellence for Evaluation (CEE), Treasury Board of Canada Secretariat (TBS), arising from its monitoring of the quality of evaluation reports and of the supporting departmental and agency infrastructure. The purpose of CEE's monitoring efforts is to strengthen the evaluation function within the Government of Canada so that it supports evidence-based decision making.

1)    The Federal Evaluation Function

Evaluation units provide a wide range of products and services to support quality performance reporting and promote an evidence-based decision-making culture. Evaluations need to be better aligned with the information requirements of deputy ministers and ministers.

Of the 41 departments and large agencies reporting, 39 have dedicated evaluation resources. However, because of these resources' limited capacity, evaluations are increasingly program-led, with no opportunity for independent review.

  • Evaluation Products and Services
    Evaluation units undertake a number of activities beyond the production of traditional evaluation studies in support of quality performance reporting and a culture of evidence-based decision making. Almost all evaluation units produce evaluation plans and undertake evaluation studies. However, program managers are increasingly undertaking their own evaluations without input from departmental evaluation units. For example, only 25 per cent of heads of evaluation sign off on program-led evaluations or accountability documents. Thus, in many instances departments are not subjecting their results reporting to independent, objective review in order to ensure credible reporting.
  • Executive Direction
    The percentage of departments having an evaluation committee grew from 90 per cent in 2003–04 to 97 per cent in 2004–05. Of these committees, only 73 per cent are chaired by a deputy minister. Of small agencies, only 37 per cent (13 out of 35 reporting agencies) have an evaluation committee or discuss evaluation issues at their senior executive committee.
  • Executive Oversight
    Most evaluation committees (95 per cent) approve departmental evaluation plans as well as evaluation reports and associated recommendations. In most cases, however, these committees do not review program-led evaluation work.

2)    Evaluation Infrastructure

Departmental evaluation plans are increasingly risk-based and strategic; however, they are often submitted to TBS late in the fiscal year. Most small agencies allocate little or no funding to evaluation.
  • Departmental Evaluation Plans
    Most departments complete their evaluation plans late in the fiscal year, submitting them to TBS in October or November. In selecting programs to be evaluated, 72 per cent of departments considered risk.
  • Evaluation Resources
    For 2004–05, total evaluation funding was $54.8 million. Expenditures on evaluation average 0.16 per cent of departmental expenditures. Only 11 out of 47 small agencies allocated funding to evaluation, for a total of $1 million — with 60 per cent of that amount spent by just 2 small agencies.

3)    Human and Financial Resources

Over-reliance on contracting out and program-led funding — both common practices — can compromise evaluation quality.

There are currently 293 evaluation FTEs, with the 41 large departments and agencies employing all but 9. Regular turnover will necessitate the hiring of 54 FTEs in the near future.

The use of contracted resources is five times that of internal resources, affecting the quality and cost of reporting. The way an evaluation is funded also affects quality: program-funded evaluations are perceived as less independent and objective.

4)    Evaluation Coverage

There is a need for a broader range of evaluation instruments to meet the information needs of senior management and ministers in a timely, targeted fashion. Currently, evaluation units produce as many "reviews" as evaluations, yet there is no TBS guidance to assure the quality of these reviews.

Reviews constitute a large part of the work undertaken by evaluation units. At present, this work falls outside the scope of the current TB Evaluation Policy. Accordingly, there are no standards or protocols to ensure the quality of these reports, which are nevertheless accessible to the public.

5)    Evaluation Quality

Evaluation reports have improved in quality since April 2002 but further improvement is needed.

A detailed assessment of 115 evaluation reports found that most federal evaluation reports (77 per cent) are acceptable in quality, though almost one quarter (23 per cent) were rated as inadequate. A comparison of reports completed before and after April 2002 indicates that since the introduction of the CEE, the quality has improved on a number of criteria, such as cost-effectiveness and methodological rigour. Nevertheless, there is a pressing need for further improvement.

6)    Management Accountability Framework Review

The Management Accountability Framework review found major or minor concerns with the evaluation function in over two thirds of the 38 departments assessed.

As part of TBS's Management Accountability Framework (MAF) review, CEE assessed the state of 38 departments' evaluation functions. It had major concerns in 15 cases (39 per cent), minor concerns in 12 (32 per cent) and no concerns in 11 (29 per cent).

7)    Evaluation Community Needs Assessment

Evaluation activity and capacity need to be strengthened to meet departmental and agency needs for objective, independent information on program effectiveness and value for money.

Only 80 per cent of departmental evaluation units are able to address the demand for evaluations, and only 53 per cent are able to address demands to produce or assist in the production of results-based management accountability frameworks (RMAFs). Heads of evaluation report a shortfall of $4.5 million and an immediate requirement for 66 additional FTEs.

8)    Leadership Provided by the Centre of Excellence for Evaluation

In 2004–05, CEE was re-organized; it engaged new staff and adopted a new work plan. Efforts have focused on monitoring, community capacity building, and providing advice to TB ministers on expenditure management and accountability. 

To better fulfill its mandate, CEE was reorganized; a new senior director was engaged and eight positions were staffed. Under a new work plan, the priorities are monitoring, capacity building, and advice on evaluations and RMAFs. Internal procedures and service standards have been developed. In recognition of its expertise, the Centre has been much in demand to give presentations across Canada and abroad.


Introduction

This report - the first of its kind - examines the health of the evaluation function in the Government of Canada during the fiscal year 2004–05.

Evaluation is the systematic collection of information about program performance in order to determine relevance, success and cost-effectiveness. Evaluation focuses on program results and value for money. It assesses the extent to which programs achieve their objectives and provide results for Canadians cost-effectively. The information thus obtained is used by decision makers to improve programming, shape new policies and programs, or reallocate resources.

Evaluation has had an official role in the Government of Canada as far back as the late 1970s. More recently, in 2001 the Treasury Board (TB) created the Centre of Excellence for Evaluation (CEE) within the Treasury Board of Canada Secretariat (TBS). The Centre's mandate is to strengthen the federal government's evaluation capacity. It ensures compliance with Treasury Board's Policy on Evaluation and seeks to promote high-quality results reporting across the government.

Since 2001, the following further steps have been taken:

  • On December 12, 2003, the Prime Minister announced the Action Plan for Democratic Reform. The Plan emphasizes the importance of results-based management and commits deputy ministers to using the TBS Management Accountability Framework (MAF) to publicly report on the stewardship of resources. Key MAF expectations include a strong evaluation function within departments, risk-based audit and evaluation plans, and the gathering and use of relevant information on results.
  • In the 2004 budget, the Public Service Management Action Plan committed the government to "re-establish the Office of the Comptroller General of Canada to rigorously oversee all government spending." An important part of this initiative is monitoring value for money through evaluation. Further, the Action Plan commits the government to "strengthen evaluation capacity and level of activity . . . and promote core learning curriculum and certification standards." The aim is to ensure credible reporting of results to Parliament and Canadians.
  • The government has also launched an ongoing expenditure review exercise, to which evaluation is an important contributor. It provides necessary information for evidence-based decisions so that resources can be directed from lower- to higher-priority programs.
  • The adoption by Treasury Board of the Management Resources and Results Structure (MRRS), beginning in 2005–06, requires that departments develop a program activity architecture (PAA). The PAA reflects how a department allocates and manages the resources under its control to achieve intended results and how programs are linked to the department's strategic outcomes. The MRRS also requires departments to provide information on results expectations and performance measures for elements and levels of the PAA. This will aid in improving the quality of evaluation reporting by increasing the amount and credibility of program performance data. Evaluations in turn represent an important source of information for a department's PAA.
  • The Government of Canada's Evaluation Policy (April 2001) also encourages the development of a results-based management accountability framework (RMAF). TBS has just issued revised guidance on RMAFs (January 2005), tying their level of complexity to the risks associated with a program. They provide a framework at the program level to help monitor performance, manage risk and demonstrate results. They are inextricably linked to the department's MRRS. Results of monitoring and evaluation activities will feed into the MRRS reporting process. This makes the development and implementation of an RMAF an essential task for all program managers regardless of the Policy on Transfer Payments requirements.

In 2004–05, the Centre of Excellence for Evaluation monitored the health of the evaluation function in the following ways:

  • It instituted an annual electronic survey of departments and agencies to assess their evaluation infrastructure, as well as the quantity and scope of evaluation services and products.
  • It undertook an in-depth examination of the quality of evaluation reports (see Review of the Quality of Evaluations Across Departments and Agencies, http://www.tbs-sct.gc.ca/cee/pubs/review-examem2004-eng.asp).
  • It developed the Evaluation Review Information Component (ERIC) - a database that captures evaluation findings in a timely fashion so that they can be used to inform decision making by central agencies.
  • To help departments learn from each other, CEE conducted a study of effective practices in the administration of evaluation functions (see Report on Effective Evaluation Practices, http://www.tbs-sct.gc.ca/cee/tools-outils/eep-pee-eng.asp).

In support of Treasury Board decision making, CEE also reviews all evaluations submitted in support of funding requests, as well as program accountability documents (results-based management accountability frameworks). In addition, CEE monitors departments and agencies through face-to-face meetings with heads of evaluation. The results of these monitoring activities are reported through TBS's Management Accountability Framework (MAF).

Finally, this year CEE has instituted a new requirement that all departments and agencies officially report their evaluation plans to Parliament through their reports on plans and priorities (RPPs), and that they report on completed evaluations through their departmental performance reports (DPRs). For parliamentarians and Canadians, this will ensure increased transparency of the effectiveness of government programs.

1)  The Federal Evaluation Function

This section examines the types of evaluation products and services provided by departmental evaluation units, as well as executive direction and oversight of the evaluation function within the federal government.

During 2004–05, of the 41 departments and large agencies reporting, 39 identified dedicated evaluation resources. This represents an increase of four organizations from 2003–04, when only 35 reported dedicated evaluation resources.

Evaluation Products and Services

Table 1 presents the types of evaluation services delivered by departments and large agencies. Almost all evaluation units are responsible for developing evaluation plans and undertaking evaluation studies. Some 38 per cent of evaluation units undertake reviews; these usually address a specific management question and do not follow formal evaluation protocols or standards. The majority of evaluation units support program development of RMAFs, but only 28 per cent of heads of evaluation sign off on these program-led documents. Further, only 25 per cent of evaluation heads sign off on program-led evaluations; there is thus little provision for monitoring or challenging the quality of these reports. Only a small number of evaluation units advise on program design or on departmental and program performance measurement frameworks, or offer training on performance measurement. This suggests that departments underuse the skills of their evaluation units. Surprisingly, only 10 per cent of evaluation units are involved in implementing and monitoring departmental Management Accountability Frameworks (MAFs). This raises issues regarding the quality of reporting of these documents to Parliament and the public.

Table 1
Types of Evaluation Products and Services Delivered by Departmental Evaluation Units

| Services Delivered | Responsible for Service | Support Delivery of Service | Not Involved | No Response |
| --- | --- | --- | --- | --- |
| Evaluation planning reports/assessments | 85% | 10% | 3% | 3% |
| Evaluation studies/reports | 70% | 20% | 3% | 8% |
| Reviews | 38% | 23% | 28% | 13% |
| Special studies | 23% | 40% | 25% | 15% |
| Results-based management accountability frameworks (RMAFs) | 23% | 40% | 25% | 15% |
| Advice to program managers - evaluation products | 45% | 38% | 8% | 10% |
| Training for program managers on evaluation products | 30% | 23% | 38% | 10% |
| Advice on program design | 30% | 45% | 15% | 10% |
| Departmental performance measurement frameworks | 30% | 45% | 15% | 10% |
| Individual program-related performance measurement frameworks | 13% | 63% | 13% | 13% |
| Training for departmental staff on results-based management | 23% | 28% | 40% | 10% |
| Management Accountability Framework (implementation and monitoring) | 10% | 58% | 23% | 10% |
| Review and/or sign-off of evaluations produced by program managers | 25% | 15% | 48% | 13% |
| Review and/or sign-off of RMAFs produced by program managers | 28% | 35% | 25% | 13% |

N = 40 departments and large agencies

Executive Direction

Chart 1: Departments With Joint Audit and Evaluation (A&E) Committees

In 2002–03, 90 per cent of departments and larger agencies reported having an evaluation committee in place. In 2004–05, this figure grew to 97 per cent (40 out of 41). However, of small agencies, only 37 per cent (13 out of 35 reporting agencies) have an evaluation committee or discuss evaluation issues at their senior executive committee.

In most cases (73 per cent), departmental evaluation committees are chaired by a deputy minister. Representatives of the Office of the Auditor General participate as full-time observers on 22 per cent of committees, and sometimes participate on an additional 17 per cent. Treasury Board Secretariat officials participate as full-time observers on 20 per cent of the committees, and sometimes attend an additional 7 per cent. Some 7 per cent of departments report that officials other than TBS and Office of the Comptroller General representatives attend departmental evaluation committees.

Most departments (85 per cent) have joint audit and evaluation committees (see Chart 1). Only 12 per cent have separate committees to deal with the two functions. The joint committee structure allows for coordination between audit and evaluation activities, and ensures that both audits and evaluations are subject to appropriate challenge.

A factor in the timely reporting of evaluation findings is the frequency of evaluation committee meetings. Some 55 per cent of departments have either monthly or quarterly meetings, 12 per cent hold meetings twice a year, and 32 per cent hold ad hoc meetings.

Executive Oversight

Table 2 presents the types of activities undertaken by evaluation committees. Most (95 per cent) approve the Departmental Evaluation Plan. The vast majority of committees approve evaluation reports and associated recommendations; however, only 76 per cent approve management action plans associated with evaluation recommendations, and even fewer (66 per cent) monitor the follow-up to these action plans.

Table 2
Departmental Evaluation Committee Oversight of Evaluation Reports Produced by Evaluation Units and Related Outputs

| Activity | All | Some | None | No Response |
| --- | --- | --- | --- | --- |
| Review or approve evaluations | 85% | 0% | 5% | 10% |
| Review or approve recommendations | 83% | 2% | 2% | 12% |
| Review or approve management responses / action plans | 76% | 7% | 7% | 10% |
| Monitor follow-up to management responses / action plans | 66% | 12% | 10% | 12% |

An area of concern is the amount of program-led evaluation work that falls outside the purview of evaluation committees. Table 3 describes the level of this activity. Only 15 per cent of evaluation committees review evaluation reports produced by programs, only 20 per cent review the management action plans associated with program-led evaluations, and only 15 per cent monitor the follow-up to these action plans.

Table 3
Departmental Evaluation Committee Oversight of Program-Led Evaluation Reports and Related Outputs

| Activity | All | Some | None | No Response |
| --- | --- | --- | --- | --- |
| Review or approve evaluations conducted by program areas | 15% | 10% | 46% | 29% |
| Review or approve recommendations in program area reports | 17% | 10% | 41% | 32% |
| Review or approve management responses / action plans | 20% | 10% | 37% | 34% |
| Monitor follow-up to management responses / action plans | 15% | 15% | 41% | 29% |

This low level of coverage causes concern for a number of reasons. First, departments are required by TB policy to submit all evaluation reports to TBS. Since these reports are not being submitted to evaluation committees, the ability to track and report to Parliament on this work is reduced. Second, when these reports are not reviewed, there is no demonstrable opportunity to challenge their quality, even though they will be accessible to the public. In addition, the utility of evaluation reports is reduced if the information is not being provided to those who need it the most: the deputy minister and the executive team. Finally, the cost-effectiveness of this work cannot be demonstrated since there is limited monitoring to determine whether evaluation findings and associated recommendations are addressed.

2)    Evaluation Infrastructure

Evaluation infrastructure is monitored in terms of departmental evaluation plans, resource allocation and human resource capacity.

Departmental Evaluation Plans

Departments and small agencies are required to develop evaluation plans and submit these to TBS for review. Table 4 presents the number of departments and small agencies that developed evaluation plans and submitted them to TBS as of September 2004. Most departments complete their evaluation plans late in the fiscal year; submission to TBS typically occurs in October or November.

Table 4
Number of Departmental and Small Agency Evaluation Plans Developed and Submitted to TBS

| Planning Activities | 2003–04 (March 2004) | 2004–05 (September 2004) |
| --- | --- | --- |
| Departmental evaluation plans developed (N = 41) | 34 (83%) | 25 (61%) |
| Plans submitted to TBS (N = 41) | 28 (68%) | 18 (44%) |
| Small agency evaluation plans developed (N = 30) | 3 (6%) | 5 (14%) |
| Plans submitted to TBS (N = 30) | 2 (6%) | 2 (6%) |

Chart 2: Percentage of Departments with Risk-Based Evaluation Plans

Departments are encouraged to focus evaluation efforts on areas of highest risk (see Chart 2). For 2004–05, 72 per cent of departments that prepared an evaluation plan used risk as a consideration in the selection of programs to be evaluated. The low level of risk-based decision making can be attributed partly to the TBS requirement that all grant and contribution programs be evaluated; this limits the flexibility of departments to target evaluation efforts on the basis of risk. However, it should be noted that grant and contribution programs (i.e. programs delivered through third parties) have been identified by TBS as higher-risk programs.

Evaluation Resources

Table 5 presents the total expenditures allocated to evaluation by 41 departments and large agencies. For 2004–05, total evaluation funding is $54.8 million. This is projected to increase to $55.4 million in 2005–06 and then decline to $54.5 million in 2006–07.

Table 5
Current and Projected Spending on Evaluation Within the Government of Canada

| | 2004–05 | 2005–06 | 2006–07 |
| --- | --- | --- | --- |
| Salary | $21,010,766 | $23,507,608 | $23,914,547 |
| Professional services | $16,805,738 | $18,528,143 | $17,376,279 |
| Operations & maintenance | $5,481,157 | $5,700,508 | $5,585,469 |
| Total A-base | $43,297,661 | $47,736,259 | $46,876,295 |
| Other departmental funding | $8,553,569 | $7,634,429 | $7,574,929 |
| Total A-base and other departmental | $51,851,230 | $55,370,688 | $54,451,224 |
| TBS funding | $2,935,862* | | |
| Total all funds | $54,787,092 | $55,370,688 | $54,451,224 |

* Incomplete reporting of TBS funding. Total TBS one-time funding transferred to departments for 2004–05 was $3,945,526.

Table 6 presents evaluation funding as a proportion of total departmental expenditures by portfolio. Departments involved in social programming allocate 0.07 per cent to 1.05 per cent of total expenditures to evaluation, with the average being 0.29 per cent. Departments involved in economic programming allocate an average of 0.1 per cent of total expenditures to evaluation. Departments within the government operations portfolio allocate a similar amount. Appendix A presents funding for evaluation by department.

Table 6
Evaluation Funding Relative to Total Departmental Expenditures (expenditures on evaluation as a percentage of departmental expenditures)

| Type of Department | Average | Lowest | Highest |
| --- | --- | --- | --- |
| Social | 0.29% | 0.07% | 1.05% |
| Economic | 0.1% | 0.03% | 0.32% |
| Govt. operations | 0.1% | 0% | 0.83% |

Chart 3: Percentage of Small Agencies with Evaluation Expenditures

Small agencies' total spending on evaluation in 2004–05 was $1 million (see Chart 3). Relative to their total funding (approximately $3 billion, with 3,000 FTEs), this amount is extremely low. Further, 2 organizations represent 60 per cent of total evaluation spending. Only 11 of the 47 small agencies reporting allocated funding to evaluation.

3)  Human and Financial Resources

There are currently 293 evaluation FTEs. The 41 large departments and agencies employ 284 FTEs; the 37 reporting small agencies employ 9 FTEs. Chart 4 presents the distribution of current and future evaluation FTEs. Mid-level evaluators (ES 04s and ES 05s) make up the largest group of employees, followed by senior evaluators (e.g. ES 06s).

Chart 4: Total Current and Projected Distribution of Evaluation FTEs by Year

Chart 5 presents the distribution of FTEs by category. Economics and Statistics (ES) constitutes the largest category used (44 per cent), followed by Administrative Support (AS) (24 per cent). Other categories used include AV, CO, CP/UNI, FI, IS, RCO and SI. The Executive category represents 12 per cent.

Chart 6 presents the current and projected distribution of evaluation FTEs by category. All categories are projected to grow over the next three years, with the ES group projected to have the largest increase.

Chart 5: Distribution of Evaluation FTEs by Category, 2003–04

Chart 6: Distribution of Evaluation FTEs by Category, 2003–04 to 2006–07

A large turnover in personnel is projected over the next three years. As Table 7 shows, the projected departure rate is 11 per cent in 2004–05, 7 per cent in 2005–06 and 4 per cent in 2006–07. Projected departures for 2004–05 total 31.2 FTEs, the greatest number of them middle and senior evaluators (12.5 and 9.6 FTEs respectively).

Table 7
Projected Departures

| | 2004–05 | 2005–06 | 2006–07 |
| --- | --- | --- | --- |
| Executive | 2.6 | 5.1 | 2.0 |
| Senior | 9.6 | 3.5 | 4.0 |
| Middle | 12.5 | 7.5 | 2.5 |
| Junior | 6.0 | 4.0 | 2.0 |
| Support staff | 0.5 | 0 | 2.0 |
| Total replacements needed | 31.2 | 20.1 | 12.5 |
| Total FTEs | 272.0 | 281.0 | 283.0 |
| Replacements needed as a percentage of total FTEs | 11% | 7% | 4% |

However, as Table 8 shows, the real replacement requirement - that is, the actual projected number of positions to be filled - is 54 FTEs, representing 20 per cent of all evaluation FTEs. The actual need for middle- and senior-level positions is 36 FTEs. Filling these positions will be a considerable challenge, given the specific skills required.

Table 8
Total Replacements Required

| | 2004–05 | 2005–06 | 2006–07 |
| --- | --- | --- | --- |
| Executive | 3.9 | 5.1 | 2.0 |
| Senior | 11.5 | 9.0 | 4.0 |
| Middle | 24.5 | 10.5 | 4.5 |
| Junior | 11.0 | 4.0 | 2.0 |
| Support staff | 3.5 | 0.5 | 0 |
| Total replacements needed | 54.4 | 29.1 | 12.5 |
| Total FTEs | 272.0 | 281.0 | 283.0 |
| Replacements needed as a percentage of total FTEs | 20% | 10% | 4% |

Use of In-House Versus Contracted Resources

Chart 7 presents the current and projected use of in-house evaluation resources versus contracted resources. As the chart reveals, the use of contracted resources is five times that of internal resources.

Chart 7: Internal Versus Contracted Evaluation Resources

Program Versus Evaluation Unit Funding of Evaluations

Chart 8: Primary Sources of Funding

Chart 8 presents the primary sources of funding for evaluation projects. Some 37 per cent of departments fund evaluations primarily through resources allocated to the evaluation unit, whereas 25 per cent rely primarily on programs to fund evaluation projects. Only 13 per cent use cost sharing between the evaluation unit and programs, and 25 per cent undertake evaluations primarily through in-house evaluation unit resources. When an evaluation project is funded primarily by program managers, the perceived level of independence and objectivity is reduced.

4)    Evaluation Coverage

Table 9 presents the number of evaluations, RMAFs and reviews produced by departmental evaluation units from 2001–02 to 2004–05.

Reviews constitute a large part of the work undertaken by evaluation units. At present, this work falls outside the scope of the current TB Evaluation Policy. Accordingly, there are no standards or protocols to ensure the quality of these reports, which are nevertheless accessible to the public.

Table 9
Annual Production of Evaluations, RMAFs and Reviews

| | 2001–02 | 2002–03 | 2003–04 | 2004–05 (initial estimates only) |
| --- | --- | --- | --- | --- |
| Evaluations | 138 | 116 | 139 | 61 |
| Reviews | 98 | 153 | 121 | 138 |
| RMAFs | 192 | 181 | 122 | 125 |

5)    Evaluation Quality

This section presents the findings of CEE's monitoring of the quality of evaluations. Evaluation report quality is assessed in terms of coverage of issues, validity and reliability of methods, as well as the independence and objectivity of reporting.

Overview

A detailed assessment of 115 evaluation reports found that most federal evaluation reports (77 per cent) are acceptable in quality, though almost one quarter (23 per cent) were rated as inadequate. No clear and consistent variations in quality were observed for federal organizations of different sizes or for departments versus agencies.

A comparison of reports completed before and after April 2002 indicates that quality has improved in a number of criteria in more recent evaluations. Improvements were found in the extent to which reports address cost-effectiveness issues, ensure methodological rigour, identify alternatives, present evidence-based findings and provide formal recommendations. The changes since the introduction of the CEE suggest that TBS's efforts to improve quality and strengthen capacity are having positive impacts.

Nevertheless, there is a pressing need for further improvement. Table 10 lists the strengths and weaknesses of evaluation reports.

Table 10
Quality of Evaluation Reports

Strengths
  • Comprehensive description of the program/initiative under review, including its resources, beneficiaries and stakeholders
  • Clear statement of the evaluation objectives
  • Use of multiple lines of evidence
  • Strong presentation of findings, particularly on relevance and delivery/implementation issues
  • Formal recommendations or suggested improvements that flow logically from the findings and conclusions
  • Well-written and well-organized

Weaknesses
  • Inadequate description of methodology
  • Inadequate use of performance monitoring data and the views of independent key informants with no stake in the program
  • Inadequate assessment of incremental program impacts, and insufficient use of comparison groups and baseline measures in evaluation design
  • Superficial coverage of cost-effectiveness issues

Source: Review of Federal Program Evaluations (N = 115)

Coverage of Issues

The TB Evaluation Policy requires evaluations to address the relevance, success and cost-effectiveness of programs, policies or initiatives. Table 11 presents the extent to which these issues are being addressed. As the table shows, 94 per cent of evaluation reports address the issue of program success, 74 per cent address relevance, and only 44 per cent address the issue of cost-effectiveness. A large number of reports examine issues of implementation and delivery, as well as management practices. These findings suggest that evaluation work tends to focus on formative evaluation (e.g. management practices) rather than summative assessment (e.g. what difference did the program make?).

Table 11
Coverage of Evaluation Issues

| Issue | Coverage (%) |
| --- | --- |
| Relevance | 74 |
| Success | 94 |
| Cost-effectiveness | 44 |
| Implementation/delivery | 72 |
| Management practices | 47 |

Source: Review of Federal Program Evaluations (N = 115)

Relevance: Over half of the evaluations (just under 60 per cent) provided findings related to the continuing need for and relevance of the program. Of these evaluations, most (85 per cent) were rated as adequate or more than adequate on these criteria. Only about one third (34 per cent) of evaluations presented findings on whether the program duplicated or worked at cross-purposes to other programs/initiatives. Of this one third, the presentation of the information was rated as inadequate in 18 per cent of reports assessed.

Success: Most evaluations (87 per cent) reported findings demonstrating whether the program/initiative in question produced results supporting program continuation. Of these reports, one quarter (26 per cent) were rated as inadequate in how they addressed this question; however, the proportion with a less-than-adequate rating decreased from 39 per cent for the period before April 2002 to 19 per cent after that date. Only one quarter of the evaluations discussed unintended outcomes (25 per cent) or addressed incremental impacts (26 per cent).

Value for Money: About one quarter of the evaluations reviewed (26 per cent) discussed alternative approaches that could produce more cost-effective ways of achieving results. The proportion is much larger in evaluations after April 2002 than in those before that date (31 per cent versus 16 per cent), and somewhat larger in agencies than departments (38 per cent versus 23 per cent).

Implementation/delivery: With respect to implementation/delivery issues, most evaluations presented findings related to the appropriateness of the program's delivery model and/or management practices (81 per cent), and to the need for improving the program structure or delivery arrangements (76 per cent). The evaluations were rated highly on the former criterion (89 per cent adequate or more than adequate).

Validity and Reliability

Most evaluations (72 per cent) were found to employ an appropriate research design in light of the study's objectives. Only 5 per cent were found not to have an appropriate design (e.g. because they consulted few respondents or included a limited range of perspectives). More significant, however, is that a lack of detail made it impossible to assess this criterion for almost one quarter (23 per cent) of the reports. Among the reports assessed, the quality of methodological design was rated as adequate or better for 87 per cent of evaluations.

Virtually all of the evaluations (97 per cent) included multiple lines of evidence. Chart 9 presents the types of methods used. As the data suggest, evaluations tend to rely mainly on information from interviews, file and document reviews, case studies, and surveys, and less so on statistical analysis, expert opinion, and cost-effectiveness analysis. Most alarming, only a minority of the evaluation designs included a comparison group (13 per cent), baseline measures (14 per cent) or a comparison to norms, literature or some other benchmark (22 per cent) - features that can enhance the rigour of the methodology.

Chart 9: Data Collection Methods Used in Evaluation Reports

Program data can enhance the validity and reliability of evaluation reports.  Such data must be available if program managers are to be able to manage by results (i.e., direct resources and activities to areas that advance program objectives).

Chart 10 presents the findings of CEE's ongoing monitoring efforts, based on a sample of 96 evaluations. Few programs were found to have in place reliable performance monitoring systems. In most cases, either there was no system for collecting and reporting on performance information, or any such system was nominal. The lack of program data reduces the overall quality of evaluation reports: evaluations have to recreate the data to assess program impacts.

Chart 10: Number of Programs with Performance Monitoring Systems

Independence and Objectivity

The independence and objectivity of evaluations are of critical importance. They provide the confidence to Parliament, ministers, deputy ministers and central agencies that the reports can be used for decision making. Independence and objectivity are assessed in terms of whether report findings drive the conclusions, and the extent to which recommendations are based upon these conclusions. An additional consideration is the use of management action plans that help ensure the independence and objectivity of report recommendations.

Three quarters of evaluations were rated as adequate or better, and one quarter (24 per cent) as inadequate, in their provision of objective, evidence-based conclusions related to relevance, success and/or cost-effectiveness. The quality of evaluations is improving on this criterion: 40 per cent of the evaluations completed after April 2002 were rated as more than adequate in this regard, compared with only 20 per cent of reports completed before that date.

Chart 11 presents the findings of CEE's ongoing monitoring of whether evaluation recommendations stem from report conclusions. As the chart suggests, while there is general alignment of recommendations and conclusions, this is not always the case. The data are in line with the findings of CEE's in-depth Review of the Quality of Evaluations, which found that most evaluations included either formal recommendations (77 per cent) or suggestions for further action (13 per cent). In almost all cases (94 per cent), the recommendations addressed significant evaluation findings and flowed logically from the findings and conclusions. On the other hand, among the reports with recommendations, only 26 per cent identified alternative scenarios and just 35 per cent took into account practical constraints (e.g. regulations, budgets).

Chart 11: Number of Evaluation Reports Where Recommendations Stem from Conclusions

Management Response and Action Plan

Less than half of the evaluation reports reviewed included a publicly available management response (48 per cent) or action plan (33 per cent). This low level causes concern for a number of reasons. First, such plans demonstrate transparency and accountability to Canadians and Parliament by documenting how the Government of Canada intends to deal with evaluation findings and recommendations. Further, management plans demonstrate the value for money produced by the evaluation function, as they document the actions taken as a result of the evaluation. They are also important management accountability tools that enable audit and evaluation committees to monitor decisions taken. Finally, they promote the independence of the evaluation function: evaluators have greater freedom to make pointed recommendations, with less influence from program managers, because program or senior management who disagree with evaluation findings or recommendations can express their views through the management response. This allows evaluation reports to remain independent.

6)    Management Accountability Framework Review

As part of TBS's Management Accountability Framework (MAF) review, CEE assessed the state of 38 departments' evaluation functions. The review assessed each department's evaluation function in terms of whether

  • there is a commitment to building capacity;
  • the deputy head chairs an active evaluation committee;
  • a risk-based evaluation plan exists;
  • management action plans address evaluation findings/recommendations; and
  • evaluation reports are submitted to TBS and posted in a timely fashion (within three months after a report is approved).

Of the 38 departments reviewed, CEE had major concerns about 15 (39 per cent), minor concerns about 12 (32 per cent) and no concerns about 11 (29 per cent). Among the departments raising major concerns, 9 will not meet their commitment to sustain the 2005–06 resource levels agreed to in a memorandum of understanding signed to obtain additional Treasury Board funding for strengthening evaluation functions. Other major concerns involved lack of capacity-building efforts, limited infrastructure, too few evaluation products or lack of dedicated evaluation resources. Minor concerns included resource levels and the number of evaluation reports produced.

7)    Evaluation Community Needs Assessment

Table 12 presents the gaps identified by departmental evaluation units between the types of products and services they are able to produce and those that are being demanded of them. Only 80 per cent of departmental evaluation units are able to address the demand for evaluations. In terms of producing or assisting in the production of RMAFs, only 53 per cent noted that this demand is fully or mostly met.

Evaluation units represent a valuable source of expertise on results-based management and performance measurement. Unfortunately, they are not able to keep up with senior and program managers' demands for their advice. In areas where evaluators can have a real impact, such as advice on program design, the demand is being met only 26 per cent of the time. Input on departmental (or individual program) performance measurement frameworks is addressed only about 40 per cent of the time. This lack of expert advice lowers the quality of programming. The quality of reporting to Canadians and parliamentarians also suffers, as shown by the findings of TBS's Management Accountability Framework (MAF) review. The quality of reporting is an ongoing concern of the Office of the Auditor General. [1]

Training for program and senior managers on results-based management and performance measurement is part of the solution. Unfortunately, such training is being provided by evaluation units only 20 per cent of the time. There are few other options for such training, which is currently not provided by the Canadian School of Public Administration.

Table 12
Evaluation Unit Ability to Provide Types of Services Demanded by Federal Departments and Large Agencies

| Evaluation Services Requested | Fully or Mostly Met | Minimally Met | Not Met at All | No Demand | N/A |
| --- | --- | --- | --- | --- | --- |
| Evaluation planning reports/assessments | 73% | 13% | 0% | 3% | 13% |
| Evaluation studies/reports | 80% | 15% | 0% | 3% | 3% |
| Reviews | 40% | 10% | 8% | 23% | 20% |
| Special studies | 36% | 20% | 5% | 18% | 23% |
| Results-based management accountability frameworks (RMAFs) | 53% | 20% | 5% | 10% | 13% |
| Advice to program managers - evaluation products | 60% | 10% | 0% | 8% | 23% |
| Training for program managers on evaluation products | 23% | 28% | 8% | 25% | 18% |
| Advice on program design | 26% | 23% | 0% | 33% | 20% |
| Departmental performance measurement frameworks | 41% | 30% | 5% | 8% | 18% |
| Individual program-related performance measurement frameworks | 38% | 28% | 3% | 13% | 20% |
| Training for departmental staff on results-based management | 21% | 25% | 8% | 30% | 18% |
| Management Accountability Framework (implementation and monitoring) | 30% | 25% | 10% | 18% | 18% |
| Review and/or sign-off of evaluations produced by program managers | 23% | 13% | 8% | 35% | 23% |
| Review and/or sign-off of RMAFs produced by program managers | 28% | 18% | 5% | 28% | 23% |

N = 40 departments and large agencies

Chart 12 presents the additional funding required to strengthen the evaluation function. Heads of evaluation report a shortfall of $4.5 million. The areas of greatest need are evaluation products, followed by small agency capacity, provision of advice, training and monitoring commitments.

Chart 12: Funding Required to Address Gaps in Provision of Evaluation Products and Services ($000)

Chart 13: Additional FTEs Required to Strengthen the Evaluation Function

Chart 13 presents the number of additional FTEs required to strengthen the evaluation function. Heads of evaluation have identified an immediate need for an additional 66 FTEs. 

8)    Leadership Provided by the Centre of Excellence for Evaluation

This section describes the role of TBS's Centre of Excellence for Evaluation, including major accomplishments over the last year.

The objectives of the CEE are to:

  • renew and strengthen the Government of Canada's evaluation function;
  • undertake research and develop policy to enhance evaluation effectiveness; and
  • work with departments, agencies and TBS to better integrate evaluation results into management decision making.

Table 13 presents the types of activities CEE undertakes to achieve its objectives.

Table 13
Activities of the Centre of Excellence for Evaluation

Monitoring
  • Monitor the state of the evaluation function (human and financial resources, core competencies, infrastructure), departmental evaluation plans, and the quality and quantity of evaluation studies

Advisory Services to Program Sector
  • Review TB submissions and related evaluations and RMAFs
  • Provide guidance on whether recommendations to TB are warranted on the basis of evaluation findings
  • Provide policy advice and guidance on accountability and results-based issues

Capacity Building
  • Provide training and professional development opportunities
  • Develop a comprehensive learning strategy
  • Advisory services

Policy Analysis
  • Conduct policy analysis and research
  • Provide input to government-wide policies
  • Develop standards and practices
  • Maintain evaluation policy

Direct Evaluation Services
  • Provide assistance on horizontal evaluations
  • Evaluate government-wide corporate issues
  • Provide evaluation services for small agencies, where possible

Management and Accountability Reporting
  • Transmit evaluation findings to inform decision making
  • Provide to TB ministers an annual report on evaluation and its contribution to strengthening accountability

The intended results of CEE's activities are as follows:

  • The evaluation activities of departments and agencies lead to policies, programs and initiatives that are more relevant and cost-effective, and that demonstrate results on an ongoing basis.
  • Departments and agencies use evaluation results for departmental and agency decision making, including resource reallocation, priority setting and ongoing expenditure management review activities.
  • TB ministers and parliamentarians have the necessary evidence-based information to inform management decision making and support greater transparency and accountability to Canadians.

Major CEE Accomplishments, 2004–05

Chart 14: Number of CEE Events/Meetings, 2004–05

Just prior to the beginning of the year, a new senior director was engaged. CEE was subsequently reorganized, adopting a portfolio approach to better serve departmental and TBS clients. Eight positions in the Centre were staffed. A detailed work plan was developed, focusing on monitoring, capacity building, and advising program sector and departments on evaluations and RMAFs associated with TB submissions. In support of these priorities, internal administrative and quality control procedures were put into place, and service standards for responding to program sector were developed.

Chart 14 presents an overview of CEE activities, such as events, meetings and presentations. Much of the effort was focused on monitoring and capacity building (51 meetings held with heads of evaluation and departmental officials). Guidance to TBS program sector and departmental officials also accounted for a significant amount of time (63 meetings over the year). In all, 17 presentations were given to audiences such as the Canadian Evaluation Society (CES), the Financial Management Institute (FMI), and federal and provincial executives in Ottawa, Toronto, St. John's, Fredericton, Halifax and Montréal. Some 19 presentations were given to international delegations interested in learning from Canada's experience. This included representing Canada in Seoul, Republic of Korea, at an international meeting of six countries at the request of the Korean government, and a visit to Moscow to assist the Russian government as part of Canada's international development activities. Delegations from Australia, France, the U.K., China, the Republic of Korea (three times), New Zealand and several African nations visited CEE to learn from Canada's experience with evaluation.

Following are CEE's major accomplishments for the year by business line.

Monitoring

  • First Annual Survey of Evaluation Capacity: An electronic survey was sent out to 90 departments and agencies to assess the health of the evaluation function.
  • Meetings With Departmental Heads: Some 51 meetings were held with departments for the purposes of monitoring and capacity building.
  • Review of Departmental Evaluation Plans: 21 plans have been received and reviewed by CEE.
  • Management Accountability Framework Assessments: Assessed the evaluation capacity in 38 departments and agencies as input for deputy minister bilaterals. This information also feeds into the assessment of deputy minister performance agreements.
  • Report on the Quality of Evaluations: Developed a reporting template and undertook an in-depth review of 115 evaluation reports.
  • Evaluation Review Information Component (ERIC): Developed an electronic database for ongoing monitoring of high-level indicators of evaluation quality. Reviewed 97 evaluation reports.

Advisory Services to Program Sector

Much of CEE's work involves recommendations and guidance to program sector on program funding proposals, based on a review of evaluations, program accountability frameworks and TB submissions. All transfer payment programs require a results-based management accountability framework, and all funding renewals require an evaluation. CEE is the office of primary responsibility for reviewing these documents. In addition, given CEE's expertise in assessing program accountability frameworks, program sector often refers general TB submissions for review. As shown in Table 14, over the past calendar year CEE reviewed 309 RMAFs, evaluations and TB submissions.

Table 14
Results-Based Management Directorate, December 2003 to December 2004

| Documents Reviewed | Government Operations Sector | Economic Sector | Social Sector | Small Agencies | Horizontal | TOTAL |
| --- | --- | --- | --- | --- | --- | --- |
| RMAFs | 39 | 44 | 42 | 3 | 22 | 150 |
| Evaluations | 8 | 19 | 52 | 0 | 12 | 91 |
| TB Submissions | 22 | 18 | 14 | 0 | 5 | 59 |
| Other | | 4 | 2 | | 3 | 9 |
| Subtotal | 69 | 85 | 110 | 3 | 42 | 309 |

In the past year, CEE established service standards for meeting program sector needs in the review of these documents. Table 15 presents these standards.

Table 15
Results-Based Management Directorate Service Standards

Halfway into the year, the Results-Based Management Directorate established service standards to better meet its commitments to providing timely guidance to program sector.

| Service Standard | Result |
| --- | --- |
| 1. Set a target delivery date within 2 days of receipt of request | Met 73% of the time |
| 2. Meet target delivery date | Met 88% of the time |

In the majority of cases, we delivered guidance on the same day that a request was made.

Policy and Guidance

Policy activities included the following:

  • Published the Guide for the Review of Evaluation Reports (http://www.tbs-sct.gc.ca/cee/tools-outils/grer-gere-eng.asp) in order to increase the quality of CEE's review and ensure a standardized, transparent approach.
  • Published revised guidance on the preparation of RMAFs and revised RMAF criteria in order to streamline RMAF requirements and raise the quality of CEE review.
  • Published Evaluation Function in the Government of Canada (http://www.tbs-sct.gc.ca/cee/pubs/func-fonc-eng.asp), providing a description of Canada's approach to evaluation and related requirements.
  • Participated in TBS's Transfer Payment Steering Committee as part of the renewal of the policy, and provided guidance to program sector on processing TB submissions for transfer payment programs.

Capacity Building

Capacity-building efforts focused on human resource skills development, on assisting departments to increase access to qualified personnel, and on communications.

Human Resources
  • Created the "HR Genie" - a Web-based application containing human resource tools. This work was undertaken in consultation with a number of departmental heads of evaluation. A key component of the tool was the creation of generic statements of qualifications and job descriptions from the ES-1 to ES-6 levels.
  • Provided ongoing assistance to departments in recruiting employees and on hiring committees.

Chart 15: Level of Satisfaction of Participants in CEE's Report-Writing Course

Learning
  • Enhanced development opportunities by redesigning the Internship Program as the New Developmental Program for Evaluators. Some 15 interns are enrolled in the program; individual courses have been expanded and are now open to all evaluators. Courses include essential skills, evaluation methods, interviewing techniques, evaluation contracting and project management.
  • Developed a course on evaluation to improve the quality of reporting and communicate CEE's criteria for reviewing evaluation reports. With the creation of a student manual and a training manual, this course is now being delivered by the Canadian Evaluation Society in the regions.
  • Held a number of RMAF training sessions for program sector analysts and departmental and agency personnel.
  • Held a conference on cost-effectiveness with over 100 participants.
Products
  • Developed a guidebook for undertaking evaluations in small agencies, in consultation with a group of representatives of small agencies.
  • Undertook a study on the effective evaluation practices of 14 departmental evaluation units. The purpose was to assist in improved administration of evaluation units in order to increase the timeliness of reporting.
  • Developed a paper and a number of presentations on performance measurement in support of the new Management, Resources and Results Structure in order to better inform departments of the performance measurement aspects of the new policy.

Communications and Advisory Services to Departments

  • Held 11 orientation meetings with new heads of evaluation.
  • Redesigned the CEE Web site to make it more accessible, with a focus on developing new content.
  • Developed a brochure on the government's renewed commitment to evaluation, communicating the contents of the 2004 budget commitments to strengthen the evaluation function.
  • Organized booths at conferences such as APEX, the Canadian Evaluation Society (national), CES NCC, PPX and FMI in order to communicate the importance of evaluation and performance management.
  • Issued five editions of the CEE Newsletter, with a distribution to approximately 300 members of the evaluation community. The focus is on sharing lessons learned and communicating TBS guidance.
  • Initiated the Monthly Performance Breakfast Series for Heads of Evaluation. Four sessions have been held to date, attended by approximately 160 participants.
  • Convened a number of senior departmental advisory committee meetings to obtain guidance from departments on the management of the evaluation function, and held a heads of evaluation meeting in February 2004.
  • Worked with a number of provincial governments and the Canadian Evaluation Society to strengthen the function across Canada.

Direct Evaluation Services

  • Represent TBS on three departmental audit and evaluation committees (Foreign Affairs, Heritage and Health).
  • Participate in interdepartmental evaluation committees (Climate Change, HRSDC's Youth Initiative).
  • Participate in individual departmental program evaluation steering committees.
  • Currently planning an evaluation of a small agency to provide a framework for a group of regulatory small agencies.
  • Initiated an evaluation of Parliamentary Reporting as part of the Improved Reporting to Parliament Project.

Management and Accountability Reporting

  • Evaluation Review Information Component (ERIC)
    Provides ongoing electronic tracking of the quality of evaluation reports (90 evaluations reviewed thus far). To be placed on program sector analysts' desktops. Also transmits evaluation findings to decision makers, based on Expenditure Review Committee questions.
  • Annual Report to TB Ministers on the Evaluation Function and Its Contribution to Strengthening Accountability in the Government of Canada
    Underway.  


Conclusions

Evaluation is the foundation of the government's results-reporting agenda. Ministers, parliamentarians and senior federal public-sector decision makers require credible results information in order to be better accountable for results and to use this information for evidence-based decision making.

Evaluation is the only function that provides an independent, objective assessment of program effectiveness and value for money. To successfully fulfill this role, the evaluation function needs to be better aligned with the decision-making and resource allocation process, and must be able to demonstrate credible, objective, independent reporting.


Appendix A:  

Funding for Evaluation, by Department/Agency - Economic Portfolio

Dept./Agency | Eval. FTEs | Dept'l FTEs | Eval. FTEs as % of Dept'l FTEs | Eval. Salary Budget | Eval. Prof. Services Budget | Eval. O&M Budget | Eval. A-base (Salaries + Prof. Services + O&M) | Other Dept'l Funding | TBS Eval. Funding | Total Eval. Resources | Total Dept'l Expenditures | Eval. A-base as % of Total Dept'l Expenditures
ACOA | 10.5 | 624 | 1.68% | $754,000 | $518,000 | $235,000 | $1,507,000 | $678,400 | $115,742 | $2,301,142 | $476,552,000 | 0.32%
CEDRQ | 5.85 | 413 | 1.42% | $385,278 | $510,833 | n/a | $896,111 | $0 | $66,762 | $962,873 | $428,091,000 | 0.21%
PCA | 4 | 5,014 | 0.08% | $254,000 | $170,000 | $100,000 | $524,000 | $0 | $62,453 | $586,453 | $456,538,000 | 0.11%
CFIA | 4 | 5,621 | 0.07% | $350,000 | n/a | $120,000 | $470,000 | $90,000 | $40,594 | $600,594 | $476,880,000 | 0.10%
WD | 1.5 | 381 | 0.39% | $140,000 | $190,955 | $40,000 | $370,955 | $0 | $259,045 | $630,000 | $390,806,000 | 0.09%
NRC | 5.5 | 4,075 | 0.13% | $416,921 | n/a | $225,000 | $641,921 | $0 | $90,931 | $732,852 | $695,377,000 | 0.09%
TC | 16 | 4,707 | 0.34% | $873,240 | $352,260 | $177,881 | $1,403,381 | $150,000 | $242,317 | $1,795,698 | $1,647,213,000 | 0.09%
NRCan | 7.5 | 4,596 | 0.16% | $644,099 | $100,000 | $162,643 | $906,742 | $1,000,000 | $0 | $1,906,742 | $1,092,925,000 | 0.08%
F&O | 14 | 10,399 | 0.13% | $814,015 | $60,000 | $216,000 | $1,090,015 | $0 | $189,716 | $1,279,731 | $1,470,799,000 | 0.07%
CSA | 1.5 | 560 | 0.27% | $143,714 | $15,375 | $23,517 | $182,606 | $29,854 | $30,146 | $242,606 | $322,920,000 | 0.06%
EC | 3.4 | 5,845 | 0.06% | $291,981 | $16,632 | $90,400 | $399,013 | $160,000 | $69,733 | $628,746 | $805,234,000 | 0.05%
AAFC | 8.5 | 5,823 | 0.15% | $575,000 | $565,095 | $80,000 | $937,349 | $0 | $282,746 | $1,220,095 | $2,110,846,000 | 0.04%
SC | 8.5 | 5,449 | 0.16% | $166,100 | $0 | $10,000 | $176,100 | $600,000 | $0 | $776,100 | $415,132,000 | 0.04%
IC | 4 | 5,720 | 0.07% | $326,000 | $0 | $54,000 | $380,000 | $500,000 | $75,099 | $955,099 | $1,477,756,000 | 0.03%

Notes:
n/a indicates that no amount was reported.
Source for Dept'l FTEs: Population Affiliation Report / Reports on Planning and Priorities.
Source for Total Dept'l Expenditures: Main Estimates, Part II.
Some departments' expenditures have been adjusted downward to reflect block transfers to provinces or statutory payments.
Source for all other data: 2004 Capacity Assessment Templates.
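The derived columns in these tables follow directly from the reported figures: the evaluation A-base is the sum of the salary, professional services and O&M budgets, and total evaluation resources add other departmental funding and TBS funding to the A-base. The short Python sketch below illustrates the arithmetic using the ACOA row; it is an illustration only, and the variable names are ours rather than the report's.

```python
# Illustrative sketch only: how the derived columns in the funding tables
# relate to the reported figures. Values are taken from the ACOA row above.

eval_ftes = 10.5                 # Eval. FTEs
dept_ftes = 624                  # Dept'l FTEs
salary = 754_000                 # Evaluation Salary Budget
prof_services = 518_000          # Evaluation Professional Services Budget
o_and_m = 235_000                # Evaluation O&M Budget
other_dept_funding = 678_400     # Other Dept'l Funding
tbs_funding = 115_742            # TBS Evaluation Funding
dept_expenditures = 476_552_000  # Total Dept'l Expenditures

# Evaluation A-base = salaries + professional services + O&M.
a_base = salary + prof_services + o_and_m                    # $1,507,000
# Total evaluation resources = A-base + other departmental + TBS funding.
total_resources = a_base + other_dept_funding + tbs_funding  # $2,301,142

print(f"Eval. FTEs as % of Dept'l FTEs: {eval_ftes / dept_ftes:.2%}")                 # 1.68%
print(f"A-base as % of Total Dept'l Expenditures: {a_base / dept_expenditures:.2%}")  # 0.32%
```

Most rows in the three portfolio tables reconcile under these relationships, subject to rounding in the percentage columns; a few rows (AAFC, for example) do not reconcile exactly, reflecting adjustments present in the source data.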

Funding for Evaluation, by Department/Agency - Government Operations Portfolio

Dept./Agency | Eval. FTEs | Dept'l FTEs | Eval. FTEs as % of Dept'l FTEs | Eval. Salary Budget | Eval. Prof. Services Budget | Eval. O&M Budget | Eval. A-base (Salaries + Prof. Services + O&M) | Other Dept'l Funding | TBS Eval. Funding | Total Eval. Resources | Total Dept'l Expenditures | Eval. A-base as % of Total Dept'l Expenditures
NPB | 4 | 316 | 1.26% | $266,353 | $0 | $15,000 | $281,353 | n/a | n/a | $281,353 | $33,848,000 | 0.83%
CIDA | 12 | 1,559 | 0.77% | $703,622 | $2,100,000 | $250,000 | $3,053,622 | n/a | $120,866 | $3,174,488 | $2,654,981,000 | 0.12%
CSC | 17 | 1,440 | 1.18% | $994,537 | $20,000 | $230,000 | $1,244,537 | n/a | $142,858 | $1,387,395 | $1,571,272,000 | 0.08%
Jus | 6 | 4,575 | 0.13% | $409,019 | n/a | $365,000 | $774,019 | $1,586,179 | $173,348 | $2,533,546 | $1,004,788,000 | 0.08%
PWGSC | 9 | 12,818 | 0.07% | $596,000 | $305,000 | $92,500 | $993,500 | n/a | n/a | $993,500 | $2,410,952,000 | 0.04%
PCO | 0.5 | 850 | 0.06% | $30,000 | $25,000 | n/a | $55,000 | n/a | n/a | $55,000 | $141,861,000 | 0.04%
FAC | 8.5 | 8,787 | 0.10% | $354,000 | $200,000 | n/a | $554,000 | $1,180,000 | $170,217 | $1,904,217 | $1,728,234,000 | 0.03%
PSEPC | 1.5 | 772 | 0.19% | $130,000 | n/a | n/a | $130,000 | n/a | n/a | $130,000 | $414,016,000 | 0.03%
PSC | 0.6 | 997 | 0.06% | $40,000 | n/a | n/a | $40,000 | $147,886 | $23,670 | $211,556 | $147,409,000 | 0.03%
ND | 15.9 | 20,544 | 0.08% | $1,411,640 | $141,200 | $150,400 | $1,703,240 | n/a | n/a | $1,703,240 | $13,287,516,000 | 0.01%
RCMP | 2 | 5,285 | 0.04% | $130,000 | n/a | $30,000 | $160,000 | n/a | $20,403 | $180,403 | $1,841,100,000 | 0.01%
Fin/TBS | 0 | 1,810 | 0.00% | n/a | n/a | n/a | $0 | n/a | n/a | $0 | $1,874,623,000 | 0.00%

Notes:
n/a indicates that no amount was reported.
Source for Dept'l FTEs: Population Affiliation Report / Reports on Planning and Priorities.
Source for Total Dept'l Expenditures: Main Estimates, Part II.
Some departments' expenditures have been adjusted downward to reflect block transfers to provinces or statutory payments.
Source for all other data: 2004 Capacity Assessment Templates.

Funding for Evaluation, by Department/Agency - Social Portfolio

Dept./Agency | Eval. FTEs | Dept'l FTEs | Eval. FTEs as % of Dept'l FTEs | Eval. Salary Budget | Eval. Prof. Services Budget | Eval. O&M Budget | Eval. A-base (Salaries + Prof. Services + O&M) | Other Dept'l Funding | TBS Eval. Funding | Total Eval. Resources | Total Dept'l Expenditures | Eval. A-base as % of Total Dept'l Expenditures
SDC | 12 | 3,055 | 0.39% | $1,893,000 | $1,751,108 | $250,000 | $3,894,108 | $548,250 | n/a | $4,442,358 | $372,505,000 | 1.05%
HRSDC | 28.75 | 12,102 | 0.24% | $2,100,000 | $5,500,000 | $200,000 | $7,800,000 | n/a | n/a | $7,800,000 | $1,993,422,000 | 0.39%
CanHer | 9 | 2,061 | 0.44% | $773,469 | $500,800 | $1,700,000 | $2,974,269 | n/a | $107,882 | $3,082,151 | $1,127,097,000 | 0.26%
LAC | 1.5 | 1,143 | 0.13% | $120,000 | $100,000 | $10,000 | $230,000 | n/a | n/a | $230,000 | $96,461,000 | 0.24%
VAC | 8.5 | 3,540 | 0.24% | $729,500 | $151,000 | $115,000 | $995,500 | n/a | $97,482 | $1,092,982 | $852,743,000 | 0.12%
CIHR | 10 | 347 | 2.88% | $716,638 | $0 | $66,195 | $782,833 | n/a | $241,400 | $1,024,233 | $751,602,000 | 0.10%
HCan | 19 | 9,558 | 0.20% | $1,224,709 | $151,000 | $222,750 | $1,598,459 | n/a | $115,689 | $1,714,148 | $1,822,522,000 | 0.09%
INAC | 13 | 3,797 | 0.34% | $1,294,090 | $2,690,000 | $178,000 | $4,162,090 | $1,883,000 | $341,667 | $6,386,757 | $5,760,763,000 | 0.07%

Notes:
n/a indicates that no amount was reported.
Source for Dept'l FTEs: Population Affiliation Report / Reports on Planning and Priorities.
Source for Total Dept'l Expenditures: Main Estimates, Part II.
Some departments' expenditures have been adjusted downward to reflect block transfers to provinces or statutory payments.
Source for all other data: 2004 Capacity Assessment Templates.

Appendix B: Methodology and Data Sources

The primary data source supporting the findings in this report was a survey conducted by the CEE between June and December 2004. The survey instrument, an electronic data collection template, was forwarded to 90 departments and agencies on June 18, 2004. The template was based in large part on a similar instrument used to collect data for the 2002 formative evaluation of the evaluation policy, completed in December 2002. As in 2002, two versions of the data collection template were used, as described below.

1.      The first instrument was directed to 46 small agencies (those with fewer than 500 FTEs) and requested minimal information about current evaluation infrastructure and evaluation work underway. It contained 12 questions, grouped into the following two themes:

  • Evaluation Infrastructure and Resources
  • Evaluation Production and Results

2.      The second instrument was directed to 44 departments and agencies known to have some level of evaluation capacity. It contained 18 questions, grouped into the following three themes:

  • Evaluation Infrastructure and Use of Results
  • Evaluation Planning and Production
  • Evaluation Resource Utilization and Requirements

This second instrument also contained three additional questions directed specifically to departments and agencies that received TB funding to support evaluation capacity development in 2003–04 and 2004–05. By completing these questions, funded departments met the reporting requirements set out in their memoranda of understanding for receiving TB funds. The population receiving this template comprised the following:

  • 37 larger departments and agencies known to have some level of evaluation capacity (25 of these having received TB funding); and
  • Seven small agencies that also received TB funding over 2003–04 and 2004–05.

Follow-up to ensure receipt of completed templates continued until December 2004.

Response rates for the survey were high, as shown below.

Template Group | Sent | Received | Response Rate
Small Agency Templates | 46 | 38 | 82.6%
Large and Funded Departments and Agencies | 44 | 40 | 90.9%
Total | 90 | 78 | 86.6%
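The response rates are simple received-to-sent ratios. A minimal sketch of the calculation (the variable names are ours):

```python
# Minimal sketch: response rate = templates received / templates sent.
groups = {
    "Small Agency Templates": (46, 38),
    "Large and Funded Departments and Agencies": (44, 40),
}

for name, (sent, received) in groups.items():
    print(f"{name}: {received / sent:.1%}")         # 82.6% and 90.9%

total_sent = sum(sent for sent, _ in groups.values())
total_received = sum(received for _, received in groups.values())
print(f"Total: {total_received / total_sent:.1%}")  # 86.7% (reported above as 86.6%)
```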



