March 29, 2005
Changes in government decision-making over the past three decades have had important implications for the use of performance or effectiveness evidence in decision-making on government programs and expenditures. In addition, there have been changes in assumptions about the connections between political responsiveness, fiscal discipline, and program effectiveness that affect the use of evidence in decision-making on policy priorities and resource allocation.
The advent of Program Evaluation as a component of decision-making in the Canadian government in the 1970s constituted a significant innovation that was internationally recognized. Although a great deal has changed over the past three decades, the current emphasis on Results-Based Management as well as Results-Based Reporting indicates the extent to which the pursuit of continuous improvement through the evaluation of performance continues to be front and centre on the governance and public management reform agenda.
Decision-making in government is a process in which evidence, both from systematic research and practical experience, mixes with a complex interaction of ideas, interests, ideologies, institutions and individuals. These several factors are the determinants of decisions at the political and administrative levels. At different times and under different regimes, the decision-making process will be structured and managed in ways that seek to give more or less weight to evidence. No one process is necessarily or always more 'rational' than the others. It all depends on what questions need to be asked by decision-makers in the circumstances and context of the times in order to make the best possible decisions for their agendas and/or public expectations of good governance.
As a consequence, the importance attached to the use of evidence in decision-making invariably waxes and wanes over time. The relative positioning of the program evaluation function, accordingly, has not been constant. At the same time, it is important to note that no regime can effectively govern or manage its fiscal resources well if it does not invest in the evaluation of what government does (its "programs") and how well it performs (the "effectiveness" of its programs). Neither ignorance of performance nor a blind eye to performance is ever a recipe for good governance.
The question considered in this discussion paper is thus how best to organize to ensure that evidence of performance is vigorously pursued and effectively used in government decision-making on policy and resource allocation. The first condition requires that government programs be evaluated as a core function of public management; the second, that there be a decision-making process that is able to use the evidence from these evaluations to assist in decisions on policy and resource allocation.
The paper assumes that good governance and public management require decision-making processes that do more than secure political responsiveness and fiscal discipline, as crucial as these two criteria are. The processes also require attention to evidence on program effectiveness, lest government allocate resources to programs that do not constitute (or cannot be shown to constitute) public value for money. Indeed, the ultimate test of the quality of decision-making in government may well be this third criterion, since the first two can be accomplished primarily with "political will". The third criterion demands much more; it requires competence as well.
The Current Structure
Cabinet decision-making is currently highly centralized and integrated at the apex of government around the prime minister and his principal ministerial colleagues and political and public service advisors. Strategic decisions on policy, program design, priorities, and resource allocation are driven by shifting combinations of political responsiveness to public demands and needs, implementation of the government's election platform, and the personal policy agenda of the prime minister. The Canadian structure exhibits a comparatively high degree of concentration at the centre because of the relatively low level of checks and balances on the executive and political powers of a Canadian prime minister compared to other Westminster systems of government or, for that matter, to other parliamentary systems. In addition to political responsiveness, however, the structure has been consciously established to impose a high degree of fiscal discipline, an objective that has been effectively accomplished over the past decade.
Increased centralization and integration in decision-making are not a uniquely Canadian phenomenon, however. Rather, this structure reflects the pressures of modern governance experienced everywhere. These pressures include greater transparency and access to government information, greater horizontal interdependence of policy issues, increased globalization and inter-governmental governing, more aggressive media operating 24/7, greater expectations by stakeholders for engagement and consultation, and increased demands for quality service and for demonstrated value for money. In turn, these pressures require governments to pay greater attention to on-going expenditure review and reallocation to secure both fiscal responsibility and the capacity to respond to changing priorities. Paradoxically, the governments that have been most open to democratic and public management reform have experienced these pressures to their fullest.
In Contrast to the Past
The current Canadian structure contrasts with the more decentralized and diffused Cabinet decision-making system in place at the time that program evaluation was formally introduced as a core function of management almost 30 years ago. At that time, government decision-making entailed decisions on the merits of proposals and programs by the collective cabinet and its elaborate set of cabinet committees, assisted by a full array of central agencies. This system had been established expressly to overcome the even more decentralized and diffused system of decision-making that had developed with the emergence of powerful ministerial departments from the depression through the war years and their aftermath. In that earlier structure, cabinet invariably deferred to individual ministers and their departments in deliberating on their proposals.
The dominant assumption of the early 1970s was that evidence about the effectiveness of government programs in achieving public policy objectives could and should be given a significant place in decision-making on policy and resource allocation. "Knowledge as power" was an assumption that defined the era. It was also instrumental in the design of the Planning, Programming and Budgeting System (PPBS) and its successor, the Policy and Expenditure Management System (PEMS). Each assumed that evidence on performance could and should be brought to bear in government decision-making.
PPBS and then PEMS in turn gave way to changes in the process of government decision-making as governments struggled to accommodate simultaneously the three main demands on government: political responsiveness, fiscal discipline, and program effectiveness. The failure of these systems to secure the desired degree of fiscal discipline led inexorably to increased political direction and control over the decision-making process. This resulted in increased centralization.
Program Review in the mid-90s represented an ad hoc deviation from the above process. It acknowledged the need for an integrated policy-expenditure system in which budget review and reduction would proceed in light of government priorities mixed with evidence about departmental performance and program effectiveness. The process was a success in several important respects, especially in that it inserted the policy perspective into decision-making. Equally important, some strategic budget reductions were based in part on effectiveness evidence. The process in this case, accordingly, had to have a mix of centralized direction on budget reduction targets, decentralized review and reduction recommendations, and collective challenge and decision-making.
In the immediate aftermath of Program Review, a new Expenditure Management System (EMS) was adopted. But, despite the successes of Program Review, the new system did not establish a continuous process of review and reallocation for the whole of government. For the whole of government, the system was centralized to secure the twin objectives of political responsiveness and fiscal discipline. Performance management and program evaluation, for the purposes of review and reallocation, were essentially devolved to the portfolio level.
Under EMS, the Finance Minister and the Prime Minister assumed the major decision-making roles; indeed the process gave them final authority over Government decisions. The Privy Council Office (PCO) was less central to the EMS process than it had been under Program Review, if only because the fiscal policy decision-making process became the central policy arena so long as deficit-reduction and elimination were the overriding fiscal policy priorities. When surpluses replaced deficits, however, Finance sought to maintain its dominance in determining the allocation of 'surplus' monies.
The Present Pattern
The decision-making system since the demise of PEMS has diminished the ambition of an integrated policy and expenditure decision-making system that gives pride of place to evidence based on evaluation of government programs and performance. The process has been dominated by the twin pillars of the priorities of the government, as enunciated primarily by the prime minister, and the fiscal policies of the Finance portfolio. Political responsiveness has been accorded importance, as the government has sought to discern public priorities through public opinion surveys, focus groups and stakeholder consultations. The fiscal policies of the Finance portfolio have been crafted by a mix of dominant economic policy ideas combined with sensitivities to the market as well as consultations with major economic stakeholders.
The annual Government priorities process is managed primarily by the Prime Minister's Office and the Privy Council Office, as the principal advisers to the prime minister. The priorities process links to Finance's fiscal budget decision-making process in an increasingly integrated manner, with PMO and PCO, on the one side, and Finance, on the other. For its part, the fiscal budget constitutes an increasingly comprehensive policy instrument, its scope going well beyond the contents of the traditional fiscal plan. It thereby substitutes for a policy-expenditure plan.
The closed nature of the budgetary decision-making process is coupled, paradoxically, with extensive public consultation with major stakeholders outside government, thus linking Finance's policy ideas with the political realities of challenges from stakeholders. The operative assumption is that the budget must accommodate both political objectives and political feasibility with sound policy ideas. Finance, however, dominates the debate on most policy fronts from the strengths of its prevailing policy paradigm and its considerable intellectual capacities. The Treasury Board and Secretariat (TBS) do not figure prominently in this process.
The Recent Adaptation
Expenditure Review, as recently introduced by the new government, constitutes an effort to re-establish the policy-expenditure decision-making linkage of Program Review, including the policy challenge and coordination role of PCO. The assumption is that reviews should be based on policy priorities as well as program effectiveness and performance evidence.
In practice, Expenditure Review has operated without the rigour of Program Review, in part because, in the first major round in 2004-5, ministers were not required to make the deep program cuts that might have forced the issue of priorities to address the question of program effectiveness. Rather, almost all of the required reduction came from various kinds of efficiencies in existing programs and operational activities. To this point, in other words, expenditure review has not focused primarily on programs, let alone program effectiveness. This reality has reduced the demand that program evaluation be undertaken in ways that generate evidence that can contribute to expenditure review and re-allocation. Decisions on programs and their effectiveness thus remain on the margin of review. Decisions on reallocations, accordingly, follow the pattern of cabinet decision-making noted above.
At the departmental level, decision-making processes must conform in important respects to government-wide requirements respecting planning, budgeting and reporting. But, in terms of the use of performance or effectiveness evidence in departmental decision-making and submissions to cabinet, there are considerable variations. These variations are due to uneven demand by deputy ministers. They are tolerated because PCO makes little demand for such evidence in cabinet submissions, so long as cabinet process expectations are met, and because TBS makes little demand for performance or effectiveness evidence in the Estimates process. There is, in short, no corporate or whole-of-government decision-making process into which performance or effectiveness evidence can be fed to good effect.
Government-wide attention to Results-Based Management has required processes for reporting on plans and performance to Parliament. The initiative, as an effort to better involve Parliament in the Estimates process and to strengthen accountability, has yet to meet the expectations of MPs and remains a work in progress. This means, of course, that the effective demand from Parliament for evidence respecting performance or effectiveness is also virtually non-existent.
As a consequence, deputy ministers tend to focus primarily on implementation of the minister's agenda, where consultation and engagement of stakeholders are deemed by most to be more critical in defining success than performance measurement or program evaluation, and on the implementation of government-wide management priorities, as set by the Clerk in deputy minister performance agreements on corporate priorities.
The Modern Comptrollership Initiative has some relevance to the use of performance evidence insofar as it has sought to integrate financial and non-financial performance information as a central pillar of the better stewardship and management of resources generally. More recently, the Strengthening Public Sector Management initiative re-asserts the priority of management performance, although primarily in terms of improved control over financial resources.
In this context, the Treasury Board approval process is peripheral to all but management improvement exercises and the technical processes that are part and parcel of the preparation of the annual Estimates as a statement of proposed expenditures. TBS's influence in the expenditure decision-making process, insofar as departments are concerned, was diminished significantly by the decline of the Program Branch's role in allocating resources to existing and new programs. The process now is essentially an administrative one, with no review and reallocation. The new "management board" roles of TB/TBS do not compensate for this loss of influence. As things stand, TBS has insufficient 'direction and control' leverage vis-à-vis departments, even for its various management and service delivery improvement initiatives.
Program evaluation was introduced as a centrally recognized and formal function of management in order to assist in decision-making on planning and designing programs, determining their respective priorities, ascertaining and assessing the extent to which existing programs were achieving their objectives with the resources provided, and considering cost-effective alternatives to existing programs. Program evaluation was introduced as a core function in what was established to be a continuous cycle of management.
At the outset, it fit well with PPBS as well as with PEMS, two different decision-making processes that shared a commitment to the use of evidence in decision-making. In each case program evaluation was meant to be a major source of evidence on program effectiveness in the context of decision-making on priorities and expenditures, as well as for on-going program improvement and/or redesign. Program evaluation for effectiveness, in other words, was meant to address major policy and expenditure management questions. It was not to be limited simply to program management questions of economy, efficiency or service delivery.
Program evaluation was affected by the various efforts to restructure the government's overall decision-making system, as when PPBS was replaced by PEMS in the late 1970s. The struggle to find a process that would simultaneously meet all three criteria of good government decision-making – political responsiveness, fiscal discipline, and evidence on program effectiveness – left the third criterion in the wake of political responsiveness (the late 1970s to the early 1990s) and fiscal discipline (starting in the mid-1990s). As a consequence, program evaluation began to focus more on issues of program management, including questions of efficiency and service delivery. In 1984, the Nielsen Task Force on Program Review lamented the fact that the existing stock of program evaluations was of less use than it might have been had evaluations been more explicitly focused on the effectiveness of programs. Unfortunately, this report's lament did not lead to a major revision to the role of program evaluation, in part because the system resorted to a continuing series of across-the-board budget reductions, focused primarily on administrative or operating expenditures. Almost ten years later, in 1993, the Auditor General expressed major concern that: "The need for sound effectiveness information is greater than ever….There is potential in the current approach to evaluation that has not been exploited." A year later, in 1994, program evaluation was linked to internal audit under a broader umbrella of "review". In 2001, however, the two functions were separated, and the program evaluation function was subject to a new government policy.
The link of program evaluation to results-based management is clearly stated in the 2001 policy. The policy requires that evaluations provide "evidence-based information" on "the performance of government policies, programs and initiatives". It makes "achieving…results" the "primary responsibility of public service managers." Evaluation is to be "an important tool in helping managers to manage for results". Public service managers are "expected to define anticipated results…[and] be accountable for their performance to higher management, to ministers, to Parliament and to Canadians."
With program evaluation an integral part of results-based management, the focus of program evaluation obviously is on management performance rather than program effectiveness. The program evaluation function serves the management process alongside other current initiatives (the MCI, the Management Accountability Framework, the SPSM, and Program Activity Architecture) that are part of the results-based management regime. With the advent of Expenditure Review as a continuing process, however, program evaluation can and should be a major element in government decision-making. The development of Expenditure Review offers the opportunity to establish a budgeting system that, as an integral part of the government's decision-making process, allocates resources in part on the basis of evidence on program effectiveness.
The Canadian Record
The Canadian record in policy and expenditure management over the past decade is a very good one in at least two of three respects. As measured by public opinion surveys and consultation exercises with major stakeholders, the policy responsiveness of the government gets high scores. The record of fiscal discipline over this same period is also very good. Canada is the only member of the G-8 to have been in surplus for several consecutive years (although it should be noted that Australia and New Zealand are in the same position, with Australia having a better debt situation than Canada).
Canada, however, does not have a good record when it comes to the third criterion of good policy and expenditure management, namely the expenditure budgeting function, where one would expect to find the use of evidence on program effectiveness or departmental performance in allocating resources. On this front, the demise of PEMS in the 1980s, in large part because there was too little fiscal discipline, was not followed by the development of a more rigorous policy and expenditure management process. Program Review in the mid-1990s was such a process, but it was conducted essentially as a one-off exercise. The question now is whether the system can build on Expenditure Review to develop this third component of a good policy and expenditure management system.
The Comparative Context
In terms of budgeting as expenditure management, Canada can learn from international comparative experience. The experience elsewhere varies across jurisdictions and, within jurisdictions, over time, but a number of themes stand out where serious efforts have been made to sustain a critical role for the use of evidence about program effectiveness or departmental performance in government decision-making.
The Australian experience is relevant for a number of reasons, including the extent to which Australian governments have adapted Canadian developments to their own circumstances. The separation of the Treasury Board Secretariat from the Department of Finance and the development of the program evaluation function as an integral part of PPBS are two examples where the Australians explicitly followed Canada. In Australia, program evaluation became a crucial element in the budgeting process. In the words of the Australian Auditor General: "In my view, the success of evaluation at the Federal level of government was largely due to its full integration into the budget process."
Although the current Australian system is more diverse in its performance measurement and evaluation processes, because responsibility has been devolved from the centre, the experience illustrates the need to keep program evaluation, and all other evidence-based forms of analysis, continuously responsive to decision-makers' requirements for timely and relevant evidence. Indeed, ministers and central agencies made greater use of program evaluation evidence in policy and budgetary decision-making in the decade from the mid-1980s to the mid-1990s than they have made of the various kinds of evidence generated by the current performance measurement regime, in which program evaluation is not as central for all departments. This comparison provides an important lesson for future developments in Canada: effectiveness evidence from program evaluations is more likely to provide useful information for ministers and central agency officials than information from performance or results-based measurement regimes.
The Australian experience demonstrates a second crucial point, namely, how important it is to have a budget management function that is an integral but distinct part of the policy and budget allocation decision-making system. In Australia, the budgetary levers of the Department of Finance and Administration show that budget management can be a critical part of the overall policy and expenditure management system. Political responsiveness to changing priorities and fiscal discipline respecting total government expenditures are necessary but not sufficient to secure the greatest possible degree of cost-effectiveness in budgeting. Budget management conducted on the basis of program effectiveness evidence is also required.
The British experience is quite different. It builds on one of the most rigorous performance measurement regimes. It is driven by the biennial spending-review process, in which the Treasury continuously monitors departments' and agencies' progress on performance indicators with respect to their performance targets and, secondly, strategically poses in-depth questions of departments and agencies that demand evidence of performance. Unlike in Australia, the Treasury manages all parts of the process, along with the Prime Minister's chief political and public service advisors, of course; but within the Treasury there are distinct responsibilities for fiscal discipline and expenditure management, with those responsible for expenditure management especially concerned with the use of performance evidence on the achievement of results or outcomes in budget allocations. The British system demonstrates that a strong commitment to performance management eventually requires attention to program effectiveness, a function given increasing attention in Britain as the government has placed top priority on improving outcomes from the delivery of public services. Only so much can be done by securing the greatest possible efficiencies, something the British have pursued for over two decades with great enthusiasm, and by being fiscally responsible, where the British also have a reasonably good record. At some point, the question of the effectiveness of public services as programs designed to achieve specific results or outcomes must come into play. Otherwise money well managed is still money poorly spent.
A third approach is found in the United States, where over the past decade an initiative has been in progress to develop performance management and budgeting. The system explicitly builds on the regime of results-based management and reporting put in place a decade ago. The performance budgeting process is designed to be as simple and straightforward as possible. In the American context this is a virtual necessity, given that performance evidence must be used not only by the Administration in developing its proposed budget but also by the two houses of Congress if a performance budgeting system is to work.
Under the direction of the Office of Management and Budget (OMB), the process of rating the performance of all agency programs seeks to be comprehensive in its coverage, so that agencies are rated on a scorecard that is comparable across the federal government. The process is based on agency scores on criteria established by the OMB after much consultation inside and outside government. The agency scores are calculated based on evidence from the agencies that comes from credible sources, including program evaluations and external audits. The agencies are also rated on their capacity to provide evidence on their own performance. The more independent the evidence, of course, the higher the rating on this score.
What is most instructive from the American experience is the recognition that program evaluation is the best way to ascertain and assess program effectiveness in securing desired impacts, results and outcomes. It acknowledges that program evaluation is not the only tool to measure performance; that program evaluations consist of various types of evaluations; and that the most robust types of evaluation cannot be applied to all kinds of government programs, for methodological and/or cost reasons. However, the bottom line is that if government decision-makers want to strengthen the performance of government they must pay attention to program effectiveness. Program evaluation, accordingly, must be undertaken as a core function of governance and public management.
Two lessons from these comparative experiences are significant to this discussion.
First, the evolution in these three systems illustrates, in different ways, how critical and yet difficult it is to generate and use evidence on program effectiveness and/or departmental performance in decision-making on policy, priorities, budget allocations, and program and management improvement. The evolution, moreover, is not necessarily a continuous record of progress. Commitment to the use of evidence is not always what it needs to be, at the levels of ministers or senior officials or both; competence in understanding the need for and yet limitations of effectiveness evidence is missing in some cases; and, the processes for using evidence can be or become deficient in not embedding the use of evidence in the decision-making process.
Second, these experiences, taken together, illustrate that the use of evidence on effectiveness and performance in government decision-making is a necessary condition of good governance and public management. Political responsiveness and fiscal discipline are necessary but they are not sufficient. All three elements must be present. The point is that government must have programs that have their desired effect. Continuous efforts to ascertain and assess program effectiveness are therefore incumbent on government decision-makers and managers.
Building on Experience
For the Canadian government to build on what it has accomplished to date and to learn from its own experience as well as from international experience, several principles to govern the decision-making process must be accepted.
1. Recognizing Program Evaluation as a Core Function of Public Management
Program evaluation is a core function because it seeks to ascertain and assess the effectiveness of government programs in achieving desired results, impacts and outcomes. The purpose and business of government is to provide programs of various kinds to achieve desired results, that is, to be effective.
Governments should be responsive to citizens and their priorities, including especially those of the major stakeholders of the various programs government provides. But a government is not fully or sufficiently responsive if the programs that it provides, in order to be responsive, are not effective, or not as effective as they could be, within whatever restraints government decision-makers must operate.
Governments should also be responsible in the spending of public monies; ministers and officials must exercise fiscal discipline. But if programs are not effective in achieving results, it counts for nothing that they may be affordable, economical or efficiently managed. Having a budgetary surplus or a balanced budget does not mean that every program provided by government provides value for the money spent. An ineffective program is a waste of public money.
Results-based management, except insofar as it fully incorporates program evaluation, is no substitute for program evaluation, however useful it may be for management control and improvement. Performance measurement regimes do not seek to ascertain or assess program effectiveness. Rather they seek to determine the extent to which departments achieve results or outcomes. They measure achievement against targets. They do not attempt to explain or account for the performance in question, let alone the effectiveness of their programs. In the case of some programs, performance measurement may be all that is required, especially when the outcome is nothing more than the delivery of an output (a good or service). In other cases, it may be all that is possible or feasible, as when it is clear an evaluation would be methodologically impractical or too costly.
At a time when there is more than a little confusion over the numerous "initiatives" undertaken to improve management performance, financial control and public accountability, it is important that the core function of program evaluation be clearly understood by politicians and public servants. Program evaluation is not just another initiative; it is a core function of governance and management. It is not an optional method or technique. Good governance and public management require on-going program evaluation.
2. Embedding Program Evaluation in the Decision-Making Process
Even though program evaluation is a core function of governance and public management, it still needs to be embedded in the decision-making process. Program evaluation is one of those functions, along with planning, coordinating and reporting, that can be ignored by decision-makers who are inattentive to the requirements of good governance and public management. A critical test of the quality of a government's decision-making process, accordingly, is whether there are requirements to ensure that evidence on program effectiveness is brought to bear in decision-making.
Embedding program evaluation in decision-making requires that there be a corporate or whole-of-government requirement that program evaluations be conducted and that the evidence from them be used in decision-making. Letting departmental managers decide whether or not to do program evaluations, under a philosophy of management devolution, ignores the fact that government decision-making, including government budgeting, at some point becomes more than a departmental responsibility; it becomes a corporate or whole-of-government responsibility. Departments are not independent of government as a whole or, for that matter, of each other. They have corporate and, increasingly, horizontal responsibilities where decision-making must involve central agencies and other departments.
In recognition of the particular circumstances and undertakings of different departments, departments may be given some flexibility or discretion in regard to the coverage, timing and types of the program evaluations that they use. But, in principle, there should be no exceptions to the requirement for evidence on program effectiveness in government decision-making. Requiring all departments to conform to the requirements of this core function is not a step backwards in public management reform. Public management reform over the past twenty-five years has always and everywhere assumed that some dimensions of governance have to be conducted at the centre and according to corporate principles. Even those who most favour decentralization, deregulation and devolution recognize that there must be some "steering" from and by the centre. The idea that the centre should not prescribe at all represents a misreading of the credible public management reform models as well as a failure to acknowledge the legitimate demands of Parliament and citizens for a central government structure that directs, coordinates and controls the various departments of government.
3. Linking Program Evaluation to Budgeting and Expenditure Management
Embedding program evaluation in the decision-making process means that evidence on program effectiveness should be brought to bear in government decision-making. In particular, it should be brought to bear in budgeting and expenditure management, since this is the place where evidence about program effectiveness is most likely to have the greatest use. First, evidence on effectiveness can help to decide on priorities in terms of resource allocation, including reallocation. Second, it can help to decide on changes to existing budgets, where the evidence suggests that changes are required, including, but not only, incremental adjustments upwards.
It follows, of course, that the budgeting and expenditure management process must incorporate a 'budget office' function that constitutes the primary centre for the consideration of evidence from program evaluations: first, as a source of advice into the policy and resource allocation processes; and, second, as the main decision-making centre for making changes to the existing budgets of programs. The budget office function requires that there be discretionary funds available for enriching programs, as necessary based on evidence, but also the discretion to alter program budgets as necessary. In practice, this means that there should always be a budget reserve in order that the expenditure management function can be performed without having to resort to a review and reallocate exercise. In the Canadian context, the budget office function is obviously one that should be performed by the Treasury Board/Secretariat. These two dimensions are distinct activities but need to be closely interrelated in practice since each is likely to be best performed by a single set of central agency officials who are as fully informed as possible of the programs in a given department or area of government.
This function also brings with it the leverage the Treasury Board/Secretariat needs to interact with departments in ways that make departmental managers pay close attention to TBS demands for evidence on the effectiveness of departmental programs. It also justifies TBS assessing the extent to which the evidence on effectiveness meets the standards of quality expected of departments, even allowing for variations given the nature of each department's programs.
4. Securing Independence for Program Evaluation
Program evaluation is a core function that requires a degree of independence from those directly responsible for the programs being evaluated. The function is one that, like internal audit, must be undertaken by those at least one step removed from program management. There is a tradeoff, however, in securing the independence of program evaluation: the more external, the greater the independence; the more internal, the greater the ownership of the findings. There is no easy resolution.
Placing responsibility for program evaluation at the departmental level means that deputy ministers decide on the importance of the function in departmental decision-making. This runs the risk that deputies do not see the function as critical to their agenda. It also runs the risk that deputies devolve responsibility to the department's functional specialists in program evaluation, thereby pinning primary responsibility for the use of evidence down the line to program managers. When this happens, program evaluators invariably adjust what they do to focus on being helpful to program managers. This is understandable. But it invariably diminishes the extent to which the program evaluations themselves are relevant to the larger purposes of government decision-making, including expenditure management.
Program evaluation should not focus primarily on assisting managers at the level of program management in improving program management or delivery. Performance measurement systems are better at helping in these regards. Program evaluation is best suited to raising demanding questions about program effectiveness for senior departmental officials. To do so, evaluators need to be independent of program managers. And they should do so in ways that also provide evaluations with which central agency officials can challenge the claims of senior departmental officials in the government decision-making process.
5. Enhancing Quality in Program Evaluation
Program evaluations are demanding exercises not only to undertake but to use. Decision-makers need assurance that the evidence presented by program evaluations is credible. In short, they need to be assured that the program evaluations they use are of the highest quality.
The quality of program evaluations depends on the quality of the staff who carry out the function, the resources devoted to it, and the extent to which the functional community is developed and maintained as a professional public service community. None of this will happen unless there is demand for quality from senior officials or ministers. That demand, in turn, will come if deputy ministers are required to provide evidence of the effectiveness of their departmental programs in the government's decision-making process, if the Treasury Board/Secretariat assesses the quality of the evidence provided by departments, and if the Treasury Board/Secretariat makes decisions on program funding based on demonstrated program effectiveness.
The quality of program evaluation will also be enhanced to the degree that the decision-making system allocates resources to priorities that are linked to actual programs or new program designs. Linking priorities to actual or proposed programs is clearly the hardest part of government decision-making, since program effectiveness evidence is seldom definitive. But absent evidence on program effectiveness, priorities will be established merely on the basis of preferences. That is rarely good enough for achieving value for money.
1. Canadian Government Documents
Auditor General, Report to the House of Commons, 2004, Chapter 7, "Managing Government: A Study of the Role of the Treasury Board and its Secretariat".
Auditor General, Report to the House of Commons, 1993, Chapter 8, "Program Evaluation in the Federal Government: The Case for Program Evaluation"; Chapter 9, "Program Evaluation in Departments: The Operation of Program Evaluation Units"; Chapter 10, "The Program Evaluation System: Making It Work".
Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat. 2004. "Evaluation Function in the Government of Canada." Draft, July 6.
Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat. 2005. "Preparing and Using Results-Based Management and Accountability Frameworks." January.
Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat. 2004. "Report on Consultations," by Peter Hadwen Consulting Inc. March.
Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat. 2004. "Capturing Evaluation Findings: Evaluation Review Information Component (ERIC)." October.
Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat. 2004. "Centre of Excellence for Evaluation: 2003-04 to 2004-05." September. RDIMS #247843.
Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat. 2004. "Review of the Quality of Evaluation across Departments and Agencies." Final Report. October.
Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat. 2003. "Interim Evaluation of the Treasury Board Evaluation Policy." January.
Comptrollership Modernization Directorate, Results-Based Management and Accountability Framework of the Modern Comptrollership Initiative, April, 2003.
McCormack, Lee. 2004. "Getting the Foundations Right." November 30. (Executive Director, Results-Based Management Directorate, Treasury Board Secretariat).
Hunt, Terry Dale. "Policy Evaluation System for the Government of Canada" (Centre of Excellence for Evaluation, Treasury Board of Canada Secretariat). Undated.
Privy Council Office, Guide to Making Federal Acts and Regulations, 2nd Edition, 2001.
Privy Council Office, Memoranda to Cabinet, January 2, 2004.
Treasury Board of Canada Secretariat. 2004. "Report on Effective Evaluation Practices"
Treasury Board of Canada Secretariat. 2005. "Proposal for Departmental Comptrollers," Consultation Draft, January 19.
Treasury Board of Canada Secretariat, Evaluation Policy, April 1, 2001.
Treasury Board Secretariat. 2003. TBS Management Accountability Framework.
Treasury Board Secretariat. 2004. "Awareness Session on the Management Accountability Framework – Draft Course Deck." June 22. (Industry Canada and Transport Canada).
Treasury Board Secretariat. 2004. "Evidence Document: Commission of Inquiry into the Sponsorship Program and Advertising Activities (Gomery Commission)", October 22.
2. Other jurisdictions
Barrett, Pat, "Results Based Management and Performance Reporting – An Australian Perspective," Address to the UN Results Based Management Seminar, Geneva, October, 2004. (Auditor General of Australia)
Bushnell, Peter. 1998. "Does Evaluation of Policies Matter?" Evaluation (Sage), Vol. 4, No.3, 363-371. (New Zealand Treasury)
Davies, Philip Thomas. "Policy Evaluation in the United Kingdom" (Prime Minister's Strategy Unit, Cabinet Office). Undated.
Le Bouler, Stéphane. "Survey of Evaluation in France" (Commissariat général du Plan). Undated.
Lyon, Randolph Matthew, "The U.S. Program Assessment Rating Tool (PART)" (Office of Management and Budget). Undated.
OECD, Public Sector Modernization: Governing for Performance, Policy Brief, 2004.
3. Academic Publications
Ellis, Peter. 2004. "Evaluating the Australian international development cooperation – A third generation in program evaluation?" Canberra Bulletin of Public Administration, No. 111, March, 1-7.
Halligan, John. 2003. "Public sector reform and evaluation in Australia and New Zealand," in Helmut Wollman (ed.), Evaluation in Public Sector Reform (Cheltenham: Elgar), 80-97.
Kelly, Joanne. 2003. "The Pursuit of an Illusive Idea: Spending Review and Reallocation under the Chrétien Government," in G. Bruce Doern (ed.), How Ottawa Spends, 2003-2004 (Don Mills: Oxford University Press), 118-133.
Mackay, Keith. 2003. "Two Generations of Performance Evaluation and Management System in Australia," Canberra Bulletin of Public Administration, No. 110, December, 9-20.
McGuire, Linda. 2004. "Contractualism and performance measurement in Australia," in Christopher Pollitt and Colin Talbot (eds.) Unbundled Government: A Critical Analysis of the Global Trend to Agencies, Quangos and Contractualisation (London: Routledge), 113-139.
Muller-Clemm, W.J. and Maria Paulette Barnes. 1997. "A Historical Perspective on Federal Program Evaluation in Canada," Canadian Journal of Program Evaluation, Vol. 12, No. 1, Summer, 47-70.
Perrin, Burt. 1998. "Effective Use and Misuse of Performance Measurement," American Journal of Evaluation, Vol. 19, No. 3, 367-379.
Pollitt, Christopher. 2000. "How Do We Know How Good Public Services Are?", in B. Guy Peters and Donald J. Savoie (eds.), Governance in the Twenty-First Century (Montreal: McGill-Queen's University Press), 119-152.
Pollitt, Christopher, Colin Talbot, Janice Caufield and Amanda Smullen. 2004. Agencies: How Governments Do Things through Semi-Autonomous Organizations (Houndmills, Basingstoke: Palgrave Macmillan).
Prince, Michael. 2004. "Soft Craft, Hard Choices, Altered Context: Reflections on 25 Years of Policy Advice in Canada." Draft.
Talbot, Colin. 2004. "The Agency idea," in Pollitt and Talbot, Unbundled Government, 3-21.
Wanna, John, Joanne Kelly, John Forster. 2000. Managing Public Expenditure in Australia (St Leonards, NSW: Allen & Unwin).