
Five-Year Evaluation of the Management Accountability Framework





Final Report

Executive Summary

Performance management frameworks are integral to the success of organizations within the public and private sectors.  This is especially true for complex organizations such as the Federal Government of Canada. 

In November 2008, the Treasury Board Secretariat (TBS) commissioned PricewaterhouseCoopers LLP and Interis Consulting Inc. to conduct a five-year evaluation of the Management Accountability Framework (MAF), TBS’ performance management framework.  As part of this evaluation, we considered how TBS should continue its evolution of the MAF. 

Our evaluation approach involved interviews, consultations, literature review, international comparison, a costing survey and cost analysis.  We have been greatly assisted by feedback provided by various advisory groups, a DM Steering Committee and discussions with the MAF Directorate within TBS.  We note that there were some limitations with respect to the evidence – primarily the lack of robust costing data, as departments and agencies are not tracking costs related to the MAF assessment, and the limited empirical evidence on the improvement of management practices, given the evolving nature of MAF.

Our evaluation compared MAF against three comparable frameworks in the following jurisdictions:  the United Kingdom, the European Union and the United States.  The frameworks in place in these jurisdictions are similar in intent (performance improvement) and methodology (regular diagnosis of organizational capability) to the MAF.  Key elements of these frameworks that may be leveraged to further enhance the usefulness and sustainability of MAF are as follows.

A summary of our findings from the multiple lines of evidence is as follows:

  1. Through formalizing expectations of management, MAF has led to increased focus on management practices, i.e., management matters.  MAF is becoming a catalyst for integrating best practices into departments and agencies.
  2. There continues to be a place for ongoing dialogue between TBS and deputy heads during the MAF process.
  3. TBS has in place a structured and rigorous process for reviewing the MAF assessment results.
  4. There has been increasing stability of the MAF indicators in recent rounds.
  5. The current approach of assessing each AoM annually for large departments and agencies does not consider the unique risks and priorities of organizations.
  6. While the reporting burden associated with MAF has been reduced, there are further opportunities to reduce the impact of the MAF assessment on departments and agencies.
  7. The subjectivity of the MAF assessments is a result of a significant number of qualitative indicators built into the assessment approach.
  8. Many of the lines of evidence from Round VI are process-based, which do not measure effectiveness of the outcomes resulting from the process.
  9. Our consultation indicated that most departments and agencies were unsure of the cost effectiveness of MAF.
  10. TBS is the appropriate entity to measure managerial performance within the Federal Government.

Based on our analysis of the evidence arising from the multiple lines of enquiry, we have concluded that MAF is successful and relevant.  MAF is meeting its objectives and it should continue to be maintained and supported.  MAF provides a comprehensive view to both deputy heads and TBS on the state of managerial performance within a department or agency.  In support of prioritization and focus on management practices across the Federal government, MAF is a valuable tool.

Due to the limitations of the costing data, as noted above, we are unable to conclude on the cost effectiveness of MAF.  Going forward, TBS may want to consider providing guidance to departments and agencies to allow the costs of the MAF process to be tracked.

Based on our analysis of the inputs and comparison to comparable jurisdictions, our recommendations for the continued evolution of MAF as a performance enabler are as follows.

  1. Implement a risk/priority based approach to the MAF assessment process; possible options include:
    • Assessment based on the risks/priorities unique to each organization;
    • Clustering of the MAF indicators subject to assessment into categories:  mandatory, optional, and cyclical;
    • Assessment based on the results of previous rounds; and
    • Assessment based on department/agency size.
  2. Develop guiding principles (“golden rules”) for assessing managerial performance and incorporate into the existing MAF assessment methodology, including:
    • Maintain an appropriate balance between quantitative and qualitative indicators within each Area of Management (AoM);
    • Develop outcome-based indicators of managerial performance;
    • Leverage information available through existing oversight activities;
    • Maintain the stability of indicators, which is critical to the assessment of progress over time;
    • Ensure clarity and transparency of indicators, measurement criteria and guidance documentation;
    • Engage functional communities in open dialogue in the ongoing development of AoM methodology and assessment measures;
    • Include assessment and identification of both policy compliance and result-based managerial performance; and
    • Seek ways to recognize and provide incentives to encourage innovation.
  3. Introduce a governance body with senior representatives from departments and agencies to guide MAF.  This deputy head or departmental senior executive committee could advise on changes to the MAF process and on system-wide MAF results to guide government management priorities.
  4. Develop a stakeholder communication/engagement strategy and plan, including early engagement when changes are made to the MAF assessment process.  TBS could leverage existing communities and forums across the Federal government to ensure timely communication of changes, which will allow stakeholders sufficient time to respond to and accept any changes.
  5. Assign formal responsibilities within TBS to oversee the MAF assessment methodology and framework and management of horizontal issues.  This could be achieved by expanding the role of the MAF Directorate within TBS to provide horizontal oversight and integration across AoMs for both methodology and results.





1 Introduction

1.1  Overview

In 2003, the Treasury Board Secretariat (TBS) introduced the Management Accountability Framework (MAF) with the intent of strengthening deputy heads/departmental accountability for management. As a performance management framework, the purpose of MAF1 is to:

To emphasize the importance of sound managerial skills for deputy heads, the MAF assessment results form an input into the Privy Council Office (PCO) process for assessing deputy head performance.

The MAF assessment process is managed by the MAF Directorate in the Priorities and Planning Sector within TBS.  The MAF is an annual assessment and, since 2003, six MAF assessments have been conducted.

1.2  MAF Evaluation

In November 2008, TBS commissioned a five-year evaluation of the MAF.  The objectives of this evaluation were to:

A combined team from PricewaterhouseCoopers LLP (PwC) and Interis Consulting Inc. (Interis) was contracted to complete the evaluation.  The evaluation questions were developed by TBS with input from the deputy head community, and the overall evaluation approach was agreed upon by TBS.

Evaluation Methodology and Approach

The evaluation was initiated with the clear objective of performing an assessment of MAF and developing recommendations for improvement.  The Statement of Work (SOW) developed by TBS identified 23 evaluation questions.  We based our evaluation framework on those questions, developing indicators and evaluation methods to address each question.  As well, to support an integrated analysis, we grouped the evaluation questions into four strategic questions (Annex E identifies the 23 evaluation questions and their grouping under the four strategic questions).

In addressing the evaluation questions, multiple sources of evidence were used.  This included document and literature reviews, interviews, consultations and roundtables with stakeholder group representatives, an international comparison with comparable jurisdictions and a costing analysis.

A brief summary of our data gathering approach is as follows:

Evidence from each of these sources was analyzed and synthesized to develop the findings, conclusions and recommendations that are included in this report. Figure 1 below depicts, at a high level, the evaluation framework we used, including the linkages between the four strategic questions and the three main evaluation issues expected to be covered by the TB Evaluation Policy.

Figure 1: Evaluation Framework

During the course of the planning, conduct and reporting phases of the evaluation, oversight and guidance were obtained from the following sources:

Limitations of the Evaluation.  The following are the main constraints we faced in completing the evaluation as planned:

MAF is an information-gathering tool used by TBS to assess departmental management performance, which, in turn, is critical to ensuring that programs and services are delivered to the highest standards in the most cost-efficient fashion.

Given the qualitative and subjective nature of MAF, as a tool to assess management performance, the foregoing limitations are reasonable.  The evaluation team was able to address the evaluation questions by engaging relevant stakeholders and corroborating information gathered through multiple lines of evidence.  As a result, it is our view that the evaluation standards, as defined in TB Evaluation Policy (2001), have been met.

1.3  Purpose of Report

The purpose of this report is to provide:

1.4  Conclusions

As we describe in detail in section 3 of the report, we have concluded that MAF is successful and is meeting its current objectives.  MAF has clarified management expectations for deputy heads, has guided TBS engagement with departments and agencies, and has provided both an enterprise-wide view of management practices to departments and a view of government-wide trends and management issues to TBS.

Further, we have concluded that MAF is a valuable and relevant management tool that should continue to be maintained and supported.  In stating this we note that, driven by the need to meet increasing expectations for clear demonstration of accountability within the public sector, MAF has evolved significantly since its inception in 2003, from a relatively informal approach to a much more rigorous assessment.  Based on the results of our international comparison, it is reasonable to conclude that had MAF not existed, something similar would have been needed to meet these increased accountability requirements.   

While we were unable to conclude on the cost effectiveness of MAF, due to limitations of the costing information available, or on the accuracy/validity of the assessment results, we were able to conclude that the MAF assessment process is robust and the results generally reflect the realities of organizations. 

We have identified areas where improvements can be made to enhance the efficiency and effectiveness of the MAF process and enhance the overall validity of the assessment results.  Going forward, TBS may consider developing a costing approach that, once implemented, would establish a baseline against which to compare costs in future years.  Further, validation of the MAF logic model with key stakeholders will be essential for its use as a basis for future performance measurement.

Finally, to ensure that MAF continues to meet its objectives and continues to support efforts towards management excellence, we have concluded that MAF should continue its evolution as a performance enabler for deputy heads.  The recommendations outlined in Section 4.0 will support this transition.



2  Putting MAF in Context

2.1  Importance of Performance Management Frameworks

To achieve organizational goals, organizations must motivate individuals and groups towards a common goal; performance management is a means used to get there.  Performance management is defined as a cycle of managerial activities that includes planning, measuring results and using measurement to reflect on the accomplishment of objectives.4  This cycle can be applied at various levels: individual, team/business unit or organizational.  Fundamentally, this involves integrating goal setting, measurement control, evaluation and feedback in a single ongoing process aimed at fostering continuous improvement in the creation of value.  A performance management framework usually refers to a specific process to accomplish the performance management cycle.  These frameworks usually include:

Effectively managing business performance in today’s complex, ever-changing, competitive environment is critical to the success of any organization.  In fact, studies5 have shown that organizations that manage performance through measurement generally are far more successful than those that do not.  For example, TBS’ implementation of the MAF has resulted in an increased focus on improving managerial performance within departments.

There are a number of benefits that organizations can reap by implementing a performance management framework.  Some other benefits that can accrue to organizations are:

These benefits can be realized when an effective performance management system is implemented.  An effective performance management system “encourages employee behaviours that drive positive results whereas an ineffective performance management system, at best, utilizes rewards inefficiently and, at worst, adversely affects the outcomes that it is intended to improve.”6 

2.2  International Comparison

As part of this evaluation, we conducted an international comparison of the MAF against approaches currently used by other public sector jurisdictions.  Specifically, we compared the MAF against management performance frameworks used in the United Kingdom (UK), the European Union (EU) and the United States (US).7 

We did review approaches in Australia and New Zealand but determined they were not comparable for our purposes because they either had a different focus or were not implemented public-sector wide.  In Australia, the Executive Leadership Capability Framework is a competency framework, not an overall management performance framework.  In New Zealand, the Capability Toolkit shares similarities with MAF but the framework is used on a voluntary basis and limited information is available on the impact of its application across the public service.

Internationally, MAF is considered to be one of the more sophisticated management practices systems.  For example, according to the Organization for Economic Co-operation and Development (OECD)8, "An exceptional model for widening the framework of performance assessments beyond managerial results to include leadership, people management and organisational environment is provided by the Treasury Board of Canada Secretariat (TBS) Management Accountability Framework (MAF)."

An overview of each framework is provided below, with more details provided in Annex D to this report.

Table 1: Overview of International Comparator Organizations

Purpose
• MAF (Canada): Clarify management expectations and foster improvements in management
• Capability Review (UK): Assess organizational capability to meet the government’s delivery objectives
• Common Assessment Framework (EU): Improve public sector quality management
• President’s Management Agenda (US): Improve agency performance in five specific areas

Methodology
• MAF (Canada): Annual review by central agency (exceptions for small and micro-agencies)
• Capability Review (UK): External review with six-, 12- and 18-month stock takes; second-round review after two years
• Common Assessment Framework (EU): Internal reviews on an optional basis in line with the results of the self-assessment
• President’s Management Agenda (US): Quarterly self-assessments

Assessors
• MAF (Canada): TBS personnel review of documentation and questionnaires
• Capability Review (UK): Three assessors from outside central government and two directors general from other government departments review documentation and conduct interviews; input and analysis provided by the Cabinet Office
• Common Assessment Framework (EU): Self-assessment; project teams internal to the organization
• President’s Management Agenda (US): Office of Management and Budget (OMB) reviews internal progress indicators and Green Plans

Judgment vs. Evidence-based
• MAF (Canada): Judgment and evidence-based criteria
• Capability Review (UK): Judgment and evidence-based criteria
• Common Assessment Framework (EU): Judgment and evidence-based
• President’s Management Agenda (US): Evidence-based

Remuneration
• MAF (Canada): Linkage to performance pay of deputy heads through the Committee of Senior Officials
• Capability Review (UK): Linkages to salary decisions of Permanent Secretaries
• Common Assessment Framework (EU): Depends on how the framework is applied
• President’s Management Agenda (US): Linkages to both budget allocations and Chief Operating Officer evaluations

Treatment of Entities
• MAF (Canada): One-size-fits-all (exception: small and micro agencies assessed using a different standard on a three-year basis)
• Capability Review (UK): Consistent approach for all organizations assessed
• Common Assessment Framework (EU): Adaptable by organizations
• President’s Management Agenda (US): Consistent approach for all organizations assessed

Publication
• MAF (Canada): Yes, on TBS website and optionally on departments’ websites; often includes a management response
• Capability Review (UK): Yes, on agencies’ websites and press release
• Common Assessment Framework (EU): Voluntary submission of scores into the CAF database
• President’s Management Agenda (US): Yes, on departments’ websites

United Kingdom’s Capability Review

The UK’s Capability Review (CR) program was launched by the Cabinet Secretary in 2005.  The CR was the first organizational capability assessment framework in the UK to assess systematically the organizational capabilities of individual departments and to publish results that can be compared across departments.  Its objective is to improve the capability of the Civil Service to meet the government’s delivery objectives and to be ready for program delivery.   

The CR addresses three broad areas of management capability: leadership, strategy and delivery.  Using a standard list of questions and sub-criteria related to 10 elements within the three broad management areas, a review team completes its analysis using a combination of evidence and surveys provided by the department, together with interviews and workshops conducted over a short two- to three-week period.

Using judgment and based on the information gathered during the assessment period, the review team assigns a rating to each element along a five-point scale: strong, well placed, development area, urgent development area or serious concerns.  Results of the review, which set out areas for action, are published.  The debriefing process is an honest, hard-hitting dialogue between the Cabinet Secretary, the Head of the Home Civil Service and the departments, on the basis of which action plans are devised.

For the first round of reviews, all major government departments were reviewed in five tranches between July 2006 and December 2007.9  A three-month challenge and six-month, 12-month and, as necessary, 18-month stock takes take place to ensure the department is making progress towards the action plan.  Second-round reviews take place two years after the original first-round review.

The CR is managed and organized by the Cabinet Office.  In an effort to bring a level of independence to the review results, the five-person review teams include two private sector external experts and one local government representative, in addition to two representatives from peer government departments.

The CR has been well received in the UK as departments think that it has added value.  The use of a review team external to the department adds a level of independence to the results.  A significant amount of judgment is applied when assessing departments within the CR.  The application of judgment is critical when assessing organizational capability; however, this also increases the subjectivity within the results of the review.  While the reviews rely on judgment, they are based on evidence that can be compared from one department to another and the assessments are reviewed by an independent external moderation panel to ensure consistency.  Finally, although the CR was designed to assess departments’ capability to meet current and future delivery expectations, no direct correlation has been made between capability and delivery performance. 

European Union’s Common Assessment Framework

In 2000 the Common Assessment Framework (CAF) was launched by the European Public Administration Network (EUPAN) as a self-assessment tool to improve public sector quality management within EU member states.  It is based on the premise that excellent results in organizational performance, citizens/customers, people and society are achieved through leadership driving strategy and planning, people, partnerships and resources and processes. It looks at the organization from different angles at the same time, providing a comprehensive and integrated approach to organization performance analysis.10 

The CAF is based on the European Foundation for Quality Management (EFQM) and the model of the German University of Administrative Sciences in Speyer.  It is composed of nine criteria, categorized into two broad segments - Enablers and Results – that define the cause and effect relationship between organizational capabilities.  Each criterion is subdivided into sub-criteria, for which examples demonstrate that specific managerial practices are in place.

The CAF is not imposed on organizations within the EU;11 it is a voluntary tool provided to agencies as a means to improve organizational effectiveness.  As a result, the CAF is a self-assessment tool completed within the agency being assessed.  Typically, an internal review team is assembled within the organization to apply judgment to organizational performance against the indicators.  The CAF was not designed to be a one-size-fits-all model.  Flexibility has been built into the approach to allow for tailoring to the needs of the organization.  For example, a number of specialized CAF models exist for different sectors (e.g., education, local government, police services and border guards).  Finally, the CAF does not prescribe what specific organizational practices need to be in place; only the broad practices that need to be in place.

The key advantage of the CAF is the flexibility that is built into the framework, allowing each organization to respect the basic structure of the framework while applying only relevant examples in the self-assessment process.  This ensures the relevance of the results to the organization.  However, the voluntary and self-assessment nature of the assessment tool limits the rigor, independence and comparability of the results over time and across organizations.

United States’ President’s Management Agenda

The President’s Management Agenda (PMA), launched by the Bush Administration in 2001, focuses on improving organizational effectiveness in five key areas of management across the whole of the US government.  The program was established to improve the management and performance of the federal government and deliver results that matter to the American people.  The PMA is not a comprehensive assessment framework; rather, it defines five specific government-wide initiatives thought to be of importance at the time.  These include:  i) Strategic Management of Human Capital, ii) Competitive Sourcing, iii) Improved Financial Performance, iv) Expanded Electronic Government and v) Budget and Performance Integration. 

Each initiative, for all 26 major federal departments and agencies, is scored quarterly using a red, yellow and green system.  The PMA uses a “double scoring” approach whereby the red, yellow, green score is applied for the current status as well as for progress.  Progress in this case is defined as the execution of improvement plans based on the current assessment status.  The PMA uses a self-assessment approach based on the standards that have been defined for each initiative.  The head of each agency (Chief Operating Officer) is responsible for conducting his/her own agency’s assessment on a quarterly basis.

The Office of Management and Budget (OMB – in the Executive Office of the President) is responsible for the PMA; however, the President’s Management Council, made up of Chief Operating Officers for all 26 agencies, meets regularly to review progress against the PMA for all agencies.

While the PMA consists of a standard process for each organization, it is limited in its focus to those five key government-wide management priorities.  Consistent with the CAF, the self-assessment approach, which is practical given the quarterly assessment cycle, can limit the rigor and independence in the assessment results; however, this is balanced by the oversight of the President’s Management Council, which would be the equivalent of a deputy head oversight committee.

Further details of each comparative jurisdiction have been provided in Annex D to this report.

Applicability to MAF

Our review of the frameworks in place in other jurisdictions indicates that the tools being used are similar in intent (performance improvement) and methodology (regular diagnosis of organizational capability) to the MAF.  Further, all jurisdictions appear to face similar challenges including demonstrating the link between the assessment and ultimate performance improvement, the ability to define clear measures, and finding ways of improving the efficiency of the approaches. 

There are opportunities for MAF to leverage elements of these frameworks to further enhance the usefulness and sustainability of the tool.  A summary of the key considerations for MAF from the international comparison exercise, which will be further examined in Section 3.0 below, is as follows.



3  MAF Assessment

Based on a series of 23 questions identified by a group of deputy heads, we developed four key strategic questions to support our review of MAF as a management performance assessment tool, the associated methodology and administrative practices and the benefits of MAF, relative to costs.  The four strategic questions are as follows:

A brief overview of the MAF process is outlined below.  In the sections that follow, each of the four questions is individually addressed, including a conclusion for each strategic question.

3.1  MAF Model, Process and Cycle

The MAF is structured around 10 broad areas of management or elements: governance and strategic direction; values and ethics; people; policy and programs; citizen-focused service; risk management; stewardship; accountability; results and performance; and learning, innovation and change management.  Figure 2 below displays the 10 MAF elements.

Figure 2: MAF 10 elements

Organizations are assessed in 21 Areas of Management (AoM), each of which has lines of evidence with associated rating criteria and definitions to facilitate an overall rating by AoM.  The four-point assessment scale rates each AoM as strong, acceptable, opportunity for improvement or attention required.  The MAF Directorate, within TBS, has recently developed a logic model for MAF to clearly articulate the outcomes.  The logic model is presented in Annex B to this report.  It is important to note that MAF is not the sole determinant of a well-managed public service.  Apart from its broad oversight role as the Management Board and Budget office, TBS oversees Expenditure Management System (EMS) renewal (whereby TBS advises Cabinet and the Department of Finance on potential reallocations based on strategic reviews to ensure alignment of spending to government priorities) and the Management Resources and Results Structure Policy (which provides a detailed whole-of-government understanding of the ongoing program spending base).  The alignment of MAF to government priorities is presented in Annex C.

The MAF assessment process is performed annually by TBS based on evidence submitted from departments and agencies to support the defined quantitative and qualitative indicators within the framework.  Assessments are completed by TBS representatives, including a quality assurance process to ensure results are robust, defensible, complete and accurate.

For each assessment round, all 21 AoMs are assessed for every department and agency, with the following exceptions:

The main steps to the MAF assessment process are detailed below:

Step 1 – Priority Setting: The MAF assessment process begins with the previous round’s context page provided by TBS to the deputy head, which outlines the key priorities to be addressed in the coming year.  During the year, action plans and efforts are made by the departments and agencies to address the identified deficiencies and improve managerial performance within the organization.

Step 2 – Preparation and Document Submission: In advance of the assessment round, TBS conducts awareness and training sessions to introduce changes in the assessment criteria.  Once the assessment round commences and based on the documented assessment and rating criteria developed for each AoM and line of evidence, departments and agencies submit documentation to enable an assessment against the measures.

Step 3 - Assessment: The process to arrive at a preliminary assessment includes detailed analysis, reviews and approvals within the policy and program sectors of TBS.  This includes vertical (a review of all AoM results for an individual department or agency) and horizontal reviews (a comparison of AoM results across all departments and agencies assessed), which are performed by senior TBS officials.  Once completed, the draft results are presented at a TBS Strategy Forum session whereby the Associate Secretary, the Program Sector Assistant Secretaries and the AoM Leads review and discuss the draft results on a department by department basis.  Based on the results of the Strategy Forum, the draft assessment results are released to the individual departments and agencies for feedback. 

The MAF Portal

The MAF Portal is the tool used to facilitate the submission of evidence to address the lines of evidence and to ensure timely and effective communication between TBS and departmental MAF representatives. The Portal was piloted at the beginning of MAF Round V to facilitate and improve communications between TBS and departments and agencies. Since then, it has become an essential tool for both TBS and MAF respondents for executing the MAF process. The Portal facilitates the departmental submission of documents to TBS for use as evidence to support MAF assessments and ratings and enables the archiving of all documents sent electronically by departments and agencies.

TBS uses a web-based application called the MAF Assessment System, which is linked to the MAF Portal. This application has been developed and enhanced during the past three MAF assessment rounds and is used to process and archive the TBS assessment activities.

Step 4 – Finalization and Reporting: Based on feedback from the departments and agencies, the assessment results are finalized.  In addition, a context page and a streamlined report are drafted to support the assessment results and identify management priorities for the upcoming year.  A second TBS Strategy Forum session is conducted to provide guidance on any context pages, streamlined reports or assessment ratings and to finalize the materials and obtain consensus on the final departmental assessments. 

Step 5 - Communication: The final reports are approved by the Secretary of the Treasury Board and a letter is sent by the Secretary to each deputy head informing them of the results and identifying priorities to address in the upcoming year.  To provide support in the development of action plans, bilateral debriefings are scheduled between the deputy head and the Secretary for selected departments, with the Program Assistant Secretaries addressing the remaining departments.  This is complemented by the Committee of Senior Officials (COSO) process, whereby the Clerk (or Deputy Clerk) of the Privy Council provides direct feedback to each deputy head regarding his/her performance, including the MAF results for his/her department.  Beyond the deputy head community, to facilitate broader communication to all departmental officials involved in the MAF assessment process, departmental results are posted in the MAF portal. 

Step 6 – Publication:  To ensure transparency to the public, the MAF assessment results are publicly released on Treasury Board’s website.

Step 7 – Post-Mortem:  Once the round concludes, the MAF Directorate launches a post-mortem with Treasury Board Portfolio (TBP) staff, management area leads, and MAF coordinators in various departments and agencies.  The purpose of the post-mortems is to assess the process and methodology to identify areas for improvement to be integrated into the subsequent round.  Recommendations are made with appropriate consultations of TBP staff and the MAF Directorate and may include improvements to the process and/or timelines, adjustments to management areas, the introduction of new models or methodology, or additional training.  TBP’s Management Policy and Oversight Committee and the Secretary of the Treasury Board are responsible for approving any recommendations before implementation into the next round.

Figure 3 summarizes the key milestones within the MAF assessment process.

Figure 3: MAF Assessment Process

The MAF Directorate has taken a leading role in supporting MAF’s evolution and addressing concerns of stakeholders.  During Round VI, the MAF Directorate facilitated a Leading Management Practices conference for departments and agencies.  This forum provided TBS’ guidance on specific AoMs where horizontal results across departments and agencies were weaker, and showcased departments and agencies of various sizes that had successfully implemented best practices within individual AoMs.  AoM leads have also taken the initiative to facilitate dialogue and efforts to address horizontal issues identified as a result of MAF by leveraging existing functional communities, including the Information Technology, Internal Audit and Financial Management communities, as examples.

Other initiatives undertaken by the MAF Directorate include the development of MAF methodology and guidance documents, the management and improvement of the MAF Portal, the development of the MAF logic model and the framework for a risk-based assessment approach.

3.2  Is MAF Meeting Its Objectives?

As it has only recently been developed, we were unable to base our evaluation approach on the MAF logic model presented in Annex B.  In evaluating MAF, we identified that it has evolved from an initial “conversation” between deputy heads and TBS to a much more defined, rigorous and documented assessment.  Through our interactions with TBS, we confirmed that this evolution was a conscious approach to address the increasing expectations for clear demonstration of accountability within the public sector12.  As such, we determined that to be of most use, the evaluation should focus on the current objectives for MAF, as outlined in section 1.1 of this report, rather than the initial goals.

It is our view that MAF is meeting its stated objectives.  MAF provides both an enterprise-wide view of management practices to departments and a view of government-wide trends and management issues for TBS.  In addition, we think the 10 key elements of the MAF framework provide an effective tool to define “management” and establish the expectations for good management within a department or agency, a view generally shared by key stakeholders and supported by our review of other models.

In our interviews and consultations, we consistently heard and observed that MAF and the associated assessment process have led to increased focus on management practices and have become the catalyst for best practices.  As reflected in the MAF logic model presented in Annex B to this report, an ultimate outcome of MAF is “continuous improvement in the quality of public management in the Federal public service.”  With this goal in mind, there are opportunities for future evolution to ensure ongoing sustainability, usefulness and achievement of the expected long-term outcomes.

MAF has resulted in a number of unintended impacts, both benefits and consequences.  The unintended benefits have included:

Conversely, the unintended consequences have been:

1. Through formalizing expectations of management, MAF has led to increased focus on management practices, i.e., management matters. MAF is becoming a catalyst for integrating best practices into departments and agencies.

The consistent feedback from departmental representatives was that MAF has been successful at bringing the attention and focus of the deputy heads and senior executives to management issues and has become the catalyst for best practices.  Further, as stakeholders viewed the MAF assessment results as generally accurate, MAF has facilitated the identification of areas where improvements are necessary and has contributed to priority setting, resource allocation and necessary changes to business processes.  MAF has also assisted deputy heads in managing their organizations as the MAF assessment provides an overview and “state of the department” as a baseline for determining priorities and action plans for a new deputy head coming into an organization.
“I’m a fan of MAF. I think what MAF has done is number one; put a marker for deputies on management issues. The fact that there is a marking scheme focuses the mind and shows if you’re making progress.” (Source: deputy head interview)

Several of the deputy heads interviewed stated that they use the AoMs to set managerial priorities and expectations with the department’s senior management team.  For example, MAF has influenced performance agreements with senior management teams, whereby deputy heads include MAF results as a measure of individual performance.  This has included creation of such tools as a MAF Memorandum of Understanding (MOU) or other performance agreements between the Deputy Minister and the Assistant Deputy Minister.

TBS has been able to leverage MAF results to clarify management expectations for deputy heads, provide incentives to departments and agencies to focus on management and determine enterprise-wide trends.  For example, based on the results of individual organization assessments, TBS has been able to provide direction to departments and agencies for short-term priority setting.  We understand that TBS has used MAF results to inform oversight activities and support intelligent risk taking.  By using MAF results to target oversight in the highest risk areas, TBS can recommend and support increasing departmental autonomy for a department to innovate and take risks.  Examples where MAF results currently inform oversight activities include delegations of authority, budget implementation decisions, Strategic Reviews and Treasury Board submissions. 

There are opportunities to use the MAF assessment process to showcase, provide further incentives for and recognize innovation across the Federal government.  In 2009, a report on the Operational Efficiency Programme in the UK recommended that “the Cabinet Office should embed departmental capability and track record in fostering innovation and collaboration in Capability Reviews.”13

MAF has facilitated the identification of government-wide issues and the development of associated action plans.  As owners of the individual AoMs, AoM leads within TBS are responsible for addressing these systemic issues that are identified through the MAF results.   Enterprise trends and systemic issues identified through the AoM assessments are discussed at the Public Service Management Advisory Committee (PSMAC).  PSMAC (formerly the Treasury Board Portfolio Advisory Committee (TBPAC)) serves as the focal point for integrating and ensuring coherent, comprehensive and consistent implementation of the Treasury Board’s integrated, whole of government management agenda.  TBS also provides a précis to deputy heads, identifying horizontal management issues.  By facilitating sharing of best practices across departments and agencies and existing common interest communities, TBS is supporting efforts to prioritize and address these issues. 

2.  There continues to be a place for ongoing dialogue between TBS and deputy heads during the MAF process.

“The next generation of MAF: it would be interesting to sit down with TBS, where they would identify the things they are going to be looking at. Having that kind of dialogue ahead of time and setting the standard of performance for the year.” (Source: deputy head interview)

As noted above, MAF has evolved from an initial “conversation” between deputy heads and TBS to a much more defined, rigorous and documented assessment.  This has allowed TBS to leverage MAF to support its oversight and compliance monitoring role relative to its policy framework.  

We think that MAF has matured to the point that more emphasis could be placed on the level of conversation or engagement between TBS senior officials and deputy heads.  We heard consistent concerns, across a majority of our deputy head interviews, that the “process” has overtaken the “conversation”.  This results, at least in part, from the broad scope and frequency of the MAF assessments, which limit the time and ability of TBS senior managers to engage in meaningful, two-way conversations with all departments.  This has led to the perception, among larger departments and smaller agencies alike, of TBS as an assessor rather than an enabler.  To fully achieve the objectives set out for MAF and communicate expectations of managerial performance, there is an opportunity to more formally engage senior departmental representatives.

For the UK Capability Reviews, engagement at the senior level is seen as a critical part of the review.  As outlined in Take-off or Tail-off? An Evaluation of the Capability Reviews Programme, the consultation and interactions between the permanent secretary and the departmental boards are honest, intense, emotional and hard-hitting, and need to occur in order to drive real change within the department.

Many deputy heads indicated that they would welcome priority-setting and performance review discussions with senior executives within TBS (the Secretary or Assistant Secretaries).  Of the 19 deputy heads who raised the issue of having priority-setting discussions with senior TBS officials, all agreed that they would welcome such discussions.  Regardless of stronger or weaker performance, deputy heads believed strongly that these discussions are important for the departments to address their organization’s current circumstances, communicate their priorities for the year and agree on the performance priorities and expectations.

The existing strategy in the UK for communicating results externally has been much more publicized and visible as compared to MAF, including press releases of results.  While this approach has been designed to ensure the impact of the results, we do not think that there is a benefit to this approach for MAF.



3.3  Are MAF Assessments Robust?

Departmental stakeholders confirmed that recent MAF assessment results generally reflected the state of management practices within their organizations and served to highlight areas of management that required attention.  

“There are hidden costs of NOT doing MAF.” (Source: deputy head interview)

In our view, as a tool for the assessment of departmental managerial performance, the existing assessment approach and methodology for MAF assessments has a solid foundation.  We also noted that it is consistent with the basic elements of assessment models in place within other jurisdictions.  TBS has identified areas of improvement to support improved efficiency and effectiveness of the tool and has taken steps to streamline the process for Round VI; however, stakeholders were quick to express concern over making too many changes or streamlining so much that the focus on management is diminished.

Over the past six rounds of MAF, departments and agencies have taken strides to embed MAF and strong managerial performance into their organizations, pushing the expectations through to their senior management and management teams.  To ensure the continued progression towards this goal and to complement the ongoing “ever-greening” of the assessment tool, enhancements to the assessment methodology and approach should be considered.  Incorporating key elements to ensure transparency and defensibility of measures, as well as improvements to the process to increase its efficiency and effectiveness, will support efforts towards management excellence.

3.  TBS has in place a structured and rigorous process for reviewing the MAF assessment results.

As outlined in Section 3.1 of this report, the current assessment approach uses central agency representatives to complete the assessment with a quality assurance process to support the accuracy of the assessment results.

Our discussions with departmental representatives at all levels indicate that most are not aware of the extent of the quality review process within TBS (detailed in Section 3.1 of this report).  While the MAF assessment process begins with an assessment by analysts, it also involves several quality control and approval steps involving more senior staff, and leads to review and approval at the Associate Secretary and Secretary level.  A key step in this process is the Strategy Forum, involving Assistant Secretaries from the Program and Policy Sectors, as well as all AoM leads, where the draft results for all departments and agencies across all AoMs are reviewed and assessed.  This Strategy Forum is conducted twice: once before the finalization of the draft assessment results and again prior to the release of the final assessment and context page.  This is a key internal governance step in the assessment process to ensure there is a consensus within TBS regarding the MAF assessment results.  Ongoing communication with departmental stakeholders will allow organizations to understand and gain confidence in the quality assurance processes already in place.  This could include consideration of how to communicate results to Parliamentarians.

In the consultation process for this evaluation, a majority of departmental stakeholders raised concerns regarding the TBS analysts involved in the assessments.  These concerns relate mainly to the rate of turnover of the analysts from one year to the next, which impacts the level of the analysts’ experience.  Consistent with other areas of the Federal government, we have learned that turnover of analysts is part of a larger and ongoing talent management challenge.

4.  There has been increasing stability of the MAF indicators in recent rounds.

“One of the complaints that departments have every year is we change the rules of the game.” (Source: Senior TBS official interview)

Feedback we received identified that stability in the measures is desirable.  In an effort to continuously improve the assessment tool, there have been changes within the indicators between each assessment round.  In comparing the overall “Strong” rating for each AoM, we noted that 12 changes, albeit minor ones, were made in the rating definitions between Rounds V and VI.  Overall, however, we noted that effort has been made between Rounds V and VI to stabilize the AoMs and the associated lines of enquiry.

Stability of MAF cycle:  Currently, the timing of the assessment process is aligned with the COSO process of evaluating deputy head performance, as the final MAF assessment results are released in April, ready for the COSO input in May.  However, this timing does not align with the annual planning cycle within departments and agencies.  This results in challenges for departments and agencies in integrating the results of the MAF assessments into the operational plans of the upcoming year. 

Most stakeholder groups, including deputy heads, TBS senior officials, and departmental MAF contacts, identified the timing of the assessment process as an issue; however, it was further noted that any other time of year would also be challenging due to existing commitments.  For example, completing the assessments in the Fall would allow input into the departmental operating plans but would also coincide with the preparation of the Departmental Performance Report (DPR), which tends to impact the same individuals who coordinate the MAF submissions.  As a result of the operational requirements for input into the COSO process and the potential conflicts with existing government cycles, we are not recommending changes to the current cycle and timing of MAF at this time.

5.  The current approach of assessing each AoM annually for large departments and agencies does not consider the unique risks and priorities of organizations.

Selection of AoMs to be assessed each year:  All large departments are assessed every year against all 21 AoMs and all lines of enquiry.  This facilitates comparability of results across the Federal government but does not consider the unique aspects of individual departments and agencies.  There are characteristics of organizations for which the impact of the various AoMs might differ, including: size, industry sector, complexity, portfolio relationship with another department and life cycle stage.

Depending on the nature of the department or agency, there are specific AoMs that may have limited applicability; on the other hand, there are some AoMs that would apply to every organization, given the current government priorities.  For example, AoM #14 – Asset Management varies in importance given the nature of the organization. In contrast, due to the government-wide priority of accountability and given the role of the deputy head as the Accounting Officer, AoM #17 Financial Management and Control is relevant to all departments and agencies.

International jurisdictions that have models similar to MAF have taken varied approaches to tailoring the indicators.  In the UK, the CR indicators are consistent across departments and are not tailored to the specific risk or priorities of the organization.  Contrasting that, the CAF was designed to be flexible and allow individual EU countries to tailor the assessment tool. 

Due to the public nature of the assessment results, receiving a poor rating in an area of low risk or priority to a department or agency could result in inappropriate decisions related to allocation of resources to improve the subsequent year’s rating.  A risk-based or priority-based approach to the AoMs, measuring only those that are considered a priority or risk to the individual organization based on its unique characteristics, would encourage management to focus on the appropriate areas.  All 20 of the deputy heads who spoke of a risk-based approach to assessments agreed that it would enable consideration of the unique risks and priorities of organizations.  This would further support the efforts towards streamlining the AoMs.  These issues were consistently identified during other stakeholder consultations.

Development of priorities for each department and agency:  A common point that was identified by departmental stakeholders was that the results of the MAF assessments that are published on the TBS site are missing context.  For purposes of consistency and comparability, context is not considered in the application of the assessment criteria; however, in recent assessment rounds, diligence has been taken to provide appropriate context to the assessment results in the context page provided with the final assessment results. 

In the US, where assessments take place quarterly, a “double scorecard” approach is in place to not only assess the organization’s current state but also its progress against its implementation plan.

Interpreting an organization’s performance relative to its circumstances (e.g., life cycle stage, progress towards improvement, size) is critical to get an overall picture; however, setting individual priorities based on the unique circumstances of the organization sets an expectation of what is achievable by the department/agency and holds the deputy head accountable for performance against these expectations.  The identification of these priorities and the performance against them would provide the necessary context to understand the results relative to the unique circumstances of the organization.

6.  While the reporting burden associated with MAF has been reduced, there are further opportunities to reduce the impact of the MAF assessment on the departments and agencies.

MAF assessments are completed on an annual basis for all applicable organizations, with the exception of small and micro agencies.  Round VI included 21 AoMs and 68 lines of evidence for which submission of documentation was required to allow a complete assessment.  In Round V, a total of 16,961 documents were submitted to TBS for MAF assessment purposes.  As a result of feedback from the post-mortems, TBS committed to reducing the reporting burden.  In Round VI, this number had been reduced by 50 per cent, in part due to the introduction of document limits per AoM.  This reduction holds true for both the number of documents submitted and the total size (in gigabytes) of the documents.

Acknowledging the increased impact that the MAF reporting requirements have on a small agency, these agencies are assessed on a three-year rotation.  Micro agencies are only required to complete a questionnaire to inform subsequent interviews with TBS senior officials.

“The reporting burden needs to be assessed . . . to explore options for streamlining the documentation process by maximizing the use of information that is available through other oversight mechanisms / assessments.” (Source: deputy head interview)

In the UK and the US, the current models do not assess all organizations; the focus of the approaches is on large departments only, limiting the number of organizations assessed.  This approach is not recommended for MAF, as the results of the MAF assessments are used as one input into the COSO process, and it would impact the comprehensive design of the model.

Despite the positive feedback for these efforts, the reporting burden associated with MAF continues to be a challenge for departments and agencies.  Stakeholders believe that the reporting burden, coupled with the public nature of the assessment results, has led to “playing the MAF system,” and has not necessarily resulted in improvements to management practices.  A risk- or priority-based approach to MAF assessments that limits measurement to selected indicators relevant to the individual organization would support a reduction in the reporting burden.

Streamlining reporting requirements:  The opportunity to streamline the assessment process by leveraging existing information was a theme identified by all stakeholder groups.  Potential sources to inform MAF were identified, including external Audit Committees, Auditor General reports and other centrally available, objective evidence.  There may also be the opportunity to use other information-gathering techniques, e.g., interviews and workshops, to gather evidence on which to complete an assessment, where applicable.

As a progressive step, TBS has taken the initiative to streamline the element of People for Round VII.  The approach taken has resulted in a consolidation of the people management elements currently reflected in AoM #1, 10, 11 and 21 into one AoM called “People Management”15.  The process has further been streamlined to develop measures based on eight existing Key Performance Indicators (KPIs) that do not require any additional reporting requirements beyond what is currently available.  The approach taken was to identify all sources of data requests and all information currently available. Indicators of performance were then developed using the existing information.  This approach may be considered for other AoMs in conjunction with the development of “golden rules” (outlined in section 4 "Recommendations" to this report) within the performance assessment framework.  

Consistency of available guidance for indicators:  The level of guidance provided to organizations to support their understanding of the reporting expectations is inconsistent across the indicators.  For some AoMs, the guidance materials provided to departments and agencies are very detailed and prescriptive, which can add to the complexity of the process.  In large departments, each AoM can be allocated to an individual senior executive; in contrast, smaller agencies must rely on the limited resources they have to respond to MAF, and the volume of guidance material can seem overwhelming, directly affecting acceptance of and satisfaction with the approach.  Development of “golden rules” or common principles, including simplicity of guidance documentation with a consistent look and feel, should provide the necessary parameters and allow for consistency of guidance across AoMs.

7. The subjectivity of the MAF assessments is a result of a significant number of qualitative indicators built into the assessment approach.

Subjectivity of indicators and rating definitions:  For those indicators that are necessarily qualitative in nature (i.e., without quantitative measures), a significant component of the assessment relies on judgment by analysts, reviewers and executives within TBS.  For example, AoM #8 “Managing Organizational Change” attempts to measure the “extent to which the organization is engaged when undertaking change management”, and AoM #1 “Values-Based Leadership and Organizational Culture” seeks to measure the extent to which “organizational culture is reflective of public service values and ethics”.  In these instances, there is a risk that decisions will be difficult to support and that judgment will be applied inconsistently across organizations.

Qualitative measures are a necessary way of measuring specific elements of managerial performance.  The subjectivity that is necessarily built into the assessment process should nonetheless be accompanied by well-defined measures that increase the likelihood of consistency across organizations.  In our review of criteria supporting AoM ratings, we noted that the rating definitions used to assess the evidence are not always well defined, nor are they supplemented with examples or baseline standards to ensure consistency across AoMs.  As an example, when rating an organization for Line of Evidence 1.1 “Leadership demonstration of strong public service values and ethics”, the difference between an ‘opportunity for improvement’ rating and an ‘acceptable’ rating is whether the task was performed ‘sporadically’ or ‘regularly’.  The rating criteria do not define the qualitative indicator (i.e., how often is ‘regularly’), nor is a measure of effectiveness provided (i.e., is performing a task ‘regularly’ appropriate to the circumstances of a given department).  By comparison, we noted in our international research that the CAF approach in the EU includes examples to support consistency in the application of the measures.

Perceived negative connotation of the acceptable rating:  Within MAF, each department is given a rating by AoM.  The rating scale, which is used for all AoMs, comprises four levels: “strong”, “acceptable”, “opportunity for improvement” and “attention required”.

Through the course of our evaluation, several departmental stakeholders identified that the rating terminology of “acceptable” was not well received or considered appropriate, as it is perceived to have a negative connotation.  Since an “acceptable” rating is meant to be positive, based on the associated narratives provided by TBS, there may be an opportunity to adopt alternative terminology to represent this rating on the assessment scale.  For example, the UK CR uses the label “well placed” as the rating immediately below “strong”.

Inconsistency of scoring across indicators:  While a rating definition is provided for each line of evidence, a scoring methodology is applied to each AoM to arrive at an overall score by AoM.  The score and weighting system used to assign the AoM rating is not consistent across the AoMs; approaches used include a weighted average, a straight average and a subjective approach.  Using a consistent framework for weighting assessment scores is considered a best practice and would further improve the transparency, understanding and acceptance of AoM ratings by departments and agencies.
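As an illustration only (the numeric scale, weights and thresholds here are not drawn from the MAF methodology), a consistent weighted-average approach could compute an overall AoM score from its line-of-evidence scores as

$$S_{AoM} = \sum_{i=1}^{n} w_i \, s_i, \qquad \sum_{i=1}^{n} w_i = 1,$$

where $s_i$ is the numeric score assigned to line of evidence $i$ and $w_i$ is its agreed weight; the resulting $S_{AoM}$ would then be mapped to the rating scale using fixed thresholds.  Applying the same formula, with AoM-specific weights, across all 21 AoMs would remove the current mix of weighted-average, straight-average and subjective approaches.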

There is an opportunity to harmonize the indicators and the associated rating criteria to increase the consistency of assessment given the qualitative nature of indicators designed to measure managerial performance.  The development of “golden rules” for all AoMs, including measurable indicators that use a standard scoring approach, would provide parameters to the AoM leads within TBS when developing indicators and rating definitions.

8.  Many of the lines of evidence from Round VI are process-based, which do not measure effectiveness of the outcomes resulting from the process.

“MAF is measuring process, not management outcome; MAF tells us WHAT but needs to go further to say HOW to improve.” (Source: deputy head interview)

Our evaluation identified that of the 68 lines of evidence in Round VI, 24 (35%) are primarily process-based; the remaining lines of evidence attempt to measure results/outcomes or compliance with policy requirements.  Process-based indicators measure only the existence and operation of a process and do not necessarily integrate measures of the effectiveness or the outcomes of the decisions made through that process.

Using process-based indicators to measure managerial performance carries the inherent assumption that the process itself will lead to the achievement of results.  That assumption holds only if the process can be shown to be optimal in every situation; because this can rarely be determined, it is difficult to measure managerial performance using process-based indicators alone.  Outcome-based indicators increase the ability to assess whether the management practices had an impact on the quality of decisions and actions.  A logic model can be leveraged to determine these indicators.

As an example, one organization confirmed that to meet the requirements of MAF, they formalized their previously informal committee structure.  As a result, each committee developed terms of reference and formalized the process.  While this was done and was recognized by the MAF score, senior management questioned whether it changed anything regarding the actual effectiveness or outcomes of the mechanism. 

The development of “golden rules” for all AoMs, including outcome/results-based indicators, would provide parameters to the AoM leads within TBS when developing indicators and rating definitions.

3.4  Is MAF Cost Effective?

“Is MAF cost effective? Two years ago I would have said no. We’re getting there now. We’re starting to see more efficiency and effectiveness in the assessment and more benefits on the managerial side...” (Source: deputy head interview)

The results of our analysis indicate that while improvements have been made in recent rounds to address the reporting burden, thereby improving the efficiency of the MAF process, we think there are additional opportunities to reduce the level of effort required for organizations to provide evidence as part of the MAF process.  However, given the limitations of the costing information available, a conclusion on the costs and the cost effectiveness of MAF cannot be drawn.  The section below provides an analysis of the cost information that was available.  Also included are two examples of benefits departments have identified resulting from MAF.

9.  Our consultations indicated that most departments and agencies were unsure of the cost effectiveness of MAF. 

Example: Agriculture and Agri-food Canada (AAFC)

AAFC had historically struggled with the capacity to manage the volume of required TB submissions, which increased with the additional funding provided to the department in the 2006 Budget.  This contributed to a MAF rating of “attention required” for AoM 5 in Round IV.

Significant efforts were undertaken by the department to improve the management of TB submissions, including the development of a protocol and criteria to differentiate between priority and less critical TB submissions.

This reduced the number of submissions that required compressed timelines and allowed sufficient time for the department to develop good quality TB submissions.

Further, a control unit within the department has been established to enhance oversight, the challenge function and quality control. All these actions have resulted in an improved rating for AoM 5 since Round IV.

Our key findings resulting from our consultations with departments and agencies are that:

In addition to interviewing various stakeholders on the cost effectiveness of MAF, we gathered information on the approximate cost of conducting a MAF assessment from both the view of the departments and agencies and TBS. 

Example: Atlantic Canada Opportunities Agency (ACOA)

ACOA is an example of an agency that has embedded MAF into the daily management of the organization to ensure ongoing robust management practices. Based on best practices identified across Regional Economic Development Agencies (RDA), ACOA has developed MAF action plans, holding senior management accountable for their integration into strategic and operational plans, with oversight by a MAF Governance Committee.

A total of 21 departments and agencies were asked to submit information regarding the resources required to respond to the annual MAF assessment process, including full-time equivalents (FTEs), salaries, a total time estimate and the total cost of the MAF assessment for their department.  Of the 21 requests, 14 organizations provided their estimate of the cost of MAF.  A further sampling of three departments/agencies (one small, one medium and one large) was conducted to determine the level of effort in person-days for 12 MAF activities.  From this information, a total estimated cost of the MAF assessment process was determined for each of these organizations.

Cost of MAF to departments/agencies:  Given the range of responses and the feedback provided separately in interviews, it was clear that most (if not all) organizations do not track the cost of the MAF assessment process; as such, the results of the cost analysis are questionable.  Departmental representatives were only able to provide an indication of the effort required for Round VI.  In several instances, the MAF contacts in the departments were able to provide FTE information for the individuals dedicated to the MAF process but found it difficult to estimate the level of effort for others across the department who provide input to the process.  Further, for the departments that were able to provide information, the data came with a number of caveats and limitations.  As a result, a trend analysis could not be performed, nor could we analyze cost against performance.

Based on the information provided, the graphs below provide an indication of the total cost of MAF for a sample of departments and agencies.

Figure 4: 2008/09 MAF Assessment Estimated Total Cost per Organization

Figure 4 outlines the total estimated cost of responding to MAF as reported by the individual departments and agencies.  As demonstrated, the cost of MAF varies across organizations.  Based on the information available and the limitations of what it represents across the departments and agencies, a conclusion on the costs of the MAF process, let alone the cost effectiveness of MAF, cannot be drawn.

Due to this result, TBS approached three organizations (one small agency, one medium-sized organization and one large department) to complete the detailed costing template and provide their organization’s time for the 12 MAF activities.  Figure 5 below shows the total estimated cost of conducting the MAF assessment; it is clear that as the size of the organization increases, so does the cost.  The estimated cost of conducting the assessment for the sample organizations, by size, was $47,700 (small), $118,700 (medium) and $373,400 (large).  It is also interesting to note that when the cost is considered as a percentage of the organization’s operating budget, the relationship is reversed: small agencies spend more time and incur a greater cost as a percentage of total budget than large departments and agencies.  This finding is consistent with comments that the reporting burden is felt more by the smaller agencies.
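To illustrate the arithmetic only (the operating budgets below are hypothetical; only the assessment costs are taken from the sample), a small agency with a $20 million budget spending $47,700 would devote roughly 0.24 per cent of its budget to the MAF assessment, whereas a large department with a $2 billion budget spending $373,400 would devote roughly 0.02 per cent:

$$\frac{47{,}700}{20{,}000{,}000} \approx 0.24\% \qquad \text{versus} \qquad \frac{373{,}400}{2{,}000{,}000{,}000} \approx 0.019\%$$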

Figure 5: 08/09 MAF Assessment Estimated Cost for Three Sample Organizations

Costs to Treasury Board

The MAF Directorate facilitated a costing exercise within TBS to determine the estimated cost of conducting the MAF assessment.  The cost estimates were collected from the four program sectors and the policy centers responsible for all 21 AoMs, the MAF Directorate (which coordinates the MAF assessment process for the Government of Canada) and TBS’ Corporate Services Branch (which maintains the MAF Assessment System and MAF Portal, as well as supporting corporate communications related to MAF).  It is important to note that, like the departments and agencies, TBS provided estimates, as time spent on MAF-related tasks is not tracked within TBS, nor are the costs of goods or services.

The total level of effort within TBS is estimated at 47.5 FTEs.  The most recent assessment period (Round VI) involved 339 individuals at all levels across the organization.  The majority of the effort (approximately 79%) is attributed to the policy centers and program sectors, which conduct the assessments for the 21 AoMs; the MAF Directorate, which coordinates the MAF process, represents 16% of the effort, and Corporate Services accounts for the remaining 5%.

Figure 6: Estimates of MAF Assessment Costs within TBS by Sector

As a result of this exercise, the cost of the MAF assessment process within TBS is estimated at $5.6 million per year.

3.5  Is MAF Governance Effective?

While the governance over MAF and the MAF assessment process is effective, there is an opportunity to enhance MAF governance through more meaningful engagement with departments and agencies.

In assessing the effectiveness of MAF governance, we were asked to address the roles, responsibilities and approval processes supporting MAF, the appropriateness of the TBS’ role in measuring government managerial performance and whether the introduction of MAF has allowed for systematic and transparent conversations between deputy heads within the Federal government.  Our key findings resulting from this are that:

While we touch on roles, responsibilities and approval processes in discussing these findings, we have addressed this in an integrated fashion in section 3.3, “Are MAF Assessments Robust?”.

10.  Treasury Board Secretariat is the appropriate entity to measure managerial performance within the Federal Government.

The role of TBS is to ensure that government is well managed and accountable and that resources are allocated to achieve results.  The functions performed by TBS are directed towards the governance, accountability and quality of public sector management, and are intended to have an impact on the efficiency and effectiveness with which government programs and services are delivered.  MAF has allowed TBS to assess managerial performance and policy compliance at a departmental level, to facilitate conversations and to support efforts towards strong organizational managerial performance.

In our international research, we did not identify other countries or models where the central management board plays as active a role in the management capability assessments that take place. 

While self-assessment could be seen as a cost-effective approach to assessment, it has limitations that would counter the benefits that stakeholders have enjoyed and have come to rely on from MAF.  Self-assessment is typically limited in rigor and quality, affecting the value of the results and the ability of deputy heads, TBS and COSO to rely on them as inputs into decision-making processes.  Self-assessment may be considered as an internal tool for deputy heads to assess the state of management in their department or agency during off-cycles of the MAF assessment period.

We think, however, that regardless of the approaches used elsewhere, the role TBS plays in MAF is appropriate to its role within the Federal government.  This view was confirmed in our consultations with departments and agencies, as well as TBS representatives, where there was a consistent viewpoint that oversight of the MAF, including conducting the assessments, was an appropriate role for TBS.

Beyond ownership of the MAF process, TBS is responsible for the conduct of the individual assessments.  Drawing on the UK model, the involvement of external reviewers was presented to various stakeholder groups as an option to enhance the level of management expertise within the assessment process.  We do not recommend this approach, as external reviewers are seen as limited in their ability to understand the Federal government context and environment; further, it would add significant expense to the existing MAF process.

From a governance perspective, there may be an opportunity to enhance MAF through the involvement of “peers”, e.g., participation of a small group of senior executives in the process and discussion of system-wide results in an advisory capacity.  This group could be involved both at the outset of the MAF assessment round, through a review of the planned assessment framework, and at the end, to review and advise on the assessment results.  This would have the benefit of providing senior, experienced advice to TBS at key points in the assessment process, as well as to the deputy heads that are receiving the assessment results, and would, we think, increase the level of acceptance of the assessment process and results across the government.



4  Recommendations

As demonstrated in the previous section, the evidence gathered from multiple lines of evidence was consolidated, analyzed and synthesized into the 10 key findings reported.  Based on these findings, we have developed five recommendations to address them.  These recommendations support the evolution of MAF as a performance enabler and its continued usefulness and sustainability as a tool for both TBS and deputy heads to strive towards management excellence.

The following illustration maps the five key recommendations to the 10 principal findings and the lines of evidence used to support the findings.

Figure 7: Mapping of Evaluation Recommendations to Principal Findings and Lines of Evidence

4.1  Detailed Recommendations

The following section describes our recommendations including a description of the benefits and risks associated with implementation.  Management responses have been developed and are presented in a separate document.

1.  Implement a risk/priority based approach to the MAF assessment process.

To ensure that MAF addresses the most relevant indicators for a particular organization and embeds sufficient incentives for senior executives, we recommend that TBS consider implementing a risk/priority-based approach for MAF assessments.  Possible approaches to achieving this, each of which would require further analysis, are provided below.

Assessment based on the risks/priorities unique to the organization:  While the context letter provided to the deputy head with the final assessment does include key priorities for focus, TBS could, after consultation with departments and agencies, identify which indicators address the main priorities and risks of that organization for that year.  This discussion could occur in conjunction with debriefs of the previous round’s assessment.

To limit the amount of time and resources required to obtain agreement on the indicators, a variation of this recommendation could include grouping departments and agencies (e.g., by industry sector or size) and identifying the indicators that should be assessed, based on the risks and priorities of the group.  

Assessment based on clustering of indicators:  Classify the indicators into three categories: mandatory, optional and cyclical.  Mandatory indicators (e.g., financial management and control) would be assessed every year for all organizations.  For the optional category, the assessment cycle could be based on the risks/priorities of a departmental cluster (e.g., by industry sector or size).  Finally, the cyclical category might include a rotating set of indicators that reflect current government priorities.  With sufficient advance notice, the categorization of the indicators could be subject to change.
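A minimal sketch of how such a clustering might work follows; the category assignments, cluster names and rotation rule are hypothetical and are shown only to illustrate the mechanics, not to prescribe the actual categorization.

  # Illustrative sketch only: the category assignments, cluster names and
  # rotation rule below are hypothetical, not drawn from the MAF methodology.

  MANDATORY = {"AoM 17 - Financial Management and Control"}   # assessed every round
  OPTIONAL = {                                                # assessed when relevant to the cluster
      "AoM 14 - Asset Management": {"economic", "operations"},
      "AoM 20 - Citizen-Focused Service": {"social"},
  }
  CYCLICAL = [                                                # rotated to reflect current priorities
      "AoM 8 - Managing Organizational Change",
      "AoM 19 - Security and Business Continuity",
  ]

  def indicators_for(org_cluster, round_number):
      """Return the AoMs an organization in a given cluster would be assessed on."""
      selected = sorted(MANDATORY)
      selected += [aom for aom, clusters in OPTIONAL.items() if org_cluster in clusters]
      selected.append(CYCLICAL[round_number % len(CYCLICAL)])  # one rotating priority per round
      return selected

  print(indicators_for("economic", round_number=7))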

Assessment based on the results of previous rounds:  Review the previous MAF assessment score.  Based on the performance for specific indicators, a department or agency rated as “Strong” in the current assessment round would not be formally assessed in the subsequent round.  This would provide a one-year “pass” from formal assessment.  This approach could also be combined with a self-assessment for those indicators not being assessed to confirm that no significant changes have taken place during the year.  A similar exercise was attempted for AoM#1 in Round VI.

Assessment based on department/agency size:  We agree with and recognize that TBS has already applied this approach to assessments.  Small and micro agencies are assessed using a three-year cycle and a ‘light’ version of the MAF.  TBS may wish to consider expanding on this concept by establishing separate MAF criteria and reporting requirements for small agencies, i.e., a “MAF-light”, to address the disproportionate burden felt by small agencies under the current MAF assessment approach.

We think the main benefit to using a risk-based approach is the flexibility in tailoring the assessment to the department or agency’s specific needs; it provides an explicit means for organizations and TBS to agree on what is important.  A risk-based approach also supports reduction in both the departmental and TBS resources required to manage the MAF assessment process.

There are two identified risks to using this approach.  First, if all aspects are not measured every time, there is a risk that a substantive issue may be missed or overlooked.  Second, if a risk-based approach is used, comparisons across organizations may be difficult.

2.  Develop the guiding principles (“golden rules”) for assessing managerial performance and incorporate them into the existing MAF assessment methodology.

While each AoM measures a separate management area, they each support an overall assessment framework and methodology.  We recommend that a common set of guiding principles or “golden rules” be identified, agreed upon and applied to each AoM.  Examples of the golden rules that should be considered are as follows.

  1. Maintain an appropriate balance between quantitative and qualitative indicators within each AoM.
  2. Develop outcome-based indicators of managerial performance.
  3. Leverage information that is available through other oversight activities that support the indicators identified for each AoM.
  4. Maintain the stability of indicators, which is critical to the assessment of progress over time.
  5. Ensure clarity and transparency of indicators, measurement criteria and guidance documentation.
  6. Engage functional communities in open dialogue in the ongoing development of AoM methodology and assessment measures.
  7. Include assessment and identification of both policy compliance and result-based managerial performance.
  8. Seek ways to recognize and provide incentives to encourage innovation across departments and agencies.

The following paragraphs provide further explanation for the recommended golden rules outlined above.

a)  Balance quantitative and qualitative indicators:  We recommend that TBS review the lines of evidence and consider the appropriate balance between objective, quantitative indicators and qualitative, subjective indicators.  Where subjectivity is required, sufficient definitions are necessary to minimize the risk of inconsistent application across departments and agencies. 

b)  Focus on outcome-based measures:  When assessing elements of managerial performance (as compared to policy compliance), outcome-based indicators (vs. process-based) allow an organization to assess whether the management practices put into place had an impact on the quality of decisions and actions.  We recommend that when reviewing the lines of evidence, consideration be given to structuring those indicators that are measuring managerial performance as outcome-based to provide a more accurate representation of the impact of the management practices put into place.  As the MAF logic model developed by the MAF Directorate highlights expected outcomes of MAF, this model could be leveraged to develop these indicators.

c)  Leverage existing and available information:  Once indicators and measurement criteria have been developed, the sources of information used to assess against the indicators must be established.  Leveraging existing information through other oversight mechanisms both internal to departments and agencies and from central agencies (e.g. Office of the Comptroller General, Auditor General of Canada) would maximize the efficiency of the reporting element of the MAF assessment process.

d)  Stability of indicators:  Once the indicators for a particular AoM have been developed and overall agreement and support have been obtained, they should ideally remain stable.  Changes should be considered when there are significant changes to the environment relative to that managerial area.

e)  Clarity and transparency of indicators, criteria and guidance documentation:  To the extent possible, all indicators, lines of evidence and measurement criteria should be clear, simple to understand and transparent in the approach that leads to an assessment result.  In conjunction with recommended golden rule a), to the extent that subjectivity is built into an indicator, clarity and transparency on the application of judgment will be required.   

As part of the assessment methodology, we recommend that a consistent scoring approach be developed that will facilitate a standard yet flexible overall score per AoM.  A weighted average approach would support this goal.
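A minimal sketch of such a weighted-average scoring scheme follows; the numeric scale, weights and thresholds are hypothetical and would need to be agreed upon as part of the golden rules.

  # Illustrative sketch only: the numeric scale, weights and thresholds below are
  # hypothetical and would need to be agreed as part of the "golden rules".

  RATING_SCALE = {"attention required": 1, "opportunity for improvement": 2,
                  "acceptable": 3, "strong": 4}
  THRESHOLDS = [(3.5, "strong"), (2.5, "acceptable"),
                (1.5, "opportunity for improvement"), (0.0, "attention required")]

  def aom_rating(evidence, weights):
      """Combine line-of-evidence ratings into one AoM rating via a weighted average."""
      total_weight = sum(weights.values())
      score = sum(RATING_SCALE[rating] * weights[loe]
                  for loe, rating in evidence.items()) / total_weight
      return next(label for cutoff, label in THRESHOLDS if score >= cutoff)

  # Two lines of evidence with unequal weights: (3*0.6 + 4*0.4) / 1.0 = 3.4 -> "acceptable"
  print(aom_rating({"LoE 1.1": "acceptable", "LoE 1.2": "strong"},
                   {"LoE 1.1": 0.6, "LoE 1.2": 0.4}))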

We also recommend that TBS consider changing the terminology used in the MAF assessment scale.  Our consultations indicated that the term “acceptable” carries a negative connotation.  Alternative terms such as “meets standards”, “meets expectations” or “well placed”, among others, may be considered.

f)  Engage functional communities in open dialogue in the ongoing development of AoM methodology and assessment measures:  When AoM methodology and measures need to be updated and refined, we recommend that departmental stakeholders be consulted during the development process.  TBS could leverage the applicable functional communities to facilitate an ongoing and cooperative dialogue, ensuring that changes meet the needs of different stakeholders and increasing the likelihood of acceptance by departments and agencies.

g)  Assessment and identification of measures of policy compliance versus managerial performance:  We agree that MAF should continue to measure both managerial performance and policy compliance.  We recommend, however, that TBS separate the assessment of managerial performance from that of policy compliance by identifying which indicators measure performance and which measure compliance.  This could include different approaches to assessing policy compliance versus managerial performance.

h) Recognize and provide incentives to encourage innovation:  To continue to encourage innovation and creativity across the Federal government, TBS has an opportunity to provide incentives and recognize these efforts.  There are various approaches to embed innovation within the MAF, as follows: 

Incorporating the golden rules for assessing managerial performance will benefit organizations by providing greater clarity around expectations and increasing the consistency of assessments.  Further, this benefits both TBS and departments and agencies in more accurately gauging progress and focusing on specific areas of importance to the organization.

We note that if changes are made to the performance assessment framework, there is a risk to the short-term stability of the lines of evidence.  Additionally, a significant amount of resources is required to properly and exhaustively streamline indicators; this approach may need to be rolled out over time.

3.  Introduce or leverage an existing governance body with senior representatives from client departments to assist MAF.

We recommend that a steering committee of deputy heads or departmental senior executives (i.e., Assistant Deputy Ministers) be introduced, or that an existing forum be leveraged.  This governance body could be used to advise on changes to the MAF process and to use the MAF results to guide government management priorities.

A governance body with senior representatives from client departments will benefit TBS by increasing the likelihood that changes to the MAF process will be accepted and acted upon within departments and agencies.  The governance body becomes an added communication tool that may be used by TBS.  Conversely, introducing additional governance may increase the time required for reaching agreement.  That is, there is a risk that decisions become delayed. 

4.  Develop a stakeholder engagement and communication strategy and plan, including early engagement when changes are made to the MAF assessment process.

Our findings indicate a lack of clarity among departmental stakeholders on critical elements of the MAF process, which directly impacts their acceptance and satisfaction with the process and methodology.  There are opportunities to increase the engagement of stakeholders within the departments and agencies through enhanced communication at key steps within the MAF assessment process. 

We recommend that TBS consider the development of an engagement and communication strategy and plan related to the MAF assessment rounds to increase the visibility of the process.  This could include the following key gaps in understanding and clarity:

Apart from direct bilateral discussions between TBS and departments, the communication and engagement strategy may be supported through the use of existing forums or communities such as Chief Financial Officers and Chief Audit Executives, as well as through a MAF governance body (as per our previous recommendation).  Additionally, the communication strategy could potentially address how to communicate the MAF process and results to parliamentarians.

We further recommend that as changes are made to the assessment process, methodology or indicators, early engagement of department and agency stakeholders be considered to allow sufficient time to respond and increase the likelihood of complete and accurate information being submitted.

Developing and executing an engagement and communication strategy and plan benefits TBS by enabling greater acceptance of the MAF process within departments and agencies.  However, we recognize that such a plan may increase pressure on TBS to provide timely updates by placing additional constraints around communication deadlines.

5.  Assign formal responsibilities within TBS to oversee the MAF assessment methodology/framework and management of horizontal issues/action plans.

We recommend that TBS consider expanding the role of the MAF Directorate to provide horizontal oversight and integration across the lines of evidence for both methodology and results.  This expanded role would require the MAF Directorate to review the indicators and criteria for consistency across all elements. 

When the MAF results are complete, we recommend that the MAF Directorate work with the AoM/indicator leads to identify horizontal (cross department) issues and then develop appropriate action plans, communicate the plan and monitor the results.  The MAF Directorate would report on progress to the Assistant Secretaries and the Associate Secretary.

The benefit of assigning formal oversight responsibilities for MAF assessment methodology and management of horizontal issues is that this ensures the framework is sustained and, most importantly, holds stakeholders accountable for action plans.

4.2  Summary of the Recommendations

The following table demonstrates the alignment of each recommendation detailed above to nine key areas of the MAF (as identified by deputy heads).

Table 2: Alignment of Recommendations to Key MAF Areas

Recommendations for Evolution of MAF to a Performance Enabler: 1 = Risk-based Approach; 2 = Guiding Principles/Golden Rules; 3 = Governance Body; 4 = Engagement & Communications; 5 = Horizontal Oversight

Key Areas                                             | 1   | 2   | 3   | 4   | 5
MAF vision and objectives                             | yes | yes | yes | yes | no
MAF governance, including roles and responsibilities  | no  | no  | yes | no  | yes
MAF methodology of assessments                        | yes | yes | no  | yes | yes
Reliability and accuracy of MAF assessments           | yes | yes | no  | yes | yes
MAF reporting requirements                            | yes | yes | no  | no  | no
Systems supporting MAF                                | no  | yes | no  | yes | yes
MAF process                                           | yes | yes | yes | yes | yes
MAF treatment of entities                             | yes | no  | no  | no  | no
MAF alignment to GC's planning cycle                  | no  | yes | no  | no  | no



Annex A   List of Interviewees and Consultation Participants

1.  Deputy Heads
  1. Agriculture and Agri-Food Canada: Yaprak Baltacioğlu
  2. Atlantic Canada Opportunities Agency: Monique Collette
  3. Canada Economic Development for the Regions of Quebec: France Pégeot (Vice-President, Policy and Planning)
  4. Canada Revenue Agency: William Baker
  5. Canada School of Public Service: Ruth Dantzer
  6. Canadian Border Services Agency: Stephen Rigby
  7. Canadian Food Inspection Agency: Carol Swan
  8. Canadian Grain Commission: Elwin Hermanson
  9. Canadian International Development Agency: Margaret Biggs
  10. Canadian Security Intelligence Service: Jim Judd
  11. Correctional Service Canada: Don Head
  12. Environment Canada: Ian Shugart
  13. Foreign Affairs and International Trade Canada: Leonard Edwards
  14. Health Canada: Morris Rosenberg
  15. Human Resources and Skills Development Canada: Janice Charette
  16. Indian and Northern Affairs Canada: Michael Wernick
  17. Library and Archives Canada: Ian Wilson
  18. National Energy Board: Gaétan Caron
  19. National Research Council Canada: Pierre Coulombe
  20. Public Health Agency of Canada: David Butler-Jones
  21. Public Safety Canada: Suzanne Hurtubise
  22. Public Service Commission: Maria Barrados
  23. Public Works and Government Services Canada: François Guimont
  24. Statistics Canada: Munir Sheikh
  25. Transport Canada: Louis Ranger
  26. Veterans Affairs Canada: Suzanne Tining
  27. Western Economic Diversification Canada: Oryssia Lennie

 

2.  Deputy Minister Steering Committee
  1. Secretary of the Treasury Board of Canada (Chair): Wayne Wouters
  2. Chief Human Resources Officer, Treasury Board of Canada: Michelle d’Auray
  3. Deputy Minister, Agriculture and Agri-Food Canada: Yaprak Baltacioglu
  4. Commissioner, Canada Revenue Agency: William Baker
  5. Deputy Minister, Citizenship and Immigration Canada: Richard Fadden
  6. Deputy Minister, Indian and Northern Affairs Canada: Michael Wernick
  7. Deputy Minister, National Defence: Robert Fonberg
  8. Deputy Minister, Natural Resources Canada: Cassie Doyle
  9. Chief Executive Officer, Parks Canada Agency: Alan Latourelle
  10. Deputy Secretary to the Cabinet, Privy Council Office: Patricia Hassard
  11. Associate Secretary, Treasury Board of Canada: Anita Biguzs

 

3.  Departmental MAF Contacts (February 23, 2009)
  1. Agriculture and Agri-Food Canada: Pierre Corriveau
  2. Canada School of Public Service: Michele Brenning
  3. Canadian Food Inspection Agency: Dilhari Fernando
  4. Canadian International Trade Tribunal: Steve Malouin
  5. Canadian Security Intelligence Service: David Vigneaut
  6. Environment Canada: Basia Ruta
  7. Finance Canada: Barbara Gibbon
  8. Fisheries and Oceans Canada: Cal Hegge
  9. Privy Council Office: Marilyn MacPherson
  10. Treasury Board of Canada: Ann Van Dusen
  11. Veterans Affairs Canada: Ron Herbert

 

4.  Departmental MAF Contacts (February 27, 2009)
  1. Atlantic Canada Opportunities Agency: Sherril Minns
  2. Canada Economic Development for the Regions of Quebec: André Cliche
  3. Canada Industrial Relations Board: Ginette Brazeau
  4. Canada Revenue Agency: Normand Théberge
  5. Canadian Heritage: Pablo Sobrino
  6. Canadian Intergovernmental Conference Secretariat: Mara Indri-Skinner
  7. Canadian International Development Agency: Christine Walker
  8. Foreign Affairs and International Trade Canada: John Barrett
  9. Health Canada: Alfred Tsang
  10. Human Resources and Skills Development Canada: Jean Cheng
  11. Human Resources and Skills Development Canada: Stephen Johnson
  12. Human Resources and Skills Development Canada: David Rabinovitch
  13. Infrastructure Canada: David Cluff
  14. Office of the Superintendent of Financial Institutions Canada: Gary Walker
  15. Public Health Agency of Canada: Jim Libby
  16. Public Safety Canada: Elisabeth Nadeau
  17. Public Works and Government Services Canada: Caroline Weber
  18. Service Canada: Frank Fedyk
  19. Statistics Canada: Janice Vézina
  20. Transport Canada: Andre Morency
  21. Treasury Board of Canada Secretariat – Office of the Chief Human Resources Officer: Mitch Bloom
  22. Western Economic Diversification Canada: Jim Saunderson

 

5.  MAF Network (February 11, 2009)
  1. Agriculture and Agri-Food Canada: Doug Ruby
  2. Atlantic Canada Opportunities Agency: Michel Léger
  3. Canada Industrial Relations Board: Jean-Charles Roy
  4. Correctional Service Canada: Jason Cormier
  5. Environment Canada: Lisa Huang
  6. Foreign Affairs and International Trade Canada: Francis Furtado
  7. Health Canada: Johanne Curodeau
  8. Indian and Northern Affairs Canada: Roger Ermuth
  9. Library and Archives Canada: Christine Mayer
  10. National Defence: Dan Bellini
  11. Natural Resources Canada: Eugène Omboli
  12. Public Health Agency of Canada: Loretta Scott
  13. Public Safety Canada: Linda Stapledon
  14. Public Works and Government Services Canada: Malick Babou
  15. Royal Canadian Mounted Police: Redd Oosten
  16. Statistics Canada: Peter Bissett
  17. Transport Canada: Debbie Cecil
  18. Veterans Affairs Canada: Anita Lewis

 

6.  Small Agency Administrator’s Network (February 24, 2009)
  1. Canadian Environmental Assessment Agency: Ronald Kuzak
  2. Canadian Human Rights Commission: Hervé Ethier
  3. Heads of Federal Agencies: Tom Pederson
  4. Immigration and Refugee Board of Canada: Glenn Ng
  5. National Parole Board: Anne Gagne
  6. National Parole Board: Sheila Ouellette
  7. Public Service Labour Relations Board: Alison Campbell
  8. RCMP External Review Committee: Virginia Anderson
  9. Senate of Canada: Kim Grandmaison
  10. Senate of Canada: Jill Anne Joseph
  11. Supreme Court of Canada: Lynn Potter
  12. Western Economic Diversification Canada: Kevin Johnson

 

7. Central Agency Representatives
  1. Canada School of Public Service: Ivan Blake
  2. Privy Council Office: Patricia Hassard
  3. Secretary of the Treasury Board of Canada: Wayne Wouters
  4. Treasury Board of Canada Secretariat – Chief Information Officer Branch: Peter Bruce
  5. Treasury Board of Canada Secretariat – Expenditure Management Sector: Alister Smith
  6. Treasury Board of Canada Secretariat – Office of the Chief Human Resources Officer: Mitch Bloom
  7. Treasury Board of Canada Secretariat – Office of the Comptroller General: John Morgan
  8. Treasury Board of Canada Secretariat – Office of the Comptroller General: Brian Aiken
  9. Treasury Board of Canada Secretariat – Priorities and Planning: Frank Des Rosiers
  10. Treasury Board of Canada Secretariat – MAF Directorate: Ewa Burk

 

8.  TBS Program Sector Assistant Secretaries (February 18, 2009)
  1. Economic Sector: Nada Semaan
  2. Government Operations Sector: Mary Chaput
  3. International Affairs, Security and Justice Sector: John Ossowski
  4. Social and Cultural Sector: Wilma Vreeswijk

 

9. Area of Management Representatives (February 17, 2009)
  1. AoM 1 – Values and Ethics: Jeffrey Ayoub, Bryon Milliere
  2. AoM 2 – Corporate Performance Framework: Rohit Samaroo
  3. AoM 3 – Corporate Management Structure: Elizabeth Tromp
  4. AoM 4 – Extra-Organizational Contribution: Evan Perrakis
  5. AoM 6 – Evaluation: Caroline Falaiye
  6. AoM 7 – Performance Reporting to Parliament: Tim Wilson, Gyulia Borbely
  7. AoM 8 – Managing Organizational Change: Matthew Enticknap, David Clifton, Bruce Wang
  8. AoM 9 – Risk Management: Eric Bélair, Nisa Mairi Tummon
  9. AoM 10 – Workplace: Jeffrey Ayoub, Bryon Milliere
  10. AoM 11 – Workforce: Jeffrey Ayoub, Bryon Milliere
  11. AoM 12 – Information Management: Stephen Walker, Laura Simmermon, Marg McIntyre
  12. AoM 13 – Information Technology: Jeff Braybrook
  13. AoM 14 – Asset Management: Phil Jacobson, Magali Johnson
  14. AoM 15 – Project Management: Greg Kenney, John Nater
  15. AoM 17 – Financial Management and Control: Eddie Vlasblom
  16. AoM 18 – Internal Audit: Helena Szakowski, Brian McKenna
  17. AoM 19 – Security and Business Continuity: Nathalie Pelletier
  18. AoM 20 – Citizen-Focused Service: Evan Perrakis, Christine Lau
  19. AoM 21 – Alignment of Accountability Instruments: Suky Sodhi
  20. Program Sector Representative, Government Operations Sector: Tom Scott

 

Annex B   MAF Logic Model

The following represents the MAF logic model as developed and provided by TBS.  The measurement implications of the logic model are provided following the diagram:

Figure 8: MAF Logic Model

Measurement Implications of Logic Model:

  • Ultimate Outcome 1: Continuous improvement in quality of public management in federal public service
  • Intermediate Outcome 1: Departments/Agencies take action to improve management performance
  • Intermediate Outcome 2: TBS has greater capacity to meet needs of Departments/Agencies for advice on management
  • Immediate Outcome 1: Shared understanding in Government of Canada of standards of good management
  • Immediate Outcome 2: Departments/Agencies recognize management issues and prepare action plans
  • Immediate Outcome 3: Better understanding in TBS on state of public management, including key risks
  • Immediate Outcome 4: Availability to Parliament and public of information on state of public management
  • Output 1: MAF assessments for Departments/Agencies
  • Output 2: Recommended management priorities for Departments/Agencies
  • Output 3: Ongoing advice, assistance and outreach to Departments/Agencies
  • Output 4: Analysis of Government of Canada and global trends in public management
  • Output 5: Public communications products related to MAF

Annex C   MAF Alignment with Government Priorities

MAF is aligned with and supports the federal priority (Budget 2008) of ‘Managing spending to ensure programs and services are efficient, effective, aligned with the priorities of Canadians, and affordable over the long term.’

The role of TBS is to help ensure departments are well managed and accountable and that resources are allocated to achieve results:

MAF is a strategic intelligence-gathering tool used by TBS to inform the activities noted above.  Its primary purpose is to ensure that programs are delivered to the highest standards of public management in the most cost-efficient fashion, which is a key federal priority as noted above.

TBS does have other mechanisms to exercise oversight of both departmental and individual program performance results. However, these are used for specific purposes.  The MAF collects and coordinates these inputs to provide a single departmental and government-wide perspective.

Figure 9: MAF Alignment with Government Priorities



Annex D   International Comparisons

 

Name of program/framework

  • UK: Capability Review (CR)
  • US: President’s Management Agenda (PMA)
  • EU: Common Assessment Framework (CAF) [“Study on the use of the Common Assessment Framework in European Public Administration”]

Type of program/framework

  • UK: Organizational capability: The objective is to “...improve the capability of the Civil Service to meet today’s delivery objectives and to be ready for the challenges of tomorrow.” [“Take-off or Tail-off?  An evaluation of the Capability Reviews programme”]
  • US: Organizational performance.
  • EU: Organizational capability: The objective is to improve the quality of public service administration.

How long has the program been in place?

  • UK: Since 2005.
  • US: Since 2001.
  • EU: First released in 2000.  Major updates were made in 2002 and 2006.  Adopted by approximately 1,775 public organizations in the EU by mid-2009.

Purpose of program

  • UK: Assess capability as well as provide support for improvement.
  • US: Improve agency performance in 5 specific areas.
  • EU: Program initiated by the EU Member States related to public sector quality management.

Brief description of framework

  • UK: The CR contains 10 program elements in 3 broad areas: Leadership, Strategy and Delivery.
  • US: The PMA contains 24 “standards” defined across 5 key initiatives: Strategic Management of Human Capital; Competitive Sourcing; Improved Financial Performance; Expanded Electronic Government; and Budget and Performance Integration.
  • EU: The CAF is patterned after the European Foundation for Quality Management (EFQM) business model and the model of the German University of Administrative Sciences in Speyer.  It contains nine criteria (five Enablers and four Results criteria), divided into 28 sub-criteria.

What indicators are used?

  • UK: Five assessment categories are used; 0 = “serious concerns”; 4 = “strong-good capability for future delivery in place”.
  • US: A “double scorecard” system is used.  Standards (indicators) are established to measure the current status of initiatives and progress on improvement.  This is scored by assessing the agency’s “Green Plan” (in this case, meaning a plan to get to “green” on all initiatives).
  • EU: Views “enablers” as organizational capability and “results” as outputs and outcomes, and includes criteria for measuring both within the framework.  Based on self-assessment.

Is it an “outcomes-based model”; that is, what specific outcomes of management performance or organizational capability are being assessed?

  • UK: Focus is on organizational systems and processes that promote delivery capability.
  • US: Current status scores are verifiable through the agency’s Performance Assessment Rating (PAR).  Progress criteria are developmental and more subjective.
  • EU: See above.


Assessment Methodology

How is the assessment done?

  • UK: Yearly external review of key capabilities.
  • US: Quarterly self-assessment with input from the President’s Management Council (PMC).
  • EU: Internal review, usually by project teams appointed by the organization.

Who does the assessment, and what are the characteristics of the assessors?

  • UK: Capability Review Teams situated in the Cabinet Office.  A five-person team: 3 “experienced” people from outside central government and 2 director generals from other government departments.  Each team usually includes 2 members from the private sector (note: teams were reduced to 3 in the second round of assessments). [National Audit Office Assessment of the Capability Review Programme, Feb 2009]
  • US: Internal; progress indicators and Green Plans are reviewed by the Office of Management and Budget (OMB).
  • EU: Internal project teams.

Is it purely evidence-based or is assessor judgment used? What is the balance between evidence and judgment?

  • UK: Judgment: the review teams make judgments about the 10 elements on a score of 0 to 4 using an underlying set of 40 questions.
  • US: Evidence-based: internal indicators noted in the PAR.  Judgment: the PMC and OMB provide input into Green Plan initiatives.
  • EU: Judgment based on the CAF and the evidence gathered for each sub-criterion, on the basis of group consensus.

Is a risk-based approach used? (Are all components of the framework assessed or are certain components used at different times?)

  • UK: All components are assessed.
  • US: All components are assessed.
  • EU: A complete assessment is done each time, but there is flexibility in how the CAF is applied.

How are organizations selected for assessment? Are all organizations assessed or is there a selection process?

  • UK: All central government agencies are assessed.
  • US: All are assessed in the quarterly scoring process.
  • EU: Self-assessment; voluntary.

Are different approaches used for organizations of different types and sizes (i.e., is it a one-size-fits-all model)?

  • UK: One size.
  • US: One size.
  • EU: The CAF has to be implemented respecting the structure of the nine criteria and 28 sub-criteria and the scoring system.  Further, each organization and country has to take into consideration its own specificities.

Do the results of the program affect senior management’s remuneration? What are the outcomes of good/bad performance?

  • UK: There are some linkages to salary decisions of Permanent Secretaries.  The focus is on improvement; therefore poor performance leads to assistance as required, but Permanent Secretaries are held accountable.
  • US: There are some linkages to both budget and Chief Operating Officer evaluations.  The Executive Branch Management Scorecard provides an overview of progress in all agencies.
  • EU: Depends on how the framework is applied.


Outcomes of Assessment

What are the immediate results of the assessment?

  • UK: Each department must create an action plan to rectify problems identified. [National Audit Office Assessment of the Capability Review Programme, Feb 2009]
  • US: Agencies must create “Green Plans” demonstrating how they will improve on each of the criteria assessed.
  • EU: Depends on application, but a recent survey of 133 organizations that have used the CAF found that 87% started improvement initiatives based on the assessment. [“Study on the use of the Common Assessment Framework in European Public Administration” – 2005]

What are the improvements in service delivery/performance results for citizens?

  • UK: Difficult to separate improvement attributable to the CR from other initiatives.
  • US: Consideration of PAR results fits into the PMA.  Chief Operating Officers are held accountable for results.
  • EU: The survey suggests that the aforementioned improvement initiatives might or might not lead to improvement in results for citizens.  A difficult question to answer due to attribution and time-lag issues.

Cost/value for money

  • UK: The first round of reviews cost £5.5 million in direct costs, reduced to £4.3 million in round 2; approximately £226,000 per department.  Difficult to assess the cost to departments for internal work on the review.  Difficult to determine value for money given that full costs are not known.  Departments also have a hard time separating the value delivered by actions linked to Reviews from other ongoing programmes.  The Cabinet Office charges departments £150,000 for the review.
  • US: N/A.
  • EU: N/A.

Main benefits

  • UK: Departments have designated board members who coordinate action on the CR.  This brought a focus on management within departments and a means of connecting across departments for sharing lessons learned.
  • US: Primarily focusing senior management’s attention on important management issues.  Other benefits include better fact-based decision making and better internal coordination.
  • EU: Provides a focus and methodology for organizational improvement and is strongly based on the involvement of all staff in the installation of an organizational culture of total quality.

Lessons learned

  • UK: Departments are showing improvement on CRs, but no link to actual performance in terms of delivery has been noted.  There is a need to consider middle-layer management as well as programs delivered through 3rd parties, and to consider comparison to organizations outside of government (i.e., best practice).
  • US: Yellow criteria can generally be objectively assessed; however, there are difficulties with green criteria (e.g., documentation is often incomplete; some evaluations are difficult to substantiate). [“Review of OMB’s Improved Financial Performance Scorecard Process.”]
  • EU: N/A.

Are results made public?

  • UK: Yes, published on agencies’ web sites.
  • US: Yes, published on departments’ web sites.
  • EU: The results remain the property of each organization.

Annex E   Evaluation Questions


Is MAF Meeting Its Objectives?

  • P1. Is MAF realizing stated objectives? Are the objectives appropriate or should they be broadened/narrowed? As part of this assessment, the relevancy and effectiveness of the 21 Areas of Management will be assessed. 
  • P2. Is MAF adequately assessing and contributing to improved public sector management?
  • P3. Should MAF measure other broad elements of management (e.g., policy development, leadership ability, management reporting systems, etc.)?
  • P4. To what extent does MAF support the multiple objectives of the government (e.g., deputy head support and advice, Committee of Senior Officials, TB support, horizontal assessment of management practices, etc.)?
  • P5. What are the ancillary benefits and uses of MAF to others?  Who do they accrue to (e.g. Parliament)?
  • P6. What impacts has MAF had (intended or non-intended)?
  • P9. In a post-FedAA world where deputy heads are Accounting Officers, does MAF need to change?
  • P11. Is there evidence overall other government organizations are using management frameworks that are performing better than MAF?
  • A11. To what extent can, or should, MAF be used to support other government initiatives (e.g., greening of government, gender-based analysis, policy compliance and expenditure management)?

Are MAF Assessments Robust?

  • A3. Is MAF methodology of assessment reliable, fair and reasonable?  To what extent is MAF providing a reliable and accurate assessment of management within and across federal organizations (e.g., do MAF results accurately reflect realities, including departmental performance)?
  • A4. To what extent should the MAF process be evidence-based?
  • A5. Is the present assessment approach appropriate (present approach versus independent or risk-based approaches)?  Is it the most efficient and effective way to obtain MAF results?  Are results communicated, used and reported effectively?
  • A6. Should MAF treat portfolio entities, small agencies and non-PMP entities differently?
  • A7. Is there value in having stability in the MAF tool?
  • A9. Are MAF reporting requirements excessively burdensome?  How can the process be streamlined?
  • A10. Is MAF adequately aligned to the GC’s planning cycle?
  • A12. What lessons can be learned from other jurisdictions to improve administration in the assessment of management performance?
  • P10. What can we learn from other jurisdictions?  How are other jurisdictions assessing public sector management performance practices?  What other tools exist both nationally and internationally and how do these compare to the MAF?  Are there lessons learned that can be applied to MAF (e.g., use of private sector)?

Is MAF Cost Effective?

  • P7. Is MAF cost effective and does it deliver value-for-money?  Is the level of effort and impact on the department justifiable?
  • P8. What is the optimum balance between required departmental resources/ investments and level of effort to ensure adequate (and not necessarily optimal) departmental performance?

Is MAF Governance Effective?

  • A1. Is MAF governance effective?  Are the roles and responsibilities, systems and approval processes supporting MAF efficient and effective?
  • A2. Is TBS the appropriate entity to assess and measure government management performance (e.g., does TBS have adequate capacity and level of experience)?  Should a 3rd party be considered?  If so, who should be the members of the 3rd party?
  • A8. How can you ensure that MAF allows for systematic and transparent conversations between deputy heads?

 

Annex F   MAF History

The History, Objectives and Evolution of the Management Accountability Framework[16]

The seed for the Management Accountability Framework (MAF) was planted in June 1997 when the government designated Treasury Board as the government’s management board. One of Treasury Board’s new responsibilities was to lead and provide expertise in the development of an agenda to improve management practices in federal departments and agencies.

These tasks were to be performed under the umbrella of the Modern Comptrollership Initiative (MCI).  The MCI was the result of the work performed over a two-year period by the Independent Review Panel on Modern Comptrollership in the Government of Canada from 1996 to 1998. The state of management in the federal system had been a longstanding concern which had been intensified in the context of the sharp cuts in federal program spending made in the mid-1990s as the government acted to reduce and then eliminate the federal deficit. Delivering federal services efficiently and effectively with sharply constrained human and financial resources required a heightened emphasis on how those resources were being managed and an effort to understand what modern public sector management should look like and what might be required to institute it. The MCI was the pivotal piece in the creation of MAF some five years later.

From 1998 to 2001, Phase 1 of the Modern Comptrollership Initiative operated as a pilot project. Phase 2 was to be implemented government-wide in 2001 under the Modern Comptrollership Directorate. Phase 2’s premise was to enable front-line public servants to deliver services to citizens efficiently and effectively. The Directorate believed that the management framework necessary to achieve this purpose had to encompass key MCI concepts but be broader in scope. The framework would have to engage deputies directly and make them accountable for the management of their organizations.

The Modern Comptrollership Directorate continued with its work and by the end of 2002 it had reviewed 13 additional models or studies of modern management and categorized each as falling into one of five groups. By the end of this work, the Directorate had put together the central features of what became known as the Management Accountability Framework in late 2002. MAF was different from the MCI in that MAF was driven by a senior executive champion who was committed to the modernization of federal government management and who was convinced of the potential merits of this new tool.

In 2003, the Secretary of the Treasury Board saw MAF as a tool that would allow him to hold “facts-based conversations” with his deputy minister colleagues about their departments and their particular management problems and concerns. Also in 2003, the Coordinating Committee of Deputy Ministers endorsed MAF as the principal tool with which TBS would engage departments and agencies on management issues.

MAF’s Objectives

By early 2003, MAF had been transformed from a new set of ideas about management looking for support into a framework through which Treasury Board Secretariat would try to entrench the importance of modern comptrollership within the federal public service.

MAF reflected and brought together in one tool a range of TBS management improvement initiatives that were then underway as a result of the March 2000 publication of Results for Canadians: A Management Framework for the Government of Canada. Results for Canadians identified a number of management areas in which departments and agencies needed to improve their performance and it set out a forward agenda for management reform. MAF provided the first “explicit and coherent model for high organizational performance” with which TBS and departments and agencies could work together.

MAF was to focus on “management results rather than required capabilities.” It was about strengthening accountability to manage federal organizations effectively, serve Ministers and government, and deliver on results for Canadians. MAF was intended to help deputies and their executive teams manage their organizations more effectively by helping them pose critical questions that needed to be addressed or by helping them monitor and assess their own performance.

The Management Accountability Framework would also be a TBS oversight tool. Departments and agencies needed to be able to demonstrate progress in implementing the framework to TBS.

The three initial objectives of MAF were to:

  1. Use MAF as the basis for dialogue between TBS and deputy heads on the state of management practices.
  2. Frame TBS’s input into the COSO (Committee of Senior Officials) process for assessing deputy heads’ performance in managing their department or agency.
  3. Frame reporting on management so that the management practices and efforts to improve them could be more readily assessed.

In addition, MAF was expected to help integrate and streamline management reporting and information management within TBS. MAF was seen as a horizontal initiative within TBS that could help break down barriers between program sectors and policy centres.

The Evolution of MAF

Since its introduction in 2003, the collection and assessment of relevant departmental information has been performed by TBS program and policy staff. The MAF Division now plays a coordinating role to ensure that the process and implementation of the MAF assessments are completed efficiently and on time.

From 2003 to 2005, MAF operated without an overall governance structure. There was no committee of senior TBS officials directly responsible for providing guidance and oversight to the project, and there were no mechanisms in place to enforce process requirements such as deadlines. At the start of Round III in 2005, the MAF Division established an informal arrangement with the committee of TBS’s four program sector assistant secretaries (PSAS): the Division could bring problem issues to PSAS, and PSAS would try to resolve them. From the end of Round III to the end of Round IV, a formal governance structure was put in place, with TBS’s Management Policy Oversight Committee (MPOC) tasked with providing overall guidance and oversight.

Following Round III, a MAF Strategy Forum was put in place. Its basic purpose was to provide a venue in which disputes about assessment ratings could be resolved. This arrangement worked well because Strategy Forum did not have the same constraints on its time as MPOC did. What has evolved through the latter part of Round IV and Round V and continuing through Round VI is a far more effective overall governance structure involving POC and the Strategy Forum. POC reserves its time for big MAF issues such as overall direction setting, approval of methodology and timelines and oversight of the assessment process. Strategy Forum now provides the ongoing operational guidance and support that MAF requires, in particular, enforcing deadlines, resolving disputes, and reviewing individual areas of management and overall departmental assessments.

Since its inception, MAF has been an ambitious, large-scale assessment system used by all major federal departments, small agencies and micro-agencies. For every organization, the assessments currently cover 21 areas of management, 68 lines of evidence and some 300 criteria.

This was MAF in 2008, in its sixth assessment round. While there have been many changes in approach and methodology to reflect TBS’s growing experience with the assessment tool and the Secretariat’s desire to improve it in light of problems and opportunities that have been identified, the basics of MAF remain the same as they were in 2003.

The original 10 elements remain in place as do the expectations associated with each of the 10 elements that public service managers should meet. Associated with each of the elements are a number of areas of management designed to give a sense of the scope of each element and to also suggest how progress towards meeting expectations might be assessed. While the number of areas of management has changed over the years, their purpose has not. To gauge whether progress was being made towards meeting the expectations of the 10 elements as described by the areas of management, lines of evidence were developed that also remain in place today.

MAF Round I and Round II

The basic premise of Rounds I and II was that assessments should be based on knowledge and information about departments already available within the Treasury Board Portfolio. Areas of management were not rated during Round I; any ratings would have been subjective, since there were no formal methodologies defining how the information collected would be used to assess the areas of management. In addition, MAF was supposed to be an initiative that did not add to departmental work. In practice, however, TBP analysts often had to contact departmental officials to get information they were missing. Thus, MAF did, in fact, involve departments from the beginning.

MAF Round III

Important methodology changes were made during Round III to the basic premise concerning the information required for assessment purposes. The decision was made to expand the information base to include things that TBP staff should know about departmental management issues. As a result, the number of areas of management increased to 41 for Round III.

Departments were invited to participate in defining the areas of management, which drew them further into MAF-related work. Formal arrangements were also made for the exchange of MAF-related information between TBP and departments. Each department designated a MAF contact person and the information was to be exchanged between the departmental MAF contact and the TBP program analyst for that department. This “single-point-of-contact” system was meant to regularize the information exchange and make it transparent. This would end the process by which different TBP analysts would obtain MAF-related information from different contacts in one department in an uncoordinated fashion.

Another consolidation innovation was the creation of the first formal MAF database. Its purpose was to facilitate the exchange of MAF information and assessment material within TBP and to consolidate all the information electronically. The information gathered during the first two rounds was added to the database to allow comparisons of Round III views about departmental performance with the previous years’ assessments.

During this round, for the first time, departments were allowed to look at and comment on their draft assessment. The areas of management were again rated during Round III as they had been in Round II -- subjectively.

MAF Round IV

The methodology changes made during Round IV addressed concerns that the assessment burden was too heavy, that the assessments were too large, and that the workload for both TBP and departments had increased significantly following the departmental review of draft assessments. For Round IV, the number of areas of management was reduced to 20, and formal methodologies for the assessment of all 20 areas of management were prepared. The methodologies were provided to departments, and departments had the opportunity to provide TBP with information they thought would be relevant to their assessments. This addressed the perception of subjectivity associated with the ratings.

The “single-point-of-contact” rule was strengthened through stricter enforcement and through the establishment of a MAF electronic mailbox. Departmental MAF contacts could speak directly with their MAF program contact in TBP as before. In Round IV, departments were asked to review an electronic version of their assessment in the MAF database rather than receiving a paper copy.

MAF Round V

The MAF methodology was strengthened further in Round V through three major innovations. The first was the introduction of a “maturity model” rating system, whose purpose was to bring further objectivity to the rating process. The second was the development of the MAF portal, accessible to authorized MAF users including TBP staff and departmental contacts, to facilitate the information exchange, assessment and review processes and to increase transparency. The third was the implementation of standardized language to help improve the quality and consistency of written assessments.

MAF Round VI

Round VI remained fairly stable, with only minor changes in the methodology. The decision to hold the methodology steady was made following the announcement of a five-year evaluation of MAF that would take place during the round and that could result in major changes being implemented during Round VII or Round VIII.

One major change was the elimination of a separate draft assessment: for Round VI there was a single draft release that contained the ratings. Another important change was a revised definition used to assess micro-agencies, intended to reduce their reporting burden while maintaining a consistent definition of what constitutes a micro-agency. Other important changes included the test of a risk-based approach in selected areas of management related to the HR component and a limit on the number and size of documents that departments and agencies could upload to the MAF Portal.

Annex G   Bibliography

The following references were used during the conduct of the project as well as in the preparation of this report.

Audit Commission. “Comprehensive Performance Assessment.”

Australian Public Service Commission. “Senior Executive Leadership Capability (SELC) Framework.”

CCAF-FCVI. “Institutional Foundations for Performance Budgeting: The Case of the Government of Canada.” OECD Journal on Budgeting. 2007, Vol. 7, No. 4.

Chartered Institute of Public Finance and Accountancy. The CIPFA FM Model: Statements of Good Practice in Public Financial Management – Getting Started.

Collins, Jim. Good to Great. New York: HarperCollins. 2001.

Conrad, M. “Some Mandarins Merely Going through the Motions of DCRs.” Public Finance, 16 November 2007.

Cresswell, A.M., D. Canestraro, and T.A. Pardo. “A Multi-Dimensional Approach to Digital Government Capability Assessment.” Center for Technology in Government, University at Albany, SUNY, 2008.

Falletta, S. Organizational Diagnostic Models: A Review and Synthesis. Leadersphere, 2005.

Halligan, J. “Accountability in Australia: Control, Paradox and Complexity.” Public Administration Quarterly, Winter 2007.

Harris, L. “Best Value Reviews of Human Resource Services in English Local Government.” Review of Public Personnel Administration, 24(4), 334-347, 2004.

HM Government. Doing the Business: Managing Performance in the Public Sector – An External Perspective. February 2008.

HM Treasury. “HMT DSO Delivery Plan: Professionalising and Modernising the Finance Functions in Government.” July 2008.

HM Treasury. Operational Efficiency Programme: Final Report. April 2009.

HM Treasury. “Management Accountability Framework.” Undated presentation.

HM Treasury. “Risk Management in UK Government.” Undated presentation.

Holkeri, K. and H. Summa. “Evaluation of Public Management Reforms in Finland: From Ad Hoc Studies to Programmatic Approach.” Ministry of Finance, Finland.

Interdepartmental MAF Network. “2009 TBS Evaluation of the Management Accountability Framework: RFP MAF Administration.” 2009.

Interdepartmental MAF Network. “2009 TBS Evaluation of the Management Accountability Framework: Potential Points to Consider.” 2009.

Kaplan, Robert S. and David P. Norton. The Balanced Scorecard: Translating Strategy into Action. Harvard Business Press, 1996.

McNamara, Carter. “Organizational Performance Management.” Authenticity Consulting, 2008.

National Audit Office. Cabinet Office: Assessment of the Capability Review Programme. Report by the Comptroller and Auditor General, HC 123 Session 2008-2009, 5 February 2009.

National Audit Office. The Efficiency Programme: A Second Review of Progress. Report by the Comptroller and Auditor General, HC 156 I & II 2006-2007, 8 February 2007.

National Audit Office. Managing Financial Resources to Deliver Better Public Services. Report by the Comptroller and Auditor General, HC 240 Session 2007-2008, 20 February 2008.

National Audit Office. Managing Financial Resources to Deliver Better Public Services: Survey Results. 2008.

National Audit Office. Value for Money in Public Sector Corporate Services: A Joint Project by the UK Public Sector Audit Agencies. 2007.

Office of the Auditor General of Canada. Report of the Auditor General of Canada to the House of Commons – Chapter 2: Accountability and Ethics in Government. November 2003.

Oliveira, Fatima and Josiane Désilets. “Interdepartmental MAF Network: MAF Round VI Solution Sheet.” Draft document, 26 January 2009.

Platt, Rodney K. “Performance Management.” WorldatWork White Paper. December 2003.

Poage, James L. “Designing Performance Measures for Knowledge Organizations.” Ivey Business Journal, March/April 2002.

Public Audit Forum. Finance Value for Money Indicators Guidance. May 2007.

Public Policy Forum. A Management Accountability Framework for the Federal Public Service: Outcomes Report of Roundtable with Public Administration Experts and Academics. Public Policy Forum Boardroom, Ottawa, Ontario, 16 May 2003.

Reid, Joanne and Victoria Hubbell. “Creating a Performance Culture.” Ivey Business Journal, March/April 2005.

Sims, Harvey. “The Management Accountability Framework – Genesis, Evolution, Current Uses, and Future Prospects: A Paper Prepared for the MAF Directorate.” Sussex Circle, 15 July 2008.

Sims, Harvey. “The Management Accountability Framework: A Paper Prepared for the MAF Directorate.” Sussex Circle, 16 July 2008.

Staes, P. “Study on the Use of the Common Assessment Framework in European Public Administration.” European Institute of Public Administration, 2005.

Staes, P. and N. Thijs. “Quality Management and the European Agenda.” EIPAScope, 2005.

State Services Commission, New Zealand. “The Capability Toolkit: A Tool to Promote and Inform Capability Management.” 2008.

State Services Commission, New Zealand. “Quality Management in Government Responsibility and Accountability: Standards Expected of Chief Public Service Officers.”

Sussex Circle. “MAF Round VI Questions.” 2009.

Sussex Circle. “Management Accountability Framework Limited Review: A Paper Prepared for the MAF Directorate.” 13 June 2007.

Sussex Circle. “Report on Interviews with Deputy Ministers Concerning Management Accountability Framework Round V.” 30 June 2008.

Treasury Board of Canada Secretariat. “Guide to MAF Assessments 2007-2008.” Management Accountability Framework Directorate, 2008.

Treasury Board of Canada Secretariat. Leading Management Practices Handbook. Hampton Inn, Ottawa, 3 October 2008.

Treasury Board of Canada Secretariat. “MAF Post-Mortem: Internal.” 5 June 2008.

Treasury Board of Canada Secretariat. “MAF Process and Timelines.” Internal Training Session with Departments and Agencies, 22 & 26 September 2008.

Treasury Board of Canada Secretariat. “MAF Round VI Draft Ratings: Trends & Issues.” MAF Strategy Forum, 30 January 2009.

Treasury Board of Canada Secretariat. “MAF Round VI September 2008 Presentations, Ratings Descriptions, Methodology Reference and Guidance Documents.” Management Accountability Framework Directorate.

Treasury Board of Canada Secretariat. “MAF Training for TBP Managers and Analysts.” October 2008.

Treasury Board of Canada Secretariat. “MAF VI Methodology Reference Document to Assessing Areas of Management (AoMs) 2008-2009.” Management Accountability Framework Directorate.

Treasury Board of Canada Secretariat. “Management Accountability Framework External Post-Mortem.” 30 May 2008.

Treasury Board of Canada Secretariat. “Management Accountability Framework: Round V Findings.” June 2008.

Treasury Board of Canada Secretariat. “Overview of the Management Accountability Framework.” PricewaterhouseCoopers, December 2008 – March 2009.

Treasury Board of Canada Secretariat. "People Management and MAF VII: A New Approach; Briefing for TBS Priorities and Planning Sector" Presentation dated February 26, 2009

Treasury Board of Canada Secretariat. “Process Issues.” Extended POC Meeting, 3 July 2008.

Treasury Board of Canada Secretariat. “Proposed Changes to Areas of Management Methodology for Round VI (2008-2009).” Presented at Extended POC, 3 July 2008.

Treasury Board of Canada Secretariat. “Responding to MAF Round V Feedback & Post-Mortems.” Extended POC Meeting, 3 July 2008.

Treasury Board of Canada Secretariat. “TBS MAF Application: Round VI Internal Training Sessions.” October 2008.

Treasury Board of Canada Secretariat. “TBS MAF Portal.” Internal Training Session with Departments and Agencies, 22 & 26 September 2008.

The Treasury, New Zealand. Annual Report of the Treasury for the Year Ended 30 June 2008. 2008.

United States Government Accountability Office. “Review of OMB’s Improved Financial Performance Scorecard Process.” Report to the Subcommittee on Government Management, Finance, and Accountability, Committee on Government Reform, House of Representatives, 2006.

 


[1] Five-Year Independent Evaluation of the Management Accountability Framework Request for Proposal.

[2] A complete list of interviewees and consultation participants is included as Annex C.

[3] The MAF Network and SAAN are networks of departmental representatives at various levels aimed at sharing best practices.

[4] “Why is Performance Management so difficult to implement?” Canadian Government Executive, March 2009.

[5] For example, studies conducted by the National Institute of Standards and Technology in the late 1990s indicated that Malcolm Baldrige National Quality Award winners outperform other companies in terms of total financial return.

[6] Platt, Rodney K. “Performance Management.” WorldatWork White Paper, December 2003.

[7] The President’s Management Agenda (PMA) was an initiative of the Bush administration. The Obama administration has identified the need for a Chief Performance Officer, but at this point, it is not clear what role the PMA will play under the new administration.

[8] Organisation for Economic Co-operation and Development (OECD). Public Sector Integrity: A Framework for Assessment, p. 86.

[9] Assessment of the Capability Review programme – Report by the Comptroller and Auditor General.  National Audit Office.  February 2009.

[10] CAF 2006. CAF Resource Centre. European Institute of Public Administration.

[11] Exceptions include the Czech Republic, Slovakia and Romania, which have made the use of the CAF mandatory in an effort to encourage public sector quality management practices.

[12] For example, the Federal Accountability Act, which included provisions for strengthening accountability within government including designating deputy ministers as accounting officers, was passed in December 2006.

[13] Operational Efficiency Programme: Final Report.  HM Treasury.  April 2009.

[14] TBS slide presentation – “MAF Round VI Draft Ratings – Trends and Issues”. January 30, 2009.

[15] Slide Presentation “People Management and MAF VII:  A New Approach; Briefing for TBS Priorities and Planning Sector” – dated February 26, 2009.

[16] This Annex was prepared by the MAF Directorate of Treasury Board Secretariat after completion of the evaluation, and is included in the report for information purposes.

