
Five-Year Evaluation of the Management Accountability Framework



2  Putting MAF in Context

2.1  Importance of Performance Management Frameworks

To achieve organizational goals, organizations must motivate individuals and groups towards a common purpose; performance management is a means used to get there.  Performance management is defined as a cycle of managerial activities that includes planning, measuring results and using measurement to reflect on the accomplishment of objectives.4  This cycle can be applied at various levels: individual, team/business unit or organizational.  Fundamentally, it involves integrating goal setting, measurement, control, evaluation and feedback in a single ongoing process aimed at fostering continuous improvement in the creation of value.  A performance management framework usually refers to a specific process for accomplishing the performance management cycle.  These frameworks usually include:

  • Planning – identifying goals and objectives;
  • Measuring – assessing the extent to which goals and objectives have been achieved;
  • Reporting – monitoring and reporting of key performance results; and
  • Decision Making – using the findings to drive improvement and manage opportunities in the organization, department, program, policy or project.

Effectively managing business performance in today’s complex, ever-changing, competitive environment is critical to the success of any organization.  In fact, studies5 have shown that organizations that manage performance through measurement are generally far more successful than those that do not.  For example, TBS’ implementation of the MAF has resulted in an increased focus on improving managerial performance within departments.

Organizations can reap a number of benefits by implementing a performance management framework, including:

  • Alignment of activities to the goals and objectives of the organization.
  • Achievement of organizational goals and objectives through appropriate allocation and focus of resources on performance drivers. 
  • Measurement of the organization’s progress towards outcomes. 

These benefits can be realized when an effective performance management system is implemented.  An effective performance management system “encourages employee behaviours that drive positive results whereas an ineffective performance management system, at best, utilizes rewards inefficiently and, at worst, adversely affects the outcomes that it is intended to improve.”6 

2.2  International Comparison

As part of this evaluation, we conducted an international comparison of the MAF against approaches currently used by other public sector jurisdictions.  Specifically, we compared the MAF against management performance frameworks used in the United Kingdom (UK), the European Union (EU) and the United States (US).7 

We reviewed approaches in Australia and New Zealand but determined that they were not comparable for our purposes because they either had a different focus or were not implemented across the public sector.  In Australia, the Executive Leadership Capability Framework is a competency framework, not an overall management performance framework.  In New Zealand, the Capability Toolkit shares similarities with MAF, but the framework is used on a voluntary basis and limited information is available on the impact of its application across the public service.

Internationally, MAF is considered to be one of the more sophisticated systems for assessing management practices.  For example, according to the Organisation for Economic Co-operation and Development (OECD),8 "An exceptional model for widening the framework of performance assessments beyond managerial results to include leadership, people management and organisational environment is provided by the Treasury Board of Canada Secretariat (TBS) Management Accountability Framework (MAF)."

An overview of each framework is provided below, with more details provided in Annex D to this report.

Table 1: Overview of International Comparator Organizations

Purpose
  • MAF (Canada): Clarify management expectations and foster improvements in management.
  • Capability Review (UK): Assess organizational capability to meet the government’s delivery objectives.
  • Common Assessment Framework (EU): Improve public sector quality management.
  • President’s Management Agenda (US): Improve agency performance in five specific areas.

Methodology
  • MAF (Canada): Annual review by central agency (exceptions for small and micro-agencies).
  • Capability Review (UK): External review with six-, 12- and 18-month stock takes; second-round review after two years.
  • Common Assessment Framework (EU): Internal reviews on an optional basis, in line with the results of the self-assessment.
  • President’s Management Agenda (US): Quarterly self-assessments.

Assessors
  • MAF (Canada): TBS personnel review documentation and questionnaires.
  • Capability Review (UK): Three assessors from outside central government and two directors general from other government departments review documentation and conduct interviews; input and analysis provided by the Cabinet Office.
  • Common Assessment Framework (EU): Self-assessment by project teams internal to the organization.
  • President’s Management Agenda (US): The Office of Management and Budget (OMB) reviews internal progress indicators and Green Plans.

Judgment vs. Evidence-based
  • MAF (Canada): Judgment and evidence-based criteria.
  • Capability Review (UK): Judgment and evidence-based criteria.
  • Common Assessment Framework (EU): Judgment and evidence-based.
  • President’s Management Agenda (US): Evidence-based.

Remuneration
  • MAF (Canada): Linkage to performance pay of deputy heads through the Committee of Senior Officials.
  • Capability Review (UK): Linkages to salary decisions of Permanent Secretaries.
  • Common Assessment Framework (EU): Depends on how the framework is applied.
  • President’s Management Agenda (US): Linkages to both budget allocations and Chief Operating Officer evaluations.

Treatment of Entities
  • MAF (Canada): One-size-fits-all (exception: small and micro-agencies assessed against a different standard on a three-year cycle).
  • Capability Review (UK): Consistent approach for all organizations assessed.
  • Common Assessment Framework (EU): Adaptable by organizations.
  • President’s Management Agenda (US): Consistent approach for all organizations assessed.

Publication
  • MAF (Canada): Yes, on the TBS website and optionally on departments’ websites; often includes a management response.
  • Capability Review (UK): Yes, on agencies’ websites and in press releases.
  • Common Assessment Framework (EU): Voluntary submission of scores into the CAF database.
  • President’s Management Agenda (US): Yes, on departments’ websites.

United Kingdom’s Capability Review

The UK’s Capability Review (CR) program was launched by the Cabinet Secretary in 2005.  The CR was the first framework in the UK to systematically assess the organizational capabilities of individual departments and to publish results that can be compared across departments.  Its objective is to improve the capability of the Civil Service to meet the government’s delivery objectives and to be ready for program delivery.

The CR addresses three broad areas of management capability: leadership, strategy and delivery.  Using a standard list of questions and sub-criteria related to 10 elements within the three broad management areas, a review team completes its analysis using a combination of evidence and surveys provided by the department, together with interviews and workshops conducted over a short two- to three-week period.

Using judgment and based on the information gathered during the assessment period, the review team assigns a rating to each element along a five-point scale: strong, well placed, development area, urgent development area or serious concerns.  The results of the review, which set out areas for action, are published.  The debriefing process is an honest, hard-hitting dialogue between the Cabinet Secretary, the Head of the Home Civil Service and the departments, from which action plans are devised.

For the first round of reviews, all major government departments were reviewed in five tranches between July 2006 and December 2007.9  A three-month challenge and six-month, 12-month and, as necessary, 18-month stock takes take place to ensure the department is making progress towards its action plan.  Second-round reviews take place two years after the original first-round review.

The CR is managed and organized by the Cabinet Office.  In an effort to bring a level of independence to the review results, the five-person review teams include two external experts from the private sector and one local government representative, in addition to two representatives from peer government departments.

The CR has been well received in the UK as departments think that it has added value.  The use of a review team external to the department adds a level of independence to the results.  A significant amount of judgment is applied when assessing departments within the CR.  The application of judgment is critical when assessing organizational capability; however, this also increases the subjectivity within the results of the review.  While the reviews rely on judgment, they are based on evidence that can be compared from one department to another and the assessments are reviewed by an independent external moderation panel to ensure consistency.  Finally, although the CR was designed to assess departments’ capability to meet current and future delivery expectations, no direct correlation has been made between capability and delivery performance. 

European Union’s Common Assessment Framework

In 2000, the Common Assessment Framework (CAF) was launched by the European Public Administration Network (EUPAN) as a self-assessment tool to improve public sector quality management within EU member states.  It is based on the premise that excellent results in organizational performance, citizens/customers, people and society are achieved through leadership driving strategy and planning, people, partnerships and resources, and processes.  It looks at the organization from different angles at the same time, providing a comprehensive and integrated approach to organizational performance analysis.10

The CAF is based on the European Foundation for Quality Management (EFQM) model and the model of the German University of Administrative Sciences in Speyer.  It is composed of nine criteria, categorized into two broad segments – Enablers and Results – that define the cause-and-effect relationship between organizational capabilities.  Each criterion is subdivided into sub-criteria, with examples that demonstrate that specific managerial practices are in place.

The CAF is not imposed on organizations within the EU;11 it is a voluntary tool provided to agencies as a means to improve organizational effectiveness.  As a result, the CAF is a self-assessment tool completed within the agency being assessed.  Typically, an internal review team is assembled within the organization and applies judgment about organizational performance against the indicators.  The CAF was not designed to be a one-size-fits-all model.  Flexibility has been built into the approach to allow it to be tailored to the needs of the organization.  For example, a number of specialized CAF models exist for different sectors (e.g., education, local government, police services and border guards).  Finally, the CAF does not prescribe the specific organizational practices that need to be in place, only the broad practices that need to be in place.

The key advantage of the CAF is the flexibility built into the framework, which allows each organization to respect the framework’s basic structure while applying only relevant examples in the self-assessment process.  This ensures the relevance of the results to the organization.  However, the voluntary and self-assessment nature of the tool limits the rigor, independence and comparability of the results over time and across organizations.

United States’ President’s Management Agenda

The President’s Management Agenda (PMA), launched by the Bush Administration in 2001, focuses on improving organizational effectiveness in five key areas of management across the whole of the US government.  The program was established to improve the management and performance of the federal government and deliver results that matter to the American people.  The PMA is not a comprehensive assessment framework; rather, it defines five specific government-wide initiatives thought to be of importance at the time.  These include:  i) Strategic Management of Human Capital, ii) Competitive Sourcing, iii) Improved Financial Performance, iv) Expanded Electronic Government and v) Budget and Performance Integration. 

Each initiative, for all 26 major federal departments and agencies, is scored quarterly using a red, yellow and green system.  The PMA uses a “double scoring” approach whereby the red, yellow, green score is applied for the current status as well as for progress.  Progress in this case is defined as the execution of improvement plans based on the current assessment status.  The PMA uses a self-assessment approach based on the standards that have been defined for each initiative.  The head of each agency (Chief Operating Officer) is responsible for conducting his/her own agency’s assessment on a quarterly basis.

The Office of Management and Budget (OMB – in the Executive Office of the President) is responsible for the PMA; however, the President’s Management Council, made up of the Chief Operating Officers of all 26 agencies, meets regularly to review progress against the PMA for all agencies.

While the PMA consists of a standard process for each organization, its focus is limited to those five key government-wide management priorities.  As with the CAF, the self-assessment approach, which is practical given the quarterly assessment cycle, can limit the rigor and independence of the assessment results; however, this is balanced by the oversight of the President’s Management Council, which would be the equivalent of a deputy head oversight committee.

Further details of each comparative jurisdiction have been provided in Annex D to this report.

Applicability to MAF

Our review of the frameworks in place in other jurisdictions indicates that the tools being used are similar in intent (performance improvement) and methodology (regular diagnosis of organizational capability) to the MAF.  Further, all jurisdictions appear to face similar challenges, including demonstrating the link between the assessment and ultimate performance improvement, defining clear measures, and finding ways of improving the efficiency of the approaches.

There are opportunities for MAF to leverage elements of these frameworks to further enhance the usefulness and sustainability of the tool.  A summary of the key considerations for MAF from the international comparison exercise, which will be further examined in Section 3.0 below, is as follows.

  • Consider conducting interviews and consultations as a means of gathering additional evidence to support the lines of evidence.
  • Incorporate flexibility into the MAF assessment process by facilitating a tailored assessment approach for each organization or category of organization being assessed.
  • Increase the level of engagement between TBS and departments at the senior levels.
  • Assemble or leverage an existing deputy head committee to provide input and support to the development of assessment criteria and horizontal issues identified through MAF.
  • Consider assessment of an organization’s progress towards improvement plans, in addition to the current state assessment.

