
Five-Year Evaluation of the Management Accountability Framework



1 Introduction

1.1  Overview

In 2003, the Treasury Board Secretariat (TBS) introduced the Management Accountability Framework (MAF) with the intent of strengthening deputy head and departmental accountability for management. As a performance management framework, MAF1 is intended to:

  • Clarify management expectations for deputy heads to support them in managing their own organizations;
  • Develop a comprehensive and integrated perspective on management issues and challenges and guide TBS engagement with organizations; and
  • Determine enterprise-wide trends and systemic issues in order to set priorities and focus efforts to address them.

To emphasize the importance of sound managerial skills for deputy heads, the MAF assessment results are an input to the Privy Council Office (PCO) process for assessing deputy head performance.

The MAF assessment process is managed by the MAF Directorate in the Priorities and Planning Sector within TBS.  The MAF assessment is conducted annually, and six assessments have been completed since 2003.

1.2  MAF Evaluation

In November 2008, TBS commissioned a five-year evaluation of the MAF.  The objectives of this evaluation were to:

  • Evaluate how TBS is assessing public sector management practices and performance within and across the Federal government (i.e., is MAF relevant, successful and cost-effective?);
  • Compare MAF as a tool for assessing public sector management practices and performance across jurisdictions; and
  • Identify and recommend areas for improvement to MAF as an assessment tool and its supporting reporting requirements, tools and methodologies.

A combined team from PricewaterhouseCoopers LLP (PwC) and Interis Consulting Inc. (Interis) was contracted to complete the evaluation.  The evaluation questions were developed by TBS with input from the deputy head community, and the overall evaluation approach was agreed upon by TBS.

Evaluation Methodology and Approach

The evaluation was initiated with the clear objective of performing an assessment of MAF and developing recommendations for improvement.  The Statement of Work (SOW) developed by TBS identified 23 evaluation questions.  We based our evaluation framework on those questions, developing indicators and evaluation methods to address each question.  As well, to support an integrated analysis, we grouped the evaluation questions into four strategic questions (Annex E identifies the 23 evaluation questions and their grouping under the four strategic questions).

In addressing the evaluation questions, we used multiple sources of evidence, including document and literature reviews; interviews, consultations and roundtables with stakeholder group representatives; an international comparison with comparable jurisdictions; and a costing analysis.

A brief summary of our data gathering approach is as follows:

  • Document/Literature Review:  Documents were provided by TBS personnel and additional literature was researched by the evaluation team.  The literature reviewed, as listed in Annex G, was used as follows:
    • As a basis for analyzing whether MAF is achieving its objectives and for assessing the MAF process and associated tools; feedback on past assessment rounds and process documentation was used to corroborate information gathered from interviews and consultations;
    • To contextualize MAF as a performance management framework, the literature review was used to identify established theories and approaches in the area of performance management; and
    • To assess MAF’s performance relative to comparable models in place in other jurisdictions, the literature review identified the key elements of each model and provided international perspectives and feedback on MAF and the other models, allowing successes and challenges to be compared.  This information corroborated the feedback obtained and the analysis of which elements of these other models might be considered for MAF.  The literature review was also used to assess each framework comparable to MAF within three broad categories: the nature and objective of the framework, the assessment methodology and the outcome of the assessment.
  • Interviews with Key Stakeholders2:  Interviews and consultations were conducted as follows:
    • Departmental and agency interviews were conducted with 27 deputy heads, and four separate group consultations were held: two with the departmental MAF representative (ADM-level) community, one with the MAF Network and one with the Small Agencies Administrators Network (SAAN)3.
    • Interviews and consultations were also held with central agency stakeholder groups, including the Secretary of the Treasury Board, Program Sector Assistant Secretaries, Policy Sector Assistant Secretaries, Area of Management (AoM) leads and the Privy Council Office.
  • International Comparison:  As part of this evaluation, we were asked to compare MAF against specific international comparator jurisdictions.  Our comparison exercise included the United Kingdom, United States and European Union.  An overview of the results is provided in Section 2.0 of this report and details are provided in Annex D.  We also consulted with a team of international representatives within our firms who provided advice and feedback to the evaluation team on the approach and results of the international comparison.
  • Costing Survey and Analysis:  As part of this evaluation, we were asked to conduct a quantitative assessment of the costs of MAF.  To do this, we asked 21 departments and agencies and TBS to provide an estimate of the approximate level of effort for each individual involved in the MAF assessment for Round VI.
  • Walk-through of the MAF Assessment Tool and Process:  In conjunction with the information gathered from the individual interviews and consultations, a detailed review of the MAF assessment process documentation was conducted in order to analyze the effectiveness and efficiency of the process and the associated communication approaches and support mechanisms.

    The MAF Portal and the MAF Assessment System were assessed as tools to effectively and efficiently administer the assessment process.  This analysis was achieved through a walk-through and an in-depth interview on the assessment process and the supporting functionality of the databases.  The information gathered was corroborated by the literature review and the results of interviews and stakeholder consultations.

  • Analysis of the AoMs and Associated Assessment Methodology:  Detailed analysis was performed for each of the 21 AoMs, including:
    • Trends in results by organization over assessment rounds;
    • Comparison of the weighting approaches used to arrive at the overall assessment rating for each AoM (i.e., decision-rule based, weighted average, simple average, points-based; an illustrative sketch of these approaches follows this list);
    • Comparison and analysis of the rating criteria for each AoM – specifically focusing on the trends between Rounds V and VI in the definition of a “Strong” rating and the precision of that definition over time;
    • Year-over-year comparison between Rounds V and VI of the number and nature of changes to the lines of evidence per AoM; and
    • Categorization of AoMs according to the level of subjectivity of the assessment criteria, the use of process- versus outcome-based indicators, and the level of definition of the maturity model within the assessment criteria.
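
To illustrate the distinction among the weighting approaches named above, the following minimal Python sketch shows how the same line-of-evidence ratings can be aggregated under a simple average, a weighted average, a points-based approach and a decision rule. The point values, weights and thresholds are entirely hypothetical and are not drawn from the actual MAF assessment criteria; only the rating labels follow MAF usage.

    # Minimal, hypothetical sketch of the aggregation approaches named above.
    # Point values, weights and thresholds are illustrative only; they are
    # not taken from the MAF assessment criteria.

    RATING_POINTS = {"Strong": 3, "Acceptable": 2,
                     "Opportunity for Improvement": 1, "Attention Required": 0}

    def to_label(score):
        # Map a numeric score back to an illustrative rating label.
        if score >= 2.5:
            return "Strong"
        if score >= 1.5:
            return "Acceptable"
        if score >= 0.5:
            return "Opportunity for Improvement"
        return "Attention Required"

    def simple_average(ratings):
        # "Average": every line of evidence counts equally.
        points = [RATING_POINTS[r] for r in ratings]
        return to_label(sum(points) / len(points))

    def weighted_average(ratings, weights):
        # "Weighted average": some lines of evidence count more than others.
        points = [RATING_POINTS[r] for r in ratings]
        score = sum(w * p for w, p in zip(weights, points)) / sum(weights)
        return to_label(score)

    def points_based(ratings, strong=7, acceptable=4):
        # "Points-based": total points compared against fixed thresholds.
        total = sum(RATING_POINTS[r] for r in ratings)
        if total >= strong:
            return "Strong"
        return "Acceptable" if total >= acceptable else "Attention Required"

    def decision_rule(ratings):
        # "Decision-rule based": here, any "Attention Required" caps the overall rating.
        if "Attention Required" in ratings:
            return "Attention Required"
        return simple_average(ratings)

    lines_of_evidence = ["Strong", "Acceptable", "Attention Required"]
    weights = [0.7, 0.2, 0.1]  # hypothetical weights per line of evidence
    print(simple_average(lines_of_evidence))             # Acceptable
    print(weighted_average(lines_of_evidence, weights))  # Strong
    print(points_based(lines_of_evidence))               # Acceptable
    print(decision_rule(lines_of_evidence))              # Attention Required

With these hypothetical inputs, the same line-of-evidence ratings can produce different overall ratings depending on the aggregation approach, which illustrates why the choice of approach matters for the comparability of AoM results.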

Evidence from each of these sources was analyzed and synthesized to develop the findings, conclusions and recommendations that are included in this report. Figure 1 below depicts, at a high level, the evaluation framework we used, including the linkages between the four strategic questions and the three main evaluation issues expected to be covered by the TB Evaluation Policy.

Figure 1: Evaluation Framework

During the planning, conduct and reporting phases of the evaluation, oversight and guidance were obtained from the following sources:

  • Deputy Minister Steering Committee:  A deputy minister steering committee was established to oversee the evaluation and to provide feedback and advice to the evaluation team at key points.  The members of the deputy minister steering committee are presented in Annex A.
  • Departmental Evaluation Committee:  Throughout the course of the evaluation, members of TBS’ Internal Audit and Evaluation Bureau and the Evaluation Committee provided oversight and input into the evaluation approach and associated deliverables.
  • International Advisory Committee:  A selection of senior international public sector experts was assembled to provide guidance, oversight and advice on key milestones and deliverables during the evaluation process.

Limitations of the Evaluation.  The following are the main constraints we faced in completing the evaluation as planned:

  • Accuracy/Validity of the MAF Assessment Process/Tool:  The evaluation assessed the robustness, relevance, validity and accuracy of the MAF assessment tool, focusing on the methodology, including the lines of evidence, the assessment criteria and the rating approach.  While the evaluation found that the MAF assessment process is robust and the results generally reflect the realities of organizations, it was not able to conclusively determine the validity or accuracy of the tool, as MAF necessarily relies upon both qualitative and quantitative indicators.  Once the recommendations from this evaluation have been implemented and the next phase of MAF maturity has been established, it may be appropriate to consider a more in-depth analysis of the accuracy of the individual AoM results.
  • Cost-Effectiveness of MAF:  As departments and agencies are not tracking costs related to the MAF assessment process, we were unable to complete a comprehensive costing analysis within departments and agencies to support our assessment of cost-effectiveness.  The results of the costing analysis did identify a range of approximate dollar amounts spent on MAF during Round VI.  Consideration should be given to additional work to better identify the costs associated with MAF so that a baseline can be established for comparing trends over time, including, for example, the reduction in reporting burden.
  • Quantitative Evidence to Demonstrate Impact on Management Practices:  Given that the TBS scoring approach and indicators are evolving, we were unable to identify quantitative evidence (based on MAF scores) of whether management practices have improved as a result of the introduction of MAF; for this perspective, we relied on interviews and workshops to gather opinions from departments and agencies and from TBS.

    Related to the above two points, without reliable data on costs or an ability to measure improvement in MAF scores, we were unable to draw any correlation between departmental investment in MAF and MAF performance.

  • International Comparison:  As outlined in Section 2.2 on international comparisons, although we were asked to review approaches in Australia and New Zealand, we determined they were not comparable for our purposes because they either had a different focus or were not implemented public-sector-wide.  Instead, we compared MAF to approaches used in the United Kingdom, the European Union and the United States.

    For the comparator models included in our international research, there was limited data available regarding the costs of the programs.  As well, we were unable to identify any conclusive data relating management performance to the achievement of organizational goals.

  • Validation of the MAF Logic Model:  The logic model for MAF was developed after this study’s methodology was designed and initiated.  Accordingly, it was not possible to identify direct linkages between MAF and improvements to departmental management performance.  In the absence of a logic model, the evaluation team used the objectives of MAF as communicated in the Request for Proposal for this evaluation.  While including the draft logic model in this report was essential in the context of an evaluation, it should be validated with key stakeholders.

MAF is an information-gathering tool used by TBS to assess departmental management performance, which, in turn, is critical to ensuring that programs and services are delivered to the highest standards in the most cost-efficient fashion.

Given the qualitative and subjective nature of MAF as a tool for assessing management performance, the foregoing limitations are reasonable.  The evaluation team was able to address the evaluation questions by engaging relevant stakeholders and corroborating information gathered through multiple lines of evidence.  As a result, it is our view that the evaluation standards, as defined in the TB Evaluation Policy (2001), have been met.

1.3  Purpose of Report

The purpose of this report is to provide:

  • An assessment of MAF performance to date;
  • An analysis of key issues, including governance, the methodology of assessments, the reliability and accuracy of assessments, reporting requirements, systems, process, the treatment of entities, and alignment with the Federal government’s planning cycle and other initiatives; this analysis encompasses international comparisons to other managerial performance models; and
  • Recommendations for proposed changes to MAF, based on the key issues identified, with the intention of improving the functionality and sustainability of MAF.

1.4  Conclusions

As we describe in detail in Section 3 of the report, we have concluded that MAF is successful and is meeting its current objectives.  MAF has clarified management expectations for deputy heads, guided TBS engagement with departments and agencies, and provided departments with an enterprise-wide view of management practices and TBS with a view of government-wide trends and management issues.

Further, we have concluded that MAF is a valuable and relevant management tool that should continue to be maintained and supported.  In stating this we note that, driven by the need to meet increasing expectations for clear demonstration of accountability within the public sector, MAF has evolved significantly since its inception in 2003, from a relatively informal approach to a much more rigorous assessment.  Based on the results of our international comparison, it is reasonable to conclude that had MAF not existed, something similar would have been needed to meet these increased accountability requirements.   

While we were unable to conclude on the cost-effectiveness of MAF, due to limitations in the available costing information, or on the accuracy and validity of the assessment results, we were able to conclude that the MAF assessment process is robust and that the results generally reflect the realities of organizations.

We have identified areas where improvements can be made to enhance the efficiency and effectiveness of the MAF process and the overall validity of the assessment results.  Going forward, TBS may consider developing a costing approach that, once implemented, would establish a baseline for comparing costs in future years.  Further, validation of the MAF logic model with key stakeholders will be essential for its use as a basis for future performance measurement.

Finally, to ensure that MAF continues to meet its objectives and continues to support efforts towards management excellence, we have concluded that MAF should continue its evolution as a performance enabler for deputy heads.  The recommendations outlined in Section 4.0 will support this transition.


