
Five-Year Evaluation of the Management Accountability Framework


Archived information

Archived information is provided for reference, research or recordkeeping purposes. It is not subject to the Government of Canada Web Standards and has not been altered or updated since it was archived. Please contact us to request a format other than those available.

3.3  Are MAF Assessments Robust?

Departmental stakeholders confirmed that recent MAF assessment results generally reflected the state of management practices within their organizations and served to highlight areas of management that required attention.  

“There are hidden costs of NOT doing MAF.” (Source: deputy head interview)

In our view, as a tool for the assessment of departmental managerial performance, the existing assessment approach and methodology for MAF assessments have a solid foundation.  We also noted that they are consistent with the basic elements of assessment models in place within other jurisdictions.  TBS has identified areas of improvement to support the efficiency and effectiveness of the tool and took steps to streamline the process for Round VI; however, stakeholders were quick to express concern over making too many changes, or streamlining so much that the focus on management is diminished.

Over the past six rounds of MAF, departments and agencies have taken strides to embed MAF and strong managerial performance into their organizations, pushing the expectations through to their senior management and management teams.  To ensure continued progression towards this goal and to complement the ongoing “ever-greening” of the assessment tool, enhancements to the assessment methodology and approach should be considered.  Incorporating key elements to ensure the transparency and defensibility of measures, along with process improvements to increase efficiency and effectiveness, will support efforts towards management excellence.

3.  TBS has in place a structured and rigorous process for reviewing the MAF assessment results.

As outlined in Section 3.1 of this report, the current assessment approach uses central agency representatives to complete the assessment, supported by a quality assurance process to ensure the accuracy of the assessment results.

Our discussions with departmental representatives at all levels indicate that most are not aware of the extent of the quality review process within TBS (detailed in Section 3.1 of this report).  While the MAF assessment process begins with an assessment by analysts, it also involves several quality control and approval steps involving more senior staff, and leads to review and approval at the Associate Secretary and Secretary level.  A key step in this process is the Strategy Forum, involving Assistant Secretaries from the Program and Policy Sectors, as well as all AoM leads, where the draft results for all departments and agencies across all AoMs are reviewed and assessed.  This Strategy Forum is conducted twice: once before the finalization of the draft assessment results and again prior to the release of the final assessment and context page.  This is a key internal governance step in the assessment process to ensure there is a consensus within TBS regarding the MAF assessment results.  Ongoing communication with departmental stakeholders will allow organizations to understand and gain confidence in the quality assurance processes already in place.  This could include consideration of how to communicate results to Parliamentarians.

In the consultation process for this evaluation, a majority of departmental stakeholders raised concerns regarding the TBS analysts involved in the assessments.  These concerns relate mainly to the rate of turnover of the analysts from one year to the next, which impacts the level of the analysts’ experience.  We have learned that, consistent with other areas of the Federal government, analyst turnover is part of a larger and ongoing talent management challenge.

4.  There has been increasing stability of the MAF indicators in recent rounds.

“One of the complaints that departments have every year is we change the rules of the game.” (Source: senior TBS official interview)

Feedback we received identified that stability in the measures is desirable.  In an effort to continuously improve the assessment tool, there have been changes to the indicators between each assessment round.  In comparing the overall “Strong” rating for each AoM, we noted that 12 changes, albeit minor ones, were made to the rating definitions between Rounds V and VI.  Overall, however, we noted that effort has been made between Rounds V and VI to stabilize the AoMs and the associated lines of enquiry.

Stability of MAF cycle:  Currently, the timing of the assessment process is aligned with the COSO process of evaluating deputy head performance, as the final MAF assessment results are released in April, ready for the COSO input in May.  However, this timing does not align with the annual planning cycle within departments and agencies.  This results in challenges for departments and agencies in integrating the results of the MAF assessments into the operational plans of the upcoming year. 

Most stakeholder groups, including deputy heads, TBS senior officials, and departmental MAF contacts, identified the timing of the assessment process as an issue; however, it was further noted that any other time of year would also be challenging due to existing commitments.  For example, completing the assessments in the Fall would allow input into departmental operating plans but would also coincide with the preparation of the Departmental Performance Report (DPR), which tends to involve the same individuals who coordinate the MAF submissions.  As a result of the operational requirements for input into the COSO process and the potential conflicts with existing government cycles, we are not recommending changes to the current cycle and timing of MAF at this time.

5.  The current approach of assessing each AoM annually for large departments and agencies does not consider the unique risks and priorities of organizations.

Selection of AoMs to be assessed each year:  All large departments are assessed every year against all 21 AoMs and all lines of enquiry.  This facilitates comparability of results across the Federal government but does not consider the unique aspects of individual departments and agencies.  There are characteristics of organizations for which the impact of the various AoMs might differ, including: size, industry sector, complexity, portfolio relationship with another department and life cycle stage.

Depending on the nature of the department or agency, there are specific AoMs that may have limited applicability; on the other hand, there are some AoMs that would apply to every organization, given the current government priorities.  For example, AoM #14 – Asset Management varies in importance given the nature of the organization. In contrast, due to the government-wide priority of accountability and given the role of the deputy head as the Accounting Officer, AoM #17 – Financial Management and Control is relevant to all departments and agencies.

International jurisdictions that have models similar to MAF have taken varied approaches to tailoring the indicators.  In the UK, the CR indicators are consistent across departments and are not tailored to the specific risks or priorities of the organization.  In contrast, the CAF was designed to be flexible, allowing individual EU countries to tailor the assessment tool.

Due to the public nature of the assessment results, receiving a poor rating in an area of low risk or priority to a department or agency could result in inappropriate decisions related to the allocation of resources to improve the subsequent year’s rating.  A risk-based or priority-based approach to the AoMs, measuring only those that are considered a priority or risk to the individual organization based on its unique characteristics, would encourage management to focus on the appropriate areas.  All 20 of the deputy heads who spoke of a risk-based approach to assessments agreed that it would enable the consideration of the unique risks and priorities of organizations.  This would further support the efforts towards streamlining the AoMs.  These issues were consistently identified during other stakeholder consultations.

Development of priorities for each department and agency:  A common point identified by departmental stakeholders was that the MAF assessment results published on the TBS site are missing context.  For purposes of consistency and comparability, context is not considered in the application of the assessment criteria; however, in recent assessment rounds, care has been taken to provide appropriate context to the assessment results in the context page provided with the final assessment results.

In the US, where assessments take place quarterly, a “double scorecard” approach is in place to not only assess the organization’s current state but also its progress against its implementation plan.

Interpreting an organization’s performance relative to its circumstances (e.g., life cycle stage, progress towards improvement, size) is critical to getting an overall picture; however, setting individual priorities based on the unique circumstances of the organization sets an expectation of what is achievable by the department or agency and holds the deputy head accountable for performance against these expectations.  The identification of these priorities and the performance against them would provide the necessary context to understand the results relative to the unique circumstances of the organization.

6.  While the reporting burden associated with MAF has been reduced, there are further opportunities to reduce the impact of the MAF assessment on the departments and agencies.

MAF assessments are completed on an annual basis for all applicable organizations, with the exception of small and micro agencies.  Round VI included 21 AoMs and 68 lines of evidence for which submission of documentation was required to allow a complete assessment.  In Round V, a total of 16,961 documents were submitted to TBS for MAF assessment purposes.  As a result of feedback from the post-mortems, TBS committed to reducing the reporting burden.  In Round VI, this number was reduced by 50 per cent, in part due to the introduction of document limits per AoM.  This reduction holds true for both the number of documents submitted and the total size (in gigabytes) of the documents.

In acknowledgement of the increased impact that MAF reporting requirements have on small agencies, these agencies are assessed on a three-year rotation.  Micro agencies are only required to complete a questionnaire to inform subsequent interviews with TBS senior officials.

“The reporting burden needs to be assessed . . . to explore options for streamlining the documentation process by maximizing the use of information that is available through other oversight mechanisms / assessments.” (Source: deputy head interview)

In the UK and the US, the current models do not assess all organizations; the focus is on large departments only, limiting the number of organizations assessed.  This approach is not recommended, as the results of the MAF assessments are used as one input into the COSO process and limiting coverage would impact the comprehensive design of the model.

Despite the positive feedback for these efforts, the reporting burden associated with MAF continues to be a challenge for departments and agencies.  Stakeholders believe that the reporting burden, coupled with the public nature of the assessment results, has led to “playing the MAF system” and has not necessarily resulted in improvements to management practices.  A risk- or priority-based approach to MAF assessments that limits measurement to indicators relevant to the individual organization would support a reduction in the reporting burden.

Streamlining reporting requirements:  Streamlining the assessment process by leveraging existing information was a theme identified by all stakeholder groups.  Potential sources to inform MAF were identified, including external Audit Committees, Auditor General reports and other centrally available, objective evidence.  There may also be an opportunity to use other information gathering techniques, e.g., interviews and workshops, to gather evidence on which to complete an assessment, where applicable.

As a progressive step, TBS has taken the initiative to streamline the element of People for Round VII.  The approach taken has resulted in a consolidation of the people management elements currently reflected in AoMs #1, #10, #11 and #21 into one AoM called “People Management”.  The process has been further streamlined to develop measures based on eight existing Key Performance Indicators (KPIs) that impose no reporting requirements beyond what is currently available.  The approach taken was to identify all sources of data requests and all information currently available; indicators of performance were then developed using the existing information.  This approach may be considered for other AoMs in conjunction with the development of “golden rules” (outlined in Section 4, “Recommendations”, of this report) within the performance assessment framework.

Consistency of available guidance for indicators:  The level of guidance provided to organizations to support their understanding of the reporting expectations is inconsistent across the indicators.  For some AoMs, the guidance materials provided to departments and agencies are very detailed and prescriptive, which can add to the complexity of the process.  For large departments, each AoM can be allocated to an individual senior executive; in contrast, smaller agencies must rely on the limited resources they have to respond to MAF, and as such, the amount of guidance material can seem overwhelming and directly affect acceptance of, and satisfaction with, the approach.  Development of “golden rules”, or common principles, including simplicity of guidance documentation with a consistent look and feel, would provide the necessary parameters and allow for consistency of guidance across AoMs.

7. The subjectivity of the MAF assessments is a result of a significant number of qualitative indicators built into the assessment approach.

Subjectivity of indicators and rating definitions:  For those indicators that are necessarily qualitative in nature (i.e., without quantitative measures), a significant component of the assessment relies on judgment by analysts, reviewers and executives within TBS.  For example, AoM #8 “Managing Organizational Change” attempts to measure the “extent to which the organization is engaged when undertaking change management”.  AoM #1 “Values-Based Leadership and Organizational Culture” seeks to measure whether “organizational culture is reflective of public service values and ethics”.  In these instances, there is a risk of challenges in supporting decisions and in ensuring consistent application of judgment.

Qualitative measures are a necessary way of measuring specific elements of managerial performance.  The subjectivity necessarily built into the assessment process should nonetheless be accompanied by well-defined measures that increase the likelihood of consistency across organizations.  In our review of the criteria supporting AoM ratings, we noted that the rating definitions used to assess the evidence are not always well defined, nor are they supplemented with examples or baseline standards to ensure consistency across AoMs.  As an example, when rating an organization for Line of Evidence 1.1 “Leadership demonstration of strong public service values and ethics”, the difference between receiving an ‘opportunity for improvement’ rating and an ‘acceptable’ rating is whether the task was performed ‘sporadically’ or ‘regularly’.  The rating criteria do not define the qualitative indicator (i.e., how often is ‘regularly’?), nor is a measure of effectiveness provided (i.e., is performing a task ‘regularly’ appropriate to the circumstances of a given department?).  By comparison, we noted in our international research that the CAF approach in the EU includes the provision of examples to support consistency in the application of the measures.

Perceived negative connotation of the acceptable rating: Within MAF, each department is given a rating by AoM.  The rating scale, which is used for all AoMs, comprises four levels:

  • Strong;
  • Acceptable;
  • Opportunity for Improvement; and
  • Attention Required.

Through the course of our evaluation, several departmental stakeholders identified that the rating terminology of “acceptable” was not well received or considered appropriate as it is perceived to have a negative connotation.  Since an “acceptable” rating is meant to be positive, based on the associated narratives provided by TBS, there may be an opportunity to adopt alternative terminology to represent this rating on the assessment scale.  For example, the UK CR uses the label “well placed” as the next rating before “strong”.

Inconsistency of scoring across indicators:  While a rating definition is provided for each line of evidence, a scoring methodology is applied to each AoM to arrive at an overall score by AoM.  The score and weighting system used to assign the AoM rating is not consistent across the AoMs.  Approaches used include a weighted average, a straight average or a subjective approach.  Using a consistent framework for weighting assessment scores is considered a best practice and would further improve the transparency, understanding and acceptance of AoM ratings by departments and agencies.
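
To illustrate why this matters, the following minimal sketch (in Python) shows how the same line-of-evidence scores can produce different overall AoM ratings depending on whether a straight or a weighted average is used.  The scores, weights and rating thresholds below are hypothetical and are not drawn from the actual MAF methodology.

    # Illustrative sketch only: scores, weights and thresholds are hypothetical,
    # not taken from the MAF methodology.

    def straight_average(scores):
        """Unweighted mean of the line-of-evidence scores."""
        return sum(scores.values()) / len(scores)

    def weighted_average(scores, weights):
        """Mean of the line-of-evidence scores, weighted by assumed importance."""
        return sum(scores[loe] * weights[loe] for loe in scores) / sum(weights.values())

    def to_rating(score):
        """Map a numeric score (1-4) to the four-level MAF rating scale."""
        if score >= 3.5:
            return "Strong"
        if score >= 2.5:
            return "Acceptable"
        if score >= 1.5:
            return "Opportunity for Improvement"
        return "Attention Required"

    # Hypothetical scores (1-4) for three lines of evidence within one AoM.
    scores = {"LoE 1.1": 4, "LoE 1.2": 2, "LoE 1.3": 2}
    weights = {"LoE 1.1": 0.8, "LoE 1.2": 0.1, "LoE 1.3": 0.1}

    print(to_rating(straight_average(scores)))           # Acceptable (2.67)
    print(to_rating(weighted_average(scores, weights)))  # Strong (3.60)

Under a consistent framework, the chosen scheme (weighted, straight or otherwise) would be fixed and published for every AoM, so that identical evidence could not yield different overall ratings.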

There is an opportunity to harmonize the indicators and the associated rating criteria to increase the consistency of assessment given the qualitative nature of indicators designed to measure managerial performance.  The development of “golden rules” for all AoMs, including measurable indicators that use a standard scoring approach, would provide parameters to the AoM leads within TBS when developing indicators and rating definitions.

8.  Many of the lines of evidence from Round VI are process-based and do not measure the effectiveness of the outcomes resulting from the process.

“MAF is measuring process, not management outcome; MAF tells us WHAT but needs to go further to say HOW to improve.” (Source: deputy head interview)

Our evaluation identified that out of the 68 lines of evidence in Round VI, 24 (35%) are primarily process-based; the remaining lines of evidence attempt to measure results/outcomes or measure compliance to policy requirements.  Process-based indicators may only measure the process associated with the infrastructure and do not necessarily attempt to integrate measures of the effectiveness or the outcomes of the decisions that have been made. 

By using process-based indicators to measure managerial performance, there is an inherent assumption that the process itself will lead to the achievement of results.  This assumption can only be made, however, if it can be determined that the process is optimal for every situation; given this uncertainty, it is difficult to measure managerial performance using process-based indicators.  Outcome-based indicators increase the ability to assess whether management practices had an impact on the quality of decisions and actions.  A logic model can be leveraged to determine these indicators.

As an example, one organization confirmed that to meet the requirements of MAF, they formalized their previously informal committee structure.  As a result, each committee developed terms of reference and formalized the process.  While this was done and was recognized by the MAF score, senior management questioned whether it changed anything regarding the actual effectiveness or outcomes of the mechanism. 
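
The following minimal sketch (in Python) shows how a logic model could be used to distinguish process-based from outcome-based indicators for the committee example above; the stages and indicators are hypothetical and are not drawn from the MAF methodology.

    # Illustrative sketch only: a simplified logic model for the committee
    # example; the stages and indicators below are hypothetical.

    logic_model = {
        "activity (process-based)": "committee terms of reference are documented",
        "output (process-based)": "the committee meets on a defined schedule",
        "outcome (results-based)": "decisions are timelier and are revisited less often",
        "impact (results-based)": "management decisions measurably improve program results",
    }

    # An outcome-based assessment would weight the last two stages,
    # rather than stopping at evidence that the process exists.
    for stage, indicator in logic_model.items():
        print(f"{stage}: {indicator}")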

The development of “golden rules” for all AoMs, including outcome/results-based indicators, would provide parameters to the AoM leads within TBS when developing indicators and rating definitions.

3.4  Is MAF Cost Effective?

“Is MAF cost effective? Two years ago I would have said no. We’re getting there now. We’re starting to see more efficiency and effectiveness in the assessment and more benefits on the managerial side...” (Source: deputy head interview)

The results of our analysis indicate that while improvements have been made in recent rounds to address the reporting burden, thus improving the efficiency of the MAF process, we think there are additional opportunities to reduce the level of effort required for organizations to provide evidence as part of the MAF process.  However, based on the limitations of the costing information available, a conclusion on the costs and the cost effectiveness of MAF cannot be reached.  The section below provides an analysis of the cost information that was available.  Also included are two examples of benefits that departments have identified as resulting from MAF.

9.  Our consultations indicated that most departments and agencies were unsure of the cost effectiveness of MAF. 

Example: Agriculture and Agri-Food Canada (AAFC)

AAFC has historically struggled with the capacity to manage the required TB submissions, a result of the increased funding provided to the department through the 2006 Budget. This led to a MAF rating of “attention required” for AoM 5 in Round IV.

Significant efforts were undertaken by the department to improve the management of TB submissions, including the development of a protocol and criteria to differentiate between priority and less critical TB submissions.

This reduced the number of submissions that required compressed timelines and allowed sufficient time for the department to develop good quality TB submissions.

Further, a control unit within the department has been established to enhance oversight, the challenge function and quality control. All these actions have resulted in an improved rating for AoM 5 since Round IV.

Our key findings resulting from our consultations with departments and agencies are that:

  • While a majority of deputy heads interviewed indicated that there is a significant level of effort required within the department to report on MAF, there are qualitative benefits being realized in the departments as a result of MAF.
  • Deputy heads acknowledged efforts to address the reporting burden, e.g., the reduction in documentation requirements, tools, guidelines and the best practices conference were well received, and asked for continued efforts in this area.
  • From the perspective of small departments and agencies, several interviewees indicated that the level of effort was not justifiable given the limited capacity of their organizations. 
  • Our evidence suggests that there is strong support for streamlining the number of AoMs and making the process more risk-based so as to reduce the level of effort required by departments/agencies to respond to MAF requirements.

In addition to interviewing various stakeholders on the cost effectiveness of MAF, we gathered information on the approximate cost of conducting a MAF assessment from the perspective of both the departments and agencies and TBS.

Example: Atlantic Canada Opportunities Agency (ACOA)

ACOA is an example of an agency that has embedded MAF into the daily management of the organization to ensure ongoing robust management practices. Based on best practices identified across Regional Economic Development Agencies (RDAs), ACOA has developed MAF action plans, holding senior management accountable for their integration into strategic and operational plans, with oversight by a MAF Governance Committee.

A total of 21 departments and agencies were asked to submit information regarding the resources required to respond to the annual MAF assessment process, including full-time equivalents (FTEs), salaries, a total time estimate and the total cost of the MAF assessment for their department.  Of the 21 requests, 14 organizations provided their estimate of the cost of MAF.  A further sampling of three departments/agencies (one small, one medium and one large) was conducted to determine the level of effort, in person days, for 12 MAF activities.  From this information, a total estimated cost of the MAF assessment process was determined for each department.

Cost of MAF to departments/agencies:  Given the range of responses and the feedback provided separately in interviews, it was clear that most (if not all) organizations do not track the cost of the MAF assessment process; as such, the results of the cost analysis are questionable. Departmental representatives were only able to provide an indication of the effort required for Round VI.  In several instances, the MAF contacts in the departments were able to provide FTE information for those individuals who are dedicated to the MAF process but found it difficult to estimate the level of effort for others across the department who provide input to the process.  Further, for the departments that were able to provide information, the data came with a number of caveats and limitations.  As a result, a trend analysis could not be performed, nor could we analyze cost against performance.

Based on the information provided, the graphs below provide an indication of the total cost of MAF for a sample of departments and agencies.

Figure 4: 2008/09 MAF Assessment Estimated Total Cost per Organization

Figure 4 outlines the total estimated cost of responding to MAF as reported by the individual departments and agencies.  As demonstrated, the cost of MAF varies across organizations.  Based on the information available and the limitations of what the information represents across the departments and agencies, a conclusion on the costs of the MAF process, let alone the cost effectiveness of MAF, cannot be determined. 

Due to this result, TBS approached three organizations (one small agency, one medium-sized organization and one large department) to complete the detailed costing template and provide their organization’s time for the 12 MAF activities.  Figure 5 below shows the total estimated cost of conducting the MAF assessment, and it is clear that as the size of the department increases, so does the cost.  The estimated costs of conducting the assessment for the sample departments, by size, were: $47,700 (small), $118,700 (medium) and $373,400 (large).  It is also interesting to note that when the cost is considered as a percentage of the department’s operating budget, the inverse is true: small agencies spend more time and incur a greater cost as a percentage of total budget than large departments and agencies. This finding is consistent with comments that the reporting burden is felt more by the smaller agencies.

Figure 5: 08/09 MAF Assessment Estimated Cost for Three Sample Organizations
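
As a rough illustration of this inverse relationship, the following sketch (in Python) uses the reported MAF costs for the three sampled organizations; the operating budgets, however, are hypothetical figures chosen only to make the pattern visible.

    # Illustrative sketch only: the MAF costs are the reported figures above;
    # the operating budgets are hypothetical assumptions.

    reported_maf_cost = {"small": 47_700, "medium": 118_700, "large": 373_400}
    assumed_budget = {"small": 20_000_000, "medium": 300_000_000, "large": 3_000_000_000}

    for size, cost in reported_maf_cost.items():
        share = 100 * cost / assumed_budget[size]
        print(f"{size}: ${cost:,} MAF cost = {share:.2f}% of assumed operating budget")

    # Under these assumed budgets, the absolute cost rises with size while the
    # share of budget falls (small ~0.24%, medium ~0.04%, large ~0.01%),
    # which is the inverse relationship described above.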

Costs to Treasury Board

The MAF Directorate facilitated a costing exercise within TBS to determine the estimated cost of conducting the MAF assessment.  The cost estimates were collected from the four program sectors and the policy centers responsible for all 21 AoMs, the MAF Directorate (which coordinates the MAF assessment process for the Government of Canada) and TBS’ Corporate Services Branch (which maintains the MAF Assessment System and MAF Portal, as well as supporting corporate communications related to MAF).  It is important to note that, as with the departments and agencies, the information provided consists of estimates, as time spent on MAF-related tasks is not tracked within TBS, nor are the costs of goods or services.

The total level of effort is estimated to be 47.5 FTEs.  Within TBS, the most recent assessment period (Round VI) involved 339 individuals at all levels across the organization.  The majority of the effort for the MAF assessment process is attributed to the work of the policy centers and program sectors, which conduct the assessments for the 21 AoMs.  The MAF Directorate, which coordinates the MAF process, represents 16% of the person effort, while Corporate Services accounts for the remaining 5%.

Figure 6: Estimates of MAF Assessment Costs within TBS by Sector

As a result of this exercise, the cost of the MAF assessment process within TBS is estimated at $5.6 million per year.

3.5  Is MAF Governance Effective?

While the governance over MAF and the MAF assessment process is effective, there is an opportunity to enhance MAF governance through more meaningful engagement with departments and agencies.

In assessing the effectiveness of MAF governance, we were asked to address the roles, responsibilities and approval processes supporting MAF, the appropriateness of TBS’ role in measuring government managerial performance, and whether the introduction of MAF has allowed for systematic and transparent conversations between deputy heads within the Federal government.  Our key findings are that:

  • TBS is the appropriate entity to measure managerial performance within the Federal government; and
  • The opportunity to use MAF for systematic conversations is under-leveraged.

While we touch on roles, responsibilities and approval processes in discussing these findings, we have addressed this in an integrated fashion in section 3.3, “Are MAF Assessments Robust?”.

10.  Treasury Board Secretariat is the appropriate entity to measure managerial performance within the Federal Government.

The role of TBS is to ensure that government is well managed and accountable and that resources are allocated to achieve results. The functions performed by TBS are directed towards the governance, accountability and quality of public sector management.  These functions are intended to have an impact on the efficiency and effectiveness with which government programs and services are delivered.  MAF has allowed TBS to assess managerial performance and policy compliance at a departmental level, facilitate conversations and support efforts towards high organizational managerial performance.

In our international research, we did not identify other countries or models where the central management board plays as active a role in management capability assessments as TBS does in MAF.

  • The UK’s CR employs departmental and external reviewers, although it is important to note that, from a governance perspective, the process is managed centrally by the Cabinet Office. 
  • Countries that employ the EU’s CAF do so through self-assessment.  The European Institute of Public Administration supports internal assessment through the CAF resource centre. This centre monitors application and provides external support, including training for internal reviewers as needed. The resource centre also disseminates information to provide benchmarking assistance for EU member states.
  • In the US, the assessments under the PMA are conducted through self-assessment.  More interestingly, a key distinction between the PMA and MAF is that the Chief Operating Officers of all federal agencies are, as a group, responsible for the implementation of the PMA. This group forms the President’s Management Council, which reviews progress on the PMA on a regular basis.

“TBS is the right place to be doing this because that is their role.” (Source: deputy head interview)

While self-assessment could be seen as a cost effective approach to assessment, there are limitations that would counter the benefits that stakeholders have enjoyed and have come to rely on from MAF.  Self-assessment is typically limited in rigor and quality, which impacts the value of the results and the ability of deputy heads, TBS and COSO to rely on them as inputs into decision-making processes.  Self-assessment may be considered as an internal tool for deputy heads to assess the state of management in their department or agency during off-cycles of the MAF assessment period.

We think, however, that regardless of the approaches used elsewhere, the role TBS plays in MAF is appropriate to its mandate within the Federal government.  This view was confirmed in our consultations with departments and agencies, as well as TBS representatives, where there was a consistent viewpoint that oversight of the MAF, including conducting the assessments, was an appropriate role for TBS.

Beyond the ownership of the MAF process, TBS is responsible for the conduct of the individual assessments.  Leveraging the UK model, the involvement of external reviewers was presented to various stakeholder groups as an option to enhance the level of management expertise within the assessment process.  We do not recommend this approach, as external reviewers are viewed as limited in their ability to understand the Federal government context and environment.  Further, it would add significant expense to the existing MAF process.

From a governance perspective, there may be an opportunity to enhance MAF through the involvement of “peers”, e.g., the participation of a small group of senior executives in the process and in the discussion of system-wide results in an advisory capacity.  This group could be involved both at the outset of the MAF assessment round, through a review of the planned assessment framework, and at the end, to review and advise on the assessment results.  This would have the benefit of providing senior, experienced advice to TBS at key points in the assessment process, as well as to the deputy heads receiving the assessment results, and would, we think, increase the level of acceptance of the assessment process and results across the government.


