Robert Lahey
REL Solutions Inc.
January 2011
This report provides a summary of consultations held with a small sampling of Deputy Heads of federal organizations about the Evaluation function. The objective was to provide information to serve as input to the Treasury Board Secretariat’s (TBS) first annual report to Treasury Board on the ‘Health of the Evaluation Function’.
The TBS Centre of Excellence for Evaluation (CEE) engaged Robert Lahey, President of REL Solutions Inc., to conduct interviews with nine Deputy Heads during September 2010, addressing four broad lines of enquiry:
A list of the Deputy Heads interviewed is provided in Appendix 1.
This report provides a summary overview of feedback from the Deputy Head consultations, reporting against each of the four issues identified above.
The annual Report to the Treasury Board on the Health of the Evaluation Function is a new requirement under the 2009 Policy on Evaluation. It is intended to support the Treasury Board and Secretariat by providing an overview of how implementation of the 2009 Policy is progressing across departments. Operationally, such a report is expected to help both TBS, in its functional leadership role, and departments by identifying needed improvements in the Evaluation function.
While only one input to the Report on the Health of the Evaluation Function, the Deputy Head consultations were expected to provide a rich base of information on how deputies see and use Evaluation within the management of their organizations.
Eighteen months into the transitional implementation of the 2009 Policy on Evaluation, it was expected that organizations would have gained sufficient experience with the new Policy. Moreover, key issues, should they have arisen, would have had time to come to the attention of the most senior official in the organization.
The number of organizations to consult (nine) and the selection of specific organizations were determined by TBS officials.
Though the total sample of nine is quite small, and should therefore not be considered representative of the full universe, the selection covers a broad cross-section of departments and agencies. Listed in Appendix 1, the nine Deputy Heads cover a broad range of government business and circumstances for Evaluation, including: a department with a large Grants and Contributions (G&C) component; a ‘policy’ department; a regional agency; the department with the largest Evaluation unit; a diversity of program types, including social, economic, regulatory and international programs; and organizations where Evaluation was or was not co-located with Internal Audit.
In setting up the meetings, each Deputy Head was formally contacted by the TBS Secretary (see Appendix 2 for the information forwarded by TBS in advance of the meetings with Deputy Heads). The request was for a 30 to 60 minute interview, and it was suggested that no advance preparation was required on the part of the Deputy, as the consultation would be seeking “top-of-mind” reactions to questions about the evaluation function. Only the four broad lines of enquiry were identified.[1]
Deputy Heads made themselves available for interviews that ranged in duration from 45 to 60 minutes. In all cases, the Deputy Heads were more than willing to devote their attention to the discussion on Evaluation and its use.
This summary report is a synthesis of findings as they relate to the four broad lines of enquiry. Given the small sample size, the analysis is more qualitative, but does point out those areas or instances where a majority of Deputy Heads take a similar position.
Deputy Heads alluded to a variety of ways that Evaluation can be and is used within their organizations. Some specifically mentioned the link to strategic planning and performance reporting (as input to the Departmental Performance Report), but the areas that stood out were the following:
A few deputies also reported that not all of the potential uses of Evaluation can necessarily be predicted, as it is sometimes difficult to trace where evaluation findings will have a particular influence (for example, in policy discussions).
What came out as important from the deputies’ comments is that Evaluation needs to be conducted and discussed in a timely fashion and be well connected to senior management discussions. The latter is important to ensure that the Evaluation function anticipates areas of future interest and makes the findings of relevant evaluations available to key audiences.
A number of deputies felt that the new proximity of Strategic Review to Evaluation had, as one Deputy put it, “breathed some new life into Evaluation”.
“Strategic Review has bumped up Evaluation a couple of notches.”
But as some deputies noted, the real impact on Evaluation will come in the future, since the first round of Strategic Reviews had to rely on readily available information; in many cases, evaluations were not available. Now, with a better understanding of where the information gaps lie, the view is that planning for future Evaluation work can take account of the timing of future Reviews.
Overall, many of the deputies felt that Evaluation was making a “solid” contribution to decision-making in their department, though a number also felt that more could be achieved (for reasons discussed in the next section below).
The deputies mentioned several factors that likely contribute to the use of Evaluation:
Most deputies identified elements that they either directly described as a hindrance to the use of Evaluation in their department or that would likely serve as some form of barrier. These include the following:
As noted above, some Deputies have adjusted their Departmental Evaluation Committee (DEC) to more closely mirror their Senior Management Committee. This was apparently done for management reasons, and there is no feedback or sense that it has in any way interfered with the ‘neutrality’[3] of the Evaluation function.
In a number of the organizations consulted, Evaluation is co-located with Internal Audit, largely as a result of the historical positioning of these two functions within the organization. This could have several impacts from a ‘governance’ perspective, largely because of certain requirements of the 2006 Policy on Internal Audit, including: (i) the introduction of members external to government to Departmental Audit Committees; (ii) a heightened profile for the Chief Audit Executive (CAE); and (iii) a renewed importance placed on ‘independence’.
The feedback from deputies was that, to date, the principal need has been for external members sitting on both the Audit and Evaluation Committees to gain a greater understanding of Evaluation and its role in the organization.
Regarding the higher profile given to the Chief Audit Executive by the 2006 IA Policy, instances of co-location would seem to imply that the Evaluation function now has more frequent interactions with the Deputy. It is not clear, however, whether this actually generates more discussion about Evaluation than would otherwise be the case.
Regarding the emphasis put on the independence of the Internal Audit function, deputies seem to make a distinction between the two functions in this regard. There would seem to be a widely held view that Evaluators (unlike Internal Auditors) will and should consult with managers at various points in an evaluation, yet can remain neutral and objective when it comes to analyzing findings and reporting on results.
In general, deputies interviewed did not express any issues or concerns with the neutrality of Evaluation being compromised in their organizations.
The second broad area of consultation with Deputy Heads concerned ‘how the Expenditure Management System (EMS) renewal and the 2009 Policy on Evaluation have impacted the conduct, resourcing and planning for Evaluation in the department’.
Those deputies who alluded to the EMS did so only in the context of ‘strategic review’; all, though, had something to say about the new Policy on Evaluation and their perception of it. Comments ranged from the view that the Policy was well conceived to concerns that some of its requirements are overly ambitious. As one deputy pointed out, however, it is probably too early to determine the full impact of the Policy until departments have experienced a complete cycle.
The implicit linking of Evaluation to Strategic Review was noted by a number of deputies to have raised the profile of their Evaluation function[4]. As one deputy mentioned, it has “raised the bar” for Evaluation, in part because it will force all government programs to systematically address fundamental issues such as ‘program rationale’.
One deputy commented that Evaluation is now “occupying a bit more space” and, “for the right reasons”.
Many of those interviewed, however, were quick to note that, if the bar was going to be raised, more of a leadership role was needed from TBS insofar as Evaluation is concerned. This is elaborated further in Section 4.3.
Five overriding concerns came up a number of times and were expressed in a variety of ways during the interviews with Deputy Heads. In no particular order, they are summarized below.
Human resource (HR) and capacity issues were an overriding concern of the deputies interviewed. Several dimensions of the HR issue surfaced during discussions with Deputy Heads. Even in those organizations where the size of the internal Evaluation unit has grown in recent years (just under one half of the organizations consulted), there is a general concern with the following:
Addressing these issues, according to the deputies consulted, will require TBS to take action on many fronts. Given the nature of Evaluator development, the strategy will of necessity need to be longer-term and perhaps “community-wide”. It was suggested that the experience of the Office of the Comptroller General (OCG) in developing the Internal Audit community might offer some valuable lessons.
A related question concerns the use of external consultants to assist in carrying out Evaluation work for departments. For most deputies, though, this would not resolve the HR problem, for three key reasons:
One question posed to deputies related to the Auditor General’s view, expressed in her recent review of the Evaluation function[6], that departments and agencies should have more in-house Evaluators and rely less on external consultants to conduct evaluations.
Not surprisingly, those deputies whose Evaluation units had increased in size said that they plan to use fewer external consultants in the future. The rest indicated they would hire consultants ‘as needed’ or, as one deputy put it, “it depends” on many factors: availability of special skills, pressing need for delivery of a product, availability of internal Evaluation staff, etc. In other words, all deputies plan to continue using external consultants where it makes sense. That said, the majority of deputies lamented the uneven “quality” of external consultants.
For deputies, the pool of qualified Evaluators will not be topped up to satisfactory levels simply by supplementing internal Evaluators with external consultants. For this reason, Deputy Heads expect that an insufficient pool of skilled Evaluators across the full system will challenge departments in meeting the requirements of the 2009 Policy on Evaluation in the short term, if not longer.
Having identified a series of challenges facing the Evaluation function, Deputy Heads were then asked to identify ‘how TBS/CEE could best support the organization vis-à-vis Evaluation and the increased requirements of the new Policy’.
An overview of the suggestions from Deputy Heads is given below, organized under four broad headings:
A general view of the deputies interviewed is that TBS needs to clarify its expectations regarding the 2009 Policy on Evaluation and provide guidance on cost-effective approaches to meeting the TB Policy requirements.
In place of the perceived “one-size-fits-all” approach of the Policy, a number of deputies offered the following suggestions:
A number of deputies took issue with what they viewed as the inflexibility of the five-year cycle, as well as with the yardstick for what constitutes good performance by a departmental Evaluation function. It was noted that risk analysis should be introduced into the planning of Evaluation coverage, as it is with Internal Audit. As one Deputy (who was quite supportive of Evaluation) noted, if “good performance” is based on compliance with the 20 per cent evaluation coverage target, then there is every chance that Evaluation will be less meaningful to a DM. What is important is for a department to produce meaningful evaluations; that is, evaluations “that make a difference”.
It was mentioned by more than one Deputy that there is an important audience within TBS itself, namely its own Analysts, that requires a better understanding of the expectations for an Evaluation function and for performance measurement in general. A view reflected in several comments from deputies is that TBS Analysts across its various branches need to have an aligned set of expectations when it comes to Evaluation. There is a real concern that this is not presently the case.
More than simply clarifying expectations, a number of deputies noted that specific guidance and tools from TBS would assist in implementing the Policy and eventual use of Evaluation. There was a sense that TBS has generally been playing an “oversight” role and providing too little support. As noted by one Deputy, departments generally feel “left on their own” when it comes to the new Policy.
A range of areas where TBS/CEE could provide more support was identified:
In general, there were many statements about a lack of, or too little, leadership from TBS insofar as the Evaluation function is concerned. Some very pointed statements described the TBS role as “not obvious”; departments were “not getting much guidance from TBS”; there was “not much visibility (for Evaluation) from TBS”; and it is not the case right now that the Evaluation community is “being led; seen to be led; and, seen to be making a difference”. In comparison, for the Internal Audit function in government, the relationship of the OCG with the Chief Audit Executives and departments was described as “much stronger”.
In addition to the guidance suggested for specific areas, noted above, some broader suggestions were advanced relating to the need for a more visible, pro-active and senior champion for Evaluation from the centre. In particular:
Virtually all deputies recognized the major challenge for Evaluation in the area of human resources (HR) and so all had a comment or suggestion on how TBS/CEE might respond:
For most, though not all deputies interviewed, there is an understanding that ‘ongoing performance monitoring’ and ‘evaluation’ are two tools for measuring the performance of an organization’s program base.
On the whole, information generated by ongoing performance monitoring does not appear to be comprehensive; even taken together with information generated through evaluations, the two tools are not currently providing Deputy Heads with a complete picture of organization-wide performance. The implication, according to one Deputy, is that evaluation and performance monitoring are stronger tools for helping the DM manage programs individually than for providing a comprehensive picture of the organization’s performance as a whole.
As noted previously, it seems that the first round of the Strategic Review exercise helped identify for Deputy Heads where the information gaps were in terms of articulating the ‘performance’ of their organizations’ programs. It was noted by some deputies that this helped in determining priorities for future Evaluation work.
In terms of current performance measurement efforts, as a number of deputies noted, ongoing performance ‘monitoring’ is hampered by the difficulties of measuring ‘outcomes’ and of getting the right data in a timely and cost-effective fashion. This appears especially challenging for organizations with a large component of G&C programs, where the period from program inception to evaluation is often only five years. This short period makes it difficult to collect appropriate performance monitoring data, because some program outcomes may take several years to materialize.
Deputies’ feedback would suggest that for many organizations the real challenge lies not with developing their performance measurement framework per se, but with making it operational. A number of organizations do not have all the measurement systems needed to collect data, and it was also suggested that current expectations for measuring outcomes may be unrealistic or, as one Deputy put it, “There is rhetoric around ‘outcomes’ and this often puts a focus on elements that are difficult to measure”.
One Deputy, whose organization was into its “second or third generation of performance reporting”, indicated that it is still a considerable distance away from a fully functioning performance monitoring system.
Even for departments with mature data collection systems, whether individual program managers have comprehensive performance monitoring information appears to vary. While some performance monitoring information is readily available, if new databases need to be developed (or a special study or survey needs to be carried out to collect data), there are cost and resource issues, a critical challenge in today’s world of frozen budgets.
Deputy Heads indicated that Evaluation units play an advisory role to assist program managers in developing their Performance Measurement (PM) systems. Given resourcing issues raised by deputies though, a future challenge that may be faced in organizations is whether the Evaluation group will be resourced to continue to play this advisory role.
For Evaluation, feedback from the Deputies would suggest that, because of the challenges of measuring outcomes through ongoing performance monitoring, there may continue to be many instances where this type of performance information is not readily available as input to evaluation work. Evaluators would then need to collect the data required to assess whether outcomes are being achieved.
This report summarizes the views of nine Deputy Heads consulted in September 2010 on four broad issues related to the use of Evaluation and the 2009 Policy on Evaluation. Such a sample is certainly too small to draw general conclusions for the universe of Deputy Heads of federal organizations. Nevertheless, it does provide a snapshot of opinions from a diverse set of organizations.
On the basis of the nine consultations, it would seem that Deputy Heads are paying more attention to Evaluation than they might have in the past. For some, the recognition that Evaluation can play a useful role in the oversight and management of the organization is certainly not new; they have used Evaluation in a variety of ways within their organizations. In general, though, where the perception and use of Evaluation was probably ‘uneven’ across federal organizations in the past, it has likely risen somewhat in profile and stature across the system.
The 2009 Policy on Evaluation, with its requirement for 100 per cent coverage over a five-year cycle and a pre-determined set of core issues, has drawn mixed reactions. Many have come to recognize that the departmental Evaluation function can play a useful role in assisting the Strategic Review exercise, made more meaningful by the current period of frozen budgets. But most feel that system-wide challenges (particularly the insufficient pool of skilled Evaluators) will stand in the way of most organizations meeting the requirements of the Policy. Moreover, the perceived rigidities within the Policy and its application are expected to raise the overall cost of carrying out Evaluation work. Deputies’ perception of a lack of flexibility also raises for them a tension around whether Evaluation can be conducted in a way that serves all the needs of the department or agency.
In many respects, the requirements of the 2009 Policy on Evaluation for 100 per cent coverage, coupled with the MAF Annual Assessment of each organization on its ‘quality and use of Evaluation’[7], seem to be prompting Deputies to take a harder look at the prospect of evaluating all of their programs in a more systematic fashion.
Deputies have, however, raised a number of concerns with how the Policy on Evaluation is being rolled out and have offered some suggestions to TBS as feedback. Since the 2009 Policy on Evaluation is less than halfway through its ‘transition’ period, this is an opportunity to ‘learn and adjust’ as necessary. That said, as some deputies noted, it is probably still too early to appreciate the full implications of the Policy and its impact on departments and agencies until they have experienced a full cycle.
An important area going forward, noted by several Deputy Heads, is the need for TBS to play a higher-profile leadership role for the Evaluation function. Comparisons were made with the OCG where, in 2006, with the introduction of a new Internal Audit Policy, departments and agencies faced a comprehensive set of new accountability requirements at a time when the professional Audit community was also in need of significant support. Deputies noted that the high profile of the OCG helped ensure that federal organizations responded to the new IA Policy, and that the OCG also worked to support the capacity-building needs of the community. The view is that TBS has not provided that same high-level support for the Evaluation function. More visibility for Evaluation and a high-level TBS champion for the function were cited as important factors that ought to be addressed.
Additionally, the view is that TBS, with its focus on oversight, is directing too little effort towards supporting community development for the Evaluation function. All deputies had something to say about the human resource (HR) issues facing the Evaluation function. There is a widely held view that Evaluation capacity across the system is challenged by too few skilled Evaluators and that TBS needs to play a role in addressing this. A variety of suggestions were advanced.
Comments from deputies also suggest that the Evaluation function faces some challenges in today’s environment that were not present when the IA Policy was introduced. Unlike in 2006, when funding was less of an issue and there was a general appetite for more accountability in the system, the Evaluation function now faces challenges on both counts. Some deputies seem to feel that there are currently too many ‘oversight’ and accountability mechanisms, and several wondered how well aligned the requirements of the various TBS policies and initiatives are, suggesting that TBS needs to do a better job of communicating this. Frozen budgets add a further challenge for Deputy Heads under the 2009 Policy on Evaluation: where best to spend their marginal dollars, on more evaluation and oversight or on program and service delivery?
Finally, based on feedback from most Deputy Heads, there is an apparent communication/information gap surrounding the requirements of the 2009 Policy on Evaluation. Several deputies raised concerns about a lack of flexibility in the way they could apply the Policy to their organization. This is at odds, however, with feedback from TBS/CEE, which indicates that organizations do indeed have flexibility in their application of the Policy. The suggestions advanced by deputies give TBS a useful way to address any issues of misperception and to clarify, where needed, practical implementation issues related to the Policy on Evaluation.
This of course speaks to the broader view of many of the deputies that TBS needs to be more visible insofar as the Evaluation function is concerned, not only providing the needed guidance, but also giving more profile to the Evaluation function.
| Department or Agency | Name of Deputy Head |
| --- | --- |
| Canada Economic Development for Quebec Region | Suzanne Vinet |
| Canadian Heritage | Judith Anne LaRocque |
| Canadian International Development Agency | Margaret Biggs |
| Citizenship and Immigration | Neil Yeates |
| Human Resources and Skills Development Canada | Ian Shugart |
| Industry Canada | Richard Dicerni |
| Indian and Northern Affairs Canada | Michael Wernick |
| Public Safety | William V. Baker |
| Public Works and Government Services Canada | François Guimont |
Consultations are being held with deputy heads of 10 federal departments/agencies to get their perspective on the evaluation function.
This initiative is being led by the Centre of Excellence for Evaluation at TBS to get a better understanding of how the function is progressing across departments and the level of decision support it provides to deputies and senior management. Insights gained from the consultation will provide input to the first annual Report to Treasury Board on the Health of the Evaluation Function.
In-person interviews of between 30 and 60 minutes will be conducted from now to mid-September 2010.
The Treasury Board Secretariat has engaged a consultant, Mr. Robert Lahey, to conduct individual in-person consultations with deputy heads. Mr. Lahey is a retired executive from the federal public service who possesses extensive knowledge of the evaluation function and of results-based management.
The consultation with deputy heads will explore four broad lines of enquiry:
TBS is seeking top-of-mind reactions to questions about the evaluation function and no advance preparation is expected prior to the interview. All the names of the participants will be listed in the consultation report and a copy will be shared with them.
Centre of Excellence for Evaluation (CEE), Expenditure Management Sector. Specific questions can be directed to Anne Routhier, Senior Director (613-952-7447) or Rob Chambers, Director (613-952-3112) at the CEE.
[1]. In preparation for the meetings, the consultant reviewed the most recent MAF assessment and Evaluation capacity assessment conducted by TBS so as to provide some insight into use and capacity for Evaluation within each organization.
[2]. Other contracting issues, including problems created by the ‘standing offer’, were mentioned by some Deputies.
[3]. As defined in the Policy on Evaluation, neutrality is an attribute required of the evaluation function and evaluators that is characterized by impartiality in behaviour and process.
[4]. As mentioned earlier though, the more significant contribution of Evaluation to Strategic Review discussions will likely come with the next round of review.
[5]. Potential problems with hiring via ‘standing offer’ were noted.
[6]. Auditor General of Canada, “Evaluating the Effectiveness of Programs”, Chapter 1 of the 2009 Fall Report of the Auditor General of Canada, Ottawa, 2009.
[7]. ‘Quality and Use of Evaluation’ is one of the Areas of Management (AoM 6) that serves as part of the annual rating criteria under the Management Accountability Framework (MAF) exercise.