Treasury Board of Canada Secretariat

ARCHIVED - Deputy Head Consultations on the Evaluation Function - Summary Report



4. Findings

4.1 The Use of Evaluation by Departments/Agencies

A variety of uses for Evaluation

Deputy Heads alluded to a variety of ways that Evaluation can be and is used within their organizations. Some specifically mentioned the link to strategic planning and performance reporting (as input to the Departmental Performance Report), but the areas that stood out were the following:

  • For organizations with G&C programs, Evaluation is an important input to the discussions/decisions around program sunsetting and funding renewal as they go forward to Cabinet.
  • In general, Evaluation can and should serve as a piece of intelligence (though not the only input) supporting departmental Strategic Reviews. In this context, Evaluation evidence and reports can serve as input to documentation identifying ‘low performing, low priority’ areas.
  • Expressed in a variety of ways by several deputies, Evaluation is seen to provide insight on a particular topic or policy area that can be used to inform policy discussions, policy development and decisions. This use highlights the “learning” that Evaluation contributes to the organization or, put another way, the “knowledge” that Evaluation has generated about a particular area.
  • Not mentioned by all, but still deemed particularly useful by some deputies, was the traditional area of “program evaluation” that examines the range of issues around program design, delivery, effectiveness, etc. In their view, the systematic and disciplined approach of bringing disparate information together (sometimes a challenge), and ultimately requiring a Management Response that is tracked, is a “robust approach” that is “on the same footing as Internal Audit”.

A few deputies also reported that not all of the potential uses of Evaluation can necessarily be predicted, as it is sometimes difficult to trace where evaluation findings will have a particular influence (for example, in policy discussions).

What came out as important from the comments of the deputies is that Evaluation needs to be conducted and discussed in a timely fashion and be well connected to senior management discussions. The latter is important to ensure that the Evaluation function anticipates areas of future interest and makes the findings of relevant evaluations available to key audiences.

Evaluation and Strategic Review

A number of deputies felt that the new proximity of Strategic Review to Evaluation had, as one Deputy put it, “breathed some new life into Evaluation”.

“Strategic Review has bumped up Evaluation a couple of notches.”

But as some deputies noted, the real impact on Evaluation will be in the future since the first round of Strategic Reviews had to rely on readily available information. In many cases, evaluations were not available. Now though, with a better understanding of where the information gaps lie, the view is that planning for future Evaluation work can take account of the timing of future Reviews.

Factors supporting the contribution of Evaluation to decision-making

Overall, many of the deputies felt that Evaluation was making a “solid” contribution to decision-making in their department, though a number also felt that more could be achieved (for reasons discussed in the next section below).

Several factors were mentioned by the deputies that likely contribute to the use of Evaluation:

  • The structure of the Departmental Evaluation Committee (DEC). More than one Deputy spoke of how a more inclusive DEC that parallels the Senior Management Team has generated “really good discussion”, bringing, for example, more rigour and probing questions to the development of the organization’s Evaluation Plan. Additionally, in organizations where the DEC mandate includes approval of Terms of Reference for evaluations, the Committee provides a forum that helps ensure that management concerns/issues are reflected in the scope of evaluations.
  • Not only the structure, but the frequency of meetings of the Departmental Evaluation Committee likely also contributes to the use of Evaluation. One Deputy spoke of DEC meetings occurring as frequently as every 4 to 6 weeks, noting that it “keeps Evaluation top of mind in the governance of the organization”.
  • Some deputies spoke of Evaluation as “part of the governance structure”, getting built into decision-making, Memoranda to Cabinet, linking to strategic planning, policy making and reporting. But what makes this happen? One Deputy described having a “systematic approach” to using Evaluation, where it is in effect embedded into a strong “planning, measurement and reporting scheme”. But, as was noted, this requires a commitment to a systematic use of Evaluation within the organization, without which, the same benefits from Evaluation would not likely be realized.
  • The commitment by a Deputy to Evaluation is more than simply chairing the DEC, though this is important (“This sends a signal”). As one Deputy indicated, successful use of Evaluation depends in part on whether or not the organization is actually interested in “using Evaluation”. It was noted that “This likely varies across government departments”.

Potential barriers to the use of Evaluation

Most deputies identified elements that they either directly described as a hindrance to the use of Evaluation in their department or felt would likely serve as some form of barrier. These include the following:

  • Start-up problems associated with the conduct of an Evaluation, ultimately resulting in the Evaluation not being delivered in a timely way. This is in part associated with the general issue of human resources (that is, a view of too few skilled Evaluators across the system) and the need to hire external contractors to support the conduct of Evaluations. It was noted by more than one Deputy that the contracting process has become rather heavy and lengthy, requiring what seems to be an excessive amount of time for administration[2].
  • Evaluation reports not always providing definitive evidence around a particular issue and sometimes lacking clarity around observations and conclusions. Feedback from deputies would suggest a number of different reasons why they believe that this occurs. For instance,
    1. a shortage of skilled Evaluators, both in-house and external. Most deputies noted that there were simply too few skilled Evaluators, both within government ranks and among external consultants, and that, from their perspective, the quality of external consultants was quite uneven;
    2. the difficulty in measuring outcomes and dealing with issues of attribution;
    3. the expectations around what can reasonably be measured via Evaluation - sometimes they are unrealistically high. Some deputies noted that it is important to recognize that Evaluation is “only one input” to decision-making and that other factors, including the political dimension, come into play in making decisions. “Evaluation won’t find all the answers, and people need to understand that!”
  • Evaluation reports that are not forward-looking and don’t provide the needed insight for Evaluation to feed into policy discussions. In such instances, it was noted that an evaluation may provide a diagnostic that frames a particular area, but doesn’t always address ‘What are you going to do about it?’ when issues are identified.
  • Too little time and energy available among program and policy people whose input and engagement is required for the follow-up to an Evaluation. Where and how Evaluation results get used takes time and effort of people who may have too little of both available. As was noted, they are principally focused on “keeping the lights on and doing their job”.
  • A view that there are too few resources across the organization to leverage “what they are learning from the Evaluation function”. For most organizations, the focus on producing quality Evaluation reports leaves no time/resources for the Evaluation group to provide broader, more strategic thinking/analysis of what Evaluation results are showing for the organization.
  • A view that the timing and subject of an evaluation is based more on the sunsetting of a program rather than a more strategic approach to evaluating the programs of a department. In this regard, one Deputy spoke of the importance of aligning the conduct of an Evaluation and the timing of the release of study findings, noting that Evaluations need to be delivered when the window of opportunity is still open.
  • The perception that the 2009 Policy on Evaluation is rigid and lacks flexibility to allow a department to focus on issues that may be outside of the core issues of the Policy. The view that the Policy is a “one-size-fits-all” approach was mentioned by many of the deputies.

Governance and Neutrality

As noted above, some Deputies have adjusted their Departmental Evaluation Committee (DEC) to more closely mirror their Senior Management Committee. This was apparently done for management reasons, and there is no feedback or sense that it has in any way interfered with the ‘neutrality’[3] of the Evaluation function.

In a number of the organizations consulted, Evaluation is co-located with Internal Audit, largely as a result of the historical positioning of these two functions within the organization. This could have several impacts from a ‘governance’ perspective, largely because of certain requirements of the 2006 Policy on Internal Audit, including: (i) the introduction of members external to government to Departmental Audit Committees; (ii) a heightened profile of the Chief Audit Executive (CAE); and (iii) a renewed importance placed on ‘independence’.

The feedback from deputies was that, to date, external members sitting on both the Audit and Evaluation Committees principally needed to gain a greater understanding of Evaluation and its role in the organization.

Regarding the higher profile given to the Chief Audit Executive by the 2006 IA Policy, instances of co-location would seem to imply that the Evaluation function now has more frequent interactions with the Deputy. It is not clear, however, whether this actually generates more discussion about Evaluation than would otherwise be the case.

Regarding the emphasis put on the independence of the Internal Audit function, deputies seem to make a distinction between the two functions in this regard. There would seem to be a widely held view that Evaluators (unlike Internal Auditors) will and should consult with managers at various points in an evaluation, yet can remain neutral and objective when it comes to analyzing findings and reporting on results.

In general, deputies interviewed did not express any issues or concerns with the neutrality of Evaluation being compromised in their organizations.

4.2 The Impact of EMS Renewal and the 2009 Policy on Evaluation in Departments

The second broad area of consultation with Deputy Heads concerned ‘how the Expenditure Management System (EMS) renewal and the 2009 Policy on Evaluation have impacted the conduct, resourcing and planning for Evaluation in the department’.

Those deputies who alluded to the EMS did so only in the context of ‘strategic review’; all, though, had something to say about the new Policy on Evaluation and their perception of it. Comments ranged from the view that the Policy was well conceived to comments that some of the requirements are overly ambitious. As one deputy pointed out, however, it is probably too early to determine the full impact of the Policy until departments have experienced a complete cycle.

Positive Impact of the 2009 Policy on Evaluation

The implicit linking of Evaluation to Strategic Review was noted by a number of deputies to have raised the profile for their Evaluation function[4]. As one deputy mentioned, it has “raised the bar” for Evaluation, in part because it will force all government programs to systematically address fundamental issues such as ‘program rationale’.

One deputy commented that Evaluation is now “occupying a bit more space” and, “for the right reasons”.

Many of those interviewed however were quick to note that, if the bar was going to be raised, more of a leadership role was needed from TBS insofar as Evaluation is concerned. This is elaborated on further in Section 4.3.

Concerns and Challenges Raised by the 2009 Policy on Evaluation

Five overriding concerns came up a number of times and were expressed in a variety of ways during interviews with deputy heads. In no particular order, the concerns of deputies are summarized below.

  1. A concern that there is an overabundance of ‘review-type’ activity that may not be well coordinated by TBS
    • A number of deputies noted, in a variety of ways, that the system has seen the introduction of a very aggressive set of oversight requirements in the last few years (brought in by the Transfer Payment Policy, the Strategic Review cycle, enhanced Parliamentary Committees, the perception of an “overbuilt” set of Internal Audit requirements, and the MAF assessment process). Added to this are the increased requirements of the 2009 Policy on Evaluation, and a number of deputies wondered how well coordinated these various review-type mechanisms may be.
    • While all may have their own elements of merit in terms of helping deputies with ‘sound stewardship’, the view is that, taken together, the oversight activities impose a considerable burden on organizations. Several deputies noted that TBS needs to stand back and review the full set of requirements. In this context, for the Evaluation function, it would be useful to: determine where Evaluation is adding most value; and, position Evaluation and better align the various activities within this broader set of oversight mechanisms that have emerged.
  2. A view among many that the 100 per cent coverage requirement over a five-year cycle likely cannot be achieved and that this should not be the yardstick for a ‘successful’ Evaluation function
    • Some Deputies indicated that their organization would meet the five-year cycle requirements of the Policy on Evaluation. A number were uncertain and a number indicated that this simply would not happen. The majority though expressed concern with the cost of maintaining a five-year Evaluation cycle, a number indicating that it was simply not sustainable.
    • As one Deputy put it, departments will aim “to produce meaningful products that matter, whether it is 10% or 20% coverage”.
    • For most deputies, the requirement of 100 per cent coverage over a five-year cycle will challenge the capacity of the system to deliver since, system-wide, there are simply too few skilled Evaluators to meet the demand.
  3. The higher cost for Evaluation cannot be sustained, especially during a period of budget freeze
    • Some deputies expressed the view that, to meet the five-year cycle requirements, they will be spending too much on Evaluation (in part because of the perceived inflexibility of the Policy on Evaluation).
    • Looking ahead, in an era of frozen budgets, for a number of deputies there is a real question about whether it makes sense to spend the marginal dollar on Evaluation, or on delivery of a service or program.
    • Part of this dilemma relates to the perceived overabundance of ‘oversight’ mechanisms. With the requirements and heavy investment in Internal Audit brought on by the 2006 Internal Audit Policy, some deputies are wondering whether the department can also afford what they perceive would be an expensive Evaluation function.
  4. There is a perception that there is inflexibility in the Policy and that this is adding to the cost of Evaluation. For some, it also means that issues important to departments may not get looked at, thus lessening the usefulness of Evaluation to the department.
    • A number of deputies reacted negatively to what they viewed as a “one-size-fits-all” approach of the 2009 Policy on Evaluation. The view is that more flexibility is needed in terms of the design, scoping and conduct of evaluation studies (particularly ‘large’ versus ‘small’ programs and ‘low risk’ versus ‘high risk’ areas).
    • There is a view that the Policy requires all program components be treated the same way, and it is felt that this will generate too many Evaluation requirements and a far costlier Evaluation function.
    • Additionally, the requirement for evaluations to address the full set of ‘core issues’ does not seem to allow for (does not give “credit” to) addressing issues that fall outside the TBS listing (even when they are issues of importance to the department), or to studies that do not embrace the full complement of issues as defined by the TBS Policy on Evaluation.
    • Some deputies, as noted earlier, feel that they are unable to take full advantage of the broad learning and ‘knowledge’ generated through Evaluation studies and they attribute this in part to the demands of the Policy. The policy requirement to evaluate all of their programs within a five-year cycle does not leave Evaluation units with enough time or resources for extracting the broad lessons from across multiple evaluations.
  5. The pool of trained experienced Evaluators across the system (both internal Evaluators and external consultants) is felt to be too small to meet the increased demands of the new Policy.
    • Most deputies spoke of the imbalance between the demand for Evaluators (increased via the 2009 Policy on Evaluation) and the current supply across the system. The pool of qualified Evaluators and external consultants is too small, which affects the ability to deliver on the demands of the new Policy.
    • Several issues related to HR were raised as real challenges to the Evaluation function system-wide. These are discussed further below.

Human Resources Challenges facing the Evaluation Function

Human resource (HR) and capacity issues were an overriding concern of deputies interviewed. Several dimensions of the human resources issue surfaced during discussions with deputy heads. Even in those organizations where the size of the internal Evaluation unit has grown in recent years (just under one-half of the organizations consulted), there is a general concern with the following:

  • There are too few trained and experienced Evaluators across the system
  • There are challenges for recruiting and retaining qualified Evaluators
  • Hiring is often delayed due to a lengthy staffing process
  • There is a need to ‘re-orient’ many internal Evaluators from their historical ‘Evaluation manager’ role to that of a more active ‘hands-on’ Evaluator

Addressing these issues, according to the deputies consulted, will require TBS to take action on many fronts. Given the nature of the development of Evaluators, the strategy will of necessity need to be longer-term and perhaps “community-wide”. It was suggested that the experience of the OCG in developing the Internal Audit community might offer some valuable lessons.

A related question concerns the use of external consultants to assist in carrying out Evaluation work for departments. For most deputies though, this would not resolve the HR problem for three key reasons:

  • There are too few ‘quality’ consultants across the system
  • Problems associated with the contracting process can seriously delay bringing a consultant on board when needed
  • The contracting process does not always result in the hiring of a ‘quality’ consultant for a particular job[5]

One question posed to deputies related to the Auditor General’s perspective, expressed in her recent review of the Evaluation function[6], that departments and agencies should have more in-house Evaluators and rely less on external consultants to conduct evaluations.

Not surprisingly, those deputies whose Evaluation units had increased in size said that they plan to use fewer external consultants in the future. The rest indicated they would hire consultants ‘as needed’. Or, as one deputy put it, “It depends” on many factors – availability of special skills, pressing need for delivery of a product, availability of internal Evaluation staff, etc. In other words, all deputies are planning on continuing to use external consultants where it makes sense. That said, the majority of the deputies lamented the uneven “quality” of external consultants.

For deputies, the pool of qualified Evaluators will not be brought up to satisfactory levels simply by supplementing internal Evaluators with external consultants. For this reason, Deputy Heads expect that an insufficient pool of skilled Evaluators across the full system will challenge departments in meeting the requirements of the 2009 Policy on Evaluation in the short term, if not longer.

4.3 Suggestions for TBS/CEE Support or Improvement

Having identified a series of challenges facing the Evaluation function, Deputy Heads were then asked to identify ‘how TBS/CEE could best support the organization vis-à-vis Evaluation and the increased requirements of the new Policy’.

An overview of the suggestions from Deputy Heads is given below, organized under four broad headings:

  • Clarifying and/or Revisiting Policy Requirements to Allow for Greater Flexibility
  • Providing Greater Guidance and Support for Evaluation
  • Providing More Leadership and Visibility for the Evaluation Function
  • Addressing Human Resource Challenges

Clarifying and/or Revisiting Policy Requirements to Allow for Greater Flexibility

A general view of the deputies interviewed is that TBS needs to clarify its expectations regarding the 2009 Policy on Evaluation and provide guidance on cost-effective approaches to meeting the TB Policy requirements.

In place of the perceived “one-size-fits-all” approach of the Policy, a number of deputies offered the following suggestions:

  • In dealing with ‘smaller’ versus ‘larger’ programs, consider introducing a tiered approach (say three levels) that reflects a range in terms of expectations for the sophistication of the evaluation design and methodologies to be employed.
  • Allow some flexibility in the evaluation of programs within the PAA context; for example, allowing for higher-level evaluations based on a risk assessment.
  • Allow greater flexibility in the scoping of evaluations.
  • Allow greater flexibility in the determination of the appropriate length of the Evaluation cycle, and determine priorities based on ‘risk’ rather than a rigid five-year cycle.

A number of deputies took issue with what they viewed as the inflexibility of the five-year cycle, as well as with the yardstick used to judge good performance by a departmental Evaluation function. It was noted that risk analysis should be introduced into the planning of Evaluation coverage, as it is with Internal Audit. As one Deputy (who was quite supportive of Evaluation) noted, if “good performance” is based on compliance with meeting 20 per cent evaluation coverage per year, then there is every chance that Evaluation will be less meaningful to a DM. What is important is for a department to produce meaningful evaluations; that is, evaluations “that make a difference”.

It was mentioned by more than one Deputy that there is an important audience within TBS itself – i.e. its own Analysts – that requires a better understanding of the expectations for an Evaluation function, and performance measurement in general. A view reflected in several comments from deputies is that TBS Analysts across its various branches need to have an aligned set of expectations when it comes to Evaluation. There is a real concern that this is not the case presently.

Providing Greater Guidance and Support for Evaluation

More than simply clarifying expectations, a number of deputies noted that specific guidance and tools from TBS would assist in implementing the Policy and eventual use of Evaluation. There was a sense that TBS has generally been playing an “oversight” role and providing too little support. As noted by one Deputy, departments generally feel “left on their own” when it comes to the new Policy.

A range of areas where TBS/CEE could provide more support was identified:

  • Guidance on cost-effective approaches and methodologies for meeting the requirements of the 2009 Policy on Evaluation.
  • More advice/guidance on the standards expected in producing “quality” work, addressing for example, the conduct and process of Evaluation; the writing of reports; follow-up to Evaluations.
  • Providing guidance on the conduct of strategic and horizontal evaluations.
  • Advice to departments on how best to apply information management practices so that knowledge generated by evaluations is retained and easily accessible as part of institutional memory.
  • Providing guidance on TBS expectations and suitable approaches to conducting policy evaluation.

Providing More Leadership and Visibility for the Evaluation Function

In general, many statements related to a lack of, or too little, leadership from TBS insofar as the Evaluation function is concerned. Some very pointed statements mentioned: the TBS role as “not obvious”; that departments were “not getting much guidance from TBS”; “not much visibility (for Evaluation) from TBS”; and the fact that, right now, the Evaluation community is not “being led; seen to be led; and, seen to be making a difference”. In comparison, for the Internal Audit function in government, the relationship of the Office of the Comptroller General (OCG) with the Chief Audit Executive (CAE) and departments was described as “much stronger”.

In addition to the guidance suggested for specific areas, noted above, some broader suggestions were advanced relating to the need for a more visible, pro-active and senior champion for Evaluation from the centre. In particular:

  • TBS needs to put more of a spotlight on Evaluation, playing more of a leadership role across government, to raise its profile and recognition as the Comptroller General does with Internal Audit.
  • Senior levels across government should be targeted for greater interaction – to raise awareness and understanding of Evaluation, why it is important, and, how it can help Deputy Heads. Internal audits are “more visible, understood and (perhaps) appreciated”; the same does not appear to be the case with Evaluation.
  • TBS needs to give more profile to the Evaluation function and to the Head of Evaluation, as the Internal Audit Policy and the OCG have given profile, presence and higher classification to the Chief Audit Executive (CAE).
  • TBS needs to champion Evaluation better by making abundantly clear what constitutes a “robust Evaluation function”, focusing on “What does this mean for a DM?”; and explaining “Why DMs cannot get along without Evaluation – because Evaluation is bringing something important to the table!” Additionally, for practitioners, address in clearer fashion: “How to get there?”
  • TBS needs to provide more support and leadership for the full federal Evaluation community, putting more effort and focus on “community development”.

Addressing Human Resource Challenges

Virtually all deputies recognized the major challenge for Evaluation in the area of human resources (HR) and so all had a comment or suggestion on how TBS/CEE might respond:

  • Training in general: it was recognized that TBS needs to play a leadership role, given the government-wide nature of the HR challenge. Also, in terms of delivering needed training, it was noted that the link to the Canada School could be strengthened.
  • Focused training: several elements within an Evaluator’s skill set were identified, including: project costing; developing Statements of Work; Evaluation design considerations; project management; and communication skills.
  • Training directed at ‘users’ of Evaluation results: it was noted that there is a need for training/orientation directed at Middle Managers since, as one Deputy noted, “TBS does not do a good job in reaching Middle Managers about Evaluation”.
  • Broader-based “community development” approach: a number of deputies recognized that the HR issue needs to be addressed from a base broader than training alone. In effect, issues of recruitment and retention are critical elements of developing a sustainable Evaluation community. Part of the immediate issue is the loss of experienced Evaluators through increasing retirement levels.
  • Generally, it was felt that TBS needs to provide more support and leadership for the full Evaluation community.
  • It was noted that TBS should look to what has been done with other functional communities that have experienced problems with recruitment (too few professionals) and retention/turnover. Innovative approaches to job assignment on a “community-wide basis” may be needed to create more of a career path for Evaluators.
  • For the full community, more focus is needed on community development and networking opportunities to create the synergies, support and ability to share best practices among federal evaluators, using a variety of virtual meetings, networking events, newsletters, etc. to create opportunities for engagement and information sharing across the Evaluation community.
  • Competencies and the Head of Evaluation: TBS/CEE could learn from the experience of the OCG in terms of the program for developing Chief Audit Executives (CAEs); that is, regular training, development and networking sessions. But it was also observed by one Deputy that the federal Evaluation community benefits from recruitment from a variety of disciplines. As such, TBS should not impose too many restrictions, by way of formal requirements, that would interfere with this broad-based entry. On the other hand, senior Evaluators should possess the needed qualifications.

4.4 Tools to Measure Performance of the Organization

For most, though not all deputies interviewed, there is an understanding that ‘ongoing performance monitoring’ and ‘evaluation’ are two tools for measuring the performance of an organization’s program base.

On the whole, information generated by ongoing performance monitoring does not appear to be comprehensive, and even taken together with information generated through evaluations, the two tools do not currently provide Deputy Heads with a complete picture of organization-wide performance. The implication, according to one Deputy, is that evaluation and performance monitoring are stronger tools for helping the DM to manage programs individually than for providing a comprehensive picture of the organization’s performance as a whole.

As noted previously, it seems that the first round of the Strategic Review exercise helped identify for Deputy Heads where the information gaps were in terms of articulating the ‘performance’ of their organizations’ programs. It was noted by some deputies that this helped in determining priorities for future Evaluation work.

In terms of current performance measurement efforts, as a number of deputies noted, ongoing performance ‘monitoring’ is hampered by the difficulties of measuring ‘outcomes’ and of getting the right data in a timely and cost-effective fashion. This appears especially challenging for organizations with a large component of G&C programs, where the period from program inception to evaluation is often only five years. This short period makes it difficult to collect appropriate performance monitoring data, because some of the program outcomes may take several years to materialize.

Deputies’ feedback would suggest that for many organizations the real challenge does not lie with developing their performance measurement framework per se, but with making it operational. A number of organizations do not have all the measurement systems needed to collect data and it was also suggested that current expectations for measuring outcomes may be unrealistic or, as one Deputy put it, “There is rhetoric around ‘outcomes’ and this often puts a focus on elements that are difficult to measure”.

One Deputy, whose organization was into their “second or third generation of performance reporting”, indicated that they are still a considerable distance away from a fully functioning performance monitoring system.

Even for departments with mature data collection systems, whether individual program managers have comprehensive performance monitoring information appears to be variable. While some performance monitoring information is readily available, if new databases need to be developed (or a special study or survey needs to be carried out to collect data), there are cost and resource issues; a critical challenge in today’s world of frozen budgets.

Deputy Heads indicated that Evaluation units play an advisory role to assist program managers in developing their Performance Measurement (PM) systems. Given resourcing issues raised by deputies though, a future challenge that may be faced in organizations is whether the Evaluation group will be resourced to continue to play this advisory role.

For Evaluation, feedback from the Deputies would suggest that, because of the challenges of measuring outcomes through ongoing performance monitoring, there may continue to be many instances where this type of performance information is not readily available as input to evaluation work. In such cases, evaluators will need to collect the data required to assess whether outcomes are being achieved.