Treasury Board of Canada Secretariat

ARCHIVED - Using External Service Delivery Key Performance Indicators


Archived Content

Information identified as archived on the Web is for reference, research or recordkeeping purposes. It has not been altered or updated after the date of archiving. Web pages that are archived on the Web are not subject to the Government of Canada Web Standards. As per the Communications Policy of the Government of Canada, you can request alternate formats on the "Contact Us" page.

MAF Category: Citizen-Focussed Service

Theme: Quality Measures

Once access to service has been gained and wait time has elapsed, the service agent will interact with the caller and deliver some type of service, whether it be a referral, information, or help with processing a specific transaction such as a filing or an address change. Quality measures are indicators that provide analytical evidence in response to questions such as the following: Did the caller get the result needed? Was the interaction delivered with efficiency, clarity, and sensitivity? This type of subjective assessment of a service call requires evaluation of the entire interaction by a third party (e.g. a coach, team leader, or supervisor) either monitoring the call in real time or listening to a recording of the call afterwards.

A secondary technique used to collect quality information is the survey. GoC departments and agencies are directed to assess client satisfaction annually. Most commonly, surveys are conducted on a regular schedule using external customer satisfaction measurement suppliers. The Common Measurements Tool or CMT (see http://www.iccs-isac.org/eng/cmt-about.htm) developed by the Institute for Citizen-Centred Service is the de facto standard for public-sector client satisfaction measurement.

In the private sector, large companies also employ automated voice surveying techniques. Automatic call distributor (ACD) scripts can be used to conduct a short, voluntary survey after call completion, which provides a "near-time" assessment of client satisfaction. Another innovative development is the automated callback. Callers who abandon the agent queue are called back a few hours later by an automated calling program, which invites the client to complete a short survey and then offers to transfer the client to an available agent immediately afterwards. Based on the survey results, the ACD directs clients who choose agent service to an appropriately skilled agent.
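For illustration, the sketch below shows how the results of such a post-callback survey could drive skills-based routing. It is a minimal Python sketch: the field names, skill groups, and escalation rule are assumptions made for the example and do not reflect any particular ACD product or scripting language.

    # Minimal sketch of routing a called-back client to an agent queue based
    # on post-survey answers. Field names, skill groups, and the escalation
    # rule are illustrative assumptions, not features of any specific ACD.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SurveyResult:
        caller_id: str
        topic: str            # e.g. "address_change", "filing", "general_info"
        satisfaction: int     # 1 (very dissatisfied) to 5 (very satisfied)
        wants_agent: bool     # caller accepted the offer to speak with an agent

    # Hypothetical mapping from survey topic to an agent skill group.
    SKILL_GROUPS = {
        "address_change": "transactions",
        "filing": "transactions",
        "general_info": "general_enquiries",
    }

    def route_callback(result: SurveyResult) -> Optional[str]:
        """Return the skill group to transfer the caller to, or None."""
        if not result.wants_agent:
            return None
        # Assumed policy: dissatisfied callers are escalated to a senior queue.
        if result.satisfaction <= 2:
            return "escalation"
        return SKILL_GROUPS.get(result.topic, "general_enquiries")

    print(route_callback(SurveyResult("caller-042", "address_change", 4, True)))
    # -> transactions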

The GoC KPI working group identified two measures for quality. The first, Professionalism, is one of the criteria assessed using the CMT; it includes highly valued factors such as vocabulary, courtesy, and sensitivity. The second, Answer Accuracy, is a difficult measure to compile, as it requires either an exit survey completed by callers (possibly an imprecise technique) or call monitoring, an expensive and resource-intensive commitment.
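As a simple illustration of how Answer Accuracy might be compiled from a monitored-call sample, the sketch below computes the rate of correct answers. The record layout and the pass/fail judgement are assumptions for the example, not a prescribed method.

    # Illustrative calculation of an Answer Accuracy rate from a sample of
    # monitored calls. The record layout and pass/fail judgement are assumed.

    monitored_calls = [
        {"call_id": "C-001", "answer_correct": True},
        {"call_id": "C-002", "answer_correct": True},
        {"call_id": "C-003", "answer_correct": False},
        {"call_id": "C-004", "answer_correct": True},
    ]

    correct = sum(1 for call in monitored_calls if call["answer_correct"])
    answer_accuracy = correct / len(monitored_calls) * 100

    print(f"Answer Accuracy: {answer_accuracy:.1f}% "
          f"({correct} of {len(monitored_calls)} monitored calls)")
    # -> Answer Accuracy: 75.0% (3 of 4 monitored calls)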

Because these measures are collected independently of the ACD, no vendor-specific usage information applies to them.

MAF Category: Citizen-Focussed Service

Theme: Client Satisfaction

Overall Client Satisfaction is an important strategic measure. Readers may remember that one of the early themes of government service transformation was "10 by 5": a goal of improving overall citizen satisfaction by 10 percentage points by 2005. The emphasis on "ClientSat" is necessary and important: the caller's overall perception of satisfaction is a fundamental outcome of service delivery.

Two measures have been proposed for this theme. The overall Client Satisfaction rating as compiled by the CMT is the primary measure. As previously discussed, this measure is determined through periodic surveys of citizens and businesses. Complete details can be found at http://www.iccs-isac.org/eng/cmt-about.htm.
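As a rough illustration of how an overall rating might be compiled from periodic survey responses, the sketch below assumes a five-point rating scale and counts ratings of 4 or 5 as "satisfied"; both the scale and the threshold are assumptions made for the example, not CMT specifications.

    # Sketch of compiling an overall Client Satisfaction rating from survey
    # responses. The five-point scale and "satisfied" threshold are assumed.

    responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]   # one overall rating per respondent

    average_rating = sum(responses) / len(responses)
    satisfied = sum(1 for r in responses if r >= 4)
    percent_satisfied = satisfied / len(responses) * 100

    print(f"Average overall rating: {average_rating:.2f} / 5")           # 3.90 / 5
    print(f"Rating of 4 or 5: {percent_satisfied:.0f}% of respondents")  # 70%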

The second measure, Service Complaints, has been clearly identified by the working group but is difficult to compile at this time. Effective recording and tracking of service complaints requires integration of client contact information across all channels. For example, in 2003, significant complaints were received via e-mail concerning the accessibility of phone service for a government service. Many complaints are sent to ministers' offices, where they are tracked by separately managed ministerial correspondence systems. Even within the phone channel, many programs do not systematically record a call result against a call record. Over the coming years, the performance measurement community will address this measure to determine its contribution, cost, and value to overall service measurement practices.
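To illustrate the kind of cross-channel consolidation this measure would require, the sketch below merges complaint records held in separate channel systems into a single count per program. The source structures, program names, and dates are assumptions for the example only.

    # Sketch of consolidating service complaints recorded in separate channel
    # systems into one count per program. Structures and names are assumed.

    from collections import Counter

    phone_complaints = [{"program": "benefits", "received": "2003-04-02"}]
    email_complaints = [{"program": "benefits", "received": "2003-04-05"},
                        {"program": "filing", "received": "2003-04-07"}]
    ministerial_correspondence = [{"program": "benefits", "received": "2003-04-09"}]

    all_complaints = phone_complaints + email_complaints + ministerial_correspondence
    complaints_by_program = Counter(c["program"] for c in all_complaints)

    for program, count in complaints_by_program.most_common():
        print(f"{program}: {count} complaint(s)")
    # -> benefits: 3 complaint(s)
    #    filing: 1 complaint(s)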

No vendor-specific implementation guidelines are required for either measure.