Monitoring SLAs and measuring customer perception

SLAs should only include those items that can be effectively monitored and measured at a commonly agreed point. Inclusion of items that can't be effectively monitored almost always results in disputes and an eventual loss of faith in the service level management (SLM) process.
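As a minimal sketch of this principle, the fragment below models an SLA item that cannot pass validation without a commonly agreed measurement point. Every name in it (SLATarget, measurement_point, the example values) is illustrative rather than drawn from any particular SLM tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SLATarget:
    """One measurable SLA item (all field names are illustrative)."""
    name: str               # e.g. "Email availability"
    metric: str             # what is measured, e.g. "availability_pct"
    threshold: float        # the agreed target, e.g. 99.5
    measurement_point: str  # the commonly agreed point, e.g. "end-user desktop"

def validate(target: SLATarget) -> None:
    # Items with no agreed measurement point can't be effectively
    # monitored and should never make it into the SLA.
    if not target.measurement_point.strip():
        raise ValueError(f"'{target.name}' lacks an agreed measurement point")

validate(SLATarget("Email availability", "availability_pct", 99.5, "end-user desktop"))
```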

It is essential that monitoring matches the customer's true perception of the service. A service that is available only to the edge of the data center, and not to the end-user, is an incomplete service and provides little value to the business. Monitoring of services must show the service from an end-to-end, or value, perspective. Monitoring must also detect when the failure of a component recorded at the service desk results in the failure of a service, and potentially of an SLA. Further, it should indicate how many end-users are potentially impacted by the failure, which determines the impact, and possibly the urgency, of a given incident.
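A minimal sketch of that component-to-service mapping, assuming a simple in-memory model; the Service class, the user counts, and the priority cut-offs are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    components: set[str]  # component CIs the service depends on end-to-end
    user_count: int       # end-users who consume the service

def affected_services(failed_ci: str, services: list[Service]) -> list[Service]:
    """Services whose end-to-end delivery depends on the failed component."""
    return [s for s in services if failed_ci in s.components]

def impacted_users(failed_ci: str, services: list[Service]) -> int:
    """How many end-users are potentially impacted by the failure."""
    return sum(s.user_count for s in affected_services(failed_ci, services))

def priority(users: int) -> str:
    # Crude user-count-to-priority mapping; a real scheme would weigh
    # impact and urgency separately. Thresholds are invented.
    if users > 1000:
        return "P1"
    if users > 100:
        return "P2"
    return "P3"

email = Service("Email", {"mailserver01", "lan-switch-3"}, user_count=1500)
crm = Service("CRM", {"dbserver02", "lan-switch-3"}, user_count=300)
print(priority(impacted_users("lan-switch-3", [email, crm])))  # -> P1 (1800 users)
```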

This capability requires a well-functioning configuration management database (CMDB) and the ability to connect incidents with both components and services. SLA breaches are first identified at the service desk, so it is very important that appropriate processes and procedures are in place and that they are followed. If they are not, reporting may indicate SLA breaches where none actually occurred, or miss breaches that did occur; either way, the result is bad for IT. It is also critical that SLA information, such as triggers and escalations, matches between the SLM and incident/problem recording systems.
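One way to keep trigger and escalation information consistent is to give both systems a single source of truth, sketched here under the assumption that they can call shared code. The SLAPolicy fields and the numeric values are illustrative, not prescribed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SLAPolicy:
    """A single source of truth for SLA clocks, shared by the SLM
    reporting system and the incident/problem recording system."""
    respond_within_min: int
    resolve_within_min: int
    escalate_at: float  # escalate when this fraction of the clock has elapsed

POLICIES = {  # invented example values
    "P1": SLAPolicy(respond_within_min=15, resolve_within_min=240, escalate_at=0.50),
    "P2": SLAPolicy(respond_within_min=60, resolve_within_min=480, escalate_at=0.75),
}

def should_escalate(priority: str, elapsed_min: int) -> bool:
    p = POLICIES[priority]
    return elapsed_min >= p.escalate_at * p.resolve_within_min

def is_breached(priority: str, elapsed_min: int) -> bool:
    # Because both systems call the same function against the same
    # policy table, their reports cannot disagree about a breach.
    return elapsed_min > POLICIES[priority].resolve_within_min
```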

There are a number of important 'soft' issues, such as customer satisfaction, that can't be monitored by technical or procedural means and that may not match 'hard' monitoring. For instance, even when there have been a number of reported service failures, customers and end-users may still have positive feelings about IT performance. The opposite may also happen: the service is performing normally, but customers and end-users still feel dissatisfied. These disconnects occur primarily in the human interactions between end-users and the service desk, or between customers and process/IT managers. In these situations, perception often outweighs the facts, and human errors taint the perception of IT.

Given the importance of soft issues, the question arises: how do we measure them? The simple answer is to ask the customer or end-user what their perception is. Measuring soft issues, however, is as much an art as a science. Political pollsters, the masters of this art, are famous for asking the same question in slightly different ways and getting completely different results. They are also famous for selecting the wrong group of people to ask.

Measuring soft issues takes continuous effort over time, but it is vital to the long-term success of IT. One way to increase the objectivity of these measures is to set targets for soft issues that can be quantified and improved over time.
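As a rough illustration, the sketch below computes a quantified satisfaction score from survey responses and checks it against both a fixed target and the trend of previous periods. The CSAT-style metric, the 1-5 scale, the 85% target, and the sample responses are all illustrative assumptions, not prescribed values.

```python
from statistics import mean

def csat(responses: list[int]) -> float:
    """Percentage of responses scoring 4 or 5 on a 1-5 scale (illustrative metric)."""
    satisfied = sum(1 for r in responses if r >= 4)
    return 100.0 * satisfied / len(responses)

def trend_is_improving(period_scores: list[float]) -> bool:
    # Simple 'improved over time' check: the latest score versus the
    # average of all preceding periods.
    return period_scores[-1] >= mean(period_scores[:-1])

TARGET = 85.0  # hypothetical target: at least 85% of respondents satisfied

# Illustrative sample responses for three survey periods.
quarters = [[4, 5, 3, 4], [5, 4, 4, 2], [5, 5, 4, 4]]
scores = [csat(q) for q in quarters]          # -> [75.0, 75.0, 100.0]
print(scores[-1] >= TARGET)                   # target met this period?
print(trend_is_improving(scores))             # improving over time?
```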