ITXM Summit 2024

Questions & Answers

 

 

SAMI KALLIO AND PASI NIKKANEN:

Decision-making with Human-Centric IT Experience 


Q: We struggle to get the response rate from users higher than 20%.  Do you have any advice to increase participation levels so we can get more responses?

A: First of all, a response rate of 20% is actually quite a good result. But the most important reason people answer surveys is that they believe the results are used. Here is a review of academic studies on the topic: Link

So make sure you have a way to communicate to employees what decisions you have made with the data. Secondly, it is very important what you ask. You must focus on things that matter to the user and are really about their experience and efficiency. Test the questions with end-users, not with other IT employees.

 

Q: How can Managed Service Providers implement XLAs when they're only supporting part of the overall IT operations e.g. providing application support, but not infra or Service Desk?

A: The experience of employees is always the result of different smaller experiences that together shape the overall IT experience, but in our tool you can measure different areas of IT, like support and applications, separately. So we believe this can be done, but we would question whether your customers might still want to measure the wider experience, even though you would be providing only part of it. As discussed in Neil's presentation, focus on the ways of working in the XLA agreement more than the exact numbers. I recommend reading the whitepaper we have written about this with Neil: “How to Incorporate XLAs into Outsourcing Contracts”.

 

Q: Would love to hear about best practices and tools/methods for gathering quantitative user sentiment data that don't rely on bombarding users with surveys.

A: We believe that the only way of gathering real experience data is by asking employees. When this is done correctly, people don’t feel bombarded; instead, they feel heard. HappySignals has optimized, and continues to optimize, what to ask so that we get enough actionable data without complicated questions. We also have smart suppression filters built in to make sure the questions are spaced out for end-users, while still providing a steady stream of continuous experience data for IT. Feeling bombarded is less about the surveys themselves and more about whether people feel their feedback is used. There is great reading on this in this McKinsey article
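
As an illustration only (this is not HappySignals' actual implementation, and the function name and the 30-day window are made-up assumptions), a suppression filter of this kind can be as simple as a per-user cooldown check:

```python
from datetime import datetime, timedelta

# Hypothetical cooldown window between surveys to the same user;
# the 30-day value is purely illustrative.
SURVEY_COOLDOWN = timedelta(days=30)

def should_send_survey(last_surveyed_at, now=None):
    """Return True if the user is eligible for a new survey.

    `last_surveyed_at` is the datetime the user last received a
    survey, or None if they have never been surveyed.
    """
    now = now or datetime.utcnow()
    if last_surveyed_at is None:  # never surveyed before
        return True
    return now - last_surveyed_at >= SURVEY_COOLDOWN
```

A real product would layer more signals on top (role, region, how many tickets the user has open), but the spacing-out idea is the core of it.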

 

OSVALDO SANTOS, NESTLÉ:

How Nestlé IT is Modernizing ITSM with Experience Management

 

Q: @Osvaldo: How many people work in your IT Experience Mgmt team?

A: 3 globally and 4 regionally (I made a mistake in the call saying 5; in reality, we are 7, including me).

 

Q: @Osvaldo: What are the success factors in becoming a data-centric IT organization?

A: First, have an ITXM Framework in place, from sending the surveys to closing the loop. Second, build an XMO team that can support stakeholders. Third, make sure change management is implemented, from communications to stakeholders.

 

Q: @Osvaldo: How does it impact the Operational work/collaboration after the creation of the XMO? What was the feeling/feedback from the colleagues who now need to work with XMO?

A: The general feeling is good. Colleagues see the XMO as a reference, driving this change and clarifying the doubts that arise along the way.

 

Q: @Osvaldo: Do you also use other tools, like Nexthink or Systrack, in addition to Happy Signals, to measure, for example, these wifi issues and have the full picture?

A: Yes, there are different tools.

 

Q: @Osvaldo - How did you achieve and keep a high response rate?

A: A high response rate depends on the culture of the company and the value that the surveys bring back to the business/customers. This is what we are driving. I believe the high response rate is intimately connected with our ability to show our improvement plans and to share consistently that we take user feedback into consideration and turn it into concrete actions perceived in the day-to-day. Apart from this, supporting teams in communicating before, during, and after the survey launch, with a single outcome-based narrative, can also help.

 

Q: @Osvaldo - How is the Happiness Score calculated?

A: It’s an NPS-style calculation: % Happy (people giving scores of 9 or 10) minus % Unhappy (people giving scores of 0–6).
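
To make the arithmetic concrete, here is a minimal sketch of the stated formula (this is not HappySignals code, just the NPS-style calculation described above):

```python
def happiness_score(ratings):
    """NPS-style score: % happy (9-10) minus % unhappy (0-6).

    `ratings` is a list of 0-10 integer survey scores; 7 and 8
    are neutral and count only toward the total.
    """
    if not ratings:
        raise ValueError("no ratings given")
    happy = sum(1 for r in ratings if r >= 9)
    unhappy = sum(1 for r in ratings if r <= 6)
    return round(100 * (happy - unhappy) / len(ratings))

# Example: ratings [10, 9, 8, 3] -> 50% happy, 25% unhappy -> score 25.
```

Like NPS, the result ranges from -100 (everyone unhappy) to +100 (everyone happy).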

 

Q: @Osvaldo - What are the channels you used to approach your users so you were able to get your measures?

A: Workplace (by Meta), e-mail, executive forums, BRM market forums, and Teams all-hands meetings.

 

Q: @Osvaldo - What was the response rate (%)?

A: It depends on the measurement area, but overall we have 12%.

 

Q: @Osvaldo - Did you update/change the parameters you were measuring? Also, how did you decide on the parameters for each area?

A: We set up targets/improvement initiatives for each area/product depending on several factors (e.g., business structure, business needs, context, sizing), and then we keep monitoring the HappySignals variables in order to keep improving.

 

CHRIS FAZEY, CAMPARI GROUP:

Leading IT with Experience Management

 

Q: Would you be willing to share your business case around getting a budget to measure Experience - DEX tooling/HappySignals or otherwise?

A: For HappySignals we didn’t have a business case, as it was bundled into the outsourcing agreement we implemented in 2019. The additional cost of the proactive surveys was low for us given the small user base, so it wasn’t a challenge to fund. The DEX tools we introduced alongside vendor or service changes, so essentially we already have the tool and the service contract in place to utilize them. For example, as we move our end-user computing service to a new service vendor, we will lose access to Systrack, which is licensed through the MSP. The version of Intune Analytics we are currently using is free, as we are on E5 M365 licenses, and we recently negotiated a new PC contract with HP that includes HP Proactive Insights in the cost of each new PC, so we didn’t need to consider large upfront costs.

 

Q: How do you balance insight granularity and data privacy in Campari?

A: We are very conscious about data privacy, so when we contract new tools or services we look carefully at this area. As you know, with HappySignals we only see user names etc. via SNOW, and access to this is restricted to set roles in the organization; whenever discussing results with the wider business, we talk about themes and services, not individuals. With regard to the DEX tools, again, we can see who the devices belong to, but we can only see whether there are hardware issues, firmware updates, or patches missing, whether the PC is slow to start up, or whether applications are repeatedly crashing or even causing BSOD events. We can’t see anything regarding personal or even business data as part of these activities.

 

NEIL KEATING, BRIGHT HORSE:

Ensuring Successful XLA Contracts

Q: What about the risk associated with a commercial commitment to qualitative targets, considering it is a softer metric than an SLA and can be subjective and contextual? There is also the risk of it being used as a tool to beat the MSP, hence the risk of putting an XLA in a contract.

A: One of the interesting points with experience is that although individual sentiment is sometimes difficult to manage, as a collective it is actually predictable, and you can use statistical science to show this and get "facts from feelings." Happy to discuss this outside the session if you want. I totally agree with your point that sentiment should not be weaponized - we always speak with suppliers and vendors together and stress that this is open and collaborative; otherwise it won't work.

 

Q: Should HR own the critical XLA KPIs?

A: We are seeing more and more XLAs coming from an HR perspective, so I think this might be coming. In my view, they are still behind IT in experience but are catching up (the last couple of projects we did came from HR).

 

Q: How can an MSP commit to a DeX score in case their service is limited to a "limited" IT scope?

A: I think DeX scores can only work for MSPs who have limited scope if their responsibilities for delivery are included in the DeX data. If not, it might be irrelevant. The key thing is to understand how you contribute to the ambition.

 

Q: How do you differentiate between objective and subjective experience measures when measuring sentiment over months and years if there is a small stakeholder response group, for example?  There are customers who say they feel a certain way depending upon what is going on in their world at a particular time, regardless of what you are delivering to them and how.

A: The key here is to make sure you have both objective and subjective measures in your experience model. O&T (operational and technical) data are objective measures, so sample size matters less there. With the subjective measures (i.e. sentiment), small samples can be a challenge; as much as we can, we want to broaden the sample group so that it is easier to use statistical science and confidence levels to ensure that the answers are factual and applicable. In any size of sample group, it is true that some people may adjust their answers (i.e. their feelings) based on what is happening in their world, and that is exactly what we are looking for; but you can only change the things in your sphere of control or influence. These are the experience indicators you should be concerned with; the others we call external influencers, which you need to know about but cannot change, so we don't measure them. To ease some of this, in smaller groups, you can:

  1. Widen the sample by asking people more than once
  2. Build up to statistical validation over time (which limits those who adjust their feelings)
  3. Focus on the differences between the sample groups/surveys taken (rather than the statistics) – these are probably the differences that users will notice
  4. Measure and match the O&T data, as this will give you a rounded view of experience (rather than relying on the X-data alone)
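
As a sketch of the "statistical science and confidence levels" idea above, here is a minimal example (not from any specific tool; it uses the standard normal-approximation interval for a proportion) showing how the uncertainty on a sentiment share shrinks as the sample group broadens:

```python
import math

def sentiment_interval(positive, total, z=1.96):
    """95% confidence interval (normal approximation) for the
    share of positive responses in a survey sample."""
    if total == 0:
        raise ValueError("empty sample")
    p = positive / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - margin), min(1.0, p + margin)

# With 12 of 20 responses positive, a 60% happy share carries a
# margin of roughly +/-21 points; with 120 of 200, the same 60%
# share is pinned down to roughly +/-7 points.
```

This is why point 1 above (widening the sample, even by re-asking over time) matters: the margin shrinks with the square root of the sample size.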

 

Q: How do you reconcile "commit to a score" with Goodhart's Law?

A: I agree with the Goodhart's Law adage. That is why my recommendation is always to focus on the process of delivering an XLA and not so much on the target. Focusing too much on the target drives the wrong behavior (we have all seen this). I am not sure all procurement teams believe this, though! (Hence the other options.)

 

Q: How do you differentiate between objective and subjective experience measures when measuring sentiment over months and years if there is a small stakeholder response group for example.  There are customers who say they feel a certain way depending upon what is going on in their world at a particular time, regardless of what you are delivering to them and how.

A: Yes, small stakeholder groups are more of a challenge, as you are unlikely to be able to make the results statistically valid, but if you still bridge the X-data to the O&T data, then you can match the sentiment to what actually happened. This gives you context (e.g. they say they were unhappy, but the technology actually worked fine; you can now speak to people and find out why there is a gap). I hope that helps.

 

SAKARI KYRÖ AND SAMI KALLIO:

Sneak Preview of the Global IT Benchmark 2023 Key Findings

 

Q: How does HappySignals stand out from other tools like Systrack or Nexthink? (Some form of survey or feedback from end users can be captured by the existing monitoring tools as well, if the right question is asked.)

A: HappySignals stands out by continuously monitoring end-users' perceptions of IT, unlike Systrack and Nexthink, which are focused on ongoing telemetry, device, and application insights. The survey features in Nexthink and Systrack, similar to HappySignals' "Additional Questions," are just one component of their offerings, not the main focus. Therefore, it's more accurate to compare specific similar features rather than the entire platforms, which fundamentally rely on different primary sources of data and insights for their core functionalities.

For a deeper understanding of how these differences play out in practical applications and the value they bring to IT management and reporting, Sakari is available for a detailed discussion.

 

Q: How does the pricing of HappySignals work?

A: HappySignals pricing is based on the number of employees. Contact our sales team if you wish to have a cost estimate for your organization.

 

Q: Is there a possibility of adding ad-hoc closed questions, like multiple choice?

A: A question back would be, “What is the outcome that you are trying to achieve?” HappySignals does provide targeting options for segments of the userbase (i.e. certain geographies where you may be doing a migration, certain departments, etc.) and the ability to ask open-ended additional questions relevant to a specific scenario. So the answer is yes, you can ask questions outside of the standardized measurement areas, but instead of multiple choice, other means of analysis are used to extract themes and insights.

 

Q: What is the average response rate for all your customers for Services?

A: The average global response rate for incidents is 23%. That said, there are cultural differences both between companies and between countries. Please read our latest benchmark report for a bit more insight into this.

 

Q: Weather jokes aside, external (non digital workplace) factors are very important in influencing the mood of employees. Are you thinking of incorporating this in your scores to account for trend deviations that are outside of your scope of control?

A: The main thing in measuring experiences is to learn how IT should use its always-limited resources to maximize the benefits for the business. If factors outside IT are influencing how people feel in the organisation, or in life in general, that will certainly influence the overall scores a bit. But it doesn’t matter so much, because the main thing is analyzing the results and finding out where to focus the improvements. Even if all scores drop by 10%, the differences between areas stay the same, or about the same. This is one of the reasons why we recommend not using penalties or rewards for XLAs in agreements. And back to the weather: it has been said that in traditional NPS surveys you get 8% better results during great weather. We also see that global feedback on Mondays is lower than feedback gathered on Fridays.

 

Q: Do I understand correctly that XLAs primarily measure when something 'goes wrong on the end-users side’ and how we respond to it (using service desk data) - but not, for example, happiness around current software kit/tools, IT onboarding experience/set-up, access management, procurement processes etc. Things outside of service desk data.

A: XLAs can be, and are, used in applications and other IT areas beyond services. Quite often the discussion around XLAs starts in the Service Desk organization, but the concept and its benefits are not limited to that.

 

Q: How can we control the Service Desk, as long as they are mostly working from home? How can we make sure they are delivering the services we are looking for?

A: "Control" is a bit of a scary word for me, so regarding the management of Service Desk (SD) teams, especially with remote work, I prefer to approach that concept with caution. Instead of relying solely on Service Level Agreements (SLAs), which traditionally dictate performance metrics and can inadvertently prioritize quantity over quality of work, our strategy emphasizes motivation and positive reinforcement for IT staff, starting with service agents.

We believe that focusing on improving the overall experience—how service agents feel about their work and their impact on helping users—is far more effective and fulfilling than measuring success through the volume of tickets resolved. This approach, which may seem unconventional, has proven successful with our clients. We've observed that about 30% of organizations that had SLA penalties in contracts in 2020 have since removed those penalties, reflecting a shift towards valuing quality of service and employee motivation over strict adherence to metrics.

 

Q: Regarding Culture, how can we reach this goal, especially in a huge company, where Management didn't take it in the first place?

A: Good question. Addressing cultural change within a large organization, especially where management has not initially prioritized it, requires a strategic approach. Key to this is demonstrating success through tangible examples and outcomes of decisions made. Sharing stories of progress and achievements can significantly influence the company's direction.

This does require having experience data available in an actionable format. If you are a customer, we are happy to have a discussion with your leaders to hear their thoughts; sometimes just presenting customers’ scores and explaining what we see in the data has helped. The best motivation and boost customers have gotten, though, is from our Happy User Group events. The next one is in June 2024 in Helsinki, Finland.