Five structural findings from the world's largest dataset on employee IT experience — and what they mean for every CIO heading into the AI era.
1.77M
Employee survey responses in the dataset
914K
Post-incident surveys analyzed in 2025
9 + 3
Touchpoints measured across IT, HR and Finance
The Experience Intelligence Report
+50 Overall IT Experience · 3h 18min lost per incident · −24 Mobile devices · 62% Doers in Western Europe · 13.3% of tickets: 2+ reassignments · Walk-in +94 vs Portal +77
§ 4 Where the data comes from
The largest continuous dataset on IT experience in the world.
HappySignals has measured employee experience across ticket-based IT support, digital touchpoints, and enterprise services since 2014. This benchmark draws on responses collected from 60+ countries through the HappySignals Experience Management platform — delivered continuously in the flow of work, never as an annual survey.
Total benchmark population · 2025
1.77M
Individual employee experience responses analyzed for this report. Collected at the point of experience — after a resolved ticket or during periodic touchpoint surveys.
IT Incidents
914k
Responses collected immediately after incident resolution.
IT Requests
750k
Responses covering provisioning, access, and planned fulfilment.
Touchpoint surveys
107k
Periodic responses across 9 digital & human touchpoints.
Enterprise organizations in
130+
countries · From 1,000 to 300,000+ employees. Internal and outsourced desks.
Region · Share · Responses
Western Europe · 40% · 168,155
North America · 26% · 110,987
Asia · 14% · 57,170
Eastern Europe · 6% · 23,723
South America · 5% · 19,870
Other regions (Africa, Oceania, Central America, Middle East) · 9%
Two headline metrics sit at the center of every chapter that follows: Happiness — an NPS-style score from −100 to +100 measuring how employees feel about a service — and Perceived lost time — the productive minutes employees feel they lose because IT is not supporting work as smoothly as it could. Both matter. A well-handled ticket can still cost hours.
ISG methodology
ISG findings are drawn from periodic research studies across individual organizations, including structured qualitative interviews conducted at 3, 6, or 12-month intervals.
§ 4 Read the full chapter text · Where the data comes from
The 2026 Global IT Experience Benchmark Report is built on data collected through the HappySignals platform across enterprise IT organizations worldwide. All data is anonymized and aggregated.
4.1 The dataset at a glance
1.77 million individual survey responses from enterprise employees
1,663,470 of these responses come from ticket-based IT support alone:
913,920 post-incident survey responses
749,550 post-request survey responses
The remaining responses come from other IT measurement areas, including overall IT experience, enterprise applications, laptops and computers, mobile devices, collaboration with IT, office environment, remote work, and service portal
IT support experience surveys are sent automatically after an IT incident or request ticket is resolved or closed
Surveys from other areas of IT, covering laptops, applications, remote work, and other areas, are sent on a scheduled basis and are not tied to a specific IT event
ISG findings are drawn from periodic research studies across individual organizations, including structured qualitative interviews conducted at 3, 6, or 12-month intervals
Each HappySignals survey has up to four components. Every respondent provides a Happiness score from 0 to 10. Beyond that, the surveys can include Experience Indicators, perceived lost time, and open-text comments. This layered structure means the data answers not just how employees felt, but also why, and at what cost in the flow of daily work.
4.2 Response quality
Response quality refers to how many of these four parts are completed in a typical survey response. The headline score is always present, but the additional layers vary by survey type.
For post-incident surveys, the average response rate is about 22%. Around 70% of respondents also select one or more Experience Indicators, around 50% estimate their perceived lost time, and around 30% leave an open-text comment.
For post-request surveys, the average response rate is about 17%. Around 60% of respondents also select one or more Experience Indicators, around 32% estimate their perceived lost time, and around 21% leave an open-text comment.
Survey type · Average response rate · Select Experience Indicators · Estimate perceived lost time · Leave open-text comment
Post-incident · 22% · 70% · 50% · 30%
Post-request · 17% · 60% · 32% · 21%
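As a rough illustration of what these response rates imply about underlying ticket volumes, the report's own figures can be combined in a few lines. The implied ticket counts below are back-of-envelope estimates for illustration only, not figures stated in the report:

```python
# Response counts from the dataset and the stated average response rates
responses = {"incident": 913_920, "request": 749_550}
response_rate = {"incident": 0.22, "request": 0.17}

# Illustrative estimate: how many resolved tickets these responses imply,
# assuming every resolved ticket triggered a survey at the average rate
implied_tickets = {k: round(responses[k] / response_rate[k]) for k in responses}
print(implied_tickets)  # roughly 4.15M incidents and 4.41M requests
```

The point of the sketch is scale: even a 17–22% response rate over millions of tickets yields an unusually large experience dataset.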
A note on methodology differences: HappySignals collects ticket-based data at the moment of resolution, while ISG typically conducts assessments at 3, 6, or 12-month intervals. Where relevant, we note the methodological context. Both approaches are valid and complementary: continuous real-time data and structured periodic research together produce a richer picture than either approach alone.
Happiness scores are reported on an NPS-style scale from -100 to +100. Respondents scoring 9-10 are classified as positive, 7-8 as neutral, and 0-6 as negative. The Happiness score is the percentage of positive respondents minus the percentage of negative respondents.
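The scoring rule above can be expressed as a short function. This is a sketch of the stated methodology, not HappySignals' actual implementation, and the sample ratings are invented for illustration:

```python
def happiness_score(ratings):
    """NPS-style Happiness score on a -100..+100 scale.

    Per the report's methodology: ratings of 9-10 count as positive,
    7-8 as neutral, and 0-6 as negative. The score is the percentage
    of positive respondents minus the percentage of negative ones.
    """
    if not ratings:
        raise ValueError("no ratings")
    positive = sum(1 for r in ratings if r >= 9)
    negative = sum(1 for r in ratings if r <= 6)
    return round(100 * (positive - negative) / len(ratings))

# Invented sample: 6 positive, 2 neutral, 2 negative responses
sample = [10, 9, 9, 10, 9, 9, 8, 7, 5, 3]
print(happiness_score(sample))  # 60% positive - 20% negative = 40
```

Note that neutral responses dilute the score without adding to it, which is why a service can hold a middling Happiness score even with few outright negative ratings.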
Part 01 · § 1–3
Key takeaways & reflections
Five findings · 1.77M responses
§ 1 Key takeaways
Five findings from this year's data that challenge common assumptions about what IT support is for — and what makes employees happy with it.
01 · Incidents
Fixing the ticket is not what makes employees happy.
Across more than 1.5 million incident Experience Indicator selections, only 6.6% relate to solving the ticket. Speed of support (29.2%), attitude (20.5%), and skills (19.0%) matter far more. Resolution is the baseline.
6.6%
of incident Experience Indicator selections relate to "solving the ticket"
02 · Handoffs
Around 1 in 8 tickets has 2+ reassignments.
Tickets with 2+ reassignments make up only 13.3% of total volume — but they are where lost time and Happiness deterioration accelerate most clearly. The reassignment curve is one of the strongest signals in the report.
13.3%
of tickets are reassigned 2 or more times
03 · Outsourcing
60% first-contact resolution. 45% more time lost.
Outsourced first-lines resolve 60% of incidents on first contact (vs. 44% internal). Yet employees report 45% more lost time per incident and 135% more per request. What looks efficient upstream creates waiting downstream.
+135%
more lost time per request, outsourced vs. internal
04 · Feedback
Happy users give 75% more feedback.
Positive responses contain an average of 3.04 Experience Indicators, compared with 1.73 for negative responses. Satisfied employees provide about 75% more structured feedback — richer data on the practices worth protecting and scaling.
3.04 XIs
per positive response vs. 1.73 per negative response
05 · Regional context
The same service meets a different workforce.
In Western Europe, 62% of employees are "Doers" — technically capable, self-reliant, highly sensitive to delay. In Central America, that figure is 28%. The same service model, delivered to a different workforce, will produce very different experience outcomes. There is no universal benchmark without context.
62% vs. 28%
Doers in Western Europe vs. Central America
§ 1 Read the full chapter text · Key takeaways
Five findings from this year's data, presented in a slide-friendly, shareable format.
1.1 Only 6.6% say solving the ticket made the experience good
Across more than 1.5 million IT incident Experience Indicator selections, only 6.6% relate to Solving the ticket. Employees were far more likely to point to Speed of support (29.2%), Service personnel's attitude (20.5%), and Service personnel's skills (19.0%). Resolution is the baseline. Experience is shaped by speed, competence, and communication.
Incident Experience Indicator · Share of selections
Speed of support · 29.2%
Service personnel's attitude · 20.5%
Service personnel's skills · 19.0%
Solving the ticket · 6.6%
1.2 Around 1 in 8 tickets has 2+ reassignments
Repeated reassignments create a disproportionate productivity cost.
Tickets with 2+ reassignments make up only 13.3% of the total volume, but they are where lost time and Happiness deterioration accelerate most clearly. The reassignment curve is one of the strongest signals in the report.
1.3 60% first-contact resolution. 45% more time lost.
Outsourced first-line service desks resolve 60% of incidents on first contact, compared with 44% for internal desks. And yet, employees supported by an outsourced first line report 45% more perceived lost time per incident and 135% more per IT request. What looks efficient in the front-line process can still create much more waiting time for employees afterward.
Benchmark comparison · Internal first line · Outsourced first line
Incidents resolved on first contact · 44% · 60%
Perceived lost time per incident · Baseline · 45% higher
Perceived lost time per IT request · Baseline · 135% higher
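Applying the reported uplifts to a hypothetical internal baseline makes the gap concrete. The baseline minutes below are assumptions chosen purely for illustration, not benchmark figures:

```python
def outsourced_lost_time(internal_minutes, uplift_pct):
    """Apply the benchmark's reported lost-time uplift to an internal baseline."""
    return internal_minutes * (1 + uplift_pct / 100)

# Hypothetical internal baselines (illustrative only)
incident_internal = 180  # minutes lost per incident
request_internal = 120   # minutes lost per request

print(outsourced_lost_time(incident_internal, 45))   # 261.0 minutes
print(outsourced_lost_time(request_internal, 135))   # 282.0 minutes
```

Read this way, a request-handling model that looks efficient at first contact can more than double the waiting cost borne by employees downstream.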
1.4 62% vs 28%: the same service meets a different workforce
In Western Europe, 62% of employees are Doers: technically capable, self-reliant users who are highly sensitive to delay. In Central America, that figure is 28%. The same service model, delivered to a different workforce, will produce very different experience outcomes. There is no universal benchmark without context.
1.5 Happy users give 75% more feedback
Positive responses contain an average of 3.04 Experience Indicators, compared with 1.73 for negative responses. That means satisfied employees provide about 75% more structured feedback on what worked. Better experience does not only improve sentiment. It also creates richer data on the practices worth protecting and scaling.
Response type · Average Experience Indicators selected · Benchmark implication
Positive response · 3.04 · Richer feedback on what worked
Negative response · 1.73 · Narrower problem signal
Difference · About 75% more · Good experiences generate more structured feedback
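The "about 75%" figure follows directly from the two averages reported above:

```python
positive_xis = 3.04  # average Experience Indicators per positive response
negative_xis = 1.73  # average per negative response

extra_feedback_pct = (positive_xis / negative_xis - 1) * 100
print(f"Positive responses carry {extra_feedback_pct:.1f}% more indicators")
# 75.7% more, which the report rounds to "about 75%"
```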
§ 2 · Essay · Reflections on the key findings
An industry improving in places, straining in others.
Taken together, the findings describe an IT support landscape that is improving in some respects while showing new forms of strain in others. Happiness remains relatively strong in several core areas, and some touchpoints clearly improved in 2025. At the same time, perceived lost time and friction remain significant in the places where employees depend on IT most continuously.
One of the clearest reflections from the benchmark is that the human side of IT support matters more than many traditional service metrics imply. Employees do care whether their issue is resolved, but the data shows they care at least as much about how support is delivered: how quickly someone responds, how clearly they communicate, whether they understand the situation, and whether the interaction feels respectful of the employee's time.
Another important reflection is that productivity and satisfaction do not always move together. A service can produce a reasonable Happiness score while still creating a large amount of perceived lost time. This is visible in areas such as service portals, outsourced first-line support, and some enterprise service workflows. That is why Happiness and perceived lost time are most useful when read together. One shows how the experience feels. The other shows how much work disruption employees believe it creates.
The benchmark also reinforces how much context matters. Region, support profile mix, company size, sourcing model, and ticket-routing complexity all shape how IT is experienced. The same service model can produce very different outcomes depending on who the employees are, what kind of work they do, and how many dependencies sit behind the support process. This is one reason broad benchmark comparisons should always be interpreted through local context.
Finally, the report points toward a practical lesson for IT leaders. The organizations most likely to improve are not simply the ones investing in new tools. They are the ones building feedback into how services are designed and managed. Continuous measurement makes it possible to see whether changes are reducing friction, where perceived lost time is accumulating, and which parts of the employee experience are becoming stronger or weaker over time.
§ 3 The state of IT experience · HappySignals Viewpoint
Technology is perhaps not the hardest part. Making it work for humans is.
The majority of the workforce is not keeping pace with the bleeding edge of AI. That gap is where experience problems live — and measuring the human experience is how organizations tell whether their investments are actually making working lives better.
ISG Perspective
There has been a marked increase in Lost Time for Western Europe and Eastern Europe for IT Incidents over the last 5 years compared to other regions where it has either decreased or increased only slightly. It raises an intriguing question: what is happening or not happening in Europe, compared to elsewhere, that is driving up lost time?
When users give a positive score about their experience for IT Incidents that have been resolved, they provide more reasons for doing so compared with users who give a neutral or negative score. One could speculate that the user expresses their relief, even gratitude, for the successful resolution by being extra-generous with their feedback.
The composition of the user population varies according to region. Doers, for example, represent 28% of the users in Central America compared to 62% in Western Europe, while the Supported cohort is highest in Asia (29%) and Middle East (30%) and lowest in Western Europe (9%). Organizations can use this data to tailor the processes and practices of their Service Desk to their particular region and its typical user composition.
Surprisingly, for IT Incidents resolved on first contact, Happiness is higher where there is an outsourced Service Desk (+85) compared to an internal Service Desk (+83).
Outsourced Service Desks are very time-hungry compared to internal Service Desks. For IT Incidents, in the Outsourced case, average Lost Time is 45% greater, and for IT Requests, 135% greater.
ISG · Research perspective on the 2026 benchmark
§ 5 Overall IT Experience
Happiness held. Lost time crept up.
Overall IT Experience climbed from +34 in 2022 to +50 by 2024, and held at +50 in 2025. At the same time, average perceived lost time per incident has grown from 3h 3min in 2021 to 3h 18min in 2025.
Happiness tells you how it feels. Lost time tells you what it costs. Both are necessary.
+50
Overall IT Experience — 2025
▲ +16 points vs. 2022 (+34) · flat year-on-year
3h 18m
Average perceived lost time per IT incident, 2025
▲ Rising steadily since 2021 (3h 3min)
§ 5.1 · Numbers · Overall IT experience
Overall IT experience · Happiness · Periodic survey
Metric · 2022 · 2023 · 2024 · 2025
Happiness · +34 · +39 · +50 · +50
§ 5 Read the full chapter text · Overall IT experience
5.1 Numbers
Metric · 2022 · 2023 · 2024 · 2025
Happiness · +34 · +39 · +50 · +50
2025 result:
Happiness: +50
5.2 Observations
Overall IT Experience improved from +34 in 2022 to +50 in 2024, and remained at +50 in 2025. This suggests that the broader employee sentiment towards IT has improved over the last few years, but did not continue to improve further in 2025.
The biggest experience driver in Overall IT Experience is Support at 39.1%, followed by Policies at 24.7% and Training at 18.4%. Tools account for 12.1% and Collaboration with IT for 5.7%.
This is interesting because Overall IT Experience is broader than the more specific measurement areas. Even so, support remains the single most important factor in how employees describe IT as a whole. Policies and training also have a substantial role, which suggests that employees do not experience IT only through devices, applications, or tickets, but also through the rules, guidance, and capability-building around them.
The structured indicators are mostly positive. Support for hybrid work, quality of applications, quality of IT equipment, and support quality and timeliness all have clearly stronger positive than negative shares. At the same time, the focus topics highlight slow response times, long resolution times, communication gaps, and system complexity and fragmentation.
This creates a useful contrast in the data. Overall sentiment towards IT is positive, but there are still recurring points of friction that stand out when employees describe their experience in their own words. This is one reason why Overall IT Experience is useful. It gives a broad signal on how IT is perceived, while also pointing towards areas that may need a more detailed look elsewhere in the benchmark.
It is important to note that Overall IT Experience is a measurement area in its own right. It is not calculated from the other measurement areas, and it should not be read as a roll-up of Services, Devices, Applications, or Remote Work. It captures how employees feel about IT as a whole through its own survey questions and its own continuous flow of responses.
5.3 Standardized benchmark findings
5.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
Focus topic · Share of comments
Slow response times · 14.0%
Long resolution times · 11.7%
Communication gaps · 11.2%
System complexity and fragmentation · 10.8%
On-site support availability · 10.6%
5.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
Experience driver · Influence on end-user experience
Support · 39.1%
Policies · 24.7%
Training · 18.4%
Tools · 12.1%
Collaboration with IT · 5.7%
5.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
Experience indicator · Driver · Negative % · Neutral % · Positive %
Knowing who to contact about IT issues · Collaboration with IT · 17.2% · 13.6% · 69.2%
Policies and regulations · Policies · 20.2% · 13.1% · 66.6%
Support for hybrid work · Policies · 5.5% · 5.9% · 88.6%
Support quality and timeliness · Support · 12.1% · 7.1% · 80.8%
Quality of applications · Tools · 10.2% · 4.0% · 85.7%
Quality of IT equipment · Tools · 10.9% · 5.1% · 83.9%
Improving IT skills · Training · 9.7% · 11.3% · 78.9%
Indicator observations:
Support for hybrid work is the strongest positive signal in this chapter. This suggests employees often feel that IT is supporting modern ways of working better than before.
Quality of applications and Quality of IT equipment are also strongly positive, which helps explain why the overall score has reached a relatively high level.
Policies and regulations and Knowing who to contact about IT issues are more mixed than the other indicators. This suggests overall sentiment toward IT can still be held back by clarity, coordination, and how employees experience rules.
The contradiction in this measurement area is that the structured indicators are broadly positive, while the focus topics still point to delays, communication gaps, and fragmented systems. This suggests employees may feel generally positive about IT overall, while still noticing specific weaknesses that deserve deeper analysis elsewhere.
§ 12 Benchmarking nine IT touchpoints · overview
The nine IT touchpoints, five years running.
Two movements deserve attention. Enterprise Applications climbed sharply from +6 in 2023 to +39 in 2025. Meanwhile, Laptops turned negative at −6 and Mobile Devices fell to −24 — the only two touchpoints now in negative territory.
Touchpoint · 2021 · 2025 · Δ since 2021
IT Support Services · +78 · +81 · +3
Remote Work · +64 · +71 · +7
Collaboration with IT · +82 · +72 · −10
Service Portal · +15 · +33 · +18
Office Environment · — · +44 · +4 (vs. first measurement, +40 in 2022)
Enterprise Applications · +16 · +39 · +23
Laptops & Computers · +13 · −6 · −19
Mobile Devices · −4 · −24 · −20
n = 1.77M responses (1,663,470 ticket-based) · Happiness reported on NPS-style scale (−100 to +100) · Source: HappySignals 2026 Benchmark dataset
§ 12.1 · Benchmark overview · data behind the chart
Nine IT touchpoints · 5-year Happiness · Scale: −100 to +100

IT touchpoint · 2021 · 2022 · 2023 · 2024 · 2025
IT support services (ticket) · +78 · +79 · +81 · +82 · +81
Collaboration with IT · +82 · +84 · +70 · +65 · +72
Remote work · +64 · +77 · +82 · +71 · +71
Overall IT experience (periodic) · — · +34 · +39 · +50 · +50
Office environment · — · +40 · +40 · +29 · +44
Enterprise applications · +16 · +12 · +6 · +21 · +39
Service portal · +15 · +32 · +27 · +31 · +33
Laptops and computers · +13 · +7 · +17 · +8 · −6
Mobile devices · −4 · +8 · +9 · +7 · −24
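The deltas highlighted in the §12 overview can be recomputed directly from this table. The snippet below uses only the 2021 and 2025 columns (touchpoints first measured in 2022 are omitted):

```python
# (2021, 2025) Happiness per touchpoint, from the five-year table above
scores = {
    "IT support services": (78, 81),
    "Collaboration with IT": (82, 72),
    "Remote work": (64, 71),
    "Enterprise applications": (16, 39),
    "Service portal": (15, 33),
    "Laptops and computers": (13, -6),
    "Mobile devices": (-4, -24),
}

deltas = {name: y2025 - y2021 for name, (y2021, y2025) in scores.items()}
negative_2025 = [name for name, (_, y2025) in scores.items() if y2025 < 0]

print(deltas["Enterprise applications"])  # 23
print(negative_2025)  # ['Laptops and computers', 'Mobile devices']
```

The same two signals fall out immediately: Enterprise Applications shows the largest climb, while devices are the only touchpoints now in negative territory.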
§ 8 Read the full chapter text · IT incidents
8.1 Happiness and perceived lost time
Metric · 2021 · 2022 · 2023 · 2024 · 2025
Happiness · +76 · +77 · +79 · +80 · +78
Perceived lost time / ticket · 3h 3min · 3h 13min · 3h 12min · 3h 15min · 3h 18min
2025 result:
Happiness: +78
Average perceived lost time: 3h 18min/ticket
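The growth in perceived lost time is easier to compare once the "Xh Ymin" labels are converted to minutes. A small parsing sketch (the label format is assumed to follow the report's convention exactly):

```python
import re

def to_minutes(label):
    """Parse a lost-time label such as '3h 18min' into total minutes."""
    m = re.fullmatch(r"(\d+)h\s*(\d+)min", label)
    hours, minutes = int(m.group(1)), int(m.group(2))
    return hours * 60 + minutes

lost_time = {2021: "3h 3min", 2025: "3h 18min"}
start, end = to_minutes(lost_time[2021]), to_minutes(lost_time[2025])
growth = (end - start) / start * 100

print(start, end)        # 183 198
print(f"{growth:.1f}%")  # about an 8.2% increase over five years
```

Fifteen extra minutes per incident looks small in isolation; across hundreds of thousands of incidents a year, it compounds into a substantial productivity cost.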
8.2 Observations
Incident Happiness has stayed broadly stable over the last five years, moving between +76 and +80 and ending at +78 in 2025. Lost time, however, has gradually increased from 3h 3min to 3h 18min per incident. This suggests that employees continue to rate the interaction with IT support quite positively, even while the productivity impact of waiting has slowly grown.
This is an important pattern in incident experience. The data does not point to a dramatic breakdown in service quality. It points instead to a gap between how the support interaction feels and how long resolution takes from the employee's point of view. In other words, employees may still appreciate the service they receive, while also losing more time around the process.
The incident indicator data supports this reading. Service personnel's attitude, Service personnel's skills, and Speed of support all have very strong positive shares. By contrast, Solving the ticket has the highest negative share among the incident indicators. This is a useful contrast. The differentiating part of the incident experience is often not whether the issue was eventually fixed, but how clearly, quickly, and competently the interaction was handled.
The focus topics support the same picture. Long response times, Guidance and documentation clarity, Personalized support, and Delayed acknowledgement all stand out. Together, these findings suggest that incident experience is shaped not only by technical restoration, but by how visible, responsive, and human the support process feels while the employee is waiting.
8.3 Geographical differences
Regional differences in incident experience are substantial. Western Europe has the lowest incident happiness in the benchmark at +74, while Central America has the highest at +90. At the same time, Western Europe is not the region with the highest lost time in 2025. This is one reason why regional benchmarking needs context.
Two regional patterns stand out. First, Central America has improved markedly over time, especially in lost time. Second, Western Europe and Eastern Europe are the clearest cases where lost time has increased over the five-year period. Part of that context comes from the regional support profile mix discussed later in this report. Western Europe has a particularly high share of Doers, which helps explain why delays and inefficiencies may be felt more critically there.
Region · Incident Happiness 2025 · Lost time 2021 · Lost time 2025 · Five-year pattern
Central America · +90 · 341 min/ticket · 186 min/ticket · Marked decrease in lost time
North America · Not highlighted · 214 min/ticket · 183 min/ticket · Decrease in lost time
Asia · Not highlighted · 235 min/ticket · 218 min/ticket · Decrease in lost time
Western Europe · +74 · 160 min/ticket · 178 min/ticket · Increase in lost time
Eastern Europe · Not highlighted · 187 min/ticket · 212 min/ticket · Largest increases with Western Europe
This is a useful reminder that the same service model will not produce the same experience everywhere. Employees in different regions bring different expectations, working styles, and support preferences. The benchmark is most useful when it helps organizations compare themselves in context, not when it encourages simplistic cross-region league tables.
8.4 Internal vs outsourced IT
One of the more surprising findings in the incidents data is the difference between internal and outsourced first-line support. Outsourced desks resolve a higher share of incidents on first contact and slightly outperform internal teams on first-contact happiness. And yet, employees supported by outsourced teams report much higher lost time overall.
This is a strong example of why happiness and lost time need to be read together. If we looked only at happiness, the outsourced model would appear at least as good as the internal one. Lost time shows a different side of the experience. The likely issue is not the first interaction itself, but what happens after it when the incident requires follow-up, coordination, escalation, or access to internal systems.
The source material also suggests that company size may be part of this pattern. Larger organizations are more likely to use outsourced first-line support and to operate with more complex provider landscapes. That can increase handoffs, slow coordination, and make waiting more visible to employees. This should be treated as context rather than a final explanation, but it is an important one.
8.5 Reassignments
Reassignments are one of the clearest negative patterns in incident experience. Every additional handoff lowers happiness and increases lost time. This is one of the most consistent findings across the ticket-based data.
The reason this matters so much is that reassignments are highly visible to employees. They do not just add waiting time. They often create a feeling that nobody owns the case clearly. The data and the partner comments both point in the same direction here: reducing unnecessary handoffs is one of the most practical ways to improve incident experience.
There is also an important caveat. Not every reassignment is a mistake. In some cases, one handoff is the right move if it gets the ticket quickly to the team best able to solve it. The goal is not zero routing. It is fewer handoffs that add no value.
8.6 Support channels
Support channels matter a great deal in incident experience. Walk-in support remains the strongest channel in both happiness and lost time, while portal-based support performs worst on those two measures. Phone sits between them, and chat can perform well in the right context.
This is another place where the data is more useful than operational metrics alone. A channel may look efficient from the IT side while still creating more effort or uncertainty for employees. The channel data helps show where speed, clarity, and personalization are helping and where they are not.
8.7 Experience indicators and focus topics
8.7.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
Focus topic · Share of comments
Long response times · 22.0%
Guidance and documentation clarity · 17.3%
Personalized support · 12.3%
Delayed acknowledgement · 9.7%
Issue resolution · 8.0%
8.7.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
Experience driver · Influence on end-user experience
Communication · 29.5%
Competence · 27.2%
Speed · 22.4%
Process · 20.9%
8.7.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver. Some source labels are reused across incident and request surveys, and a few differ only slightly in wording. In this report, the report label is lightly normalized for readability while the exact source label remains visible and percentages stay separated by ticket type.
Incident survey indicators
Report label · Source label · Driver · Negative % · Neutral % · Positive %
Clarity of how to get help · Clarity of how to get help · Communication · 8.2% · 5.9% · 85.9%
Clarity of instructions · Clarity of instructions · Communication · 7.0% · 3.8% · 89.1%
Clarity of where to start · Clarity of where to start · Communication · 7.0% · 8.8% · 84.2%
Keeping users informed about the process · Informing about process · Communication · 5.2% · 3.8% · 91.0%
Keeping users informed about the process · Informing about the process · Communication · 2.8% · 1.8% · 95.4%
Service personnel's attitude · Service personnel's attitude · Competence · 1.0% · 0.4% · 98.6%
Service personnel's skills · Service personnel's skills · Competence · 1.7% · 0.6% · 97.7%
Explaining the case · Explaining the case · Process · 4.9% · 2.1% · 92.9%
Fulfilling the request · Fulfilling the request · Process · 12.2% · 0.6% · 87.3%
Resolving the ticket · Solving the ticket · Process · 21.6% · 0.4% · 77.9%
Speed of support · Speed of support · Speed · 3.7% · 2.5% · 93.8%
Request indicators use a similar taxonomy but are reported separately in Chapter 9.
Indicator observations for incidents:
- Service personnel's attitude and Service personnel's skills are among the strongest positive signals in the benchmark. This suggests employees often value the human quality of the support interaction very highly.
- Speed of support is also strongly positive, which helps explain why incident happiness remains high overall.
- Resolving the ticket is much more mixed than the other indicators and has the highest negative share in this chapter. This suggests that final resolution quality remains one of the more fragile parts of the incident experience.
- The contradiction in incident experience is that many interaction indicators are strongly positive, while lost time continues to creep upward and the focus topics still highlight waiting, acknowledgement, and clarity. This suggests employees often value the support they receive, but still experience too much delay and process friction around the incident.
Part 03 · § 7–10
Ticket-based IT support
Incidents, requests, channels
§ 7 Ticket-based IT support
The largest body of data in the benchmark.
This section brings together the largest and most detailed body of data in the benchmark: ticket-based IT support.
It combines the core ticket-type findings on IT incidents and IT requests with the wider interpretive lenses that help explain why experience differs across organizations and employee groups. These include support channels, IT support profiles, industry context, company size, and sourcing model.
Taken together, these chapters show that ticket-based IT experience is not only about whether tickets are resolved. It is shaped by how quickly and clearly support moves, how many handoffs happen, which channels employees use, what kind of users they are, and what organizational context the service desk operates in.
§ 8 IT incidents
Happiness broadly stable. Lost time drifting up.
§ 8.1 · Happiness and perceived lost time
IT incidents · 5-year trend
Happiness (left axis, −100 to +100) · Lost time per ticket (right axis, minutes)
Happiness has held in a +76 to +80 band while lost time has drifted upward by 15 minutes per ticket over five years. The two axes use different scales to make small movements visible — the real story is a stable feeling with a widening productivity cost.
2025: +78 Happiness · 3h 18min lost time
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +76 | +77 | +79 | +80 | +78 |
| Perceived lost time / ticket | 3h 3min | 3h 13min | 3h 12min | 3h 15min | 3h 18min |
§ 8.2 · Observations. Incident Happiness has stayed broadly stable over the last five years, moving between +76 and +80 and ending at +78 in 2025. Lost time, however, has gradually increased from 3h 3min to 3h 18min per incident. Employees continue to rate the support interaction positively, even while the productivity impact of waiting has slowly grown.
The incident indicator data supports this reading. Service personnel's attitude, Service personnel's skills, and Speed of support all have very strong positive shares. By contrast, Solving the ticket has the highest negative share among the incident indicators. The differentiating part of the incident experience is often not whether the issue was eventually fixed, but how clearly, quickly, and competently the interaction was handled.
The focus topics support the same picture. Long response times, Guidance and documentation clarity, Personalized support, and Delayed acknowledgement all stand out. Incident experience is shaped not only by technical restoration, but by how visible, responsive, and human the support process feels while the employee is waiting.
§ 8.5 Reassignments · the cost of handoff
When a ticket bounces, happiness falls and lost time rises. Both compound.
A ticket handled by five teams costs the employee 8.5 more working hours than one resolved on first contact. Each handoff stacks on top of the last — happiness falls, lost time grows.
| Reassignments | Total lost time | Added by handoff | Happiness | vs first-time fix |
|---|---|---|---|---|
| 0 (first-time fix) | 2h 07min (127 min) | baseline | +83 | baseline |
| 1 | 3h 36min (216 min) | +89 min | +78 | −5 |
| 2 | 5h 47min (347 min) | +131 min | +69 | −14 |
| 3 | 7h 28min (448 min) | +101 min | +59 | −24 |
| 4 | 9h 40min (580 min) | +132 min | +49 | −34 |
| 5 | 10h 35min (635 min) | +55 min | +42 | −41 |

Every extra handoff means more time lost and less happiness.
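The compounding above can be reproduced from the raw minutes. A minimal sketch in Python, using the benchmark figures from the reassignment data (an illustration only, not HappySignals tooling):

```python
# Benchmark figures from the reassignment data above: total perceived
# lost time (minutes) and Happiness score at each reassignment count.
LOST_MIN = {0: 127, 1: 216, 2: 347, 3: 448, 4: 580, 5: 635}
HAPPINESS = {0: 83, 1: 78, 2: 69, 3: 59, 4: 49, 5: 42}

def handoff_deltas(lost=LOST_MIN, happy=HAPPINESS):
    """Minutes added by each successive handoff, and Happiness vs baseline."""
    return [
        {
            "handoffs": n,
            "added_min": lost[n] - lost[n - 1],
            "happiness_delta": happy[n] - happy[0],
        }
        for n in range(1, max(lost) + 1)
    ]

# A five-handoff ticket costs roughly 8.5 working hours more than a
# first-time fix: (635 - 127) / 60 ≈ 8.47 hours.
extra_hours = (LOST_MIN[5] - LOST_MIN[0]) / 60
```

Note that the per-handoff deltas are not monotonic (the fifth handoff adds only 55 minutes), but the cumulative cost and the Happiness decline both compound steadily.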
§ 6 Read the full chapter text · Experience indicators
A Happiness score tells you how employees feel. Experience Indicators tell you which factors drove that feeling.
When employees complete a HappySignals survey, they are invited to select from a structured list of factors that help explain their score. These are called Experience Indicators, or XIs. Positive responses have positively worded options such as Speed of support or Service personnel's attitude. Negative responses have options that reflect what fell short, such as Lack of communication or Issue not resolved.
This is one of the strengths of the benchmark data. The survey does not stop at a single score. It adds a structured explanation layer that makes the results more actionable. In 2025, the data included 1,522,364 Experience Indicator selections from IT incident surveys alone, and a further 1,129,664 from IT request surveys. At that scale, the patterns are not anecdotal. They show what employees most consistently notice in their interactions with IT.
6.1 What actually drives IT experience
One of the clearest findings from the 2025 IT incident data is that a successful support experience is not defined mainly by the fact that a ticket was solved. It is shaped much more by how the support was delivered.
Only 6.6% of all incident Experience Indicator selections relate to Solving the ticket. By contrast, Speed of support, Service personnel's attitude, and Service personnel's skills together account for 68.7% of the selections shown in the benchmark view below.
| Experience Indicator | Category | Total selections | Share of all incident Experience Indicator selections |
|---|---|---|---|
| Speed of support | Speed | 444,448 | 29.2% |
| Service personnel's attitude | Competence | 311,928 | 20.5% |
| Service personnel's skills | Competence | 289,405 | 19.0% |
| Explaining the case | Process | 239,515 | 15.7% |
| Solving the ticket | Process | 99,945 | 6.6% |
| Clarity of instructions | Communication | 61,559 | 4.0% |
| Clarity of how to get help | Communication | 46,545 | 3.1% |
The implication for IT teams is direct. Employees do notice whether their issue was resolved, but resolution alone does not define the experience. Speed, competence, and clear communication shape how support is remembered. These are often the parts of service delivery that are less visible in traditional operational metrics, but highly visible in employee experience data.
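The share column can be reproduced directly from the selection counts. The denominator is the full 1,522,364 incident selections quoted above, of which the seven indicators shown cover 1,493,345. A small sketch (illustrative only):

```python
# Incident Experience Indicator selections, 2025 (from the table above).
SELECTIONS = {
    "Speed of support": 444_448,
    "Service personnel's attitude": 311_928,
    "Service personnel's skills": 289_405,
    "Explaining the case": 239_515,
    "Solving the ticket": 99_945,
    "Clarity of instructions": 61_559,
    "Clarity of how to get help": 46_545,
}
TOTAL_INCIDENT_SELECTIONS = 1_522_364  # all incident XI selections, 2025

def share(indicator: str) -> float:
    """Share of all incident XI selections, as a percentage (one decimal)."""
    return round(100 * SELECTIONS[indicator] / TOTAL_INCIDENT_SELECTIONS, 1)

# Speed, attitude, and skills together account for 68.7% of selections.
top3 = sum(share(k) for k in ("Speed of support",
                              "Service personnel's attitude",
                              "Service personnel's skills"))
```

The same arithmetic confirms that Solving the ticket accounts for only 6.6% of selections, despite being the nominal purpose of the ticket.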
6.2 Satisfied employees select more factors
Another important finding concerns the number of Experience Indicators that accompany different types of scores. When employees rate an IT interaction positively, they select an average of 3.04 factors per response. Neutral respondents select 1.23, and negative respondents select 1.73.
This pattern matters beyond its surface. Satisfied employees are often providing more detailed feedback about what went right across multiple dimensions of the experience. Dissatisfied employees, by contrast, more often point to one specific thing that failed. That is useful for diagnosing a problem, but it provides a narrower view of the experience.
| Response type | Avg. factors selected |
|---|---|
| Positive | 3.04 |
| Negative | 1.73 |
| Neutral | 1.23 |
From a benchmarking perspective, this is useful because high-performing IT organizations are not only generating better scores. They are also generating richer feedback loops about what good experience looks like in practice.
§ 6 Experience indicators
The measurement behind every score
What employees actually name when they explain a score.
When employees rate a ticket, they pick from a structured list of factors that shaped the experience — experience indicators. The shape of the distribution below is the finding: Speed of support, attitude, and skills dominate. Solving the ticket itself sits well down the list. Resolution is the floor, not the experience.
| Experience indicator | % of positive experience indicators | Selections |
|---|---|---|
| Speed of support | 37% | 444,448 |
| Service personnel's attitude | 27% | 311,928 |
| Service personnel's skills | 25% | 289,405 |
| Explaining the case | 20% | 239,515 |
| Solving the ticket | 7% | 99,945 |
| Clarity of instructions | 5% | 61,559 |
| Clarity of how to get help | 4% | 46,545 |

Shares sum to more than 100% because employees can select several indicators per response. Top drivers · 89.4% of selections · Tail drivers · 13.7% of selections. Source: 1,493,345 indicator selections, post-incident surveys, 2025.
Average indicators selected per response
Positive experiences get three reasons. Negative ones, less than two.
Happy employees often credit attitude, skills, speed, and clarity all at once. Frustrated employees fixate on a single failure point. The asymmetry shapes what you can improve: negative feedback is sharper — but positive feedback reveals what a good support interaction actually contains.
Positive: 3.07 indicators per response
Neutral: 1.23 indicators per response
Negative: 1.73 indicators per response
Top positive driver: Service personnel's attitude (96.3% positive · <1% negative)
Real-time signals matter: Speed of support (93.8% positive in incidents)
Fragile indicator: Solving the ticket (21.6% negative · highest in this chapter)
What this tells us
"User dissatisfaction is always dissatisfaction with something (not many things)."
ISG Perspective
It is intriguing to learn that when users give a positive score for their experience they provide more reasons for doing so compared with users who give a neutral or negative score. One could speculate that the user feels relief, satisfaction, even gratitude, about the successful resolution of their incident, and expresses these feelings by being extra-generous with their feedback.
We might also speculate that if an experience is perceived negatively, it is more likely that a user is preoccupied with a specific aspect of it. If you look into your own life, when you are frustrated or annoyed, it is about something in particular. To adapt a premise commonly discussed in philosophy ("consciousness is always consciousness of something") we could suggest that "user dissatisfaction is always dissatisfaction with something (not many things)".
We also wonder about the influence of culture here. There is a general view that in Asian cultures it is less acceptable to give negative feedback compared to, say, US or European cultures, and we do find evidence of this effect in our research at ISG. We might speculate therefore that the global average figure of number of drivers given for neutral experiences (1.23) and negative experiences (1.73) is being dragged down by the reticence to give negative feedback in certain cultures of Asia. On this hypothesis, if we look at the data for a region that does not have such cultural sensitivities about negative feedback, we might see a more even distribution for the number of drivers selected for all types of experiences, whether positive, negative or neutral.
ISG · Research perspective on experience indicators
§ 9 IT requests
Steady improvement. A quiet success story.
§ 9.1 · Happiness and perceived lost time
IT requests · 5-year trend
Happiness (left axis, −100 to +100) · Lost time per ticket (right axis, minutes)
Requests tell a different story from incidents. Happiness has climbed from +83 to +86 over five years — the benchmark's quiet success story — while lost time has held essentially flat around 2h 47–52min. The two axes use different scales to make small movements visible.
2025: +86 Happiness · 2h 47min lost time
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +83 | +84 | +86 | +87 | +86 |
| Perceived lost time / ticket | 2h 50min | 2h 48min | 2h 52min | 2h 46min | 2h 47min |
§ 9.2 · Observations. IT requests show a different pattern. Over the five-year window, Happiness has climbed from +83 to a current level of +86, with a high of +87 in 2024. Lost time has stayed roughly flat across the same period, fluctuating narrowly around 2h 47–52min. Compared to incidents, requests are a steadily improving area.
The request indicators underline where this improvement is coming from. Service personnel's attitude and Service personnel's skills have very strong positive shares, together with Service portal's capabilities, Service portal's ease-of-use, and Speed of support. Solving the ticket appears as the largest single driver of negative feedback for requests as well. The fundamentals of request handling — clear delivery, reasonable speed, usable portals, competent people — are working well, while fulfillment quality remains the main pain point.
The focus topics reinforce this interpretation. Long response times is the top focus topic for requests, alongside Guidance and documentation clarity and Scope of services and support catalog. Employees are increasingly asking for sharper visibility, clearer process articulation, and a service catalog that matches what they actually need — even as the underlying interaction quality continues to improve.
§ 9 Read the full chapter text · IT requests
9.1 Happiness and perceived lost time
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +80 | +82 | +84 | +85 | +84 |
| Perceived lost time / request | 2h 52min | 2h 59min | 3h 16min | 3h 16min | 3h 31min |
2025 result: Happiness +84 · Average perceived lost time 3h 31min per request
9.2 Observations
IT Requests continue to score higher on happiness than IT Incidents. In 2025, Request Happiness was +84 compared to +78 for incidents. At the same time, lost time per request was 3h 31min, which is slightly higher than for incidents. This is a useful reminder that predictable, planned interactions are not necessarily faster or less costly from the employee's point of view.
The longer-term pattern is also interesting. Happiness has improved from +80 in 2021 to +84 in 2025, but lost time has risen from 2h 52min to 3h 31min. This suggests that employees are generally satisfied with how requests are handled, even while the waiting time around those requests has increased.
The request indicator data helps explain why. Service personnel's attitude, Service personnel's skills, Clarity of instructions, and Informing about the process all have very strong positive shares. This points to a request experience that is often handled well from a communication and interaction point of view. What employees value is not only fulfilment itself, but being kept informed and given clarity while they wait.
There is also a clear contradiction in the data. The overall experience remains very positive, but the focus topics highlight Communication clarity, Response delays, and Speed of service as the biggest recurring themes. This suggests that requests can still feel slow or unclear even when employees are broadly satisfied with the support interaction itself.
9.3 Geographical differences
The regional benchmark for IT Requests shows a different pattern from IT Incidents. Happiness has risen in 6 of the 8 regions over the last five years and remained stable in the remaining 2. Central America, Middle East, and South America show the strongest gains.
On lost time, the European pattern is different from incidents. Western Europe was relatively stable between 2021 and 2024 before rising sharply in 2025. Eastern Europe, by contrast, shows a clear downward trend across the five-year period. This matters because it suggests the increase in European lost time is much more pronounced in incidents than in requests.
| Request benchmark metric | Region | 2021 | 2024 | 2025 | Pattern |
|---|---|---|---|---|---|
| Happiness | Central America | +81 | Not highlighted | +91 | Strong gain |
| Happiness | Middle East | +78 | Not highlighted | +87 | Strong gain |
| Happiness | South America | +82 | Not highlighted | +90 | Strong gain |
| Lost time | Western Europe | 126 min/request | 129 min/request | 171 min/request | Stable through 2024, sharp rise in 2025 |
| Lost time | Eastern Europe | 295 min/request | Not highlighted | 224 min/request | Clear downward trend |
9.4 Internal vs outsourced IT
The internal versus outsourced comparison becomes even more striking for IT Requests than for incidents. Outsourced desks produce much more lost time per request than internal desks, even where happiness appears broadly comparable.
This matters because requests often involve fulfilment steps, approvals, dependencies, and waiting rather than a direct troubleshooting interaction. That makes them especially sensitive to handoffs and process gaps. If a sourcing model adds extra waiting or coordination layers, the effect is likely to become visible in request lost time.
9.5 Company size and industry context
Company size is likely part of the outsourced-versus-internal pattern here as well. Larger organizations are more likely to use outsourced first-line support and more complex service-provider models. That does not by itself explain the result, but it is likely one reason why request fulfilment can feel slower in outsourced environments.
There is not yet a dedicated request-only company size benchmark in the current benchmark set, so this should be read as context rather than as a standalone comparison.
9.6 Reassignments
Requests are planned transactions, but reassignments still matter. Every unnecessary handoff adds waiting time, reduces clarity, and makes the request process feel less predictable to employees.
This is especially important for requests because employees often begin with a clearer expectation of what they need. When the process becomes fragmented, the frustration is not usually about the existence of the request itself. It is about how long it takes, how clearly it is communicated, and whether progress feels visible.
9.7 Experience indicators and focus topics
9.7.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
| Focus topic | Share of comments |
|---|---|
| Communication clarity | 22.5% |
| Response delays | 22.4% |
| Speed of service | 15.8% |
| Information discoverability | 8.4% |
| Staff availability | 5.3% |
9.7.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
| Experience driver | Influence on end-user experience |
|---|---|
| Communication | 38.4% |
| Competence | 38.0% |
| Process | 19.1% |
| Speed | 4.4% |
9.7.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver. Some source labels are reused across incident and request surveys, and a few differ only slightly in wording. In this report, the report label is lightly normalized for readability while the exact source label remains visible and percentages stay separated by ticket type.
Request survey indicators
| Report label | Source label | Driver | Negative % | Neutral % | Positive % |
|---|---|---|---|---|---|
| Clarity of how to get help | Clarity of how to get help | Communication | 9.1% | 6.8% | 84.1% |
| Clarity of instructions | Clarity of instructions | Communication | 1.7% | 0.8% | 97.4% |
| Clarity of where to start | Clarity of where to start | Communication | 9.2% | 7.7% | 84.7% |
| Keeping users informed about the process | Informing about the process | Communication | 2.8% | 0.8% | 96.3% |
| Service personnel's attitude | Service personnel's attitude | Competence | 0.8% | 0.3% | 98.8% |
| Service personnel's skills | Service personnel's skills | Competence | 1.2% | 0.5% | 98.4% |
| Explaining the case | Explaining the case | Process | 2.8% | 1.1% | 96.1% |
| Fulfilling the request | Fulfilling the request | Process | 6.2% | 0.4% | 93.4% |
| Keeping users informed about the process | Informing about process | Process | 14.9% | 5.6% | 79.5% |
| Resolving the ticket | Solving the ticket | Process | 19.7% | 0.1% | 80.2% |
| Speed of support | Speed of support | Speed | 2.5% | 2.0% | 95.5% |
Indicator observations:
- Service personnel's attitude and Service personnel's skills are exceptionally strong positive signals. This suggests the human side of request fulfilment is generally experienced very well.
- Clarity of instructions and Keeping users informed about the process are also strongly positive, which helps explain why request happiness stays high overall.
- Keeping users informed about the process and Resolving the ticket are more mixed than the other request indicators. This suggests that once the request moves deeper into fulfilment, the experience becomes less consistently positive.
- The contradiction in request experience is that many interaction and communication indicators are very strong, while focus topics still point clearly to delays, unclear communication, and slow service. This suggests employees often value how the request is handled, but still feel the process takes too long.
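The "fragile indicator" reading can be checked mechanically by ranking the request indicators on negative share. A small sketch using the request table above (illustrative, not part of the HappySignals platform):

```python
# Request-survey indicator sentiment from the table above:
# source label -> (negative %, neutral %, positive %).
REQUEST_INDICATORS = {
    "Clarity of how to get help": (9.1, 6.8, 84.1),
    "Clarity of instructions": (1.7, 0.8, 97.4),
    "Clarity of where to start": (9.2, 7.7, 84.7),
    "Informing about the process": (2.8, 0.8, 96.3),
    "Service personnel's attitude": (0.8, 0.3, 98.8),
    "Service personnel's skills": (1.2, 0.5, 98.4),
    "Explaining the case": (2.8, 1.1, 96.1),
    "Fulfilling the request": (6.2, 0.4, 93.4),
    "Informing about process": (14.9, 5.6, 79.5),
    "Solving the ticket": (19.7, 0.1, 80.2),
    "Speed of support": (2.5, 2.0, 95.5),
}

def most_fragile(indicators=REQUEST_INDICATORS) -> str:
    """Indicator with the highest negative share."""
    return max(indicators.items(), key=lambda kv: kv[1][0])[0]
```

Ranking this way surfaces Solving the ticket (19.7% negative) first and Informing about process (14.9%) second — the two fulfilment-phase indicators called out in the observations.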
§ 10 Support channels
Walk-in is the gold standard. The portal carries the load.
The most-used channel is the least-loved. Portal usage for requests has grown from 69% to 82% since 2021 — while it produces the lowest Happiness and the highest lost time of any channel.
| Channel | Happiness | Avg. lost time per ticket |
|---|---|---|
| Walk-in | +94 | 84 min |
| Phone | +83 | 134 min |
| Chat | +79 | 172 min |
| Email | +78 | 217 min |
| Portal | +77 | 266 min |
§ 10.1 · Support channels · numbers behind the chart
Request channels · 2025 · Ranked by Happiness

| Channel | Happiness | Lost time / ticket | Share of requests · 2025 | Share · 2021 |
|---|---|---|---|---|
| Walk-in | +94 | 84 min | 1% | 1% |
| Phone | +83 | 134 min | 4% | 9% |
| Chat | +79 | 172 min | 6% | 6% |
| Email | +79 | 166 min | 7% | 15% |
| Portal | +77 | 182 min | 82% | 69% |
§ 10.2 · Observations. All channels produce positive Happiness. But the highest-volume channel — the portal — delivers the lowest Happiness and the highest average lost time. Walk-in is used in only 1% of requests yet scores +94 with less than half the portal's lost time. The portal is the operational choice; it is rarely the preferred one.
§ 10 Read the full chapter text · Support channels
10.1 Numbers
2025 incident experience by support channel:
| Channel | Happiness | Avg. perceived lost time | % of incident submissions |
|---|---|---|---|
| Walk-in | +94 | 1h 24min | 4.3% |
| Phone | +83 | 2h 14min | 31.7% |
| Chat | +79 | 2h 52min | 9.5% |
| Email | +78 | 3h 37min | 10.4% |
| Portal | +77 | 4h 26min | 44.2% |
Portal trend in IT requests:
| Year | Portal Happiness (requests) | Lost time via portal (requests) | Portal share of request submissions |
|---|---|---|---|
| 2021 | +81 | 3h 1min | 69% |
| 2022 | +83 | 2h 47min | 72% |
| 2023 | +84 | 3h 23min | 76% |
| 2024 | +85 | 3h 33min | 79% |
| 2025 | +85 | 3h 46min | 82% |
10.2 Channel purpose and cost-scalability
| Channel | Typical purpose | Relative cost | Relative scalability |
|---|---|---|---|
| Walk-in | Hands-on help for hardware issues, device swaps, and situations where in-person reassurance matters most | Highest | Lowest |
| Phone | Real-time support for urgent, complex, or unclear issues that benefit from direct conversation | High | Low to medium |
| Chat | Fast guided help, triage, and interactive troubleshooting where users want quick support without a call | Medium | Medium |
| Email | Asynchronous support for lower-urgency issues where a written trail is useful | Medium | Medium |
| Portal | Structured self-service, service requests, knowledge access, and standard repeatable tasks | Lowest | Highest |
10.3 Observations
The channel through which employees contact IT has a clear effect on both Happiness and lost time. The benchmark pattern also lines up closely with the purpose and cost-scalability logic in the table above. Walk-in support, which is the highest-touch and least scalable channel, delivers the strongest incident experience by a wide margin at +94 Happiness and only 1h 24min of lost time. At the other end, portal support, which is the lowest-cost and most scalable channel, produces the weakest incident result at +77 and 4h 26min. This is a useful reminder that the most efficient channel for IT is not automatically the easiest one for employees.
Phone performs much better than the more digital and lower-touch channels, with +83 Happiness and 2h 14min of lost time. Chat sits in the middle, while email performs relatively poorly. That pattern suggests that when employees can interact with IT in real time and feel that someone clearly owns the issue, the experience improves. As support becomes more asynchronous or more dependent on self-navigation, the employee effort tends to become more visible.
The most important contradiction is the portal. It is the most heavily used support channel for incidents at 44.2% of submissions, and portal usage for requests has grown steadily from 69% in 2021 to 82% in 2025. But higher usage has not translated into equally strong experience. In request handling, portal Happiness has improved to +85, yet lost time has still risen from 3h 1min to 3h 46min. This suggests many organizations have been successful in shifting demand into the portal without removing enough waiting time or friction from the fulfilment process behind it.
This is also an important reminder that the portal can look different depending on how it is measured. As a ticket submission channel for requests, the portal can score relatively well on the interaction itself. As a broader ongoing touchpoint, portal experience is much weaker, because employees also feel the effort of searching, navigating, understanding content, and completing tasks over time. Both views matter, and together they show that adoption alone is not the same as a good self-service experience.
Walk-in support should not be misread as the answer for everything. It delivers the best experience, but it represents only 4.3% of incident submissions and is the least scalable model in the comparison. The practical lesson is not to move all demand to walk-in. It is to design channel strategy more intentionally: use high-touch channels where immediacy, reassurance, or physical handling matter most, and use lower-cost channels where the task is structured enough to work well without adding friction.
For IT leaders, support-channel strategy is one of the clearest areas where experience data adds value. Operational metrics may show cost, containment, or adoption, but they do not show whether the chosen channel is actually helping employees get back to work. Happiness and lost-time data, read together with channel purpose, scalability, and the support profile mix in this report, provide a stronger basis for deciding where to invest: improving the portal, expanding walk-in capacity, strengthening phone and chat support, or introducing more automation for simple request types.
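As a rough cross-check, weighting each incident channel's lost time by its share of submissions lands close to the overall incident average of 3h 18min reported in § 8. A sketch using the incident channel figures above (illustrative only):

```python
# 2025 incident channel figures from the table above:
# channel -> (Happiness, lost minutes per ticket, share of submissions %).
CHANNELS = {
    "Walk-in": (94, 84, 4.3),
    "Phone":   (83, 134, 31.7),
    "Chat":    (79, 172, 9.5),
    "Email":   (78, 217, 10.4),
    "Portal":  (77, 266, 44.2),
}

def mix_weighted_lost_minutes(channels=CHANNELS) -> float:
    """Submission-share-weighted average lost time per incident, in minutes."""
    total_share = sum(share for _, _, share in channels.values())
    return sum(mins * share for _, mins, share in channels.values()) / total_share

# ≈ 202 minutes (about 3h 22min), within a few minutes of the reported
# 3h 18min overall incident average.
avg_lost = mix_weighted_lost_minutes()
```

The small gap is expected from rounding and from responses outside these five channels; the point is that the portal's 44.2% share is what drags the overall average up.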
Part 04 · § 12–19
Nine IT touchpoints
Beyond the ticket · devices, apps, portal, workplace
§ 13–19 Nine IT touchpoints · detail
What employees feel about every corner of IT.
7.3 · Mobile Devices
−24
280 min lost / month · down from +7 in 2024
The weakest touchpoint in the benchmark. Drivers: device model (41.6%), mobile data policies (20.3%), connectivity (18.1%). Battery life is the single strongest negative signal.
7.2 · Laptops & Computers
−6
308 min / month · down from +8 in 2024
Turned negative for the first time in five years. Device model (37.4%) is the biggest driver; selection of models and battery life lead the negative signals. Standardization is visibly mis-fitting some employee groups.
7.1 · Enterprise Apps
+39
274 min / month · up from +21 in 2024
The biggest positive swing among touchpoints. Usability (37.4%) dominates; impact on productivity remains the clearest negative signal.
7.4 · Remote Work
+71
253 min / month
Stable. Internet and VPN access remain the most fragile indicators within an otherwise positive picture.
7.7 · Collaboration with IT
+72
182 min / month · up from +65 in 2024
Recovering. Competence (43.4%) and communication (30.7%) drive the positive signal; service delays and internal communication remain the recurring friction.
7.5 · Office Environment
+44
187 min / month · recovered from +29
Meeting-room tech reliability and WiFi reliability remain the most persistent sources of friction despite a strong overall recovery.
7.6 · Service Portal
+33
325 min / month · the most-used, least-loved channel
82% of requests now go through the portal, up from 69% in 2021. Speed of portal is the single strongest negative indicator in the benchmark; usability (42.6%) dominates feedback.
7.8 · IT Support Services
+81
52 min lost / ticket · stable since 2021
The strongest touchpoint in the benchmark. Ticket-based support has held between +78 and +82 for five years. Speed of support (37%) and service personnel attitude (27%) are the dominant positive drivers.
7.9 · Overall IT
+50
periodic survey · stable from +50 in 2024
The broadest measure of IT sentiment, up from +34 in 2022. Support (39.1%) is the single biggest driver, followed by policies (24.7%) and training (18.4%) — confirming that the human side of IT shapes how the whole function is perceived.
§ 12 Read the full chapter text · Benchmarking nine IT touchpoints
Employees interact with IT in many ways beyond raising a support ticket. They use applications, rely on their laptop, work from home over a VPN, collaborate with IT teams directly, and depend on the office environment to function.
In an ideal world, every minute at work would be spent productively. In reality, that is rarely the case. Modern work includes interruptions, delays, switching between tools, and time spent working around friction rather than moving work forward. This broader pattern is visible well beyond IT. Microsoft's 2023 Work Trend Index found that 68% of people say they do not have enough uninterrupted focus time during the workday.
HappySignals measures experience across nine distinct IT touchpoints through periodic surveys. Alongside Happiness, it measures perceived lost time: the amount of productive time employees feel they lose because IT is not supporting work as smoothly as it could. For ticket-based IT support, perceived lost time is measured per ticket. For all other touchpoints, it is measured per month, reflecting the ongoing nature of those experiences rather than a single interaction.
Perceived lost time is not intended to be a precise accounting measure of total productivity. It is an employee-centered signal that helps show where work feels slowed down, interrupted, or made harder than it should be. This gives IT teams a practical way to identify where productivity is being lost and where reducing friction could improve the daily work experience.
That is also what makes the benchmark data rich. Happiness and perceived lost time provide two important headline views, but they are only part of the picture. Each touchpoint can also be examined through experience drivers, detailed experience indicators, recurring focus topics from open text comments, and multi-year benchmark trends. Together, these data points make it possible to look beyond whether a score is high or low and understand what is shaping the experience underneath.
12.1 Benchmark overview
| IT touchpoint | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| IT support services (ticket) | +78 | +79 | +81 | +82 | +81 |
| Collaboration with IT | +82 | +84 | +70 | +65 | +72 |
| Remote work | +64 | +77 | +82 | +71 | +71 |
| Overall IT experience (periodic) | – | +34 | +39 | +50 | +50 |
| Office environment | – | +40 | +40 | +29 | +44 |
| Enterprise applications | +16 | +12 | +6 | +21 | +39 |
| Service portal | +15 | +32 | +27 | +31 | +33 |
| Laptops and computers | +13 | +7 | +17 | +8 | −6 |
| Mobile devices | −4 | +8 | +9 | +7 | −24 |
Two broad patterns stand out in the 2025 benchmark. First, the strongest scores still come from touchpoints where the human side of IT is most visible. Ticket-based IT support remains the highest-scoring area at +81, while Collaboration with IT reaches +72 and Remote work stays at +71. These results suggest that employees generally value direct support, human interaction, and working models that help them get on with their day.
Second, the weakest scores come from the everyday digital tools and devices that employees depend on constantly. Mobile devices fell sharply to -24, making it the only clearly negative touchpoint in the benchmark. Laptops and computers also turned negative at -6. This matters because device and mobile experience are not occasional interactions. They shape work continuously, which means even moderate friction can become highly visible over time.
There are also some important positive shifts in the 2025 data. Enterprise applications improved strongly to +39, up from +21 in 2024 and only +6 in 2023. Office environment also recovered from +29 to +44, and Collaboration with IT improved from +65 to +72. Taken together, these changes suggest that several organizations have made meaningful progress in the quality of shared tools, workspaces, and day-to-day interaction with IT.
At the same time, the benchmark shows that adoption and satisfaction do not always move together. Service portal usage is high and still increasing in the wider benchmark, yet happiness is only +33, with perceived lost time remaining high. The broader Overall IT experience score is a healthier +50, but it did not improve further in 2025. This suggests that employees may feel generally positive about IT overall while still encountering friction in important moments such as self-service, response speed, or everyday device use.
The nine touchpoints together show why employee experience data matters. IT performance is not experienced through one channel alone. Employees judge IT through the full environment around them: support interactions, policies, applications, hardware, access, collaboration, and the places where work happens. Some of those touchpoints are clearly improving, while others now show visible strain.
The chapters that follow explore where those gains and gaps come from. They move from this overview into the more specific patterns behind each touchpoint: where Happiness is improving or declining, where perceived lost time is accumulating, which experience indicators matter most, and what employees most often raise in their own words. Together, they show that improving IT experience is not only about resolving issues faster. It is about reducing friction across the full working day, especially in the touchpoints that employees rely on most often and notice most immediately when they fall short.
§ 13–19 Read the full chapter text for each touchpoint
§ 13 Enterprise applications
13.1 Numbers
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +16 | +12 | +6 | +21 | +39 |
| Perceived lost time / month | 4h 58min | 4h 31min | 5h 24min | 6h 10min | 4h 34min |
2025 result:
Happiness: +39
Average perceived lost time: 4h 34min/month
13.2 Observations
Enterprise Applications improved clearly in 2025. Happiness rose to +39, up from +21 in 2024 and +6 in 2023. Lost time also dropped from 6h 10min per month in 2024 to 4h 34min in 2025.
The most important finding is not only that the scores improved. It is what employees say matters most. The biggest experience driver is Usability at 37%, followed by Support at 27% and Efficiency at 18%. This tells us that enterprise application experience is not only about whether the system is available. It is about whether people can use it easily, get help when needed, and complete their work without extra effort.
The strongest negative signals make this even clearer. Impact on productivity has the highest negative share in the benchmark, and speed of application is another major pain point. This matters because employees do not experience enterprise applications as technical systems. They experience them as part of their working day. If the application is slow, hard to use, or hard to understand, work becomes harder.
The open-text topics support the same message. User guidance materials, system reliability, user interface complexity, and slow system performance all stand out. Together, these show that a human-centric approach to enterprise applications means more than fixing incidents or improving technical performance. It means helping people understand the tools, trust the tools, and use the tools without friction.
For IT leaders, this is an important point. Managing enterprise applications well is not only a technology task. It is also a people experience task. Better application experience can come from clearer design, better guidance, stronger support, and faster performance. The value is not only better scores. It is better daily work for employees.
13.3 Standardized benchmark findings
13.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
| Focus topic | Share of comments |
|---|---|
| User guidance materials | 18.5% |
| System reliability | 12.0% |
| User interface complexity | 11.7% |
| Slow system performance | 11.3% |
| File management and sharing | 7.4% |
13.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
| Experience driver | Influence on end-user experience |
|---|---|
| Usability | 37.4% |
| Support | 27.4% |
| Efficiency | 17.9% |
| Data | 10.7% |
| Reliability | 4.1% |
| Personalization | 2.5% |
13.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
| Experience indicator | Driver | Negative % | Neutral % | Positive % |
|---|---|---|---|---|
| Accuracy of data | Data | 9.7% | 7.2% | 83.1% |
| Ease of authentication | Efficiency | 19.1% | 10.1% | 70.8% |
| Impact on productivity | Efficiency | 54.3% | 2.6% | 43.1% |
| Speed of application | Efficiency | 42.0% | 19.4% | 38.7% |
| App customization | Personalization | 20.2% | 8.9% | 70.9% |
| Reliability of application | Reliability | 18.9% | 7.7% | 73.4% |
| Support for application | Support | 17.0% | 5.6% | 77.5% |
| Training for application | Support | 14.0% | 4.7% | 81.2% |
| Finding what is needed | Usability | 20.0% | 12.0% | 68.0% |
| Usability of application | Usability | 19.6% | 11.2% | 69.3% |
Indicator observations:
Impact on productivity stands out as the clearest negative signal. More than half of the selections are negative, which shows how quickly employees notice when applications make work harder instead of easier.
Speed of application is one of the most divided indicators in this chapter. It appears strongly in both positive and negative contexts, which suggests application speed is not a background technical detail. It is a visible part of daily employee experience.
Support for application and Training for application are much more represented as positive influences than negative ones. This suggests support is often remembered as a strength when it helps employees move forward.
Finding what is needed and Usability of application lean positive overall, but they still carry meaningful negative shares. That is an important tension in the data. Usability is the biggest driver overall, yet it is not consistently good. It can either help work flow smoothly or create friction, depending on the application and context.
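One simple way to read the indicator table above is to subtract each indicator's negative share from its positive share. The sketch below, with the figures copied from this chapter's table, ranks the Enterprise applications indicators by that net balance; the "net balance" framing is an illustrative reading, not a metric the report defines.

```python
# (Negative %, Positive %) pairs copied from the indicator table above.
indicators = {
    "Accuracy of data":           (9.7, 83.1),
    "Ease of authentication":     (19.1, 70.8),
    "Impact on productivity":     (54.3, 43.1),
    "Speed of application":       (42.0, 38.7),
    "App customization":          (20.2, 70.9),
    "Reliability of application": (18.9, 73.4),
    "Support for application":    (17.0, 77.5),
    "Training for application":   (14.0, 81.2),
    "Finding what is needed":     (20.0, 68.0),
    "Usability of application":   (19.6, 69.3),
}

# Net balance: positive share minus negative share, per indicator.
net = {name: round(pos - neg, 1) for name, (neg, pos) in indicators.items()}

# Only two indicators come out net-negative, matching the observations:
# Impact on productivity (-11.2) and Speed of application (-3.3).
for name, value in sorted(net.items(), key=lambda kv: kv[1]):
    print(f"{name}: {value:+.1f}")
```

Ranking this way makes the "divided" indicators easy to spot: Speed of application sits barely below zero, while the usability indicators land mid-pack despite Usability being the largest driver.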
§ 14 Laptops and computers
14.1 Numbers
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +13 | +7 | +17 | +8 | -6 |
| Perceived lost time / month | 5h 29min | 5h 54min | 4h 1min | 4h 58min | 5h 8min |
2025 result:
Happiness: -6
Average perceived lost time: 5h 8min/month
14.2 Observations
Laptops and Computers is one of the clearest warning signs in this benchmark. Happiness turned negative in 2025 for the first time in five years, falling from +8 in 2024 to -6. Lost time also increased to 5h 8min per month, which means employees are not only less happy with their devices, but also losing more productive time because of them.
The strongest message in the data is that employees experience laptops and computers very personally. The biggest experience driver is Model at 37.4%, followed by Performance at 20.2% and Reliability at 15.3%. This matters because it shows that endpoint experience is not only about whether devices technically work. It is also about whether the device fits the employee's work, role, and daily needs.
The data also shows that hardware decisions are highly visible to employees. Battery life, speed, reliability, updates, and storage all stand out. This is important in a human-centric approach to endpoint management. Employees do not separate the technical quality of the device from their ability to work well. If the device is slow, unstable, or poorly matched to the job, the experience quickly becomes negative.
There is also an interesting tension in the results. Model is the biggest driver overall, but inside that driver the signals are mixed. Memory and storage space and Device size and weight are mostly positive, while Length of battery life and Selection of models are much more negative. This suggests the challenge is not simply that laptop hardware is poor. It is that some parts of the device experience are working well, while others create clear frustration.
The focus topics support the same picture. Device and software personalization, stability and performance, storage capacity, update-related disruptions, and unstable network connections all stand out. Together, these findings show that laptop experience is shaped by the full working setup around the employee, not just by a single device specification.
For IT leaders, the message is clear. Managing laptops and computers well is not only about standardization and technical control. It is also about giving different employee groups the right device fit, reliable performance, and enough flexibility to do their work without friction.
14.3 Standardized benchmark findings
14.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
| Focus topic | Share of comments |
|---|---|
| Device and software personalization | 20.3% |
| Stability and performance | 17.4% |
| Storage capacity | 12.2% |
| Update-related disruptions | 9.9% |
| Unstable network connections | 8.7% |
14.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
| Experience driver | Influence on end-user experience |
|---|---|
| Model | 37.4% |
| Performance | 20.2% |
| Reliability | 15.3% |
| Updates | 12.4% |
| Accessories | 7.8% |
| Process | 6.9% |
14.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
| Experience indicator | Driver | Negative % | Neutral % | Positive % |
|---|---|---|---|---|
| Quality of accessories | Accessories | 22.5% | 18.3% | 59.2% |
| Device size and weight | Model | 17.0% | 12.0% | 71.0% |
| Length of battery life | Model | 41.6% | 19.0% | 39.4% |
| Memory and storage space | Model | 15.8% | 6.6% | 77.6% |
| Selection of models | Model | 53.0% | 2.8% | 44.2% |
| Speed of device | Performance | 39.2% | 11.9% | 48.9% |
| Changing device | Process | 18.3% | 7.7% | 74.0% |
| Reliability of device | Reliability | 26.8% | 6.4% | 66.8% |
| Forced updates and settings | Updates | 32.0% | 5.1% | 62.9% |
Indicator observations:
Selection of models is the clearest negative signal in this chapter. More than half of the selections are negative, which suggests employees often feel the available device choices do not fit their work needs well enough.
Length of battery life and Speed of device are also strongly split indicators. Both appear heavily in negative experiences, which makes them highly visible parts of the daily device experience.
Memory and storage space and Changing device are much more represented as positive influences than negative ones. This suggests some practical parts of device management are working better than the headline score alone might imply.
Forced updates and settings is mostly positive overall, but it still has a meaningful negative share. This creates an important tension in the data. Standardization may support many employees, while creating visible friction for others.
§ 15 Mobile devices
15.1 Numbers
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | -4 | +8 | +9 | +7 | -24 |
| Perceived lost time / month | 5h 52min | 4h 10min | 3h 59min | 4h 41min | 4h 40min |
2025 result:
Happiness: -24
Average perceived lost time: 4h 40min/month
15.2 Observations
Mobile Devices is the weakest measurement area in this benchmark in 2025. Happiness fell sharply from +7 in 2024 to -24, while lost time stayed high at 4h 40min per month. This means the employee experience with mobile work has clearly worsened, even though the productivity burden did not rise much further from the already high 2024 level.
The main finding is that mobile experience is not shaped by one issue alone. The biggest driver is Model at 41.6%, followed by Policies at 20.3% and Connectivity at 18.1%. This is important because it shows that mobile experience sits at the intersection of hardware, telecom services, and access policies. A better phone model alone will not solve the problem if access is restricted or connectivity is unreliable.
The indicator pattern makes this even clearer. Length of battery life is the strongest negative signal, while Storage space is also heavily negative. At the same time, Suitability of phone model and Screen and device size are mostly positive. This creates an important tension in the data. Employees are not saying that mobile devices are bad in every way. They are saying that some parts of the experience work well, while other parts create clear friction in daily work.
The open-text topics support the same message. Application access restrictions, outdated devices, device and network speed, and device selection flexibility all stand out. Together, these findings suggest that the mobile experience is strongly affected by how much freedom and continuity employees have when working across locations and situations.
For IT leaders, this matters because mobile experience is easy to underestimate. In many organizations, it affects field work, travel, frontline work, and hybrid work. A human-centric approach means looking beyond the phone itself and understanding the full mobile work experience: access, connectivity, battery life, speed, and whether the device actually fits the employee's work context.
15.3 Standardized benchmark findings
15.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
| Focus topic | Share of comments |
|---|---|
| Application access restrictions | 16.2% |
| Outdated devices | 13.2% |
| Device and network speed | 11.1% |
| Device selection flexibility | 8.2% |
| Network connectivity reliability | 8.0% |
15.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
| Experience driver | Influence on end-user experience |
|---|---|
| Model | 41.6% |
| Policies | 20.3% |
| Connectivity | 18.1% |
| Process | 10.1% |
| Access | 5.1% |
| Reliability | 4.8% |
15.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
| Experience indicator | Driver | Negative % | Neutral % | Positive % |
|---|---|---|---|---|
| Access to application | Access | 24.1% | 11.2% | 64.7% |
| Signal and reception | Connectivity | 22.2% | 11.2% | 66.7% |
| Length of battery life | Model | 51.4% | 20.8% | 27.7% |
| Screen and device size | Model | 15.5% | 6.8% | 77.6% |
| Storage space | Model | 36.4% | 24.3% | 39.3% |
| Suitability of phone model | Model | 15.8% | 6.3% | 77.9% |
| Data speed and limits | Policies | 15.3% | 8.2% | 76.5% |
| Security and access restrictions | Policies | 18.9% | 13.3% | 67.8% |
| Changing device | Process | 9.1% | 5.6% | 85.3% |
| Reliability of device | Reliability | 17.3% | 5.9% | 76.8% |
Indicator observations:
Length of battery life is the clearest negative signal in this chapter. More than half of the selections are negative, which shows how strongly battery problems shape the daily mobile experience.
Storage space is also heavily split toward negative experience. This suggests that mobile work is affected not only by device quality, but also by practical limits that build up over time.
Suitability of phone model, Screen and device size, and Changing device are much more represented as positive influences than negative ones. This suggests some parts of mobile device management are working well.
Policies show an interesting contradiction. Data speed and limits is mostly positive overall, but Security and access restrictions is much more mixed. This suggests policy can either support mobile work or get in the way of it, depending on how it is experienced by employees.
§ 16 Collaboration with IT
16.1 Numbers
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +82 | +84 | +70 | +65 | +72 |
| Perceived lost time / month | 2h 14min | 1h 57min | 3h 31min | 3h 58min | 3h 2min |
2025 result:
Happiness: +72
Average perceived lost time: 3h 2min/month
16.2 Observations
Collaboration with IT remains one of the strongest measurement areas in the benchmark, but the long-term pattern is more mixed than the 2025 score alone suggests. Happiness improved from +65 in 2024 to +72 in 2025, and lost time also improved from 3h 58min to 3h 2min per month. Even so, both measures are still clearly weaker than they were in 2021 and 2022.
The main finding is that employees still experience Collaboration with IT primarily through people. The biggest experience driver is Competence at 43.4%, followed by Communication at 30.7% and Training at 22.3%. This matters because it shows that good collaboration with IT is not mainly about technology. It is about how capable, clear, and helpful IT feels to employees in daily work.
There is also an important credibility point in the data. The structured indicators are overwhelmingly positive. Language used by IT, Interaction style of IT staff, and Communicating updates and outages all have very strong positive shares. At the same time, the focus topics point to recurring friction around service delays, internal communication, support accessibility, and process clarity. This suggests that the general relationship with IT is often experienced positively, while specific moments of delay, confusion, or weak follow-through still stand out strongly in comments.
That tension is important in a human-centric reading of the results. Employees may appreciate IT staff and still feel frustrated when communication is unclear, processes are hard to follow, or support is difficult to access at the right moment. In other words, good intentions and good interaction style do matter, but they are not always enough on their own.
For IT leaders, the message is encouraging but demanding. Collaboration with IT is already a relative strength, which means there is something valuable to protect. At the same time, the data suggests that stronger communication clarity, better accessibility, and less delay could make this experience more consistent and reduce the gap between positive overall sentiment and the friction that still appears in open feedback.
16.3 Standardized benchmark findings
16.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
| Focus topic | Share of comments |
|---|---|
| Service delays | 18.5% |
| Internal communication | 14.8% |
| Process clarity | 13.6% |
| Support accessibility | 13.0% |
| Stakeholder alignment | 11.1% |
16.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
| Experience driver | Influence on end-user experience |
|---|---|
| Competence | 43.4% |
| Communication | 30.7% |
| Training | 22.3% |
| User understanding | 3.5% |
16.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
| Experience indicator | Driver | Negative % | Neutral % | Positive % |
|---|---|---|---|---|
| Communicating updates and outages | Communication | 4.2% | 3.2% | 92.6% |
| Language used by IT | Communication | 1.7% | 0.9% | 97.4% |
| Interaction style of IT staff | Competence | 2.3% | 0.8% | 96.8% |
| Training to build IT skills | Training | 5.8% | 6.4% | 87.8% |
| Understanding employee role | User understanding | 4.2% | 4.2% | 91.5% |
Indicator observations:
Language used by IT and Interaction style of IT staff are among the strongest positive signals in the benchmark. This suggests employees often feel that IT communicates in a respectful and understandable way.
Communicating updates and outages is also strongly positive overall, which supports the idea that communication quality is a real strength in this area.
Training to build IT skills is still mostly positive, but it is the most mixed indicator in this chapter. This suggests training is more uneven than the other collaboration signals.
The contradiction in this measurement area is that the structured indicators look very strong, while the focus topics still highlight delays, accessibility, and process clarity. This suggests the relationship with IT is often good, but the experience can still break down in specific situations where timing, clarity, or follow-through matter most.
§ 17 Office environment
17.1 Numbers
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | — | +40 | +40 | +29 | +44 |
| Perceived lost time / month | — | 3h | 3h 3min | 3h 14min | 3h 7min |
2025 result:
Happiness: +44
Average perceived lost time: 3h 7min/month
17.2 Observations
Office Environment recovered well in 2025. Happiness rose from +29 in 2024 to +44, while lost time improved slightly from 3h 14min to 3h 7min per month. This makes it one of the clearer positive shifts in this year's benchmark.
The main finding is that office experience is shaped by shared infrastructure. The biggest experience drivers are Connectivity at 30.6%, Meeting rooms at 28.0%, and Shared Equipment at 26.5%. This matters because the office experience is not defined by one employee's device alone. It is shaped by the systems and spaces that people rely on together: Wi-Fi, meeting rooms, printers, screens, desk setup, and local support.
The data shows that this measurement area is highly visible when things go wrong. Meeting room technology, Wi-Fi reliability, ergonomics, and local IT support all stand out in the focus topics. These are not hidden background issues. They affect collaboration in real time, often in front of other people, which makes the experience more noticeable and sometimes more frustrating.
There is also a useful contradiction in the data. Most structured indicators are clearly positive overall, including IT support in the office, Printing, and Sufficiency of meeting room equipment. At the same time, the open-text topics still highlight WiFi reliability, Meeting room technology reliability, and Local IT support availability as recurring friction points. This suggests the general office setup works reasonably well for many employees, while specific locations, rooms, or moments of failure still leave a strong impression.
For IT leaders, this is important because office experience is often managed through infrastructure projects and local support models. A human-centric approach means not only tracking whether the office is technically equipped, but whether people can actually work, meet, connect, and collaborate smoothly in the spaces they use every day.
17.3 Standardized benchmark findings
17.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
| Focus topic | Share of comments |
|---|---|
| WiFi reliability | 19.7% |
| Local IT support availability | 13.5% |
| Meeting room technology reliability | 12.0% |
| Workspace ergonomics | 12.0% |
| Meeting room connectivity | 10.8% |
17.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
| Experience driver | Influence on end-user experience |
|---|---|
| Connectivity | 30.6% |
| Meeting rooms | 28.0% |
| Shared Equipment | 26.5% |
| Support | 14.9% |
17.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
| Experience indicator | Driver | Negative % | Neutral % | Positive % |
|---|---|---|---|---|
| Wi-Fi reliability | Connectivity | 14.1% | 13.6% | 72.4% |
| Wi-Fi speed | Connectivity | 11.6% | 11.8% | 76.6% |
| Meeting room booking | Meeting rooms | 10.6% | 13.4% | 76.1% |
| Reliability of meeting room devices | Meeting rooms | 17.5% | 14.6% | 67.9% |
| Sufficiency of meeting room equipment | Meeting rooms | 13.3% | 8.0% | 78.7% |
| Desk accessories | Shared Equipment | 17.3% | 14.3% | 68.4% |
| Ergonomic setup at workstations | Shared Equipment | 14.4% | 8.3% | 77.3% |
| Printing | Shared Equipment | 10.8% | 10.1% | 79.1% |
| IT support in the office | Support | 7.1% | 3.8% | 89.1% |
Indicator observations:
IT support in the office is one of the strongest positive signals in this chapter. This suggests local support presence is often appreciated when employees need help in the office.
Reliability of meeting room devices is the most mixed indicator here and the clearest negative pressure point among the structured signals. This shows how visible meeting room failures are in the shared office experience.
Wi-Fi reliability and Desk accessories also carry meaningful negative shares, even though they are positive overall. This suggests that office basics matter a lot and are quickly noticed when they fall short.
The contradiction in this measurement area is that many indicators look healthy overall, while open-text topics still focus strongly on WiFi reliability, meeting room technology, and local support availability. This suggests office experience is often good in general, but uneven across locations and situations.
§ 18 Remote work
18.1 Numbers
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +64 | +77 | +82 | +71 | +71 |
| Perceived lost time / month | 4h 45min | 4h 21min | 4h 4min | 4h 57min | 4h 13min |
2025 result:
Happiness: +71
Average perceived lost time: 4h 13min/month
18.2 Observations
Remote Work remains one of the better-scoring measurement areas in the benchmark, but the long-term story is mixed. Happiness stayed at +71 in 2025, while lost time improved from 4h 57min in 2024 to 4h 13min per month. Even so, the experience has not returned to its 2023 peak of +82, which suggests the recovery is only partial.
The main finding is that remote work depends on a combination of support, access, and policy fit. The biggest experience drivers are Support at 30.0%, Access at 28.4%, and Policies at 27.5%. This matters because remote work experience is not only about connectivity. It is about whether employees can get what they need, understand how remote work is meant to function, and receive help when something gets in the way.
There is a useful contradiction in the data. The overall driver share for Connectivity is relatively low at 14.0%, yet the connectivity indicators are clearly more negative than most of the others. Internet connection and VPN Access have much more mixed results than Accessing applications, Remote work practices, and Support for remote work, which are strongly positive. This suggests connectivity may not be the most common part of the remote work experience, but when it fails, it has a strong negative effect.
The focus topics support the same picture. Network reliability, support responsiveness, and remote work flexibility stand out most clearly, followed by remote work equipment and VPN configuration hurdles. Together, these findings suggest that remote work is experienced as a full working environment, not just a technical setup. Employees notice whether remote work is reliable, supported, and practical in everyday life.
For IT leaders, this matters because a remote work model can look stable on the surface while still creating friction underneath. A human-centric approach means paying attention not only to policy and access, but also to the everyday weak points that can make remote work feel fragile when employees most need it to work smoothly.
18.3 Standardized benchmark findings
18.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
| Focus topic | Share of comments |
|---|---|
| Network reliability | 15.3% |
| Support responsiveness | 15.1% |
| Remote work flexibility | 12.2% |
| Remote work equipment | 9.4% |
| In-person collaboration | 7.4% |
18.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
| Experience driver | Influence on end-user experience |
|---|---|
| Support | 30.0% |
| Access | 28.4% |
| Policies | 27.5% |
| Connectivity | 14.0% |
| Tools | 7.0% |
18.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
| Experience indicator | Driver | Negative % | Neutral % | Positive % |
|---|---|---|---|---|
| Accessing applications | Access | 1.8% | 1.6% | 96.6% |
| Internet connection | Connectivity | 20.1% | 19.2% | 60.7% |
| VPN Access | Connectivity | 19.8% | 18.4% | 61.9% |
| Remote work practices | Policies | 1.4% | 1.1% | 97.5% |
| Support for remote work | Support | 2.3% | 1.3% | 96.4% |
| Collaboration tools | Tools | 14.3% | 10.5% | 75.2% |
| Remote work equipment | Tools | 14.3% | 20.3% | 65.5% |
Indicator observations:
Remote work practices, Accessing applications, and Support for remote work are all very strong positive signals. This suggests many employees feel the overall remote work model is understandable and supported.
Internet connection and VPN Access are much more mixed than the other indicators. This shows that connectivity remains one of the most fragile parts of the remote work experience.
Remote work equipment is also more mixed than the headline score might suggest, with a relatively large neutral share. This may point to uneven quality or adequacy across different employee situations.
The contradiction in this measurement area is that the overall score is strong and several indicators are highly positive, yet connectivity and remote access still stand out as recurring weak points in both structured data and focus topics. This suggests remote work often works well until a core dependency fails.
§ 19 Service portal
19.1 Numbers
| Metric | 2021 | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|
| Happiness | +15 | +32 | +27 | +31 | +33 |
| Perceived lost time / month | — | 5h 21min | 5h 46min | 5h 41min | 5h 25min |
| % of request submissions via portal | 69% | 72% | 76% | 79% | 82% |
2025 result:
Happiness: +33
Average perceived lost time: 5h 25min/month
Share of request submissions via portal: 82%
19.2 Observations
Service Portal is one of the clearest examples in this benchmark of a strategy succeeding operationally before it succeeds experientially. In 2025, 82% of all IT requests were submitted through the portal, up from 69% in 2021. This shows that organizations have been highly successful in moving demand to self-service.
From an IT strategy and budget point of view, this makes sense. Service portals are usually meant to reduce manual workload for support teams, standardize how requests enter IT, and help employees solve simple needs without direct agent support. In that sense, the portal is doing an important job.
But the experience data shows the other side of the story. Happiness is only +33, and lost time remains high at 5h 25min per month. This figure refers to the broader periodic Service Portal touchpoint, not to per-request portal submission data. This suggests that many employees are using the portal because it is the main route into IT, not because it is genuinely easy or efficient.
The strongest experience driver is Usability at 42.6%, followed by Content at 25.9% and Process at 21.8%. That matters because it shows the core challenge is not only technical performance. It is whether employees can find what they need, understand what to do, and complete the task without unnecessary effort.
The indicator data makes this very visible. Speed of portal is the strongest negative signal in the chapter. Finding what is needed, Reporting issues, and Requesting something are also heavily weighted toward negative experience. At the same time, Accuracy of content is much more positive. This is an important contradiction in the data. The problem is not simply that the portal lacks information. It is that employees still struggle to navigate and use it effectively.
The focus topics support the same conclusion. Ticket submission, platform usability, finding services, and follow-up consistency all stand out. Together, these findings suggest that many organizations have already achieved portal adoption, but have not yet fully achieved portal ease of use.
For IT leaders, this is a useful reminder. A self-service strategy creates value only when it reduces work for both IT and employees. If the portal lowers workload for the service desk but increases effort for employees, the cost may be moved rather than removed. A human-centric approach to self-service means measuring not only adoption, but also how much friction the portal creates in the employee's working day.
19.3 Standardized benchmark findings
19.3.1 Focus topics
Focus Topics are recurring themes identified from open-text feedback. The list below shows the top 5 recurring topics in this benchmark view:
Focus topic
Share of comments
Ticket submission
19.0%
Platform usability
18.0%
Finding services
9.8%
Follow-up consistency
8.9%
Appointment process
6.0%
19.3.2 Experience drivers
Experience Drivers are higher-level categories that group related Experience Indicators into broader reason themes. The percentages show how often each driver influences end-user experience in this measurement area.
Experience driver
Influence on end-user experience
Usability
42.6%
Content
25.9%
Process
21.8%
Speed
9.7%
19.3.3 Experience indicators
Experience Indicators are structured survey answer options selected by employees to explain their rating. Each indicator belongs to one Experience Driver, and the table shows whether selections were associated with negative, neutral, or positive responses.
Experience indicator
Driver
Negative %
Neutral %
Positive %
Clarity of content
Content
24.0%
5.8%
70.2%
Accuracy of content
Content
14.8%
3.0%
82.2%
Completing a task
Process
26.8%
0.3%
72.9%
Speed of portal
Speed
54.7%
10.9%
34.4%
Finding what is needed
Usability
33.6%
5.8%
60.6%
Reporting issues
Usability
34.5%
4.4%
61.2%
Requesting something
Usability
33.5%
5.5%
61.0%
Indicator observations:
Speed of portal is the clearest negative signal in this chapter. More than half of the selections are negative, which shows how quickly employees notice when the portal feels slow.
Finding what is needed, Reporting issues, and Requesting something each draw roughly a third of selections as negative. This suggests that core self-service tasks are still harder than they should be.
Accuracy of content is much more positive than negative, which is an important contrast. The issue is not only what information exists in the portal, but whether employees can reach and use it easily.
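One way to read the indicator table is as a net signal per indicator: positive share minus negative share. A minimal sketch using the values transcribed from this chapter's table:

```python
# Net experience signal per indicator: positive % minus negative %.
# Values transcribed from the Service Portal indicator table above.
indicators = {
    "Clarity of content":     (24.0, 70.2),  # (negative %, positive %)
    "Accuracy of content":    (14.8, 82.2),
    "Completing a task":      (26.8, 72.9),
    "Speed of portal":        (54.7, 34.4),
    "Finding what is needed": (33.6, 60.6),
    "Reporting issues":       (34.5, 61.2),
    "Requesting something":   (33.5, 61.0),
}

net = {name: round(pos - neg, 1) for name, (neg, pos) in indicators.items()}

# Rank from weakest to strongest signal.
for name, score in sorted(net.items(), key=lambda kv: kv[1]):
    print(f"{name:24s} {score:+.1f}")
```

Speed of portal is the only indicator with a negative net signal (−20.3), which is why it reads as the clearest pain point even while the portal's adoption keeps rising.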
The contradiction in this measurement area is that portal usage is very high and still rising, while the strongest experience signals remain usability and speed pain points. This suggests many employees use the portal because they must, not because it offers a smooth experience.
§ 8.3 Geographical differences · Incidents
Higher national wellbeing. Lower IT Happiness.
The pattern is consistent and counter-intuitive. A score of +74 in Western Europe and +90 in Central America may describe equally well-functioning IT organizations — serving very different workforces.
Four profiles, defined by two axes: technical confidence, and preference for self-resolution versus IT-assisted resolution.
Doers report the lowest Happiness and among the highest lost time — not because they can't solve problems, but because they know what "good" looks like and notice every delay. Services designed around the average user will systematically underperform with this population.
DOER
+79
Happiness
215
Min lost / incident
Technically capable, prefers to resolve issues independently. Concentrated in Western Europe (62% of workforce) and North America (57%).
Profile map (technical confidence × preferred resolution):
High-tech · Self-solve → Doer · +79
High-tech · IT-assisted → Prioritizer · +86
Less-tech · Self-solve → Trier · +86
Less-tech · IT-assisted → Supported · +89
§ 11.1 · IT support profiles · numbers
Four profiles · Happiness and lost time · 2025 · Share of total workforce
Profile
Axis
Happiness
Lost time / ticket
Share
Doer
High-tech · Self-resolve
+79
215 min
54%
Prioritizer
High-tech · IT-assisted
+86
188 min
22%
Trier
Less-tech · Self-resolve
+86
158 min
13%
Supported
Less-tech · IT-assisted
+89
149 min
11%
§ 11.2 · Observations. Doers dominate the workforce and report the lowest Happiness with the highest lost time — not because they cannot solve problems, but because they know what "good" looks like and notice every delay. The Supported profile, by contrast, reports the highest Happiness at +89 with the lowest lost time at 149 minutes. Regional distribution matters: Western Europe is 62% Doers, North America 57% Doers — these workforces are the most demanding to serve.
§ 11 Read the full chapter text · IT support profiles
11.1 Numbers
HappySignals defines IT support profiles using two behavioral dimensions:
Competence: how capable the user is of solving the issue or discussing IT matters
Attitude: how willing the user is to solve the problem on their own versus having IT handle it
Profile
Competence
Attitude
Typical support preference
Doer
High
Wants to solve issues independently
Self-service, direct expert contact, clear status tracking
Prioritizer
High
Prefers IT to handle the issue
Fast direct help, quick resolution during the session
Trier
Lower
Wants to try and learn
Clear instructions, personal help that also teaches
Supported
Lower
Prefers not to self-resolve
Familiar channels, simple language, personal or on-site help
Support profile distribution across regions in 2025:
Support profiles 2025
Africa
Asia
South America
Central America
North America
Western Europe
Eastern Europe
Oceania
Middle East
% of total
2%
14%
5%
4%
26%
40%
6%
2%
2%
n 2025
9,508
57,170
19,870
16,191
110,987
168,155
23,723
10,282
7,501
Doers
31%
32%
32%
28%
57%
62%
52%
58%
29%
Prioritizers
38%
26%
38%
39%
19%
22%
31%
17%
33%
Supported
23%
29%
23%
26%
14%
9%
13%
13%
30%
Triers
8%
13%
7%
7%
10%
6%
5%
11%
8%
11.2 Observations
IT support profiles are one of the most useful ways to explain why the same service model can produce very different experience outcomes in different employee populations. The key point from the HappySignals guide is that these are not fictional personas. They are behavior-based profiles built around how employees relate to IT problems: how capable they are, and how much they want to solve issues themselves.
The regional distribution data shows why this matters so much. Western Europe stands out with 62% Doers, the highest proportion in the benchmark, while Central America has only 28%. At the same time, Western Europe has only 9% Supported users, compared with 30% in the Middle East and 29% in Asia. This means regions are not only receiving support differently. They are bringing very different expectations, habits, and tolerance levels to the support experience.
Support profile comparison
Region
Share of users
Why it matters
Doers
Western Europe
62%
Highest proportion in the benchmark
Doers
Central America
28%
Much smaller technically self-reliant segment
Supported users
Middle East
30%
Highest Supported share
Supported users
Asia
29%
High Supported share
Supported users
Western Europe
9%
Lowest Supported share
The most important implication is that technically capable employees are not necessarily the easiest employees to satisfy. Doers often know what good support looks like, have usually already tried to solve the issue themselves, and become frustrated when support feels slow, repetitive, or insufficiently expert. Prioritizers are also technically confident, but they do not want to spend their own time on IT issues. They value speed, directness, and minimal effort from their side. Triers and Supported employees need something different: clearer guidance, simpler language, more reassurance, and more visible help in choosing the right channel.
This has direct consequences for service design. A service desk with a high Doer population should not expect generic first-line handling and basic scripts to feel good enough. These users often respond better to strong status visibility, direct access to expertise, and channels that let them move quickly. A service desk with more Supported or Trier employees should invest more in familiar channels, plain language, personal assistance, and making it obvious where to start. The same channel strategy or communication style will not work equally well for both groups.
For IT leaders, support profiles are valuable because they turn broad satisfaction patterns into something more actionable. Instead of asking only whether employees are happy with IT, profiles help explain who is unhappy, why, and what kind of service design is more likely to work for them. This is especially useful in multinational organizations, where regional profile mix can shift the benchmark outcome even when the underlying service quality is similar.
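The mix-shift effect described above is easy to quantify. The sketch below applies the same global per-profile Happiness scores to two regional profile mixes from this chapter — a simplifying assumption, since real per-profile scores also vary by region:

```python
# Global per-profile Happiness (from the 2025 profiles table).
profile_happiness = {"Doer": 79, "Prioritizer": 86, "Trier": 86, "Supported": 89}

# Regional profile mix, % of workforce (from the regional distribution table).
mix = {
    "Western Europe":  {"Doer": 62, "Prioritizer": 22, "Supported": 9,  "Trier": 6},
    "Central America": {"Doer": 28, "Prioritizer": 39, "Supported": 26, "Trier": 7},
}

def expected_happiness(shares):
    """Share-weighted Happiness; normalized because rounding can keep shares from summing to 100."""
    total = sum(shares.values())
    return sum(profile_happiness[p] * s for p, s in shares.items()) / total

for region, shares in mix.items():
    print(f"{region}: +{expected_happiness(shares):.0f}")
```

Even with identical per-profile scores everywhere, Western Europe's Doer-heavy mix alone predicts roughly +82 versus roughly +85 for Central America — a gap of about three points created purely by workforce composition, before any difference in service quality.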
"When our customers look for places to improve productivity, reassignments are the most consistent starting point. The data is clear to read and the improvement actions are concrete."
HappySignals · Customer patterns
§ 22 Industry insights
Finance leads. Technology lags.
Technology companies sit at the bottom of incident Happiness (+69, level with Retail) despite the most sophisticated IT environments — consistent with the pattern that technically capable employees are more demanding, not less.
Finance & Insurance
+88
96 min
Publishing
+87
118 min
FMCG
+83
228 min
Recruitment & Staffing
+83
150 min
Public Sector
+83
115 min
Healthcare & Pharma
+81
200 min
Energy & Utilities
+80
164 min
Manufacturing
+77
197 min
Transportation
+74
343 min
Retail
+69
191 min
Technology
+69
186 min
Incident Happiness 2025. Bar fill indicates score on the −100 to +100 scale. Lost time per incident shown at right.
§ 22.1 · Industry benchmark · numbers
Incident Happiness by industry · 2025 · Ranked
Industry
Incident Happiness
Lost time / incident
Finance & Insurance
+88
1h 36min
Publishing
+87
1h 58min
FMCG
+83
3h 48min
Recruitment & Staffing
+83
2h 30min
Public Sector
+83
1h 55min
Healthcare & Pharma
+81
3h 20min
Energy & Utilities
+80
2h 44min
Manufacturing
+77
3h 17min
Small MSP
+77
2h 8min
Transportation
+74
5h 43min
Retail
+69
3h 11min
Technology
+69
3h 6min
§ 22 Read the full chapter text · Industry insights
22.1 Numbers
Industry benchmarks for ticket-based IT support in 2025:
Industry
Incident Happiness
Incident perceived lost time
Request Happiness
Request perceived lost time
Finance and insurance
+88
1h 36min
+89
2h 11min
Publishing
+87
1h 58min
+88
3h 12min
FMCG
+83
3h 48min
+87
4h 42min
Recruitment & Staffing
+83
2h 30min
+92
2h 6min
Public Sector
+83
1h 55min
+86
1h 31min
Healthcare & Pharma
+81
3h 20min
+79
2h 50min
Energy & Utilities
+80
2h 44min
+85
2h 45min
Manufacturing
+77
3h 17min
+80
3h 36min
Small MSP
+77
2h 8min
+80
2h 6min
Transportation
+74
5h 43min
+76
2h 18min
Retail
+69
3h 11min
+72
3h 41min
Technology
+69
3h 6min
+82
1h 15min
22.2 Observations
Ticket-based IT experience varies significantly by industry. Finance and insurance sits at the top of the benchmark for both incidents and requests, while retail and technology are among the weakest for Happiness. This is important because it shows that the employee experience of IT support is shaped not only by service desk quality, but also by the operating context around it: work intensity, system complexity, business criticality, and the expectations employees bring to support interactions.
Transportation stands out most clearly on lost time. At 5h 43min per incident, it is far above every other sector in this comparison. That suggests IT failures in transportation environments may be especially disruptive to the employee's ability to work, making every delay more visible and costly.
There are also some important contrasts between incidents and requests. Technology has very low incident Happiness at +69, yet request Happiness is much stronger at +82 and request lost time is relatively low at 1h 15min. That suggests technically demanding environments do not necessarily struggle across every service type in the same way. Some industries may handle planned fulfilment relatively well while still performing poorly when employees need incident support.
For IT leaders, industry benchmarks are most useful as context rather than as league tables. A public sector organization at +83 for incidents may be performing well for its environment even if it does not look similar to finance and insurance at +88. The practical value comes from comparing your organization to peers facing similar complexity, workforce patterns, and service expectations.
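The Transportation outlier noted above can be confirmed directly from the incident lost-time column. A minimal sketch that parses the chapter's table values and flags anything more than one standard deviation above the mean:

```python
import statistics

# Incident perceived lost time per industry (from the 2025 table above).
lost_time = {
    "Finance and insurance": "1h 36min", "Publishing": "1h 58min",
    "FMCG": "3h 48min", "Recruitment & Staffing": "2h 30min",
    "Public Sector": "1h 55min", "Healthcare & Pharma": "3h 20min",
    "Energy & Utilities": "2h 44min", "Manufacturing": "3h 17min",
    "Small MSP": "2h 8min", "Transportation": "5h 43min",
    "Retail": "3h 11min", "Technology": "3h 6min",
}

def to_minutes(text):
    """Convert an 'Xh Ymin' string into total minutes."""
    hours, minutes = 0, 0
    for token in text.split():
        if token.endswith("min"):
            minutes = int(token[:-3])
        elif token.endswith("h"):
            hours = int(token[:-1])
    return hours * 60 + minutes

minutes = {k: to_minutes(v) for k, v in lost_time.items()}
mean = statistics.mean(minutes.values())
spread = statistics.pstdev(minutes.values())

outliers = [k for k, v in minutes.items() if v > mean + spread]
print(outliers)  # Transportation is the only industry beyond one standard deviation
```

On this data the mean is about 176 minutes with a standard deviation of roughly 64, so only Transportation (343 minutes) clears the threshold — FMCG at 228 minutes does not.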
Part 06 · § 20–25
The wider context
ESM, industry, sourcing & conclusion
§ 21 The wider context
Why experience differs between organizations.
IT experience does not sit in a vacuum. The same support team, processes, and tools will produce different outcomes depending on the industry the organization operates in, how large it is, and how its service desk is sourced. This section brings together those three lenses and sets them alongside the comparison with HR and Finance as Enterprise Service Management areas.
The wider context is particularly useful for leaders who want to sanity-check their own numbers. "Good" does not mean the same thing in every context. The same +78 Happiness score can sit well below one sector's benchmark yet count as a strong result in an enterprise-scale outsourced environment, given the structural constraints.
§ 21 Read the full chapter text · The wider context
The benchmark data in this section adds context to ticket-based IT experience. Industry, company size, and sourcing do not explain everything on their own, but they help explain why similar IT organizations can produce very different employee experience outcomes.
This matters because IT experience should not be read as if every organization operates under the same conditions. The complexity of the environment, the nature of the workforce, the structure of the service desk, and the number of handoffs behind the scenes all shape how support is experienced by employees.
The following chapters look at this wider context through three lenses. Industry insights compares how ticket-based IT support differs across sectors with different operating realities and employee expectations. Company size impact looks at how organizational scale affects coordination, complexity, and the speed at which tickets move. Internal vs outsourced service desk examines how different sourcing models can produce similar Happiness scores while creating very different levels of perceived lost time.
These comparisons are best read as contextual benchmarks rather than league tables. Their purpose is not to suggest that every organization should look the same, but to help explain why a score means different things in different operating environments. In that sense, the wider context is not background detail. It is part of how the benchmark should be interpreted.
§ 24 Internal vs outsourced service desk
When Happiness lies, lost time tells the truth
Same Happiness. Twice the lost time.
Internal and outsourced service desks produce identical Happiness scores — but very different productivity costs.
If the benchmark looked only at Happiness, the two sourcing models would appear effectively identical. Lost time reveals the dimension Happiness hides: how long work is disrupted while the case moves through the process.
Internal 1st-line service desk
Incident Happiness
+81
Request Happiness
+84
Lost / incident
2h 17m
Lost / request
1h 35m
Shorter paths to resolution. Better familiarity with business systems. Less reliant on approvals from customer-side teams.
External 1st-line service desk
Incident Happiness
+81
Request Happiness
+84
Lost / incident
3h 20m
Lost / request
3h 43m
Acceptable front-line interaction, but more elapsed time: time-zone handoffs, contractual scope boundaries, weaker familiarity with internal systems.
The gap: Outsourced desks add 1h 03m more lost time per incident and 2h 08m more per request. At enterprise scale, that is a multi-million-dollar productivity delta — invisible to any Happiness dashboard.
§ 24.1 · Internal vs. outsourced · numbers
1st-line service desk · sourcing comparison · 2025 · Happiness on NPS-style scale
Metric
Internal
Outsourced
Δ
Incident Happiness
+81
+81
0
Request Happiness
+84
+84
0
Lost time / incident
2h 17min
3h 20min
+1h 03min · +46%
Lost time / request
1h 35min
3h 43min
+2h 08min · +135%
§ 24 Read the full chapter text · Internal vs outsourced service desk
24.1 Numbers
Internal versus external first-line service desk experience:
Service desk type
Incident Happiness
Incident perceived lost time
Request Happiness
Request perceived lost time
Internal 1st line service desk
+81
2h 17min
+84
1h 35min
External 1st line service desk
+81
3h 20min
+84
3h 43min
24.2 Observations
This is one of the most striking comparisons in the ticket-based benchmark. Internal and external first-line service desks produce the same Happiness scores in both incidents and requests. But the lost-time gap is substantial. Employees supported by an outsourced first line lose 1h 3min more time per incident and 2h 8min more time per request.
This matters because it shows why Happiness and lost time need to be read together. If the benchmark looked only at Happiness, the two sourcing models would appear effectively identical. Lost time reveals a different dimension of the employee experience: how long work is disrupted while the case moves through the process.
The source material suggests several reasons why outsourced models may create more lost time even when the front-line interaction itself is acceptable. Missed communications across time zones, weaker familiarity with internal systems, more reliance on customer-side teams for approvals or escalations, tighter contractual scope boundaries, and less business context can all add elapsed time without necessarily making the first interaction feel worse.
There is also an important link to company size. Larger organizations are more likely to use outsourced first-line models and more likely to operate with multiple service providers. That can make coordination slower and increase the number of dependencies around the ticket. In that sense, the outsourced-versus-internal comparison is partly also a complexity comparison.
The conclusion is not that outsourcing is inherently better or worse. It is that the two models perform differently on different dimensions. Outsourced first-line teams may handle straightforward interactions well, but the productivity cost to employees appears to rise much faster when the ticket requires follow-up, coordination, or handoff beyond the first contact.
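The per-ticket gap compounds quickly at scale. The sketch below annualizes it under illustrative assumptions — the ticket volumes and the fully loaded hourly cost are hypothetical, and only the per-ticket deltas come from the benchmark tables:

```python
# Per-ticket lost-time deltas, outsourced minus internal (from the comparison table).
EXTRA_MIN_PER_INCIDENT = 63    # 3h 20min - 2h 17min
EXTRA_MIN_PER_REQUEST = 128    # 3h 43min - 1h 35min

# Hypothetical enterprise volumes and cost — replace with your own figures.
incidents_per_year = 100_000
requests_per_year = 50_000
hourly_cost = 45.0  # assumed fully loaded cost of an employee hour, USD

extra_minutes = (incidents_per_year * EXTRA_MIN_PER_INCIDENT
                 + requests_per_year * EXTRA_MIN_PER_REQUEST)
extra_hours = extra_minutes / 60
annual_cost = extra_hours * hourly_cost

print(f"Extra lost time: {extra_hours:,.0f} hours/year ≈ ${annual_cost:,.0f}")
```

Under these assumed volumes the gap alone represents roughly 212,000 employee-hours, or about $9.5M per year — the kind of figure the "multi-million-dollar productivity delta" above refers to, and one that never appears on a Happiness dashboard.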
The ISG view below refers specifically to the first-contact and reassignment lifecycle, not to the overall +81 / +81 benchmark averages shown in the table above.
§ 20 Beyond IT · ESM
IT · HR · Finance on the same yardstick
In ESM, finance is the happiest. Also the most time-expensive.
HappySignals is increasingly used to measure experience across HR and Finance service functions using the same framework. Put side by side, the three functions tell a consistent story: a service can feel well handled while still costing hours.
IT Services
Happiness 2025
+81
3h 14m
lost / ticket
↓ 1pt Dipped one point vs. 2024 (+82). Lost time marginally improved.
HR Services
Happiness 2025
+80
1h 37m
lost / ticket
↓ 1h 20m Lost time fell sharply from 2h 57m — the strongest efficiency gain of any service function this year.
Finance Services
Happiness 2025
+83
3h 27m
lost / ticket
↓ 1h 18m Highest happiness of the three, but still the longest interruption — payroll, payments, and expense timing raise the perceived stakes.
When HR, Finance, and IT all measure experience on the same yardstick, the organization gains something rarer than any individual score: a shared language for how service feels to employees — and a basis for honest, cross-functional improvement.
§ 20.1 · Enterprise Service Management · numbers
IT, HR and Finance services on one yardstick · 2024 → 2025 · Happiness · lost time
Service function
Happiness 2024
Happiness 2025
Lost time 2024
Lost time 2025
IT services
+82
+81
3h 18min
3h 14min
HR services
+81
+80
2h 57min
1h 37min
Finance services
+83
+83
4h 45min
3h 27min
§ 20 Read the full chapter text · Beyond IT: Enterprise service management
HappySignals is increasingly used to measure employee experience across HR and Finance service functions, not just IT, using the same survey framework. This makes it possible to compare service experience across enterprise functions on a consistent basis.
The comparison is useful because the services are different, but the employee perspective is comparable. In all three cases, employees are interacting with a support function, waiting for a process to move forward, and forming a view on whether the service helped them effectively and without unnecessary friction.
Service function
Happiness 2025
Avg. perceived lost time per ticket 2025
Happiness 2024
Perceived lost time 2024
IT
+81
3h 14min
+82
3h 18min
HR
+80
1h 37min
+81
2h 57min
Finance
+83
3h 27min
+83
4h 45min
Finance services produce the highest Happiness at +83, but also the highest perceived lost time per ticket at 3h 27min. HR services have by far the shortest perceived lost time at 1h 37min per ticket, despite a Happiness score that is broadly comparable to IT. This matters because it shows again why Happiness and perceived lost time should be read together. A service can feel well handled overall while still interrupting work for a meaningful amount of time.
The Finance result is also intuitively plausible. Financial requests often carry greater urgency and complexity, and the stakes of delay, such as payroll errors, payment processing, or expense reimbursements, may heighten the employee's perception of time lost while waiting for resolution.
There are also positive signs in the year-on-year movement. HR perceived lost time fell from 2h 57min in 2024 to 1h 37min in 2025, a substantial improvement. Finance perceived lost time also reduced clearly, from 4h 45min to 3h 27min over the same period.
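The year-on-year movements quoted above follow directly from the table; a small sketch computing the absolute and relative change per function (minutes transcribed from the ESM table):

```python
# Perceived lost time per ticket, in minutes, 2024 -> 2025 (from the ESM table).
lost_time = {
    "IT":      (198, 194),  # 3h 18min -> 3h 14min
    "HR":      (177, 97),   # 2h 57min -> 1h 37min
    "Finance": (285, 207),  # 4h 45min -> 3h 27min
}

for function, (before, after) in lost_time.items():
    delta = before - after
    print(f"{function}: -{delta} min ({delta / before:.0%} improvement)")
```

HR's roughly 45% reduction is the largest relative gain of the three functions, compared with about 27% for Finance and about 2% for IT.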
This chapter is not intended to suggest that IT, HR, and Finance should be managed in the same way. The functions have different case types, workflows, and business risks. The value of the comparison is that it creates a shared view of how service is experienced across the enterprise, making it easier to see where friction is highest and where improvement is happening fastest.
§ 25 Conclusion
The gap between "the ticket is closed" and "the employee is helped" is where IT experience lives.
Across 1.77 million responses — including 1,663,470 from IT incidents and requests alone — five patterns are consistent enough to be treated as structural findings rather than observations.
01 · How, not what
How IT interacts with employees matters more than whether it resolves their issue.
02 · Reassignment
Every unnecessary handoff compounds cost. Reducing them is the highest-return IT investment.
03 · Context
Regional and workforce context is the primary lens. A +74 and a +90 can describe equally well-run IT.
04 · Lost time
Happiness tells you if employees are satisfied. Lost time tells you what dissatisfaction costs. Both are necessary.
05 · Continuously
The organizations improving fastest are those measuring continuously — not annually.
06 · Human
No automation eliminates the human factors of IT experience. It changes who or what is responsible for them.
§ 25 Read the full chapter text · Conclusion
IT organizations have spent decades measuring outputs: tickets resolved, SLAs met, and first-contact resolution rates achieved. These are useful numbers. But they describe what IT does, not what the experience of IT is for the people it serves.
The data in this report describes that experience layer, and it tells a story that operational metrics alone cannot. Tickets can be closed while employees are still confused. SLAs can be met while employees are waiting hours longer than necessary. First-contact resolution can be high while the people served still feel unsupported, uninformed, and invisible.
The gap between "the ticket is closed" and "the employee is helped" is where IT experience lives.
How IT interacts with employees matters more than whether it resolves their issue. Agent attitude, speed, and communication account for the overwhelming majority of positive experience factors, not the resolution outcome itself.
Every unnecessary reassignment has a compounding cost. Reducing reassignments is among the highest-return investments available to most IT teams.
Regional and workforce context is not background information. It is a primary lens through which experience data should be interpreted.
Perceived lost time is the business impact metric. Happiness tells you whether employees are satisfied. Perceived lost time tells you what friction may be costing them in their daily work. Both are necessary.
The organizations improving fastest are those measuring continuously. One annual survey does not reveal trends, does not capture the effect of changes, and does not provide the granularity needed for informed decisions.
25.1 A final word: AI and the human dimension
The IT industry will continue to invest heavily in AI-powered tools, automation, and the goal of the autonomous service desk. This report does not question the value of that direction. But the data here points consistently toward a conclusion that is easy to underweight in periods of rapid technology change: the experience of IT remains fundamentally human.
It is shaped by whether an explanation was clear, whether a process respected the employee's time, and whether support helped work move forward without unnecessary friction. Automation does not remove these factors. It changes who or what is responsible for them. The organizations most likely to navigate the AI transition well are the ones that continue to measure how these experience factors land, continuously, specifically, and with a genuine willingness to act on what they find.
Download the full report
Get every chart, table and perspective.
Covering every touchpoint, region, and industry cut — plus the full HappySignals and ISG commentary. Delivered as PDF.