The problem with CSAT isn’t the score. It’s what the score can’t tell you.
Sharon put it plainly: CSAT is “not bad,” it’s “just incomplete.”
A thumbs up or thumbs down might tell you that something happened. But it rarely tells you:
- what part of the experience failed,
- what the person needed in the moment,
- what should change next time.
Or as Sharon Aggarwal described it: “CSAT’s like a signal… but it’s a faint one.”
That faint signal becomes even less useful when the organization doesn’t act on it. Donna was blunt about their earlier reality: they collected the scores — and then did nothing with them. Not half the time. “All of the time.”
“Resolved” isn’t the same as “felt resolved”
One of the most useful parts of the conversation wasn’t about tooling. It was about what people remember.
Sharon described the toughest feedback they see: comments that boil down to I wasn’t listened to or I was ignored. Even when the technical issue is fixed, the experience can still collapse at the last hurdle if people aren’t kept informed.
This is where many operational metrics accidentally mislead us. They track process outputs (time, volume, closure). But they don’t capture the moment an employee decides:
- “IT doesn’t care,” or
- “I won’t bother raising this next time,” or
- “I’ll find a workaround.”
And that’s exactly why their experience work became deeply connected to communication.
“You said, we did”: turning feedback into a habit, not a headline
Sharon described a mantra they reinforce across the wider technology group: “You said, we did.”
But what’s interesting is how they use it.
Before, improvements were communicated generically: “we fixed the thing.” Now, they anchor updates in the human impact:
- what it did to someone’s day,
- how it affected productivity,
- what friction it removed.
That one shift changes the meaning of improvement. It stops being an IT announcement and becomes a proof point that the employee’s voice mattered.
It also builds trust — because it closes the loop.
Prioritization without politics: replacing “the loudest voice in the room”
The second big change Sharon called out was prioritization.
Historically, many IT orgs prioritize improvements based on assumptions, legacy thinking, or whoever has the biggest title.
Experience signals change that conversation. Instead of debating perceptions, teams can see where friction is actually happening — from the business perspective, not IT’s internal story.
Donna added a practical point that every service delivery leader will recognize: a single complaint reaching someone senior doesn’t always mean the whole environment is broken.
Experience data doesn’t invalidate the individual’s experience. It helps you respond at the right scale — “rationally and more constructively.”
That’s what mature governance looks like: empathy without overreaction.
What they built once they had real experience insights
Donna shared that once they had trustworthy signals, they didn’t just “track experience.” They built operating mechanisms around it.
Some of the practices they now run include:
- Mystery shopping activities to test the experience end-to-end
- Customer journey mapping based on hotspots in the data
- Happiness user groups to go deeper where needed
- Learning and training opportunities for customers and IT colleagues
- Continuous improvement registers that create a visible backlog of initiatives tied to experience insights
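None of these practices implies a particular tool, but the last one — a continuous improvement register — is, at its core, a backlog where every initiative carries the evidence that justified it. As a purely illustrative sketch (the class and field names here are hypothetical, not anything Sharon or Donna’s team described), the shape of such a register might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Initiative:
    """One improvement item, tied back to the experience signals behind it."""
    theme: str                      # e.g. "slow laptop boot times"
    evidence: list[str]             # survey comments / signals that justified it
    status: str = "proposed"        # proposed -> in_progress -> done

class ImprovementRegister:
    """A visible backlog of initiatives, each anchored in real feedback."""
    def __init__(self) -> None:
        self.items: list[Initiative] = []

    def add(self, theme: str, evidence: list[str]) -> None:
        # Refuse initiatives with no supporting evidence: the point of the
        # register is that every item traces back to what employees said.
        if not evidence:
            raise ValueError("an initiative needs at least one piece of evidence")
        self.items.append(Initiative(theme, list(evidence)))

    def open_items(self) -> list[Initiative]:
        return [i for i in self.items if i.status != "done"]
```

The design choice worth noting is the evidence check: a register like this only builds trust if every item on it can answer “who asked for this, and why?”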
This matters because it shows the difference between collecting feedback and building a system that can use feedback.
The unexpected upside: motivation, not a stick
Mark raised something many teams worry about: will experience metrics become “a stick to beat people with”?
Sharon flipped that. When individuals are called out positively — especially service desk and support teams — it becomes immediate recognition. People see their work land with a real person.
Even when something technically fails, employees often remember the human part: Did someone care? Did they keep me informed? Did they try?
That’s not fluffy. It’s operational. It tells you what behaviors create trust — and what breaks it.
A memorable lesson in governance: Copilot, a survey, and “Donald Trump”
Donna closed with a story that perfectly captures why experience programs need process.
A technology leader received one of their proactive hardware surveys and decided to respond using Microsoft Copilot — “in the style of Donald Trump.” The result was a page-long, over-the-top celebration of laptops: “the best laptops ever.”
It was funny — but also a moment of truth.
Donna had to go back and explain: you do know how visible this feedback is, right?
The punchline: the leader hadn’t realized the team did anything with the feedback. Learning that they ran a robust review-and-response loop actually delighted him.
That’s the hidden work of experience management: not just collecting input, but building the credibility that comes from acting on it.
Lessons for teams trying to go beyond CSAT
- Start. Don’t wait for perfect. Sharon recalled early hesitation — and the push that mattered: “be brave, just do it.” You can tweak surveys and wording as you learn.
- Treat feedback like a loop, not a scoreboard. If you ask, you have to act — or employees will stop believing it matters.
- Make communication part of the definition of “resolved.” The last mile is where trust is won or lost.
- Use experience to prioritize value, not noise. Move away from internal politics and toward visible friction points.
- Celebrate what works. Most feedback is positive when you actually listen — and that can change team energy fast.
What’s next: more signals, more learning
Both Sharon and Donna sounded energized about what comes next: extending surveys, bringing in new insights teams are asking for, and continuing to evolve the journey.
That’s the real “Beyond CSAT” story.
Not replacing a score with another score — but building a way of working where IT can confidently answer:
- What are people really experiencing?
- Where is productivity being lost?
- What should we fix first — and why?
Because great IT decisions don’t come from gut feelings. They come from seeing IT through employees’ eyes.
