Few people know what they like about work, and if they do, HR teams don’t know about it.

Consider The PX Test: A Snapshot of Organizational Health

Jessica Zwaan
9 min read · Sep 26, 2024

If you’re reading this you’ve probably contemplated, filled out, hated, complained about, ignored, or tried to make sense of an Employee Engagement Survey. I certainly have done all of the above. In fact, I plan on hating them again, right now.

But I don’t intend to just hate on them; I’m also going to hate a little bit on eNPS too. Balance. 🙏

I’ve been on the receiving end of engagement surveys for years and I’m sick of them: 1) as a leader and, probably most importantly, 2) as an employee (bonus: one most would consider “above-average engaged”).

Anyway, today while I was on a panel I was asked about things I’ve changed my mind about. I feel like I’m mostly pretty good at “strong opinions, loosely held” (but don’t we all think we’re good at things that are popular and good to like?). My answer was about my approach to data. Specifically, data that we’re trained to think is “good” for decisions.

Reviewing a single pie graph can rot the brain

Often, I find myself referring to or thinking about Benn Stancil’s blog. A few months ago he wrote about a former brand director at Nike who had published a LinkedIn article cataloguing the ways in which they had “screwed up their brand.” (Benn’s words, not mine, although — lol).

The article outlines some pretty sweeping criticism of several strategic missteps in how Nike handled their brand and marketing function. Benn comments that “the internet quickly collapsed the post into a single viral headline: Nike’s original sin was trying to be ‘data-driven.’” The post itself gives a scathing view of Nike’s approach of doing less fluffy stuff and more data stuff. It’s good; check it out.

These (data, logic, metrics) are things that, in People and in Ops, feel like…honestly, good ideas, guys. Existing in a world where SHRM and CIPD have been putting data literacy and HR analytics at the top of almost every HR trend list since the day I was born makes it hard to reflect on data-centricity exploding like a pie in the face and not feel a little defensive.

And god have I bandied this same message around for years, so bear with me here as I reflect on my own hypocrisy. Although, maybe it’s not hypocrisy and maybe it’s more nuance. At least, it’s nice to think that way.

Benn says it perfectly: “If it is so easy to criticize Nike’s strategy from a distance, why was it so hard to resist it from up close?”

In People Ops, we don’t have millions of rows of consumer data to transform into performance marketing campaigns, so we try other things to be logical and rational and results-based. Sometimes we succeed, and sometimes we miss the forest for the trees, and I think engagement surveys are one of those places.

It’s important to recognise that, yes, absolutely: under the right conditions, data helps us make better decisions. It helps us see a path to an otherwise invisible truth, argue our point of view more clearly, and track progress towards a goal. Wielding numbers has helped me in my career, and I have spent countless hours writing blogs, speaking on panels, and writing a freaking book, all encouraging data literacy. But rarely have I genuinely spoken about the downsides and gaps in our obsession with being more like Marketing. The implication I have fed into (and profited from) is that with data we’re smarter, more commercial, better leaders. The implication I’ve ignored is that the things we know through evidence we cannot plot are weaker and silly. That being smart enough to connect cause and effect isn’t enough without a bar graph. That all data is the same, no matter what we ask, or how we ask it. And that’s not actually true. Like, at all.

I’ve said it before and I’ll say it again: human beings and their motivations are extraordinarily messy and complicated. What I never said, and honestly should have, is that extrapolations from data are really just good reflections of a past we’ve already seen, applied to a future we don’t know. In short, they’re guesses, and there are ways to make good guesses and ways to make bad ones.

So where does that connect back to engagement surveys? Well. There are a few issues:

  • What and how often you ask
  • What it genuinely means to people
  • How you interpret it

These surveys suck, let’s be honest: What and how often you ask

“On a scale of one to ten, how likely are you to recommend Chaos Inc as a great place to work?” and other lies we tell ourselves and each other once a year or so.

A recent conversation with my CEO about eNPS ended with this banger: “There was zero correlation between eNPS and retention. We had our highest scores during our worst retention.”
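If you want to sanity-check that claim against your own numbers, here’s a minimal sketch; the figures and column names are entirely hypothetical, so swap in your own quarterly exports.

```python
import pandas as pd

# Made-up quarterly figures; replace with your own data.
quarters = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "enps": [42, 55, 61, 38],                    # survey score that quarter
    "retention_rate": [0.97, 0.91, 0.88, 0.95],  # share of staff retained
})

# Pearson correlation: a value near zero (or negative, as my CEO found)
# means the score tells you little about whether people actually stay.
print(quarters["enps"].corr(quarters["retention_rate"]))
```

Four quarters won’t get you statistical significance either, of course, but even this crude check is more honest than assuming the score means what we want it to mean.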

A 2014 survey found that 70% of employees do not respond to surveys and nearly 30% of them think they are useless. Cool. Cooooool.

Feeling so engaged right now.

Let’s be real: engagement surveys as they have been designed are skewed, simplistic snapshots that tell you how employees are feeling in the moment rather than providing any actionable insight. Most people won’t be honest anyway — they’re either trying to be nice or they’re too worried about how their feedback will be used. The incentives aren’t aligned.

We ask questions like “How effectively is my manager able to think strategically?” and “Are you proud to work here?” in a way that makes engagement scores effective only at measuring the perceptions of a blob of employees, not at addressing causation.

In a tech company where we have real-time data from every other part of the business, relying on a quarterly or annual survey to assess engagement is borderline irresponsible. It’s like trying to run a high-performance car engine while taking temperature readings once a year, while also asking for everyone’s favourite colour of race car paint (pink, btw). In my experience at least, real insight comes from looking at behavior against operational realities — analyzing communication patterns, productivity trends, and actual outcomes in real time against other data like performance and tenure — all while knowing the fundamentals of company success are present.
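To make that concrete, here’s a rough sketch of the kind of cross-referencing I mean, assuming you can export something like this from your HRIS and ops tooling. Every file and column name below is made up for illustration, not a real product’s schema.

```python
import pandas as pd

# Hypothetical exports: one row per employee per week.
ops = pd.read_csv("ops_metrics.csv")    # employee_id, week, output_per_week
people = pd.read_csv("people.csv")      # employee_id, tenure_months, performance_band

df = ops.merge(people, on="employee_id")
df["tenure_band"] = pd.cut(
    df["tenure_months"], bins=[0, 6, 24, 120], labels=["<6m", "6m-2y", "2y+"]
)

# Trend a behavioural signal against performance and tenure cohorts,
# week over week, instead of waiting for the annual survey.
trend = (
    df.groupby(["week", "performance_band", "tenure_band"], observed=True)
      ["output_per_week"].median()
      .unstack("week")
)
print(trend)
```

None of this replaces talking to people; it just means your picture updates at the same cadence as the business.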

What matters where it matters: what it genuinely means to people

Years ago I spoke to the founder of a really cool survey tool that tried to solve this problem: Nico Blier-Silvestri at Platypus. He gave a great analogy: if TfL wanted to know about my engagement with the Tube, they might ask me what I thought of it from 1 to 10. I might give it a 10, but I never catch it, because I ride a bike 99% of the time. A nurse who commutes on the night shift might answer a 4. Is my rating valued the same as hers? In an engagement survey, yes. What TfL needs to know is “does the Tube run on time?” and “how often do you need it to?” We need to ask and measure what is important to a cohort as an outcome. Or we need to agree from the top that, yeah, these things have to be important and present for everyone for us to be successful.

In almost every instance of engagement surveys, we’re treating all answers equally, as if every employee cares about every issue in the same way. We’re averaging out responses without any understanding of what actually matters to people — so we end up prioritizing the average opinions, not the most critical ones.

In reality, some issues are deal-breakers for some employees, while other employees couldn’t care less about them. But we’re not measuring importance or relevance, just averaging out blanket responses and pretending it’s data-driven insight. Without understanding the weight each person places on different aspects of their experience, or making weightings unnecessary, we’re flying blind. It’s lazy data science — and worse, it’s bad leadership — because we’re not asking the right questions, and we’re definitely not learning what drives retention or performance in any meaningful way.
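One way out is to ask, alongside each rating, how much that dimension actually matters to the respondent, and weight accordingly. Here’s a toy sketch of Nico’s Tube analogy; the scales and numbers are purely illustrative.

```python
import pandas as pd

# Each respondent rates the Tube 1-10 AND says how much it matters
# to them (0-1). A cyclist's breezy "10" shouldn't drown out a
# night-shift nurse's "4" when she relies on it every single day.
responses = pd.DataFrame({
    "respondent": ["cyclist", "nurse"],
    "score":      [10, 4],
    "importance": [0.05, 0.95],
})

naive = responses["score"].mean()
weighted = (responses["score"] * responses["importance"]).sum() \
           / responses["importance"].sum()

print(f"naive average: {naive:.1f}")           # 7.0: looks fine!
print(f"importance-weighted: {weighted:.1f}")  # 4.3: the real story
```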

Analysis anaphylactic shock

Finally, we’re treating these surveys like they provide statistically significant insights when, in reality, we’re working with a tiny, non-random, often unrepresentative sample. Just because we got a 60% response rate doesn’t mean those responses tell us anything meaningful. That’s like trying to predict product-market fit based on three customer reviews and calling it statistically valid. In any other part of the business, we’d get laughed out of the room for using such small, biased datasets to make decisions.

And don’t get me started on HR’s general lack of cross-referencing this data with actual performance or cohort information. We’re not comparing feedback from long-term performer cohorts, teams with low turnover, or those consistently hitting KPIs against the rest of the organization. We’re treating all feedback as if it’s equally valuable, whether it comes from a high performer driving revenue or someone who arrived last week. That’s not data-driven decision-making — that’s just looking at raw numbers without any real analysis. With engagement surveys, we’re content to look at averages and call it a day.
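As a sketch of what the bare minimum could look like, assuming you can join responses to your people data (again, every file and column name here is hypothetical):

```python
import pandas as pd

survey = pd.read_csv("survey_responses.csv")  # employee_id, engagement_score
people = pd.read_csv("people.csv")            # employee_id, performance_band

# Right-join so non-responders stay in the frame with a missing score.
df = survey.merge(people, on="employee_id", how="right")

# 1) Who actually answered? If responders skew one way, the "average"
#    describes a biased slice of the company, not the company.
print(df.groupby("performance_band")["engagement_score"]
        .apply(lambda s: s.notna().mean()))

# 2) Compare the cohorts that matter instead of one blended number.
print(df.groupby("performance_band")["engagement_score"]
        .agg(["count", "mean", "std"]))
```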

Ok, let’s be Solutions Orientated. The PX Test.

So let’s figure out whether your company is actually set up for success, by measuring a baseline of the exact things that have to exist, or whether you’re just another company mimicking the superficial trappings of high performance without understanding the fundamentals.

Here’s a test I suggest you ask monthly that doesn’t care about your ping pong tables or company pride. I’ve co-opted this from Rands in Repose, who co-opted it from the Joel Test. Inspiration, stealing, tomay-to, tomah-to.

Each of the questions below is something the company cares about and can take action on. It’s not about the feeling of engagement, it’s about the inputs to it. Then you can do the work of measuring those outcomes against cohorts and cross-sections. Will it be statistically significant? Maybe not, but it will tell you something about what is observable.

Scoring

Each of the questions above is a simple Yes or No. Each Yes is ‘+1’. Each No is ‘0’.

  • 10–12: Congrats. Now don’t get complacent.
  • 7–9: In high-performance companies, this is close to failing.
  • 0–6: Yikes.
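If you want to tally it programmatically (say, straight off a survey-tool export), here’s a minimal sketch of the scoring; the function name and input format are mine, not part of the test itself.

```python
# Minimal PX Test scorer: 12 yes/no answers, +1 per yes, 0 per no.
def px_score(answers: list[bool]) -> str:
    assert len(answers) == 12, "the PX Test has exactly 12 questions"
    score = sum(answers)
    if score >= 10:
        verdict = "Congrats. Now don't get complacent."
    elif score >= 7:
        verdict = "In high-performance companies, this is close to failing."
    else:
        verdict = "Yikes."
    return f"{score}/12: {verdict}"

print(px_score([True] * 9 + [False] * 3))  # 9/12: close to failing
```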

To quote Spolsky, “The truth is that most companies are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.”

Company success hinges on multiple factors. Take a stellar engineering team, for example: if they’re working on a product that doesn’t resonate with users, a strong result on this survey won’t mean the company will succeed. That said, when all else is equal, prioritizing these 12 essential elements will foster a disciplined team that consistently delivers exceptional results — and that leaves your leadership free to focus on strategy. Meaning: if things still go wrong, the people strategy can be isolated as less likely to blame.

What Now?

If you scored poorly, fix it. Actually talk to your people. Not in a survey, not in a town hall. Have real conversations. Realize that this stuff is never “done.” The moment you think you’ve figured it out is the moment you start messing it all up again.

Remember, building a functional company isn’t rocket science. It’s harder sometimes. At least rockets follow the laws of physics. People are messy, companies are chaotic, and the only constant is that something’s always broken. Your job is to keep fixing it, keep improving, and try not to add to the chaos.

Ok that’s all from me, folks. 👋

👉 Buy my book on Amazon! 👈
I talk plenty more about this way of working, and how to use product management methodologies day-to-day. I’ve been told it’s a good read, but I’m never quite sure.

Check out my LinkedIn
Check out the things I have done/do do
Follow me on twitter: @JessicaMayZwaan

Me and my cat, looking professional
