Part 1: Performance & Compensation: ‘Two Households Both Alike In Dignity’
We expect Performance and Compensation to go hand in hand and work together effortlessly. The problem is they’re chalk and cheese. Oil and water. Montagues and Capulets. 🪡🎈
Performance is messy. It’s chaotic, ever-changing, complex, personal, and often comes with a grain of subjectivity. It materialises in the form of OKRs, written feedback, behaviours, attitudes, and compliments from a range of folks across teams, levels, and roles.
Compensation — ehhhhh, not so much. The market forces behind salaries can be chaotic and unpredictable, just like any market, but for the most part numbers are numbers. Once you understand how your compensation structure should work, your framework should give you a somewhat repeatable figure for each inquiry.
Note: I’ve written extensively on how to build a compensation model which is more representative of market forces; built like a pricing model. If you are reading this and you don’t yet have a philosophy or compensation approach, I suggest you go back and take a read of these. It will help a lot, I promise.
If you have a compensation framework already, you are probably effective at applying it to new hires. Hiring someone in as a Software Engineer II, you plug a few credentials and values into a compensation calculator and get a number. Level 2, New York based, Fullstack Engineer. Easy.
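To make that concrete, here is a minimal sketch of what such a calculator might look like, in Python. Every band and multiplier below is invented for illustration; your real figures would come from your own market data and compensation model.

```python
# A hypothetical new-hire compensation calculator: level, location, and
# discipline go in, a salary figure comes out. All numbers are invented.
BASE_BAND_BY_LEVEL = {1: 90_000, 2: 110_000, 3: 135_000}  # illustrative USD bands
LOCATION_MULTIPLIER = {"New York": 1.00, "Oslo": 0.85, "Lisbon": 0.70}
DISCIPLINE_MULTIPLIER = {"Fullstack Engineer": 1.00, "Accountant": 0.90}

def new_hire_salary(level: int, location: str, discipline: str) -> float:
    """Repeatable by design: the same inputs always return the same figure."""
    return (
        BASE_BAND_BY_LEVEL[level]
        * LOCATION_MULTIPLIER[location]
        * DISCIPLINE_MULTIPLIER[discipline]
    )

# Level 2, New York based, Fullstack Engineer. Easy.
print(new_hire_salary(2, "New York", "Fullstack Engineer"))  # 110000.0
```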
But what happens when your team are already in place? What do you “plug in” then to Pave or Figures or your big Excel sheet… “Doing well”? “Exceeding expectations”? What does that mean in their pay cheque?
In order to retain great team members, you need to master the art and science of performance-based compensation. I’ve seen even established HR leaders struggle with the bridge between philosophy and practice, because they’ve not been in the room when a compensation strategy was developed. They understand the numbers and inputs, and they likely understand the output, but not always the connectors. It’s a hard problem: bridge the gap between qualitative feedback and a dollar figure. Not only bridge it, but do it logically and repeatably.
I love a three-parter, so here we go. I’m going to write three blog posts over the next few weeks, and together they will inform and guide you through the practices you must master to effectively connect Performance and Compensation.
Doing these things effectively will improve:
- Procedural and distributive fairness of your compensation philosophy,
- Company and individual performance, and
- Retention of the employees most aligned with your culture and ways of working.
The three parts will cover:
Part 1: Measuring Performance
Part 2: Calibrating Performance
Part 3: Forecasting, budgeting, and applying to compensation
Part 1: Measuring Performance
“If you can’t measure it, you can’t improve it”
Often attributed to Peter Drucker, this line is actually a misquote. What W. Edwards Deming wrote is closer to the opposite: “It is wrong to suppose that if you can’t measure it, you can’t manage it — a costly myth.” The point being: data is important for making improvements, but it is not the only factor in managing a system.
When it comes to people operations, it is important to keep a holistic mindset. Human beings cannot be reduced to data points, averages, or clicks. Data is a useful tool, but averages and numbers should not be the only factor in decision-making; they can be hard to agree on, and in some instances they create imbalance and unfairness in your team. Yet in assessing performance this is the delicate line you must walk: taking someone’s holistic role performance and turning it into something consistent, measurable, and reportable.
Throughout my career I’ve ebbed and flowed between ‘siding with’ different measurements for performance: 9-box grids, scales, outcome metrics, decimal ratings. Over the years I’ve come to realise that I don’t really care which one is used. The only thing I care about is that whatever you pick moves the behaviour of your team towards your aspirational cultural strategy. This means I encourage you to build something that actively guides and moves people, not something purely role-competency based.
An easy way to do this is to use a two-dimensional grid not dissimilar to the 9-box grid: one axis for role performance and competencies (coding, accounting) and the other for a strategically crucial behaviour your company wants to encourage and measure. At Whereby we use active, autonomous growth, because we’re a fully distributed team where growing independently, consistently, and effectively is crucial to your success. In other companies you may use values behaviours, KPI output, or even something like entrepreneurial skills.
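As a sketch of the idea (the axis labels below are placeholders I’ve invented, not Whereby’s actual snapshot wording):

```python
# A sketch of a two-axis rating: one axis for role performance and
# competencies, one for the strategically crucial behaviour (active,
# autonomous growth in Whereby's case). Labels are illustrative only.
from dataclasses import dataclass

PERFORMANCE_AXIS = ["Below expectations", "Meeting expectations", "Exceeding expectations"]
GROWTH_AXIS = ["Stalled", "Stable", "Active"]

@dataclass
class SnapshotRating:
    performance: str  # where someone sits on the role-competency axis
    growth: str       # where they sit on the behaviour axis

    def cell(self) -> tuple[int, int]:
        """Their position in the grid, read like a 9-box."""
        return (PERFORMANCE_AXIS.index(self.performance),
                GROWTH_AXIS.index(self.growth))

print(SnapshotRating("Exceeding expectations", "Stable").cell())  # (2, 1)
```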
But how do you measure growth?
Good question. The same way you measure competencies: through clear and consistent documentation, and a shared understanding of which behaviours are, and which aren’t, examples of a performant team member.
Active Growth in our framework is crucial. Succeeding at it can mean progression, compensation changes, and promotions to new roles within our team. We take that responsibility seriously, as should you. Many teams build complex competency frameworks for their employees (too complex if you ask me, but that’s a blog for another day), but fail to effectively define the behaviours they want to encourage within their team.
At Whereby, the kind of autonomous, active growth we expect looks like a team member taking effective, independent, and consistent steps towards learning more about their role. This means striving to add required, relevant skills and experience to their ‘skills toolbox’. This can be evidenced through the behaviours we list in our manager & team guide. The more senior you are, the more high-impact we expect your behaviours to be. Likewise, if you’re in a managerial role we expect behaviours specific to relevant areas such as delegation or mentoring. We give some broad numbers to aim for, and we expect managers to back up their rating with evidence (which we will talk about more in Part 2).
The reason I quite like the two-axis approach, rather than, for example, a single measurement from 1 to 5 (or Does Not Meet/Exceeds All), is that the two measurements above work together to be greater than the sum of their parts. Together the two ratings help to guide behaviour somewhere other than just “do better” — it’s a simple, visual representation of performance that helps managers and team members alike to calibrate on a 1:1 basis easily.
How we built our snapshot.
We built our framework on the principles of product management and user research, taking a problem statement and working from first principles. Our performance snapshot is the outcome of that rigour: a tool our managers use to quickly identify how their direct reports are operating at any given time. It is also the tool we use to quantify and measure performance on a qualitative basis, while being designed to be user-friendly and rooted in our values.
To develop the performance snapshot, our research approach was based on Erika Hall’s book, Just Enough Research, coupled with excellent advice from our internal user researchers at Whereby. We reviewed literature on different theories of work, including Herzberg’s Two Factor Theory covering motivators and demotivators of work, and researched how performance reviews are carried out in other organizations. You can read more about our process of user testing and iteration here.
In order to assess performance in a qualitative and repeatable way, you also need something like our performance snapshot. Whatever you build, ensure your tool can interface with both the quantitative and the qualitative world; without that, you will not be able to connect it to your compensation structure.
At Whereby our performance snapshot tool has three “faces” — one is employee facing, one is manager facing, and one faces our system. The team-facing one is pictured above; it is simple to understand and can be used regularly enough to provide direction, but is not so specific that someone will take a number or a rating to heart. We’ve built it very intentionally to do this.
For managers, we have another version of the snapshot to help guide them towards more effective managerial action. The snapshot, and their team’s location within it, gives them some direction on how to coach or work with their team. It provides a first step before getting the People team involved, and gives some ideas on what guides or tools they may want to search for in our management documentation.
The necessary, but ugly, face of qualitative performance
The face I want to focus a bit on here is our beautiful snapshot’s ugly third face. It’s ugly because I built it in Google Sheets and no one in the design team helped me, but also because it’s literally just there to speak to compensation changes. It’s not pretty. It’s not friendly. It’s not easy to understand. It’s not helpful to people.
But what it does is provide a number. “Overload” becomes “2”. “Misfiring” is a “-1”. These numbers don’t actually mean anything alone; they’re just a way to turn a year of feedback, performance, values, ideas, compliments, critique, and chaos into a simple input: -4 to 4.
If your manager has rated you as over-performing in your role, and stable in your growth, they will have documented that with a binder of valuable evidence and context. Your compensation methodology doesn’t care; it just wants an input for the calculator. So, your year of hard work is a 2. It means nothing to you, just like a role code in Pave means nothing to you. It’s just a way to tag a data point.
Our team never interfaces with this, although they know it exists. The reason is that these numbers don’t actually mean anything on their own. In any one year the compensation changes for a 1, 2, or 3 could be the same, and almost always every number under 0 receives the same output. It’s purely the way we take all of the feedback and performance data, put it into a grinder, mince it up, and get the sausage out the other side.
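As a rough sketch of that output stage: the only two mappings below that come from this post are “Overload” → 2 and “Misfiring” → -1; every other label, and the exact grouping of scores into compensation outcomes, is invented for illustration.

```python
# The "ugly third face" as a sketch: snapshot outcomes collapse into a
# single integer from -4 to 4 for the compensation calculator.
SNAPSHOT_TO_SCORE = {
    "Overload": 2,    # from the post
    "Misfiring": -1,  # from the post
    # ...the remaining snapshot labels would fill out the rest of -4..4
}

def compensation_outcome(score: int) -> str:
    """Scores only matter in coarse groups: here, everything under 0
    lands on the same outcome, as described above. The outcome names
    themselves are hypothetical."""
    if score < 0:
        return "no performance adjustment"
    if score == 0:
        return "standard adjustment"
    return "performance adjustment"

print(compensation_outcome(SNAPSHOT_TO_SCORE["Overload"]))  # performance adjustment
```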
The grinder, however, is where things get interesting, so that’s what we’ll talk about in Part 2: Calibration. I will talk through the complex process between evidence and outcome: how to run a calibration, why you should have them, and what their aims are. Sure, we have a rating… but how do we know everyone is doing it fairly?
Part 3 will take the rating above and show you how to turn it into a compensation ‘algorithm’ which can produce repeatable, fair compensation changes.
Sounds good? I agree. Part 2 is here.
👉 Buy my book on Amazon! 👈
I talk plenty more about this way of working, and how to use product management methodologies day-to-day, in the book. I’ve been told it’s a good read, but I’m never quite sure.
Check out my LinkedIn
Check out the things I have done/do do
Follow me on Twitter: @JessicaMayZwaan