I’ve noticed the “employee vs. management” mindset pretty much everywhere I’ve worked, regardless of industry. I think one main issue lies with the word “peer”. Management seems to be placed on a sort of pedestal that insulates it from the review process. Realistically, management and staff work for each other in a symbiotic relationship. Given that, it makes sense for managers to receive feedback from their staff. How else would a manager truly understand their own performance? Consider this - would you rate the performance of an MVC-based app solely on its views?
Imagine one of those space warfare games where you have to assemble a fleet, and you have a certain number of points to spend. Do you want that Rigellian Battlecruiser at 1000 points? Or do you only have the budget for a handful of Denebian escorts at 50 points each? Hold that thought...
Your peers should have a reasonably good idea of how valuable you actually are. They review your code, they see you help fix problems (or fail to), they see you volunteer to fix unglamorous bugs (or decline to). There should be some way of capturing this. Written peer feedback is mostly meaningless, as you pointed out, but I wonder if there is some way of deriving a numerical score for a person, based on the (anonymous) marks that their peers give them, weighted by some metric of how closely they collaborate. Your final total reflects a weighted average of how much the people you work with appreciate what you are doing. This total affects promotion and compensation, but it also determines your price on the internal market. A manager looking for new team members can get you by paying a cost based on your current rating. (This means that people who are overlooked by the system, and rated lower than they should be, may turn out to be a potential bargain for someone else.)
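To make that concrete, here's a minimal sketch of what such a score might look like. Everything in it is an assumption on my part - the 1-5 scale, the weights, the names - not a description of any real system:

```python
# Hypothetical sketch: a peer score as a collaboration-weighted average.
# The 1-5 rating scale and the weighting scheme are assumptions for
# illustration, not any real company's process.

def peer_score(ratings: dict[str, float], collab_weight: dict[str, float]) -> float:
    """Average the anonymous marks, weighting each reviewer by how
    closely they collaborate with the person being rated."""
    total_weight = sum(collab_weight.get(peer, 0.0) for peer in ratings)
    if total_weight == 0:
        raise ValueError("no reviewers with nonzero collaboration weight")
    weighted_sum = sum(mark * collab_weight.get(peer, 0.0)
                       for peer, mark in ratings.items())
    return weighted_sum / total_weight

# Two close collaborators dominate; a distant reviewer barely moves the score.
ratings = {"lead_a": 5.0, "lead_b": 4.5, "distant_peer": 3.0}
weights = {"lead_a": 1.0, "lead_b": 0.9, "distant_peer": 0.1}
print(peer_score(ratings, weights))  # ~4.7 - the close collaborators dominate
```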
I can think of all kinds of objections to this, of course. It smells a bit like stack ranking. It's probably open to gaming and manipulation. And people may not like being reduced to a number (although that's what compensation is, really). But at the same time, I think we need to solve the problem of the kind of peer reviews you describe, where everyone is just basically nice. There ought to be some kind of cost to the ratings that you give out. Overrating or underrating someone should have some kind of consequence.
Interesting. I worry that the moment you make it anonymous and reduce it to numbers, it just becomes a bit inhumane. You don't know why you're getting the rating, you don't know who's doing it to you, you can't provide context, appeal, or bargain. It just feels rough when your career is on the line.
There's also an interesting sub-problem, also present in free-form feedback systems: how do you decide who to ask for feedback to begin with? If you let the person choose, many will strategize: a safe mix of friendly reviewers, plus one critic thrown in so that the results look organic. And if you pick algorithmically, you risk asking the wrong people. Or simply the wrong number of them... perhaps the person is working closely with two tech leads who give glowing reviews, but you also ask eight folks who don't work with this person and give ambivalent feedback, unfairly dragging down the average?
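A quick back-of-the-envelope version of that failure mode, with made-up numbers:

```python
# Made-up numbers: two close collaborators rate 5/5, eight distant
# reviewers shrug and give 3/5. A naive mean buries the strong signal.
marks = [5, 5] + [3] * 8
print(sum(marks) / len(marks))  # 3.4 - reads as "mediocre"
```

A collaboration-weighted average like the sketch above would at least keep the tech leads' signal from drowning, but it doesn't solve the problem of picking the reviewers in the first place.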
On some level, we might be trying to engineer a solution to a non-engineering problem. We started with the assertion that the manager-centric process has flaws. Are there incremental improvements that don't involve switching to a completely new paradigm?
Peer feedback feels cool, but committee- / consensus-based approaches are not a great way to run a business. We don't ask employees to vote on every product change; we defer to individuals who (hopefully) know their space and know what they're doing. Not sure if performance management needs to be different.
The thought that led to my suggestion was this: If I were asked to tackle a brand new engineering project, and told I could create a small new team from within my current org, I have a pretty good idea of who I'd want to pick. If several of us were forming new teams at the same time, and we took turns choosing people, I think I'd know the order in which I'd pick them. I might even end up haggling with the new team captains over who got whom ;-)
In other words, I do have a reasonable idea of how effective my peers are. In theory, this should translate into the reviews that I write for them, but in practice, as you pointed out, everyone just ends up being blandly nice. But in the scenario I outlined, my ratings of my peers would actually matter, and I'm going to choose the ones that I think are more effective over the ones I think are less so.
I'm trying to think of a peer rating system that would lead me (or force me) to give the same honest ratings as the ones I would give if the outcome actually mattered. I'm not sure of the best way to do that, though.
Even if I could, there are still problems. My ratings could be wrong, or biased. But so could my manager's, and spreading the ratings across one's peers reduces the effects of managerial bias (both unconscious and conscious). (As a side note, I'm not sure I agree with you about the flaws of consensus-based approaches. Dictatorial approaches don't have a great historical track record ;-) And even if certain decisions need to be made by a few people, I'm not sure that performance ratings are necessarily one of them.)
But a more serious issue is that it reeks of stack ranking, and we know how bad that is. And in some sense, the idea of individual performance is antithetical to teamwork. Maybe we need a system in which the entire team is given a rating, and they have to agree between themselves as to how it gets divided ;-)
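Half in jest, but the mechanics are simple enough to sketch - the pool size, names, and shares below are entirely invented:

```python
# Hypothetical: one rating for the whole team sets a shared bonus pool,
# and the team itself agrees on the split. Shares are normalized so the
# payouts always sum to the pool.
def split_pool(pool: float, agreed_shares: dict[str, float]) -> dict[str, float]:
    total = sum(agreed_shares.values())
    return {name: pool * share / total for name, share in agreed_shares.items()}

# The team decided one member carried the quarter.
print(split_pool(10_000, {"jordan": 3, "pippen": 2, "rodman": 2, "kerr": 1}))
# {'jordan': 3750.0, 'pippen': 2500.0, 'rodman': 2500.0, 'kerr': 1250.0}
```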
I've always fancied the idea of each team being treated like a mini-startup, responsible for generating income for the company. Leadership is responsible for clearly articulating what counts as value (revenue) and how to measure it for the team - what makes this hard is that leadership will have to value intangibles like engagement as well, but no one ever said leadership was easy. The team is responsible for maximizing profit (revenue - expenses). Given this, it's in the team's best interest to have the best possible people on it. Let the teams decide who stays and who goes, and who the top performers are - they'll want to reward their Michael Jordans in order to keep them on the team, since it benefits them all.
The problem is, of course, that it's hard to isolate one team's contribution. Team A writes the infrastructure that Team B uses to deliver a feature. Which team did the most?
I think we've all agreed, and to some extent know, what we want: teams should be rewarded for working well together and delivering great work. The trouble lies in how to measure it - especially when the metric you use to measure it becomes the goal ;-)
Talking about the anxiety aspect of performance reviews, I found this paper quite interesting: https://www.researchgate.net/profile/Ofer-Sharone-2/publication/292499150_Engineering_overwork_Bell-curve_management_at_a_high-tech_firm/links/5cd039ef458515712e95ab3b/Engineering-overwork-Bell-curve-management-at-a-high-tech-firm.pdf
It explains the long work hours in the case study as a result of performance reviews and other management practices that create anxiety about professional competence. Even if you don’t agree with all of the conclusions, I found it fascinating to read a sociologist’s analysis of something I’ve lived through.
This is a problem I could never solve. I've been at large (>1,000-employee) tech companies for 1.5 years total, out of a 13-year career so far - all the rest at startups with fewer than 15 people (some of the time as founder or CTO).
In those 1.5 years I went through 3 performance reviews (roughly every 6 months). My reviews were great each time - great feedback, high satisfaction from managers, and willingness to promote me.
Yet I hated every moment of it. Filling in the tedious forms, playing the politics of selling my achievements, showing how my KPI performance aligns with OKRs - WTF, just let me work and leave me alone.
I hated it so much that I quit big corp to go back to small startups.
There, I can just do the work without the bullshit.
And in small startups you have visibility into the whole team; you know whether someone is doing good work or not. No "review" needed. This environment is fun for me - no bullshit, only quality work.
Performance management gets hard to take seriously after a while, especially as a high performer; the reviews just felt like needless waste.
It's easier to just get things done and go home, and if someone's not performing, just let them know, so we can skip the pomp and circumstance.
A good leader will appreciate the merits of their employees. However, this is often not the case. Some leaders are too preoccupied with their own work to conduct fair and objective evaluations. They base their judgments on their personal feelings towards the employees.
I once had a leader who always favored someone; at one point, I was that favored person. It did not matter what we did, the outcome was always predictable. The leader’s preference determined the results.
I have not encountered any effective performance review methods so far. Developers should consider changing their jobs or companies, or starting their own ventures in the meantime.
Yeah, a common defense of the Google model is "but doesn't that help you if you have a terrible manager?" And the rebuttal to that is simple: even in a fantasy world where the perf process isn't manager-dependent, you'd still have a rotten time dealing with that manager day-to-day.
On the flip side, if you have a good and effective manager who values your work, it's smooth sailing pretty much no matter how the process is designed.
Paraphrasing Camus: “The employee should be happy knowing their fate is determined arbitrarily”
I work at a place where, after everything is said and done, your chances at promotion are based on: tenure, how nice a person you are, and your partner’s (the most senior manager’s) largesse at the time. Once this truth is understood, the performance management process throughout the year can simply be disregarded.
I sometimes wonder if explicitly adding randomness would help. Pick twice as many candidates in the best way you can, then choose half at random. Anyone who loses can blame it on bad luck, and the winners know that they are lucky to be there (so don't get too cocky).
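Something like this, say - the merit scores and slot count are invented for illustration:

```python
import random

def pick_with_luck(merit: dict[str, float], slots: int) -> list[str]:
    """Shortlist the top 2*slots candidates by merit, then fill the
    slots at random. Anyone cut in the second stage drew a bad lot,
    and anyone chosen knows luck played a part."""
    shortlist = sorted(merit, key=merit.get, reverse=True)[:2 * slots]
    return random.sample(shortlist, slots)

merit = {"ana": 9.1, "ben": 8.7, "cy": 8.5, "dee": 8.4, "eli": 6.0, "fay": 5.2}
print(pick_with_luck(merit, slots=2))  # e.g. ['ben', 'dee']
```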
Maybe? It's a tad inhumane to turn it into a lottery, but part of the problem is that large-scale performance management is already probabilistic. As with hiring, it's never perfect; you're happy if it's working right 80-90% of the time across the company. Sometimes, the most honest answer is "you had bad luck" - you picked a reviewer who dropped the ball, there was someone on the committee who didn't like you, there were too many good candidates this time around, whatever.