Engineering Performance Reviews

Designing effective reviews for product development teams is a particularly challenging task. It entails providing actionable feedback to specialized roles, such as software engineers, UX/UI designers and product managers. Too often companies rely on the same generic review process for all employees. This is generally ineffective, as people in technical roles require in-depth feedback on the technical skills they need to succeed in their positions.

Over years of managing software development teams, we have experimented with and evolved several technical review processes. Below are some of the ideas and strategies we have found most effective.

Technical Performance Reviews

Avoid generic, unstructured reviews.

In many companies, the review process is centrally controlled, usually by the HR department. For the sake of simplicity, HR departments often apply the same generic review process to the entire organization, collecting unstructured feedback with a small number of open-ended prompts such as ‘List Jane’s strengths’ or, even worse, ‘How do you think Jane did this quarter?’. These types of reviews are ineffective at evaluating technical roles for a few reasons:

  • Open-ended questions are difficult for reviewers to complete. They don’t set clear expectations or guidelines for what information to provide. They effectively ask the reviewer to come up with a definition of what constitutes expected behavior and performance.
  • The lack of guidance they provide means response quality will vary wildly. Some reviewers will provide lots of detailed feedback, while others will provide only the bare minimum.
  • Worst of all, this input is very difficult to process and turn into effective feedback at the end of the review. To illustrate the point, imagine a manager with 5 reports, each reviewed by 3 peers, 1 manager and themselves. That’s a total of 25 blocks of unstructured text someone has to process at the end of the review. Take that one level up the hierarchy, to the manager’s manager, and that could be 125 blocks of unstructured text someone needs to process to really understand how a team is doing (see the quick calculation below).
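
To make the scaling concrete, here is the same arithmetic as a few lines of Python; the figures are just the illustrative numbers from the example above:

```python
# Back-of-the-envelope: how much unstructured text a manager must digest.
reports_per_manager = 5    # direct reports per manager (illustrative)
reviewers_per_person = 5   # 3 peers + 1 manager + a self-review

# A first-line manager reads one block of free text per reviewer, per report.
blocks_first_line = reports_per_manager * reviewers_per_person
print(blocks_first_line)   # 25

# One level up, a manager of 5 such managers could face all of it.
blocks_second_line = reports_per_manager * blocks_first_line
print(blocks_second_line)  # 125
```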

Target review content.

In my experience, the best feedback is generated by reviews that really focus in on the skills people need to succeed in their particular position and circumstances. These skills vary greatly by role, seniority and organization. They may also evolve over time within a specific company, as the requirements for success change.

The more you can target review content for each subject’s particular role and situation, the more effective the process will be.

For software engineers, that may include reviewing specific skills like problem solving in code, code structure and technical definition, whereas for product managers you might review skills such as product design and the ability to communicate a compelling product vision. Structuring review content and tailoring it to the review subject makes it far easier for reviewers to respond to questions: they know exactly which aspects and skills to evaluate. It also means that extracting meaningful feedback at the end of the review process is quick and easy.
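
One practical way to do this is to keep a simple per-role skills matrix that drives which questions each reviewer sees. The sketch below is purely illustrative; the role names, skill lists and prompt wording are example placeholders, not a prescribed taxonomy:

```python
# Illustrative skills matrix: each role maps to the skills its reviews focus on.
# Role names and skill lists are examples, not a prescribed taxonomy.
SKILLS_BY_ROLE = {
    "software_engineer": [
        "problem solving in code",
        "code structure",
        "technical definition",
    ],
    "product_manager": [
        "product design",
        "communicating a compelling product vision",
    ],
}

def questions_for(role: str) -> list[str]:
    """Build the skill-specific prompts a reviewer will see for a given role."""
    return [f"How consistently does this person demonstrate {skill}?"
            for skill in SKILLS_BY_ROLE.get(role, [])]

print(questions_for("software_engineer"))
```

Keeping the matrix in one place also makes it easy to evolve the skill list as the requirements for success change.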

Mix multiple choice and free-text inputs.

Multiple choice questions are great for providing structure and focus in a review. That said, they suffer from a couple of problems. The first is that they don’t allow people to provide additional feedback beyond the structured response. Many people like to qualify their responses or provide ideas for development. This content is particularly useful when providing feedback at the end of a review.

Another issue with multiple choice questions is that they can be too easy to complete. Reviewers may speed through a review without giving enough thought to their responses. To avoid these issues, we’ve found it’s best to intersperse sets of structured (multiple-choice) questions with some unstructured open-text inputs. We generally structure review question sets with 3-5 skill-specific multiple choice questions followed by 1 optional free-text input, with a prompt like ‘Please provide additional comments as rationale for your answers above.’
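
As a rough sketch of that structure, a question set might be modelled along these lines; the field names, answer options and example prompts are hypothetical, not taken from any particular tool:

```python
from dataclasses import dataclass, field

# Hypothetical shape of one skill-specific question set:
# a handful of multiple-choice questions plus an optional free-text prompt.
@dataclass
class MultipleChoiceQuestion:
    prompt: str
    options: list[str] = field(default_factory=lambda: [
        "Rarely", "Sometimes", "Often", "Consistently",
    ])

@dataclass
class QuestionSet:
    skill: str
    choices: list[MultipleChoiceQuestion]  # typically 3-5 per skill
    free_text_prompt: str = (
        "Please provide additional comments as rationale for your answers above."
    )
    free_text_optional: bool = True

code_structure = QuestionSet(
    skill="code structure",
    choices=[
        MultipleChoiceQuestion("Writes code that is easy for others to modify."),
        MultipleChoiceQuestion("Breaks work into well-scoped, reviewable changes."),
        MultipleChoiceQuestion("Keeps modules and interfaces clearly separated."),
    ],
)
```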

Incidentally, we’ve found the amount of additional optional content provided by a reviewer to be a good indicator of how strongly they feel about their feedback. If someone provides particularly strong feedback in multiple choice responses but doesn’t bother to provide any rationale, it may well indicate that they don’t feel that confident in their analysis, whereas if they have taken the time to write a more detailed response, it’s likely they feel far more strongly about the issue.
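
If you want to surface that signal systematically, one crude (and entirely optional) heuristic is to flag strong multiple-choice scores that arrive with little or no written rationale. The word-count threshold below is an arbitrary example, not a calibrated value:

```python
def rationale_strength(free_text: str, min_words: int = 25) -> str:
    """Crude heuristic: treat longer optional rationale as a sign of conviction.

    The 25-word threshold is an arbitrary example, not a calibrated value.
    """
    words = len(free_text.split())
    if words == 0:
        return "no rationale - treat strong scores with caution"
    if words < min_words:
        return "brief rationale"
    return "detailed rationale - feedback likely strongly held"

print(rationale_strength(""))
print(rationale_strength("Great quarter: shipped the billing migration early "
                         "and unblocked two other teams along the way, though "
                         "test coverage on the new service still needs work."))
```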

360 Reviews are better than manager reviews.

This is an obvious point but worth mentioning anyway: Reviews that include peer feedback are generally better than those with only direct manager/report feedback. They take a bit more time, but the wider perspective, and the additional weight people give to the resulting feedback, are significant benefits. To ensure a wide range of feedback, it’s important to consider the following when selecting peers:

  • Take care to choose peers who have worked directly with the evaluee. People don’t appreciate being reviewed by peers who have little context, and they are likely to reject the feedback as a result.
  • Take the evaluee’s input on preferred peers, but select from as wide a group as possible to ensure a mix of perspectives.
  • If possible, select peers with varied levels of experience and seniority.
  • If possible, include at least one peer from a parallel role (e.g., an engineer reviewing a designer) for a different perspective.
  • Don’t overload any individual peer; try to balance reviews out evenly. We aim for a maximum of 3 peer reviews per person (a small balancing sketch follows this list).
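
Here is a minimal sketch of the load-balancing part, assuming you already track how many reviews each peer has been assigned. The names, cap and selection order are illustrative only; filtering for real working context, mixed seniority and parallel roles is left to whoever builds the candidate list:

```python
from collections import Counter

MAX_REVIEWS_PER_PEER = 3  # the cap we aim for; adjust to your team size

def assign_peers(candidates: list[str], load: Counter, needed: int = 3) -> list[str]:
    """Pick peer reviewers, preferring the least-loaded people under the cap.

    `candidates` should already be limited to peers with real working context
    with the evaluee; this function only balances the review load.
    """
    eligible = [p for p in candidates if load[p] < MAX_REVIEWS_PER_PEER]
    eligible.sort(key=lambda p: load[p])  # least-loaded first
    chosen = eligible[:needed]
    load.update(chosen)                   # record the new assignments
    return chosen

load = Counter({"alice": 2, "bob": 0, "carol": 1, "dan": 3})
print(assign_peers(["alice", "bob", "carol", "dan"], load))  # ['bob', 'carol', 'alice']
```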

Rotate 360-degree and manager reviews.

360-degree reviews include many more participants and, as a result, are more time-consuming. To minimize this issue while still providing regular feedback, we like alternating 360-degree and manager reviews at 3-month intervals. For example (a small scheduling sketch follows the list):

  • January - Full 360-degree review, including 3 peer reviewers per evaluee
  • April - Smaller review, with managers and reports reviewing each other
  • July - Another full 360-degree review
  • October - Smaller review, with managers and reports reviewing each other
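
If you want to lay that cadence out programmatically, a small helper like the one below will do; the quarterly interval and alternating formats mirror the example above, and everything else is an assumption:

```python
import calendar

def review_calendar(start_month: int = 1, cycles: int = 4, interval_months: int = 3):
    """Alternate full 360-degree reviews with lighter manager/report reviews.

    Starts a full 360 in `start_month` and schedules a review every
    `interval_months` thereafter, alternating the two formats.
    """
    schedule = []
    for i in range(cycles):
        month = (start_month - 1 + i * interval_months) % 12 + 1
        kind = ("360-degree review (3 peer reviewers per evaluee)"
                if i % 2 == 0 else "manager/report review")
        schedule.append((calendar.month_name[month], kind))
    return schedule

for month, kind in review_calendar():
    print(f"{month}: {kind}")
# January: 360-degree review (3 peer reviewers per evaluee)
# April: manager/report review
# July: 360-degree review (3 peer reviewers per evaluee)
# October: manager/report review
```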

Incorporate a review of goals and achievements.

Technical skills are notoriously difficult to evaluate objectively. The perception that a review has not been fair generally results in people not taking feedback seriously and feeling demotivated by the process.

To ensure that reviews are as objective as possible, it's important to take a step back and review the actual work someone has completed over time. Ask questions like ‘What were the person's real achievements relative to the goals they set over the period?’ and ‘Were they well equipped and in a position to effectively deliver on their goals?’

This goals review step can be incorporated into the core review process, or it can be a separate process altogether. Managing technical goals and OKRs is a lengthy topic that we'll cover in more detail at a later stage. For now, I'll just say that it is an essential part of reviewing any product development team.

Conclusion

360 reviews are a good way to provide people with feedback on how they are doing but they must be well designed to do so effectively. Providing meaningful feedback to people in technical roles is particularly challenging. It’s important to tailor reviews to the roles and skills relevant to success in your company. Doing so will ensure that reviews run smoothly and deliver actionable feedback.
