Cutting through managerial biases using data

This article first appeared in DNA India on 15th June, 2017.

Laszlo Bock, the former SVP of People Operations at Google, wrote a wonderful book called “Work Rules!” which is a must-read for anyone trying to understand how to make their people processes more data-driven. One of my favourite examples in the book is that Google not only structured its interviews to collect data on the ratings interviewers gave to job applicants, but also used this data to rank the interviewers themselves and to train those whose ratings led to indifferent hires.

Here is another scenario. Let’s assume an organisation wants to move out the bottom 10% of its performers based on their ratings. What is the best way to do this? Typically, employees are sorted by their ratings and the bottom 10% are let go. But is this the correct way to do it? We all know that managers vary significantly in how they rate their employees. Some have extremely high expectations of their team members and are particularly stingy with higher ratings. Others are extra lenient, so their teams love them even as poor performers get rated as competent.
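To make the concern concrete, here is a toy simulation in Python (the team sizes, rating numbers and manager labels are all hypothetical, purely for illustration): two managers lead teams with the same underlying skill distribution, but one rates around 2 on a five-point scale while the other rates around 4. A naive sort-and-cull then removes only the tough manager’s people.

```python
import random

random.seed(42)

# Hypothetical example: 20 employees drawn from the same underlying
# skill distribution, split between a stingy and a lenient manager.
true_skill = [random.gauss(0, 1) for _ in range(20)]

# The stingy manager rates around 2.0, the lenient one around 4.0,
# even though both teams are equally good on average.
stingy_team = [("stingy", 2.0 + 0.3 * s) for s in true_skill[:10]]
lenient_team = [("lenient", 4.0 + 0.3 * s) for s in true_skill[10:]]

everyone = stingy_team + lenient_team
everyone.sort(key=lambda pair: pair[1])  # naive sort on raw ratings

bottom_10_percent = everyone[:2]  # cull 10% of 20 employees

# Every culled employee reports to the stingy manager, purely because
# of that manager's rating style, not actual performance.
for manager, rating in bottom_10_percent:
    print(manager, round(rating, 2))
```

The cull is driven entirely by which manager an employee happened to get, which is exactly the unfairness the rest of this article is about.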

Given such a scenario, would it be right to do a simple sort of all the ratings and cull the bottom 10%? Isn’t that unfair to employees with a tough manager? While annual reviews do include a “normalisation” step, it is often quite unscientific and based more on mutual back-scratching between managers. You know: “I’ll accept a slightly lower rating for this team member if you agree to boost the rating of my favourite chap.” Is there a valid, data-driven way to normalise such ratings?

Fortunately, there is! Multilevel analysis, the statistical analysis of data with a grouped structure (here, employees nested under managers), focuses on three main ideas: a) within-group agreement, b) reliability and c) non-independence. Within-group agreement deals with the consistency of ratings within a group and can be used, for example, to ensure consistency in hiring decisions. Reliability deals with how consistent individual raters are across employees or candidates. For example, one manager might consistently rate all his employees on a 1-3 scale while another might use a 3-5 scale. A 3 from the first manager is then equivalent to a 5 from the second, even though the raw numbers are very different. Non-independence deals with the fact that people in a group are often subject to a shared group bias, so their ratings are not independent of one another.
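The reliability idea can be operationalised with a simple per-manager standardisation: convert each manager’s ratings to z-scores before comparing across teams. This is a minimal sketch, with made-up numbers; z-scoring is just one common normalisation choice, not the only valid one.

```python
from statistics import mean, stdev

def standardise(ratings):
    """Convert one manager's raw ratings into z-scores: distance from
    that manager's own average, in units of that manager's own spread."""
    centre, spread = mean(ratings), stdev(ratings)
    return [(r - centre) / spread for r in ratings]

# Hypothetical ratings for two equally skilled teams: one manager
# uses a 1-3 scale, the other a 3-5 scale.
manager_a = [1, 2, 2, 3, 2]
manager_b = [3, 4, 4, 5, 4]

# After standardising, the two lists are identical: a 3 from manager A
# carries the same weight as a 5 from manager B.
print(standardise(manager_a))
print(standardise(manager_b))
```

Once ratings sit on a common scale like this, sorting and culling the bottom 10% compares each employee against their own manager’s baseline rather than against the managers’ differing generosity.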

These three metrics can help organisations use data to build a fairer system for their employees and job candidates. While no system is perfect, the more subjectivity is taken out of decision making, the fairer the system will seem to the people within an organisation. That, at least, can help in cutting through managerial biases.
