Values and Critical Behaviors
by Ralph Dandrea, Frederick Beer, Jonathan Coupal, Sean Flaherty and Hernan Chiosso
Determining a team member’s efficacy can be a daunting task. Traditionally, managers have relied upon their memories of recent events to critique performance during a review, which can lead to rather vague comments and, in turn, nebulous results. If we really want to see specific improvements, we need a measuring stick that can consistently gauge a team member’s values and critical behaviors.
After operating ITX for 14 years, my partners and I started to realize that some of the best work was being done by a handful of team members, and that others seemed busy but perhaps weren’t pulling as much weight. We liked all of our staff and could discern no obvious differences across the board as to why some of them were propelling the company forward and others appeared to be contributing less. All of these people were hand-picked for their skills, personality and other attributes that we thought would benefit us, so why wasn’t the team as effective as we’d imagined? We pictured them as powerful horses attached to a wagon. Though we had chosen only strong horses to pull the wagon, some of them were going in different directions, so the great horses were not making a great team. We began to think about how we could make sure that each person was aligned with what we wanted to accomplish as a company and sat down to brainstorm. We quickly developed a rating system that would gauge each team member’s true fit to our company, which we utilize extensively today.
How We Developed the System
The first step was to determine which horses were facing in the right direction to keep the wagon moving and which ones were creating the resistance that was holding back the best team members, despite their considerable efforts. Then, we could eliminate the poor performers and create a team that would pull a much bigger load. An organization cannot be successful without unity, and a group of like-minded people is going to make a lot more progress than any one of them could individually. If we aren’t fulfilling our mission and living our values as a company, it is only to the extent that certain people on the team aren’t doing so, and we made it our goal to identify them.
We need to make a distinction here that trips up a lot of managers when they are judging their staff. Asking if someone is living our values and aligning himself with our mission is a different question than asking if he is a good person. We tend to let how we feel about people affect how we view their performance level, as opposed to really looking at whether they are consistently meeting our goals. We don’t enjoy saying that people are performing poorly when we think they’re nice, but nearly everyone in an organization is going to be a decent individual. It’s not about being good or bad; it’s about alignment. The underlying premise is, if we improve alignment, we’ll get better team performance.
We knew we couldn’t just ask our managers to tell us if each team member was aligned with our corporate ideology. It would be too easy for them to come back and say that so-and-so was definitely aligned because he’s such a great guy. There had to be a system that evaluated people based on specific questions so that we could determine how they were living our values, such as integrity, rather than how they might talk about or intend to live by them.
Next, we created a scoring system for those values. For each one, we ask whether a person lives that value nearly all the time, most of the time or less than most of the time. The answers are assigned two points, one point or zero points, respectively, so an individual can earn a total score between 0 and 10 for alignment with our values. The score measures not what people believe, and not what we would like them to do, but how they actually demonstrate each value in their daily work.
For the other side of the equation, we came up with five performance questions. Whereas the values questions were easy to distill, since we could combine similar values into five key ones, the list of possible performance attributes felt endless, so we brainstormed and wrote down everything that might make someone a good performer. Then, to narrow the list, which contained 64 attributes, we thought of three people who were good performers and three who were not, drawn from current and former team members. We rated those six people against each of the 64 dimensions of performance from our brainstorm, using a simple 0-or-1 system. If everybody got a 1, a “yes”, for a specific attribute, good and bad performers alike, the question was not useful because it didn’t differentiate. Likewise, if everybody got a 0, a “no”, the question didn’t distinguish anyone. Finally, if two of the good performers got a 1 and one got a 0, there was too much variation within the group to make it a valid question.
Once we eliminated the non-distinguishing questions, the ones where we had all ones, all zeroes or too much variability to be valuable, we wound up with eight questions. We combined those that seemed redundant and came up with what we felt were the five key determining factors for performance fit with our company. Then we applied the 0, 1 and 2 method again for each of the questions to determine fit for the second half of the equation.
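The winnowing procedure described above is mechanical enough to sketch in code. The following is a minimal illustration, not the authors' actual tooling; the attribute names and ratings are hypothetical, and each attribute's six scores are ordered with the three good performers first.

```python
def distinguishing(ratings):
    """Keep only attributes on which the three good performers agree,
    the three weaker performers agree, and the two groups differ."""
    keep = []
    for attribute, scores in ratings.items():
        good, weak = scores[:3], scores[3:]
        if all(s == 1 for s in scores):   # everyone rated "yes": no signal
            continue
        if all(s == 0 for s in scores):   # everyone rated "no": no signal
            continue
        if len(set(good)) > 1 or len(set(weak)) > 1:
            continue                      # too much variation within a group
        keep.append(attribute)
    return keep

# Hypothetical attributes; scores are [good, good, good, weak, weak, weak].
ratings = {
    "gets unstuck without help": [1, 1, 1, 0, 0, 0],  # kept
    "shows up on time":          [1, 1, 1, 1, 1, 1],  # all ones: dropped
    "avoids all mistakes":       [0, 0, 0, 0, 0, 0],  # all zeroes: dropped
    "enjoys long meetings":      [1, 0, 1, 0, 0, 0],  # varies: dropped
}
print(distinguishing(ratings))  # ['gets unstuck without help']
```

Note that the all-ones and all-zeroes checks also catch the case where both groups agree on the same answer, which is exactly an attribute with no distinguishing power.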
As an example, one of the performance questions we ask is, “Can this person get themselves unstuck, or do they require frequent supervisory intervention?” If the manager can’t remember the last time he had to intervene with this particular person, he would give a score of 2. If he has had to intervene once in a while, he assigns a 1, and if intervention has been required more than once in a while, the person gets a 0.
With five questions in each category at a maximum of two points per question, each team member can earn up to 10 points on each side. We determine fit with the company by multiplying the two numbers. The reason we take the product instead of the sum is that a strong score on one side can’t mask a weak score on the other. Our team members need to have both. The more they are aligned with our mission, the higher their total score will be, and if either side is low, the overall score drops significantly. The highest someone can score is 100, 10 on each side. If someone has a 9 on performance and only a 3 on values, they’re rated a low 27, because we can’t allow strong performance to substitute for values. A high performer with poor values is eventually going to prove a bad fit.
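The scoring arithmetic takes only a few lines to express. This is a sketch under the rules described above; the function name and the sample ratings are ours, not the authors'.

```python
def fit_score(values_ratings, performance_ratings):
    """Multiply the values subtotal by the performance subtotal.
    Each list holds five answers scored 0, 1 or 2."""
    for ratings in (values_ratings, performance_ratings):
        assert len(ratings) == 5 and all(r in (0, 1, 2) for r in ratings)
    return sum(values_ratings) * sum(performance_ratings)

# A 3-on-values, 9-on-performance team member scores only 3 * 9 = 27:
# the product keeps one strong side from masking a weak one.
print(fit_score([1, 0, 1, 1, 0], [2, 2, 2, 2, 1]))  # 27
print(fit_score([2, 2, 2, 2, 2], [2, 2, 2, 2, 2]))  # 100
```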
Once we looked at the results of our test group, we were so impressed by how well it worked that we applied the model to everyone in our organization and it continued to produce interesting results. It singled out the people we had thought might be poor performers. It also made us take a closer look at questionable performers.
Using the Results
| Score | Action | Category |
| --- | --- | --- |
| 42 or less | Transition out | Anchors |
| 45 – 54 | Counsel and reevaluate | Lottery Tickets |
| 56 – 64 | Coach for continued improvement | Solid Performers |
| 72 – 100 | Ensure high job satisfaction | A-Players |
Next, we had to decide what to do with these results. We determined that we needed to transition out individuals with a score of 42 or less and to counsel those with a score of 45 to 54. Typically, people in that range lost points because they weren’t consistently living our values or consistently performing. It’s easier to get somebody from a 1 to a 2 than from a 0 to a 1. If a person gets a 1 on a value, he might hold that same value but not live it all the time because he doesn’t know how. For example, a lot of people don’t know how to live integrity the way we define it, so we have to teach them. If we can define it and give them the rules for it, they can do a much better job for us. We can usually move team members up a couple of points with coaching, which will get them out of that danger zone, but they’ve got to reach the next level within six months to remain with us.
The next band of products is 56, 60, 63 and 64. Those are solid performers, and companies that utilize this system are going to find that a lot of their staff fall into this range. These are good players, solid people that we want to keep. We can still coach them a little because there is always room for improvement, but they are already pulling their weight in the organization – they are horses going generally in the right direction.
With 72, 80, 81, 90 and 100, these are people you want to put into golden handcuffs. You absolutely don’t want to lose them. They are your key team members. There are only five possible score combinations in this range: 8-9, 8-10, 9-9, 9-10 and 10-10. The action we take here is to make sure that these individuals receive excellent compensation, have very high job satisfaction, enjoy what they’re doing and have the opportunity to grow. Usually, no more than 10 percent of an organization’s team members will fall into this category.
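Looking up a score's band and action is a simple table scan. This sketch takes its boundaries and labels from the table of score bands; the function name is ours. A few achievable products (70, for example) fall in the gaps between bands, and the sketch returns None for those rather than guessing an action.

```python
# (low, high, category, action) taken from the published score bands.
BANDS = [
    (0, 42, "Anchors", "Transition out"),
    (45, 54, "Lottery Tickets", "Counsel and reevaluate"),
    (56, 64, "Solid Performers", "Coach for continued improvement"),
    (72, 100, "A-Players", "Ensure high job satisfaction"),
]

def classify(score):
    """Return (category, action) for a fit score, or None when the
    score falls in a gap between the published bands."""
    for low, high, category, action in BANDS:
        if low <= score <= high:
            return category, action
    return None

print(classify(27))  # ('Anchors', 'Transition out')
print(classify(81))  # ('A-Players', 'Ensure high job satisfaction')
```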
If a company has never done any alignment work before and its evaluation shows that everyone is a contributor, the measurements are not helpful; something is being overlooked. In our experience, some managers applied the system very rigorously, meaning that they weren’t afraid to ask tough questions and give honest answers. Others didn’t want to admit that somebody who was a good person wasn’t a good fit. They wrestled with the distinction between being a bad person and being a bad fit. To equalize the results, they tried all sorts of things, such as instituting half-points or changing the system to rate people higher if they believed in a value but had a hard time implementing it. The managers who assess people have to accept that calling someone a bad fit is not besmirching his character.
To benefit from the rating system, managers have to be well trained so that this discrepancy can be eliminated; otherwise, the information will be flawed. One of the ways to determine whether people are inflating scores is to have multiple people rate the same individual. That makes it easy to see if there’s one person who is consistently doing the padding. Except in rare cases, it really doesn’t happen that managers underrate their staff.
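One simple way to surface the padding described above, assuming several managers rate the same shared group of team members, is to compare each manager's average score against the overall average. The manager names and scores here are hypothetical.

```python
from statistics import mean

# Hypothetical fit scores given by three managers to the same four people.
ratings = {
    "manager_a": [27, 56, 81, 45],
    "manager_b": [30, 54, 80, 48],
    "manager_c": [48, 72, 100, 63],  # consistently higher than peers
}

# Each manager's average offset from the overall mean; a large positive
# offset across many shared ratings points at the likely padder.
overall = mean(s for scores in ratings.values() for s in scores)
for manager, scores in ratings.items():
    offset = mean(scores) - overall
    print(f"{manager}: average offset {offset:+.1f}")
```

In practice this only works when the rated groups overlap; a manager whose team genuinely contains stronger performers would also show a positive offset, so the offset is a prompt for a conversation, not a verdict.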
Using the System for New Hires
In addition to rating current team members for their fit, we are using this system for new hires. Again, in the interview process, we don’t actually ask candidates how they think they rate on integrity. Instead, we say, “Tell me about a time when you didn’t do what you said you were going to do, when you made a promise and broke it. How did you deal with it?” If they talk about cleaning up the mess, then we know that they get the concept of integrity. If they just say they made a mistake and, well, sometimes that happens, we know they’re not a fit for integrity.
Another thing we instituted was a set of questions about performance that we can ask interviewees’ references. We ask about things like their ability to get themselves unstuck and whether they spend their time on value-added activities. We found that if we ask references pointed questions about performance, we get better feedback. People giving references are often hand-picked because they have a high opinion of the person who is being referenced. They will tend to overrate or say good things about the individual because they think he is a good person, the same situation that can happen internally at first. However, if you ask references specific things, like if the person requires a lot of supervision, they’re not going to want to lie. They may try to put the person in a good light, but they will also be more honest. Using this system has greatly helped us to ensure that someone is a good fit before we extend a job offer.
Additionally, we put the rating system into practice when we have a team member who is leaving us or maybe hinting that he might be looking for something else. If someone tells us that he’s not happy in our organization, his score will determine our response. If a team member with a score of 81 turns in his notice, we’re going to pull out all the stops to keep him, whereas if he has a 30 or 25, we know that his departure will only improve the organization. You can just imagine how this has helped us to be quick and decisive when handling these types of situations. We never had that before. In the past, we’d flounder, wondering what to do. Now we know exactly what we need to do. We also understand that just because we’re not pursuing someone who resigns doesn’t mean we think he’s a bad person. He’s just not a good fit, and we have the ability to know that now.
For example, we had a team member who was a really nice guy. He was great to have a beer with and was very friendly. He helped people on the weekends doing different things and participated in a lot of company activities, but when he came to us and said he was going to be leaving, we distinguished between the fact that he was such a nice guy and the fact that he wasn’t such a great performer. In the past, we would have been more alarmed by his departure, worrying that a nice person wanted to leave. We would have made a mistake and kept him. People interview for personality automatically, but we have to make sure that new hires also fit our mission, our values and the performance that we need.
Future Development of the System
Because we’ve learned that we will have the strongest relationships with those who are most aligned with our values and performance, we’re starting to apply a similar process to clients and vendors. That has held true so far. Just as with our current and prospective team members, we found that some customers we thought were good for us weren’t, and others we thought were not so good turned out to be the best to work with and the most profitable.
We’ll also begin looking at how we can eliminate as much subjectivity from the process as possible, to lessen the variability in the way different managers rate people. If we ask three different managers to rate the same person, we’ll sometimes get different numbers. Fortunately, the variance we see is a matter of magnitude rather than inconsistency: one manager tends to rate high across the board while another tends to rate low. It’s not that one manager rates someone a 90 and another rates that same person a 30. I want to see how we can structure the language of the questions to eliminate as much subjectivity as possible. We’ve done that to a large extent already, but if we keep testing new language, I believe we can reach the point where we get the same results no matter who is doing the rating.
The last piece of development that we’re doing is formulating a monthly conversation that should take place between every supervisor and each of his direct reports. Throughout the month, the manager will take notes about team members in terms of how they’ve lived each of our five values and five performance questions, then discuss those observations with them. The manager will also talk about the things he’s witnessed that month that were inconsistent with those key points. The feedback will be useful in identifying patterns and areas that need improvement, as well as determining strategies for coaching.
© 2012 Ralph Dandrea. All rights reserved.