By: Ingwer Borg
Employee surveys can have many positive effects, but they unfold their full potential only if they are communicated effectively. In practice, communication is often done by distributing thousands of computer-generated reports more or less simultaneously to all managers.
Each report focuses on a manager’s organizational unit, showing what his/her subordinates said in the survey, typically in comparison to the results of higher-order units. This approach has the disadvantage that it leaves many managers and employees in limbo, wondering what “they” (i.e., executives, top managers) think about the survey results, and what “they” intend to do now.
A better way to communicate is to roll out the survey results in a top-down cascade, beginning with top management, and ending with the working groups at the shop floor. Right after the survey, some very general feedback can be given (e.g., about the participation rate or on some general trends on global job satisfaction), and also information about the next step, but all reports for managers should be held back until top management has seen the results and is ready to formulate first responses to this information. Such responses include, in particular, one or two areas of focus: “The survey showed to us that X is a major issue in the company. Most likely this is also true in your area of responsibility. If so, we expect you [Manager M] and your team to make a positive contribution. Please report by [date] to [M’s supervisor] what you have done or plan to do—or that you feel that no action is needed or no action is possible in your area, and why.” The area of focus can even be a field of action, a clearly assigned task. For example, in one large IT company, the executive board said that “the survey showed us that our strategy must be better communicated and sold more effectively to all employees. Every officer will report to the Board by [date] on the actions that are implemented in his/her area of responsibility. And here is what we will do: …”.
The individual managers at the next-lower level of the hierarchy will then receive their survey reports together with such areas of focus. This aligns the follow-up processes, giving them strategic punch. Middle managers may add one or two additional areas of focus that deal with problems or opportunities relevant for their own areas of responsibility. Supervisors at the bottom of the hierarchy will thus receive their survey reports together with a few top-down directions and goals for subsequent activities. This prevents them from simply reporting just about any action as their response to the survey (e.g., actions that were running anyway, or “Mickey Mouse” actions such as re-decorating the rest rooms). Middle managers must also make sure that they have something solid to say when they report to their supervisors on what they did about the areas of focus. Hence, they will, in turn, see to it that their subordinates deliver something that upper management likes to hear.
Figure 1. An MDS bubble plot, where bubble size represents percent agreement with the item (“favorableness”); the halo around the commitment index (the “dependent” variable) marks potential drivers for action planning.
But how can top managers identify powerful areas of focus? The typical PPT presentations of survey results for top managers consist of a seemingly endless series of colorful and animated slides with bullet points and histograms. They exhibit findings such as the company’s global results, often relative to industry benchmarks (where available), and compare the results of various organizational units, strata, and points in time (e.g., Production, Marketing, Sales, etc.; large subsidiaries; blue collar vs. white collar; new vs. older employees; this survey vs. last year’s survey). This is all nice to know, but hard to remember and even harder to translate into a survey “story” of what leads to what and why. Thus, managers are lost in piecemeal statistics that offer no convincing leads for what to do. They then grab what appears plausible, do what they wanted to do anyway, or simply do not respond to the data at all (thereby throwing away an opportunity for strong actions). What is typically missing in such presentations is something that shows, in a compact and accessible way, the structure of the data and information on what drives what. What is needed is a single slide that supports data-guided discussions on what to focus on in action planning.

Figure 1 shows an example. This display is a multidimensional scaling (MDS) plot of the inter-correlations of the items of a survey in a large IT company. Each point represents a question from the survey. The distance between any two points shows how similar the answers to these two items were in the survey: knowing a person’s answer to item X, you can easily predict what that person says on item Y if the points X and Y are close neighbors (such as, for example, “enjoy my work” and “satisfied with tasks”). Items that are far apart in the MDS plot are unrelated (such as “satisfied with working conditions” and “trainings are good”).
For such unrelated items, a person who is satisfied with one issue may or may not be satisfied with the other. You cannot tell: The correlation is zero. (In the typical employee survey, there are no items with negative correlations. This makes MDS simple.)
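To make the idea concrete, here is a minimal sketch of how such a plot can be computed: correlations are turned into dissimilarities (high correlation = small distance), and the items are embedded in two dimensions via classical (Torgerson) MDS. The four items echo the examples above, but the correlation values and the 1 − r conversion are illustrative assumptions, not the survey data behind Figure 1.

```python
# Minimal classical (Torgerson) MDS sketch on a hypothetical 4-item
# correlation matrix; all values are illustrative, not real survey data.
import numpy as np

items = ["enjoy my work", "satisfied with tasks",
         "satisfied with working conditions", "trainings are good"]

R = np.array([  # hypothetical inter-item correlations
    [1.0, 0.8, 0.3, 0.2],
    [0.8, 1.0, 0.3, 0.2],
    [0.3, 0.3, 1.0, 0.1],
    [0.2, 0.2, 0.1, 1.0],
])

D = 1.0 - R  # high correlation -> small dissimilarity

# Classical MDS: double-center the squared dissimilarities and use the
# eigenvectors of the two largest eigenvalues as 2-D coordinates.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)          # eigenvalues in ascending order
top = np.argsort(eigvals)[::-1][:2]           # indices of the two largest
coords = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

# Highly correlated items end up as close neighbors in the plot:
dist = lambda i, j: float(np.linalg.norm(coords[i] - coords[j]))
```

In the real application each point would then be drawn as a bubble at its coordinates; the key property is simply that “enjoy my work” lands much closer to “satisfied with tasks” than to “trainings are good”.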
In Figure 1 we also have a special variable, “commitment”, shown here as a small square. This variable is an index that reflects (inversely) the person’s tendency to leave the company in the near future. (The index is simply the mean value of items such as turnover tendency (reflected), advocacy, and pride in the company.) Turnover was a serious problem in this company, and so top management was particularly interested in drivers of turnover. To find them, a halo is drawn around the commitment point in Figure 1. It shows which items are good predictors of commitment.
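As a small illustration of how such an index can be computed (the 1–5 rating scale, the item names, and the example ratings are hypothetical assumptions; the source only says the index is the mean of the items, with turnover tendency reflected):

```python
# Commitment index as described above: the mean of a few items, with the
# turnover-tendency item reflected so that high values mean high commitment.
# The 1-5 scale and the example ratings are illustrative assumptions.
SCALE_MAX = 5

def commitment_index(turnover_tendency, advocacy, pride):
    reflected_turnover = (SCALE_MAX + 1) - turnover_tendency  # 1 <-> 5
    return (reflected_turnover + advocacy + pride) / 3

# A respondent who rarely thinks of leaving (1), recommends the company (5),
# and is proud of it (4) receives a high commitment score:
score = commitment_index(turnover_tendency=1, advocacy=5, pride=4)
```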
Now assume the plot in Figure 1 were printed on a rubber sheet. If you grabbed this sheet at any point X and pulled it up, what would happen? The sheet would first lift off at this point, forming a small cone: pulling up the sheet at point X causes the neighborhood of X to go up too. Now imagine you succeeded in improving (“pulling up”) the item “performance=money” in Figure 1. Since it is so closely associated with commitment, this would most likely also positively affect (“pull up”) employees’ commitment toward the organization. “Satisfied with chances for advancement” and “satisfied with pay” are also drivers of commitment.
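The “pulling up” intuition has a simple statistical counterpart: items that sit close to commitment in the plot are exactly those whose answers correlate strongly with the commitment index. A sketch with simulated answers (all data and the way the two items are generated are invented purely for illustration):

```python
# Driver screening sketch: correlate each item with the commitment index.
# All data here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
commitment = rng.normal(size=n)  # standardized commitment index

answers = {
    # strongly related to commitment (a "driver" candidate):
    "performance=money": 0.8 * commitment + rng.normal(scale=0.5, size=n),
    # essentially unrelated to commitment (far away in the MDS plot):
    "satisfied with working conditions": rng.normal(size=n),
}

correlations = {name: float(np.corrcoef(a, commitment)[0, 1])
                for name, a in answers.items()}
```

Items with a high correlation are the close neighbors of commitment in Figure 1 and thus the natural candidates for areas of focus.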
Our MDS plot also shows, by the size of the points (“bubbles”), the extent to which these items were rated positively, and “performance=money” obviously received a particularly poor rating. Moreover, we also know from other data that this rating is clearly below industry benchmarks, so it is realistic to assume that it can be improved. Hence, improving the link between high performance and monetary rewards (and chances for advancement) offers itself as a promising candidate for action. Naturally, improving this link is not easy: it requires a complex action with changes in the pay system, but also in how supervisors assess performance. So this area of focus requires efforts on all levels, and by various departments.
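For the bubble sizes, “percent agreement” is typically the share of favorable answers to an item. A sketch, assuming (hypothetically) a 1–5 scale where ratings of 4 and 5 count as agreement:

```python
# Percent agreement ("favorableness") used for the bubble sizes.
# The 1-5 scale and the >= 4 agreement threshold are assumptions,
# and the ratings list is invented for illustration.
def percent_agreement(ratings, threshold=4):
    favorable = sum(1 for r in ratings if r >= threshold)
    return 100.0 * favorable / len(ratings)

ratings = [5, 4, 2, 3, 4, 1, 5, 4]   # illustrative answers to one item
pct = percent_agreement(ratings)     # 5 of 8 favorable -> 62.5
```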
Using MDS has been found to be an effective method in practice when working with managers on finding areas of focus. Managers understand such plots quickly, and then spend much time discussing the relationships of the various items and topics.