By: Francisco Gil-White
School of Business
In the popular media—and hence in the popular imagination—psychometrics has advanced so far that it can tease apart our motives and traumas with tremendous ease and almost supernatural precision. Perhaps the psychometric industry relies for its growth on such popular perceptions, because the scientific research, at least, suggests that psychometric products do not work as advertised.
The popular imagination is informed by such vectors as David Fincher’s movie The Game (1997). The star, Michael Douglas, is the protagonist of a company-administered ‘adventure,’ full of thrills and dangers he may not fully understand are part of the ‘game.’ For this to work, this peculiar company must predict his future behavior the way a physicist charts the trajectory of a rocket, for which a few preliminary hours of verbal responses to stimuli and checking boxes on paper are, amazingly, more than enough.
Modern human resources managers share this Hollywood faith. Among other goals, HR directors have been asked to: 1) reduce recruitment and rotation costs; 2) improve operational efficiency and worker productivity; 3) strengthen organizational harmony; and 4) bolster individual career coherence. With psychometric wizardry, they appear to believe, they can understand people as completely as the corporate psychologists in The Game, identify all the relevant personalities, fit them into their optimally functional cubbyholes, and properly articulate them for smooth system behavior: a corporate Swiss watch. But is the world like the movies?
Consider the Myers-Briggs Type Indicator (MBTI). Based on a (hotly disputed) interpretation of Carl Jung’s work, the MBTI proposes four dimensions of personality, each with two poles: 1) Extroversion (E) vs. Introversion (I); 2) Sensing (S) vs. Intuition (N); 3) Thinking (T) vs. Feeling (F); and 4) Judgment (J) vs. Perception (P). Your personality ‘type’ is determined by pegging you, for each dimension, to one of its poles. Thus, an ESTJ is an (E)xtroverted, (S)ensing, (T)hinking, (J)udging ‘type.’ Consulting Psychologists Press (CPP), owner of the MBTI, claims the MBTI will assist a company in ‘leadership and coaching,’ ‘team building,’ ‘conflict management,’ ‘career exploration,’ ‘selection,’ and ‘retention.’ For this, they will charge you.
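The combinatorics of the scheme are simple: one pole from each of the four dichotomies yields 2⁴ = 16 possible ‘types.’ A few lines of Python (purely illustrative) make this explicit:

```python
from itertools import product

# One pole from each of the four MBTI dichotomies gives 2**4 = 16 'types'.
dimensions = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]
types = ["".join(combo) for combo in product(*dimensions)]

print(len(types))        # 16
print("ESTJ" in types)   # True
```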
So how good is the MBTI? In 1993, psychologist David Pittenger summarized an enormous mountain of studies that had, up to then, examined its quality. There was trouble.
First, for each dimension, the raw data show no evidence of a bimodal distribution: “[M]ost people score between the two extremes.” Nothing wrong with that so long as you respect the world, but the MBTI doesn’t. If two people score nearly identically on, say, the Extroversion-Introversion dimension, but land on opposite sides of the critical diagnostic value, the MBTI will call one an E and the other an I, making them artificially different.
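The statistical problem here is forced dichotomization of a continuous, unimodal score. A toy Python sketch (the score range and cutoff are invented for illustration; these are not the MBTI’s actual scoring rules) shows the effect:

```python
import random

random.seed(0)

# Simulate a unimodal, roughly normal distribution of raw
# Extroversion-Introversion scores (invented parameters).
scores = [random.gauss(50, 10) for _ in range(10_000)]

CUTOFF = 50  # hypothetical diagnostic value separating 'E' from 'I'

def mbti_style_label(score):
    """Force a continuous score into one of two 'types'."""
    return "E" if score >= CUTOFF else "I"

# Two respondents with nearly identical raw scores get opposite types...
print(mbti_style_label(49.9), mbti_style_label(50.1))  # I E

# ...while two very different respondents on the same side of the
# cutoff get the same type:
print(mbti_style_label(50.1) == mbti_style_label(75.0))  # True
```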
This—together with run-of-the-mill sampling error—no doubt contributes to the MBTI’s rather striking ‘test-retest reliability’ failures: its inability in over half of all cases to assign the same personality to the same person (even when you retest just 5 weeks apart). This is embarrassing; the very idea of ‘personality’ requires (at least moderate) stability over time. In ‘test validity’ the MBTI does poorly too: results of factor analyses are inconsistent with the four dimensions posited. But the ‘ecological validity’ problem is perhaps the worst. For its backers, the very point of the MBTI is to assist human resources managers in career planning and placement, and yet the purported ‘personalities’ picked out by the test do not appear to correlate with significant behavioral choices—in particular, career choices. Neither is ‘type’ correlated with success in a given occupation.
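The test-retest failures follow naturally from that dichotomization: anyone sitting near a cutoff can flip ‘type’ from ordinary measurement noise alone. A toy simulation (with invented trait and noise parameters, not real MBTI data) illustrates the mechanism:

```python
import random

random.seed(1)

CUTOFF = 50

def label(score):
    return "E" if score >= CUTOFF else "I"

n = 10_000
same_dimension = 0
for _ in range(n):
    true_score = random.gauss(50, 10)        # a perfectly stable trait
    test1 = true_score + random.gauss(0, 5)  # measurement noise, week 0
    test2 = true_score + random.gauss(0, 5)  # measurement noise, week 5
    if label(test1) == label(test2):
        same_dimension += 1

per_dim_agreement = same_dimension / n
# Agreeing on the full four-letter type requires agreeing on all four
# dimensions at once (treated as independent for simplicity):
same_type = per_dim_agreement ** 4
print(round(per_dim_agreement, 2), round(same_type, 2))
```

In this toy model, even though the underlying trait never changes, per-dimension agreement comes out around 80%, so the chance of landing on the same four-letter type twice is roughly 0.8⁴ ≈ 0.4: the same ballpark as the over-half failure rate Pittenger reports.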
At the time of Pittenger’s article the MBTI was the most widely employed tool for purposes of counseling, selecting, and placing staff in organizations all over the world. Appropriately, Pittenger’s devastating summary of MBTI shortcomings was published in the Journal of Career Planning and Employment in order “to caution against undue reliance upon its use without fully investigating the accuracy of its test results.” It had zero effect.
Two decades later Stephen Robbins and Timothy Judge, in their bestselling Organizational Behavior, a business college textbook, communicate a strong academic consensus that “most of the evidence” militates against “the MBTI’s validity as a measure of personality,” and conclude reasonably (though still tepidly) that “managers probably shouldn’t use it as a selection test for job candidates,” yet the MBTI continues to be, in their words, “the most widely used personality-assessment instrument in the world.”
This is interesting: hyper-rational bureaucracies focused on increasing revenue and reducing costs (private companies) consistently choose to pay good money for a test that fails entirely to fulfill its advertised function. Many companies using the MBTI are successful and wealthy—Robbins and Judge give some examples: “Apple Computer, AT&T, Citigroup, GE, 3M Co., many hospitals and educational institutions, and even the US Armed Forces”—so they can well afford the minimum expense required to find out whether the test they pay for is any good. (And shouldn’t it give them pause that a best-selling college textbook cites them as examples of big companies that employ a useless test?)
But the puzzle is deeper because the problem is wider. In 2004 Annie Murphy Paul, a former senior editor at Psychology Today, published The Cult of Personality: How Personality Tests are Leading Us to Miseducate Our Children, Mismanage Our Companies, and Misunderstand Ourselves. There is an entire section devoted to the MBTI and its massive failures, but Murphy Paul’s investigation is exhaustive: every major personality test is examined—and found to fail miserably. And yet, in the last few decades personality tests have become an international growth industry as companies everywhere fall head-over-heels in their enthusiasm to employ such products in their recruitment, selection, and career placement processes (writes Paul: “Today personality testing is a $400-million industry, one that’s expanding annually by 8 percent to 10 percent”).
And it isn’t just personality tests. The other major branch of psychometrics is IQ testing, or so-called ‘intelligence’ tests. The history of these tests includes a striking series of scandals involving misused statistics and outright faking of data. Most infamously, (Sir) Cyril Burt, a leading psychologist of his day, invented nonexistent sets of identical twins and published under an assumed name in the journal he himself edited (the British Journal of Statistical Psychology). The point was to argue that ‘intelligence’ is a unitary trait with 80% heritability—a figure still bandied about today (and by Robbins & Judge, no less)—the better to sell innocent Western populations on the ideas and policies of the eugenics movement, which later culminated in German Nazism. One might expect such a history to make people at least suspicious about the reliability of such tests, and ordinary folk resistant to being subjected to them, especially when so much modern intelligence testing is underwritten by the infamous Pioneer Fund, created by public racists. But intelligence tests have grown in popularity over the last half century, in tandem with personality tests.
Finally, there is the question of what psychologist Joel Michell calls the ‘psychometrician’s fallacy’: psychometricians have been using “methods, such as rating scales, which numerically code ordinal relations, and yet in their analyses of such data psychologists [have] preferred statistical procedures… permissible only for interval or ratio scale measures, such as arithmetic means, variances, product moment correlation coefficients, and associated statistical significance tests (so-called parametric statistics).” The work of computing such statistics, though feverish, is essentially meaningless. It amounts to pretending that qualities are quantities; that numerals are numbers. In other words, it is an insoluble philosophical problem that was simply brushed aside in the early stages of the emergence of the psychological discipline. Even without doing parametric statistics, the use of ordinal scales for subjective responses presents daunting problems (e.g. when I check ‘4’—on a scale from 1 to 5—to a question that asks me to express the intensity of some subjective psychological experience, how can we know that this intensity is not precisely what you call a ‘2’?).
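Michell’s objection can be made concrete. For ordinal data, any order-preserving relabeling of the scale is equally legitimate, yet a conclusion drawn from arithmetic means can reverse under such a relabeling. This is a standard demonstration, sketched here in Python with invented ratings:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Two groups rate some experience on a 1-to-5 ordinal scale
# (ratings invented for illustration).
group_a = [2, 2, 2, 5, 5]   # mean 3.2
group_b = [3, 3, 3, 3, 3]   # mean 3.0 -> A 'beats' B on means
print(mean(group_a) > mean(group_b))  # True

# But if the numerals only encode order, any order-preserving
# relabeling of the scale is equally legitimate:
relabel = {1: 1, 2: 2, 3: 4, 4: 5, 5: 6}  # monotone transformation
a2 = [relabel[x] for x in group_a]  # mean 3.6
b2 = [relabel[x] for x in group_b]  # mean 4.0 -> now B 'beats' A
print(mean(a2) > mean(b2))  # False
```

The verdict of the comparison depends on an arbitrary choice of labels, which is exactly what it means to say that parametric statistics are not permissible for merely ordinal measures.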
How much of psychometrics has been flying under the scientific radar?
We have here, it appears, a phenomenon ripe for the kind of ‘systems analysis’ that has of late become so fashionable in academic business-management circles. We need a better understanding 1) of the dynamics of the psychology profession itself, as an institutional system (a contribution to the sociology of science); 2) of the internal articulation of roles and flows of information within businesses that explains their decisions on these questions of human resources management (a contribution to organizational behavior dynamics); and 3) of the manner in which business school graduates are being trained (or not trained) that leaves them oblivious to—or insufficiently knowledgeable to address—the problems discussed here, thereby allowing those problems to persist. In addition, we may need a better understanding of how different spheres interact and articulate with each other, including the general public’s perceptions, grounded in a person’s passage through the education system; the relationship between psychologists and the psychometrics business; the non-relationship between psychologists and statisticians; the legal regime concerning discrimination, plus official approval by government bureaucracies of purportedly ‘neutral’ psychometric tests (perverse incentives); and the contact between human resources departments and academic psychology. We may learn a lot about the joint operation of academic, legal, and business systems, and may emerge with important lessons concerning the sociology and philosophy of social science and academic business education.
But for those who care most about the bottom line, a proper investigation of these linked phenomena may start the ball rolling towards significant cost savings, and real improvements in recruitment, career placement, and rotation. A major sales argument for psychometric tests has been their supposed low cost, compared to alternative strategies, but only useful products can make claims to be relatively cheap; those that fail to achieve results—and which may be imposing all manner of costs—are always expensive, however little one paid for them. It’s time for more hard-nosed business, and less superstition.♦
- Fancher, R. (1985). The Intelligence Men: Makers of the IQ Controversy. New York: Norton.
- Michell, J. (2009). The Psychometricians’ Fallacy: Too Clever by Half? British Journal of Mathematical and Statistical Psychology, 62, 41–55.
- Paul, A. M. (2004). The Cult of Personality: How Personality Tests Are Leading Us to Miseducate Our Children, Mismanage Our Companies, and Misunderstand Ourselves. New York: Free Press.
- Pittenger, D. (1993). Measuring the MBTI. . .And Coming Up Short. Journal of Career Planning and Employment, 54(1), 48-52.
- Robbins, S. P., & Judge, T. A. (2011). Organizational Behavior (14th ed.). New Jersey: Pearson Education / Prentice Hall. (p. 137)