You’ve probably had some experience with a pre-hire assessment.
Maybe you applied for a job and rated yourself on how much you agree with a variety of statements (“I like dogs” – strongly agree, or, “I’m the life of the party” – maybe if it’s a LAN party?). Maybe you’re an HR manager hoping to solve your organization’s performance, turnover, engagement, etc., etc., problems and someone has told you “Just ask your applicants what three people they’d take to dinner” to figure out what kind of person they are (please don’t). Or maybe you’re a college career counselor and you want to help your university’s students figure out what they want to do with their lives – you’ve probably run into assessments that you didn’t even know were used in hiring.
At some point, you’ve either taken an assessment or been pitched an assessment to solve all your problems. Ever hear of the Myers-Briggs Type Indicator (MBTI)? After taking it, people often feel that it captures that elusive Who They Are, an intoxicating thought that has led it to be used all over education and industry. Unfortunately, besides painting a nice picture of yourself, it really doesn’t do anything. The MBTI isn’t alone in having this problem.
Do is an important word in a pre-hire assessment because if your pre-hire assessment does things well, it helps you select people who work harder and smarter and stick around longer, or it helps you find a job you’ll love. Sounds good, right? I’ll spend some time unpacking what “do” really means throughout this post.
But whether you’re an applicant, a hiring manager, or a career counselor, you should be concerned about what a pre-hire assessment measures, how it measures those things, and why those things are important – in short, what does it do?
There’s not a lot of material available on what a pre-hire assessment actually is, and what is available is frequently buried under psychological or statistical jargon and mysterious numbers (at best) or vague sentiments (at worst).
The goal of this blog post is to demystify the pre-hire assessment and, more generally, psychological measurement, in accessible terminology so you can be informed on the hiring process and make educated decisions about the hiring tools available to you.
There’s a good chance you haven’t heard the word “construct” unless you were a psychology or philosophy major in college. Constructs are, however, critical to the pre-hire assessment. In short, they are the “what” (as in, what is being measured?) of the assessment. There is a whole body of science dedicated to how best to measure that “what” – psychological measurement or, more concisely, psychometrics. (jobZology employs several psychologists well-versed in psychometrics, so when I refer to “we” I am specifically referring to us jobZology psychologists but also generally to all psychologists using these practices.)
Let’s start with a short example: anxiety. Anxiety is a psychological construct, meaning a quality of a person that we can describe but never observe directly. What we can see are aspects of it (a racing heart or sweating palms). These aspects, by contrast, are not constructs but real objects – things we can observe – and each one contributes to our understanding of the construct (heart racing, palms sweating, and you have negative thoughts? You’re probably anxious about something).
Using real objects like bodily reactions to infer someone’s anxiety is analogous to the process used in the pre-hire assessment (and not because taking a test for a job is anxiety-provoking).
When we measure a construct with an assessment, what we’re really asking is for a person to indicate where he or she stands on a set of observable behaviors we believe to be related to the construct of interest.
Returning to my first example, the statement, “I’m the life of the party,” doesn’t say much about a person by itself, but if a person also indicates that they make friends easily, have no trouble being the center of attention, and happily speak in public, there’s a very good chance that person is extroverted, or generally socially-inclined and outwardly focused.
Extroversion isn’t directly observable, so we call it a construct, but we believe (and have a lot of scientific evidence) that there is a set of well-defined, observable behaviors that indicate whether someone is extroverted. Assessment, then, is the process of asking someone to tell us about those behaviors so we can infer his or her level of that construct.
But why do we care if a person is extroverted in the hiring process?
Think about it conceptually: An extroverted person has an easier time talking to new people, speaking up and asking questions, and being engaged in new situations – in short, they possess characteristics that might make them successful in the on-boarding and socialization process of starting a new job.
This is the start of a research hypothesis: if a person is extroverted, are they more successful at starting new jobs? As scientists interested in these kinds of questions, we find an organization that shares that interest, measure applicants’ levels of extroversion, and track their success in their new jobs. And in fact, there is a large amount of evidence to suggest that the more extroverted you are, the more likely it is you’ll succeed in starting a new job.
The phrase, “…the more this, the more likely that” is thrown around a lot in the discussion of a pre-hire assessment. We’ve covered the first part of what a pre-hire assessment does (measure a construct) and here’s the second part: prediction, or, “…the more this, the more likely that”.
This is where good pre-hire assessments and other assessments, such as the MBTI, split ways. Constructs can be related to one another and discovering those relationships (or lack thereof) is a core goal of psychological research. Knowing the relationships between constructs allows us to make predictions about one construct we haven’t measured with a construct that we have measured.
Case in point: A hiring manager wants to know who, among all applicants, will perform the best on the job. We can measure a variety of things from the applicants, but we can’t measure job performance without hiring them. But what if we had a set of constructs we could measure that we knew were related to job performance?
Voila! The pre-hire assessment is born.
So, what do we measure?
Well, that depends on the purpose. The MBTI is going to be a bad bet for two reasons (for a lengthier discussion, see this post). First, the MBTI is not reliable: measuring the same person multiple times or in multiple ways (e.g., different versions of the assessment) will often produce different results (recall my vague sentiments comment – the “types” of the MBTI are vague enough that you’ll find personal insight in any of them). Second, that unreliability directly undermines the accuracy, or validity, of the assessment when we try to predict something.
Imagine you are playing darts. To win, you want to be both accurate and precise – you want to not only hit the bullseye but also hit the bullseye consistently. An unreliable measure is like playing darts while balancing on a ball – you can’t hold your body in the same place, so you lose all precision and, therefore, all accuracy in your throw. In other words, the MBTI can’t do anything for you because it can’t predict anything of interest.
There are several measures of constructs that are reliable and valid for predicting job outcomes. At jobZology, we specialize in measures that predict fit and job attitudes (intentions to withdraw, job satisfaction, and organizational commitment, for example). We measure an applicant’s interests, values, workplace preferences, and personality, which we then use to predict how well they will fit in the job, how satisfied they will be, and how likely they are to leave the job.
For the final part of this post, I want to break down exactly how (and what) we know when we predict one construct with another.
I mentioned at the beginning of this post that one of my goals was to demystify the statistical jargon that goes along with a pre-hire assessment. That’s because there’s a good chance that if you’re ever in the position of purchasing, delivering, or analyzing the results of a pre-hire assessment that numbers will come into play.
Numbers allow us to express the relationship between two or more constructs consistently across situations. Since we calculate and report numbers consistently, we can compare different relationships to see which is stronger, explore whether adding a new construct improves our ability to predict another construct, or average the results of many studies together to accumulate general evidence for the strength of a given relationship. It’s this last point I’ll spend my remaining time on, so I can discuss what the numbers behind a pre-hire assessment mean.
The language of any advanced field is inherently jargon and adding numbers only complicates the matter further. Below I do my best to succinctly introduce three important concepts, but I can’t claim that you’ll walk away from this post being fluent in statistics. Instead, I want to introduce these concepts as best as I can to avoid either ignoring them or scaring you away from ever understanding them and, hopefully, start a longer conversation about what these numbers mean for you or your organization.
These are the three statistical terms I’ll refer to: r, r², and variance explained.
Without diving deep into statistics and geometry, the r coefficient indicates the strength (ranging from -1.0 to +1.0, with 0 indicating no relationship) and direction (negative or positive) of the relationship between two variables.
Two variables with a perfectly correlated positive relationship (r = 1) means that for every increase (or decrease) in one variable there is an exactly corresponding increase (or decrease) in the second variable. The two variables might as well be the same thing (and if r = 1, they probably are).
For example, if you were to record the time reported by two different, well-maintained watches, each second for a minute, it’s likely the two variables (the times of watch one and watch two) would be perfectly or nearly perfectly related. If we were to plot this relationship, the points would fall along a straight, upward-sloping line.
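For the curious, here’s what that computation looks like in code – Pearson’s r built from its definition, on invented watch readings (the tiny drift on watch 2 is there just so the two columns aren’t literally identical):

```python
# Pearson's r from its definition: the covariance of two variables
# divided by the product of their standard deviations.
def pearson_r(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Sixty per-second readings from two well-maintained watches;
# watch 2 drifts by a few milliseconds.
watch1 = [float(s) for s in range(60)]
watch2 = [s + 0.003 * (s % 2) for s in range(60)]

print(f"r = {pearson_r(watch1, watch2):.4f}")  # very nearly 1
```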
In a summary of the empirical findings on perceived fit (essentially, the extent to which someone believes he or she is a good fit at work) and intentions to quit, Kristof-Brown et al. (2005) reported the average correlation, or the average relationship between these two variables across many scientific studies, to be r = -.40. Not as strong as the relationship between the watches (but in practice nothing is), but still a moderate and, in this case, negative relationship.
This brings us to our second term, r², which, mathematically, is just the square of r (the reason we do this is beyond the scope of this blog, so rest easy), but it gives us our third term, variance explained.
In rough terms, variance explained tells us how much of variable 2 is related to variable 1. In the first example, the square of the perfectly positive relationship between the two watch times is 1 (1² = 1 × 1 = 1), meaning that 100% of the variance in watch 2’s readings is accounted for by watch 1.
No two different constructs, especially psychological constructs, ever get this close. Computing r² for fit and intentions to quit returns a value of .16, or 16% of the variance explained. You can picture the two constructs as two partially overlapping circles, with the overlap representing the shared variance (if this were the watches, the two circles would overlap completely – very boring).
This means that, all else being equal, 16% of a person’s intentions to quit can be explained by his or her perceived fit. Thinking about this conceptually, it should make sense: while fit is very important in explaining whether one might leave his or her job, there are other reasons someone might feel like leaving (they’re bored at work, their commute is too long, the benefits aren’t great, etc.).
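The arithmetic behind that 16% is a one-liner, shown here with the meta-analytic r = -.40 from Kristof-Brown et al. (2005):

```python
# Square the correlation to get the proportion of variance explained.
r = -0.40                      # perceived fit vs. intentions to quit
r_squared = r ** 2

print(f"r² = {r_squared:.2f}")                   # 0.16
print(f"variance explained = {r_squared:.0%}")   # 16%
```

Note that squaring throws away the sign: r² tells you how much variance is shared, not the direction of the relationship, so remember separately that higher fit goes with lower intentions to quit.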
In fact, with psychological variables, there are so many influences on any given outcome that finding even a few variables that explain a good proportion of the variance (fit, for example) can be tremendously helpful. When making recommendations for clients, we often advise on the benefits of using a few good, well-established constructs to make theoretically sound decisions about whom to hire.
Let’s substitute fit and intentions to quit for watches one and two (but recall that the r statistic was negative – meaning that as fit goes up, intentions to quit go down).
If we lived in a world where fit and intentions to quit were perfectly negatively correlated and our only goal was to hire people who never want to leave the organization, we would know exactly who to hire every time. We would always hire the applicants with the highest fit because they would be the least likely to leave.
We don’t live in this world, however, but we do live in a world where we can take many constructs like fit and use the best of them together to enhance our ability to predict intentions to quit, satisfaction, job performance – whatever your goal as an organization might be.
This translates into a workforce more satisfied, more productive, and less likely to leave (saving you more in hiring, on-boarding, and training costs) than either not using a pre-hire assessment or using a pre-hire assessment that does nothing.
This is what it means for your pre-hire assessment to “do” something. A good pre-hire assessment will accurately measure well-defined constructs that have empirical support in the scientific literature and established relationships to job outcomes.
A good pre-hire assessment allows you to predict things you care about with a considerable amount of accuracy. This, in turn, strengthens your workforce and provides scientific rationale for your hiring decisions.
So, maybe you’ve taken an assessment before or considered using one for your organization. Now, hopefully, you have a much better idea of what you took, why you took it, or why you would use one.