Can you validly and reliably measure a construct with just a single item? If so, what does that mean for academics and practitioners? In this episode, Dr. Russell Matthews (University of Alabama), Laura Pineault (Wayne State University), and Yeong-Hyun Hong (University of Alabama), join me to talk about their new paper, Normalizing the Use of Single-Item Measures: Validation of the Single-Item Compendium for Organizational Psychology in the Journal of Business and Psychology.
This transcript is AI-generated and may contain inaccuracies. Please do not quote me or any of my guests based on this transcript.
[00:00:00] Ben Butina, Ph.D.: Hello, and welcome to the Department 12 Podcast, where we talk about everything I-O psych. I’m your host, Dr. Ben Butina. Today, I am joined by three guests. First up is Dr. Russell Matthews, Professor of Management at the University of Alabama. How are you today, Russell?
[00:00:15] Russell Matthews, Ph.D.: I’m doing great.
[00:00:15] Ben Butina, Ph.D.: Next up is Laura Pineault, a doctoral candidate in I-O Psych at Wayne State University and a Senior Research Science Analyst for a consulting company. How’s your day going so far, Laura?
[00:00:26] Laura Pineault: Everything’s great here. So excited to be here.
[00:00:29] Ben Butina, Ph.D.: And finally we have Yeong-hyun Hong, a PhD candidate in Management at the University of Alabama. You usually go by YH, as I understand it. How are you today, YH?
[00:00:39] Yeong-hyun Hong: I’m really great today. Thank you.
[00:00:41] Ben Butina, Ph.D.: So congratulations to all three of you on the publication of your recent article in the Journal of Business and Psychology. It’s called Normalizing the Use of Single-Item Measures: Validation of the Single-Item Compendium for Organizational Psychology. And I understand it was all the rage at SIOP, which just ended as we’re recording this. Russell, you’re the lead author on this paper, so let me start by asking you what got you interested in this topic to begin with?
[00:01:08] Russell Matthews, Ph.D.: When I was still a grad student, so that was a fair number of years ago, I was interning at IBM, and I got the chance to redesign their exit survey. But my supervisor, who was also an I-O psychologist, Tanya Delaney, she’s like, you get one question for everything you want to ask about.
I was like, we can’t do that. We got into all of that whole shenanigans about, you have to have multi-item measures. And she was like, no, you get one item. So I had to wing it there, because there wasn’t really great guidance at that point in time. Since then I did a paper with Gwen Fisher on single items. And then this is just the continuation of it.
[00:01:41] Ben Butina, Ph.D.: Laura, this seems like a massive effort from reading the paper. How many studies were involved in the paper and about how long did it take to create it?
[00:01:50] Laura Pineault: We have five studies within our paper and this effort really started in 2020.
So it’s been a two year continuous collaboration. Russell and myself and YH have never met in person. And so doing this work entirely online, collaborating via email and passing, drafts back and forth and study designs for the past two years.
[00:02:17] Ben Butina, Ph.D.: Wow. YH, there are, you know, many findings in the paper. What, in your opinion, is the single biggest takeaway, if you had to put it on a billboard?
[00:02:27] Yeong-hyun Hong: There is really good evidence that single-item measures can show really good reliability and construct validity, which we demonstrate in Studies 3 and 4. I’m very excited about these findings because we were able to show that single-item measures are not inherently invalid.
So a single item can be fairly valid and reliable once it has gone through a really thorough development process. So, I’m very excited about these findings.
[00:02:59] Ben Butina, Ph.D.: I was also surprised and pleased to see the findings in the paper. Russell from an academic standpoint, what do you hope the impact of this paper is going to be?
[00:03:09] Russell Matthews, Ph.D.: I don’t know what I hope the impact will be. What I hope is people don’t walk away with the notion that we’re advocating that anything and everything can be measured with a single item, right? That’s definitely not what we’re trying to advocate here.
But I guess on the academic side, what I’m hoping is more reviewers and more editors at journals will be more open to the idea of single items, so long as the researchers have demonstrated the need for, or the utility of, the single item and the validity of their item within their study. I think if you talk to most academics, using single items is like a no-brainer. And they’re like, fantastic, now I can add something to cite to support this. But I think what we tried to convey, again from an academic standpoint, is that just like with any multi-item measure, it’s still incumbent on the researcher who wants to use the single-item measure to show that it’s a reliable and valid measure.
And maybe now that there’s this compendium of different single-item measures, that’ll help with some of those ongoing measurement issues related to single items.
[00:05:07] Ben Butina, Ph.D.: YH, can you tell me what surprised you the most, either about the findings of the paper or about the process of writing it?
[00:05:15] Yeong-hyun Hong: Both of them were really good experiences for me, especially as a doctoral student. The experience that I gained through the research process, including the data collection, writing, and revision process, was a great one for me.
And I’m very excited about the results, too, because I believe there may be a lot of future potential users of single-item measures, which can help address the theory–practice gap by facilitating more use of these items in surveys among practitioners, because they’re short and there’s more feasibility in using those shortened surveys for practical research. So I’m very happy about that.
[00:05:56] Ben Butina, Ph.D.: So, it sounds like if we’re using single-item assessment responsibly, we can create a survey that is shorter and, potentially, we can create a survey that measures more constructs than we would otherwise. Is that a fair characterization?
[00:06:12] Russell Matthews, Ph.D.: It’s funny you say that, because that’s one of the driving things for me. One of the things that I’m always worried about in a lot of the research, because I’m an editor and a reviewer on a lot of different things, is that we see a lot of deficient models. Like when people are testing things, I would argue a lot of the models are deficient.
They don’t measure everything that they could. And obviously there’s practical reasons why you can’t stick everything in a survey. But potentially with the use of some of these single item measures, as you were alluding to, we can start tapping into some other tangentially related [issues]. We can rule [them] out.
Some of my friends are in economics, and they’re really worried about robustness tests. Does the effect still hold after controlling for X, Y, and Z? And I think there’s some real utility here, making sure that we can measure some other stuff.
And I’m going to self-promote even more for a second. We just had a paper that’s conditionally accepted, where every construct in the paper is measured with a single item, most of them coming from this actual assessment compendium. And it’s a pretty complex longitudinal model, wherein we couldn’t measure all these long scales multiple times. So, I would agree with what you’re saying for sure.
[00:07:15] Ben Butina, Ph.D.: Switching gears a little bit to the actual writing of the paper. Laura, earlier in the interview, you mentioned that you, Russell, and YH had never actually met face to face. So I’m just curious, how do you get started and figure out who’s going to do what on a paper like this? How did that go?
[00:07:31] Laura Pineault: Yes. So I suppose there’s a correction. Russell and I have met once in person, which was the impetus for our collaboration, back at a work-family researchers conference in Alabama, actually in March, but not with respect to this paper. It wasn’t a collaboration at that point. And so, when we decided to undertake this effort, we first started with this series of studies.
And then when it came to the writing process, I would say this is a true example of apprenticeship, with Russell really leading both YH and me on what it means to write and design a very comprehensive test of this question that we had, which was: can we measure a series of constructs in the organizational sciences using single items?
And then how do we craft the most compelling narrative and series of studies to test that overarching question? And throughout the review process, there was an equal amount of learning: being responsive to reviewers in a way that said, I am learning from what they are saying, and what are we going to do to address those comments from a place of humility.
So all of that Russell exemplified. As I go into my future as an academic, I will bring that with me forever. So, Russell, I suppose this is an opportunity to say thank you, truly, for this opportunity and the excellence with which you write and lead those you collaborate with.
[00:08:58] Russell Matthews, Ph.D.: Well, Ben, I’m going to just butt in here real quick and say this paper would not have happened without Laura and YH. I have a tendency to go, oh, let’s write something, this will be fun. And then the context that they both brought grounded this paper. This paper exists because of the two of them.
[00:09:14] Ben Butina, Ph.D.: That’s fantastic. And I’m going to include a link to this study, as well as links for each of the authors. And I’d like to thank each of you for being on the show and sharing your paper.
[00:09:24] Russell Matthews, Ph.D.: Thanks, Ben.
[00:09:24] Laura Pineault: Thank you, Ben.
[00:09:26] Yeong-hyun Hong: Thank you very much.