Podcast

Sy Islam and Gordon Schmidt on Debunking and Testing Business Practices

Should we spend less time testing our own theories and more time testing business practices as they’re actually implemented in organizations? Can I-O psychologists improve Lean Management? What’s the difference between testing and debunking? All this, and more, in my interview with Dr. Sy Islam (Talent Metrics Consulting, Farmingdale State College) and Dr. Gordon Schmidt (Purdue University Fort Wayne) about their paper Getting in the Game: I-O Psychologists as Debunkers and Testers of Business Practice.

Links

Transcript

This transcript is AI-generated and may not be completely accurate. Please do not quote me or any of my guests based on this transcript.

Ben Butina, Ph.D.: [00:00:00] Hello, everyone, and welcome back to the Department 12 Podcast. I’m your host, Dr. Ben Butina. My guests this evening are both returning guests to the show. Welcome to Dr. Sy Islam, Associate Professor of Psychology at Farmingdale State College and Vice President of Consulting for Talent Metrics Consulting. Nice to have you on again.

Sy Islam, Ph.D.: [00:00:21] Thank you, Ben. Glad to be back.

Ben Butina, Ph.D.: [00:00:24] And welcome also to Dr. Gordon Schmidt, Associate Professor of Organizational Leadership and Chair of the Organizational Leadership Department at Purdue University Fort Wayne. Thanks for being here, Gordon.

Tonight we are talking about your paper, Getting in the Game: I-O Psychologists as Debunkers and Testers of Business Practice.

It was published in 2019 and just came on my radar, and when I saw it, my eyes lit up. This is the kind of paper I really like to sink my teeth into. Let’s start here with Sy. Lean management is one of the most influential business ideas of the last three decades, at least that I know of, especially in manufacturing, where I’ve spent a lot of my time. But when I search I-O psych journals, I enter a parallel universe where lean never happened.

It’s like, I don’t know, WandaVision or something. What is going on here? Am I on drugs or is there a better explanation?

Sy Islam, Ph.D.: [00:01:19] So this was actually one of the first conversations that Gordon and I had about this paper and this topic, because of the way the journal is structured: for this paper, there’s a focal article.

And the focal article in this case was about lean management, which was a topic that Gordon is really into, you know, in terms of practice, right? Understanding it, reading about it. And in the business world it’s a very well-known topic. The big pitch in that focal article, in this issue of Industrial and Organizational Psychology: Perspectives on Science and Practice (try saying that five times fast), was, hey, lean management is a thing.

People do this thing. How does it fit into the world of I-O psychology? Because, as you mentioned, Ben, we didn’t really study it. We haven’t really looked at it, and as a management practice it doesn’t appear in our research literature at all, or if it does, it’s very, very rare to find. And so, you know, lean basically says you try to do things as inexpensively as possible.

You use a technique called Six Sigma to kind of approach that, and it’s a technique that has been adopted quite widely. And it has a really cool setup where you can get different belts. It’s like being a martial artist. So of course it took off, because it’s like being a business martial artist.

I’m sure somebody is going to be listening to this, to my description of lean management, and think, that’s a very, very simplistic way of looking at lean management, but for our purposes I think that works. So the idea is that you’re going to increase profitability and improve your organization through this technique.

And when Gordon and I read this focal article, we wanted to expand the scope of what the focal article had to say by looking at it and saying, well, there’s a lot of stuff that we don’t cover in I-O psychology that exists in the world of business, and lean was just the tip of the iceberg.

And even if you read the focal article, which I recommend, by, you know, Zickar and colleagues, lean has existed for many, many years, for decades, and we, for whatever reason, as a field haven’t really touched it. But, you know, we set ourselves up as being the scientists within the world of business, and specifically our work focuses on selection and, you know, kind of predictive analytics within this space.

But what we felt after reading the focal article was that there’s a lot of space where we can really flex as industrial-organizational psychologists, and we’re not really capturing that part of it, which is where this paper, as a response, comes from.

Ben Butina, Ph.D.: [00:04:18] I’m going to have a link to your article and to the focal article in the show notes. And I would encourage you, if you haven’t read this journal before, to start, because it’s a really cool format. As I described, there’s a focal article, which is basically an article that’s shared with readers before publication, and then readers can decide to respond to that focal article with articles of their own.

And it’s all peer reviewed. And what’s really fascinating about it to me is seeing all the different perspectives and all the different jumping-off points that people can take based on that focal article. So this was in response to lean, but Sy and Gordon broadened it and said, hey, we, meaning I-O psychologists, need to be debunkers and testers of real-world stuff.

So, Gordon, why do we I-O psychologists need to get in the ring on this one?

Gordon Schmidt, Ph.D.: [00:05:13] To my mind, it does come down to what is the value of psychology, of understanding the psychology of people.

And I think the biggest thing that we see in technology often is there’s some cool tool or technique and it’s going to fix everything, but we don’t look enough at the psychological aspects of why or how that’s going to work. And I think things like lean can be applied in ways where you just do whatever the things Toyota did: you know, you get your Kanban board and you mention Kaizen a lot.

And therefore you’re going to be successful in what you do. So to me, it’s an area where the tools, I think, can be very powerful and useful, but they also can be completely pointless when we don’t consider the effect of people on what’s going on. And so to me, it’s a very natural fit for I-O psychology to kind of fill that need.

Right? Because we’re often looking at what individuals do and how their psychology affects what they’re doing, and that’s a big part of it: why are people motivated to do lean? How do you get people to buy in? How do we do teamwork related to lean? How do we do leadership that influences people?

A lot of times we talk about the tools, and I would say this is true for things like artificial intelligence as well, things like machine learning, where we get focused on the tool that’s going to fix our problems. But ultimately people need to use the tool correctly. They need to understand it. They need to be motivated for it.

Otherwise it’s just, you know, another tool that’s potentially wasting our time. That’s kind of my feeling on it, at least.

Ben Butina, Ph.D.: [00:06:55] I’m a green belt from ASQ in Lean Six Sigma, and before I ever took a stats class, I learned how to do statistical process control and hypothesis testing through lean methodology. It was really just about throughput on a production line. The human aspect of it was never addressed, and I at the time wouldn’t have even thought to address it. And if I had, I wouldn’t have known how to do it. So it makes a lot of sense that, hey, maybe the people who study the psychology of work and the workplace might have something to say about this process that intimately involves human beings.

Sy, bounce it back to you. Debunkers and testers. So testers I get, and I don’t have any heartburn about testers, but in the article we kept coming back also to this word debunkers. And isn’t that like tattooing the phrase confirmation bias on our foreheads? Like, isn’t that saying we already don’t think this is true,

And we’re just looking for data to support that.

Sy Islam, Ph.D.: [00:07:56] There are already things in the workplace that exist and are used that we know, based on our scientific evidence, really don’t get supported at all, but they still continue.

Our favorite example is something like the Myers-Briggs Type Indicator, right? Which is extremely popular, but it doesn’t have the sort of psychometric reliability and validity that we focus on in the world of I-O psychology. We agonize over those values, over the predictive capabilities of our tools, and a tool like the Myers-Briggs.

It’s been under fire for a really long time, but it continues to be used. And so that’s where I think the phrase debunking comes from, because there are these sorts of products and tools that continue. So another one for me is learning styles. Ben, you’re in the world of talent development,

and so you’ve probably heard the phrase, you know, “I have this learning style,” right? You know, “I’m an audio learner” or “a visual learner,” and there’s no good evidence for that technique to be used in the development of training programs. There’s no data showing that people have a primary learning style.

It doesn’t really seem to help design the programs more effectively. And so there are lots of these types of programs out there, and different types of topics, that really need to be debunked, or where we need to highlight the fact that the scientific basis is very limited.

I-O psychology, as a rigorous academic discipline and a fairly rigorous practice, needs to touch on topics like this, because if we allow things that don’t really work, or approaches that don’t really work, to continue, we don’t know what sort of harm might come from them.

And there are lots of very iffy, very concerning methods out there that might actually harm the field overall. You know, in the past 15 to 18 months, we’ve been talking quite a bit about things like diversity training. And one of the interesting conversations for me is watching people talk about diversity and realizing that when people think about diversity programs, they primarily think of unconscious bias training or some sort of diversity training.

And over the past year to year and a half, things like unconscious bias training or other types of diversity training have come under fire for not being effective. But we need to be in that conversation, so that we can show the value of diversity and inclusion training, and also, if there is a problem,

we are good enough to say, hey, this is the weakness of the program, this is what we have to work on. Because right now organizations are making choices without that kind of information, without making that kind of distinction. And I-O psychology, with, you know, a hundred years of rigorous academic research and slightly under a hundred years of practice experience, really needs to step forward as a field and say, hey, this is our jam.

This is what we can do. And this is very closely related to the fact that many people outside of our field don’t really understand what we can do. And when Gordon and I were talking about this, it seemed like an easy framework and an easy lens by which to talk about what I-O psychology fundamentally is.

Ben Butina, Ph.D.: [00:11:30] There are, let’s say, two buckets of things that we could look at. One is: here’s the stuff that we’re pretty sure doesn’t work. We have a good amount of evidence, or lack thereof, to show us that this common practice is not a thing that works; we don’t have good evidence for it. And for that one, we need to wave the debunking wand and say, hey, we need to figure out ways to convince,

you know, the rest of the world of something that we already know. But in the other bucket there’s less certainty. So lots of practices, we don’t have much evidence about them either way, at least generated from within our own field, about whether they work or not. And so our role there can be testers.

And I guess the central idea of your paper, as I understand it, is that we are driven, for the most part right now, internally. So we’re building theory within our own field.

And although we’re an applied science, it’s still kind of inward-looking, at least in terms of the published peer-reviewed research. And your argument is, hey, we need to also be studying business practices in the real world as they’re done, whether or not we created them or generated some kind of theory that relates to them.

If this thing is being done in the real world by a lot of people, then it’s worth it for us to test it. And that’s exciting to me, but I do wonder what we lose. So let me ask you this, Gordon. Suppose I’ve got another magic wand and I wave it, and about half of the researchers in I-O psychology are now focused on testing and debunking real-world stuff,

theory be damned, and the other half are kind of going on with business as usual. I think it’s clear what we gain. What do you think we lose if we do that? Or what are the risks of doing that?

Gordon Schmidt, Ph.D.: [00:13:38] Well, I think it’s impressive to get 50% of people convinced to do anything.

Ben Butina, Ph.D.: [00:13:42] So I got this magic wand, Gordon.

Gordon Schmidt, Ph.D.: [00:13:45] For that one, I’ll have to borrow it sometime. Well, I think that’s always a question with research: what value are we creating, and how does it affect the world? And I think that impact is something that we don’t look at very well.

So, you know, we’ve got journals that we think are good. We look at things like citation rates. But whether anyone is using this in the real world, I don’t think we have a good handle on. And there are a lot of people who are mostly making stuff up that have a lot more impact in a lot of areas.

And so to me, I think if you legitimately had 50 percent of I-O psychologists focused on testing business practice and publishing on these issues, I think you’d just have a lot more impact for the field. I think people who really want to do in-depth, rigorous theoretical work would still be doing that.

It would just be giving a mission that I think might have greater impact for a good part of the field, because, yeah, we can chase a lot of those same studies with lab studies and look at niche issues where, I don’t know, some of that is frankly not much value add, ultimately. So I don’t know whether we lose something. Probably there’s a mix of who should be in which camp, or who’s good at which, that might take some time to figure out, because I don’t think we want to be doing this work and publishing it in an impenetrable scientific way that no one outside of I-O can understand.

I don’t think that serves the mission well. And so I think that is part of the struggle, too: people don’t come to our journals and know what we’re talking about necessarily, unless they already have our perspective. And it’s part of the reason things don’t get out as well. So I think it’s worth a shot using the wand, is what I’m saying.

Ben Butina, Ph.D.: [00:15:46] I’ll take it under consideration. We’re an applied science, and we have been trying to communicate what we know to practitioners, you know, to the real world, but it’s mostly been a one-way conversation where we sit on the mountaintop and say, foolish mortals below, with your psychometrically invalid assessments and whatnot,

we’ll now share our wisdom with you, and we’ll try to break it down to maybe an 11th-grade reading level so you will understand it. But the conversation hasn’t gone both ways. We’re not also listening to what those businesses have to tell us about their real-world conditions, the stuff that they’re actually doing, the context in which they’re doing it.

Now I’m going to give you another hypothetical, Sy, so you can rest easy for a little bit, Gordon. A grad student in I-O psych sends you an email, Sy, and they say: I really, really loved this paper. This is it for me. This is what I want to build my career on, doing this kind of work, because I think it has so much potential.

I also need to build some kind of career for myself as an academic, as a researcher. What do you think the chances are I can get this kind of thing published anywhere? What do you think the chances are I can get tenure somewhere based on a body of work that is this kind of testing?

Sy Islam, Ph.D.: [00:17:08] Okay, so, as the crusher of hopes and dreams. We went from Gordon and his magic wand to, you know, Sy, the destroyer of worlds.

Ben Butina, Ph.D.: [00:17:19] I did kind of make you the bad guy. Sorry.

Sy Islam, Ph.D.: [00:17:22] That’s okay. You know, the villain always has more fun. So I would say you’re probably not going to have much luck if you’re planning on going into academia and your goal is to have an academic career,

and you want to focus on this type of work and plan on going into an I-O psychology department, whether it’s teaching undergrads, a master’s program, or a PhD program. I do not believe that you’re going to have that much success from a publishing standpoint, just because the nature of academia is such that you need to have a body of work that plays into an existing theoretical framework, right?

So you need to be building towards that scientific theory, and just taking this sort of debunker’s approach I don’t think works particularly well, especially if you’re planning on working with organizations. If you’re really interested in looking at the science, and you’re saying, hey, I’m interested in looking at what works and what doesn’t in our scientific field,

I would say that what you’re really talking about there is something called metascience, and the people you may want to follow in that area are people like Dr. Brian Nosek, who’s a big name in the open science world. There’s a gentleman by the name of James Heathers who has a great Twitter feed and is very funny,

but he’s also devoted a lot of his time and energy towards looking at mistakes and errors and error management in science. Heathers has written quite a bit about this on Twitter. You know, it’s very hard to create an academic career out of looking at “here are the mistakes that I’ve noticed in others.”

The area of research where I think you could possibly become an academic and do a lot of this type of work may not be I-O psychology at all.

It might be evaluation, where the primary focus is on looking at and understanding how well programs are working, what works, what doesn’t, and what the impact is overall in an area. But it may not be in a traditional industrial-organizational psychology department, because in those departments you need to be contributing to the theory.

You may need to be trying to get grant money, and that is all based around traditional academic markers. And, you know, if you look at the focal article that we’re talking about, if it took twenty-odd years to get to lean management, there may not be a journal that’s interested today in the hot new business topic.

The other way to approach this might be looking at and talking to existing business journals. So one of the interesting things about I-O psychology as a field is that, if you’re planning on going into academia, you can teach in an I-O department, but you can also go into a business school. I think there may be a place there, but even in a business school, you may need to focus more on developing that theory rather than

trying to publish papers that are focused on these areas. If you did, you’d need to find a specific set of business practices, maybe that’s lean, and then look at it through an academic lens that’s sort of already established. And as Gordon mentioned, it could be motivation, it could be a number of different approaches, but it wouldn’t purely be this work of, well, I’m going to evaluate these things

and then say whether they’re good or bad or medium, right? That probably wouldn’t happen. On the practice side, depending on where you end up landing, it’s really hard to tell companies and organizations sometimes that the thing they’ve invested in isn’t working as well as they thought.

It’s an incredibly dangerous conversation, and as it is, it’s like a political minefield. So, you know, one of the areas where Talent Metrics tends to work is helping organizations figure out if their programs are as effective as they might be, or how effective they happen to be. And if an organization has invested quite a bit into a technology platform or, you know, something else,

sometimes stakeholders in the organization don’t want to hear that this thing is not working the way they expect. And in that situation, sometimes we have off-the-record conversations where we communicate some of our concerns, but what happens with that information depends on the organization.

In some cases they say, okay, we’re going to make a change, and in others they say, no, we’re sticking with this. There’s a political reason to continue, and we’re going to continue with this technique or this approach, even if you are telling us that it’s not really working.

Ben Butina, Ph.D.: [00:22:20] The irony here is that it kind of relates back to exactly the topic. You know, we’re saying, hey, we think that the testing and debunking is a great idea. But hey, there are all kinds of reasons that the field of industrial and organizational psychology right now doesn’t necessarily have a journal for you to publish that in.

You’re not necessarily going to be able to build a career, much less a tenure application, out of this. There are all kinds of reasons, even though it seems like a really good idea, that we’re not doing it. At least we’re not doing it yet. And very often that’s exactly what we run into as I-O psychologists.

When we’re trying to persuade a business client, for example, to implement one of our ideas, it seems very clear-cut to us why you would want to do this, but there are reasons, political reasons and tradition and all kinds of cultural things,

that mean no, we’re not going to do that; we’re going to continue doing the thing that we’ve sunk our costs into. Back to you, Gordon. This will be the last question, but you do get the hard one.

Since I gave Sy a hard one, I’m going to give you another hypothetical. I’m a business owner, and you want to work with me to test some practice that I’ve implemented. What’s my incentive to do that? If you find out that what I’m doing doesn’t work, then I look like a chump.

And if you find out that it does work, then I’ve kind of published my secret sauce for all of my competitors to see. So why am I going to cooperate with a tester or a debunker?

Gordon Schmidt, Ph.D.: [00:23:57] So, you know, we have in business things like benchmarking, as well as best-practice-related things. I think some of these things are not used well and are not actually best practice.

Ben Butina, Ph.D.: [00:24:10] But they’re basically common practices, though.

Gordon Schmidt, Ph.D.: [00:24:13] Yeah, yeah. They’re really common practices, sometimes.

Ben Butina, Ph.D.: [00:24:15] I’m going to get in trouble for picking at this, is what that means.

Gordon Schmidt, Ph.D.: [00:24:19] Yeah. But I think that’s part of the question: do you want the practices you use in your organization to actually work?

Are they worth the effort and time that you put into them? And I think that’s a question that is actually very important and that we don’t think about as much. But if you’re using lean in your organization, I think there’s an argument for us to test how it’s going, as well as to see how

it functions in various ways. For instance, for this special issue, I also did a commentary related to motivation constructs and lean. And there are a lot of things you would predict from goal-setting theory that would suggest lean’s not going to work, because of the near-impossible goals and things like that that you’re building in.

You know, you might think people’s self-efficacy would be low if you’ve got such high goals and such change going on. But I think you can apply a lot of these things and say, well, what we really need to do is build the self-efficacy of the people in our company towards this lean initiative, or we need to set proximal goals that will help get us to these huge waste-reduction goals that we have.

I think there is an argument that our testing isn’t just vote up, vote down, like this is Gladiator or something. I think it is: what works, and what’s going to help you do what you’re doing? And so to me that is the argument for this, because I think in a lot of cases it’s not good to just say things don’t work.

That’s what we’ve heard a lot with unconscious bias training: it doesn’t work. Well, what should I be doing then? Well, having a better environment for diversity, being supportive, having policies. But that sounds like a lot of work, and to some degree it’s vague, when I can just do a training and we’ll feel better about ourselves.

Right? And so I think you do need people who are actually forward-thinking in what they want to do and really do want to make practices better, versus making it look like things are okay or doing all right. I think that’s a lot of the applied issue we have with things like training: I do the training program,

and if I evaluate it and it doesn’t work, it seems like you guys just wasted a bunch of money and I did a bad job. Well, in fact, evaluation is how we tell what works and potentially modify it to be better. So to me, this fits really well with the continuous improvement perspective: what’s working, what’s not in our lean initiative, and how we can use science to do it better.

So I think there’s probably a way to frame it. Maybe we need some type of qualification, a Lean Six Sigma I-O black belt, level five, or something, to make this sound more appealing. But I do think continuous improvement and all these things suggest we should be trying to get better, and this fits very well with those concepts.

I think there is an argument for it. I don’t think it’s necessarily easy, but I do think these perspectives and theories overlap with what things like lean are actually trying to do. Maybe that’s naive, that someone wants to actually do that, versus, to some degree, we’ve got the tools, we’ll just use the tools.

But how we frame it, I think, could make it a better sell than “we’re going to show you the things you’re doing are dumb, and we’re not going to tell you anything else to do; you’re just going to look bad.”

Ben Butina, Ph.D.: [00:27:57] It’s like you go to the doctor and the doctor either says, you’re healthy, great, or, you’re in trouble, go home and die. There might be something in the middle. And so, for those of you at home, that’s what I did with my hypothetical: I took a continuous variable and I dichotomized it unnecessarily, and you got to see how much richness of that data we lost by doing that, by making it a yes-or-no, pass-or-fail thing.

Gordon Schmidt, Ph.D.: [00:28:28] There is a marketing thing here, Ben, because I think debunking is more interesting to people in academics, while testing or improving is probably more interesting to people in industry, to make their thing better.

So I think scientists feel cool when we debunk things, but I don’t know whether that’s the best framing for people: we’re going to show your practices don’t work. Rather, we’re going to help those practices be better, because there are parts of lean that work, there are parts of things that work; we just need to improve the things that don’t. That’s how I’d frame it, at least.

Ben Butina, Ph.D.: Yeah, that’s really bad news for my planned 20-year campaign of making fun of the Myers-Briggs Type Indicator. I’ve got to rethink the whole thing now, thanks to you guys. I want to thank you both for being such good sports. I took sort of a hostile approach with my questions, but as a practitioner, this is exactly the kind of paper that I want to wave in the air like a flag at the next SIOP.

You know, so I love it. I think it’s so valuable to think about this stuff, and I hope, if we talk about this again in five or ten years, that some of that infrastructure we’ve talked about, that’s not in place now, is maybe in place then to support more testing and debunking of real-world business practices.

So I am going to include a link to the paper itself and to the focal article, as well as links to your social media and business accounts, Sy and Gordon. Thank you very much for writing this paper, and thank you very much for talking to us about it. Great, thanks. Thank you, Gordon. We had a great time.