The Hidden Causes of AI Workslop—and How to Fix Them


ALISON BEARD: I’m Alison Beard.

ADI IGNATIUS: I’m Adi Ignatius, and this is the HBR IdeaCast.

ALISON BEARD: Artificial intelligence promised to make us faster, smarter, and more productive at work. So why does it sometimes feel like it’s doing the opposite?

If you’re trying to figure out how to use AI without undermining your culture, your collaboration, or your credibility, this episode is for you.

In today’s episode, we’ll explore why AI…

ADI IGNATIUS: Stop stop stop. Alison, did you even write this?

ALISON BEARD: Good catch, Adi. No, our producer Mary and I decided to give it to ChatGPT. And that’s not because we were busy with other stuff – though we were – but to try to prove the point of today’s episode: that the rise of AI has also meant the rise of what our guests call AI “workslop.” That’s bad product that kind of passes, but actually creates a lot more problems than it solves.

ADI IGNATIUS: So I know who you’re talking about, and your guests have written a couple of articles for us on AI workslop. They’ve been among the most popular things we’ve published in the past year. Workslop is a problem. AI is achieving a lot for us, but it’s also creating content that’s problematic.

ALISON BEARD: Exactly, and this issue is resonating; the coinage of the term workslop has really taken off. But one of the key points we’ll dive into is that this isn’t just laziness: there are structural pressures causing workers to use AI to create junk.

Two coauthors of those articles you mentioned are Jeff Hancock, a professor of communication at Stanford, and Kate Niederhoffer, chief scientist at BetterUp. They’re going to explain how this trend hurts teams and organizations, and outline the changes that leaders have to make to ensure our AI use is working for us, not against us. Here’s our conversation.

ADI IGNATIUS: That was really you, right?

ALISON BEARD: Yes, Adi.

So let’s start with the basics. How do you two define workslop?

JEFF HANCOCK: I really like the definition we started with, which is that it looks like it does the work, but actually doesn’t advance the task. And even more, I like the word masquerade that we use to define it. Masquerade captures that: it looks good, but actually isn’t what it says it is.

KATE NIEDERHOFFER: I think the most important part about the definition is that it is interpersonal and it shifts the burden of the work onto the receiver. But it’s really important to know that we did not define it like that in talking about it to others to measure the prevalence, for example. So the classic definition of it is low effort, low quality AI generated work that appears to fulfill a workplace task, but doesn’t really have the substance necessary to do that. But when we talk about workslop, it’s really important to think about that interpersonal shift of burden.

ALISON BEARD: Obviously there have always been people who phoned in their work, but how has the rise of gen AI made that problem worse or more prevalent?

KATE NIEDERHOFFER: People ask us this question all the time, and what they’re missing is that with low quality work prior to AI, there was really no question about it. But AI has this special way of decoupling effort and quality. So the signals are almost deceptive now: when you receive it, it’s low effort and low quality, and it’s trying to pass for something other than what it is.

JEFF HANCOCK: The other thing that AI allows you to do is just do more of it. I think one of the things that we were shocked by is how pervasive it is. We asked people not only how often they had received it – a fair number, about 40%, said they had – but also whether they had sent any workslop. And this is a pretty negative question, so social psychology tells us social desirability bias should really lower that number. Yet 53% of our participants said that they had sent some workslop, or that some of the AI work they did was sloppy. So if anything, that’s an undercount, and it really gives you a sense of how pervasive it is when people worry that even they are doing it themselves.

ALISON BEARD: Yeah. So people are admitting to the bad behavior.

JEFF HANCOCK: Right.

ALISON BEARD: I think the premise that AI companies are putting forth is certainly that you can use these tools to effectively automate a lot of your routine work. That’s the promise: they can free you up for more complex tasks or deeper thinking. So why isn’t that happening now? Is it because people are lazy, or because they don’t know how to use the tools properly?

KATE NIEDERHOFFER: We tend to think it’s not that they don’t know how to use them. The responsibility is not so much on the individuals; they’re put in situations where they’re already feeling tired or disengaged and don’t really have the fuel to operate at work in the most powerful way right now. And they have these mandates to use these powerful new tools on top of everything that’s on their plate, or in order to do everything on their plate. So it’s not so much that they haven’t had time for training; it’s that this is a new class of tools that requires an agentic mindset – we would call it a pilot mindset – to really figure out how to have ownership of the work you create, how to edit it, how to discern what voice you want it to have, and how to work with the tools to make sure the work is meaningfully advancing the task at hand.

JEFF HANCOCK: One of the things that became clear is that it’s easy to blame the person, say, “Oh, look, this is just a person being lazy.” And that’s not really what this is. I think workslop, and we argue this in our paper, is more of a symptom that there’s a problem in the organization. And so if that’s true, then it’s a leadership problem. And when we look at all the factors that lead to workslop, it’s really a recipe with two main ingredients. One is AI mandates that are really general, “Hey, you need to use AI. We’ve just spent all this money. You better use it.” And the second is, “Because we have given you all this AI, you should be able to do more work.” So if you overburden people and you tell them they have to use AI, the likelihood that they produce this workslop goes way up.

ALISON BEARD: What have you learned about the costs of this prevalence of AI generated workslop? First, let’s talk about the hits you’re seeing to productivity, decision making, and performance.

KATE NIEDERHOFFER: The costs are multiple. I think the first cost is just the cognitive effort required to understand what’s going on. People spend time not only figuring out what’s happening – whether it’s missing context, or contains something that feels a little inaccurate or unlike the sender who produced it – but also deciding what to do about it: whether it’s egregious enough to say something to the person who produced it, or even to initiate some sort of gossiping behavior about how absurd it is that somebody would miss important context you know needs to be in a document, or use a style that’s so unlike them.

So the first is just that cognitive effort. But what we found in the research and what really was remarkable to us was how emotional it is, how annoyed, frustrated, even angry people are when they receive it. So there’s sort of that first wave of emotional experience that you have that’s like, look, I’m just trying to get my work done here and this is costing me more time and it’s clearly not authored by you and it’s not advancing this task.

And then it creates this interpersonal phenomenon where it actually makes you judge the producer of the content to be less competent, less trustworthy – you don’t want to work with them anymore. So all of that is a cost. As Jeff mentioned before, people really focus on productivity, the time being drained from the workforce, but I think the more toxic cost is the emotional and interpersonal one.

ALISON BEARD: Yeah. So that knock on effect hurts collaboration, trust, engagement, and wellbeing.

JEFF HANCOCK: Totally. We start seeing that people are… If Kate had sent me something, I literally judge her as less creative, less capable, and less trustworthy. So it gets right at the core, the foundation of teamwork. And so it has these huge interpersonal costs. I think Kate’s right that they’re probably the biggest cost, and they’re a little bit invisible. For the more visible costs: on average, people said it took them about two hours to deal with an instance of workslop. You’ve got to detect it, and then you’ve got to think, “Oh crap, what am I going to do? Do I ask Kate to redo this? Do I just do it myself?” So it’s two hours of work. We did some back of the envelope calculations where we asked people what their salary was and how much time they estimated they had to deal with it. For a company of about 10,000 employees, that’s $9 million a year. So even the hard productivity numbers are non-trivial.

ALISON BEARD: Which is ironic because companies are paying for these tools in order to save money.

JEFF HANCOCK: That’s right.

KATE NIEDERHOFFER: One thing I just wanted to add is that managers are actually reporting spending more time and effort dealing with it. So there’s something about this being a more effortful process for more senior resources, which I think is ironic. People who take more pride in their work, and managers, tend to say that dealing with workslop is a more effortful process.

ALISON BEARD: I guess the question is, you’re asking people to find the right balance between using AI input and their own critical thinking. And it’s hard to figure out right now where that sweet spot is.

KATE NIEDERHOFFER: I think what we’ve realized in the evolution of our research is that this is a leadership challenge. And so the onus is not so much on us exclusively as individuals, but it’s a leadership challenge to set up the talent infrastructure to ensure that we have the right conditions in talking about how to introduce these powerful tools and how we want to achieve any sort of productivity or innovation gains.

So I think we do need to start figuring out the balance between our voice and the expertise of these tools, but I think we have to move away from a tool focused or even tech focused conversation and into what type of organizational changes do we need to make, everything from the communication of this opportunity and privilege to introduce these new tools to creating the culture that is really connected and trusting and ripe for people to lean in and engage in their work.

ALISON BEARD: Okay. I want to dig into all of that because solutions are why we’re here. I guess the first question is, how do you diagnose how big a problem this might be in your organization before you even start trying to tackle it?

KATE NIEDERHOFFER: Well, there’s one really diagnostic thing, and that’s just to understand, is AI mandated in the organization? That’s like the single biggest predictor of workslop. And so it’s pretty telling if people feel as though their organizational strategy is one of mandates. And then from there, you can go on to measure the prevalence. We’ve had quite a good success rate in picking up on a reliable prevalence rate across different organizations, situations, geographies even.

So I think the first thing is visibility into the problem and thinking about the root cause. I would also encourage leaders to think about their culture and to measure things like engagement. Jeff and I have done a lot of research on optimism and agency, and that’s something you can measure as well: simply how agentic and optimistic people are about their work, but also their mindset toward AI. Each of those is an important diagnostic to think about today.

ALISON BEARD: So what are some steps that leaders might take to create a more positive, productive culture around AI use?

JEFF HANCOCK: Well, I’ll talk about two that I think leaders could adopt right away. Number one, as Kate said, move away from general AI mandates and instead think about how AI can function within the firm. And I think the team level is a really important way to think about it. One theme I saw at the World Economic Forum in Davos this year was how to get teams to rethink and redesign their work together in the context of AI. So instead of, “OK, I’m going to use AI differently or learn and become more literate,” it would be: how do Kate and I and our team rethink how we do research now that we have this tool?

And that would change the way the team works. It also surfaces how AI can play a role, but it keeps my agency involved – I’m the one working on redesigning our teamwork together. So I think that’s a really powerful approach. The other one is trust. There are a lot of people who aren’t dumb and are thinking, “Wow, everyone in this company signed up for AI, that’s great. Am I going to get laid off because of this?”

I think that the way that leaders talk about AI and the vision that they have inside their organization is heard loud and clear by employees. Everybody knows AI is this big, powerful thing. It’s really ambiguous. Nobody really knows what’s coming. And so if leaders are talking about automation all the time, if they’re using these general mandates, I think employees are detecting the signal and will start looking for exits.

ALISON BEARD: And maybe that’s part of why they start to generate workslop, because they feel like they’re going to be outsourced soon anyway.

KATE NIEDERHOFFER: That’s exactly right. We’ve recently been researching that exact thing and we find that employees are really perceptive. They’re really picking up on subtle cues and making powerful judgments about organizational AI strategy and its implications for the extent to which an organization is resilient and can modernize and can really approach the future in the right way.

I just wanted to add a couple things to what Jeff was saying, because I think we did really hear a different conversation at Davos this year, which was about re-imagining the future of work. And you can see in our second article, we have some suggestions about ways that people can reimagine and think about a new model for leadership as well as management. There’s a really interesting distinction that I actually heard on your podcast long ago about the difference-

ALISON BEARD: That’s good to hear.

KATE NIEDERHOFFER: Yeah. About the difference between leadership and management and that in times of stability, it’s really about management and what type of systems are going to help scale, but in times of volatility and uncertainty, it’s really a leadership challenge.

And so I think that’s what we’re seeing now: there is a leadership challenge, as we’ve discussed, and it’s an opportunity to really reimagine what leadership looks like in the organization. That includes the design of the org as well as new roles. Jeff and I have suggested a new role, something like an AI collaboration architect. This is somebody who’s fluent in what we’ll lightly refer to as the human and the tech, who understands: what are some of our collaboration issues here? What are the challenges we’re really struggling with that need to be solved, whether with AI or other tools? And then, how can I embed AI into this workflow to solve the problem? Instead of just throwing these tools in, it’s thinking like an architect about how you can embed this very powerful technology to solve some of the real challenges you have. And that’s when you’ll see the real productivity and innovation gains.

ALISON BEARD: And it sounds like, Jeff, you were saying that it’s really important for C-suite leaders, for that new AI collaboration executive to listen to the teams on the ground to figure out the AI processes that will work best for them.

JEFF HANCOCK: Yeah, that’s right. I mean, if you think about how Toyota redesigned the manufacturing of cars, they did it by having the employees engaged in the redesign of the way the factory would work, and that led to massive success because the employees were engaged and they had agency. And I think the same thing happens here. Trust your teams: if you believe they can rethink the way they work and make themselves more efficient, or even better, create new capabilities, that would be amazing.

But there’s a risk for leaders with that. Our economics colleagues have talked about the J curve of these kinds of new technologies, where initially there’s a decline in productivity. I think we’re in that dip of the J curve. We have to invest in our people and give them time and space to rethink how they’re going to be able to work with these tools. And that’s an investment. It won’t be until those teams are able to rethink things and redesign that you’ll see those massive payoffs. And I think the massive payoffs come from augmenting teams, allowing them to do new things – which is risky; we don’t know what those new things are – versus automating everything that they did.

ALISON BEARD: It strikes me that right now we’re in this phase where everyone is experimenting on their own with AI tools provided by the company or not, and with vastly varying degrees of success. And so it really does need to be this sort of information coming up from the ground, but then a top down approach to figuring out how we’re going to make humans and the technology work best together.

JEFF HANCOCK: That’s right. It’s the weirdest thing. We see this actual hiding of individual use, and that just crushes innovation. Part of it is the reaction to our workslop article: people are like, “Oh, look, see, when people use AI, it’s bad.” A lot of firms’ initial reaction to AI was a risk approach, which is totally fair, but it led everybody to be worried about using AI. And look, if you’re a young person in your 20s, you’ve already seen that your job outcomes and employment are impacted; they’re the first generation to see a decline in employability. So we have all these reasons why people are staying at the individual level, using it privately. The firms that figure out how to surface it and make it part of a team’s work – those are the ones that are going to thrive.

KATE NIEDERHOFFER: One thing we thought was really interesting when we were modeling predictors of workslop is that the self-reported reason for creating it is that everything feels urgent and important. People feel overburdened. I don’t think people are thinking, “I am consciously doing this to have a nefarious impact on others.” Instead, it’s more like, “I have no time, and I have a million people to manage and a million different tasks to complete.”

Everything feels urgent and important. And here’s this really frictionless way for me to just get by. So I think there is something to acknowledge at the individual level that has to do with how easy this shortcut behavior is right now and how blameless the individuals are at the same time.

ALISON BEARD: So which organizations are figuring out how to do this well and prevent workslop?

JEFF HANCOCK: We know of a couple, from colleagues, that are investing in people and in AI and seem to be getting their employees really excited about the possibility. One is Lego. You wouldn’t necessarily think of that; it doesn’t seem like a super high-tech company. And that’s probably where you’re going to see a lot of the advances – not in the super high-tech companies that are already adopting at high rates. Lego is investing in hiring people, providing them with AI tools, and giving them space to create and innovate. That comes with a cost, right? This is a privately owned firm with a fair amount of resources. So we’re seeing that those kinds of companies are able to make the kind of investment required for the J curve.

KATE NIEDERHOFFER: Yeah. I mean, I’m not at liberty to speak about any of our customers, but there are companies doing incredible things, and I’ll tell you the themes I see horizontally across them. One is that they’re building the right mindsets. Initially there were a lot of training tools available to help people develop AI literacy. I think what makes somebody stand out is providing those training tools bundled with mindset training.

So we have done a bunch of work on this idea of the pilot mindset. It’s people who have high agency and high optimism about AI. It’s their mindset toward using the tools. And so we have customers who are providing that type of training so that people can approach these tools with the right mindset, to be curious and experimental and confident in their usage as opposed to just literate per se. So that seems to be making a really big difference.

The other is that a lot of organizations are investing in the talent infrastructure of the organization by providing, for example, coaching to address things like low agency, low optimism, low motivation, and a low sense of mattering, and by providing opportunities for the workforce to feel they can not only be engaged in their work but also compensate in some ways for the opportunity cost of using tools instead of interacting with, aligning with, and managing other people. So we see big investments in the people within the organization.

And then I think the last is that I see some organizations identifying fewer than 10 – for example, seven – priority areas where they can really embed AI into workflows to solve particular challenges. Then they measure the particular outcomes they’re trying to solve for. They’re not just looking at blanket productivity, but instead asking: is this technology solving the problem we have and leading to productivity in this specific place? So it’s sort of like precision development plus targeted measurement.

ALISON BEARD: So that’s the very high level. Let’s bring it down one level. If you’re a manager on a team that’s struggling with workslop being shared, what advice do you have for that person?

KATE NIEDERHOFFER: Compassion. I think that’s what we’ve learned first: catch yourself rolling your eyes, and instead offer some compassion. Try to understand: why is it that you’re feeling like everything is urgent and important right now and you need a shortcut? What can I help take off your plate, and how can I see you and invest in you as a person? Thinking back to classic management – what is my job? It is to see and hear and grow and develop my team. So the first thing to think about is, how can I approach this in the most human way possible? And then I’ll let Jeff speak to more of our research on the human side.

JEFF HANCOCK: Yeah. Well, I think it’s related to that, which is what Amy Edmondson calls psychological safety. We call it trust. So, teams that are willing to critique each other and do it in a positive, constructive way. Anytime you’ve been on a team that’s really great, it’s not all, “Hey, let’s just be nice and fluffy with each other.” It’s, “I respect you enough that I will critique your work and pay attention to it and help us do something great.” Psychological safety, or team trust, was one of the biggest predictors of reducing workslop. If somebody was on a team where they felt really confident they were going to be critiqued in constructive and positive ways, they just didn’t produce as much workslop.

KATE NIEDERHOFFER: One of the issues we’re struggling with right now, and the reason we expend such cognitive effort when we receive workslop, is that we are not in the practice of providing constructive feedback to our colleagues. That’s something we need to do regardless of whether they’re creating workslop; it leads to growth and development within the workplace.

ALISON BEARD: Yeah. It seems like an important part of the feedback would be coaching employees and colleagues on how to better evaluate what they’re producing with AI and maybe giving some tips or sharing knowledge on how to get better results from the gen AI tools that you have at your disposal.

JEFF HANCOCK: We often think of generative AI – that’s the period we’re in – but we’re moving away from AI just generating content like workslop (workslop is essentially a generative thing) toward AI being used much more for analysis and decision making, where the output isn’t just a bunch of text in an email or a slide deck. And I think the only way to make sure we’re going to use these tools in these new ways at high quality is for teams that trust one another to use them and check one another, if that makes sense. It’s going to become more embedded: less about the thing I wrote, and more about, “Here’s this giant analysis I did and here’s how I did it using these tools. Will you check this for me? How does that sound to you?” That’s why teams and collaboration are going to matter more than ever.

ALISON BEARD: Yeah. And remembering that all of our brains are still a valuable input too. We’re the ones coming up with new ideas, not just gathering and regurgitating what already exists. So I think emphasizing to everyone that AI is a fabulous averaging machine, a fabulous aggregator of existing information, but your brain and your input really matter. We need you to be applying your brain to everything you’re working on.

KATE NIEDERHOFFER: We will still need to use human judgment and discernment, and we will still need to figure out the best ways in which to collaborate with others, even when AI becomes masterful at context, for example, and is embedded in everything. No matter what, we still have to be really good decision makers and problem solvers. The tools are not going to do everything for us no matter how powerful they become.

ALISON BEARD: Well, I think that’s a perfect way to wrap up. Thank you so much for helping all of us figure out how to not generate workslop and do a better job making ourselves more productive and engaged with these new tools.

JEFF HANCOCK: Thanks, Alison.

KATE NIEDERHOFFER: Thanks so much for having us.

ALISON BEARD: That’s Kate Niederhoffer, chief scientist at BetterUp, and Stanford professor Jeff Hancock, speaking about AI workslop. You can find their articles on the topic by heading to HBR.org.

Next week, Adi speaks with Yale professor Jeffrey Sonnenfeld about the changing dynamic between government and business.

If you found this episode helpful, share it with a colleague and be sure to subscribe and rate IdeaCast in Apple Podcasts, Spotify, or wherever you listen. If you want to help leaders move the world forward, please consider subscribing to Harvard Business Review. You’ll get access to the HBR mobile app, the weekly exclusive Insider newsletter, and unlimited access to HBR online. Just head to HBR.org/subscribe.

Thanks to our team: senior producer Mary Dooe, audio product manager Ian Fox, and senior production specialist Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Alison Beard.
