BRIAN KENNY: What does it take to get tens of thousands of employees to fundamentally change how they work? Today’s case examines a global sales organization that rolls out a powerful AI assistant, seamlessly embedded into everyday tools. Early on, excitement is high, but adoption stalls when employees don’t know how to use the tool in practice and frustration sets in. The organization experiments with new ways to drive adoption: focused training, peer champions, and habit building, eventually transforming how the sales team operates and creates value. But just as progress takes hold, a new challenge emerges. The next generation of AI doesn’t just assist, it acts. Autonomous agents begin handling customer interactions, raising deeper questions about trust, control, and the future of work itself. This is a story about scaling AI and the human systems required to make it stick.
Today on Cold Call, we’ll discuss the case, “Microsoft Customer and Partner Solutions: The Deployment of Copilot” with Professors Iav Bojinov and Shunyuan Zhang. I’m your host, Brian Kenny, and you’re listening to Cold Call on the HBR Podcast Network.
BRIAN KENNY: Iav Bojinov’s research looks at the process by which AI products are developed and integrated into the real world. You are a repeat customer. Welcome back, Iav.
IAV BOJINOV: Thank you. It’s so good to be back.
BRIAN KENNY: It’s great to have you back on the show. And Shunyuan Zhang uses machine learning to address marketing problems that have arisen within the nascent sharing economy. You can explain to us what that is as we get into the conversation. Thank you for joining us.
SHUNYUAN ZHANG: Well, thank you, Brian. Great to be here.
BRIAN KENNY: It’s great to have you both here. We are all talking all the time, and I’m sure our listeners are too, about AI this and AI that and the implications that it has. And this is a case about how it’s being applied in the real world in a very tangible way. So I’m glad that you wrote the case, and I should mention Raffaella Sadun was your co-author on this as well. She wasn’t able to join us today. But I think our listeners will be really interested in hearing how this is being applied at Microsoft, a legendary firm, with their global sales force. So this is not a small undertaking. So why don’t we just dive right in. Shunyuan, I’m going to start with you and ask, what are the central issues in the case? And I’m wondering why you and Raffaella and Iav thought this was an interesting case to write.
SHUNYUAN ZHANG: I think what really drew us to the case is that it exactly flips the usual AI conversation on its head. And what I mean by that is if you listen to the AI debates right now, most of it is about whether the technology works. Now here, the technology, the AI model works in a company that builds AI with a sophisticated workforce. And yet you look at the results, still adoption collapsed from 22% to 5% after initial excitement, very quickly dropped within a month. And what we found is that the deeper issue wasn’t about the technology. It was about the human organizational and behavioral layers underneath. And that is what people are asked to change, what they are incentivized to do and to learn, and what their roles mean to them and how you really plan the AI rollout design. And what’s interesting is that Microsoft didn’t do anything unusual. They ran their standard playbook, and that’s what’s making this, I guess, stumble really interesting.
IAV BOJINOV: And just to add to this, I think what’s fascinating about this case is that it’s about Microsoft, the people who build the tool.
BRIAN KENNY: Right.
IAV BOJINOV: And this is the sales force that is actually selling this tool. So if these are the people who are trying to convince you to buy Copilot, and yet they’re not using it, well, there’s something interesting happening here. And so we had to dive in and try to really understand what’s going on here and try to understand what were the challenges and how you can actually start to overcome it. Because frankly, if these guys can’t overcome it, what hope does another organization have?
BRIAN KENNY: Yes. Yeah. I’m old enough to remember when Microsoft introduced Clippy, which I think was like a first attempt early on at doing something AI related. So I couldn’t help but think about that when Copilot came out. But for the benefit of our listeners who maybe are where I am on the learning curve on AI, there’s been so much happening. It’s been developing so quickly and we’re hearing lots of new terminology come out, things like vibe coding and in the flow. And this case is very much about AI in the flow of work. Iav, maybe you can just level set for us what that means in this context.
IAV BOJINOV: Yeah, absolutely. And just to back up, I think AI has been growing at an exponential rate. The models are getting more and more intelligent. I actually did a fun calculation from the day that my MBA required course on “Data Science and AI for Leaders” started, which was end of January, early February, to when it finished, which was about two weeks ago, in April. You can look at how much more intelligent the models have gotten from the start to the end. And when you calculate it across a variety of benchmarks, there’s about a 2% week-over-week improvement in intelligence, compounding.
IAV BOJINOV: And so things are really moving very, very quickly. And it’s not just one type of model. We went from large language models that predict the next word to reasoning models that actually think before they speak. And now we are really at the agentic systems that can think, but can also actually act and achieve goals. And in this case, the case begins and it builds; there’s an A, a B, and a C case. It begins in the early days of Copilot, where it’s just an assistant that you can access through Teams, through Outlook, through Microsoft Word and Excel, et cetera. And there, the hope was that people would start to use it and start to integrate it into their work. And this is what you were saying, Brian, about being in the flow of things: the idea is that it doesn’t just sit on the side like Clippy, but it actually starts to get integrated into the work. And when they released this, the hope was that there would be this really quick, fast improvement in productivity and quality of work, and that’s just not what happened.
BRIAN KENNY: Yeah. Shunyuan sort of referred to that before. Microsoft has rolled out these things, I would imagine, many times over the years. They’re good at this, yet for some reason, this one didn’t take when they used the traditional model. Why do you think this broke the traditional model?
IAV BOJINOV: Yeah. I think this is the biggest mistake that most organizations make: they treat generative AI like every other technological deployment. The reason why it’s different is that this technology is a general purpose AI tool. It can do lots and lots of things, but no one tells you what you should do with it. With prior technological waves, if you were deploying SAP, it’s very clear what you should do with SAP. And it’s very clear when you go from an analog system to a digital system, you’re just mapping things. So people know what they’re doing. But with this technology, what we learned is that you need to create space for people to actually experiment. And so there’s a whole experimentation phase where people just need to play around with this technology so they can figure out what are the patterns that emerge that can actually create value for the organization. And it turns out not everyone is good at that experimentation. Not everyone has that mindset. A lot of people just want to do their work. And so that’s why it wasn’t that surprising that lots of people tried it, didn’t find any valuable use cases, then stopped using it. But a small handful of people need to persevere and keep experimenting. And so step one in deployment now is creating space for people to experiment and find those high-value use cases.
BRIAN KENNY: But to your point, I think it raises the interesting question about salespeople in particular: their time is really valuable. They’ve got to be on task all the time. They’ve got goals they’ve got to meet. Their managers are putting pressure on them to meet those goals. So the idea of taking time to play around with something is probably anathema to those folks. Can you talk a little bit, Shunyuan, about maybe the gap that exists between how people perceive the potential of these AI tools and what it means in their day-to-day reality?
SHUNYUAN ZHANG: There’s definitely a gap. Initially the adoption was high and the excitement wasn’t fake, because people genuinely expected the AI to come in and do the parts of the job they hated. But the problem, exactly as Iav was explaining, is you need experimentation. And as a quota-carrying seller, I don’t have time to sit there and just try 14 or 15 different prompts just to get a usable output. I think the gap is really between a great demo and a real workday. In a demo, in a lab, AI looks magical, but in a seller’s daily life, in their workday, they simply ask this question: “Does this help me right now, in the middle of my workflow?” If the answer is only, “Well, sometimes, depends,” then people quickly go back to their old habits. That’s the drop we saw, but I wouldn’t say it’s people rejecting AI. It just means that the early value is uneven and really hard for people to assess.
BRIAN KENNY: Yeah. And I think change is hard just generally speaking, right? So changing anything in your workflow is hard. Iav, you were going to say something.
IAV BOJINOV: Yeah, I was just going to add, we know from 20 years of research on technological adoption that when a new technology comes out, this is Erik Brynjolfsson’s work, there’s a productivity J-curve, which is that in the beginning when you use it, you actually lose productivity. And if you think about Copilot and any Gen AI tool, and I’m sure all of our listeners can relate to this, the first time you used that tool, you certainly weren’t more productive. In fact, you were probably much slower. And then maybe two days later, you were even slower, but then maybe after a couple of weeks of working, you may have finally got back to where you were in productivity, and then a few weeks later, you may actually be somewhere better. And what we learned from this case is that actually there are three things happening here. One is you have the product.
The product tells you how good you can actually get with this technology. How high can your productivity be? The second thing is the process, which is how quickly can you learn from the bottom to the top? Meaning, do we have trainings in place? Do we have playbooks in place so that you can really find those high-value use cases? That tells you how quickly you get out of the productivity slump. And then the third one is the people. And this is what Shunyuan was saying, which is not everyone puts in the same effort. And so what we’ve seen is the people who put in a little bit of effort, they don’t have that dip, but they also don’t get the benefit. And the people who put in lots of effort, they have a huge dip, but eventually they get out of it. But the thing that fewer people talk about is the competency trap, which is the people who are really, really good, they may not actually be better off using this tool.
BRIAN KENNY: Interesting.
IAV BOJINOV: Because they may have all of the shortcuts, they know everything, how to use every single system perfectly. And when you give them this new system, which is powerful, but at the time, this was sort of early days of Copilot, it was still very new, very experimental. They may actually be worse off in using this tool. And so it’s not surprising that people tried it, didn’t see the value, saw the productivity dip, and then just quickly moved on to what they were doing in the past.
BRIAN KENNY: Yeah. I know that people like you who study AI are quick to tell us that we’ve been using AI for years. You didn’t know it, but you’ve been using it for this and for that and this other thing here. And I’m sure all of that’s true and maybe that’s what made it easier to use because it was like a single use thing. It had a very specific thing that it was trying to do for me. These are general purpose tools, right? You can do anything with them, or maybe not anything, but you can do lots of things. How do you have to think differently maybe about deploying a general purpose tool versus something that’s designed to do something specific?
IAV BOJINOV: Yeah, I think it comes back to that experimentation. And after you’ve done the experimentation, then you need to figure out how do you actually scale this and share it with people. And what’s really interesting when we saw in this case study is the people who are really good at experimenting are not the people who are really good at getting everyone else to use the tool. So typically, if you were to describe the experimenters, they tend to be a lot younger, they tend to be very sort of tech-savvy, but these are not the people who are going to convince everyone else to drive adoption. The people who are actually much more likely to get other people to use these are the people who’ve been there for 20 or 30 years.
And one of the fascinating things that we saw in this case study was, when we were speaking to various leaders, one of them mentioned that what they did was they placed their most senior, most experienced, most anti-AI, anti-tech person as the adoption champion and partnered them with one of the young kids who was willing to experiment and try all these crazy things. And the reason was that, okay, this one person is experimenting, they’re finding the new thing. If they can convince this person who’s been at the company 15, 20 years, really knows the organization, knows the business model, if they can convince them that that adds value, then the second that person switches, everyone else switches.
BRIAN KENNY: Yeah. Because it has sort of instant credibility, I guess, when you do it that way.
IAV BOJINOV: Exactly.
BRIAN KENNY: You both talked about the fact that this is a tech-savvy workforce that this is being rolled out to. It’s also, as a sales organization, one that’s got all of their processes in place, I’m sure they’re very buttoned up on the process side. And this is a departure from the process, which is hard. We talked about the fact that change is difficult. What do you think, Shunyuan, were the most important levers that they activated to try and get people to change?
SHUNYUAN ZHANG: I think what we learned from this case, and also what Microsoft did eventually, is there are several levers, and Iav actually covered most of them. There’s a training lever: you need basic skilling, people need to understand what AI can do and how to ask for it. And then there’s a management lever, which is when you experience this J-curve, this dip, people are suffering at the bottom. They are taking on extra work today for an uncertain payoff tomorrow. Now, if they don’t hit the quota this month, what does it mean to them? Who’s going to take the responsibility? Management needs to stand up and say, “Hey, this is learning; you’re not going to be penalized because of the learning curve.”
BRIAN KENNY: Did they do that? Did they step up to the plate that way?
SHUNYUAN ZHANG: Well, I’m sure there are some conversations that have happened in the company, otherwise you wouldn’t be able to get people comfortable experimenting and going through the dip in the learning curve. And there’s a peer or social lever, which they call finding the cool kids, the highly respected people who are already using AI, and you want to make sure they are winning visibly so people see how they’re winning with AI. So training, social, and management levers. I think social is perhaps the most important, and might not be the most obvious one, because when you’re facing something this new, a general purpose AI that is supposed to be embedded in a specific role in a workflow, and you’re asking people to take on the learning cost for something tomorrow, which is uncertain, they want to look sideways before they look up, to see how their peers are using it and what the results are. And that would be the most convincing evidence to get them to use it.
BRIAN KENNY: Speaking from my experience at Harvard Business School, Iav, you participated in a thing we call the AI Academy, which is something that we’re rolling out across the whole school. It’s fabulous. We’re putting 2,000 staff through this program to get everybody to at least recognize ways in which they might be able to use AI. So that’s my sort of point of reference for this, but it became real for me when you had us do some exercises and it almost felt like a game, like we were competing in a game. So this idea of gamification, which I think comes up in the case too, is maybe another lever that they tried to use.
IAV BOJINOV: Yeah, absolutely. And I think this is another failure point, which is that a lot of organizations, and Microsoft did this as well when they first started, wanted people to focus on value. But if you’re thinking about experimentation and learning, you need to do it in a safe environment where you’re almost pulled out of your work for a little bit of time, like in the AI Academy. And I was actually just teaching in it this week; Monday and Tuesday, we had two sessions on this. It allows us to pull out our staff. And by the way, the faculty are also going through a version of this as well. It gives us a chance to pull them out, gives us a chance to explain what AI is, and then to create a safe space where they can learn and explore the capabilities. So then at the end of it, they can figure out, “Okay, we saw these nine different use cases of AI. Which of these apply to my everyday work, and how can I go out tomorrow and next week and start to use these capabilities in my work?” And we’ve written cases on other companies. We have a case on Moderna, and one of the things the students always get caught up on in that case is that someone built this thing called CatGPT. It basically generates images of cats. And all of our MBA students are always like, “There’s no productivity here. Why would you do that? That sounds so silly.” But if you sort of scratch at it, it’s like, it actually created space-
BRIAN KENNY: No pun intended.
IAV BOJINOV: Yeah. It created some space for people to play around and it made it safe, it made it fun, it got rid of the fear, it showed them a different way of working. And to me, that’s just such a critical part of it that many people just overlook and they just focus on productivity, productivity, productivity, without realizing that that just creates fear, anxiety and doesn’t create this excitement, which is what we should want around this new technology. It’s so amazing. It can do so many things, but I think if we just anchor on productivity, we just fall down in the first hurdle.
BRIAN KENNY: Yeah. And another benefit that does come through in the case too, particularly in a sales organization where maybe a lot of the things that you do are perfunctory in sales and you don’t feel so much like a consultant, you’re more like you’re doing transactions with people, right? So one of the benefits that comes through in the case is that this frees up the salespeople to be more consultant-like and less transactional. Is that something that you saw when you were talking to them?
SHUNYUAN ZHANG: Yes, absolutely. And I think this is not just happening at Microsoft in this specific case; I’ve seen it in other companies that I’ve worked with, like a B2B company in Turkey selling these big machines. The moment you have AI to assist salespeople, because 70% of their time is spent on admin work that is not customer-facing, but salespeople are supposed to be customer-facing. So when you have AI to really free up their time, and you have AI to provide data-driven insights about what the customer needs and wants, not just asking salespeople to, I don’t know, memorize what customers really like, you’re transforming, or the term that I like to use is role rechartering, you’re upgrading the role from doing transactional selling into this relationship-building space with the customer and serving a consultative role. You’re helping customers figure out what they really want and how they could benefit from a product that maybe they have not been exposed to, but they could really benefit from.
BRIAN KENNY: Yeah. It gives you a chance to step back maybe and think differently about the relationship you have. Iav, you were here earlier on to discuss with me the Pernod Ricard case, about how they adopted AI within their organization. And we talked a lot about the mentality of salespeople, right? And salespeople are very relationship focused. They value those relationships. They hold onto them very closely and tightly, right? I want to talk a little bit about the shift here to agentic AI that was happening at Microsoft and maybe how that could play into some of the fears and concerns that a salesperson might have as they see this moving into their realm.
IAV BOJINOV: Yeah, absolutely. And just to add, we didn’t close out the adoption story, which is that after they redefined how they approached this and created space for experimentation, they were able to push the organization to around 80, 90% daily active users. It took a little bit of time. It did raise a big question, which is what do you do, after two years, with the laggards, which is a really interesting conversation to have. But at the same time, where we left off in the C case is this notion of autonomous sales agents. These are AI systems that can basically handle the end-to-end sales pipeline, from first conversations with clients to recommendations, potential negotiations, executing the sale, and then monitoring how that sale goes through. In the case, we talk about how they had done an experiment where they deployed these sales agents to the SMB space, and you have to know that Microsoft has an enterprise-first mentality where they really just work with large organizations. And so if you’re a two, three person company, you’re not speaking to a sales rep.
BRIAN KENNY: Right.
IAV BOJINOV: It’s self-service. And so the idea was that this could basically go into this white space, a place that they’re not covering-
BRIAN KENNY: So they’re not threatening.
IAV BOJINOV: They’re not threatening them.
BRIAN KENNY: The enterprise salespeople, yeah.
IAV BOJINOV: Where we leave the case is with this really interesting question, which is, “Well, this has been really successful in early piloting and experimentation.” What do you do with these agents now? Do you start to expand their scope and breadth and start to give them real clients, real enterprise customers to interact with? Who manages these? Do they stay sort of centrally managed by a core agent management team? Or do you empower the sales professionals to actually deploy their own personal agents to go and interact with their own personal customers, right?
BRIAN KENNY: That sounds scary.
IAV BOJINOV: Well, it’s scary, but it’s also a huge opportunity for them because they know their customers, and they know which of their customers would be okay with getting an instant response from an agent if there’s a quick question that comes up, and which customer actually needs to pick up the phone and speak to a real person. And so that’s kind of where we left off the case, with this question of: agents are out, they can do incredible things. How do you manage them? Who develops them? How do you check that they’re doing a good job? And how do you deploy them at scale?
BRIAN KENNY: Well, let’s maybe probe some of those a little bit. Shunyuan, how do you think about the possibilities, but also the limitations, of automation managing something that is so relationship focused, and the nuances that go into relationships? As a salesperson, I’d like to think, “Hey, I really know this customer. I know their family. I know all about this person.” Can you have an automated agent have the same level of understanding? I’m putting you on the spot. This is Cold Call, after all.
SHUNYUAN ZHANG: I would love to have one. Well, if we take a step back, this autonomous agent… Iav talked about this, and we had this conversation about AI transforming sales agents, I mean, salespeople, by giving them more capability and flexibility. Now, an autonomous agent is pushing this further along that line, right? But we’ve got to be careful because we’re talking about salespeople here. It’s customer-facing, it’s relationship work. And relationship work to me is what I would call high-variance work, because there’s a lot of subtle, hidden, but important human judgment in the work.
And my theory is that if we think about this as a spectrum or a ladder, as AI gets more and more autonomous, I think the technical or technology problem actually gets easier, but the harder part would be the human component. And it is exactly because of this boundary that is defined by relationship. I mean, if you think about human experts, they can read the room, and they know when to push and when to de-stress in a tense moment, what not to say, when to stop, and when to back off. But that’s really the hardest part to encode with AI. And if we have AI just analyze all the emails, AI can definitely end up pulling all the information out of a customer email, but a human expert looks at the email and knows one specific sentence in the email matters ten times more than everything else. So it’s really this relationship, which is about trust building and brand building, that’s making things really… we’ve got to be really careful. So the limitation, I would say in general, is that AI works best on the more structured parts of the work. But there are more human, judgmental, unstructured parts where we really need to be more careful, because they touch customers, relationships, brand building, and trust.
BRIAN KENNY: Yeah. Which I guess makes me feel optimistic, because as people are thinking about all the jobs that are going to go away, we’re hearing a lot of alarm bells going off about AI and the impact it’s going to have on the workforce. The one thing I’ve heard consistently is that AI doesn’t do judgment the way that humans do. This has been a great conversation. I knew it would be. We’ve got a little time left. So I have one question for each of you. It’s actually the same question. So maybe I’ll start with you, Shunyuan, and say, if there’s one thing you want people to take away from this Microsoft case, what would it be?
SHUNYUAN ZHANG: AI adoption transforming business is definitely possible, but the technology is the easier part. The harder part is everything around it, and that’s identity, incentives, management, and human behavior.
BRIAN KENNY: Yeah. Iav, you can’t say that you agree with that, and that can’t be your answer, you need to add to that.
IAV BOJINOV: That sounds great. I will add to that. I think one of my big takeaways from this case is that AI changes job descriptions, but not jobs, meaning the work that people do with AI starts to transform. There are specific tasks they no longer do, but that doesn’t mean they’re out of a job. It really means that people can do more. In this specific example, the fact that AI was completing a lot of the administrative work allowed the sales associates to really focus on the high-value work and to really provide… There were these amazing examples of working with organizations to create these great revenue share ideas that in the past they just didn’t have time to explore and didn’t even have the capabilities to do. They would have an internal consulting team come help them. But now with AI, they were empowered to do it. And so AI changes job descriptions, but I’m less worried about jobs.
BRIAN KENNY: Yeah. So you’re both bullish on the future with AI, I would imagine.
IAV BOJINOV: Oh, absolutely.
BRIAN KENNY: Is there going to be a D case about Microsoft?
IAV BOJINOV: Probably, yeah. I mean, they’ve done so much since then, and we work very closely with them on several other projects, and it’s just such a fascinating space that’s changing so, so quickly.
BRIAN KENNY: Yeah, and they’re an amazing firm. So to hear how they struggle with this and then found a way through it, I think is really interesting and encouraging. So, Shunyuan, Iav, thanks for joining me.
IAV BOJINOV: Thank you.
SHUNYUAN ZHANG: Thank you.
BRIAN KENNY: If you enjoy Cold Call, you might like our other podcasts: Climate Rising, Coaching Real Leaders, IdeaCast, Managing the Future of Work, Skydeck, and Think Big, Buy Small. Find them wherever you get your podcasts.
If you have any suggestions or just want to say hello, we want to hear from you. Email us at coldcall@hbs.edu. Thanks again for joining us. I’m your host Brian Kenny, and you’ve been listening to Cold Call, an official podcast of Harvard Business School and part of the HBR Podcast Network.
