Towards a deeper dialogue with AI: from Co-Pilot to Co-Thinker

Can generative AI be harnessed to benefit management? Can an AI chatbot be enhanced to help managers with sophisticated tasks, such as decision-making and strategic thinking?

A recent experiment conducted by Capgemini, in collaboration with Thinkers50, studied current AI technology to design and test three prototypes of AI Co-Thinkers. Focusing on three core pillars of management – leadership, strategy, and innovation – the experiment sought to discover if generative AI could help tackle challenges that require reflection, consideration of different perspectives, and meaningful, two-way dialogue.

Find out what happened in this LinkedIn Live session, which features Thinkers50 Breakthrough Idea Award recipient Leon Prieto; Gabriele Rosani, Director of Content and Research at Capgemini Invent’s Management Lab; and Corey Crossan from The Oxford Character Project.


TRANSCRIPT

Stuart Crainer:

Hello. I am Stuart Crainer, Co-founder of Thinkers50. Welcome to this special webinar in partnership with our friends at Capgemini Invent. This promises to be an intriguing and practical discussion on the future role of AI in management.

First of all, thanks for joining us. Please let us know where you are today. And please also send in your thoughts, comments, questions at any time during the session, and I’ll be sure to relay them to our trio of experts. Today’s session marks the publication of a new report from the Management Lab by Capgemini Invent in partnership with Thinkers50. It provides exciting new practical insights into the likely future of management. Welcome to the world of ManagementGPT. The report is available now from thinkers50.com. It can be downloaded free of charge.

We believe that we’re now entering the next phase of our understanding and application of AI. In the last few years, we’ve heard a lot about how AI is going to revolutionize the workplace. Mostly the emphasis has been on how technology is going to replace repetitive jobs. Very little attention so far has been paid to how AI can and will transform the work of management and leadership. Indeed, for all the talk about generative AI, the work of management remains largely untouched by the new tech wave. Research by Karim Lakhani of Harvard Business School suggests that less than 10% of executives use generative AI tools in their daily work, and that figure is inevitably set to change.

So how can and will generative AI change the world of management? To explore the potential answers, our friends at Capgemini Invent have been involved in some unique experiments and research. In June last year, Harvard Business Review Italia published the e-book, Generative AI for Strategy and Innovation, and the new report, ManagementGPT, builds on this initial work. We are delighted that we have a great trio to talk us through this work and explain its potential ramifications for the world of management.

First, an apology from Paolo Cervini. Paolo should have been taking part but is unfortunately ill and unable to join us. Best wishes to you, Paolo. So let me introduce our starring trio. From Capgemini Invent, we are joined by Gabriele Rosani. Gabriele is Director of Content and Research at Capgemini Invent’s Management Lab. He has more than 15 years of strategy consulting experience. His most recent article in the Harvard Business Review was ‘Good Judgment is a Competitive Advantage in the Age of AI,’ which came out last September.

With Gabriele, we have Corey Crossan. Corey is a research and teaching fellow at The Oxford Character Project. Check out the project’s website at oxfordcharacter.org. At The Oxford Character Project, Corey develops and facilitates character development programs for students, industry, and university partners. Her research examines how character can be developed and its impact on performance and wellbeing. And finally, but definitely not least, we are joined by Leon Prieto. Leon is a professor of management at the College of Business at Clayton State University, where he is Director of the Center for Social Innovation and Sustainable Entrepreneurship. Leon is a Thinkers50 Award winner and the co-author of African American Management History, the second edition of which comes out soon. Leon is also a research fellow at the Cambridge Center for Social Innovation.

So, a great trio. To set the scene and kick things off, over to you, Gabriele.

Gabriele Rosani:

Thank you, Stuart, for the introduction. Let me first say a few words about the concept of ManagementGPT and the idea behind it. We wanted to explore how generative AI chatbots can be used by managers for the managerial tasks in their daily job: decision-making, strategic thinking, leading teams. We’re talking about complex managerial tasks, not summarizing a document or writing an email. We are focusing on complex topics that require thinking, reflection, and pondering pros and cons from different perspectives, with multiple dimensions to consider. Where there is complexity, you need to think, you need to reflect. You cannot simply press a button and get an answer.

That’s also why we call this experiment AI Co-Thinkers. You may have heard the word co-pilot, which Microsoft also uses, and which is basically for more basic tasks. But here we really talk about thinking together with the machine, having a conversation, a dialogue with the machine that helps you reflect on something more sophisticated. So it’s about considering AI more as a thought partner, a sparring partner, rather than an executor or a mere assistant.

The hypothesis we wanted to test was whether it is already possible, with current technology, to have a meaningful, value-added dialogue between a manager and a generative AI chatbot. This sort of dialogue should not be one-way, where you ask a question and get an answer. It should be a conversation with the machine, an exchange: input, then feedback, then back and forth. The machine waits for your comments, asks you questions, challenges your assumptions. So it’s really a conversation rather than simply pressing a button to get an answer. It’s something more like a dialogue.

Here you see the cover page of the report that we have just published. We call it ManagementGPT: Prototypes of AI Co-Thinkers. We’re talking about prototypes because each is basically a chatbot that we designed and then tested with some testers, collecting their feedback. There are, of course, many management tasks you could map and try to have a dialogue with the machine about, but we decided to focus on three for practical reasons, and also because we wanted to cover three main buckets of management: leadership, strategy, and innovation.

If we can move on to the next slide, you see here the three prototypes that we designed and tested. As I said, we cover leadership with the first prototype, which we developed together with Corey Crossan and Edward Brooks from The Oxford Character Project. This is a prototype on leadership behaviors. As you know, one of the challenges in embodying values is translating abstract values into something concrete, actionable, and measurable. Thanks to this chatbot, a manager can have a conversation and, as an output, get concrete actions and metrics related to the specific value they want to explore. And of course, the content injected into this chatbot is based on the fantastic academic and practical work done by The Oxford Character Project.

The second use case, which you see in the middle, is about strategy, in this case platform business models. We know that in platforms, one of the issues is network effects, which is something managers struggle with: how to design the network effects that are the cornerstone of successful platform business models. With this chatbot, a manager can have a conversation and, based on the input, which is the business problem they want to explore, discuss the different types of network effects that could be triggered, how to design them, with some concrete examples and actions, and how to measure them. So you see, it’s something very concrete, but it follows structured content and methodological guidance.

The third one is about innovation, in this case multi-stakeholder problems. We know, especially in fields like sustainability, that you have many stakeholders, and the problems are complex because there are different interests, needs, and pain points, and you have to have clarity on these different interests and perspectives. With the machine you can have a dialogue and try to understand the different perspectives, so that you don’t miss something important and you are more inclusive in framing the problem. Basically, you consider all the stakeholders and their needs and try to anticipate red flags before you tackle the problem. This specific case we developed with Leon and Simone Phipps.

Before we zoom in on one of these cases, I would like to show a little of the logic of the experimentation on the next slide. You see that we tested different scenarios. The basic, traditional one is: okay, you have a problem, you go on Google. That would have been the case until last year. But since last year, we have this fantastic new technology, generative AI chatbots like ChatGPT and many others, so you can ask questions in your own language. You can start a conversation, but of course, it depends on your knowledge of the topic, so it may be a little spontaneous and not very structured in terms of sequence, logic, and flow. That’s why we said we need a more structured flow, which is what you see on the right side of the slide, in this sequence.

The point of this sequence is that you have, let’s say, a journey, a conversation with the machine. There are some steps done by the machine, but then there are these little icons marking the human part: the input, the feedback, the comment, or an injection of pieces of context through the conversation. It’s that dialogue along the steps that creates value. The value is co-created between human and machine. That’s why we say it’s a Co-Thinker: it’s a custom bot, which is, let’s say, programmed to follow a certain sequence. When I say programmed, I don’t mean coding; I mean having, let’s say, a play script for this dialogue between human and machine: what is the machine’s part and what is the human’s part.

To make it practical, let’s look at one example that I hope will clarify this. If we move to the next… This is the flow, the play script as I like to call it, that we developed with Leon; it’s about multi-stakeholder co-creation. Let’s imagine you are a manager and you want to tackle a complex problem with many stakeholders involved. Following this sequence, the chatbot will start the conversation by asking you to provide some input and context to describe the problem in scope. Then the chatbot will articulate it and ask, “Okay, do we want to keep the entire scope? Do you want to make it more granular? Do you want to split it?” And then you give feedback; you say, “No, okay, I want to keep it holistic. Fine, let’s move on.” And then you input some potential stakeholders to consider.

The bot will elaborate and also suggest additional stakeholders that you may have missed. It’s up to you to say, “Okay, good, I missed this one, so add it to my list.” Or maybe you want to drop one from the list, or you want more explanation about a specific stakeholder, and the machine can articulate that. Then you move on and ask the machine to articulate the pain points and needs of each stakeholder. The machine is very good at doing that. Then it’s up to you to review this and say, “Okay, I like this. Maybe this one is not relevant in my context.” If you provide contextual details and have a conversation, let’s say, with the machine, the content gets deeper, and then you can move on and ask it to articulate potential red flags. The machine will wear the hat of each stakeholder and formulate potential flags or risks. And again, it’s up to you to comment, to review, and to prioritize what is most relevant in your specific case.

Then you move on again, and the machine can evaluate all these analyses against a set of predefined criteria, highlight which criteria are met and which perhaps are not, and suggest some actions you can put in place. It’s up to you to say yes or no, to discard or accept each recommendation. At the end of this flow, the machine will summarize the whole discussion in a table: the stakeholders, their needs, the pain points, the potential risks, and the suggested mitigation actions, so that you can download the output of the conversation.

Of course, the quality of the conversation also depends on what you put in. That’s why we have these blue boxes, which are the machine’s part, but the icons with the human above and below are also important; they are fundamental to creating a value-added conversation. This is what we did with Leon. We tested it with some of his MBA students and also with some of our colleagues and other testers. We got feedback to validate the first version and see how we could change the flow. Of course, behind each box there is a prompt. I won’t go into the technicalities, but basically this is the play script, the flow that needs to be there to ensure a good conversation. I will stop here and hand over to Leon. Leon, I think you can tell us a little about what’s behind the scenes of this flow.
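To make the idea of a play script concrete, here is a minimal sketch of how such a stepwise flow might be wired up, assuming a Python wrapper around a generic chat-completion API. The step texts, the function names, and the `chat` callable are all illustrative, not the actual prototype:

```python
# Minimal sketch of a "play script": a fixed sequence of machine steps,
# each followed by a human checkpoint where the manager reviews, corrects,
# or enriches the output before the flow moves on. The `chat` callable
# stands in for any chat-completion API; everything here is illustrative.
from typing import Callable

PLAY_SCRIPT = [
    "Ask the user to describe the problem in scope, restate it, and ask "
    "whether to keep it holistic or split it into more granular parts.",
    "Ask for an initial list of stakeholders, then suggest additional "
    "stakeholders the user may have missed.",
    "For each confirmed stakeholder, articulate needs and pain points, "
    "inviting the user to add context or discard irrelevant items.",
    "Wearing the hat of each stakeholder, formulate potential red flags "
    "and risks, and ask the user to prioritize them.",
    "Evaluate the analysis against the predefined criteria, flag unmet "
    "criteria, and suggest mitigation actions to accept or reject.",
    "Summarize stakeholders, needs, pain points, risks, and accepted "
    "mitigations in one table the user can download.",
]

def run_co_thinker(chat: Callable[[list[dict]], str],
                   system_prompt: str) -> list[dict]:
    """Drive the dialogue step by step, keeping the human in the loop."""
    messages = [{"role": "system", "content": system_prompt}]
    for step in PLAY_SCRIPT:
        messages.append({"role": "system", "content": f"Next step: {step}"})
        answer = chat(messages)                       # the machine's turn
        messages.append({"role": "assistant", "content": answer})
        print(answer)
        feedback = input("> ")                        # the human's turn
        messages.append({"role": "user", "content": feedback})
    return messages                                   # full dialogue transcript
```

The key design point, as Gabriele describes it, is that the sequence is fixed while the content is co-created: the script decides when the machine speaks, and the loop always pauses for the human before moving on.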

Leon Prieto:

Yes, yes. It’s a pleasure to talk about the power of generative AI as it relates to multi-stakeholder co-creation. I have to make a confession first and foremost: at first I was very skeptical about generative AI and the human element. But as I delved further into this interesting tool, it evolved from an AI assistant into, I would say, a strategic Co-Thinker, and it can really help us tackle problems much better than we have done previously.

I’m a firm believer that the future belongs to those who collaborate, be it with humans, nature, and yes, even AI. So the Co-Thinker really helped us develop a roadmap for managers who want to incorporate multi-stakeholder co-creation in order to gain a cooperative advantage. By cooperative advantage, I mean the benefits a company accrues from a people-centered approach that engenders a spirit of care, community dialogue, and consensus building, not just with its employees but also with customers, the wider community, and the entire value chain. ManagementGPT could really help us engage with some of the complex issues you may encounter when thinking about stakeholders.

But first and foremost, we have to start with a clear problem statement at the onset. It helps if you clearly articulate your challenge and have that conversation with the AI Co-Thinker. One of the cases I’ve used looks at electric vehicles. As you may know, the production of electric vehicles requires a lot of cobalt, and much of that cobalt is found in the Democratic Republic of the Congo. One thing we need to keep in mind in the quest for electric vehicles and saving this planet is that sometimes we may cause greater harm by extracting things like cobalt; it may result in human rights abuses, environmental degradation, et cetera. ManagementGPT could really help us examine who might be impacted if your electric vehicle company decides to do business in Africa, and help you identify pain points and things like that. So you could have that conversation with the Co-Thinker, asking, for example, “What’s the core challenge an EV company faces in sourcing cobalt?”

As a user, you can also create a list of stakeholders you may want to engage with, and you can ask the AI Co-Thinker to list any potential stakeholders you may have missed. A lot of times, working among ourselves, we encounter blind spots, things we may not have thought about. But by working with an AI Co-Thinker you actually get to think about other stakeholders you may have missed. In this chess game of management, AI can really help you see the entire board. That makes it a game-changer. So you can list all the potential stakeholders affected by the decision to do business in Africa, and ManagementGPT can suggest additional ones you may have missed, helping you get a more holistic view.

For example, you might miss local communities, cooperatives involved in mining the cobalt, labor agencies, or environmental NGOs that you may have ignored. Then it also helps you deep dive into stakeholder perspectives. It helps you consider each viewpoint, because previously you may have taken a more myopic approach; with this strategic AI Co-Thinker, you can really understand different stakeholders’ perspectives. That may increase your empathy and your understanding. So an AI Co-Thinker can help you increase your humanity as you engage with some of these problems.

It can also help you identify and mitigate risks. You can have that conversation with the machine to discuss potential risks and their mitigations. Of course, you have to conceptualize what you think some of the potential risks are, but the conversation with the Co-Thinker can help you clarify them, and it may encourage you to follow up and delve deeper. So it’s not about you asking a question and it regurgitating results; it’s a conversation with the machine, a meaningful conversation. It really helps if you ask good questions and provide good data. It can also help you refine and validate information and feedback. So this is very useful, and I can see managers and executives utilizing an AI Co-Thinker to really tackle some of the pressing issues many organizations are facing within their communities.

Stuart Crainer:

Thank you, Leon. Before we get to the final stage, thank you, everyone, for joining us. We’ve got people from Morocco, Switzerland, Germany, the US, Denmark, Italy, Ireland, Norway, and elsewhere. Please send in your questions. We’ll have plenty of time for questions towards the end of the session. Back to you, Gabriele.

Gabriele Rosani:

Yeah, thanks. We applied a similar logic to the one we showed for multi-stakeholder problem framing, the same sequence, flow, and play script for the dialogue between human and machine, to responsible leadership, where we joined forces with The Oxford Character Project, with Corey and Edward Brooks. Of course, we don’t want to replace the human development needed for responsible leadership, but we want to help leaders practice virtuous behaviors, and we have shown with this experiment that with the machine they can do that. So we designed and tested a flow that starts with the manager, the leader, inputting a value, for example from the company manifesto, a value that this leader wants to embody concretely. Because in the end, there is always a big gap between talking the talk and walking it. You have some values, but how can you translate them concretely?

Thanks to the machine, you can facilitate this translation, because from the value, it can help you think about character features related to that value. Then you can select one character feature, and the machine, based on the content we injected, can suggest good habits to start or bad habits to drop. Based on the specific habit you want to practice, it can suggest concrete actions. And if you provide details on your specific context, the machine can be even more precise in giving examples. From these examples you can select which behavior to practice. The machine can also help you set targets and metrics so that you can track your progress.

And of course, the content behind it is reliable, because it’s something Oxford has studied for many years. I will hand over to Corey, but it’s definitely important to have reliable content behind it. One of the issues with these models is that sometimes they hallucinate; they can derail. So it’s important to have good guardrails and good content injection.
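The same play-script idea can be expressed as data. Below is a hypothetical encoding of the responsible-leadership sequence just described, from value to character features to habits to actions to metrics, with step wording of our own invention rather than the prototype’s actual prompts:

```python
# Hypothetical encoding of the responsible-leadership flow as data.
# Each pair is (what the machine produces, what the human does with it);
# a driver like the run_co_thinker() sketch above would walk this list.
LEADERSHIP_SCRIPT = [
    ("From the value the user entered (e.g. from the company manifesto), "
     "list related character features.",
     "the user selects one character feature"),
    ("Drawing only on the injected research content, suggest good habits "
     "to start and bad habits to drop for that feature.",
     "the user picks one habit to practice"),
    ("Propose concrete actions for the chosen habit, refined with any "
     "context details the user provides.",
     "the user selects the behaviors to practice"),
    ("Attach targets and metrics to the selected behaviors so progress "
     "can be tracked over time.",
     "the user downloads the actions-and-metrics sheet"),
]
```

Separating the script (data) from the engine (code) is what lets the same dialogue machinery serve leadership, strategy, and innovation use cases with different injected content.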

Corey Crossan:

Thanks, I’ll take it from here. It’s been such a joy for Ed and me to work with the team on creating this responsible leadership prototype and to share our expertise in character. I was actually drawn to the area of character from my elite athletics background, in terms of how character can really elevate our performance and wellbeing. That’s what I’ve always loved so much about this character area: it has the potential to support just about anything we’re working towards, responsible leadership being one of those very many things.

And so the way that we’ve tried to tackle this responsible leadership piece is, just as was shared a few moments ago, that we start with that value. But as we know, it takes a lot more to actually live and practice those values. We see a lot of organizations with amazing values on their websites and their walls, but how do we actually live and breathe those values?

For example, integrity is one of the really popular values, but what does it actually mean to be a person of integrity? So we look at the character qualities we can cultivate and develop to actually be a person of integrity, which means being someone who’s authentic, principled, candid, transparent, and consistent. These character qualities are all virtues, but if they’re not well-developed they can operate between two vices. For example, courage can slide into the deficient vice of cowardice if it’s underdeveloped, but it can also become recklessness if it’s very strong yet unsupported by another character quality, like humility. Really strong character, this robust set of virtues working together, ultimately supports good judgment and good decision-making, helping us be better, more responsible leaders.

The really important end piece to this sequence is that you do end up with a set of action items and metrics. You can actually download an Excel doc to help you track them, because with character, it doesn’t just end with this chat experience; it takes daily practice to cultivate it. That’s the one thing we really wanted to leave everyone who uses this with: the tools to continue to cultivate and develop character.

I have an exercise science background, and I often refer to exercise analogies. Just as we can’t get fit by reading about it or learning about it, the same goes for character: we can’t just think about it, we have to actually do these practices day in and day out. I think the really wonderful thing about what we’ve created here, and there’s lots more we’ll do with it, is that with character and responsible leadership we often focus on the systems that either support or inhibit who we want to be. That’s the character piece of leadership. I love the quote by James Clear where he says that we don’t rise to the level of our goals; we fall to the level of our systems. I really love this piece we’ve created because it contributes to the constructive systems that can support our responsible leadership through character practice. We really see it as a nudge to help us along in our responsible leadership.

I think the other piece around leadership too is that we don’t think about leadership so much as the position of leadership, but more of the disposition to lead. One of the wonderful things about this Co-Thinker is that it democratizes responsible leadership. It makes it so much more accessible to anyone who is interested in developing their responsible leadership. So I think that’s a really exciting thing that we’ve been able to do.

And maybe the last thing that I’ll share here is that there are concerns with AI. There’s a really kind of interesting piece when we’re talking about character here because there’s this symbiotic relationship between AI and character. I think AI has all of these really exciting opportunities and possibilities, but of course with it comes a lot of challenges, and in particular, it does challenge our character. In doing so, it reveals areas that we can focus on. But the really cool thing is that we can actually use AI to develop our character. And so it requires really good judgment that arises from strong character to be able to handle AI responsibly.

On the whole humanizing question, I think there are a lot of concerns about AI dehumanizing leadership in organizations, but the very thing we’re trying to do is support the humanization of leadership by addressing the character piece. I’ll pause there and hand it back over to Stuart, I think.

Stuart Crainer:

Thanks very much, Corey. Please send in your questions; a number have come in already, so let’s work through them. Tammy Madsen, very nice to see your name there, Tammy. Tammy asks about testing with AI models, which I think Gabriele referred to initially; she’s curious about how much variation in responses existed among the different chatbots. Gabriele?

Gabriele Rosani:

Yes, we tested different chatbots. It’s definitely true that we have seen a big difference, for example on the GPT side, between GPT-3.5 and GPT-4. We have really seen the higher performance of GPT-4. So it’s true, it depends on the model, that’s for sure. But we think this is not the main point. It’s not about which model is better; I think it’s more about which structure and flow you use. Sometimes the discussion turns to “this model is better than that one,” but basically, if you don’t apply your good judgment, you can use the best model and perhaps underperform somebody who is using a less performant model but with more thinking. That’s the point. I just want to make sure we don’t focus only on the performance of the technology; we should also focus on how to structure the flow, what kind of questions to ask, what kind of back and forth to have during the dialogue. The prompt strategy is sometimes more important than the prompt engineering, the technique itself, and the technology behind it.

Stuart Crainer:

And can you clarify, Gabriele: is your chatbot publicly available, or is it still being refined and tested?

Gabriele Rosani:

No, it’s still in a demo testing phase. We have tested different models. But we did a test with a dozen people, of course, to get feedback on the conversation and the flow. We tested perceptions of engagement and interaction, and the quality of the output, of course, but it is not yet available as a product, let’s say. It was more to prove the concept that you can do this already with the current technology if you strategize in a way that makes it … I mentioned the play script. We want to insist on this: it’s not the bot, it’s basically how you design it. We wanted to learn how to do it in a way that maximizes the output.

Stuart Crainer:

Henry Stewart, who’s the CEO of Happy, asked the killer question: “Can you give a real example?” I think in this research, Gabriele, you’ve worked through examples of how the chatbot could work in developing character, platforms and-

Gabriele Rosani:

Sure, if you go to the report, there are excerpts of real conversations that we, or the testers, had with the machine on a specific case, so that you know what the circumstances were, and then you see the conversation. This is in the full report, where you can read the excerpts of these conversations.

Stuart Crainer:

We should say to Klaus Shein as well that the full report can be downloaded from the Thinkers50 website, which gives you much more information than the slides. Leon, it’d be interesting to talk to you. Can you explain more? Your metaphor was that in the chess game of management, this technology can enable you to see the full board. In your experience, where people don’t see the full board, what is it really adding?

Leon Prieto:

Well, it’s adding that other perspective, because you’re having a conversation with a Co-Thinker. The key is not to look at it as just an assistant or some type of tool. You’re really having a conversation with a machine to identify pain points and stakeholders that you may not have figured out yourself. Working with a Co-Thinker is almost like having the wisdom of 1,000 boardrooms without the boardroom politics. I’m being a little silly with that joke, but it adds that extra element of really understanding the perspectives of different stakeholders and who they might be. A lot of times we operate with blind spots; we do it daily. But this Co-Thinker gives us insight we may not have had access to before. And it also opens up a whole new world of opportunities to pursue not just financial impact, but social impact as well.

Stuart Crainer:

I suppose there’s also an issue, which a number of people have alluded to. Somebody asks, “How do you decide on the type of content to provide the chatbot regarding responsible guidance?” I suppose the quality of the conversation always relies on the quality of the input: how honest, truthful, and factually accurate it is. That must be particularly the case with something as abstract as character. Corey?

Corey Crossan:

Yeah, it’s such a good question. What we have to do with the AI Co-Thinker is create a database to feed it. This database comes from decades of research into what character is and how it contributes to a habit of good judgment and good decision-making that allows us to be responsible leaders. That means looking at a robust set of virtues that work together: courage, integrity, accountability, justice, humanity, humility, those types of things that together help us be responsible. So in essence, we’re trying to provide the virtues that we know are associated with the values someone has chosen at the beginning of the conversation, virtues that will help them be more responsible leaders.

But I do think the other element we continually need to think about is, who are we when we are doing the research, when we’re creating these databases? It also requires us to have strong character, judgment, and decision-making when we are putting these databases together. That’s what I was talking about with that symbiotic relationship: when we are creating these resources, we also need strong character to do so. That’s why it’s so important for all of us to be developing our character, so that we can create these reliable resources for other people.

Gabriele Rosani:

Stuart, if I can add: I think here we also need to understand the experiment. We compared it with what you can do with ChatGPT, for example, just asking a question about responsible leadership: how can I be more responsible on this value, for example. You get good answers, but sometimes they are not really convincing, because the machine just looks into its own knowledge. Here, instead, we used a retrieval-augmented generation process, so we made sure that the machine looks at the content developed through this decades-long research. It’s reliable, it’s authoritative. That’s why we joined forces with professors: we wanted to make sure the content is more reliable than what you get if you ask a generalist machine.
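For readers unfamiliar with the term, retrieval-augmented generation simply means fetching the most relevant passages from a trusted corpus and grounding the model’s answer in them. A minimal sketch, with illustrative names and a plain cosine-similarity search standing in for a real vector store (not the prototype’s code):

```python
# Minimal retrieval-augmented generation (RAG) sketch: embed the question,
# retrieve the most similar passages from a pre-embedded research corpus,
# and instruct the model to answer only from those passages.
from typing import Callable

def rag_answer(question: str,
               embed: Callable[[str], list[float]],       # text -> embedding
               corpus: list[tuple[list[float], str]],     # (embedding, passage)
               chat: Callable[[str], str],                # prompt -> answer
               k: int = 3) -> str:
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    q = embed(question)
    top = sorted(corpus, key=lambda item: cosine(q, item[0]), reverse=True)[:k]
    context = "\n\n".join(passage for _, passage in top)
    prompt = ("Answer using ONLY the research excerpts below. If they do not "
              "cover the question, say so rather than guessing.\n\n"
              f"{context}\n\nQuestion: {question}")
    return chat(prompt)
```

Grounding the answer in retrieved excerpts, rather than the model’s general knowledge, is what makes the Co-Thinker’s content authoritative, though as Gabriele notes below, it reduces hallucination rather than eliminating it.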

Stuart Crainer:

Yeah, interesting. There’s a good question from Daljeet Singh, “A huge issue with generative AI is that it doesn’t actually think logically. It just produces things that sound good, but it doesn’t verify anything. It often hallucinates. Does this Co-Thinker mitigate that in some way?” Gabriele?

Gabriele Rosani:

First, there is this idea of designing a flow and understanding the points where you ask for comments, ask for challenge, and provide your specific contextual input; it must be designed. Then, of course, you can put in some guardrails. And, as Corey just said, you make sure the machine looks at the content you put into it. Technically speaking, this is called retrieval-augmented generation, and it basically makes sure the machine doesn’t hallucinate or give you an answer that sounds convincing but isn’t rooted in the research and reliable content, as much as possible, because there is always a chance that the machine will hallucinate. We cannot eliminate that.

But this brings up another point, which is human judgment. You should be conscious that this can happen and exercise your judgment when you look at the output. For example, when the machine gives you a long list of items, you can say, “Okay, maybe this one doesn’t fit with what I do. This looks a little bit strange.” You have to exercise your judgment. You can’t take it for granted. It is not a calculator; it’s a statistical model. Sometimes it goes wrong.

Stuart Crainer:

Well, I suppose that goes back to Leon’s comment about it being like having access to hundreds of boards and their wisdom. You still have to make a judgment based on what you know and what you learn.

Leon Prieto:

Oh yeah, definitely. And we can be lazy as well. A lot of times people sit in front of generative AI and, as Gabriele said, approach it like a calculator that gives them answers. It’s not about that. You have to be your thoughtful, rational, logical self and have that meaningful conversation with the machine, and, as was mentioned, you have to exercise judgment. You don’t want to take the humanity out of this strategic conversation you’re having with a machine. We have to bring our A-game. It’s like the old saying: crap in, crap out. Meaningful information has to go into the conversation. At times there may be hallucinations, but sometimes when you have a conversation with a person, you don’t get the most logical response every time either. So don’t take everything the machine gives back to you at face value. You have to exercise good judgment, like anything else.

Stuart Crainer:

Do you believe, Leon, that it will help create better and fairer organizations?

Leon Prieto:

It’s possible. It certainly is possible. But as I said, we just need to be more responsible. If we really work at it, I think we can train a lot of these models to reflect the kind of world we want to live in. This could really help us get to a better understanding of management. It is a tool that could lead us there, but we need to become better humans in order for this machine to really take us to that promised land.

Stuart Crainer:

That was a very academic response, Leon. Optimistic, but with conditions. Corey, I’m struck by a point from Catherine Hall, who joins us from New Mexico, nice to see you, Catherine. She says that, given the state of the world, we should be aware of the importance of strong character, and asks why we really need the technology to remind ourselves. I think the way you painted it, the technology almost exists here as the conscience of leaders. Or am I misunderstanding it?

Corey Crossan:

Yeah, I’m just rereading the question to make sure I understand it … Yes, we all do want stronger character. As I mentioned before, the AI Co-Thinker acts as an additional nudge. Having worked in the character field for a long time now, I can say character is one of the things that slips quite easily for people. We have such busy lives, there are so many things that are important to us, and it can be really easy for character to atrophy over time. If you’re not prioritizing it day in and day out, we call it going to the character gym, exercising it with intention each day, then while we hope it grows over time, it can actually atrophy. If you’re not being a person of accountability today, it’s likely atrophying. If you’re not being a person of courage today, it’s likely atrophying.

And so what’s really great about this AI Co-Thinker is that it acts as an additional nudge. We talk about these nudges or reminders around us that can support us in being people of better character, making sure that we keep developing. So this is an extra nudge. One thing we’ve considered is how to use this nudge in the best way possible. I do think it’s great as a standalone, but it would be even better alongside in-person programs as well. Think of weekly or monthly workshops where you’re actually doing this with people, alongside people, that human piece where you’re actually connecting. The really great thing about the AI Co-Thinker is that it customizes to what you specifically need. You can have this conversation, understand what you really need, and continue to practice over time, perhaps between in-person workshops. I would say that as many extra nudges as we can get to support our character development definitely doesn’t hurt.

Stuart Crainer:

You’ve got an app you’ve developed called Virtuosity. Does that fulfill a similar function?

Corey Crossan:

Yeah, it’s been really interesting learning about the differences between an app and this AI Co-Thinker. I was very new to the AI Co-Thinker area, so it was really fun to look at those differences. The app is built on very similar ideas: it’s a character gym, and it provides you with daily exercises you can use to cultivate stronger character over time. What I’ve realized is that the real difference between the app and the Co-Thinker is that the app is essentially a closed system of sorts, where we’re giving it all of the information. All of the decades of research we rely on is put into the app, so we know that everything the app gives you comes from a research-based standpoint. Whereas the cool thing about the AI Co-Thinker is that we’re creating a database that, again, comes from these decades of research, but when a user asks a question that the app, for example, doesn’t have an answer to, the AI Co-Thinker can be flexible, working from the research to provide information we may not have thought to put in. So it’s quite interesting to look at the differences and how the two could actually reinforce and support each other.

Stuart Crainer:

Thanks, Corey. And thanks to Frank Kahlberg for his comments on the morality and some of the ethics behind this. Gabriele, it seems that ManagementGPT is really about the democratization of management ideas: you are bringing together lots of management ideas, which are locked in different places and not necessarily accessible, and then allowing managers to access them as they contemplate a particular problem. Is that a fair interpretation?

Gabriele Rosani:

Yeah. I mean, today a manager will surf online and look for books or articles, and it’s a little bit cumbersome. And, of course, you cannot have a conversation with Google. This is more like talking with an expert or a coach or a facilitator, and then it’s up to you to have a good conversation. This is also true human to human: you can have an expert, but if you don’t know what questions to ask, how to trigger a conversation, or how to explain your context, then it’s not a good discussion. It’s the same with these Co-Thinkers. They can help, but you need to put some effort into the conversation. You cannot just think of pressing a button and getting an answer; that’s a very naive and simplistic view.

This is something that can help you, but to extract maximum value from these conversations with the machine, you really need to engage and do your homework, because the value is co-created. Sometimes there is this bias, this perception that “the machine will do this for me.” It’s not that. It’s sometimes even slower, because we are talking about quality, we’re talking about depth, not speed. We’re talking about reflection. So I think this is also a mindset shift that managers need to make when they approach these tools.

Stuart Crainer:

Leon, are managers ready for this? I mean, you’ve been talking about the value of cooperation, you’ve talked about cooperative advantage, and Gabriele just talked about value being co-created. Are managers ready to open their minds to this sort of technology and these sorts of resources?

Leon Prieto:

Well, I think some are definitely interested in the potential of ManagementGPT, but of course there are going to be skeptics; that’s part of the process. As soon as they notice that colleagues or peers in their industry are using these tools to great benefit, many are going to jump on board, because they don’t want to miss out. This has the potential to revolutionize the art and science of management in a very real way. I think it could really help managers gain the competitive advantage and the cooperative advantage they need to really impact their companies, employees, and even communities and the entire value chain.

Stuart Crainer:

What surprised you in the entire process then, Leon? You went in with some skepticism.

Leon Prieto:

Yeah, I went in with a lot of skepticism, because at first I was like, “This is going to replace us.” I guess I watched too much Terminator and movies like that back in the day. But basically, it’s a reflection of who we are. If we want a more socially oriented approach to solving the problems facing mankind, and ways in which organizations can solve them, it’s up to us to become good humanistic managers who tackle those problems and train these models on those approaches, instead of just a profit-centric approach. I think this could really help humanize us, even though it may sound contradictory that an AI model could help humanize us. But it can, if we really have that meaningful conversation. I’m now a believer, a big believer, in the power of generative AI.

Stuart Crainer:

And Corey, have you been converted?

Corey Crossan:

I have. I was really slow to get on the bandwagon with this one, and I’m kind of kicking myself for it. I think a lot of us feel that things are pretty good the way they are. Even with all of the challenges we see, we feel like we’re putting our best foot forward, and there’s some level of discomfort in trying something new. I’m really thankful the team reached out to us, because now that we’ve gotten over that discomfort hump, I’ve really been able to see what we can do with it. So if anyone out there watching hasn’t really dabbled in it much, I’d definitely say put yourself in a little bit of discomfort to try it out, or lean on other people who have tried it to walk you through it, and be patient with the process. There’s lots of opportunity in learning new things. So I definitely say try it out and see how it makes you feel.

Stuart Crainer:

And Gabriele, where does the research go next?

Gabriele Rosani:

We’ll expand to other use cases, let’s say. We did three, but management scenarios are so diverse. We can think about root-cause analysis. We can think about preparing for a speech. There are many tasks a manager does in their daily job that you can try to turn into a sort of dialogue, so that you can think and reflect together with the machine while you do something. Actually, I think it’s limitless. Sometimes we see boundaries, but we should force ourselves a little out of the comfort zone, as Corey said. Sometimes we don’t see the opportunities, or that we can do things differently; sometimes we don’t go into this uncharted territory. But we should, and we will continue to. And if other thinkers want to join, they are welcome, because we really believe in this collaboration; as we said, we need good content which is reliable and based on management research.

Stuart Crainer:

Gabriele, thank you very much. Thank you, Corey; thank you, Leon, for joining us. We are out of time. Thank you, everybody, who joined in and asked questions. We really appreciate your involvement. As Gabriele said, if you’re a thinker and want to join this process, please let us know. The report, ManagementGPT, is available from the Thinkers50 website, and the link has been provided in the comments. So thank you once again, Corey, Leon, and Gabriele. This is the future of management as we know it. Thank you, and we’ll see you again soon.

ManagementGPT

To get to the future of ManagementGPT, where Co-Thinkers well-versed in a variety of management practices engage managers in valuable human-machine conversations on complex topics and trade-offs, work remains to be done. The availability of specific context, related methodological prompts, and advanced guardrails is still limited. Working with thinkers from the Thinkers50 community, this report navigates us into the future, driven by curiosity, collaboration, discovery, and the pursuit of making good, responsible management achievable for all.

Download the report