Thinkers50 Curated LinkedIn Live Session with David De Cremer

 

The Organizational Application of AI

Artificial intelligence is changing the way organizations will be run. David De Cremer brings a human-centred approach to the study and organizational application of AI, focusing on how leaders can be effective and transformative in the new technology era, how building trust is a business asset, and why compliance requires less rule-based and more behavioural integrity approaches. David is the author of Leadership by Algorithm: Who Leads and Who Follows in the AI Era, is on the Thinkers50 2021 Radar list, and has been shortlisted for the Thinkers50 2021 Digital Thinking Award.

Transcript

Des Dearlove:
Hello, welcome to another Ideas with Purpose webinar, featuring Thinkers50 shortlisted thinkers. I’m Des Dearlove, co-founder of Thinkers50, the world’s most reliable resource for identifying, ranking and sharing the leading management ideas of our age, ideas that can make a real difference in the world. Now in November, we’ll be announcing the all-new Thinkers50 ranking of the world’s leading management thinkers and the recipients of the Thinkers50 Distinguished Achievement Awards. All the information about our event in November can be found at Thinkers50.com, and it would be fantastic if you could join us. We’re announcing the shortlists for the awards over the forthcoming weeks, and we kick things off with our shortlist of eight great leadership thinkers. And today, we’re delighted to be talking with one of them.

Now, we all know that artificial intelligence and machine learning are changing the way organizations will be run. Professor David De Cremer brings a human-centered approach to the study and application of AI in organizations and leadership. David is a world-leading thought leader and behavioral scientist. He holds the Provost Chair at the National University of Singapore’s Business School, where he is also a professor of management and organisation and the founder and director of the Center on AI Technology for Humankind. Before moving to NUS, he was the KPMG Endowed Chair Professor in Management Studies at the Judge Business School at Cambridge University, where he is now an honorary fellow, and a fellow at St Edmund’s College.

In January, we named him to the 2021 Thinkers50 Radar List as one of the 30 next-generation business thinkers to watch. He’s also included in the world’s top 2% of scientists. David is a best-selling author, with his book (I always have trouble with this one) Huawei: Leadership, Culture, and Connectivity having sold more than a million copies. And his latest book, Leadership by Algorithm: Who Leads and Who Follows in the AI Era, has received critical acclaim and earned him a place on the shortlist for the 2021 Thinkers50 Leadership Award.

Now we have David for 45 minutes, so he’s going to tell us a little bit about his latest work, and then there’ll be time for some conversation and some questions. As always, please send in your questions during our discussion, and let us know where you’re joining us from today. So David, let’s begin with our congratulations for being shortlisted for the Thinkers50 Leadership Award. Tell us a little bit about some of the issues you’re working on at the moment.

David de Cremer:
First of all, thank you, Des, for having me here and for putting me on the list; that’s up to you guys. So basically what I’ve been working on the last few years, and this is what I would like to talk about during my presentation as well, has been centered around, like you mentioned, a human-centered approach to artificial intelligence. I’ve been doing this mainly in the last three years under the roof, so to speak, of the center that I founded, the Center on AI Technology for Humankind at NUS Business School.

I was already working on this a little before that, when I was still at Cambridge. As you know, we have Silicon Fen there, the Silicon Valley of Europe, basically, with so many startups, and I was working with bigger tech companies, including Huawei when I worked in China. Not being a computer scientist, I was nevertheless very intrigued by how technology, first of all, dominates our thinking in such a fast way, our thinking about society, about organizations. And I started looking a little bit more at how we manage our organizations, how we have been managing in the past, and in what way AI would actually change that, or maybe not change it. Because one irony that I would already like to emphasize is that we always think about tech innovation.

It’s all about innovation. But if we look at how we’ve managed our organizations in the last 100 years, not much has changed. Since Taylor’s 1911 book, The Principles of Scientific Management, we’ve believed in the system. And what I would like to emphasize is that we’ve created a lot of closed systems in how we manage our own organizations, and because of those closed systems, as I will explain, AI will basically dominate if we don’t take another approach. So without further ado, and to the audience: we’re coordinating now between the UK and Singapore. My slides are in the UK, I’m in Singapore.

Des Dearlove:
Actually, David, your slides are in Denmark, and I’m in the UK, and you’re in Singapore.

David de Cremer:
Okay. Well, we just added that complexity. Thanks, Des. So it shows you the power of technology. What I’m going to focus on in the 15, 20 minutes that I’ve got is really AI in organizations: not AI in terms of what we see on social media, what the business schools are telling us, or how society is going to change, but really, what is it doing to the way we manage our organizations? What is it going to change? And what I would like to emphasize is where our responsibilities lie: what are the opportunities for humans to run our organizations in an AI era, and what are the responsibilities that go along with those opportunities?

Okay, so the next slide. Like I said, I do this with the Center on AI Technology for Humankind, and we’ve defined human-centered AI. So I’m using a lot of clicking, yeah. I’m pretty sure most of you have seen in the last year or two that this approach is booming; it’s showing up everywhere. But what is it really? People have different viewpoints on it, different definitions, but the way we look at it comes from my background as a behavioral economist: a behavioral approach to understanding how we design and advance AI systems in ways that serve the needs of people and create benefits for us as a society. Because, of course, the society I assume we would like to create is a more efficient one, but one that remains a human society, a humane society.

So if you look at how AI can help with this, we recognize that human-centered AI contributes to and enhances the human sense of competence. Basically, people need a sense of agency; we want to feel competent, to feel that we have control over what we are doing. AI can help with that. A sense of belongingness, a certain community sense: we make things more transparent, we can connect; all of us are connected here. A sense of control, in the sense that AI is actually an extension of what we’re doing, but at the end of [inaudible] control, and hence, as I’m already saying, we are also responsible for what happens with the AI. And finally, our wellbeing. I think many of you have read articles about how tech addictions are also emerging quickly. For example, with my own daughter, we want to keep her away from too much social media, from cell phones, from laptops, but then with COVID-19 in Singapore, everything now runs via the phone, so you need your phone. So it’s very difficult to achieve that balance. Everything is advancing very rapidly. So how do we keep the focus human-centric?

Okay, next slide. So when I’m emphasizing all these things (so Monica, yeah, I’m just clicking through), what we see is that there’s a lot of discussion on whether AI is actually safe for humans. If we just let it develop in a ruthless market where innovation dominates, are we being threatened? It’s a question on people’s minds. Is it safe? And so far, most of us have accepted it to some extent. Especially since the First Industrial Revolution, blue-collar workers, yes, you have to accept your jobs are going to be gone; it’s easy to automate. But surely not white-collar jobs, we’ve always thought. That may change, and I elaborated on that question in my book, Leadership by Algorithm, as you can see.

So in that book, I really tried to show how the introduction and development of AI, as I’ve described it, is changing how we think about the way we run our organizations. In many of my executive classes, I actually had senior leaders coming up to me saying, “Hey, why do I learn soft skills? Why do I still deal with people? I need to learn to deal with technology.” And I said, “Well, you’re a little bit too old to become a coder. But look, you’ve got so much business experience, you’ve got the skills, and your responsibility will only grow in the AI era.” And they had difficulty really realizing what this is about. So I really wanted to push through on that: what is it?

So next slide. What I show there is how we introduce AI in businesses. Monica, you can just click everything so that we have the whole slide; that’s easier, probably. So basically, as we know, organizations like to [inaudible] costs; it’s a very rational strategy. And we see AI especially in terms of augmenting productivity. So once we see that AI is able to do something that we can do, and in certain tasks is superior, we replace. This means, and I notice this in a lot of companies, that companies adopting AI have taken a certain point of view, where people think that with artificial intelligence and human intelligence, we’re talking about the same kind of intelligence. And if people think this, we end up with a zero-sum game frame. Basically, once AI becomes more equipped, more efficient, AI will probably take your job. And in a way, in our organizations, as already mentioned, we have created closed systems, because we’ve got so many KPIs, we have the metrics cultures, we think in systems. Those are the kinds of systems where a lot of jobs are routine.

Even the jobs that we would perceive as management jobs, where we expect some creativity, have actually become metrics-culture driven as well. And there, AI will dominate. And I call this the new MBA. So I always tell my MBA students who act like administrators: basically, an algorithm can easily take your job, because the new MBA is Management By Algorithm, which is what I elaborated on. So in these closed systems, we’ll see that this emerges, and the fear is there. When we talk about fear: are we creating a world for machines? Are we creating organizations that are more suited for machines, because we’ve been contributing so much to these closed systems? When I was still working at Cambridge, Cambridge Analytica happened. And I saw from very close how, for example, Zuckerberg initially made comments saying, well, my main responsibility as an engineer is to develop the best technology possible out there.

Am I really responsible for what happens on the platform? Yes, Mark, you’re responsible for what happens on the platform. But you didn’t see it, just as a lot of engineers and people bringing in AI don’t see that they’re creating a situation which is perfect for a machine, but maybe less perfect for humans. So if you click, Monica, a few times, you’ll see: this zero-sum game framework actually creates this kind of thinking, where we think in terms of replication, because we assume the types of intelligence are the same.

This goes along with the cost-benefit analysis. The implications, next slide, are that, as I said, we create organizations for machines. Everyone knows the favorite example of Amazon now, where people work on assembly lines supervised by algorithms. To some extent, algorithms can even fire people, because they work basically on metrics. And we also see the response: people coming out on the streets saying “we are not robots”, because we’re creating a culture for machines. People have a bad day, people have a good day sometimes. Machines don’t: they’re consistent, they’re accurate, and they keep repeating. So you actually suck out the creativity as well. Clearly this is not a culture where people feel well, and we’ve actually tested this in Singapore as well.

So next slide. I tested this first on the most difficult audience, among tech startups. I asked them the following question: do you think the development of the new technologies will make people less tolerant, more tolerant, or will not make a difference? As you can see here, most of them thought that the more we automate tasks, the less tolerant that culture will probably become towards human failure, which is what people at Amazon felt. Also, we’ll probably start seeing people as less reliable than machines. If we have that mindset, of course, we’re creating a different work culture, not one equipped for humans.

Next slide. I also went to some senior executives, the ones who had been asking me, “Do I still need skills, Professor?” Well, I asked them several questions as well. The more jobs are automated, the more human employees will be treated like a machine: 59% said yes. The more machines are used and the degree of automation increases, the more the pace of work will go up: yes, it will go faster, 76%. And my favorite, about all these auto-generated emails you get that employees don’t bother responding to, because you’re in a closed system: the more communication within the organization is automated, the less I feel treated as a human. 77% of them said yes.

So it’s very clear: the fear is there that we’re creating work cultures that are more suited for machines than for humans. And this is because of the zero-sum game framework, where we consider both intelligences interchangeable. But like I said, there’s a lot of room for humans to work in an era where machines may dominate.

Okay, next slide. So first, one thing that I want to address right away, because I see many people make this mistake: once we apply AI in an organizational setting, the context changes, and the risk of using AI also changes. When we develop AI in the lab, that is what you see on social media, doing amazing things. Boston Dynamics and the robots: amazing. Of course, they have recorded it 20, 30 times before it’s perfect, but it’s in the lab.

Once you [inaudible] one room, we have different stakeholders. Let me give you a quick example. If you go on Google and you search something, 80% of the time Google returns a perfect answer. Only 20% of the time it’s not that great. Do you accept that as an end user? Do you say, okay, I keep using Google at 80-20? Most people will probably say yes. I mean, there’s no risk in it, there’s no threat, I’ll do it. Okay, but just imagine now that Elon Musk suddenly comes into our show and steals the show, as you know he does, and says, “Today I’ve got good news. Worldwide, we launched the self-driving cars of Tesla today. Everywhere you are, you have a self-driving car on the road now. And I’ve got even better news: 80% of the time, the car will not kill people. Only 20% of the time. That’s pretty good, no?”

Again, same question: as an end user, will you accept that? I think this is a little bit different from the Google situation. Probably you’re not going to accept the 80-20, and this is the way you have to start looking at bringing AI into organizations. I always tell people the best that we can [inaudible] at the moment is supervised machine learning, where, as human-centered AI approaches indicate, we still have the end control, and we teach. And I will come back to this later, because we have to teach AI; we have to treat it as a tool to identify our own blind spots. I’m going to mention something about this later, but that’s the first important thing.
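To make that 80-20 contrast concrete, here is a minimal arithmetic sketch. It is an editorial illustration, not something from the slides; the costs are invented, and the only point is that acceptability tracks the expected cost of an error, not the error rate alone.

```python
# Illustrative only: the same 20% error rate with wildly different costs per error.

def expected_error_cost(error_rate: float, cost_per_error: float, uses: int) -> float:
    """Expected total cost of errors over a number of uses."""
    return error_rate * cost_per_error * uses

# Hypothetical units: a bad search result costs ~1 (a wasted minute);
# a driving error costs ~1,000,000 (potentially a life).
search = expected_error_cost(error_rate=0.20, cost_per_error=1, uses=1_000_000)
driving = expected_error_cost(error_rate=0.20, cost_per_error=1_000_000, uses=1_000_000)

print(f"search:  {search:,.0f}")   # 200,000: annoying but tolerable
print(f"driving: {driving:,.0f}")  # 200,000,000,000: the same 80-20, now unacceptable
```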

The second thing is that humans have a lot of room in organizations when it comes down to open systems, when you have to deal with the market, with a volatile, changing environment. Machines can’t do it, because if we use supervised machine learning, it’s stationary data, it’s historical data. It’s applicable in closed systems, but not open systems. So common sense is needed, and a machine at the moment doesn’t have that. If I tell you three things about a person: the person books a reservation at a restaurant, the person orders a steak in the restaurant, and the person leaves the restaurant and leaves a big tip. And I ask you: did the person eat a steak? Most of you will probably say yes. Will AI say that? No, because nowhere has it been mentioned that the person ate a steak; the person only asked for a steak to be delivered to the table. It means AI, and this is the latest natural language processing, cannot read between the lines. It doesn’t have that common sense.
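His restaurant example can be made concrete with a small sketch. This is an editorial illustration, not code from the talk: a system that can only match explicitly asserted facts answers the literal questions but fails the common-sense one, because “ate” is never stated.

```python
# The three facts we are told, stored as explicit (subject, verb, object) triples.
facts = {
    ("person", "booked", "reservation"),
    ("person", "ordered", "steak"),
    ("person", "left", "big tip"),
}

def knows(subject: str, verb: str, obj: str) -> bool:
    """Answer only from what is literally asserted; no reading between the lines."""
    return (subject, verb, obj) in facts

print(knows("person", "ordered", "steak"))  # True: explicitly stated
print(knows("person", "ate", "steak"))      # False: requires common-sense inference
```

A human supplies the missing inference step (ordered, stayed, tipped, therefore ate); the literal fact-matcher cannot.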

It’s not a participant in our society. Another thing is convolutional neural networks, where filters pass over images. You know that, okay, they can recognize images, but it’s all matrices (there’s a minimal sketch of this below); basically, they have to analyze them with neural networks. And it’s amazing what they do, but they’re not very good at saying what is the same and what is different. Recent experiments have shown that when they have a picture of a boy playing with a toy car, and the assignment given to the AI is to indicate a toy car, it cannot reason out that it’s the same toy car the boy was playing with. It can only learn that when you provide that picture. So it cannot connect, with common sense, what is the same and what is different over time or over context. That’s a problem if you’re leading an organization, no? Because you need to compete.
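For readers unfamiliar with the term, here is a minimal sketch of what “filters over images” means: a small matrix slides across the image matrix and sums elementwise products, pure matrix arithmetic with no notion of “the same object”. The numbers are invented for illustration.

```python
import numpy as np

# A toy 5x5 grayscale "image": just a matrix of pixel intensities.
image = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

# A 3x3 edge-detecting filter: the kind of small matrix a convolutional layer learns.
kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
], dtype=float)

def convolve2d(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Slide the filter over the image, summing elementwise products at each position."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

print(convolve2d(image, kernel))  # strong responses where edges sit under the filter
```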

You need to have your strategy, you need to be ready there. I was going to show a very quick video, but I can’t show it from here, so we’ll just go to the next slide. But we need our soft skills even more than ever. The video that I wanted to show was a table tennis world champion playing against the fastest robot in the world, specifically trained to play the game. Initially, the world champion is losing, until something happens. He hits the ball, it clips the net and drops over, and he gets a point. He was down six-one, but from then on, the human adjusted right away, because he knew: aha, if I make mistakes, or I look for high-risk situations, the AI is not doing this. Of course not, because with supervised machine learning, as the robot arm has been trained now, it avoids failure, because failure is too risky; its thinking is probabilistic.

Of course, they learn. So eventually, what the player started doing was hitting the ball so that it hits the edge of the table, just making these risky moves. He wins the game. This means you have the common sense to read a situation, you have critical thinking. A machine does not have this, and especially in organizations, we cannot apply that yet. So what we are going to enter is what I call the feeling economy. So Monica, if you click twice, then you’ll see two lines. Oh no, no, no, go back. Okay, we don’t have those lines; that’s a little bit weird. But basically, what the feeling economy means is that over time, we will need fewer rational skills and more soft skills as humans. That’s our opportunity. So your salary in the next 10 to 15 years is increasingly going to be determined by your soft skills, by whether you have those skills, rather than by your rational skills.

Okay, next slide. So our soft skills are important because we can deal with change and volatility. The second area of room we have is that we can deal with open systems. Well, that’s where leaders come in. Managers sit in these closed systems; leaders have to deal with change and provide direction. Dealing with change, AI cannot do, because it doesn’t know what a context means. It cannot work across contexts, as I’ve shown with the toy car. The second thing: it cannot connect with people. AI can connect, but it doesn’t build an authentic connection. The editor of Nature told me the following: in her view, algorithms are poor at interacting with humans in scenarios where subtle forms of cooperation are required. Of course, subtle forms of cooperation means reading between the lines; it means supporting people.

AI doesn’t have that, but this is important: leaders induce change, have a certain idea, and connect with people to make it happen. A third thing is that leaders also provide meaning in that: what kind of purpose do we pursue, to achieve what? So leaders in open systems attribute meaning. For AI that is a little bit more difficult, because AI lacks empathy; it has no emotional intelligence. You can put a computer in sleep mode, but I’ve never seen a computer dream. So there’s also a lack of imagination, and that’s what we need in visionary leadership. And finally, when we change and we provide meaning to it, we have a purpose. We define the purpose as humans. AI doesn’t have that ethical judgment; AI has no judgments at all, by the way. So AI lacks this as well. We have a stronger responsibility than ever to take up these leadership roles and responsibilities.

So next slide. What I studied, and again this comes back in my book, is what kinds of leadership styles we are going to need, and what the focus of the leader will be in the future, when data management is important and AI has become part of our work cultures. Because right now, I see too many business leaders still having poorly developed business cases for why they want to use AI and how to implement it. That’s why many companies fail to go beyond the pilot stage. It’s difficult to scale up because they haven’t really thought through the business case. Most of them are still in the ’90s, when they said, “I hired an IT department; they tell me which software and hardware to buy. You know what? I’ll hire data scientists; they’ll probably do the same.” Well, no, then we have a problem, because using AI and data for your leadership and your strategy is more than just saying: okay, organizations consist of data; data scientists, please analyze the data and tell me what the data say, and that’s going to be our strategy.

No, because first of all, I always say, if that’s the case, then data scientists are underpaid, because they’re basically doing your job. Data scientists are not going to say “the data say this, and this is going to be your strategy.” It’s really that data analysts are there to answer the right kind of questions with the right kind of data. And who provides those questions? You, as a leader. So again, going back to my senior leaders, I tell them: that’s your responsibility. You need to be clearer than ever about the purpose of the organization and the questions that need to be answered for your organization in this specific industry. So purpose-driven leadership is extremely important in the future. A second thing: we have these data scientists and business analysts sitting there as service departments for data management, but they hardly communicate.

They’re complete silos in most organizations today, because the leadership has never made a business case for where and how to implement those data management practices at the scale of the organization, in how we operate right now. So we need to connect different departments, because different departments do collect different data and have different goals. For example, I work with several multinationals at the moment where I see they have an internal business service to help out with data management and the application of technology, but it isn’t being asked to help the different departments, because the different departments have their own way of working: this is how we used to do it. They have their experts in the domain, but they don’t have the language of someone who works with technology. So there’s no communication.

So leaders also need to become more inclusive. This is the second type of leadership we need: inclusive leadership, the embrace of diversity, open-mindedness, the humble leadership that we put there. So the big idea, Monica, that’s the next one, the big idea that we put out here is this: instead of finding a purpose for your data (we have data, so let’s see what the data say and define our purpose from that), you have to find your data for the purpose of your organization. That’s an important responsibility. Okay, next slide. So, realizing that purpose-driven, inclusive leaders are important: the business model of tomorrow, or even today, when it comes down to bringing AI into our organizations, is going to be a collaborative one. Together with my friend and collaborator Garry Kasparov, the chess great, we developed a model that we called Triple AI. For those who are interested, we published it in Harvard Business Review, and they’ve also selected the article for inclusion in a special issue on impactful articles. We made the distinction between machine intelligence, which is artificial intelligence, and human intelligence, which we call authentic intelligence, because even though AI can, for example, imitate and model emotions, it’s never going to be an authentic connection. Both work together to create the third type of AI, which we call augmented intelligence, and that augmented intelligence is our focus when it comes down to human-centered AI.

How do we augment, with our responsibilities in mind, including our responsibility to define how we want to see our organizations, our industries and our societies? Okay, next slide. So this means that we have a transformation in viewpoint. We start from “the intelligence of machines and humans is interchangeable”, the zero-sum game framework with all the fears that we lose our jobs. No: we have different types of intelligence, and it’s a matter of elevating, of augmenting. The end user is human; the end user is not the machine. That’s a different framework. Now we talk about a framework of implementation: artificial intelligence supporting humans.

Okay, next slide. So in order to achieve this, and this is another thing that we are really focusing on: how do you train and educate people, not only our leaders but also our youngsters, to realize that this is the room we as humans will have in our organizations, and that these are also our responsibilities? I love Thomas Friedman’s quote that the companies doing best are the ones creating what he calls “STEMpathy” jobs: STEM, science, technology, engineering and math, together with empathy. That’s the future. So that’s why we suddenly realize we need to focus more on our soft skills, even in an AI-dominated context. And I see this slowly happening in Singapore. For those of you who know Singapore well, it’s a city state that likes to control a lot of things. Of course, we’ve been effective in our COVID-19 fight, but it’s pretty intrusive; you don’t do anything without the government knowing it. So the education system also has a little bit of a problem. Singapore students are number one in the world when it comes down to mathematical skills, but they’re also number one in the world when it comes down to anxieties and fears in life.

I always tell the story of when my daughter went to a local school here. When she was two and a half, they already told us, “Oh, she’s doing well in this and this and this, but not in this and this; you should do something.” And I said, “Oh, I’m not worried.” And the teacher said, “You’re not worried?” I said, “No. I mean, we speak three languages at home. Her head is chaotic at the moment. At age seven, eight, all of this is going to come out at once, but that’s how the human brain works.” And soft skills are best trained when children are little, so let them play. So it’s a very difficult thing for them to shift the mindset. Then I found this picture and I wanted to share it with you. This is a Singaporean school. It says “inspiring imagination with creativity.” I always ask my students here: I see two things that may be wrong with this picture.

The first one is that it should be inspiring creativity with imagination: develop your skill of imagination, and you become creative. The second thing is what probably most of you see: it’s pretty structured. They wear the same uniform, they have the same posture. Yes, they have a piano in front of them, and that’s creativity, they say, but it seems like: yes, you can be creative, but within the box. So you see, it’s very difficult to leave that mindset behind, and I see the same thing in the corporate world right now: it’s very difficult to move from the mindset of replication to the mindset of augmentation. For example, in the financial industry, I see that when people define AI in terms of efficiency, because they create value with AI by being more efficient, they have a choice: they can automate, or they can augment human employees. In the financial industry, almost without exception, the budget goes to automation only, not augmentation.

Okay, one more point, next slide. This is the last point, but I find it extremely important. We see this room for people: our soft skills, the common sense, our leadership and our collaboration with AI. But there is also the question of how we will use AI. That touches upon the ethics, the responsibilities that we have in how to use it. Together with the World Economic Forum, I did some projects, and they like this quote of mine, where I said, “We can use machines for good if we are clear about our human identity and the value we want to create for a humane society.” The reason why I said this is because I want to put the responsibility with humans still, and not with the machine. In my view, you can’t say a machine is bad or good, because the machine doesn’t decide.

So next slide. I’ve been working a lot on AI ethics; you can show all the points. And I wanted to tell you a few things about how I look at AI ethics, because I feel that for a lot of people, when you say AI ethics, the assumption is implicitly already there that AI has ethics and can actually willingly decide to do something bad. And that is not the case. So first of all, we say: because AI is rational and can be superior in certain tasks, let’s use it. But then suddenly we realize: okay, AI has biases. Of course, because it learns. Especially in organizations, where, like I said, it’s supervised machine learning at best at the moment, it basically replicates our data, our mistakes and our good things. So I always say that complaining about an AI bias is the same as complaining about your image in the mirror, because AI uses your data and amplifies it. It’s a mirror.
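The mirror point can be shown with a deliberately naive sketch. This is an editorial illustration with invented data, not an example from the talk: a model fitted to biased historical hiring decisions simply hands the bias back.

```python
from collections import Counter

# Hypothetical history of (school, hired) decisions that systematically favored school A.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 20 + [("B", False)] * 80
)

def fit_hire_rates(records):
    """'Learn' the hire rate per school, the way a naive supervised model fits its data."""
    totals, hires = Counter(), Counter()
    for school, hired in records:
        totals[school] += 1
        hires[school] += int(hired)
    return {school: hires[school] / totals[school] for school in totals}

model = fit_hire_rates(history)
print(model)  # {'A': 0.8, 'B': 0.2}: the bias in the data, reflected straight back
```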

But a lot of companies, and I blame Google a little bit here, because Google has ethics as a service, have started to believe [inaudible] has become a technology issue, the techno-solutionist mindset: why wouldn’t we also be able to solve the ethics problem with technology? Look, when we look at chess, we’ve made AI a much better chess player than humans can ever be. Garry Kasparov told me he even learned two or three new moves simply from looking at AI, not only the one that beat him in ’97, but in the last few experiments. He said they’re even becoming creative at it. So it’s simple logic, then, to assume that if AI can grow to become better than humans at playing chess, then AI can also become better than humans at being ethical, and of course we should trust AI in that sense. But like I said, AI doesn’t have intentions. It doesn’t have common sense, and it can’t make judgments.

Most of our decisions where ethics becomes important are gray zones. If you can’t make a judgment call and you don’t have any cultural sensitivity, how are you expected to do this? Recently, the United Nations published a report where they said a drone decided by itself to attack a human. So killer drones are there. But to me, that’s the same as saying the gun pulled the trigger itself when actually I pulled the trigger, because the drone has learned from humans. First of all, we made the choice to use the drone, and we put information into the drone, which then, because of a self-learning mechanism, allowed the drone to make a decision. But a choice and a decision are different things. A decision is an execution. I can tell Des to do something and he’ll execute it, but it was my choice. The responsibility lies with us.

So do bad machines exist, such that we can say “the machine did it”? No matter how much Google may say, “Hey, it’s a technology problem, and if your algorithms produce unethical outcomes, we’ll just come over with ethics as a service”: no, it isn’t. AI is a tool. That’s what we have to understand. But that’s a good thing in a way, because it can help us to identify our blind spots. It amplifies our biases; we can learn from that. We can use it as a teaching tool. It can actually make us more ethical. And this is the final point that I want to address with those four points, where I say there’s room for humans in organizations where AI is present, but it means taking more responsibility. It means that we need both what I call digital upskilling and human upskilling.

So yes, we need to learn about AI, we need to become tech savvy, but what we need to do even more is become better at what we’re good at and take up those responsibilities. For those who really want to know more about ethical AI, you can see at the bottom of the slide that I recently published a paper in AI and Ethics, a new journal that I advocated for; I’m one of its founding editors, and we’re really looking for papers, because there’s a lot of interest in AI and ethics. “The ethical AI paradox” I wrote together with Garry Kasparov, making that comparison: yes, AI as a machine can become a better chess player, but it can’t become a more ethical player at the moment. Okay, thank you. So that’s a little bit of my thinking over the last few years.

Des Dearlove:
Wonderful, absolutely fascinating. We have people joining us from all over the globe as usual: someone from the UAE, as you would expect, someone from Singapore, from Poland. Robert is a bit concerned about your reference to Taylorism. I think he perhaps jumped in before you’d had a chance to develop your argument, but he’s concerned that you gave any air time at all to Frederick Taylor. But modern management does start with Taylor, whether we like it or not.

David de Cremer:
Yes, no, let’s put it in context. I don’t refer to it if I don’t need to. The only reason why I refer to it is this: when we talk about technology in organizations, the buzzword is innovation, innovation, innovation. But actually, if you look at how we’ve been running our organizations, we’re very poor at management innovation, because Taylor 1911, bye bye humans, welcome system, is still present today, seen in the metrics cultures, in the KPIs; everything is metrics. It’s actually because of him that we are afraid that AI will take over, because we’ve created so many closed systems out of it, with only metrics, and of course AI loves that. It’s statistics. So that was the only reason.

Des Dearlove:
No, no, I agree. And the organizational form is still very mechanistic, and to Robert’s point, it is still largely driven (we’re not advocating it, but it is) by efficiency issues rather than creativity and innovation issues. I didn’t mention some of the other places we’ve got people from: India, and did I mention Poland? So, a very global audience. My take on it, David, if I’m getting this right, is that management as we understand it, which is largely keeping the status quo, keeping things ticking along, is something AI can actually do; if you’re just a manager, AI can do your job. I think you’re making a distinction between management and leadership. Is that right? And it’s the leaders that we need; that’s where the soft skills come in, because AI can’t do some of the things that we would hope our leaders can do. Is that a fair reading?

David de Cremer:
Yes, I make the distinction between management and leadership. And if we continue with management as it is, yes; I mean, that’s why I called it management by algorithm. For managers today, about 55%, 60% of their daily job involves data management and administration. So you can see already what the consequence would be, and why I tell my MBAs: don’t act like an administrator. We need a different mindset, and that’s the mindset of developing your curiosity, your interpersonal connections, which fits with the feeling economy. So this is for our leaders, of course, to take responsibility for, but also for our employees, for everybody, actually. Your salary is going to be determined to a stronger extent by whether you have those capabilities. So business gurus tell you AI is not a threat; we will create jobs where we can be creative and deal with complexities.

Fine, so you need to build those skills more, but there’s a problem. If you look at the world’s jobs out there today, at maximum 10% of them are creative jobs. So there’s a crisis that I’m predicting. I mentioned the financial industry already: when it comes down to efficiency, because efficiency is productivity, and that’s innovation in our view, what we see is: oh, we use AI to promote efficiency, we automate. A lot of it is automating the financial industry with service bots. But you can also say, “No, I augment; I make my employees better,” which means you need to redesign or redefine your jobs already. What are the jobs of the future?

So what I see is that automation is fine for cost cutting, but I don’t see organizations putting in a lot of money and investment at the moment to really bring those jobs of the future, which we need to define today already, into the picture as well. I always say any cost-cutting strategy requires an investment strategy if you want to have sustainable value. So the crisis that I predict in the next 10 years is that we don’t have an overlap of cost cutting and investing. We’re going to get a workforce that will run into problems no matter how much we promote that creativity, because those jobs are not out there, and there’s not enough investment in them at the moment. It’s one of the big projects that we’re pushing now as part of our societal responsibility.

Des Dearlove:
Well, I’m certainly too old to learn to be a coder, so it’s good news as far as I’m concerned. For those of us who lean perhaps towards the creative side, I think it’s good to hear that there is some good news. Tell me about a concept I saw, which I think is linked to this. You were talking about the circle of irony; I saw an article you’d written about it, because this kind of goes to the nub of the way that AI could be [inaudible], I think.

David de Cremer:
The circle of irony, yeah, I think that was in MIT Tech. So this is a discussion, again, of mindset: how do we use data? And this was applied to HR analytics, or people analytics; you can’t escape from it. And there are many positive things. Of course, there’s nothing wrong with managing a workforce in more accurate ways, so that we can anticipate things, as long as it serves humans. But I do see an ethical challenge emerging there as well, because, and this is the interplay between management and science, some of my research shows the following. HR analytics basically came in to say: you know, we need to make better predictions with respect to people so that we can manage our workforce better. But here’s the thing: manage your workforce better for what purpose?

Is it so that, as a company, you can be more efficient? Or is it so that you can actually make people grow in their jobs and be supportive, as we would like to see? It’s more the first one that we see. I once had an article saying: once upon a time, organizations consisted of people; today, they consist of data. Once you start looking at people as data only, in line with the efficiency drive I outlined, which is very prevalent, we go down the road where people management basically becomes data management, and then you have a problem, because then you see depersonalization. That works in two ways. If people feel depersonalized by these approaches, it may seem rational (we are fair because we’re transparent), but people don’t feel treated as human anymore, so they will push back; you’ll see negative behaviors emerge. But also, the mere experience of depersonalization makes people act automatically.

So it’s a cycle that you start, and you have to be very aware of it. And when I see how we train our HR managers, that aspect of how you deal with that kind of work culture, and how you recognize it, is just completely absent from the curriculum. The only thing I see now is the statistics: basically AI being brought in, machine learning as a statistical method that can help you predict. Will people leave the company? Can you anticipate? Can you do this? But no, it’s about using these data to build a better, more motivating culture. One final thing on people analytics: companies (Microsoft is also advocating this) use it in recruitment. So what do data do, as statistics? We define the problem. We say we need to find someone who can do this and this and this.

These are the criteria, and AI will be great at scanning everything and checking those criteria. But when I hire someone as a researcher, I also look at something else, something that is not a criterion I can measure: potential, because we want people to grow as well. I always ask my students: you pay people when you hire them, yeah, of course, but do you only pay them to do the job as it is today? Do you let them grow? Oh no, actually, I cut them short. Well, that’s what an AI also does. An AI only hires based on what you can do today, for the job as it is required now, but I want people who can grow. So the human touch is there, and I see that it’s not really being addressed sufficiently. That’s the cycle of irony: we start again with Taylor, we treat people, I guess we bring in [inaudible]

Des Dearlove:
And Robert, thank you; to your point, I see you just made a note. Yes, you have advanced the conversation; it’s been a helpful intervention, so thank you. The other thing is, you’re talking about an investment in soft skills. My son has just completed his degree in philosophy, and he’s now about to take a masters in philosophy. And he said to me, “Well, my worry is, what’s the use of that in the real world, in this technology-fuelled world?” And I’m saying, “Well, hold on a minute, you may just be onto something here, because in terms of our leaders, perhaps we need something closer to Plato’s philosopher kings.” I mean, is there a role for some of the classic subjects, some of the very human things that only humans can wrestle with? Philosophy is probably a good example of that.

David de Cremer:
Oh, definitely. I mean, we need it, for two reasons. First, look at COVID-19. What we see with COVID-19 is interesting: in my view, the narrative of big tech, of Silicon Valley, has basically come to dominate governments as well. Technology is inevitable; there’s only one way forward. For example, in Singapore, it’s in the newspapers even today: once the pandemic is over and we’ve opened up (because they’re opening up now; you have to live with the virus), the workplace will be different, meaning it’s digital. It’s completely that way. But that’s only one way of looking at it. On the other hand, we’ve seen that during the pandemic, during the lockdowns, never have so many people been looking online for courses in philosophy or psychology, having existential worries. I mean, if we don’t address that, we’re going to have a workforce in the next few years with a lot of burnout, people who don’t see meaning in their jobs.

That’s why we see so many people in surveys indicating they want to change their jobs: up to 80% in the UK, I think it was 77%, want to change their job. That’s not because they think, “Oh yeah, I want to earn more money.” No, it’s because they’ve started thinking. So these things are very important. The second reason why I believe this is that we’ve arrived in an era where we need more generalists. We’ve been training people to be experts. So if you ask a company about total quality management: oh yeah, that’s an old concept. But actually, if you look at total quality management and you listen to people, they will tell you: yeah, we have no people anymore who see the whole cycle and really ask, is this what we need to do?

Do we really want to produce this? We need more generalist thinking. So philosophy, in that sense, definitely has a future to me, because we need that kind of thinking and logic. Because, like I said during my presentation, the room for humans is there, and we need to take up that responsibility. But I would always say: with philosophy of mind, you’re a very strong reasoning person, you have the critical thinking skills; just make sure you’re also tech savvy. Tech savviness is needed, but being tech savvy doesn’t mean you need to be a data scientist.

Des Dearlove:
Well, hopefully all the kids growing up now are sort of digital natives in the way we weren’t. Listen, we’ve overrun, and we did start a little bit late. I find this topic, as you can probably tell, absolutely fascinating, and I hope it’s one that we’ll have an opportunity to come back to with you, David, because I just think it’s so important right now, and there are so many different dimensions to it. So for now, thank you very much. Next week, we’ll be joined by another shortlisted thinker, Lindsey Pollak. So please do join us then. Thank you.

David de Cremer:
Thank you, everybody. Thank you.
