Decoding AI’s Grip on the Workforce

Hatim Rahman is an assistant professor of management and organisations at the Kellogg School of Management, Northwestern University. His research investigates how AI and algorithms are impacting the nature of work, the labour market, and employment relationships. His forthcoming book, Inside the Invisible Cage, examines how algorithms control the workforce by shifting rules without notice or recourse. 

In this LinkedIn Live session with Thinkers50 co-founder, Stuart Crainer, Hatim explains how technology tends to reflect and amplify the values and priorities of the organisations that implement it. Some, for example, prioritise efficiency over employee wellbeing, which can affect how algorithms work.

For workers in the gig economy and increasingly beyond, Hatim says, their experience is like being in an invisible cage, where the rules and guidelines for what is rewarded are opaque and changing. However, it doesn’t have to be like that. Organisations can make AI and algorithms transparent. To foster trust and accountability, they need to bring in diverse stakeholders during the design phase of AI systems.

Listen to more from Hatim on how AI systems can be used to attract new talent, expand the skills pipeline, and provide wider access to higher-paying roles, including for lower-power groups that have historically been left behind.



Transcript

Stuart Crainer:

Hello. I’m Stuart Crainer, co-founder of Thinkers50. Welcome to our weekly LinkedIn Live session, celebrating some of the brightest new stars in the world of management thinking. In January, we announced the Thinkers50 Radar Community for 2024. These are the up-and-coming management thinkers we believe individuals and organisations should be listening to. The 2024 list was our most eclectically challenging yet. This year’s Radar is brought to you in partnership with Deloitte and features business thinkers from the worlds of fashion retailing, branding and communications, as well as statisticians, neuroscientists and platform practitioners, from the Nordics to New Zealand and Asia to America. Over the next few weeks we’ll be meeting some more of these fantastic thinkers in our weekly sessions, so we hope you can join us for some great conversations. As always, please let us know where you are joining us from today and send in any comments, questions, queries or observations at any time during the session. Our guest today is Hatim Rahman. Hatim is an assistant professor of management and organisations at the Kellogg School of Management, Northwestern University.

His research investigates how artificial intelligence, undergirded by algorithms, is impacting the nature of work and employment relationships in organisations and labour markets. His research and teaching have received numerous awards. In 2023 he was named one of the best 40 business school professors under 40 by Poets&Quants. Hatim’s book, Inside the Invisible Cage: How Algorithms Control Workers, is coming out in September, published by the University of California Press. So this is really an exclusive look at the forthcoming book. Hatim, welcome.

Hatim Rahman:

Thank you so much, it’s an honour to be here.

Stuart Crainer:

Can you tell us about the genesis of the research and of Inside the Invisible Cage? How has your research evolved over the previous 10 years?

Hatim Rahman:

Sure. So I started my research career at Stanford, in the engineering school, in a program called Management Science and Engineering. And I love that program because it really tries to bring together research, insights, and practitioners at the intersection of technology and management. When I was there, a lot of the engineers were working on and with organisations building digital platforms such as Uber, Upwork, Fiverr, Lyft, Amazon, you name it. And there was this current – especially back in 2012, 2013 – that the future of work was going to be controlled by AI and algorithms, viewing it to some degree as an optimisation problem, just as you would a product market. And I was frankly caught up a little bit in that rhetoric and that agenda, thinking, yeah, this really could be the future of work – and I think it is. But as I started doing my research, my advisor and my group have always been centred on: let’s look at a phenomenon through many different lenses, right?

Because if you look at it through one lens, you could see it as an optimisation problem. But if you look at it through another lens you see something else – especially in what I started to focus on: high-skilled online labour markets like Upwork, Fiverr and Topcoder, which were trying to build an eBay for high-skilled work. The idea was that, just as you can find a product by searching the internet, you should be able to find any high-skilled worker – a professor, a statistician, you name it – on these types of platforms. And so that’s what really got me interested in looking at these platforms through a different lens and speaking to workers: how were they experiencing the platforms in their day-to-day?

Stuart Crainer:

So really, your story is one of how we view the technology, and how that view has changed in the 10 years since you started working in this area? The technology – AI in particular – was seen as something that would replace mundane and repetitive jobs; it wasn’t seen as having an impact on managerial jobs.

Hatim Rahman:

Exactly.

Stuart Crainer:

But in recent times there’s been an increasing realisation that the technology has an impact on managerial jobs as well.

Hatim Rahman:

That’s right, and when you look at technology’s impact on management in organisations, there are two central insights that come through my research and others’. One is that a lot of technology encodes and amplifies organisational values and priorities. What I mean by that is that if an organisation values efficiency over other outcomes, that usually gets reflected in how it designs and implements a technology, versus prioritising other types of outcomes like worker wellbeing, and so on. It’s not an either/or, but it does reflect priorities. The second thing that my research and others’ highlights is that technology very rarely replaces an entire profession, which is the fear that is often articulated in the media. There’s a study that I often cite in my talks: since the 1950s, only one occupation out of 270 has been eliminated by automation, which is the elevator operator.

What that generally shows, going to your question, is that it’s usually the nature of work that’s going to change, for all types of workers. Right? The way we are doing this interview looks very different than it would have even 10 years ago. So yes, with AI we’ve now seen that the scope and scale of the workers affected, and of the work whose nature is going to change, have increased quite significantly.

Stuart Crainer:

It’s interesting because they’re kind of profound issues, aren’t they?

Hatim Rahman:

Yeah.

Stuart Crainer:

So forgive me Hatim if I go back over what you said and try and decode it.

Hatim Rahman:

Sure.

Stuart Crainer:

Technology is an expression of an organisation’s values – the idea that the way an organisation uses technology expresses its values is really interesting in itself.

Hatim Rahman:

Yeah, and that’s… if I can connect it back to my book, where I bring forward this metaphor of the invisible cage. In the gig economy and on platforms, what I have seen in my research, as well as others’, is that there was this priority and value placed on trying to maximise – to optimise – matches and engagement and interaction, which at a very high level makes sense, right? Why wouldn’t you want to try to match people, and so on? But it overlooked a lot of workers’ wellbeing – how they interpreted and felt about being subject to algorithms and AI that could change without notice, recourse or explanation. And so that’s where I brought forward this metaphor: for a lot of workers in the gig economy, and increasingly beyond, their experience is like being in an invisible cage, where the rules and guidelines for what is rewarded are opaque and changing. It doesn’t necessarily have to be like that, right? Organisations can make AI and algorithms transparent; they can reveal their data sources.

Of course there are always competing interests, but with my research and in the book I want to really highlight what organisations prioritise. And we may get to this with generative AI as well, but I think it’s very much reflected in that technology too.

Stuart Crainer:

So your research kind of explains that employers can use algorithms to shift rules and guidelines without notice, explanation or recourse for workers? So that’s where they kind of… Well, it is so opaque and ever-changing. So they’re in a no-win situation, aren’t they?

Hatim Rahman:

That’s right. And one of the reasons there’s this justification, this rationale, for making them opaque and dynamic – I trace this through in my book as well – is that a lot of the online world has been heavily influenced by eBay. Going back to the early-to-mid ’90s, eBay was very much a pioneer in facilitating online transactions and scaling them at a global level. Right now it’s routine to buy products and interact with people online, but back then people were not sure: how could I trust buying, say, a VHS from somebody I’ve never met? So eBay rolled out the five-star rating system as a mechanism to give people who don’t necessarily know each other a sense of how trustworthy the person they’re buying from is. And for a certain period it worked well, and almost all platforms – Airbnb, Uber, again Upwork, Fiverr – initially copied and pasted that model. But one of the lines I have in my book is that “people are not products.” Right?

You took something that worked well for a certain period in a product market and applied it in a labour market, and you saw the consequences. One thing that happened over time, especially in the 2010s, is that these rating systems were very easily manipulated and gamed – by workers, by outside actors – and we saw that on social media as well. Right? And so the response of a lot of online platforms was to make these rating systems, evaluation systems, these algorithms, more opaque. So again, it makes sense why they initially did so, but what my book highlights is that it tilted too far in the opacity direction, where it created this invisible cage. So in the book I try to articulate how to find a medium between the initial state, when everything was very transparent and easy to understand but subject to gaming and manipulation, and where we find ourselves right now, where it’s opaque and no one understands – is your book going to be absorbed by a generative AI system so that someone can copy and paste it, right?

I think a lot of people agree, that’s not where we want to be or where we want to go.

Stuart Crainer:

And you had a spell working in data science at Airbnb, and I mean that’s 10 years ago or more. But how did that inform the development of your research?

Hatim Rahman:

That’s a great question. I think there were two things. One was that when I was doing my PhD I was doing some computational social science work, and just being there was really amazing in terms of the people they brought in. The data science team I worked with are, to this day, some of the brightest, most curious people I have interacted with. But one thing also stood out: at that time there were concerns about how Airbnb was impacting local communities and people who don’t have access to housing. I was looking for that conversation to happen, and watching what type of research was going on – it was there, but not necessarily at the forefront. Of course, I can’t speak for Airbnb, but from my observation of being there I didn’t see it at the forefront, and a lot of people – well, a lot of my friends and others who have subsequently left – have talked about this as well: there wasn’t enough of a reckoning. And you see that now with New York and other cities around the world, including in Europe, putting much stricter guidelines and guardrails around how Airbnb can operate. But it keyed me into the idea that we’ve got to be looking at the implications of these technologies and these platforms with multiple voices and stakeholders in the room.

Stuart Crainer:

And presumably the danger is that we are excited and seduced by the technology and don’t give any thought to the implications for a broad group of stakeholders.

Hatim Rahman:

Yeah. This goes back to what we started the discussion with: a technology’s features or capabilities are very rarely the main determinant of its impact. Again, it goes back to organisational values and priorities. I give this example in many ways, right? The technology for pilots to automate their flying has existed for quite some time – I teach this in my class. Back in, I think, 2016 or 2018, there was an article about how the technology exists to automate a flight, but for lots of reasons we haven’t taken pilots out of the cockpit. Instead, if you look at how pilots are trained, they have to learn how to fly a plane manually, and the technology comes in, ideally, to enhance what they do – and we’ve seen that. Right? They haven’t made the decision: “Because the technology can automate what we do, we’re going to reduce our training requirements, we’re going to reduce our safety requirements.”

They haven’t made that decision, even though the capabilities of the technology can automate a lot of what pilots do. So I use that example to say that in any profession going forward, if the capabilities exist to automate – which, again, rarely happens fully – it doesn’t mean it has to negatively affect the entire occupation or industry, right? We can choose, as organisations, as occupations, as a society, to figure out ways to enhance the way people do their work.

Stuart Crainer:

There are still a few elevator operators left.

Hatim Rahman:

One of my teaching assistants this year emailed me and told me that they saw an elevator operator in Chicago as well, so I may have to correct that statement.

Stuart Crainer:

But yeah, it’s an interesting thing, isn’t it? The number of professions that have been completely wiped out is very small.

Hatim Rahman:

Yeah. I mean, I usually use that to spur conversations, to the earlier point that the nature of work often changes, right? The distribution of work and professions changes. We don’t have the same distribution of professions today as in the 1950s; we have very different types of work, new types of jobs. And so what some of my emerging research is looking at is: given that technology in aggregate creates more jobs than it displaces – a lot of economists and other scholars have shown that – who is getting access to those jobs? I think that is one of the most important questions going forward, given the capabilities that we’ve seen with emerging AI and other technologies.

Stuart Crainer:

So that’s the crucial point, isn’t it? That technology in the end creates jobs one way or another?

Hatim Rahman:

Yeah, in the aggregate, right? When we look at the macro trend, really the question that my research and others’ brings to the forefront is: what path are we going to take while those jobs are created? In the US, the UK and elsewhere over the last 30 years, we’ve seen a hollowing out of a lot of middle-income jobs, in ways that haven’t necessarily been beneficial, and some of those top jobs generally go to people like us, who already have many advantages with our education and our networks. And so again, thinking through: how can organisations help ensure that workers who have traditionally been left behind also have access to those new jobs, through upskilling, reskilling, and so on?

Stuart Crainer:

Yeah. Aleksandra from Poland says, “How about customer service workers? Many have been let go and replaced by bots.”

Hatim Rahman:

So that’s a really interesting observation. I would say on that point, too, that it reflects organisational values. When I’m driving my car I hear this advertisement from Discover, I believe – don’t quote me on this, but it’s one of the major credit card companies – saying, “If you call our customer service, you will talk to a live human being.” Right? And we saw recently with Air Canada that they did try to replace some of their customer service with a kind of ChatGPT-style generative AI, and it backfired, right? Because it quoted an incorrect policy, and so on. So again, I do think some of the smartest organisations are figuring out how to use technology to enhance their customer service. But to the earlier comment, there are definitely organisations that are using it to replace customer service. I would say I haven’t yet seen data to understand what is happening in aggregate, but at least anecdotally we’re seeing the insight I mentioned earlier: it comes down to what organisations value.

Just one more comment on this. When I was at Airbnb – one of the most technologically advanced organisations in the world, I think – the largest group at that time was their customer service group, because they wanted it to be a distinguishing factor. When you call Airbnb you’re often calling to resolve very complicated issues – I can’t find the key, I can’t get in touch with a host, and so on – and they could have made a choice to automate that: “Oh, tell me about your problem?” But at that time they made a choice to really emphasise the ability to talk to a live human being.

Stuart Crainer:

Yeah. So that goes back to your earlier point, Hatim, about technology being an expression of organisational values, one way or another.

Hatim Rahman:

That’s right.

Stuart Crainer:

Somebody from LinkedIn says, “How do algorithmic management systems impact the autonomy and decision-making abilities of workers in various industries?”

Hatim Rahman:

Yeah, that’s a great question, and we’ve seen it play out. In my own work, a lot of it is tied to the employment relationship. For independent contractors – generally freelancers – my own research has shown that these systems significantly decreased their autonomy. Whereas in organisations where workers are generally full-time employees and higher skilled, such as in finance, and to some degree even in education and consulting, there has been an attempt to use algorithms to enhance autonomy and decision making. So broadly speaking, I’ve seen it fall along the lines of the employment relationship.

Stuart Crainer:

A lot of your research focuses on the gig economy, Hatim, and fundamentally the gig economy gets a lot of bad press. Is it a good thing or a bad thing? And are there examples of organisations utilising the gig economy in responsible, ethical and sustainable ways?

Hatim Rahman:

Yeah. No, that’s a great question. And one of the reasons I feel fortunate to have been studying the gig economy is because it’s been a microcosm for a lot of the issues at the forefront of society: autonomy, decision making, is it good or bad for society? As a researcher I’ll say that it is a mixed bag, right? Let me give you an example that I think is more accessible to people: social media. It has allowed people and voices to come to the forefront in ways that, I think we can all agree, have altered the way we think about issues – the way we think about conflicts, the way we think about business and society – and we want to retain that. We want to retain the ability for historically marginalised voices and groups to amplify their voices through social media, and so on. At the same time, as we’ve seen with elections and other scenarios, those same systems can be hijacked, or can be used to amplify disinformation, hatred – what some research calls “moral outrage.” Right?

We want to dial that down, right? And that’s where my research and others’ comes in: how do we optimise for some of the benefits that we’ve seen with the gig economy and social media while also recognising – in my work and others’ – that there are distinct harms and drawbacks? So to answer your question, I think we haven’t found the optimal ways yet, but there are emerging solutions. I’ve written in my book and elsewhere about worker cooperatives that are coming up, trying to provide the flexibility the gig economy offers while also maintaining and enhancing the autonomy and decision making that the previous question was highlighting.

Stuart Crainer:

In previous sessions we’ve talked with Leon Prieto and Simone Phipps about their work on cooperative advantage, and as you say, Hatim, you have done work on worker cooperatives. Do you think they’re likely to increase in number? What are the trends there?

Hatim Rahman:

So we definitely see a lot more interest and energy around it; the biggest barrier is that scaling these types of initiatives takes a lot of resources. Right? If you look at where a lot of the emerging technology and platforms are coming from, it correlates very highly with the organisations and platforms that have been able to get access to lots of resources – not just monetary, but human capital and investors. So to answer your question directly, I do see a lot of initiatives coming up, but one of the biggest questions is: will they be able to scale? And it’s not just up to them, right? It’s up to organisations, businesses and governments. As others have pointed out, a lot of the largest organisations, including Tesla and Google, have benefited from regulation in the US and other countries, which played a very key role in allowing them to scale at the time.

So I think that’s going to be one of the biggest questions at the intersection of these initiatives, what will regulation, what will investors, what will that mix look like to be able to hopefully scale some of these alternatives?

Stuart Crainer:

One issue we haven’t talked about is the impact of the pandemic, which I presume expanded the invisible cage.

Hatim Rahman:

Yes. To your point, in the pandemic, when a lot of work went remote, many organisations – again, if you look at their values and priorities – also began to implement opaque algorithms that have very large consequences for people’s careers. One of the biggest areas where I’m seeing this is not just the gig economy; it’s AI hiring systems. In speaking to workers and my students, this accelerated quite a bit in the pandemic. But the way these systems are implemented, the way they make decisions, is often opaque to workers, and some emerging research has shown that many of them disproportionately and negatively impact historically marginalised groups – people with darker skin, women – often, I should say almost always, for reasons that had nothing to do with their skill, capability, or fit with the job, but instead for reasons that are opaque and often reflect past biases that exist in organisations and in these systems. So yes, I have seen an industry-wide acceleration in the use of opaque technologies, in ways whose consequences we’re still grappling with.

Stuart Crainer:

A question from somebody on LinkedIn, Hatim. “How can organisations foster a culture of trust and collaboration amidst the increasing reliance on AI and algorithms for workforce management?”

Hatim Rahman:

Yeah, that’s a great question.

Stuart Crainer:

It’s a killer question, isn’t it?

Hatim Rahman:

Yeah. So let me try to answer it as directly as possible. One theme that I and many other researchers have talked about is that the design of these technologies, and the building of trust around them, has been in the hands of very few stakeholders, often people with homogenous backgrounds – data science, computer science. And one thing that I say – and again, I was in Silicon Valley, and maybe I’m a little bit naive in this – is that I do think people have good intentions when building these tools, but there has to be a recognition that no one group or expert can figure it out alone, especially in the design of these technologies. So to answer your question directly, I think it’s essential to bring in diverse experts and stakeholders in the design phase. Often they are brought in, but too late, when the consequential decisions have already been made. So that’s one way to do it.

The other thing that we try to bring forward in our research is that it’s very hard for a single organisation or group to figure it all out on their own, and so I do think there is a role for crowdsourced accountability systems. We have seen those really help with sustainability, and even in higher education to some degree: if you think about the U.S. News rankings and the Financial Times rankings, technically an organisation or university doesn’t have to pay attention to them, right? But they’ve become very powerful in making organisations change to try to meet those criteria. So one of the ideas I propose in the book, and in other work as well, is crowdsourced accountability systems, given how difficult it can be to get organisations to bring in diverse stakeholders, or to get governments to act – by design, governments are reactive in nature. So that’s another potential solution that I see helping to bring more trust to these systems.

Stuart Crainer:

I think there’s another question, which I think you’ve kind of answered really Hatim to some extent. “How can educators and policymakers ensure that the implementation of AI technologies in the education system promotes inclusivity, equity and accessibility for all students regardless of their backgrounds or learning needs?”

Hatim Rahman:

Yeah. As an educator I think very deeply about this. To answer that question, and tying back to the earlier themes: one essential question is, what are the key learning objectives and goals of the class? And then figuring out how to use technology to enhance them. Right? In my own class, for example, I prioritise in-class interaction and discussion to some degree. But I recognise that not everyone is comfortable participating live, in the moment, so through technology I try to offer alternative ways: you can submit your thoughts after class, you can send me articles that are in your wheelhouse, and I also make the recording of my class available to students, because people learn at different speeds and in different ways. Right? So I think it means thinking very deeply and, as I mentioned earlier, talking to students, talking to student services, talking to colleagues.

You would think that happens more, but you’d be surprised: we’re all busy, we’re doing different things, and it doesn’t happen enough. So creating spaces where you can encourage diverse voices to come in is really important, as is ensuring that people have the right incentives, because all of these people are busy doing their own work. So we should think about how to create spaces and incentives to bring these voices in on an ongoing basis – it’s not a one-shot thing. Right? When you mentioned the pandemic: how I was trying to promote inclusivity when everyone was remote looks very different now that we’re coming back, recognising that people still have vulnerabilities to different diseases that are spreading, and asking how we can accommodate those learners as well.

Stuart Crainer:

Another question from LinkedIn, thanks Hatim for that, is, “How can organisations balance the need for efficiency and productivity with the ethical concerns surrounding the use of algorithms in managing and evaluating employees?” And I suppose that’s the efficiency-versus-ethics balancing act, and I guess that’s always been the case. And I was thinking, as you were talking earlier about organisational decisions: in some ways they’ve always been opaque for ordinary workers.

Hatim Rahman:

Yeah, let me hit on that point for a second. You are correct, it is not necessarily new. One thing that I am trying to bring to light is that with these technologies, with algorithms and AI, the opacity is dynamic and it scales, such that often the people designing the systems don’t even know how they work. Right? There is a survey I show in class: over 60% of executives don’t know how the AI systems they’re using or considering actually work. That’s problematic, not just for the people who are subject to these systems but for organisations themselves and their leaders. Right? But going to the question: as you mentioned, there have always been these competing priorities of efficiency and scale versus slowing down and ensuring that we are inclusive and build trust. My argument, looking across industries, is that rebalancing to slow down is beneficial in the long term.

We’ve seen that with sustainability as well, right? If we had been listening to voices across the world, in indigenous communities, that had been saying we need to foreground sustainability in our construction and in the way we interact with our environment, a lot of the problems we’re encountering right now would likely have been at least mitigated to some extent. So I’m not going to make the argument that in the short term I’ll be able to convince organisations that slowing down is going to help their bottom line – we’ve seen that that very rarely works. But I think you have to take the longer-term view, because the long-term implications are not just for the organisation; they reverberate across communities and societies. Right? So my argument is that for organisations themselves, in the long term, it’s much more beneficial to figure these issues out now, because bringing in diverse voices and diverse expertise will lead to breakthroughs in the long run.

Stuart Crainer:

I suppose a related question is, where is best practice? Where are algorithms actually setting workers free? Are there any good examples?

Hatim Rahman:

Yeah. There is a movement called AI for Good, and I think that’s the way to do it. So for example, in class I talk about how researchers have used AI to detect modern slavery, and how AI is used to track – going back to the earlier question of what objectives we’re optimising for – sustainability: to find organisations that are skirting emissions rules, right? There are AI systems being deployed to try to optimise for societally beneficial outcomes. So I do think there are examples being brought forward. And going back to the example I mentioned for hiring, there are some organisations – one of the reasons I’m hesitant to name them is that things change so fast – that, instead of using AI to enhance their current processes, are using it to expand the pool of workers they are considering. Organisations and humans are limited in how many people, how many resumes, how many schools, how many cities they can consider and visit.

But turning it around and saying: how can we use AI systems to allow people from anywhere in the world to apply, and perhaps even give them feedback if they don’t end up getting the role? Right? So I think those are some promising examples – instead of looking at how we narrow the skills pipeline, how do we expand it and bring in people we could never otherwise bring in and assess, given the limitations of organisations and humans?

Stuart Crainer:

So where does your research go now, Hatim? What are you working on at the moment?

Hatim Rahman:

That’s a great question. There are two projects that I’m very excited about. One is the one I mentioned earlier: especially in the pandemic, we saw that – I believe it was Intel, or IBM – said that 50% of their jobs would no longer require a four-year college degree. And so there was some momentum and excitement there – in the US, for example, 60% of adults don’t have a four-year college degree. So I’m thinking more about how we can give more diverse voices, particularly those who have been historically left out, access to those higher-paying jobs that are created by new technology – thinking about workforce development, retraining and reskilling in the age of AI. That’s one area I have been very actively pursuing with my students and collaborators. The second project I want to highlight goes back to the earlier question of how we design more inclusive AI and technology.

We’re studying a healthcare organisation that is trying to design and build an AI system for ultrasounds in low- and middle-income countries, where a lot of people don’t have access to healthcare. Going back to your earlier question about AI for Good: in the healthcare space I think there are a lot of great efforts to design these technologies to give people access to care. The project we’re following starts from the fact that a lot of complications that happen in birth are preventable if people have access to ultrasound imaging. So they’re trying to design an AI system so that really anyone with an ultrasound probe could scan a foetus, and the system would be able to predict and give insight into how healthy the foetus is – are there complications occurring?

So that if complications are occurring, people who don’t traditionally have access to healthcare could go and see expert healthcare providers and hopefully mitigate the consequences for the foetus and the mother. And that’s exciting, because we’re looking at the very early design phase and trying to understand how they ensure that the AI system can be used by diverse populations and stakeholders. So yes, a little bit on the labour market, and on AI in organisations as well.

Stuart Crainer:

And are you optimistic that we will begin to use algorithms in positive ways and AI will work for good?

Hatim Rahman:

I think you have to have some form of optimism, at least for me, to be able to push my research forward, and I am optimistic. I do think it’s somewhat of a push and pull, right? I mentioned earlier transparency and opacity. I have been encouraged by initial regulation: in the US there’s been a proposal by the Biden administration, Alondra Nelson and others for an AI Bill of Rights, and the European Union is also trying to think more proactively, in part because of what we’ve experienced with the gig economy and social media. So I am optimistic, not just by nature but also because of some of the initiatives we’re seeing globally to try to create a better balance – mutually beneficial outcomes that allow for greater productivity, but in ways that are also beneficial for workers and lower-power groups that have historically been left behind.

Stuart Crainer:

Yeah. I guess one issue arising from what you’ve said, Hatim: are these issues universal, or do different cultures and nations interpret them in different ways? I mean, is there consistency in the way workers’ rights are treated throughout the world?

Hatim Rahman:

Yeah. No, that’s a good question. The short answer is that there is nuance. And there are nuances that are beneficial: when some large players, like the European Union and other countries around the world, put workers at the forefront and hold organisations accountable for their goals and outcomes, it can reverberate. If you want to be able to operate in certain countries and regions, you have to meet these criteria, and that can reverberate to other countries that have weaker protections, right? If you make the changes here and they work in ways that are beneficial, it allows organisations to see the benefits and, ideally, roll them out globally. And I think this is really important because, as you mentioned, with the gig economy, with sustainability, with a lot of these issues, they aren’t just local issues, right? They affect workers globally. I’m not an expert in all countries, but broadly, from my reading of the research, there is an important nuance that will hopefully help create those mutually beneficial outcomes.

Stuart Crainer:

Lots of food for thought. Hatim Rahman, thank you very much. I like crowdsourced accountability systems – an irresistible idea.

Hatim Rahman:

Thanks. And it builds on… again, I want to give credit to other researchers who have been talking about this as well.

Stuart Crainer:

Yeah, thank you very much. Hatim’s book Inside the Invisible Cage will be released in September this year by University of California Press. So make sure you pre-order that and check out Hatim’s work elsewhere as well. Hatim, thank you very much for joining us. Lots of food for thought, really appreciate the conversation.

Hatim Rahman:

Thanks Stuart, and thanks to everyone for their questions.

Stuart Crainer:

Thank you.
