AI is changing the way humans work. Some traditional skills are becoming obsolete while new skills, such as prompt engineering, are rapidly growing in demand.
However, one of the key ways that we learn new skills is also under threat. In our rush to optimise productivity by using new, intelligent technologies, we are separating junior workers from the experts they would normally have learned from.
The following is an edited excerpt from The Skill Code: How to Save Human Ability in an Age of Intelligent Machines by Matt Beane (Harper Business, 2024).
The Threat
In our quest for productivity, we’re using new technologies in ways that make novices more and more optional in experts’ work. That compromises healthy challenge, complexity, and connection—no matter who you are or what occupation you’re in.
I reached out to researchers to get coverage across two important categories: intelligent technology as a tool to do the work and as the platform for the work. In microchip design, for example, engineers literally can’t lay out a new design without using AI. Self-driving telepresence robots in eldercare, on the other hand, gave doctors a new way to “be there,” but they still interacted with residents using the same skills they always did. Put together, our rich data showed the same dynamics I found in surgery. Across industries, technologies, occupations, geographies, and kinds of work, “See one, do one, teach one” was becoming “See one, and if-you’re-lucky do one, and not-on-your-life teach one.”
Why? New tools such as machine learning, sensors, robotics, and cloud computing give us an unprecedented ability to reconfigure jobs in search of improved results. This has many potential benefits, but one of the main ones is to help extend and scale the value of a single expert’s skill. Let’s step through an example in chip design. A design engineer can run an AI-driven algorithm overnight that searches through many potential configurations for a microchip. Like a “fit the blocks in the square” puzzle, a bunch of components need to fit together—some to do with power transfer, some with computation, some that deal with signals between blocks, and so on. Tiny changes can mean better performance, and moving a block to improve things on one dimension can make things worse on three others.
Without the AI-enabled software, optimizing all these requirements could take a superb human engineer five to six days, with more opportunities for mistakes. But with AI-produced output, that engineer can use their subtle design sense and experience to make an intuitive assessment as to whether the AI solution is creative enough. The expert makes better use of their highest talent, and speed and quality both improve. Everyone wins there, right? Wrong. You know who loses by now: the junior engineer—and, by extension, the organization and the entire engineering profession, which are hobbling the next generation of talent.
Before that AI-enabled tool came around, senior design engineers relied more heavily on junior engineers to run the simpler portions of the analysis required to inform the chip design process. The senior engineers would help them prepare for that analysis and examine the output, giving feedback all along the way. Junior engineers asked questions during this process so they could improve their work. But as soon as the AI-enabled design tool was implemented, junior engineers were basically cut out of the action, firstly because novices are slower and make more mistakes, and secondly because experts must attend to novices, which pulls those experts away from doing what they’re best at. Removing novices from the work therefore improves speed and quality on two fronts.
But gains in efficiency and error reduction come with side effects. In this case, we are cutting workers who need to learn out of the loop in the name of increased expert productivity. In the short run, this is great—more and better chips, faster. But even in the medium term, the organization and the profession build less skill because the expert-novice collaborative link is broken.
Standing still is not an option if you’re on the tracks
Most corporations have yet to hear the whistle of the oncoming train. They spent just over half a trillion dollars on learning and training in 2022, but almost none of that was spent on the expert-novice connection. This is clear from the fact that we don’t even measure our efforts to foster novice learning alongside experts. And yet most organizations depend heavily on this informal, apprenticeship model. A 2011 Accenture survey revealed that only one in five workers felt they had learned any new job skills through formal training in the previous five years.
Paul Osterman, a researcher at MIT, recently did a far more rigorous survey, representative of the US working population: only half of those surveyed had received any training, and that includes those who went out and found it themselves, without employer help. We’ve just happened to set work up in ways where an expert relies on a more junior member of their occupation for help, and we take it for granted that the junior member learns in the process.
You can find rare exceptions in a few occupations where failure is not an option: EMTs, cops, surgeons, electricians, and so on. They have well-developed apprenticeship and certification standards. But even there, those standards stop once the initial training period is over, even though experts would tell you that they built critical skill after they headed out on their own. If we spend half a trillion dollars on formal training a year—if that’s what success in formal training is worth to us—how much should we be spending to understand and support the skill code that’s the basis of most of our valuable capabilities? One thing’s for sure: it’s not zero.
But here we are, giving no attention and devoting no resources to our adaptive lifeblood at the very moment we’re thinning it out through the way we’re handling technology. We can—and must—do better than this.
Fixing a problem takes seeing it clearly. That’s why it’s time to take a careful look at how this is playing out, through the lens of healthy challenge, complexity, and connection. This will give you a crystal-clear picture of this threat and the tools to help you know it when you see it in your work or organization. Let’s start with challenge.
Matt Beane has spent his career researching work and technology. His 2018 TED talk, “How do we learn to work with intelligent machines?”, has received almost two million views. Matt is an assistant professor in the Technology Management Department at the University of California, Santa Barbara, and a digital fellow with Stanford’s Digital Economy Lab and MIT’s Initiative on the Digital Economy. He is a member of the Thinkers50 Radar Class of 2021.