It starts with coal miners
In postwar Britain, the coal industry had just been nationalized. With it came new mechanized methods for extracting coal. So-called longwall mining replaced the old small-team approach. It was faster and more efficient. At least from management's point of view.
But when researchers from the Tavistock Institute studied what was actually happening underground, the picture was different. The old teams, small, self-organized, and built on trust, had been broken up and the work reorganized around the machines. The consequences were hard to miss: high absenteeism, frequent labor unrest, and morale problems the engineers hadn't even anticipated.
In short: The technology did what it was supposed to do. But the people around it struggled.
In their landmark 1951 study, Eric Trist and Ken Bamforth showed that introducing new technology without redesigning the social system around it could undermine both performance and well-being. From this work grew what we now call sociotechnical systems theory: the idea that technology and the social organization of work must be designed together. Not one after the other.
Why this matters for AI
The Trist and Bamforth study is 75 years old, but it was only the beginning. Since then, sociotechnical research has shown again and again that technological solutions work best when they also support meaning, well-being, and the social conditions people need in order to do their work well.
And yet, in many ways we're still making the same mistakes.
Today, organizations everywhere are introducing AI, and generative AI in particular, at a scale and speed beyond anything we have tried before. And most of the conversation is about the technology: Which tools should we use? Which platform? How do we prompt it? How much time can we save? What about compliance?
Fair enough. But talking about generative AI, or whatever technology the future brings, as just another digital tool misses the point. Yes, it writes. It analyzes. It advises. It builds. It completes whole tasks for us autonomously. It does things that used to be at the heart of professional work. And when technology gets that close to what people actually do, it doesn't just change workflows; it starts to change how people see their own role, their expertise, and their relationships at work.
The sociotechnical perspective says: pay attention to that. Don't only ask "does it work?" Ask "what does it do to the people who use it?"
What sociotechnical thinking looks like in practice
In practice, sociotechnical thinking means asking questions like:
- Value: Does AI create value for the people doing the work, or only for the people measuring output?
- Relationships: Does this change how colleagues work together, share knowledge, or support each other?
- Autonomy: Is the person using the tool still making real decisions, or has the AI quietly taken over?
- Identity: Does this support the professional's sense of purpose, or chip away at it?
- Learning: Is there room to experiment and adapt — or is this a top-down rollout with a deadline?
You don't need a PhD to ask these questions. But you do need a workplace where asking them is welcome.
Where STAIR fits in
STAIR takes the sociotechnical perspective and turns it into something practical: a structured conversation. Eight principles. A facilitation guide. A way for teams and leaders to talk about AI that goes beyond efficiency and into the stuff that actually matters: trust, autonomy, well-being, professional identity.
It's not about slowing down. It's not even about stopping at every point in your work life where you and your team may encounter AI. Let's face it: AI will be integrated into almost every aspect of the way we work.
But that doesn't mean we should stop thinking clearly about these things. A structured conversation like this is a meaningful way to invest a small amount of your time, now and in the future, when the technology around you is changing faster than any project plan can keep up.