
The Mayvin Podcast
The Research Hub Podcast - Ep3: How to Use AI Without Letting It Replace Human Judgment
Sarah Fraser and Sophie Tidman discuss the varied responses to AI in organisations, noting a divide between those embracing it and those resistant. They highlight the importance of using AI as a stimulus for thinking rather than a replacement for human judgement.
Thanks so much for listening! Keep in touch:
- Email us on mail@mayvin.co.uk
- Subscribe to our mailing list
- Visit our website
- Follow us on LinkedIn and Twitter
Sarah Fraser 00:00
Welcome to one of Mayvin's Research Hub podcast series, where we're imagining futures of people change. At the Mayvin Research Hub, we're exploring the key trends and advances impacting our ways of working and our ways of organising. As we look ahead, in each podcast we'll delve into future trends and consider how, here in the present, we can start to create the future we want in our organisations. Today it's me, Sarah Fraser, and my wonderful colleague Sophie Tidman, in conversation about everything AI related. We've been having many conversations with clients exploring some of the issues, and today we want to delve into some of that and think about the implications through our OD lens: what are the issues we're noticing, and what seems most important in the context of the organisations changing in front of us? Shall we start with what we're noticing in our conversations with clients? So basically, I bring up AI whenever I talk to a client now. It's always a focus, and I'm always interested to know how the people in the organisations we're working with are dealing with it. "Dealing with it" makes it sound like a problem already, and some people do see it like that; others are quite excited. It's really interesting to notice there's a whole array of responses and experiences, from complete denial ("it's not my thing, I can't deal with it, I'll wait until someone says I have to"), to "I think it could be useful, and I'm playing around with Copilot, but I'm a bit anxious about it, so I just tinker with it at the edges", to those jumping in with both feet: "I use it for this, it makes it so much quicker to write this report, I use it every day for life stuff as well as work stuff, and my organisation's all over it."
I'm quite excited about the potential, but there's definitely plenty of anxiety around it too. When I have these conversations, it feels like we're all in a race to make sure we keep up with the development of AI, and that's not easy. Does that resonate with the conversations you're having too?
Sophie Tidman 02:37
Yes, there's definitely a divide between those who are excited about it and want to get on with it, and those who are pretending it's not happening, maybe. And then there's "where do we start?" Even for those who are very excited, it can feel really overwhelming. We do a lot of work in government, and there it feels quite risky to use AI, because, as you've talked about in previous podcasts, the importance of experimentation and learning means starting small, and a lot of things feel too risky in government. So there's a sense of "yes, this could transform everything", but there are so many repercussions, because everything is so connected: the dangers around data, security, people's lives and livelihoods. That feels like a real worry, often in organisations where there are already a lot of legacy systems that don't talk to each other.
Sarah Fraser 03:43
Legacy systems that don't talk to each other?
Sophie Tidman 03:45
Yeah. And where the data... because a lot of the work with AI is quite boring. It's: have we got all the data together? Have we got the data?
Sarah Fraser 03:54
I remember a really great piece of advice as we were starting to use it at Mayvin: make sure you're putting really good data into it if you're creating your own AI agents. Here we're getting into some of the different understandings around AI: agentic AI, where you're creating an agent, almost like an employee, that might do some of the thinking for you, compared to large language models, the AI tools we can use day to day, which interpret the information or questions we put in and give us back answers. But that raises a point about AI, something to remember as we're thinking about, or feeling anxious about, how it might change the way we work and the impact it might have on our lives: AI is a mirror of our human world. It interprets the data we put into it or share with it. Yes, it can extrapolate from that, and it's becoming more and more effective at doing so, but it's like a rear-view mirror. What it is not, at the moment, although there's lots of development going on, is something creating new and different futures that we have no agency within. Understanding AI in that sense means recognising that we do have some control over it. I think there's a fear that we have no control, no agency, over what AI might become, and over what it already is.
Sophie Tidman 05:43
I saw an article on LinkedIn where, for International Women's Day, somebody asked AI for potential futures for women, and they were just very depressing, very status quo, very standard. That's because the internet is just a huge mass of information which tends towards the middle, whereas change comes from the edges, right, from the difference. And we've found it ourselves when we've tried to do some experiments around seeking stories. Whenever I try to generate images with AI, I'm so struck by how gendered everything is. But that comes down to the quality of the question we're asking of it, and the data we're putting in and asking it to look at.
Sarah Fraser 06:26
The data it's able to draw on at the moment, yeah
Sophie Tidman 06:29
Yes, but there's no reason why we couldn't direct it to different data. We often talk, don't we, about the data we use in organisations: is it backward-facing? Is it real? Is it present, or is it future, looking at the signals of an emerging future? Are you building your models on old data? In a BANI world, a brittle, anxious, non-linear, incomprehensible world, that doesn't make any sense, does it? But there are really interesting models around how AI is used with data in environmental contexts, for example to predict earthquakes: they're using sensors on animals and tracking in real time how they're moving. It's fascinating what's possible. So I think a really important learning from OD is that our energy and attention go where our questions go. If we're giving up our agency and just asking ChatGPT for answers, then we're going to get the same old answers. But if we're directing it with new questions and finding new and better uses, it's going to enlarge our sense of possibility, the available possible future.
Sarah Fraser 07:44
I really like that, and it brings me back to something these conversations keep returning to: the power and status that we give AI. So many of the people who are really developing AI tools talk about the importance of thinking hard about the problem you're asking AI to solve, and of learning how to use your AI tool, whatever it may be, really well. It is not something that is going to give you the answer. So there's a concern, or I certainly have that concern, that in our ways of working we're going to become really reliant on these AI tools: when you ask it a question, that becomes the answer. But it's not; it's another stimulus for our thinking, and that's where AI has the potential, and I think this is the really exciting bit, to augment, not replace, the work that human beings are doing. That sounds very big, but it is about augmenting the way that we work, not only replacing it. I mean, there is some replacement in the efficiency of things like data analysis, which just has huge potential, like the environmental example you shared. The data crunching it can do, or drawing out key themes from a really long report, those processes are brilliant. It doesn't mean it gives you the right and only way of looking at it, but it's a stimulus: what's interesting here? What does that make me think? What do I want to go back to in the full report? But then it comes back to: what status, what power do we give AI in the way that we're working?
And I think every organisation needs a conversation around that: around the status AI has in our organisational culture, and the role it can play as a colleague, as part of our team, as part of our way of thinking.
Sophie Tidman 10:02
Those dynamics are probably at play in the wider organisation, aren't they? We think that somebody else has the answer, that somebody senior has the grand view and can pull the levers. We probably project that onto AI as well. There's Ursula Le Guin, the amazing sci-fi writer, who talked about technology as simply how we mediate our experience of the world. It's a very wide view of what technology is, and it makes you think: well, there's some technology we value more than others. Technology is also storytelling; it's also how we organise ourselves as organisations. So technology can be how we bring people together, how we decentralise decision making. Whereas what we project onto AI is this idea of intelligence that is about controlling and dominating, knowing everything and making perfect decisions, when no AI can do that anyway. The information overload we have at the moment is just absolutely wild. So that can really shift our idea, because what you were talking about, really, is how we learn as an organisation: how can AI support our continuous learning? Not getting to the right answer, but continually learning together.
Sarah Fraser 11:33
Learning and adapting
Sophie Tidman 11:34
Yeah, so an organisation is a living system that continually has to adapt to this very volatile environment we face at the moment, and will continue to face.
Sarah Fraser 11:44
And it makes me think back to the writing of James Bridle; I was listening to one of his podcasts. James talks about our relationship to technology and its potential. He talks about the way that technology has the potential to connect us more to the world around us, which I think is some of what you're touching on there. But he also talks about the way AI is sold to us at the moment by the companies creating the AI tools in front of us: as the all-knowing intelligence. It's creating a concept of AI as quite dominating, because it fits the idea of corporate intelligence: there is an organisation over there that has gathered all the knowledge you could possibly need, packaged it up in a nice AI model, and here you go, now you can pay for it and we will answer all your questions. Whereas there is absolutely opportunity and potential here for us to be in relationship with AI, a creative relationship, with the stimulation, the asking of questions, the inquiry, that isn't dominating. Does that make sense? Have I made sense?
Sophie Tidman 13:18
Yeah, that makes total sense, and I think it's really evident in some contrasting examples from around the world. The UK government is very preoccupied; I think there's lots of interesting thinking going on, but what's in the public eye is things like the Humphrey package that was recently released. That's about making things faster, more efficient, more effective, while still doing the same thing government has always done. In some ways it's "let's make the status quo faster", and is that what we really need today? I would suggest alternatives, such as the way Taiwan has gone about it: they framed the purpose in a totally different way, as "how can we develop much greater trust between government and our citizens?", and that's how they used AI. Much more the Ursula Le Guin type: how do we convene, how do we develop collective wisdom, rather than this idea that AI is going to provide the answer quicker, faster.
Sarah Fraser 14:26
Yes, and having just recently been at an away day in the charity sector: in that sector AI is being used in a couple of ways. In humanitarian aid it's being used to share information fast and succinctly in close to real time. An example shared was UNICEF's U-Report tool, which enables real-time reporting, both in and out, for young people to share information about access and updates and what's going on in different regions, so that they know how to stay safe and where to get whatever they need in volatile scenarios. The other example that came up was how, in fundraising, it's being used to really personalise conversations and processes. How do you make sure that someone who wants to give money to a charity feels, and gets a sense of, how they are contributing to something that really matters to them? And that's not a bad thing; it's a brilliant thing to be able to personalise it and create more of a relationship between that person, the cause and the organisation. Yes, that's partly to do with efficiency, because there just isn't enough manpower in a charity to do that for every person, but with some AI support you can do it very effectively. So it's changing the nature of the relationship.
Sophie Tidman 16:19
Yeah, our assumption is often that AI is about centralising information, whereas in lots of the ways it's actually being used really effectively, like those two examples, it's decentralising. In the first charity-sector example you gave, people are given more data about their circumstances; it's not data taken from them, it's data they've gifted, and they become more seen. Think about all the services in the UK: stuff going on in pockets which is not seen, mistakes that happen regularly in public services, people being missed, and how long it takes for that to be seen. That doesn't necessarily need to be the case when you've got the right kind of data being fed through.
Sarah Fraser 17:19
So it has potential to be a democratising agent for information. There's potential, yes, and there are dangers too, if the information isn't true, or gets lost. But yes, it has potential. So then my thinking goes to: how would we encourage organisations to work with and integrate AI into their ways of working, taking some of this into consideration? We're talking about quite big-picture issues around AI, but I think there are some really practical, pragmatic things that organisations should be, or could be, considering. Have you got one you would start off with? Mine would be: what are we using AI to try and solve, or to try and improve, in our organisation? What's the problem it's trying to solve, rather than "we need to use it because everyone else is using it, so we just need to use AI"? That way you can end up causing more problems, or even more inefficiencies, by trying to integrate it just for the sake of it. So: what's the problem it's really trying to solve here?
Sophie Tidman 18:16
Well, probably summarising what we've talked about so far, I think being goal-oriented is really, really important. What's the really big challenge you're facing? Like everything in organisations, in leadership: what's your vision? It needs to be clear how AI is supporting the vision, rather than "oh, it could do this stuff a bit faster". Nice. I think I would add in there, there's a bit around... sorry, go on.
Sarah Fraser 19:15
Around the status that you give it: really working with the organisation to create a culture around the role AI has in the team, in the fabric of the culture. Is it there for efficiency? Are we trying to use AI as a stimulus for thinking? How does it play a role in your team's thinking? Because at bottom, what you want to avoid is it becoming the answer, the voice that is relied upon to answer questions. Both because, as we said, it's a mirror looking back, so it may not be working on the right information and won't give helpful answers, but also because it has the potential to get in the way of the relationships in teams and your organisational culture. If people do become, and I think we're a little way off, but if people become quite reliant on AI, could we get to the point where people are less likely to draw on each other's thinking, and just go to AI and ask it the question, because it will give you an answer quickly?
Sophie Tidman 20:44
I think this points to AI not being neutral in terms of power dynamics. It's not just how people relate to AI, but also: does it further centralise organisations? In the last ten years a lot of organisations have become more top-down, more command and control, and there's a real desire to move away from that, not least because of how hard it is to innovate when you're hamstrung, and how slowly things move when it's top-down and people at the top don't have sight of a lot of things; everyone has a fragment of the truth. So I think there's a wider design principle about how you can decentralise and distribute power more, which is also part of the ethical concerns around using AI. And that relates to treating AI as a partner, not just an instrument, and not as the answer.
Sarah Fraser 21:35
Yeah, very nice. Anything else, any others that you would add?
Sophie Tidman 21:39
And I think that also chimes with "augment, not replace". I'd also add transparency and explainability. There's this idea of things becoming a black box, and it's not just AI; we have that in organisations all the time. "How was this decision made?" I get that a lot from clients: "we just don't understand it; we've got these really belts-and-braces decision-making processes, and then it still happens in a room off over there, and we don't understand what's going on." That's the black box idea: you chuck something in and something comes out, and you don't know how that happened. And we've just assumed that AI could be like that: somewhere, somebody created this AI and knows how it works. It should be explicable, and that should be a demand. We don't have to feel like we're asking stupid questions, or that we don't know enough; we have the right to ask, and to get an answer.
Sarah Fraser 22:36
And I was just going to say, thinking about the anxiety of trying to get to grips with how we can use AI in all our various jobs and roles: there's nothing more helpful than hearing how others are experimenting with it, starting small and starting simple. Supporting people to learn and think about how they use it at a really small scale, and sharing those stories and ways of experimenting across your teams and between colleagues. It's that process of change: what's the next step, the next thing we can do to start experimenting with it? And I know we're doing it at Mayvin: different people experimenting with it, using it for different things, hearing what's working and what's not, and sharing guidelines like "actually, I found that asking the question this way was a lot better, and if I put in this information it really helped it get to a helpful answer for me."
Sophie Tidman 23:46
Yeah, seeing it as part of a learning journey, absolutely, for both you and the wider organisation.
Sarah Fraser 23:55
So we know you can't ignore AI. It's here; it's already part of our lives and our organisations, and I don't think we quite realise how much AI there is sitting behind some of the tools we use on our computers. But there's real potential in it, if we consider carefully, from an organisational-culture perspective, how we integrate AI tools into the way we work: not just thinking about it as another technological tool to add on, but thinking about it in relationship to our organisations.
Sophie Tidman 24:39
Yeah, and I think it's messy, isn't it, how we're going to get there? I certainly had this idea that, okay, when it comes, when it's integrated into my organisation or presented to me, I'll have to learn how to use it and get good at it then. But actually it's being co-created in the moment. And I think there's something about taking your agency, particularly where you don't feel very knowledgeable about technology or AI, and I'm certainly one of those people, to be a bit clumsy and ask the stupid questions, because they're always the best ones. Absolutely.
Sarah Fraser 25:23
So let's ask AI afterwards what the most helpful and salient points from this podcast recording were, to make sense of our weaving conversation. Thank you very much, Sophie; really good to chew over the developments in AI with you, and more to come from the Research Hub between us.