Episode 20

Women in Tech: Jo Stansfield on Breaking Barriers and Building Trust in AI

Join us as we delve into the crucial topic of trustworthy AI with Jo Stansfield, an inclusive innovation consultant and founder of Inclusioneering.

Jo shares her journey from engineering to focusing on the human dimensions of technology, highlighting the importance of avoiding bias and embracing responsibility in AI systems.

She discusses the alarming implications of algorithmic decision-making, particularly in education and recruitment, where fairness is often compromised.

Jo emphasises the need for organisations to implement proper accountability and transparency measures to ensure equitable outcomes for all.

Throughout the conversation, she also offers insights into the role of diversity in tech and manufacturing, advocating for a more inclusive approach to innovation.

Takeaways:

  • Jo Stansfield emphasises the importance of trustworthy AI to ensure fairness and accountability in decision-making processes.
  • Her experience in tech has driven her passion for inclusive innovation, addressing bias and discrimination in AI systems.
  • AI's rapid integration into various sectors necessitates ethical considerations and responsible usage to avoid reinforcing societal biases.
  • Inclusioneering aims to foster diversity in tech and engineering by helping organisations understand their workforce and culture.
  • The use of biometric data in schools raises significant privacy concerns that need careful consideration and scrutiny.
  • AI applications in recruitment and manufacturing must be critically assessed to prevent perpetuating existing inequalities.

Links and companies mentioned in this episode:

  • Inclusioneering – www.inclusioneering.com
  • ForHumanity
  • BCS, The Chartered Institute for IT
  • Bold as Brass
  • London Interdisciplinary School
  • The Algorithm by Hilke Schellmann
  • Invisible Women by Caroline Criado Perez
  • Coded Bias (documentary)

Transcript
Jo Shilton - Host:

Hello and welcome to Women WithAI, the podcast focusing on the challenges and successes of women in this rapidly evolving sector.

Today, I'm looking forward to speaking to an inclusive innovation consultant who's a member of the British Computer Society (BCS) and a ForHumanity Fellow, and whose presentation I saw at an AI summit in June this year.

Jo Stansfield is the founder and Director of Inclusioneering, a consultancy focused on inclusive innovation.

She has extensive experience of data-led approaches to gaining insight and developing strategies and interventions, from small firms to global enterprises, so she helps organisations of all sizes drive impactful change.

Jo has previously held the role of Director of People, Data and Insights at a leading FTSE 50 technology firm, and has shared evidence and advised government about inclusion in tech. Alongside her business, Jo holds a number of voluntary positions.

She's a Trustee of BCS, The Chartered Institute for IT, co-chair of BCSWomen, a member of the Industry Advisory Board for TechUp Boot Camps, and a board director of the AI ethics and audit charity ForHumanity.

Before all that, Jo began her career as an engineer, developing enterprise software for global industries spanning oil and gas, automotive, aerospace and marine sectors.

Having pivoted her focus from the technical to the human dimensions of engineering, Jo now uses quantitative and qualitative evidence to underpin impactful change programmes.

She holds an MSc in Organisational Business Psychology, with thesis research on lessons for gender and racial diversity within technology, and brings a deep understanding of engineering culture, and the lived experience within it, to her work in support of diversity and inclusion in tech and engineering domains. Jo Stansfield, welcome to Women WithAI.

Jo Stansfield - Guest:

Oh, thank you Jo. It's lovely to be here.

Jo Shilton - Host:

Oh, it's always great to speak to another Jo.

Yeah, it's fab to have you on the show. So can you please start by telling us a little bit about your journey, how you got involved in AI and what sparked your passion for inclusive innovation in tech?

Jo Stansfield - Guest:

Yeah, well, it's a story with lots of different strands that all kind of converged on each other, I don't know, in the last few years.

So I started out in my career making industrial software, so I was a technologist, very much focused on building stuff that helps engineers do their best work. Over time it became more and more clear to me that there aren't so many people like me doing this kind of work.

And actually there are even fewer people from ethnic minority backgrounds and other angles.

So that really became a real driving factor in what I was thinking about at work and some of the things I was trying to do around the engineering work that I was doing to, like, think about how can we make work more fair and more inclusive for more people.

So I retrained; as my bio said, I took a master's degree in my spare time, which, it turns out, means you don't have any spare time if you do that kind of thing. And it was extremely hard work.

I decided to switch my focus solely onto those human dimensions of engineering, and changed my work so I was focused all around the people data, as my bio said as well. But then Covid struck and everything shut down.

And I just remember hearing about how it was impacting students and A levels and the way that all of education was totally disrupted, that kids couldn't go into school. And I had kind of young, young kids of my own at the time.

And, yeah, then they made the decision: right, we're going to grade everyone, not by exams, but by an algorithm, and we'll just fit everyone onto a distribution that looks like the right kind of thing across the country, and we'll weight people by where they are and the kind of school they're at and all of that kind of stuff, and we'll end up with our results. And I was totally horrified. This is an appalling way to determine somebody's grades.

I mean, the distribution might look right, but that gives no room for an individual to show what they can do and excel and to actually put their best into their work and be recognised for what they themselves have actually achieved. So I kind of had this big realisation that, well, more and more of this is beginning to happen, actually.

Algorithms are so involved in a lot of the decisions that are made about people that just aren't fair at all. And they're based on these very lazy assumptions that, oh, if the distribution looks right, then, you know, everything's going to be fine.

But it's not fine. This is determining people's futures.
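
(A quick aside for the technically curious: here's a minimal sketch of the kind of distribution-fitting Jo describes. It's invented for illustration, not the actual Ofqual model; the function, the student names and the historical distribution are all hypothetical.)

```python
# Hypothetical sketch (NOT the real Ofqual algorithm): rank students within
# a school, then map that ranking onto the school's historical grade
# distribution. A student's own exam performance never enters the
# calculation, which is exactly the objection raised above.

def assign_grades(students_ranked, historical_distribution):
    """students_ranked: names ordered best to worst (e.g. a teacher ranking).
    historical_distribution: share of each grade the school got in past years."""
    n = len(students_ranked)
    grades, cumulative, boundaries = {}, 0.0, []
    for grade, share in historical_distribution.items():
        cumulative += share
        boundaries.append((grade, round(cumulative * n)))
    i = 0
    for grade, upto in boundaries:
        while i < upto:
            grades[students_ranked[i]] = grade
            i += 1
    return grades

# A brilliant student at a school that historically earned no A grades can
# never receive one, whatever she would actually have scored:
print(assign_grades(
    ["Amira", "Ben", "Cal", "Dee", "Ed"],
    {"A": 0.0, "B": 0.2, "C": 0.4, "D": 0.4},
))  # {'Amira': 'B', 'Ben': 'C', 'Cal': 'C', 'Dee': 'D', 'Ed': 'D'}
```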

So at about the same time, I was already doing lots of volunteering with the BCS, and a couple of friends in the BCS, the British Computer Society, told me about this new charity they'd just heard about, called ForHumanity, which was building independent audits of AI and algorithmic systems.

And they were really working to address all of these kinds of problems that were coming about from this blanket application of algorithmic decision-making, without necessarily the thought and that human element and interaction around it. So I joined up, got involved, and the rest is history from there.

I learned so much from being part of a community that was thinking not just about the technology of the AI, but about all of the human aspects around it, and looking at the implications for ethics and bias and cybersecurity and privacy and trust in these systems.

So, yeah, that was the defining moment, I think, that really set me on the path. And in a way it was one that made me quite happy, that I could put my human focus around engineering together with my technical background, and actually be able to use the technical understanding I'd got from my previous work along with my new focus on making technology and engineering much more fair.

Jo Shilton - Host:

That's amazing, because, I mean, like you, I listened to all that when they were grading the exams and doing all this, and you're like, well, that doesn't sound very fair. And from having taken exams yourself, it's, what about the individual?

You can't just say, well, those people that live there, or are from that place, are going to do this and the other. And I think the fact that you didn't just sit there and listen to it, you actually got up and did something, is just absolutely amazing.

Jo Stansfield - Guest:

And thank you.

Jo Shilton - Host:

We connected because I'd gone to the AI summit where you spoke.

I know it was a presentation you gave, but I guess that kind of underpins, as you say, a lot of what you do: the importance of avoiding bias and embracing that responsibility.

Jo Stansfield - Guest:

Yeah, absolutely. So I gave a talk that... I can't even remember what I called it now; you reminded me.

But it was all about minimising the risks of AI and really maximising the benefits, so we've actually got AI that we can trust. And I think trust really underpins all kinds of decision-making.

And when it comes to AI, where there aren't necessarily people involved, that becomes even more critical. Because AI is getting used now in almost everything; it's everywhere.

You know, every job that you apply for, AI might be involved. When you go to see your doctor, AI might be involved in the way your appointment gets scheduled, or in some of the ways that the doctors look at results. There are all kinds of applications of AI in every domain across all of life, and it's just exploded into being, really.

And I think along with that, there's all of the considerations about how do we use it responsibly and do we really trust it to do all the things that we're asking it to do.

Now, I feel much more reassured, in a way, in regulated domains like healthcare, where they've got to actually follow certain kinds of rules and procedures.

But in things like recruitment, there's no framework in the same way that is going to make sure the AI is actually operating as it's meant to be. And people have a tendency towards what's called automation bias.

So when a computer makes a decision, people are more likely to actually trust it because, you know, it kind of seems really clever.

It's just going to have, I don't know, some magic under the hood that essentially means it knows how to make that decision better than a person does. So people, in a way, just follow what it says without necessarily checking that what it's saying is actually trustworthy.

So making sure that our AI is working in the ways that we expect it to be working, and that we really can trust it, is absolutely essential.

Jo Shilton - Host:

Because I think that's what I've been learning as well.

Because, you know, to begin with, I was using ChatGPT, and you put stuff in, and you could be really lazy and just be like, all right, great, copy and paste it. And then if you actually read it, you're like, what? Actually, no, that isn't right, that isn't what I meant.

Or that isn't what our company does or how you do it.

And then the more people you speak to, it's, oh, yeah, well, it hallucinates, and it doesn't know what it's talking about, and it's trying to impress you.

Or it's just, because that's the thing, it's not a real living being; it's just predicting something based on the data it's already been fed, and you need the human element for that. But how do we enforce that? I mean, how do we put that into real-world scenarios?

Is it educating people, do you think, about the principles of responsible AI? How do you apply that in a real-world scenario? Or is that the difficulty?

Jo Stansfield - Guest:

It is, yeah. No, it's the real key question.

I think that when we think about AI as just a piece of technology, just like another bit of software, that's very dangerous, because in a way it isn't. It does have this interaction with people, and there's a lot of responsibility that surrounds it.

And actually, if you remember the Post Office Horizon scandal from earlier in the year, I think that was a real key moment, I guess, in understanding just how technology can impact people's lives.

And in this case, when it wasn't even especially clever technology, but the outcomes of the way that technology works, combined with the way that people govern it and manage it and make decisions based on what it puts out there, can have these absolutely life-changing impacts for people.

So it's about building those kinds of structures around it in organisations to make sure that there is proper accountability and responsibility assigned, that people have a right to know how decisions have been made, that they can challenge those, and that a human can actually oversee what's happening. Actually putting in place all of those organisational and social checks and balances and guardrails around it is really important.

But that means we've got to know what we expect it to be doing. And there's lots of dimensions there to be checking on. So there's the bias angle that we talked about.

There's also the privacy and cybersecurity angles. Is it sufficiently transparent? Do people know it's being used? Can they explain how decisions have been made?

Is it valid, accurate and reliable for what we're asking it to do?

I think there are lots of tools out there about which grand claims are made, but without necessarily the scientific backing to say they are valid in that particular scenario.

And I'm particularly thinking about, I know some examples in recruitment tools where more and more people are getting video interviews where essentially an AI will assess their video.

Jo Shilton - Host:

Oh, wow.

Jo Stansfield - Guest:

And there was a news article that caught my attention probably a year or so back, where they put actors into the recruitment process with the same scripts to follow, and they found that the actor who was sitting in front of a bookcase was rated as more intelligent and a better match for the job than the actor who was just sitting in a room that didn't have a bookcase behind them. So we've really got to be careful: does this really do what it is claiming to do?

And there are lots of grand claims out there; lots of AI vendors, I think, are very good at selling all of the great intentions of what their AI does.

And, you know, quite rightly too, you know, they are aiming to solve real, genuine challenges, but without necessarily sufficient consideration or transparency about the risks and the way that those have got to be monitored and controlled. Yeah.

Jo Shilton - Host:

Oh, my God. There's so many things going through my brain now about that.

You're right: in healthcare, where it is regulated, you still don't know if it's been given all the right data, and you still need someone to check it. But yeah, in something like recruitment, the thing is, we've all got that inherent bias anyway.

I actually interviewed for a different role within the organisation I'm with during Covid, so it was all done online.

And to begin with, I was like, well, this is great, because I've got the Internet at my fingertips; whatever they asked me to do, I could just do it. But the first thing that the person interviewing me commented on was the fact that I'd moved my laptop.

I wasn't sitting here in my spare room; I'd moved it down to the kitchen, where there was a white wall. And she said, oh, where are you? It looks like you've just been released from prison or something.

You're just sitting against a white wall.

And I said, well, I did that so it'd be less distracting. If only I had a bookcase! Yeah, that's what I need to get. I mean, I've got a few books there.

And when you said about people trusting computers, all I can think of is the Little Britain sketch: computer says no.

Jo Stansfield - Guest:

Yeah.

Jo Shilton - Host:

It's almost like it's gone the other way. So, well, computer says yes, and people use it, right? Like with the Post Office scandal, people just believing it.

Oh, well, it's data, it's tech, it must know. And it's like, no, it doesn't. Because also it's fallible. It's, you know, it's been built by humans and no one's checking it. It's just going off.

Even the people that build it don't really know how it works sometimes.

Jo Stansfield - Guest:

No, indeed. And in the case of AI, with the current technologies, we can't know. The AI algorithm that's been trained is essentially a black box.

We can't interrogate it and know why it suggested the next word that it suggested. We can make presumptions about it, but we can't directly query it to find out those things.

Jo Shilton - Host:

I mean, it might seem obvious, but for people that don't know, it'd be quite interesting if you could unpack why diversity and equality are so important. But I'm just thinking back to some of the slides that you did in your presentation. It's that bias that AI has got.

So if you're searching for pictures of certain professions... can you expand on that?

Jo Stansfield - Guest:

Oh boy. Well, I usually show a video that was made by the London Interdisciplinary School, which I'm not affiliated with at all.

But I just absolutely love the video, which highlights in a very succinct way just how, you know, image generation can really reinforce some of those human biases.

So it shows this is what a CEO looks like, and it's the white man, and this is what a prisoner looks like, and it's a Latino American looking prisoner, you know, wearing orange overalls. And this is what, you know, your fast food vendor looks like.

And, you know, again, each image is just really showing those stereotyped expectations we have: the high-profile, often highly paid jobs being shown as very much white people, very much men, and the lower-paid, lower-status kinds of roles being more women, more people from ethnic minority backgrounds.

And I think that generative AI is a really fascinating way of looking at what society collectively thinks, because it's not just AI that is making up this problem. It's learned from images and text and understanding that's out there in the actual world.

And it's essentially reproducing what society collectively thinks but doing it at a greater scale.

And, potentially, if you've got that same application being used over and over again in every recruitment interview that's ever made, that will systematically disadvantage whole communities of people.
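
(Another aside: a toy sketch of that dynamic, a system that learns per-group hiring rates from biased historical decisions and then reproduces them in every future decision. The group names and numbers here are invented.)

```python
# Toy illustration: a naive "model" that learns per-group hire rates from
# biased historical decisions. Predicting at those learned rates simply
# bakes yesterday's disparity into tomorrow's automated decisions, at scale.
from collections import Counter

historical_decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20 +
    [("group_b", True)] * 30 + [("group_b", False)] * 70
)

def learned_hire_rates(decisions):
    hires, totals = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += hired  # bool counts as 0 or 1
    return {group: hires[group] / totals[group] for group in totals}

print(learned_hire_rates(historical_decisions))
# {'group_a': 0.8, 'group_b': 0.3}: the disparity in the training data
# becomes the disparity in every decision the system makes from now on.
```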

Jo Shilton - Host:

And it's like, because I was having this conversation with a male friend recently, and he's like, yeah, but the data that's there, that's the true representation, isn't it? I was like, but no, it isn't.

That's what you don't understand. Women, or any sort of minority, haven't been given the chance or the opportunity to fit into that role or to get to that place. So you're actually just keeping everyone out if you're not opening it up to be a level playing field.

Because I also think it's not about, oh, you've got to have this many white people, black people, men, women, however you're splitting it up. Really, it should be the best person for the job.

But unless you've been given the opportunity to find out if you can do that job... and you're never going to get that opportunity if someone's already decided: well, you live here, you're this gender, you're from a working-class background, you're going to do terribly. So you haven't even got the opportunity.

Jo Stansfield - Guest:

Yeah, that's right.

Jo Shilton - Host:

To do it.

Jo Stansfield - Guest:

Yeah.

For so many people, we just don't have the historic data to learn from, and with the things where we're trying to move forwards, it's like the AI will pull us back to how things have always been, rather than helping us move to a more inclusive, understanding, open society.

Jo Shilton - Host:

Because that's what you do. I mean, Inclusioneering, such a great name for a consultancy. I guess this is what you're doing. Are you trying to change that?

I mean, who is trying to change it? Is there somewhere we can all get involved? Tell us a bit more about what work you do.

Jo Stansfield - Guest:

Yeah, so Inclusioneering works with tech, engineering and manufacturing firms, essentially sectors that I'm familiar with from all the work I used to do in industrial software, to help them understand their workforce and culture, and to highlight strategy: where is the best place to focus to really make change that's going to help make the culture more inclusive and give people more equitable opportunities.

So that's very much based around data: understanding who's represented in the organisation, what kinds of outcomes people are having, and how they feel as well. So much more on the survey data and interviews and that kind of thing, but then seeing that through the innovation process.

So how do people actually work within their teams?

What's the experience like for people within a team that's maybe working collaboratively to develop something new, in a research and development or engineering kind of context? And then at the other end of that, what are the outcomes of what they've built?

How equitable is that product? Particularly when it comes to AI, is it really going to give equitable outcomes for all of the different groups and stakeholders or users of that particular bit of tech?

So yeah, I guess it's a different slicing of diversity and inclusion compared to what you might see from others, following the innovation life cycle with it.

Jo Shilton - Host:

That's awesome. We need more people doing this. And just before we started recording, you mentioned you'd been recently to a conference about women in manufacturing.

Jo Stansfield - Guest:

That's right, yeah.

Jo Shilton - Host:

What happened there?

Jo Stansfield - Guest:

No, it was awesome. It was great.

It was organised jointly by the Institute for Manufacturing and the High Value Manufacturing Catapult and some other groups, and it brought together people with an interest in supporting and promoting women working in the manufacturing sector. Much like the tech sector, manufacturing is very male-dominated, and there are lots of opportunities for meaningful work for far more people.

If the workforce was more diverse, actually we'd have more people, more output, more productivity.

So yeah, there was a team that I was really interested to hear from, from the University of Cambridge, who'd been doing some work around AI bias in manufacturing.

And it's a topic that I've been really curious about because I talk a lot about AI bias and I also do quite a lot of work in manufacturing, but it's rare that those things have crossed over.

So the AI applications in manufacturing that you'll often come across are to make processes more efficient, or to increase throughput, or to spot problems, things like this. So how does AI bias come into play in those things? One of them was the one that I've already talked about, around recruitment.

Actually, AI has got big potential to just keep reinforcing the fact that manufacturing is a much more male-dominated industry, making it harder for women to actually get recognised and brought in. But then there's also the safety angle on it, too.

So if the AI has maybe been trained to recognise someone's voice, to follow commands, for example, has it got sufficient training with female voices?

Or if it's watching what's happening in a workplace and doing some video analysis, has it got sufficient training data about women to be able to equally recognise the same kind of things it's looking to spot?

So there's potential there for health and safety hazards to arise where the AI maybe hasn't actually had training that's equal in terms of representation.
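
(A final aside: the check Jo is pointing at is often called a disaggregated evaluation, measuring accuracy separately for each group the system will encounter rather than as one overall number. A minimal sketch, with invented voice-command data and group labels:)

```python
# Minimal sketch of a disaggregated evaluation: per-group accuracy instead
# of a single overall score. A large gap between groups is the kind of
# representation problem, and potential safety hazard, described above.
from collections import defaultdict

def accuracy_by_group(examples):
    """examples: (group, predicted, actual) triples from a held-out test set."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in examples:
        totals[group] += 1
        hits[group] += (predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

# Invented results for a voice-command system trained mostly on male voices:
results = accuracy_by_group([
    ("male_voice", "stop", "stop"), ("male_voice", "start", "start"),
    ("male_voice", "stop", "stop"), ("female_voice", "start", "stop"),
    ("female_voice", "stop", "stop"), ("female_voice", "start", "start"),
])
print(results)  # {'male_voice': 1.0, 'female_voice': 0.666...}
```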

Jo Shilton - Host:

Yeah, that sort of highlights the kind of thing in the book Invisible Women, by Caroline Criado Perez, which I still haven't quite finished reading. Hopefully I will have done by the time this podcast comes out, because it just makes me so mad.

Like how things like PPE aren't even designed for women; a woman has to have a small man's size.

And that is, you know, not just in manufacturing, even like in the police force, the stab vest, whatever, everything, it's all designed for men.

And I just can't get my head around why no one has got in there and said, right, let's actually design it to fit women. Let's make the PPE, let's make the boots, let's make the whatever, fit a woman's body, not just give her a small man's size, which isn't going to be tailored with the right proportions or anything like that. I mean, was that touched upon?

Jo Stansfield - Guest:

It's astonishing.

It's a perpetual topic in manufacturing, actually, and there are some firms that do make PPE for women, but then you end up with problems with the supply chain and the warehouses to keep the supplies in, in that there's just less demand, there being fewer women. So the organisations that come to buy the PPE still might not actually buy, you know, the... oh dear, my words are all going wrong.

The PPE that has been designed for women's bodies: because the availability is less, it's more expensive, they've got to go to a different supplier, or whatever.

And so you end up with all of these, I don't know, barriers is not really the right word because they're not real barriers, but these perceived barriers that make it harder to get it even when it does exist.

But there's a group that I'll give a shout-out to called Bold as Brass, who are doing loads of great advocacy work in that area, doing lots to raise awareness and highlight the inclusive PPE designers that are out there. Absolutely brilliant. And it kind of intersects with other issues that shouldn't be issues in manufacturing: that often there won't be a women's toilet in a facility or on site, because there haven't been women there.

One woman who used to work with me in Inclusioneering talks about one of her first jobs, where they actually built the female toilet when she started working there, because there hadn't been one yet, which I thought was lovely. They actually made it. But in lots of cases, they just have to make do with using the men's toilets.

Or, if there are women's toilets, they're locked or used as store rooms, and the women can't get proper access to them. And then that's coupled together with poorly fitting PPE and overalls that maybe need to be taken off completely to actually use the loo.

And going to the men's loos, you can see that the situation is not necessarily pleasant.

Jo Shilton - Host:

Loos. That's another massive thing. Like the way there's never enough ladies' toilets compared to the men's, just because they've made it 'fair'. Maybe they've got the same floor space. But no, if you're really looking to make it fair, it's not about an equal amount of space.

What does a woman need when, like you said, she might have to take everything off? It's not the same.

Jo Stansfield - Guest:

And I think that was one of the ones in Caroline Criado Perez's book, wasn't it, in Invisible Women: that the length of time each gender needs is different, so having an equal amount of space and number of facilities isn't the most fair thing. That's why we end up with enormous queues of women.

Although I have to say, one thing that is good about working in tech and engineering is that you don't often have to join the queue.

Jo Shilton - Host:

It's like the last time I went to a football match, I was like, oh, my God, there's no queue. This is amazing. Well, I'm gonna put a link again to Caroline's book and also to Bold as Brass and everything else.

But I was just thinking about what you said about privacy and cybersecurity and topics like that, because you gave an example, you mentioned to me before, about how your son's school was using AI for taking lunch orders. Can you expand on your concerns around something like that and its impact on children?

Jo Stansfield - Guest:

Yeah. So this is one that got me really mad a few years ago and I still haven't quite calmed down yet.

It's like when my son started secondary school, they sent a form for parental consent so that they could scan the kids' fingerprints, so that they could then match each kid's lunch order to the kid, and they can get to the front of the queue quicker. And apparently this is being implemented in lots of schools.

And looking at my other son's primary school, I think the reason this has come about is that more and more stuff is moving online, including how kids order their lunch. So he has to place his order at the beginning of the week, online, rather than choosing it just as he goes up to buy it.

But then you end up with problems that there are enormous queues as they try and match lunch to child.

So the tech solution, to what never needed to be a problem in the first place, was: I know, let's do something so we can recognise the kids and match them up quickly and automatically. Why don't we scan their fingerprints so we can match each child to their lunch order?

But our biometric data, like fingerprints, in a way, is our most sensitive data that we should keep the most private. You know, it's a representation of our identity that we cannot change.

It's not like a username and password, where we can just say, fine, if that's been compromised, I can scrap it and start again with another one. Our fingerprint is our fingerprint for life, as is our face scan or anything like this.

Coupled with my level of trust in the cybersecurity that schools might have, I think that's a non-starter. And this is another good point to talk about a responsible AI principle: is this really necessary?

And if it is necessary, is the solution proportionate to what's trying to be achieved?

So like in this case, I would say, well, I mean, it's not even a necessary thing because, you know, you could just let the kids walk up to the counter and choose lunch like they always did. And don't disempower children by having them choose their lunch remotely online without being able to see the food.

But that bit of the rant aside, second, is it proportionate to actually take their most sensitive data to speed up a lunch queue? You know, again, I would say no, that seems highly disproportionate.

You know, I'm happy to use biometric data for, you know, passports or things that are going through the utmost rigorous processes, but you know, for application in a setting like a school, it seems totally inappropriate to me. And it then kind of raises the question of, well, what else might the school do with that data?

Like, is that data only ever going to stay with the school, even if it's not a cybersecurity issue? Say there was a break into the school or some crime committed on the school site? Well, they've got a whole database of children's fingerprints there.

You know, could the police request it? I mean, this is obviously complete conjecture, but what's going to happen to those fingerprints?

You know, does that become a resource that can get passed on to somebody else? Or is it only ever going to stay with the school?

Jo Shilton - Host:

Like, what, she always had tomato soup, so all those cans of soup that have been stolen, it must be her? Or, you know, he only had sausage rolls. Oh, yeah.

Because, I mean, I know you're involved in ethics and everything. Does anything excite you about the future of AI?

Jo Stansfield - Guest:

Oh, definitely, yeah. No, I think AI is absolutely amazing. And I, you know, I make use of AI.

I use AI in the talk that you saw to, you know, generate some of the images for the concepts I wanted to illustrate. And I use it to help me, you know, do the things I find hard, like writing. So, you know, it's really, really helpful.

But what's important is that we make sure we're using it responsibly and, I guess, conscious of the risks and the problems that we need to be prepared for, to counter those things. I think the things I'm most excited about for AI, to be honest, are from my former engineering domain.

So all of the possible applications in industry to help make things more efficient, more safe and actually, I think there's going to be a huge role in transforming industry to be much more sustainable, much better environmental footprint as a result of being able to do all of those things. So that is where I'm most interested, actually, to see how AI evolves for those.

Yeah, those real-world applications, I think, can make huge amounts of difference. So as an example, the cement industry: cement and concrete make up huge amounts of the world.

The built environment is obviously largely cement and concrete, and globally that accounts for 8% of the world's CO2 emissions. Of all the CO2 in the world, 8% comes just from those industries.

I think digitisation, automation and AI are going to play a huge role in the transformation those industries are going through now, to become far more sustainable and have far, far less of that environmental footprint.

Jo Shilton - Host:

So, if you've got advice for everyone that's listening, how do you keep on top of all that?

What advice would you give to women, or to anyone, aspiring to learn about safety and responsibility in AI?

Jo Stansfield - Guest:

So there's a few things I recommend. One is a book that's been on my desk for the last week, so I can even show it to you.

So this is called The Algorithm: How AI Can Steal Your Future and Hijack Your Career, by Hilke Schellmann, which is really, really excellent.

It's a fantastic expose of the kind of way that AI is being used in recruitment and hiring, and she gives quite a good balance of what is working really well, but then also highlights some of the problems that come about from it as well. So I found that book absolutely fascinating and yeah, definitely recommend it.

A movie that I suspect a lot of your podcast audience will already have heard about is called Coded Bias. It's a really great movie that highlights all kinds of ways that AI can be biased. I won't say too much, but it's very enlightening.

And then the other thing I'd recommend is ForHumanity itself, which is a community that anyone can join.

You don't need to know anything about AI coming into it. I didn't.

Everything I've learned has really been through the work that I've done with ForHumanity. And ForHumanity is building independent audits of AI systems.

So it's thinking about all of those things about the system, but also about the governance around the system, that needs to be in place to really make sure that it's trustworthy and responsible. And there's a learning and training centre.

You can learn about algorithm ethics or risk management, or you can learn about some of the laws that are coming, like the EU AI Act, and also how GDPR and other laws like that apply. So depending on what depth you want to get into, there's a whole spectrum of different things that you can get involved with.

Jo Shilton - Host:

Oh, that's brilliant. I'm going to put links to all of those in the notes. So finally, Jo, where can our listeners find you?

How can they stay connected with all the work you're doing?

Jo Stansfield - Guest:

Yeah, please stay connected. You can find me on LinkedIn. My name is Jo Stansfield, and my profile name is Jo Stansfield too, so it should be easy to find.

And you can find Inclusioneering's website as well, and there's contact forms that you can get in touch with me through there if you prefer not to use LinkedIn. So www.inclusioneering.com is the place to find me on the web.

Jo Shilton - Host:

Brilliant. Thank you. Well, it's been an absolute pleasure to speak to you. We could probably speak all afternoon about this.

There's so many things I want to talk about, so I'll have to have you back on, definitely. Jo Stansfield, thank you for coming on Women WithAI.

Jo Stansfield - Guest:

Oh, thank you very much, Jo Shilton.

About the Podcast

Women WithAI™
How is AI impacting women in the workplace, and how can it be used for good?

About your host

Joanna (Jo) Shilton

As the host of 'Women With AI', Jo provides a platform for women to share their stories, insights, and expertise while also engaging listeners in conversations about the impact of AI on gender equality and representation.

With a genuine curiosity for the possibilities of AI, Jo invites listeners to join her on a journey of exploration and discovery as, together, they navigate the complex landscape of artificial intelligence and celebrate the contributions of women in shaping its future.