Episode 7
Women Leading AI Innovation with Catherine Breslin
Welcome to Episode 7 of Women WithAI, where we are thrilled to have Dr. Catherine Breslin, a leading AI consultant and machine learning scientist, join us to unpack the complexities of AI technologies and the gender disparities in the tech world.
In this conversation, we'll examine how biases enter our AI tools, from voice recognition to language models.
Dr. Breslin will share insights from her remarkable career, shedding light on her hands-on experiences and leadership roles. She'll discuss everything from algorithmic biases to the potential of multi-modal AI models. She'll also highlight the critical need for diversity in tech, emphasising strategies to encourage more women to enter and excel in this field.
By the end of our discussion, we'll better understand the significant role human oversight plays in shaping ethical AI and the importance of challenging norms to make technology inclusive and equitable for everyone. Stay with us.
Transcript
Hello and welcome to Women WithAI, a podcast focusing on the challenges
Speaker:and successes of women in this rapidly evolving sector. Today, I'm thrilled
Speaker:to welcome Doctor Catherine Breslin, who is an AI consultant and machine learning
Speaker:scientist with over two decades of experience as an AI scientist building
Speaker:voice and language AI models. Catherine is the founder and director of
Speaker:Kingfisher Labs, where she works with business leaders to bring cutting-edge technologies to
Speaker:market. Her previous roles include AI scientist and manager at the
Speaker:University of Cambridge, plus organisations such as Toshiba Research,
Speaker:Amazon Alexa and Cobalt Speech. She's also an AI
Speaker:advisor and coach and has been named one of Nesta's twelve women shaping AI
Speaker:as an expert in machine learning. She was on the 2021 list of
Speaker:Computer Weekly's 50 most influential women in UK tech.
Speaker:Catherine Breslin, welcome to Women WithAI.
Speaker:Thanks Jo, thanks for having me. It's lovely to have you here.
Speaker:So I'm going to start off and ask you how you got into doing
Speaker:what you're doing. And for those that don't know, when it comes to
Speaker:AI and machine learning, what is it? What's the difference? What is machine
Speaker:learning? Fantastic. I'll start with maybe
Speaker:talking a little bit about what AI is and what the differences are. You hear a
Speaker:lot of terms being thrown about right now, so maybe we can
Speaker:dive straight into that and start talking about those. So AI,
Speaker:obviously a term that's been in the media loads lately, and I think you'd have
Speaker:to be hiding under a rock not to have read something about the technology
Speaker:lately. And AI is a term that's been around a long time. It's gone in
Speaker:and out of fashion as the technology has evolved and then lived up to
Speaker:its promise or not lived up to its promise. And we're going through a phase
Speaker:right now where AI technology is really making leaps
Speaker:and bounds in performance. And so people are really excited about the
Speaker:potential. So AI is a term which doesn't really
Speaker:have a great, crisp definition, and people
Speaker:use it to mean a lot of different things. And so in general, maybe we
Speaker:can think about it as technology that is
Speaker:trying to sort of emulate some sort of human decision process,
Speaker:human decision making. So we're trying to automate things
Speaker:which people can do which require a little bit more intelligence than just
Speaker:following a list of instructions or a list of rules. So if you've done
Speaker:any computer programming in the past, you'll know that what you do is you sit
Speaker:there and you very carefully write out a list of rules, instructions for
Speaker:computers to follow, to do something and so you can get quite far with that
Speaker:sort of technology. You can make computers do quite a lot of things
Speaker:by sitting there and writing down the rules for how to do it. But
Speaker:there are some things that you really just can't write down the rules for. So
Speaker:something like understanding speech, everyone's speech is different,
Speaker:and everyone says different things. And so just writing down the
Speaker:rules about how to understand what someone is saying, you know, it's just
Speaker:impossible. There's so much context and variation that goes
Speaker:into it that it would be an impossible task to write it down. And so
Speaker:that's where we start to talk about machine learning. Machine learning is
Speaker:a subset of AI, a group of algorithms that really
Speaker:learn what to do from looking at data, from looking at examples.
Speaker:So in that example of speech recognition and understanding
Speaker:speech, we show the machine lots of examples of people
Speaker:speaking, and we also show them the words that that person said, so they can
Speaker:learn the patterns, learn the correlations, and understand, and
Speaker:maybe learn a bit more about how people speak so that they can then go
Speaker:on to transcribe other speech. And this idea of learning from
Speaker:data is machine learning, and it's really what's been
Speaker:driving this past decade of progress in AI, I think. And when
Speaker:you hear about AI now, a lot of that is down to
Speaker:improvements and changes in machine learning.
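To make that learn-from-examples idea concrete, here is a minimal sketch in Python using scikit-learn. It is a toy text classifier rather than a real speech recogniser, and the example phrases and labels are invented for illustration; the point is simply that the model learns patterns from labelled examples instead of hand-written rules.

```python
# A toy illustration of "learning from examples" rather than hand-written rules.
# Not a speech recogniser: a tiny text classifier standing in for the same idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled examples: what was said (input) and what it meant (label).
texts = [
    "turn on the lights", "switch the lamp on",
    "what's the weather like", "will it rain today",
]
labels = ["lights", "lights", "weather", "weather"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)  # learn correlations between words and labels

print(model.predict(["is it going to rain tomorrow"]))  # expected: ['weather']
```

A real speech or language system works on the same principle, just with vastly more data and much larger models.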
Speaker:Okay, great, that explains it well. Because I
Speaker:guess as a human, you see how someone says something, you can
Speaker:see the look on their face or the way they say it. So
Speaker:can machines learn that as well? What about sarcasm?
Speaker:Of course. Yeah. When you're talking to somebody, you can see a lot more. You
Speaker:can see, like you say, their face. You have a shared conversation
Speaker:history. You have some context and some cultural knowledge in there as well.
Speaker:And whether machines can learn all of that at the
Speaker:moment, I think they can't right now. I think
Speaker:the amount of data it would take to sort of understand all of that cultural
Speaker:knowledge and all of the context and nuance that goes into
Speaker:speech and language. We're not really at the point that computers can get all
Speaker:of that just yet, but we have made some strides in the past few
Speaker:years in being able to build these
Speaker:systems from bigger and bigger sets of data. And those bigger and bigger sets of
Speaker:data do have a lot more sort of context and nuance in them.
Speaker:So we're making steps in that direction. We've still got some way to go.
Speaker:I guess it's like being a child, isn't it? As a young child, you're not
Speaker:going to get all the nuances or understand if someone isn't really
Speaker:meaning what they say. So I guess maybe we're at the beginning.
Speaker:Although children are very good at
Speaker:understanding speech. Yes. I've got two nieces, and it's
Speaker:amazing. You know, they're seven and ten at the moment, and they
Speaker:definitely pick things up now, and they're getting better and better at what things
Speaker:mean. But how did we get here, do you think?
Speaker:Because OpenAI hasn't been an overnight success, as you say. But is
Speaker:it that sort of, that exponential, that curve? Do you think it will suddenly start
Speaker:to move a lot quicker? I think what has
Speaker:happened in the past, maybe 15 years or so
Speaker:now is that we've seen a few things that have come together to
Speaker:make this technology more able to build the
Speaker:capability that we're seeing today. So one of the first things that has
Speaker:happened is that the cost of computation, the
Speaker:amount of computation you can do, the processing capability of
Speaker:computer chips, has got much better in the past decade or so, which allows
Speaker:us to do a lot more on those chips. We also have the
Speaker:Internet. The Internet has provided a place for people
Speaker:to write a lot of text data, which is readable by
Speaker:computers. So 10 or 20 years ago, we just didn't have the
Speaker:sheer amount of writing and audio and video on the Internet as we
Speaker:do now. So the amount of data available to companies
Speaker:to build their models from has got a lot larger in the past few
Speaker:years. And, of course, there is lots in the press right now about sort
Speaker:of copyright and consent of using data. But a lot
Speaker:of the large companies are using quite a lot of data from the web to
Speaker:train their models, and that gives us much, much larger data sets.
Speaker:And that's been another thing that has fed into the models, being more
Speaker:capable as they are learning from more and more data, they will
Speaker:understand much more of the nuance and learn many more,
Speaker:much more context than when you were training these models
Speaker:on small data sets. And I think the other thing that's happened
Speaker:as well is that we've had some improvements in the underlying algorithms that
Speaker:we use. So if you're following the field, you might have heard of sort of
Speaker:deep learning, transformers and diffusion models. Some of these techniques
Speaker:have come along more recently, and they're able to model some of the
Speaker:language and speech and audio better than our previous generation of
Speaker:models. So these three things have really come together: the amount of data, the amount of
Speaker:computation, the Internet holding it all together, and the improved
Speaker:algorithms. That's really driven what we've seen in the progress in the
Speaker:past, probably 15 years or so. Because it
Speaker:is all about the data, isn't it, as you say? And I suppose it's learning
Speaker:from the data that's already there. So how can we
Speaker:go about making sure that it's the right data? Or is there
Speaker:bias in the data? And in AI voice technology, for
Speaker:example, are there any biases that you've come across?
Speaker:Exactly. And bias is a big topic right now as well, I think, because
Speaker:we're starting to see that if you do train
Speaker:models on some of these larger data sets,
Speaker:machine learning models, because they're learning patterns in data,
Speaker:learn whatever is there. They're not making conscious
Speaker:decisions about whether something is biased or not and whether they should
Speaker:use it, like humans sometimes do. They are just learning. Everything
Speaker:is equal in that data set. And as you can imagine, quite a lot
Speaker:of the writing, quite a lot of the speech on the Internet does exhibit certain
Speaker:kinds of biases, and those biases then do just
Speaker:sort of transfer straight through into our machine learning models that we are building.
Speaker:And companies are putting a lot of effort into mitigating some of these biases now.
Speaker:But I think some of the ways that you see it play out are with,
Speaker:especially when we're thinking about voice and language technology, we see
Speaker:a lot of different accents in the world.
Speaker:So everybody, even here in the UK, we have so many different
Speaker:accents, but some of those accents are
Speaker:recognized better by computers than others. So this sort of
Speaker:uneven distribution of performance across different accents
Speaker:is one way we see this. We see
Speaker:this technology being developed much more for languages
Speaker:like English, Spanish, Mandarin, for which there is lots of data. And
Speaker:of course, there are something like six and a half thousand languages in the world,
Speaker:and very few of those have enough written data
Speaker:to be able to build the same level of model from. So we
Speaker:see an uneven distribution in the languages that we're
Speaker:covering as well. So for something like English, there's
Speaker:very much more capable technology than for some of these,
Speaker:what we call sort of low-resource languages. So we see
Speaker:different aspects like this in voice technology,
Speaker:moving on to language technology as well. And people talk about the
Speaker:biases. If you've played with any of these language models, say
Speaker:ChatGPT or Claude, or any of
Speaker:these models, they are trained on data which exhibits the
Speaker:views of the Internet, which is also very
Speaker:western biased, and exhibits a lot of racial and gender biases
Speaker:as well. And those can carry through into the models when you
Speaker:start to train on them. So I think there are different ways,
Speaker:in these different places, that some of that bias comes
Speaker:through to the technology, and different challenges. And that, I suppose,
Speaker:leads me on to thinking, you know, I mean, I know other voices are available
Speaker:and you can choose the voice of your AI, but most AI voices, well,
Speaker:to me, anyway, you know, including Alexa and Siri, are female voices.
Speaker:Why do you think that is? Yeah, this is another
Speaker:way that I think we see some of society's bias play out
Speaker:in technology. So when these systems were
Speaker:built, probably, I don't know, 10 or 15 years ago, when Alexa, Siri,
Speaker:a lot of these voice assistants were built, it was much more
Speaker:difficult to build a synthetic voice. It took a lot of effort
Speaker:to record audio from one
Speaker:person and convert that into a synthetic version of their
Speaker:voice. So it was very time intensive, very expensive to build multiple
Speaker:voices. So a lot of these
Speaker:organizations, a lot of these projects started with the idea of offering a
Speaker:diversity of voices to people, but realized that practically
Speaker:it was very difficult to build them, and so they sort of settled on
Speaker:one voice. And there is a lot of
Speaker:evidence in the literature that people tend to prefer female
Speaker:voices as well. And so you see this reinforcing cycle where
Speaker:organizations will choose voices that people prefer, which reflect the biases
Speaker:in society and sort of embed and entrench those. And
Speaker:so the cycle sort of continues. Now, I think we're in a
Speaker:situation where, with synthetic voices, it's
Speaker:much easier to make them in a variety of voices, and companies are starting now
Speaker:to offer a lot more variety in the voices that they do. But despite
Speaker:this, people still associate voice assistants with
Speaker:female voices. I mean, it could be. I mean, I've
Speaker:done a bit of reading around the subject, and is it because female voices are
Speaker:maybe less threatening, because sometimes it's
Speaker:easier to sort of have, I don't know, a female in the role
Speaker:of assistant. I mean, I don't know, those are the biases that you don't want to
Speaker:encourage, aren't they? And a friend
Speaker:said to me the other day, is it due to Star Trek?
Speaker:Is it because in Star Trek, when they had the computer, it was a female voice,
Speaker:and it sort of all started from there? Or then you look at
Speaker:films and TV. But generally,
Speaker:robots tend to look female. But is that because
Speaker:they're being designed by males, or are they being designed by
Speaker:females? Or is it because they're just less scary than, you know, a
Speaker:Terminator, like the male version of the robot?
Speaker:I'm not sure. Like, how do we make them gender neutral? Or should we?
Speaker:Yes, an interesting question. People have tried; I have seen sort of a
Speaker:gender neutral voice that people have developed. But one of
Speaker:the things is that no matter how neutral you try and make a voice, you
Speaker:know, I still found myself making assumptions about the person
Speaker:behind that voice. So there is
Speaker:really no sort of neutral voice. Every voice has some sort of
Speaker:cultural association, you know, that comes with it.
Speaker:And therefore I think my view is that we want to offer variety
Speaker:rather than, you know, try and build a neutral voice,
Speaker:because there is no such thing as a neutral voice. Like, there's no such thing
Speaker:as a neutral accent. Although a lot of people feel like, I don't have an
Speaker:accent. Yeah, I was going to say, I don't have an accent.
Speaker:Yeah, but it depends where you are. But I found as
Speaker:well, my Alexa, I hope
Speaker:she's not listening. She might start speaking. But I have it slightly
Speaker:speeded up. And I know that's something that when other people come around to my
Speaker:house and they ask something and say, what's wrong with her? And I said, oh,
Speaker:I just have it speeded up. You know, if I need to know what
Speaker:the weather's like in the morning, I haven't got time. I need to know straight
Speaker:away. Tell it to me quickly. And I've tried
Speaker:different, you know, having the different voices, but I quite liked it, you know, playing
Speaker:with it when I had Siri to begin with, and I said, oh, I quite
Speaker:like watching Neighbours, maybe I'll have the Australian voice. And I
Speaker:found that I preferred the Australian woman to the Australian man because
Speaker:it was easier to understand. But, yeah, I've gone back to the British version
Speaker:now, but talking about bias and
Speaker:women in industry, I know that you're keen to get more women and girls
Speaker:interested in STEM, and you co-founded the Cambridge branch of the British
Speaker:Science Association and Robogals. So can you tell our audience a little
Speaker:bit about that, please, and how we can get more girls
Speaker:interested in STEM? I mean, we do
Speaker:have a big gender problem in technology, and maybe
Speaker:the same applies for racial bias and other
Speaker:minorities as well. But there are really a lot of statistics out there
Speaker:that show us that women are not choosing to work in
Speaker:technology, and if they are, they are not sort of rising
Speaker:up the ladder and making it into senior positions and being some of those
Speaker:leaders in the field. So maybe here in the UK, we know
Speaker:that it's quite difficult to get an exact figure, but around about 20%
Speaker:of the AI workforce is women. And we also know
Speaker:that when it comes to
Speaker:funding of startups, and a lot of startups are AI focused at the
Speaker:moment, all-women teams, the last figures I
Speaker:saw, got less than 2% of the venture capital funding,
Speaker:compared to all-male teams who got something like 80% of the
Speaker:funding, and mixed teams got the rest. And then we know there's a gender pay
Speaker:gap. We know that women don't make it into leadership positions at the same rate
Speaker:as men do. So all of these show us that there really is a problem
Speaker:with women in technology and women in AI. And there are
Speaker:two ways, I think, to think about this. So the first one
Speaker:is, you know, encouraging young women and girls into
Speaker:the field in the first place, and the second is sort of
Speaker:promoting and appreciating the women that are already there and providing career
Speaker:paths and bringing those up into senior positions. I think both are
Speaker:important, getting girls
Speaker:interested in science and technology. And things are changing, I think. As
Speaker:you start to see the impact of technology in society a bit more,
Speaker:a few more girls, who use a lot of apps, use a
Speaker:lot of social media, sort of start to understand the impact and have a little
Speaker:bit more interest perhaps in the computer science behind that, but still
Speaker:not choosing to go on and study sort of computer science and maths and technology
Speaker:and engineering subjects at university. So I think
Speaker:encouraging women there matters, and showing them from a young
Speaker:age, because I think people start early to make their decisions about which
Speaker:subjects boys and girls are good at. So really going into primary schools
Speaker:and early secondary school and showing them that this is a valid career for people
Speaker:to take, and that there are women already in this field to look up to.
Speaker:But I do think the second factor matters too,
Speaker:sort of encouraging the women that are already there, because
Speaker:women do not tend to make it into leadership
Speaker:positions, in general but also in technology, at the same
Speaker:rate as men. And that's because
Speaker:of various factors. But I think sort of maternity and
Speaker:motherhood is one important place where women with young kids
Speaker:really need flexibility and employers do not offer it to them
Speaker:in the way that they need. And so that forces a lot of women to
Speaker:sort of take a step back or to drop out of the workforce for a
Speaker:little while and really, you know, holds them back. So flexibility, I
Speaker:think, is really important here. And also there's then a lot of
Speaker:bias in what a leader looks like and whether women
Speaker:can be promoted into those positions. And, you
Speaker:know, when you get higher up and there are fewer and fewer women, it
Speaker:becomes harder and harder to get promoted up. And so I think
Speaker:paying attention and really noticing
Speaker:the pay gaps that you have in your company and the way that you are
Speaker:treating your women leaders and the way that you are bringing up their careers, I
Speaker:think all of that is really important in equalising
Speaker:the field. Yeah, definitely. So, yeah, it's not just getting people
Speaker:interested in it, it's getting people coding, it's keeping them doing
Speaker:it. Yeah, because I think with, like,
Speaker:young girls and women, it's great to get them into the field. I think it's
Speaker:a great career, but they are not the ones with the power to change it
Speaker:in the long run. You need the leaders to be the ones
Speaker:changing the field. And so getting more women into leadership positions
Speaker:is really the only way. Exactly. Getting them in there so they can make the
Speaker:change. And it's flexible working. I mean, that's the
Speaker:hope, isn't it? I mean, with AI, there's lots of talk about
Speaker:these tools and how they can make everything easier. So we need to really
Speaker:start doing that and make sure, if women are leading it, that we're
Speaker:using it to help people stay in the industry.
Speaker:Because do you build any products? I've seen you
Speaker:talking about large language models and advising people how to use them in business,
Speaker:and the risks and opportunities. Have you been involved in actually kind of
Speaker:building those products?
Speaker:So I have. Earlier in my
Speaker:career, I was a sort of hands-on computer programmer, sitting down,
Speaker:writing the code for some of these things, building the models that went into
Speaker:some of these products, and doing some of the research that has
Speaker:been moving the field forward.
Speaker:That was my early to mid career. And
Speaker:a few years ago, I moved into more of a management position,
Speaker:so I started managing people. I moved a little away from hands-on coding
Speaker:more onto some of the strategic and leadership
Speaker:thinking around AI, what we should be building. Now I
Speaker:work with companies who are building technology, and
Speaker:I'm hoping to form the bridge between what's going
Speaker:on in the research world, how the technology is developing,
Speaker:keeping abreast of all of that, and figuring out how it can be used and
Speaker:incorporated by companies in their work. Okay,
Speaker:great. And I've seen you do a lot of sort of best practice and
Speaker:that kind of thing. What does best practice in AI look like? What does that
Speaker:mean? Oh, I think we would need a whole other
Speaker:podcast for best practice in AI. I think we're
Speaker:still really trying to figure this out a little bit because AI is such a
Speaker:new field and we're kind of making up what the best
Speaker:practices are at the moment. A lot of people think that building
Speaker:an AI product is about building the thing and getting it out into the world,
Speaker:but really that's only a part of the job. We have
Speaker:to then look after that product and improve it and make it better over
Speaker:time. And that ends up being a much bigger part
Speaker:of the job, I think, than people realize when they start out. So putting
Speaker:in place ways
Speaker:to monitor what your product's doing, that's really
Speaker:important to know how well it's doing in the real world and
Speaker:not to put something out there which kind of works in the lab, but doesn't
Speaker:really work in the real world. We talked about bias already,
Speaker:so looking out for biases, trying to mitigate them at
Speaker:the early stages. We didn't touch on another part of
Speaker:bias, which I think is the sorts of decisions about what you build: the
Speaker:products that you are building, who they're aimed at, and whether
Speaker:they actually fulfill the needs that people have. So
Speaker:choosing what you're going to build, I think very thoughtfully, is also a big part
Speaker:of this. And then I think a really
Speaker:big part of best practice is just testing, properly testing and
Speaker:evaluating. Such a big part
Speaker:of building an AI product is making sure that it works like you think it
Speaker:does, being able to justify it and being able to know
Speaker:when it works and also when it doesn't work, so that you can mitigate some
Speaker:of that and you can have a person in the loop to
Speaker:work with the system when it's not working, or to understand
Speaker:its strengths and weaknesses as well. I guess that comes on to sort of
Speaker:like regulations and stuff. Like is anyone regulating it or is it very much up
Speaker:to each organization or whoever's deciding to build it? Is there
Speaker:any kind of regulation out there? Is there regulation?
Speaker:Yes, yes, regulation. Lots of the topics around
Speaker:AI are really big right now, and regulation is another one.
Speaker:Maybe a month ago, the EU signed into being the EU
Speaker:AI Act, which I think is the first real
Speaker:regulation of AI technology. It splits
Speaker:AI technology into different risk categories, so there are
Speaker:unacceptable-risk, high-risk, medium-risk and low-risk categories,
Speaker:and there are different obligations at the different levels of that technology.
Speaker:So unacceptable-risk technology is banned,
Speaker:whereas high-risk technology has more obligations
Speaker:associated with transparency and reporting. And then
Speaker:low-risk technology has a much lower
Speaker:bar in the regulation. So that's one example of regulation which has
Speaker:come into being. Other countries are
Speaker:thinking about regulations and of course there are
Speaker:associated regulations that are already in place. So things like
Speaker:the EU's GDPR and
Speaker:some of the health regulation that exists
Speaker:in Europe and the US for health devices and health data,
Speaker:some of the financial regulation, if you're working in the financial sector.
Speaker:So there's sector-specific regulation as well, which does touch
Speaker:on AI technology. And it's always people,
Speaker:isn't it, that are checking that and doing the regulating? So
Speaker:AI hasn't quite taken over the world and deemed us useless yet, still.
Speaker:Well, it needs us because we're building it. It's our product, isn't it? So we've
Speaker:got to keep an eye on it. And how do you use AI, like in
Speaker:your personal life? How do I use AI in my
Speaker:personal life? That's a good question.
Speaker:I tend to use it at work. You're like
Speaker:a chef that cooks and goes home and doesn't want to do
Speaker:any of the cooking.
Speaker:I think we have AI woven into our everyday lives in
Speaker:ways that are not always noticeable. So
Speaker:things like I'm a photographer as well,
Speaker:so I have a large library of photos. You can search photos now
Speaker:for, you know, objects or people
Speaker:and we all go on our phone and hopefully, you know, you can
Speaker:see the phone has categorized all the photos that have a picture of me
Speaker:or a picture of my family in. So
Speaker:those things are really helpful. You can, you can search within your image library
Speaker:to find a particular picture. If I know I took a picture of, I don't
Speaker:know, a rainbow five years ago and I want to find that picture, I
Speaker:can search for rainbow and it will bring up the pictures in my library. So
Speaker:that's one way that AI is used that, you know, we sort of maybe start to
Speaker:take for granted now, where we have, behind the
Speaker:scenes, companies building models to understand what is in images, to help
Speaker:us look through them. And, you know, this is really important if we
Speaker:have smartphones that take lots of pictures and we start to take many, many more
Speaker:pictures than we used to have and they're just sitting there on our phone,
Speaker:difficult to work through. So I think that's one
Speaker:way that people see AI in their daily lives.
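As a concrete illustration of that kind of behind-the-scenes photo search, here is a minimal sketch using an open CLIP model from the Hugging Face transformers library. The file names and the query text are invented for illustration; a real photo app would use its own models and indexing, but the idea of scoring images against a text query is the same.

```python
# A minimal sketch of searching photos by text, in the spirit of "find my rainbow photo".
# Uses an open CLIP model to score how well each image matches a text query.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical photo library and query.
photo_paths = ["beach.jpg", "rainbow_over_hills.jpg", "birthday_cake.jpg"]
images = [Image.open(p) for p in photo_paths]

inputs = processor(text=["a photo of a rainbow"], images=images,
                   return_tensors="pt", padding=True)
scores = model(**inputs).logits_per_text[0]  # one similarity score per image

best = scores.argmax().item()
print("Best match:", photo_paths[best])
```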
Speaker:A lot of people, me included, use ChatGPT for various
Speaker:different things. It's a great helper and brainstormer, and it can rephrase
Speaker:things sometimes, things like that. I know
Speaker:there's a lot of people in the AI world now looking
Speaker:at some of these new code tools,
Speaker:tools that will help you write computer code,
Speaker:which are proving to be quite useful. I think if you know what you're doing and if
Speaker:you know how to use them, you can be much quicker
Speaker:at writing and brainstorming and getting your computer code written. And that can be
Speaker:really helpful for just day to day work
Speaker:if you're doing that. So I think there are lots of different ways. It's not like
Speaker:one sort of big thing that you're using,
Speaker:but just little things in your life where AI crops up. Yeah.
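For the ChatGPT-style helper and rephraser mentioned above, here is a minimal sketch using the OpenAI Python client. The model name is an assumption for illustration; use whichever model your account has access to, and set your API key first.

```python
# A minimal sketch of using a large language model as a rephrasing helper.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Rephrase this more formally: "
                                    "'we've got loads more data than we used to'"},
    ],
)
print(response.choices[0].message.content)
```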
Speaker:Exciting. And I heard something the other
Speaker:day about the sort of picture learning thing. I don't know if you
Speaker:know if this is true or not, but you know, when you
Speaker:have the CAPTCHA thing, so you go on a website and you've put in
Speaker:your data or it's asking for something and it checks, you know, are you a
Speaker:robot? And so you have to choose, you know, how many of the six or
Speaker:eight squares or however many squares, nine squares have got a picture of a traffic
Speaker:signal or a bus or a cat or a dog or something like that? Is
Speaker:that helping the machine learning? Or is that just,
Speaker:I don't know, something that was
Speaker:invented and we all do? Or does that data somehow get fed
Speaker:into AI? Yeah. So
Speaker:at the beginning we talked about how to teach a machine
Speaker:to understand speech. And I said that you had audio with the
Speaker:transcription, so you know what was said in the
Speaker:audio. To build a speech system, if you're building a system which is going to
Speaker:know something about images, you need the same sort of idea for images. So you
Speaker:need images and you need associated text or
Speaker:labels or something to tell you what's in the
Speaker:image. And so usually what we have
Speaker:is people who annotate those images, annotate the audio,
Speaker:maybe annotate the text data, whatever it is that you're looking at, with the
Speaker:correct answer, what's actually in that image or piece
Speaker:of audio or what's in that text file.
Speaker:And so to get those labels
Speaker:for different images, people have very creative ways to do it. And so
Speaker:CAPTCHAs, you know, clicking which images have a bridge in or which
Speaker:images have a traffic light in, are one way to get some
Speaker:of those human-verified labels for images.
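To show what those human-verified labels might look like once collected, here is a toy sketch. The tile names, label values and agreement threshold are invented for illustration and are not how any real CAPTCHA provider works; the point is that several people's answers are combined into a label that can then be used as training data.

```python
# A toy sketch of turning several people's CAPTCHA-style clicks into training labels.
# Each person says whether a tile contains a traffic light; we keep tiles where the
# humans mostly agree, and those (image, label) pairs become training data.
from collections import Counter

votes = {
    "tile_01.jpg": ["traffic_light", "traffic_light", "traffic_light"],
    "tile_02.jpg": ["no_traffic_light", "traffic_light", "no_traffic_light"],
    "tile_03.jpg": ["traffic_light", "no_traffic_light", "no_traffic_light"],
}

training_labels = {}
for image, answers in votes.items():
    label, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= 2 / 3:  # assumed agreement threshold
        training_labels[image] = label

print(training_labels)
# e.g. {'tile_01.jpg': 'traffic_light', 'tile_02.jpg': 'no_traffic_light', ...}
```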
Speaker:Cool. And how do you see AI?
Speaker:Sort of, what are you excited about over the next, you know, like
Speaker:two, three, four, five years? What do you see as the sort of big advancements, or what
Speaker:would you like to see happening?
Speaker:So what would I like to see? I think we're at a really interesting point
Speaker:right now because we had sort of
Speaker:ChatGPT launched two years ago. This was a big turning point
Speaker:in public awareness and thinking about AI technology and what it could
Speaker:and couldn't do. And now we start to see similar
Speaker:models that are able to deal with not just text, but images. I
Speaker:mean, ChatGPT nowadays will deal with images as well if you pay for the
Speaker:subscription, and audio and video. We're starting to see all these things come together,
Speaker:which I think is really interesting because that's going to allow
Speaker:many different capabilities. I think that's one of the useful
Speaker:things people find about ChatGPT, its ability to use images as well as
Speaker:text now. So these sorts of multimodal models, I
Speaker:think are really interesting. And we're in a world
Speaker:where open source technology is also
Speaker:building many of these things. I think open source is really interesting because
Speaker:then we can build these models and
Speaker:lots of people can try using this. It's very expensive to build the model in
Speaker:the first place, to build ChatGPT, or to build
Speaker:the model that Facebook, or Meta, launched called Llama.
Speaker:There are other models that other people have launched. These are very expensive to build,
Speaker:but once they're built, people can take them and run with them and try
Speaker:them in their own domains and own fields. So I think open source is really
Speaker:helping with this. And we see, you know, in
Speaker:scientific research, for example, researchers come up with really creative
Speaker:ways to try and use these, what we call foundation models or
Speaker:frontier models to build
Speaker:on for their own specific domains. So I'm really interested to
Speaker:see what people do with them and what
Speaker:they find they're capable of. Also trying to just figure out
Speaker:what these models are and aren't capable of. We've had a couple of
Speaker:years of experimentation and people have found some really interesting things that they can
Speaker:do, and I think that will continue as well. So I think we will see
Speaker:a big explosion of this technology used across a wide
Speaker:variety of domains, where you've got domain experts who know about their
Speaker:field, working with these models that have been built by open
Speaker:source communities or big organizations with the money to do
Speaker:so. That's really exciting. Fantastic. So there's so many
Speaker:opportunities out there, aren't there? So yeah, just need to embrace
Speaker:those. So where can our audience find out more about everything you've
Speaker:done? And are you available to go into primary schools? And if people want you
Speaker:to do that, how can they get in touch? I
Speaker:have been into primary schools and I'm very happy to do so. I
Speaker:think LinkedIn is the best place to find me, if you're interested to know more,
Speaker:and I have a website. We'll put links to those in the show notes, because
Speaker:you do a newsletter as well, don't you, on Substack? And other
Speaker:writing as well. Yeah. I like that one of your
Speaker:other accolades was being one of the hundred coolest people in the
Speaker:UK tech world. So it's been
Speaker:fantastic to speak to you. Catherine, thank you so much for coming on.
Speaker:There's so much more we can talk about. We'll have to get you back on
Speaker:and talk about best practice and regulation.
Speaker:Fantastic. Really great to speak with you. Thanks
Speaker:for coming.