
Design and Development Principles for LLMOps

Posted Aug 20, 2024
# MLOps
# LLMOps
# Barclays
SPEAKERS
Andy McMahon
Director - Principal AI Engineer @ Barclays Bank

Andy is a Principal AI Engineer, working in the new AI Center of Excellence at Barclays Bank. Previously he was Head of MLOps for NatWest Group, where he led their MLOps Centre of Excellence and helped build out their MLOps platform and processes across the bank. Andy is also the author of Machine Learning Engineering with Python, a hands-on technical book published by Packt.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

SUMMARY

As we move from MLOps to LLMOps, we need to double down on some fundamental software engineering practices, as well as augment and add to these with some new techniques. In this conversation, we talk about exactly that!

TRANSCRIPT

Andy McMahon [00:00:00]: So, my name is Andy McMahon, Principal AI and MLOps Engineer at Barclays, and I like a latte.

Demetrios [00:00:09]: What is up, MLOps community? We are back for another podcast. I'm your host, Demetrios, and talking to Andy today, we went down so many rabbit holes, I had a blast. I highly respect this man. He is the person that I stole the definition of MLOps from. I think he puts it the most eloquently. We talk about it and his definition in this episode. You probably heard me say it a few times on this podcast if you listened to any episodes before this, because, yeah, I shamelessly stole it. It's the best way of looking at MLOps.

Demetrios [00:00:49]: He currently is working at Barclays Bank, which is a bank in the UK for anybody that doesn't know. And we went down rabbit holes on infrastructure, but we also went down rabbit holes on teams and how to function at a higher capacity, how to be a great leader, and so many other things. I'll let you listen now. And as always, if you enjoyed it, give it a like or drop in a comment, share it with one friend. And I don't have a song for you today to share with you while we rock into the conversation, but I will give you one thing for your recommender systems. And this was given to me by my lovely wife. I have to share it with the world. For anyone that likes tahini.

Demetrios [00:01:45]: Tahini? You know, sesame paste, as some people call it. Try black tahini. It is unbelievable. That's all we've got today. Let's get into the conversation with Andy.

Andy McMahon [00:02:04]: Dude.

Demetrios [00:02:04]: So we were just talking about your book and how now it is Oxford material. It's on the curriculum for Oxford. Right. Which is. Oxford's no small feat. That's kind of one of the more popular universities in the world, I would say.

Andy McMahon [00:02:20]: Yeah.

Demetrios [00:02:21]: Especially in the UK. Cambridge and Oxford are huge, but in the world, when you think about the top universities, I know that even me, as an american, I knew about Oxford, and I wasn't that worldly before I moved abroad. And so that's really cool.

Andy McMahon [00:02:38]: Super.

Demetrios [00:02:38]: Congratulations to that. I mean, this is machine learning in Python. Is that the official title?

Andy McMahon [00:02:43]: Yeah, machine learning engineering in python. There we go. Yes, the second edition. So it's been adopted as core material for the artificial intelligence, genai cloud and mlops online course. So it's sort of aimed at continuing professional development, retraining a lot of existing engineers who want to upskill, but a lot of leaders and stuff. So I was a guest lecturer on the program, but now they've adopted my book, part of the sort of recommended reading, and you get a copy of the ebook when you sign up to the course. So there you go.

Demetrios [00:03:16]: But then you, you also mentioned that you're a guest lecturer somewhere else.

Andy McMahon [00:03:21]: Yeah, the University of Warsaw. So they have an AI for executives program. So I work, I work there and present on two modules across sort of the ML development lifecycle, Jenny I and llmops sort of stuff.

Demetrios [00:03:38]: Oh, that's cool.

Andy McMahon [00:03:39]: So it's cool. Yeah, it's good. It's different from the day job, you know, rather trying to build solutions and design and all those good things I like, but it's nice, it's nice to share your knowledge.

Demetrios [00:03:50]: And so the funny part was that you mentioned apparently ten out of the first 50 OpenAI, something like that.

Andy McMahon [00:04:01]: Yeah, I've probably missed. Now you've put me in the spot, I'm going to caveat and say I've likely misremembered. I was told that a significant proportion of initial OpenAI colleagues, if you like, came through Warsaw. So Warsaw has got an excellent computing degree program. I know that much. And yeah, they definitely had some of the early people in OpenAI came through.

Demetrios [00:04:22]: Funny thing was, basically those folks probably have left OpenAI already, started their own thing and then got eaten up by big tech or aqua, hired by big tech by now. So it is ridiculous how I'm seeing, I don't want to speak out of turn here because I imagine all these folks that are raising a ton of money and trying as hard as they can to get a product out there in the gen AI space, especially in the model space, and then they realize that things are a lot harder or something is not going to work. And they say, what's the escape hatch? Well, let's go and let's join big tech. And so it's that notion of failing upwards, but spicy, man.

Andy McMahon [00:05:13]: That is spicy. I mean, I can't, I can't, I can't talk, right? I work for a bank, I sold.

Demetrios [00:05:21]: Out before I really have a job. So I don't know if I'm the best person to be talking shit either, but I look around and I'm like, I could have grabbed the bag. I should have started a model company and raised 500 million. And then after a few years of not releasing a product, been like, now's the perfect time to go and join DeepMind.

Andy McMahon [00:05:42]: Well, don't do a model, but do do a rag company or something, though. This was the next wave, you know, there'll be, there'll be another wave soon.

Demetrios [00:05:50]: Yeah. Agent company.

Andy McMahon [00:05:52]: Agent company. That's, that is the next one.

Demetrios [00:05:55]: Yeah.

Andy McMahon [00:05:55]: That is. Jump on it. That is happening everywhere. Do it.

Demetrios [00:05:57]: Can't miss it again, man. That's my ticket to being a cog in the wheel at big tech, which is my lifelong dream.

Andy McMahon [00:06:09]: You want to fail upwards? Yeah, fail that easy. That'd be great.

Demetrios [00:06:12]: Okay, 20 seconds before we jump back into the show. We've got a CFP out right now for the data engineering for AI and ML virtual conference that's coming up on September 12. If you think you've got something interesting to say around any of these topics, we would love to hear from you. Hit that link in the description and fill out the CFP. Some interesting topics that you might want to touch on could be ingestion, storage or analysis, like data warehouse, reverse etls, DBT techniques, et cetera, et cetera. Data for inference or training, aka feature platforms if you're using them, how you using them? All that fun stuff. Data for ML observability and anything finops that has to do with the data platform. I love hearing about that.

Demetrios [00:06:58]: How you saving money? How you making money with your data? Let's get back into the show. Oh, dude. Well, okay, before we jump into what we came here to talk about, which is a little bit of mlops and what you like to call llmops, I personally don't believe in that term because I have a vested interest, obviously, to not call it llmops. I prefer the term AI infrastructure because I feel like it's more suited for what it is. But we can get into that in a second. What I wanted to ask you about is when you're given this AI for executives course and you're talking about the lifecycle, what is the lifecycle?

Andy McMahon [00:07:40]: Yeah, it's really good. So I kind of agree with you. I think I use the term LLM ops of necessity. I still view mlops as encompassing all of this, and infrastructure is a part of it, but there's lots of other things. Yeah, the life cycle is, I like to always think it's the software development life cycle with some extra pieces. And I've said this about mlops for a long time, right. And it was born of DevOps. And all of these things still stand.

Andy McMahon [00:08:06]: It's just some extra pieces. So you still, you know, you still scope out your requirements, you still understand what you're going to do, still go through sprints or scrum or however you're going to do it, but you go through a classic route to live of development, pre production or test and through to production. As part of that, you have to do testing and validation post deployment. You have to monitor and underpinning it all as infrastructure. You want to automate all those processes as much as possible. That's it in a nutshell. The challenges come because when it's not traditional software, it's machine learning, or now Genaii or AI, the whole question of what does testing mean? Automatically has a much bigger answer. It's not just data pipeline.

Andy McMahon [00:08:53]: Exactly. Where does the data come from? Garbage in, garbage out. What does monitoring mean? Yeah, all of those things have extra pieces fundamentally underpinning it. It's not that different from software. And I think that's a big learning many organizations have had over the past few years. They sort of, ML came on the scene and now Jenny, I and AI is coming on the scene. A lot of organizations maybe race to reinvent the wheel, and then they actually discovered a lot of stuff could be leveraged that's out there, and then you could focus on the value added new pieces. So I think that's a journey where we seem to be going through with Genai again, and I'm keen to accelerate that.

Andy McMahon [00:09:36]: I can see us sort of going back to the same place we were at with the first wave of data scientists, or then deep learning, sort of starting to do that again where we think it's all new, throw it all away. Yeah, you know, but actually it's just you're adding some pieces, you're adding some processes. Just everyone. Yeah. Take a breather, take a beat.

Demetrios [00:09:53]: You feel like the, and by the way, you are the person that I stole this from, which I still to this day will quote you every once in a while. Mlops as a term. I like your definition of mlops the best out of any I've ever heard, which is going from n to one, n plus one models. So you said it like, yeah, all right. Basically the standardization or the practice of mlops is when you're taking one model that is in production and scaling it up to n plus one models.

Andy McMahon [00:10:34]: So it's exactly that. So I kind of, I likened it to proof by induction, this mathematical concept. But yeah, fundamentally it's like you say, so getting one model into production, the first model is always a big achievement for an organization, a team, a company. They're always super excited. But that's sort of day zero, day one, day two is, oh, I need to go back and do it again. And then what do I do? I've got, now I've got two models is the second one replacing the first one? Are they running in parallel? And obviously both of these are true. Eventually. Eventually you have different products and services.

Andy McMahon [00:11:11]: They're all having champion challenger sort of logic going on. And that is exactly it. And that totality of all that is mlops to me, because it's very easy. Well, it's not very easy, but I think it's easy to get one in production and monitor and maintain it. Just throw everyone on that. When you start really thinking, industrializing this process. And, you know, there's different analogies. People talk about, you know, going through the sausage machine or whatever, the conveyor belt, but it's exactly that.

Andy McMahon [00:11:40]: And I think, like I said before, it's going from n one to n two. And then eventually you know how to manage what you've got, but you know how to increase the estate scalably.

Demetrios [00:11:50]: Yeah, that's the ideal state that you're at. One other piece on this, since you were mentioning DevOps and I, and how, and I fully agree, like Mlops is heavily inspired from DevOps, but I think what I've seen, and it's almost like every iteration it gets progressively worse in that thinking that DevOps is just tooling and it's just the tools that you use to accomplish these things. And then mlops, I saw it happen a lot, and you probably saw it happen a ton, especially in those basically 2020 to 2022 era where it was more conversations about the tooling and less about the processes and the cultural side of it.

Andy McMahon [00:12:38]: Yes.

Demetrios [00:12:39]: And I 100% see that with the AI infrastructure space right now, where people just care about the tools and less about what is it that we're actually trying to do? What's the process, what is the culture and how we can make this be that n one style of execution.

Andy McMahon [00:13:00]: 100%. This is something I've seen for years. As you said, it's still around, right? It's silver bullet thinking. It's sort of, you know, oh, if I just find the perfect tool or the perfect architecture, the perfect combination of tools, I'll be sorted. It doesn't work that way at all. I maintain, I've said this in plenty of times and bullshit me for this, but you could do any and all of this in excel, really, if you had a, if you had an awesome process and really good people. Right. And that's the point.

Andy McMahon [00:13:31]: You're saying it's about processing culture first. Those are the sort of, and like you said, solving the real problem. So those are the first order problems, the sort of second order or maybe a first order problem is data and, you know, the underlying sort of the ecosystem, how rich is the soil you're sort of planting in third or fourth order is tools. Absolutely. And like, you can do so much with open source. You really, really can and you should. I always encourage teams and people I speak to like push the boundaries of open source first and that then you then become what I call a kind of a more competent buyer or a more informed buyer because you can very much, you know, you get sales pitches and this is happening with the Jenny I stuff because it's so new. People blow you away with amazing decks and cool demos and everything's really slick and you just think, I need that.

Andy McMahon [00:14:24]: What you actually need is end products that work. You don't need that tool necessarily. So go and try it other ways first and then you find out where the gaps are. Because if you actually find out that, you know, I can't get Jenny I working because provisioning AWS accounts is so hard for me. You don't need the fanciest rag application as a service, then you need something more in the infrastructure. Just really, really do the groundwork, the homework, push the bounds of what's available and open source and give back to open source. If you're all doing that.

Demetrios [00:15:00]: Yeah, contribute back. That's huge. That's so true. And even knowing and trying open source first will educate you as a buyer because you're going to know what features are actually valuable and what you wish that open source tool had or potentially was able to do that for some reason it's not able to do. And so then when you go on the market and you look for a tool, you're more capable of saying, let's find something with these requirements, because if everything is new in the beginning, you're going to not know what is really important for your use case.

Andy McMahon [00:15:42]: Yep, no, that's it exactly. And the other thing as well is you really start understanding what the reality is and this sort of links to what you're saying. But it's very easy from a great example is like the demos that came out around, say, Devon and all the other stuff, you know, the auto, auto developer.

Demetrios [00:16:03]: Yeah.

Andy McMahon [00:16:04]: And very quickly, I thought this was funny. Everyone said if they've automated development, why are they hiring developers? But that's, that's a side point. But like that was a demo video. That was some, you know, snippets and all of this stuff, all of this hype. But really, if you're the people playing with, you know, blank chain or whatever it is. And you're just, you know, Olama and you're playing around with this stuff, you go, that would be incredibly difficult to do. And you sort of, that's you becoming more informed right away to your point, because you're sort of, you can smell, you can smell crap a bit easier. Yeah.

Andy McMahon [00:16:38]: And you understand where the sort of research activities are. So you know, for example, how hard it is to build agents that actually perform very well and robustly. Right. So if someone comes to you and says they've solved that problem, if you've played around with it, you're going to be, you're going to have a far higher bar placed on them before you give them your cash, which is so important. Right. Because otherwise you're going to make crazy decisions. Yeah.

Demetrios [00:17:04]: Which is easy to do and expectations aren't going to be aligned. You're going to be thinking that you can do one thing and probably get a rude wake up call at some point in time. And the piece that you talk about too there is, I heard it put wonderfully yesterday. I think it's like a Hashicorp fundamental within the way that they build. And it is something along the lines of workflows over technology. And so they really, the whole philosophy there is saying all we care about is the end goal. However we get there, that can change a million different times and we can optimize it for the better as we find better tools out there, or we recognize that we can build a piece, a pointed solution that will service our needs better. But really the important thing is that end state that we are going for.

Demetrios [00:18:06]: So how we get there, it doesn't really matter.

Andy McMahon [00:18:09]: Yeah, yeah. And they. Yeah, exactly. The implementation details aren't the important piece necessarily. So a great kind of way to, I think about this in my head is when I see an architecture diagram, it's generally the case that I could swap all of the boxes for a competitor tool and it would still work exactly the same. And that's good.

Demetrios [00:18:31]: I thought you were going to say.

Andy McMahon [00:18:32]: I could swap all the boxes for Excel. I'm maybe going to have to retract my excel statement. Right. So do that to a point. But the point, the point roughly stands, right. Philosophically. But, you know, if you're, you know, if you're, because if you're. And even at the level of the cloud provider or something.

Andy McMahon [00:18:45]: Right. If I have to port from one to the other, functionally, they are mostly equivalent by necessity. And increasingly a lot of tools are becoming interoperable. They're all tending towards standard API structures and standard contracts. That's all very positive. So it does mean that if you look at your architecture and the specific component that does the ML pipeline piece, if you know you can confidently swap that out for one of the other tools, you've generally built the right workflow, to the HashiCorp point. So I 100% agree with that. Where it becomes difficult, and this is me thinking with my bank hat on: large existing organizations that aren't, say, big tech will have a lot of legacy solutions lying around. And sometimes that just means, you know, there are bespoke things that have to be built.

Andy McMahon [00:19:40]: In general, the core point still stands: things should be interoperable. And that's how now you can get best-of-breed architectures, where you say, I'm not going to go all in on one platform, I'm going to use multiple platforms that do different things better. I'm going to have one thing for monitoring and one thing for scalable training and one thing for data engineering. That's all very doable now.
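
One way to get the swappability described here is to have workflows depend on a thin interface rather than a vendor SDK. A minimal sketch, with invented names; an MLflow- or Weights & Biases-backed tracker could implement the same protocol and be dropped in without touching the pipeline code:

```python
from typing import Protocol

class ExperimentTracker(Protocol):
    """The only surface the workflow is allowed to depend on."""
    def log_metric(self, name: str, value: float) -> None: ...

class StdoutTracker:
    """Trivial implementation; any vendor-backed class exposing the same
    method could replace it without changing training_step below."""
    def log_metric(self, name: str, value: float) -> None:
        print(f"{name}={value}")

def training_step(tracker: ExperimentTracker) -> None:
    loss = 0.42  # placeholder for a real training computation
    tracker.log_metric("loss", loss)

training_step(StdoutTracker())
```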

Demetrios [00:20:05]: Yeah, I remember also back in the day I heard somebody talk about how they were hybrid cloud just to minimize the potential blast radius. If they took down their whole AWS instance, at least they knew it would just be on AWS and they had their whole GCP instance that wouldn't be affected.

Andy McMahon [00:20:27]: Well you, you're laughing, right? But that, that is, that is policy. Enlarged organizations that are not kind of a sort of technative, et cetera, that is the case. And so sometimes, sometimes it is necessary if you like, because say a large financial institute, like a large bank, we're, we're critical infrastructure for like the british economy. Yeah. So like it's good that we have people worrying about that in a sense of. But it's true as well that it still stands what you said. The workflow should be able to work on prem or in cloud. It's just different tools and components that do the specific tasks.

Demetrios [00:21:11]: And you've inevitably been in a lot of rooms. I can imagine where the ROI on switching tools or upgrading tools from legacy is being discussed. How do you generally go about that conversation?

Andy McMahon [00:21:28]: I think you're, like you said, you're always focusing on the ROI and that becomes complex very quickly because you're not just thinking per unit cost of running queries or of storage or something. You're also thinking of total cost of ownership, the entire lifecycle. So I have to decommission this existing stuff. I have to spin up all this news. I have to maintain and run this new stuff after maybe upskill or train or new hire sort of comes into it as well. So it can become complex. But I think you always have to think, yeah. What are the key components of the Roi calculation? And what is the target state we want to aim for? And it's sort of.

Andy McMahon [00:22:14]: It's always a balance between that short term benefit of maintaining things as they are versus, you know, you have to go through some pain to get to the real target state where the benefits will drive through. But it's always complex. And sometimes the answer is not rip it up. Sometimes the answer is no. No, that has to stay as is, or it's not. It's not worth it yet. Is another answer.

Demetrios [00:22:38]: When have you recognized that? Like, how? And because I can imagine there's. It's like hawks and doves in the government almost. And sometimes, yeah, it feels like there's probably a lot of. A lot more hawks than doves because engineers just like to get their hands dirty, and the new stuff is always gonna be cooler than the old stuff. So how have you, like, when have you thought about that? Just has to stay. And what were the arguments that won you overdose?

Andy McMahon [00:23:10]: Well, I think it's actually interesting. You think there'd be more hawks? I think in certain industries, heavily regulated industries. Right. This applies maybe less in some jurisdictions, but like in Europe. Yeah. If you're in finance and banking. Yeah, yeah.

Demetrios [00:23:26]: There's a lot of dubs.

Andy McMahon [00:23:27]: Yes, exactly. Just by nature of the. And that's. I think that's a good thing in a way, because, like I said, we're an important part of the economy, so we can't just go fail fast and break things. Right. Doesn't apply when people need their money out of machines. In some cases, it should apply, and you should always innovate and move forward. That's important.

Andy McMahon [00:23:49]: But, yeah, you can move at the pace of some big tech. Right. That's just fundamentally true. The other aspect is that, well, related to that is like, stability is really important. Trust is important. We are custodians of people's data, and it's super important data. Right. It's the stuff in your wallet.

Andy McMahon [00:24:08]: It's your earnings. It's very. It's very personal information. It's all of these things about your life. So that's taken very seriously. So that feeds through to there being many doves. Now, there are lots of cases where you put on a hot cat and you think, this is crazy. We need to rip up but I think the cases where it's prevailed, and I won't go into details necessarily, but there are just core pieces of banking infrastructure that have been around for decades, and I am not sure when they will be sort of totally hauled and revamped.

Andy McMahon [00:24:40]: And in a way, it's kind of like these systems keep working. So let's not poke it too much, which may sound scary to some people, but actually it's good because they are well maintained, they are well looked after. They do have really strong governance and programs around them. But, yeah, there's actually, I would say it's more dovish than hawkish. The key challenge is bringing in a balance so they innovate and you don't kind of get killed by your competitors. But it's a hard balance. Right. And there are, like, upcoming sorts, fintechs and new starter banks and stuff that are challenger banks, as they're called.

Andy McMahon [00:25:23]: They're a bit more nimble and lighter, but at the same time, they're getting started out. I'm sure once they become the scale of, like, a huge bank, they'll have different. They'll have to. Yeah, you, you know, you can. You'll join. You'll join us. Live long enough to become the villain, as they say in the Dark Knight.

Demetrios [00:25:39]: Right, exactly. I remember we had, I think her name was Michelle from Lloyds on here, and she was talking about how they were going through the migration to cloud, and it was a $3 billion project, and it was almost, I think she said they budgeted, like, six or seven years for the full migration. And so that type of stuff, it's like when I was in Portugal and I was getting a tour of Lisbon, and someone told me, yeah, that building took 250 years to be created, and the first and second generation didn't even live to see it be finished. And I'm kind of, like, looking at.

Andy McMahon [00:26:30]: That Lloyd's building the great wall or something. Yeah, you're definitely a building. Cathedrals. That's sort of how I think about it. But that's, again, like, when you break it down, it's actually made of much smaller winds. When you hear that, you think, why would I ever, why would I ever go somewhere where I do that? But it's not the case that you're just sitting around doing the tiniest iteration. What happens is more you divide and conquer. So you say, is one part of the bank or one set of data will migrate first, and then we'll derive lots of value and we'll build mlops and all the infrastructure we need for data science, et cetera, there while the hydration is continuing in other parts of it.

Andy McMahon [00:27:11]: So I was kind of speaking to someone about this recently. When you're in an environment where the end to end is a bit slower or there's kind of more steps, what happens is you, rather than vertical scaling you horizontal scale, to borrow a kind of infrastructure term, right. So what you do is you end up running lots of things in parallel and driving value that way, and then they all start landing in a sort of consequential fashion and get you to where you need to be. Whereas in maybe a more startup focused world. So when I worked in the much smaller companies, you know, like a startup, it was more, you know, you had a job, do it end to end, move on to the next one. Now it's more, we have slightly longer pieces of work, but let's horizontally stack them and spread our time between them and keep moving all of them forward.

Demetrios [00:27:55]: Yeah.

Andy McMahon [00:27:56]: So it's just different scale. But then when things land, it's massive impact. Right? Millions of customers. We're catching fraud better, so we're literally protecting people's livelihoods and their money. And I remember working on projects before at NatWest where you're breaking up slavery rings, you know, modern slavery and all these things. There's all these amazing things that come through banking technology. Right? Yeah. That's easy to sort of dismiss and forget.

Andy McMahon [00:28:28]: But it's critically important, and it's very real for people. It was the first time I worked in an organization where everyone understood what I did. I could say, you know that retail app on your phone where you do your banking? We're protecting that and looking at the data. And, oh, I get that. Whereas before, when I worked in, say, energy or logistics, people just didn't understand. What do you mean, you're doing a routing problem? Lots of diagrams and whiteboards. Whereas this is very visceral, and it's a really direct impact on customers, which I really like. That's why I've kind of stayed in finance.

Demetrios [00:29:00]: Real quick question for you.

Demetrios [00:29:03]: Are you a Microsoft Fabric user? Well, if you are, you are in luck, because we are introducing SAS Decision Builder. It's a decision intelligence solution that is so good, it makes your models useful. Because let's face it, your data means nothing unless you use it to drive business outcomes. It's something we say time and time again on this very show. But wait, what do you mean by nothing? Well, SAS Decision Builder integrates seamlessly with Microsoft Fabric to create effortless business intelligence flows. It's like having a team of geniuses you manage in your pocket, without all that awkward small talk. With Decision Builder, you'll turn data into insights faster than brewing a double espresso. And you know how much we like coffee on this show. Visually construct decision strategies, align your business, and call external language models.

Demetrios [00:30:00]: Leverage decision builder to intuitively flex your data models and other capabilities at breakneck speeds. There's use cases for every industry, including finance, retail, education and customer service. So stop making decisions in the dark. Turn the lights on with SaaS decision builder. Yes, I did just make that joke. Watch your business shine, SaaS decision builder, because your business deserves better than guesswork. Want to be one of the first to experience diffusion? Want to be one of the first to experience the future of decisions? Well, sign up now for our exclusive preview. Visit SaaS.com dot slash fabric or click the link below.

Demetrios [00:30:49]: So talk to me about how you look at the addition of AI to the ML workflows. And when you're thinking about a platform, are you thinking about the ML platform also servicing the AI needs, or is it now you're thinking about these two are separate, or the AI part is greenfield, and we can move faster if we just start from scratch.

Andy McMahon [00:31:21]: Yeah.

Demetrios [00:31:22]: What is that?

Andy McMahon [00:31:23]: Good question. So the way I try and think of it more as a sort of, you're providing an ecosystem in which the organization can first of all play and experiments and then eventually build products and services. But you're sort of, it's not as strict in my mind as like this is the platform for this, this is the platform for this. So depends how you slice and dice it, but it's more, you're creating an ecosystem of capabilities that people can tap into. So sort of tools on the shelf, and then it's up to the business based on the requirements they have for customers, for colleagues, etcetera, what they need to build, and then they can go to the shelf. So the key thing is building the right stuff to put on that shelf for them to use or enabling it. So what I kind of think about a lot is, you know, the traditional ML use cases are not going anywhere. They're not, they're going to be around forever, right.

Andy McMahon [00:32:15]: The thing that's going to outlast all of this is SQL. Right? But let's not go there. You know, the longer that rule, I can't remember it, but the longer technology's been around, the longer it will be around.

Demetrios [00:32:24]: Yeah.

Andy McMahon [00:32:25]: So like, you think, like SQL has been around forever, it will be around forever. ML is catching up. It's going to be there a long time. And then you've got AI. The way I think about it is, yeah. Are you servicing the capability? Are you giving people places where they can access the data, do their exploratory analysis? Have you got development workflows where they can build machine learning models and the pipelines that those sit in, can they orchestrate them and can they monitor? A lot of that thinking still applies to the AI world. The big difference now is obviously you don't own the models. So the focus of the lifecycle is a bit different.

Andy McMahon [00:33:06]: You're consuming models as a commodity or a service. I keep talking about models as a service or as a commodity. So you're kind of, you're just going and purchase the use or renting the use of these models that someone else has built in most cases, the vast majority of cases, and that's just a different setup. So what it means is the traditional mlops lifecycle was far more focused on, you know, how are you going to train, how are you going to track all of your experiments when you're training? How are you going to optimize the hyper parameters and how are you going to then obviously validate all of those runs in a sensible way that translates to production setting. Now you're not training. You might be fine tuning a little bit, but even then that's not most use cases. What you're probably doing is actually saying, now I'm building vector database. I'm chunking and indexing data in that database, and I'm building my chains or my flows around how I interact with the LLM to get the best usage of that data.

Demetrios [00:34:06]: Totally.

Andy McMahon [00:34:07]: But you're still doing the monitoring piece, you're still doing evaluation, and you're still going through those sort of stage gates that you would before. They're just sort of different tools on the shelf again. So I kind of view it as, yeah, you're just building out this ecosystem. People can tap into it when and where they need. The kind of interesting case that I think hasn't came up yet, but will is when people want to do very hybrid solutions, they're both traditional ML and AI. So things like function calling and more agentic workflows. Right. That's going to be really interesting because what you're going to have is like AI will be the key backbone and even part of the orchestration framework almost if it's doing function calling, but it might call in to your proprietary models that you've built.

Andy McMahon [00:34:50]: So I can imagine a world where we say to an LLM, run this analysis and use my existing fraud detection models. And what it does is it hooks in your proprietary models, pulls in the answer and synthesizes it with other models. It comes up with an answer.

Demetrios [00:35:06]: Yeah.

Andy McMahon [00:35:07]: And there's going to be a really interesting question there about how do you expose your models as services that can be consumed across the organization. How are you going to again validate that? Because that becomes really complex very quickly. But that's the sort of thing I'm thinking towards. But yeah, I think of it as an ecosystem. So you could view it as all one platform or you could view it as just a collection of sort of buckets of technology. And the key thing is telling people, to your point earlier, was the workflow and process for hooking these things together to build a solution.

Demetrios [00:35:39]: Yeah. Making sure that's standardized so that all teams can access whatever they need to whenever they need to.

Andy McMahon [00:35:47]: Yeah.

Demetrios [00:35:48]: The idea of the hybrid AI plus traditional ML is fascinating to me too. I think about some things that I've seen. I think, and I can't quite put my finger on how it works now, but I know that we have had people come on to the LLMs in production virtual conference and talk about recommender systems and using LLMs in the recommender systems. There's the very easy way to do that is just asking an LLM to recommend things or saying I'm x type of person recommend something to me. So that's like a very basic LLM recommender system type thing that doesn't account for recommender systems wanting to be very fast. And the other pieces that most people aren't just looking at I'm x type of person recommend where I should go on vacation. So there's like yesterday I was talking to a guy who was mentioning all these recommendations, job recommendations and all the features that they're using to create those recommender models and then how they need to make sure that when they're serving it, it's super low latency and that you don't quite get with LLMs yet. Maybe you can figure out a way to prune or make it super, super small models and get something.

Demetrios [00:37:18]: But I do think there is a world where we start to see more LLMs incorporated into recommender workflows. I've heard people talk about how they use LLMs to get suggested features that might work, and so that's another cool one. It's a little bit before the training happens, right? Or it's a little bit, I would say, like on the other side, the serving, but it's cool. And the last thing I'll say about what you're talking about with the ecosystem, which is a great notion of how to think about it instead of thinking, hey, there's this platform that you plug into here if you're an ML engineer, and you plug into here if you want to do AI, it's more of you've got all these tools, you string them together however you want because you're the creative artist. I'm just giving you white paint, black paint, a few paintbrushes, and then you make the masterpiece. And the fun part now, I think with AI and having these models exposed or having these models as a commodity, as you were mentioning, is that you don't have to be an ML engineer, you don't have to be a data scientist to get the value, if you can figure out how to make it work from these models that are models as commodities. Right?

Andy McMahon [00:38:46]: Yeah, exactly. So you're going to have a kind of stratified strategy. Stratified strategy. But basically, yeah, you're going to have one tier of applications is exactly that. It's baked into some vendor offering or some existing solution you use. Those copilots are popping up everywhere. Those LLMs be baked into so many existing software products that we use day to day. So in my email, in my word processor, in my ide for coding, there's going to be copilots there that you can use.

Andy McMahon [00:39:18]: And like you say, I'm not building an application that does that, I'm just using it. Then there's sort of slightly lower level. There's things, you know, that are a bit more GUi driven. Drag and drop, push some buttons, like Microsoft copilot studio as an example, sort of. It's getting to work. You're still building something yourself, but you don't really have to be a deep technologist to do that. And then there's the level of, you know what? No, we have to have a lot of control here. We have to really own everything end to end, and we want to build very bespoke components or string them together in a certain way.

Andy McMahon [00:39:51]: And that's when you get to like the EI and ML engineer. And I think that'll all work together as long as people are clear. But you said something very important earlier, which is about being driven by the actual needs of the organization or of your customers. The danger I sort of worry about sometimes is we shouldn't just do Jnai to do Jnai, just like before. We shouldn't have done deep learning. To do deep learning. We shouldn't have done data science. To do data science.

Andy McMahon [00:40:19]: We should really spend time as organizations really understanding where does this drive value? Because it will drive value. It'll absolutely drive value. It might not drive value to the tune of trillions of dollars and stuff that's been thrown around. I'm a bit skeptical of that, but it will drive value. It's just about being very clear on, you know, here's the need for that. And then there's an education piece that comes as well, because now everyone has heard of jnai, but many people have not still heard of traditional ML. So, you know, there's a lot of people maybe thinking, oh, I could use, I've seen things like people saying, I could use genii to classify something. Technically, yes.

Andy McMahon [00:40:57]: However, you could use logistic regression, you know.

Demetrios [00:41:00]: Probably cheaper and faster.

Andy McMahon [00:41:02]: It'll be cheaper, faster and far more controlled. And again, in a regulated industry, sometimes that's the kicker, is like, actually, I have full control and auditability of this, whereas otherwise that's a black box. And I know this thing is going to perform. It's far more deterministic, still statistical, but it's more deterministic than a Jenny eye model. And I completely own it. You know, my little logistic regression is going to give me the same prediction today, tomorrow and next year based on the same inputs. Some JNaI models won't, if they're provided by third party providers and still doing pre training, etcetera. So those are important considerations.

Andy McMahon [00:41:38]: So there's a kind of education piece that has to happen as well. Where people were, you know, they know that playing around with an LLM, they can do certain things. And then you say, actually, there's other ways to do this. The recommender example is brilliant. The naive way won't scale yet, it'll be super expensive. It doesn't probably make sense in so many ways, but maybe you use a traditional recommendation engine and you get some of the first N recommendations and use an LM to synthesize a response and say, if you're feeling this way, do this. If you're feeling, you know, there's kind of. You can do lots of different combinations.

Andy McMahon [00:42:11]: But, yeah, it's just that that hard thinking still has to be done around what products we should actually be building.
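
The "traditional engine first, LLM on top" combination Andy sketches might look something like this. The similarity scores stand in for a real recommender, the items are invented, and the final string is simply the prompt you would hand to whichever LLM you use:

```python
import numpy as np

# Stand-in for a real recommender: item embeddings and a user vector.
items = ["hiking boots", "travel guide", "rain jacket", "camp stove"]
item_vecs = np.random.default_rng(1).normal(size=(4, 8))
user_vec = np.random.default_rng(2).normal(size=8)

# Classic fast path: score every item, keep the top N.
scores = item_vecs @ user_vec
top_n = [items[i] for i in np.argsort(scores)[::-1][:2]]

# Expressive path: let an LLM phrase the recommendation for the user.
prompt = (
    "You are a shopping assistant. Recommend these items to the user "
    f"in one friendly sentence: {', '.join(top_n)}"
)
print(prompt)  # this string would be sent to the LLM of your choice
```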

Demetrios [00:42:16]: Yeah, that's why, that's exactly why I felt I like the product managers are the ones who are really crucial at this point in time. Yes, because we've both seen so many products come out that are doing Genai, just to do Genai and you see it and then you notice it. And I remember being at a hackathon a few months ago and people were telling me how they were doing this cool thing with the LLMs and it was an LLM hackathon. So of course they're trying to figure out ways to use LLMs. But then I asked at one point when I was, I was one of the judges and going around asking and hearing the pitches and I asked, so why can't you do this with just, even, just like regex? And that's true. They said, they literally said to me like, oh, oh, that's the cool part. You can, I was like, well then just do it.

Andy McMahon [00:43:18]: Yeah. I can't remember who said this, but I wish I had known. If someone knows, please tell Demetrius and I. But someone said this before, they were like, yeah, you're using, you're using a literal supercomputer to reinvent Reg ex. Like think about what you're doing. Like just learn some reg.

Demetrios [00:43:34]: Exactly.

Andy McMahon [00:43:35]: Learn some SQL. Now there are cases where you know, like text to SQL is an interesting one, right? I've seen a lot of debate about this and there's some people I follow and things are like, you should just learn SQL. But on the other hand, if you're a non technical user and you need some information from a database, text to SQL is going to be super useful. Yeah. If you do know SQL, however, sort of the precision control that gives you, it may end up not being worth your time to go text to SQL. If you, you know, it's the same, it's the same with the coding stuff. Like if you know nothing, like, you know, just get an LLM to write it and it's broken, then ask it to fix it and you'll get somewhere and you'll learn something. But if you know a little bit, you're not going to get, you're not going to get it to do everything for you.

Andy McMahon [00:44:15]: It's like having really helpful assistants around you. You're not going to get them to do absolutely everything because what's the point of view? You're going to offload tasks to them and delegate and manage. But I totally agree with your point on product. Really good product managers and people who think in a product mindset more generally, that's going to be the key differentiator for teams and organizations moving forward.

Demetrios [00:44:38]: Yeah. Folks who understand what technology they now have at their fingertips, but they're not so hawkish, as we were saying before, to just stuff it into everything and that's worth its weight in gold. Okay, dude. So going back to the idea of architecture and genai and having this ecosystem of toys that you can play with and string together, I think the tools that you need are so similar, but just a little bit different. So I don't know how bullish you are on trying to recycle tools from traditional ML into the genai space versus we probably should have a whole new tool for that.

Andy McMahon [00:45:24]: Yeah, I mean, you definitely have new tools and new capabilities. It's just, I sort of, I put them on the shelf with all the other things, as I was saying earlier. So for example, vector databases, if you're going to go hard and build your own rag stuff, which plenty of organizations will be doing, that wasn't really part of the classic data science ML engineer bag of tools unless you were in. It was if you were talking about recommendation engines. They were using vector embeddings for a long time and not in the same way we do for Rag, but getting towards that, right. And that was there. But in general, for most data science teams, they weren't sort of thinking about PG vector and Chroma and all these tools, they just weren't. Now that is part of the language and part of the toolkit they need and expect.

Andy McMahon [00:46:11]: So you need to enable that. The other thing is obviously just the providers of the models, they all have different sort of cells. So the different cloud providers, they have their models and they provide surrounding sort of ancillary products, they provide agent frameworks, they provide guardrails, they provide out of the box databases. And you just need to educate people and you know how much you want to consume that versus just consume the model and build some of the other stuff. It's just a sort of. It's always a balance and it's always a. It's always a balance. It's just like when we migrate to the cloud, the sort of how far up the abstraction sort of ladder do you want to go? You know, you can go all the way to just consuming SaaS.

Andy McMahon [00:46:57]: So I'm just going to use chat GPT perplexity. I'm just going to use MS 365, copilot, whatever. I'm never going to care about what's happening in the background. That's again one tier, right? You go a bit lower and you say, actually I'm going to build my own front end and hook into an LLM provider. And then you eventually say, I'm going to build my own vector dbs and do my own chunking strategies. So then you get all the way down to some crazy people will be saying, I'm going to compile Lama CPP and run it on a bare metal server that I have running in my room or something. That's all fine. It's just being clear how far up and down that ladder you want to go and what the benefits are.

Andy McMahon [00:47:33]: Obviously you get more abstracted, it's easier to use, but often costs more. And then you go down the ladder, it can be a bit cheaper, but you have more work to do and you need specific skills. Yep, they all have their place, but it's just, it's just sort of doing it. But yeah, the tools are differently different. I've seen a lot of tools was.

Demetrios [00:47:52]: I was thinking about evaluation and how similar that is to monitoring, but how it's not enough to where you can just reuse it. And when you're thinking about, okay, now I'm collecting a golden dataset and training my model with that, how similar that is to: all right, I have my golden dataset that I'm going to evaluate the model's output on.

Andy McMahon [00:48:18]: Yeah, that's a very good point. So I was talking about this to colleagues earlier, sort of the challenges exactly like you say, in traditional ML for supervised learning especially, I had ground truth. I will get the ground truth eventually. I'll be able to calculate my performance. If I do that in a schedule or event driven way, that's me doing monitoring. Point of time evaluating monitoring were very similar for a lot of LLM or foundation model. Jenny, I applications. Ground truth is a slippery concept and that's not really what you're after.

Andy McMahon [00:48:51]: Right. If I'm providing a chatbot experience, there's no such thing as ground truth. There might be for some specific aspects, like if I do a rag thing and then I can start talking about retrieval precision and all of these metrics. That makes sense. And I can apply some metrics, blue and rouge and all these newer things, but yeah, it's exactly that. And you have the kind of challenge of ground truth is a bit of a slippery concept. How much do I need? Human evaluation. You have human in the loop, human on the loop.

Andy McMahon [00:49:19]: How scalable is that, really? Yeah, how it's. And then there's obviously LLM as a judge using LLMs to evaluate other LLMs, which is the first time people hear that they're like inception, but actually makes a lot of sense because it's a sort of independent layer of verification, and you can have models that are trained specifically to look for certain things. Like Lamagaard was one. It was trained specifically to, you know, detects toxicity or things going outside of specific guardrooms. And guardrails in general are often using LLMs, though. So, yeah, I think you're totally right. The whole evaluation and monitoring piece is definitely a new challenge, and we need to apply some of the new metrics that we have for specific use cases. We should also be thinking really hard with that product mindset, what the metrics we actually care about.

Andy McMahon [00:50:10]: So a great example I always think about is when I learned about containment rates for chatbots. Right? So there's a metric you can calculate that basically says how long until you demand to speak to a human, and that's your containment rate, right? No, but that, but that's, that's a good metric of performance.

Demetrios [00:50:27]: Yes, but I also wonder, can you, can you put right next to that, the language of the demand like sentiment?

Andy McMahon [00:50:37]: No, you could. You can live track sentiment, right? No, but people do this, right? They say, how long do I contain them? Working with the bot? Do I successfully complete the task? And what's the sentiment of the conversation? Yeah. Either in summary or through time, if you're kind of, if you're able to do that. But those are kind of examples of proxy metrics. But they are far more aligned to your business outcome or your desired product outcome. Right. They're not vanity metrics, which I've been fighting against my entire career. You know, you almost don't care about, you know, first.

Andy McMahon [00:51:09]: Well, you do care in some aspect, but it's like time to first token, or I got that. I got that from 0.1 second to 0.9 seconds. Right? Not that big a deal. However, everyone is dumping over my chatbot after three. Three lines because the chatbot just doesn't have a clue what's going on. That's the metric you care about, right? I don't care that, like, the first token came really fast, but the rest of the tokens were absolute garbage. So you just need to be clear, you know, what's a vanity metric and what's a real metric that you're looking after? And that's the site, that's the cycle we're in now. But, yeah, you absolutely can't take existing monitoring tools and just plug them in for LLMs.

Andy McMahon [00:51:46]: You have to think you can still build pipelines and do the calculations and calculate these metrics. But it is different. So I do very much take that point. It's not plug and play.

Demetrios [00:51:57]: Wait, what are some other vanity metrics? Because that is such a good one. The time to first token is something that you should be looking at further down the field. Makes 100% sense. I'm sure you have seen a lot of other ones.

Andy McMahon [00:52:14]: Yeah. So. Well, I got this from the book lean star up by Eric Rice, which is a classic, absolutely brilliant book. You know, it's classic things like hits on a webpage. You don't really care about hits on a web page. If you're selling something, it's more how much product do they sell? Yeah, things like that. So when it comes to ML models, like, I don't care how many predictions you ran, really, you know, I care about like why were you running the prediction in the first place or to sell more mortgages or something. Well, how many more mortgages to just sell? You know, it's kind of, it's just things.

Andy McMahon [00:52:48]: It's just asking that next question. You know, I think about things like the Toyota five whys. Toyota, this is five whys to diagnose issues. But sometimes I just like asking why lots of times anyway, because then you get to the real heart of the problem. So. Yeah, things like that. Things like now, latency and all these things are important. Right.

Andy McMahon [00:53:06]: Because if, if things take minutes to load, that's a terrible experience. Yeah, but it's just making sure that's not the only thing you're driven by. Because once they're above a certain level is a vanity project making them slightly better. It's more what's, what the core offering. You're trying to, you're trying to deliver.

Demetrios [00:53:22]: Like there's certain things that need to be within a bound, but when you hit that, you really need to be looking at the most important metrics. And if those other things go outside of their bound, then you should look at them. But for the most part. Yeah. The. How many mortgages are you selling? Is a very clear way of being able to say this is valuable or this is not exactly.

Andy McMahon [00:53:53]: And it speaks across the technology business divide, which still sort of exists. One that's a bit more subtle but relates to this is, I'm a big fan of Dora metrics. So the DevOps industry standards metrics. So these are like, I won't remember them all, but change failure rate, how often does a change to production fail time from a change to production deployment frequency? All these things, they're basically a set of four metrics that are the standards in industry for DevOps maturity. It's really interesting because I brought in previous organizations, those for Mlops, and said they're important part of mlops. But you can play the vanity metric game there because in some organizations you might say, our change failure rate is zero. What's your deployment frequency though? Or once a year. Right.

Andy McMahon [00:54:44]: You know, so you can play games like that and you say, by how much value did you drive from the solutions? About x x million or whatever. And you say, well, if we actually accepted some change failure rate and opt our deployment frequency, we could generate five or ten x more value. And that's when you sort of, you know, that's the vanity metric game you can play. And I've seen that a lot, especially in more conservative industries. You can get fixated on the wrong thing. We must avoid this failure type at all costs. Sometimes that's not the case. Sometimes the downsides are actually relatively small.

Andy McMahon [00:55:17]: You should move fast and just try things.

Demetrios [00:55:21]: Yeah. And it goes back to really deeply understanding what you're trying to do, what your end goal is. And I really like how you pointed out there's that tech non tech divide, like the business side of the house and the technology side of the house. And if you can service the business side of the house, you're going to be in a great position because people are going to, if you can speak in dollars and cents, people are going to like what you're doing.

Andy McMahon [00:55:52]: Yeah. But I think it's incumbent on us to speak that language, because why are we there as technologists? You're not a technologist in a large financial organization, or a large organization of almost any kind, because someone wants to pay you to have fun playing with tools. Right? It's great if you can have that; I'm very lucky to have a career where that is true.

Andy McMahon [00:56:15]: I am fundamentally there to help drive value for customers, stakeholders, shareholders, etcetera, that it's just the truth. But that works even if you're in the third sector, even if you're in charities. Right. You're there to, you know, help your service users and help the people you help. You're not, you're not, you're not being paid to sit there because, you know, people like, just people are really keen that Andy has a good time. That's not, that's not sort of, you know, why I come into work every day and I think that's super, super important to remember. Right?

Demetrios [00:56:42]: Yeah. Keep that front of mind. Put it on a Post-it and put it on your wall.

Andy McMahon [00:56:46]: Oh, that I am.

Demetrios [00:56:48]: Yeah. Well, we had this guy Nick on a few episodes ago, and he was saying how he was listening to a leadership conversation, a podcast I think it was, and they were really excited because they'd recognized an area of their business they could automate, and they said, yeah, and then we can get rid of all those expensive humans. So he made the point that somebody's always trying to figure out a way to not pay you, so you have to be figuring out a way to make sure you're creating more value than that opposite force, the one trying to make sure you don't get paid.

Andy McMahon [00:57:36]: Yeah, exactly. You don't want to be a cost center as an individual; you want to be a value generator. Right. I always think about that, because some of the best teams I've worked in were very aligned to ROI for the team and had an ROI target. And I try and think about that for myself. What's my ROI when I come in every day? Am I delivering far more value than I'm taking out? You like to think you are, and I think in most cases people are. That's why jobs exist.

Andy McMahon [00:58:07]: But the point is, you can't just sort of assume it's for granted to that point. You can't just sort of. You're not there just to spin up cool demos. You're there to drive value for the organization and teams you're part of. And that's just super important to bear in mind. But it's still a fun challenge. Like, this is the thing. It's not.

Andy McMahon [00:58:27]: People think that might take the fun out of it. Absolutely doesn't. I've always been driven by that. I don't want to just tinker for no reason. I want to be, like, there and go, you know, that. That thing that you use every day to your customer? We helped build that. I think that's really sort of exciting.

Demetrios [00:58:44]: These teams you speak of that were some of the best you've worked with, the ones highly aligned on ROI: what do you think made them like that? What were some of the things that, if I'm on a team, I could take back to my team and say, hey, we might want to think about this?

Andy McMahon [00:59:06]: Yeah. Great leadership. I think that's it.

Demetrios [00:59:13]: What does great leadership mean, exactly?

Andy McMahon [00:59:14]: I'm going to tell you, don't worry. So I just think great leadership is someone who provides clarity, someone who provides stability, someone who provides an environment in which you feel psychologically safe. I'm a big fan of that, but also one where you're safe to innovate and try things within the appropriate bounds for the organization. It's someone who provides air cover and backs you, and who wants to listen to your ideas no matter what level you're at. I've been in a really large organization that was super horizontal, and that is excellent, because that new data scientist you've just hired may have the best idea anyone's ever had. So make sure you have an environment where they can say it.

Andy McMahon [00:59:59]: Also, you is, I heard this phrase before, hippos. Highest paid. Highest paid person's opinion. Hippos. Hippos are bad, right? You shouldn't just look at the highest paid person and assume they know everything or the most senior person. I work on the basis that I know nothing. You know, I'll go out and I may be the most senior engineer in the room, but I just kind of always assume all these people are smarter than me. Maybe that's my inherent imposter syndrome, but it's a really good tool because it means you listen, people come up with amazing ideas and they solve problems.

Andy McMahon [01:00:27]: So that's what good leadership is. I think alignment with, like we said, the organization, the business, which comes through leadership, but it's also just should be part of your team culture. Like, why are you there every day? You're there to, you know, and it could just be enable. My colleagues have a mission. Right. I'm there to sell more mortgages. Yeah, sell more mortgages. I like to think we're driving more value for our customers.

Andy McMahon [01:00:51]: Right.

Demetrios [01:00:51]: Okay.

Andy McMahon [01:00:51]: Or I'm trying to stop fin crime, or whatever it is. Whatever the mission is, be very clear on it and really feel it throughout the team. And then, get stuff done. I think Barack Obama or someone said that in their book, right? I just always think about it: be the person who gets stuff done, be the team that gets stuff done. That's just so critically important, and it ties in with everything we're saying. If you're the ones who hum and haw and go back and forward, that's not where you want to be.

Andy McMahon [01:01:26]: You want to be the people who you say, yes, grab it by the horns, let's go. I think that's it. That's, that's what made those teams successful.

Demetrios [01:01:34]: And speaking again of having the mission: is that something you feel you've seen people continuously talk about when you're on these teams?

Andy McMahon [01:01:46]: Yeah.

Demetrios [01:01:46]: Or is it just, yeah, it's easy to forget, right? In positions where I've led teams, I know I've said it a few times, but it's incredible how you think you're saying it a lot, and then when you circle back, you recognize people have no idea. They're like, no, I never heard you say that.

Demetrios [01:02:08]: What are you said that or that's our mission.

Andy McMahon [01:02:11]: Wait, what? Why are you saying our mission's about mortgages? Yeah, exactly. So, no, absolutely. When I've had teams and led teams, I always try and bring it back to: why are we doing this? I think it's important for people to hear and feel, and it helps people understand why their contribution is valuable. If you're saying to someone, go fill in these three forms, or go onboard onto this tool, or go help this team, it's important to say and remember the reason we're doing it: because if we can get these teams onto this platform, they will stop more financial crime. I think it's important to embed that, and I don't think it's false or disingenuous or corny to do that. I think it's just the way it is.

Andy McMahon [01:02:58]: So, yeah, I think the mission, coming back to that mission is always super important and recognizing a job well done said, be the person to get stuff done. It's important to call out, you know, this was a good example, we should try and replicate that, or this didn't go so well. How can we learn from it? I think that's important as well. But, yeah, I think, I think mission's super important. So I kind of wake up and come into work every day and try and think, how can I enable all of them? I actually tell people, my job is to make you look better, nice. My job is to make you do better stuff. I want to enable you to go faster. I want to enable you to deploy more models.

Andy McMahon [01:03:31]: I want to enable you to build better products and services. Right. That's my job and I love it, and that's why I'm there. And that mission sort of really helps me focus on what's important.

Demetrios [01:03:42]: Sure. So, last one for you before we jump off, because, dude, so good on that enablement part.

Andy McMahon [01:03:53]: Yeah.

Demetrios [01:03:54]: Do you look at any metrics around that? And maybe we'll cut this out because you're like, yeah, that's part of those DevOps metrics I was talking about earlier. But I'm thinking about the enablement piece. Are you looking at metrics like time-to-production-type metrics?

Andy McMahon [01:04:19]: Yeah. Through my career, I've always been obsessed with that. Time to value, I call it, though I've heard it called different things in different contexts, but exactly that. I think that's one of the key ones. Right. How do you go from an idea on a whiteboard or a sheet of paper to a service? And there's always a debate about when the starting gun is fired, when we say the clock started. But fine, just pick a definition. It's the first commit in the repo; it's the first meeting we have.

Andy McMahon [01:04:50]: Whatever, but just pick a definition and exactly that. So how quickly can I get to production? And then all those door metrics I did mention before. Also things, again, back to that value piece. Right? What is the value of this portfolio of stuff? If we get to the end of, like, say, two years from now or whatever, and we say we've not really delivered much ROI, then we've kind of. We've lost sight of that mission. And Roi can be, I think it's important ROI can be defined in some more qualitative ways. Right? Some things are just core enablers you need in an organization. So, like, you know, we need good hr technology, we need lots of stuff.

Andy McMahon [01:05:30]: We need just core it infrastructure, et cetera. They are not necessarily things that drive up sales or cut costs here, et cetera, but they deliver good ROI. It's just a bit more of a qualitative calculation. But yeah, look at metrics like that. I want to look more at, and I've done this before, look at metrics around, you know, adoption key tools and enablers. So if you build out, say, for example, some templates or exemplars for how to do specific things, so templates on how to do monitoring or certain training types of process, be really keen to track how many people are actually consuming and using them, and that justifies the value. And then what you can say is you can kind of bootstrap and say, do you know what, ten teams have used this and another ten are asking for it. That tells me there's something here and it tells you if you should invest more time in that versus, you know, doing this other side project.

Andy McMahon [01:06:28]: Actually only one person will ever use. And it's a bit kind of blue sky and stuff. So I'm very matrix driven and I try to be as much as possible.

Demetrios [01:06:38]: And that brings up a great point, which I completely forgot about earlier and wanted to ask you about; now it's coming back to me. It's about being able to distinguish which use cases are more valuable than others, and recognizing, this model or use case or service is driving millions for the business, so we really need to babysit it, versus, this one is new, we don't have clear numbers around it, it's cool, but we don't necessarily have to give it the same amount of attention. And I think that goes hand in hand with what you were just saying: if you have those metrics about who's using it, what it's doing, how it's working, you can properly organize yourself to make sure the right things are getting the right attention.

Andy McMahon [01:07:39]: Yes, absolutely. And on that whole thing about speaking the language of the business: some of the most valuable people in that conversation are business-embedded professionals, maybe not even technical, or the data scientists working most closely with the business. The people who are closest to the actual value proposition always write the best business cases, and as a technologist you help them and say what's possible and what's not. So if someone says, I want to catch 100% of fraudulent transactions, you can come in and go, that's probably not possible, but how valuable would it be if we caught 50% or 60% or 80%? And you can start building a case around it. Then, to your point, you can very quickly say, actually, this is a very good return, or this is a critical enabler. Sometimes, though, you have to take the riskier bets and just say, the downside is low if this fails, but the upside is potentially huge if we can get it right. So we should try it.

Andy McMahon [01:08:41]: And you just do that. You manage it like a portfolio, an investment portfolio if you like. Sometimes.

Demetrios [01:08:47]: That's great analogy.

Andy McMahon [01:08:48]: Yeah, I think that's the way you do it. You've got your sure bets, you've got your medium risk ones that are a bit more valuable, and then you've got your ones that are blue sky, put it all in black type thing. But the upside is huge and I think it's just really good leadership. Coming back to that point helps you because they help manage that whole risk investment decision. And the more you're in the industry, the more you can smell a good use case, I think you develop a bit of a sense for it. You can go, this is going to run into lots of problems. I've done something like this before and we thought it was good on paper and it was a nightmare for these ten reasons. And then there's other cases you go, I know that sounds difficult, but actually in practice it's three steps and you're done.

Andy McMahon [01:09:35]: Let's just do it. Like, the upside may not even be that great, but it's so easy to do. Let's do it. So there's, you know, there's just, that's, that's where it's good to have good people around to been through the wringer a bit.

Demetrios [01:09:47]: So good, man. Well, Andy, it's been a pleasure talking to you. Dude, this is so much fun and I appreciate you coming back on here. And for anyone out there that has not read your book, if you want to be like the smart folks at Oxford, go and read it just like that.

Andy McMahon [01:10:03]: Thanks for having me. The metro, great.
