MLOps Community

Making the ML Development Process Mature & Sustainable

Posted May 13, 2024 | Views 406
# ML Development
# AI/ML Development
# Netlight.com
SPEAKERS
Viking Björk Friström
Associate Manager @ Netlight

Experienced data scientist with a demonstrated history of working in the market research industry. Skilled in machine learning, statistical modeling, and data analytics. Strong engineering professional with a Master's degree focused on Computational and Applied Mathematics from KTH Royal Institute of Technology.

Albin Sundqvist
ML Engineer @ Netlight Consulting AB

A software engineer who loves playing football and esports.

SUMMARY

You productionize AI/ML development by having a good foundation. This can be provided by a standard repository structure that helps you write good-quality code and follow best practices, but also lets you build automation on top of that structure.

The focus should always remain on delivering value, guided by strategic decisions on when and how to implement these practices to best support the project's goals and context.

TRANSCRIPT

Viking Björk Friström [00:00:02]: My name is Viking Björk Friström. I'm a consultant here at Netlight and I work with everything related to data. So I've been working a lot with data science, MLOps, and also with quite a bit of data engineering. And with me today I have my colleague, Albin.

Albin Sundqvist [00:00:17]: Yes, hello, can you hear me? I'm Albin Sundqvist. I've been here at Netlight for five and a half years. I started my journey as a full-stack developer, but I've gradually moved towards AI and data. So, yeah, happy to be here.

Viking Björk Friström [00:00:35]: Yeah. And we're here today to talk a bit about the ML development process and how to make it more mature and sustainable. As consultants, we get to be part of a lot of first ML journeys. That journey might look different in each company, but there are some things that stand out and repeat themselves every time, and that is the first workflow you work through. You start by identifying a business case where you want to use ML in your organization. You move on to building your pipelines, where you get your data into the right place. You start investigating your data, looking for different kinds of insights. You build a model, and then finally you come to the point where you start thinking about MLOps and want to deploy your model.

Viking Björk Friström [00:01:28]: These first four steps might be a bit more iterative; it might not be the straight line I'm illustrating here. But I want to illustrate it as a line, because deploying the model has to come at the end: we have to have a model before we can actually start deploying it. What usually happens is that when you get to this point, you put on your MLOps hat and start thinking, okay, we have a model, we need to deploy it, how do we do that? And what happens a lot of the time is that you don't put on this hat until the very end of the process. What I want to argue today is that you need to put on your MLOps hat throughout the entire process, to make the process much smoother and make things work better for your organization. So why should we put on our MLOps hat as soon as possible in the process of deploying our first ML model? Let's take a step back and think about the goal of any machine learning project. The first one that comes to mind is that we have some sort of cost function, some target function, that we want to minimize. We want our predictions to be perfect. We want our forecasts to be as good as possible.

Viking Björk Friström [00:02:41]: And this is the first thing that always comes to mind. But we also have a second goal that is equally important, and that is to minimize time to market. Now, why is this important? Well, if we get stuck in that first step, after a while someone from management is going to come to us, start talking about fiduciary responsibility to shareholders, and then shut the product down. Because before you deploy your first model, your team is only costing the company money; you're not bringing any value. So at the same time as you minimize your cost function, you also need to minimize your time to market. And the problem is that these two sometimes have an inverse correlation: if you spend more time building a model, it can take you longer to get to market. And a lot of the time, your model also becomes more complex.

Viking Björk Friström [00:03:32]: And that will also increase the time it takes to deploy it and bring it to market. So I'm going to talk a bit about how we can use the MLOps hat to minimize time to market by putting it on early in the journey. First of all, when you start looking at your business case, the first thing you need to do is establish a baseline that your model needs to beat in order to bring business value to your company. The second thing you need to decide is how your users will access the predictions from the model. Will it go directly to the users? Will it be integrated into some other app? This needs to be decided early on so that you can think about it in an MLOps way later down the line. When we start building our pipelines, of course we are thinking about how we bring data in for training the model, but also how we bring data in for inference once it's deployed, and, importantly, how we extract data from the model later for monitoring. These are things you can start thinking about even before your model is deployed, already when you start building your pipelines. When we investigate the data, we can also look for patterns that will affect the way we do MLOps later. As an example, you can start looking at data drift: do we identify this as a model that needs to be retrained very often, or not that often? These are things you can start identifying early on.
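
A minimal sketch of what such an early drift check could look like in Python, assuming tabular data with numeric features; the function name, the two-sample KS test, and the threshold are illustrative assumptions rather than anything prescribed in the talk:

```python
# Hypothetical sketch: compare feature distributions between an older and a
# newer time window to get an early feel for how quickly the data drifts,
# and therefore how often the model may need retraining.
import pandas as pd
from scipy.stats import ks_2samp

def drift_report(reference: pd.DataFrame, recent: pd.DataFrame, threshold: float = 0.05) -> dict:
    """Flag numeric columns whose distribution differs between two time windows."""
    report = {}
    for col in reference.select_dtypes("number").columns:
        stat, p_value = ks_2samp(reference[col].dropna(), recent[col].dropna())
        report[col] = {"ks_stat": stat, "p_value": p_value, "drifted": p_value < threshold}
    return report
```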

Viking Björk Friström [00:04:58]: So that you know, further down the line when you start working with MLOps, whether it's something you need to consider. And then lastly, when it comes to the data science part: your first goal should be to try to beat the baseline, not to build the best model possible from the start. You should also go for simplicity over complexity, and here I'm talking about simplicity and complexity in terms of MLOps. So when you evaluate the models, you evaluate how complex or simple they are not only to build, but also to deploy. We can also use a bit of this MLOps-hat thinking to minimize our cost function. For example, we can start monitoring the model's performance, because when we deploy a model we know that it beats the baseline, but that was on our test data.
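
A minimal sketch of the "beat the baseline first" idea, assuming a scikit-learn-style regression setup; the helper name and the mean-predictor baseline are illustrative assumptions:

```python
# Hypothetical sketch: the candidate model only earns its MLOps cost
# if it beats a trivially simple baseline on held-out data.
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error

def beats_baseline(candidate, X_train, y_train, X_test, y_test) -> bool:
    """True if the fitted candidate has lower test error than a mean predictor."""
    baseline = DummyRegressor(strategy="mean").fit(X_train, y_train)
    baseline_mae = mean_absolute_error(y_test, baseline.predict(X_test))
    candidate_mae = mean_absolute_error(y_test, candidate.predict(X_test))
    return candidate_mae < baseline_mae
```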

Viking Björk Friström [00:05:49]: You don't bring any value if you can't beat the baseline in production as well. This is the MLOps version of "it works on my machine." So you need to be able to show that it actually works in real life, and not only on your small test data set, and you can't start doing that until you have actually deployed the model. Another thing is getting early user feedback. You might have certain assumptions about your users and how they will interact with the model, but you won't be sure until you actually deploy it and see how it works in a real-life scenario. So we get to this point: we have built a team, we have deployed our first model, and now we want to move on and make this MLOps thinking process more mature.

Viking Björk Friström [00:06:31]: And therefore I will hand over to my colleague Albin.

Albin Sundqvist [00:06:35]: Yeah, so how do you actually productionize your ML workflows or products? Like Viking said, it's important to know that once you deploy is when you actually deliver value, and once you deploy is also when your model starts to decay. So it's important to find a process where you dare to deploy, and a way to deploy with confidence, which is really the problem productionization is trying to solve. I don't think you should aim way down the line for a super-automated auto-deploy system right away, because it takes too long to reach the market. You want to find a process where you can incrementally increase complexity. All right, so what's the first step? A robust repository structure. Leverage templates, use data science cookiecutters, maybe modify them, or maintain your own template repository that you can fork when starting new projects. It paves the way for automation and helps ensure software engineering principles when you develop your models.
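
As a rough illustration (not something shown in the talk), a template like that is often instantiated with the cookiecutter Python package; the template URL and project name below are assumptions:

```python
# Hypothetical sketch: generate a new project from a template repository so
# every project starts with the same folder structure, test setup, and tooling.
from cookiecutter.main import cookiecutter

cookiecutter(
    "https://github.com/drivendata/cookiecutter-data-science",  # assumed template repo
    no_input=True,
    extra_context={"project_name": "churn-prediction"},         # assumed project name
)
```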

Albin Sundqvist [00:08:21]: For example, it can be a predefined assumption that a test folder exists within your project and that some tests run when you commit your code, or some validation that takes place when you insert data into your feature store. The goal is to get cleaner code within your repository. Instead of each data scientist or developer having to figure out how to set up, build, and run tests and validation, you can leverage this base to provide examples, or standard tests that run in the project, which makes it a lot easier to add to them instead of building from scratch every time. And often, from what I've seen, you get tightly coupled code when you let a repository grow organically. So instead, have a well-defined structure within your repository that helps with this, and let the data scientists iterate on top of it. The same goes for modular coding, where you can again leverage the base structure to provide examples, helping the data scientists know when and how they should refactor code and where to place it within the repository. That increases the amount of testable code, but it also lets you write productionized scripts while still using the same code in your exploratory notebooks via imports, or even move modules outside the repository and use them in multiple projects. Again, this base repository simplifies the integration of tests and validation from the start, so you don't have to do all of it at the very end when you should be deploying the model. So once you have this repository and a nice structure, I think it's important to have a shift in mindset as well.
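
A minimal sketch of what such a shared module plus a "standard test" could look like; the file names, function, and feature are illustrative assumptions:

```python
import pandas as pd

# src/features.py -- the same function is imported from exploratory notebooks
# and from production scripts, so there is only one implementation to test.
def add_rolling_mean(df: pd.DataFrame, column: str, window: int = 7) -> pd.DataFrame:
    """Add a rolling-mean feature without mutating the input frame."""
    out = df.copy()
    out[f"{column}_rolling_{window}"] = out[column].rolling(window, min_periods=1).mean()
    return out

# tests/test_features.py -- a "standard test" the template ships with,
# picked up by pytest in CI on every commit.
def test_rolling_mean_adds_column():
    df = pd.DataFrame({"sales": [1.0, 2.0, 3.0]})
    result = add_rolling_mean(df, "sales", window=2)
    assert "sales_rolling_2" in result.columns
    assert result["sales_rolling_2"].notna().all()
```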

Albin Sundqvist [00:10:52]: Shift-left testing basically means you test things earlier. It's a software development methodology, and I think it applies well to ML projects too, because you need to validate the data: does the data support the model in production, are you able to use the data? But you also need to validate the integration between different systems, and even with end users and how people are going to use the model. Going from top to bottom, the next one is deploying a model versus deploying a pipeline. This is also important in order to have robustness when you deploy. Instead of having data scientists in a laboratory producing a model artifact that you hand over and move around between environments, you should have an assembly line and treat the pipeline as the artifact. So the pipeline that creates the features is what you should deploy, and likewise the pipeline that trains or deploys the model; those are the things you want to treat as the artifact, not the model itself. Then the last one: going from model prediction to product performance.
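
A minimal sketch of treating pipelines as the deployable units, with the feature store and model registry stubbed out as in-memory dictionaries purely for illustration; in practice these would be registered with whatever orchestrator and platform you use:

```python
# Hypothetical sketch: the versioned, deployed artifacts are the pipeline
# definitions below, not a model file passed around by hand.
import pandas as pd
from sklearn.linear_model import LinearRegression

FEATURE_STORE: dict = {}   # stand-in for a real feature store
MODEL_REGISTRY: dict = {}  # stand-in for a real model registry

def feature_pipeline(raw: pd.DataFrame) -> None:
    """Deployable unit #1: turn raw data into features and write them to the store."""
    features = raw.assign(sales_lag_1=raw["sales"].shift(1)).dropna()
    FEATURE_STORE["sales_features"] = features

def training_pipeline() -> None:
    """Deployable unit #2: read features, train a model, and register it."""
    features = FEATURE_STORE["sales_features"]
    X, y = features[["sales_lag_1"]], features["sales"]
    MODEL_REGISTRY["sales_model"] = LinearRegression().fit(X, y)

# Running both pipelines end to end:
feature_pipeline(pd.DataFrame({"sales": [10.0, 12.0, 11.0, 13.0]}))
training_pipeline()
```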

Albin Sundqvist [00:12:29]: I think it's important to zoom out a bit to when the model is actually being used. You need to measure the right things. An example might be a recommendation model where users click on, or like, the recommendations it gives, but it isn't actually driving any sales. Maybe the user added the items to the cart but didn't buy. So you have to consider the model in the context of its product and the users that are using it. What I'm basically trying to say is that you should use software engineering best practices and raise your AI maturity level, which is, yeah, super easy for me to say and very hard to implement. But this well-structured repository is a nice first stepping stone on the journey towards productionizing your workflows, and the product itself. I like to think of it as a mold, this repository mold, the scaffolding that you fill with good-quality code using software engineering best practices, and then you build automation on top of it and move it around on your platform, or build pipelines around the structure.
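
A minimal sketch of that zoom-out, using fabricated example events purely for illustration, showing how a model-level metric (click-through) and a product-level metric (conversion) can tell different stories:

```python
import pandas as pd

# Illustrative, made-up recommendation events.
events = pd.DataFrame({
    "recommendation_id": [1, 2, 3, 4],
    "clicked":   [True, True, True, False],
    "purchased": [False, False, True, False],
})

click_through_rate = events["clicked"].mean()   # users seem to like the recommendations: 0.75
conversion_rate = events["purchased"].mean()    # but how much is actually bought: 0.25
print(f"CTR={click_through_rate:.2f}, conversion={conversion_rate:.2f}")
```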

Albin Sundqvist [00:14:07]: And that really helps you scale things and use the structure in the best way. But you should always provide value. This is important for the model and the product, but also for the MLOps side of things. In one use case, it might be sufficient to have a person logging into a server and executing scripts twice a day, every day. In some cases you want super-complex auto-training, automatic drift detection, and auto-deploy, and sometimes you end up somewhere in the middle. But it's important to find the tipping point: when is it good enough? Then you move on to the next part, and maybe automate the deployment while the validation stays manual. So the focus should always remain on delivering value, guided by strategic decisions on when and how you implement these practices to best support the project's or team's goals and context. So hopefully I've given you something to think about.

Albin Sundqvist [00:15:25]: Thank you. Do we have time for questions?

Q1 [00:15:30]: As systems become more and more complex, I find that the interfaces between modules also require special attention, because you really want to exchange just a model or a component, not the whole damn pipeline; the integration part.

Albin Sundqvist [00:15:48]: Between different parts of your system, or...?

Q1 [00:15:52]: No, but the model exists in an environment, right? Usually, as it evolves, there is not only one model or one component that you deploy. And when the system grows bigger, exchanging the whole system, releasing the whole system, doesn't work. You want to update parts, and that's when you need the clean-cut interfaces. That's something I find very valuable to keep in mind.

Albin Sundqvist [00:16:19]: Yeah. And with this modular coding it's easier to think about these things as, yeah, basically interfaces. It works both for the different pipelines that support the model and between different systems and models. But yeah, maybe you have something to add to that.

Viking Björk Friström [00:16:50]: Yeah, I would say it becomes a very different beast when the product grows to that size. In the beginning it might be easier to deploy whole pipelines, but you might reach a point in a product where deploying a whole new pipeline every time doesn't cut it anymore. And then you might have to start thinking about, yeah, restructuring your project, so that you have this more modular setup where you can exchange the model itself and keep the pipeline.

Viking Björk Friström [00:17:15]: I would say it's hard to have one solution that fits all; you have to adapt it from product to product, depending on the scope and the size of it.

Q1 [00:17:25]: The question is: what about any kind of version management tool or something like that? Are you using that, and for what purposes?

Albin Sundqvist [00:17:37]: I don't know if they could hear you, but the question was whether we're using any version control. So yeah, at my client we use Git with Bitbucket for the code, and then we have a platform with a model registry; we use Hopsworks for that, and also the feature store within Hopsworks.
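
For context, a rough sketch (not the client's actual setup) of how those pieces are typically reached from the Hopsworks Python client; the feature group name and version are assumptions, and exact arguments may vary between versions:

```python
import hopsworks

project = hopsworks.login()        # authenticate against your Hopsworks project
fs = project.get_feature_store()   # versioned feature groups live here
mr = project.get_model_registry()  # versioned, registered models live here

fg = fs.get_feature_group("sales_features", version=1)  # assumed feature group name
training_df = fg.read()                                  # read the features back as a DataFrame
```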

Q2 [00:18:06]: Yes, you mentioned it was important to use a good template. I wonder if you have some templates you can share.

Albin Sundqvist [00:18:15]: Yeah, I like the Azure DevOps data science toolkit, I think it's called. But I do think you should take inspiration from multiple sources. Look at the data science cookiecutter, look at Google and Azure, and pick the raisins out of the cake, so to speak: figure out how you can adapt it to your context to best fit your needs.

Viking Björk Friström [00:18:47]: Okay.

Q2 [00:18:48]: You don't have anything that you start out with at Netlight that's open-sourced?

Albin Sundqvist [00:18:56]: No, I don't think so, actually. But that might be something we should have.

