Building a Machine Learning Platform

March 27, 2022

This blog is written by John Roberts. He summarizes the MLOps coffee chat session with Orr Shilon at Lemonade.

Machine learning has long centered on building accurate models, which drove the development of frameworks like TensorFlow, PyTorch, and scikit-learn. But the job isn’t done once the model is developed, because the purpose of developing a model is to solve real-world problems.

What do you do after you build your model?

You need a process to deploy the model to production so it can actually solve those problems. But before you can go about putting a model into production, you need a machine learning platform. So what is a machine learning platform? Machine learning platforms are tools used for automating the development and improvement of machine learning models.

Let’s dive into the pillars of a machine learning platform and highlight the tools Lemonade uses, as well as some other options that may fit your use case. Remember, the right tools aren’t necessarily what Lemonade uses, although their choices can serve as inspiration. The right tools are the ones that fit your use case!

Features of the Machine Learning Platform

  1. Allows teams to execute at scale
  2. Makes it easy to update and retrain models
  3. Makes it easy for teams to share models, code, and data
  4. Automates deployment

Five Pillars of a Machine Learning Platform

  1. Feature Management
  2. Workflow Management
  3. Monitoring
  4. Tracking
  5. Model Serving

Feature Management

Features? Are there any differences between features and datasets? Yes!

A dataset is raw data retrieved from storage, while features are the preprocessed result; they are the direct input to the machine learning model.

Feature management is often handled by a feature store. Feature stores are used to create, store, and manage the features used in training machine learning models. Feature engineering is the process of creating a feature, and it varies depending on the task, dataset, and project.
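To make the dataset-versus-feature distinction concrete, here is a minimal feature engineering sketch in pandas. The file, column names, and transformations are hypothetical illustrations, not Lemonade’s pipeline:

```python
import pandas as pd

# Raw dataset: one row per insurance claim, straight from storage.
claims = pd.read_csv("claims.csv", parse_dates=["claim_date", "policy_start"])

# Feature engineering: turn raw columns into model-ready inputs.
features = pd.DataFrame({
    # Numeric feature derived from two raw timestamps.
    "days_since_policy_start": (claims["claim_date"] - claims["policy_start"]).dt.days,
    # Normalized claim amount (z-score) so the scale is model-friendly.
    "claim_amount_z": (claims["amount"] - claims["amount"].mean()) / claims["amount"].std(),
    # Categorical column encoded as integer codes.
    "region_code": claims["region"].astype("category").cat.codes,
})

# `features` (not the raw `claims` table) is what the model trains on.
```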

Fun fact: there were no off-the-shelf feature stores in Lemonade’s early days.

Feature Stores

  1. Feast
  2. Tecton
  3. Iguazio
  4. Hopsworks
  5. Databricks Feature Store
  6. SageMaker Feature Store
  7. Google Cloud Vertex AI Feature Store
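For a taste of what defining features in a feature store looks like, here is a minimal sketch using Feast, the first open-source option above. The entity, fields, and file path are hypothetical, and Feast’s Python API has changed across releases, so treat this as illustrative rather than copy-paste ready:

```python
from datetime import timedelta
from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

# The entity the features are keyed on (hypothetical example).
customer = Entity(name="customer_id", join_keys=["customer_id"])

# Offline source the features are computed from.
source = FileSource(
    path="data/customer_stats.parquet",
    timestamp_field="event_timestamp",
)

# A feature view groups related features behind one definition.
customer_stats = FeatureView(
    name="customer_stats",
    entities=[customer],
    ttl=timedelta(days=1),
    schema=[
        Field(name="avg_claim_amount", dtype=Float32),
        Field(name="num_policies", dtype=Int64),
    ],
    source=source,
)
```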

Workflow Management

Workflow management involves managing the tasks in a machine learning workflow and pipeline. A workflow is a sequence of tasks in the machine learning lifecycle, while a pipeline is the infrastructure used to automate that workflow.

ML requires a sequence of tasks. These tasks usually run in order, but sometimes you need to return to a previous task if certain conditions are not met, as illustrated in the sketch below.
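Here is a minimal sketch of such a workflow as an Airflow DAG (assuming Airflow 2.x), with a common extract–train–evaluate–deploy sequence. The task functions are hypothetical placeholders, not Lemonade’s actual pipeline:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    ...  # placeholder: build features from raw data

def train_model():
    ...  # placeholder: fit the model on the features

def evaluate_model():
    # placeholder: raise if metrics are too low, so the deploy task never
    # runs and Airflow can retry or alert instead
    ...

def deploy_model():
    ...  # placeholder: push the approved model to serving

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    # Sequential dependencies: each task runs only after the previous succeeds.
    extract >> train >> evaluate >> deploy
```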

Tools Lemonade Uses

It is intriguing to see how Lemonade manages their pipeline with a Slack bot. The Slack bot is called Cooper, and it is built with the Rasa framework. Cooper runs commands to automate model training and deployment. For instance, Cooper can start and shut down a SageMaker notebook, start model training, etc. Lemonade combines this Slack bot with Airflow to manage their workflow.
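Cooper itself is not public, but the ChatOps idea is easy to illustrate. Here is a minimal sketch of a slash-command bot using the slack_bolt library (an assumption; Lemonade uses Rasa), with a hypothetical /start-notebook command:

```python
from slack_bolt import App

# Tokens are placeholders; in practice they come from your Slack app config.
app = App(token="xoxb-...", signing_secret="...")

@app.command("/start-notebook")
def start_notebook(ack, respond, command):
    ack()  # Slack requires slash commands to be acknowledged quickly
    instance = command["text"]  # e.g. "/start-notebook research-nb-1"
    # A real bot would call the cloud API here, e.g. via boto3:
    # boto3.client("sagemaker").start_notebook_instance(NotebookInstanceName=instance)
    respond(f"Starting notebook instance `{instance}`...")

if __name__ == "__main__":
    app.start(port=3000)
```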

Other Accessible Workflow Management Tools

  1. KubeFlow
  2. Airflow
  3. Kedro
  4. Luigi
  5. Dagster

The list above can be overwhelming, and it keeps growing. This article outlines the differences between these tools and can guide you to the best choice for your use case.

Monitoring

Monitoring means observing and checking the progress or quality of something over time. And that begins with defining what you want to monitor.

Artifacts to monitor in machine learning:

  1. Code
  2. Data
  3. Model

Code Monitoring

Code versioning tools like GitHub, GitLab, and Bitbucket are ubiquitous, but you also need to track which version of the code generated a given result.

Data Monitoring

Data is one of the most important artifacts in machine learning. Once your data goes wrong, everything else goes wrong. In data monitoring, you watch for:

  1. Data drift – a change in the input data, i.e., variation over time between the data the model was trained on and the data it sees in production. This could be the result of a change in the data distribution or of new features that affect the data. Data drift causes decreased accuracy. You can detect it with a Kolmogorov-Smirnov (KS) test, the population stability index, adaptive windowing, or a model-based approach; see the sketch after this list.
  2. Concept drift – this focuses on the statistical properties of the target variable. Machine learning models map independent variables to the target variable; once there is a significant change in this mapping, it affects the accuracy of the model.
  3. Imbalanced data – this is caused by skewed class proportions in your data. For example, in a fault detection dataset, most examples belong to the negative class.
  4. Bias – this is caused by how the data was acquired. Imagine wedding dress data acquired in the US: it is biased toward US weddings and should not be used to train a model intended for use all over the world, because it will not recognize traditional wedding wear from other countries.
  5. Invalid data – check for data types and NaN values.
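As a minimal sketch of drift detection, here is the two-sample Kolmogorov-Smirnov test mentioned above, using scipy. The feature arrays are synthetic stand-ins for your training and production data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for one feature's values at training time vs. in production.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted mean

# The KS test compares the two empirical distributions.
statistic, p_value = ks_2samp(training_feature, production_feature)

# A small p-value means the distributions differ: likely data drift.
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
```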

For an in-depth presentation on ML monitoring, you can learn more about the four types of drift in this MLOps Meetup talk by Amy Holder.

Tools Lemonade Uses

This involves monitoring the model hyperparameters, model architecture, and model performance. Lemonade uses Aporia for monitoring. They also recommend that the monitoring dashboard be built by data scientists, to avoid overwhelming alerts. You can find a comprehensive look at ML monitoring tools in this space on our monitoring comparison page.

Other Accessible Monitoring Tools

  1. Fiddler
  2. Superwise
  3. Arize
  4. Aporia
  5. Neptune
  6. Grafana
  7. WhyLabs
  8. Evidently AI

Tracking

Tracking is sometimes confused with monitoring. Tracking involves logging the metadata of experiments. Machine learning involves running many experiments, and tracking the results and metadata of each one tells you which experiment to deploy to production. Just as in monitoring, you also need to track the code, model, and data versions used for an experiment.

Tools Lemonade Uses

At Lemonade, MLflow is the go-to tracking tool for machine learning experiments.
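Here is a minimal sketch of what experiment tracking with MLflow looks like; the experiment name, parameters, and metric values are hypothetical:

```python
import mlflow

# Group runs under a named experiment (created if it does not exist).
mlflow.set_experiment("claims-classifier")

with mlflow.start_run():
    # Log the configuration that produced this run...
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    mlflow.set_tag("git_commit", "abc1234")  # tie the run to a code version

    # ...train your model here...

    # ...then log the resulting metrics so runs can be compared later.
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_metric("f1", 0.90)
```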

Other Accessible Tracking Tools

  1. Weights & Biases
  2. Neptune
  3. Comet
  4. MLflow
  5. ClearML

Model Serving

Model serving is how you make your model available to be used by others. A model can be served as an API endpoint or packaged as a library.
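As a minimal sketch of serving a model behind an API, here is a FastAPI endpoint (an assumption for illustration; any of the tools below would also work). The model file and feature names are hypothetical:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model

class PredictRequest(BaseModel):
    days_since_policy_start: float
    claim_amount_z: float

@app.post("/predict")
def predict(req: PredictRequest):
    # Assemble the feature vector in the order the model was trained on.
    features = [[req.days_since_policy_start, req.claim_amount_z]]
    prediction = model.predict(features)[0]
    return {"prediction": int(prediction)}

# Run with: uvicorn serve:app --port 8000
```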

Model serving tools

  1. SageMaker
  2. Cortex Labs
  3. BentoML
  4. TorchServe
  5. TensorFlow Serving

Challenges with Creating a Platform

Because there are numerous tools for each phase of machine learning, deciding which one to use is a challenge. We’ve gone through the MLOps pillars and highlighted the tools Lemonade uses for each, in addition to other accessible tools. But which tool is ideal for you is unique to your use case and influenced by the rapid evolution of machine learning.

To select the right tool, answer the following questions:

– Which of your processes would you like to automate?

– What would you gain in terms of business value if you automate this process?

– What infrastructure is available?

– What are the users’ skill sets (data scientists, business analysts, etc.)?

– Can you build it yourself, use open-source, or purchase a tool?

– What kind of model are you going to use?

– What platform (web, mobile, embedded system) are you deploying to?

Find a list of tools for different tasks in machine learning here.
