MLOps Community

Declarative MLOps - Streamlining Model Serving on Kubernetes

Posted Apr 18, 2023 | Views 693
# Declarative MLOps
# Streamlining Model Serving
# Kubernetes
Rahul Parundekar
Founder @ A.I. Hero, Inc.

Rahul has 13+ years of experience building AI solutions and leading teams. He is passionate about building Artificial Intelligence (A.I.) solutions that improve the human experience. He is currently the founder of A.I. Hero, a platform that helps you train ML models and improve data quality declaratively. As part of his work, he also helps companies bring LLMs into production with an end-to-end LLMOps stack on top of Kubernetes that keeps fine-tuning, data annotation, chatbot deployment, and other LLM operations in your own VPC.

SUMMARY

Data scientists prefer Jupyter Notebooks for experimenting with and training ML models. Serving these models in production, however, benefits from a more streamlined approach that guarantees repeatability, scalability, and high velocity. Kubernetes provides such an environment. While third-party model-serving solutions make this easier, this talk demystifies how native K8s resources can be used to deploy models, along with best practices for containerizing your own model and CI/CD using GitOps.
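As a rough illustration of the declarative approach the talk describes, a containerized model server can be deployed with plain, native Kubernetes resources. This is a minimal sketch, not material from the talk itself; the names, image, and port below are placeholders.

```yaml
# Hypothetical example: deploy a containerized model server with a
# native Deployment and expose it inside the cluster with a Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment-model
spec:
  replicas: 2                      # scale out for availability
  selector:
    matchLabels:
      app: sentiment-model
  template:
    metadata:
      labels:
        app: sentiment-model
    spec:
      containers:
        - name: model-server
          # placeholder image containing the model and an HTTP server
          image: registry.example.com/sentiment-model:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "500m"
              memory: 1Gi
            limits:
              cpu: "1"
              memory: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: sentiment-model
spec:
  selector:
    app: sentiment-model
  ports:
    - port: 80
      targetPort: 8080
```

Because both objects are declarative manifests, they can live in Git and be applied (or reconciled by a GitOps controller) with `kubectl apply -f`, giving the repeatable deployments the summary mentions.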

