MLOps Community

Databricks Model Serving V2

Posted Sep 30, 2022 | Views 1.1K
# Databricks
# Deployment
# Real-time ML Models
SPEAKERS
Rafael Pierre
Solutions Architect @ Databricks

Rafael has worked for 15 years in data-intensive fields within finance, in multiple roles: software engineering, product management, data engineering, data science, and machine learning engineering.

At Databricks, Rafael has fun bringing all these topics together as a Solutions Architect, helping customers become increasingly data-driven.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment, Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps Community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that means analyzing the best paths forward, overcoming obstacles, or building Lego houses with his daughter.

Ryan Russon
Manager, MLOps and Data Science @ Maven Wave Partners

From serving as an officer in the US Navy to consulting for some of America's largest corporations, Ryan has found his passion in enabling Data Science workloads for companies and teams.

Having spent years as a data scientist, Ryan understands the types of challenges that DS teams face in scaling, tracking, and efficiently running their workloads.

SUMMARY

From our experience helping customers in the Data and AI field, we have learned that the most challenging part of Machine Learning is deployment. Putting models into production is complex, requiring additional infrastructure as well as specialized people to maintain it. This is especially true for real-time REST APIs that serve ML models.

With Databricks Model Serving V2, we introduce Serverless REST endpoints to the platform. This allows teams to easily deploy their ML models on a production-grade platform with a few mouse clicks (or lines of code 😀).
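As an illustration of the "few lines of code" path, the sketch below builds the request body for Databricks' Serving Endpoints REST API (`POST /api/2.0/serving-endpoints`). The endpoint name, model name, and version here are hypothetical, and the actual request would need a workspace URL and access token.

```python
import json

def serving_endpoint_payload(endpoint_name, model_name, model_version):
    """Build the JSON body for POST /api/2.0/serving-endpoints."""
    return {
        "name": endpoint_name,
        "config": {
            "served_models": [
                {
                    "model_name": model_name,        # registered MLflow model
                    "model_version": model_version,  # version in the Model Registry
                    "workload_size": "Small",        # serverless compute size
                    "scale_to_zero_enabled": True,   # scale down when the endpoint is idle
                }
            ]
        },
    }

# Illustrative names only.
payload = serving_endpoint_payload("churn-endpoint", "churn_model", "1")
print(json.dumps(payload, indent=2))

# Sending it would look roughly like:
# requests.post(f"{host}/api/2.0/serving-endpoints",
#               headers={"Authorization": f"Bearer {token}"},
#               json=payload)
```

Scale-to-zero is what makes the endpoint "serverless" from a cost perspective: compute is released when no traffic arrives.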

