
MLOps Community Meetup in NYC @ Spotify

Posted Nov 01, 2022 | Views 723
# Ray
# On Device
# Model Serving at Scale
SPEAKERS
Divita Vohra
Senior Product Manager @ Spotify

Divita is an AI/ML Product Manager at Spotify, focused on defining the next generation of ML infrastructure and scaling ethical ML development practices across the company. She holds a BS in Computer Engineering from Virginia Tech and an MS in Computer Science from Georgia Tech.

Olga Nikonova
Senior ML Infra Engineer @ Spotify

Olga Nikonova is a Senior ML Infra Engineer at Spotify, where she has worked for close to two years. Prior to Spotify, she worked at Instagram and Microsoft. She holds a degree in computer science and resides in New York.

SUMMARY

Accelerating ML Research and Prototyping with Ray

In early 2022, Spotify started evaluating Ray on Kubernetes as a distributed compute platform to enable ML research and experimentation for ML workflows. Ray and its ecosystem give researchers and data scientists at Spotify a better model development experience, instant access to distributed computing, and an expressive programming interface. These features greatly complement our existing production ML workflow.

In this talk, we share the story of how Ray started at Spotify, our long-term goals for Ray, and how Ray delivered tangible business impact by accelerating an ML research use case for improving podcast recommendations.
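
For a flavor of the "expressive programming interface" mentioned above, here is a minimal sketch of Ray's standard remote-task API. This is not Spotify's actual code; the function name and workload are hypothetical placeholders.

```python
import ray

# Connect to a local or existing Ray cluster (e.g., Ray on Kubernetes).
ray.init()

# Decorating a plain Python function turns it into a remote task
# that Ray can schedule anywhere on the cluster.
@ray.remote
def score_candidate(candidate_id: int) -> float:
    # Placeholder for real feature lookup and model inference.
    return candidate_id * 0.1

# Launch tasks in parallel; each call returns a future (ObjectRef).
futures = [score_candidate.remote(i) for i in range(100)]

# Block until all results are available.
scores = ray.get(futures)
print(len(scores))
```

The same functions run unchanged on a laptop or a multi-node cluster, which is what makes the interface attractive for research and prototyping.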

Supporting Model Serving at Scale: From Backend to On Device

Our ML serving team recently expanded its scope from backend infrastructure to on-device models. This talk covers some early observations from that project and the new challenges that came with it.

