MLOps Community

Optimising Routing and Caching ML Models on Serverless GPU Infra

Posted Jun 02, 2023 | Views 398
# Serverless GPU
# Radish
# Mystic.ai

SPEAKER

Paul Hetherington
CEO @ Mystic.ai

Paul is the CEO of Mystic.ai, which automates the deployment of AI models in production environments. He has a research background in analog computing for ML models at Bath University. With Mystic he was part of the W21 batch at Y Combinator, and he leads the company's technical development.


SUMMARY

Paul discusses Mystic.ai's transition from a hardware to a software company and the importance of MLOps in machine learning deployments. He introduces "Radish," a research project addressing deployment challenges that uses Redis for communication and WebSockets for fast access to GPU processing. The system allows multiple environments to be deployed on a single GPU using libraries such as ZeroMQ. Paul demonstrates the deployment process and highlights the ease of use, scalability, and cost optimization offered by Mystic.ai's infrastructure.
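The caching side of the talk's theme, keeping several models resident on a shared GPU and evicting the least-recently-used one when memory runs out, can be sketched with a small LRU cache. This is a minimal illustrative sketch, not Mystic.ai's implementation; the `ModelCache` class, its `loader` callable, and the string model IDs are all assumptions for the example.

```python
from collections import OrderedDict


class ModelCache:
    """Keep at most `capacity` models 'loaded'; evict the least-recently-used.

    In a real serverless GPU router, `loader` would load model weights onto
    the GPU and eviction would free that memory. Here it is just a callable.
    """

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader          # callable: model_id -> model object
        self._cache = OrderedDict()   # insertion order tracks recency of use

    def get(self, model_id):
        if model_id in self._cache:
            self._cache.move_to_end(model_id)    # mark as most recently used
        else:
            if len(self._cache) >= self.capacity:
                self._cache.popitem(last=False)  # evict LRU model
            self._cache[model_id] = self.loader(model_id)
        return self._cache[model_id]
```

A router built on a cache like this only pays the model-load cost on a miss; repeated requests for a hot model hit the cache, which is where the cost optimization mentioned above comes from.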

