You know the iceberg analogy, where the larger portion hides beneath the surface? Most of us in MLOps focus on what's visible: the models we need to deploy and run in production. But if we ignore resource management as our AI/ML initiatives grow, we'll start taking on water, in the form of researchers fighting over compute, time-consuming manual workload rescheduling, and spiraling inference costs.
In this talk, the experts at run:ai explain the role resource management plays in MLOps, what to strive for, and how to get buy-in from IT.