What is MLOps, and how can it help me work from home? MLOps sits at the intersection of three disciplines: software engineering, DevOps, and machine learning. It covers the entire end-to-end lifecycle of taking models from the lab to live production, where they can start delivering value. What do software engineers and DevOps practitioners need to learn about machine learning to integrate it into their development and deployment pipelines? What do data scientists and ML engineers need to learn about DevOps, model deployment, and monitoring so they can deploy their work effectively without racking up tonnes of technical debt? And now that working from home is fast becoming the new normal, how can MLOps help my team stay efficient when asynchronous collaboration is needed, something our software engineering and DevOps friends have already mastered? MLOps is a complex discipline because it involves many more moving parts than regular software DevOps. In this inaugural MLOps.community meetup we'll explore and navigate this new space together and give you a guide to avoiding the most common pitfalls and challenges of getting AI into production and collaborating effectively with your team, even when you're distributed.

In this talk, we deep dive into building ML models into container images so that you can run them in production for inference. There are several open questions around doing this: Who should build the images, and when? What should they contain? How should data science and ML teams interact with DevOps teams? If you build images specific to one platform, will you get locked in? If you try to build your containers inside a container, what happens, and why is this a security challenge? Based on Luke's experience setting up ML container builds for many clients, he'll propose a set of best practices for secure, multi-tenant image builds that avoid lock-in, and he'll also share some tooling (chassis.ml) and a standard (openmodel.ml) that he proposes for doing this.
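To make "building a model into a container image for inference" concrete, here is a minimal sketch that is not taken from the talk and does not use chassis.ml or openmodel.ml: it assumes a scikit-learn model serialized with joblib and served over HTTP with Flask, all hypothetical choices, with the corresponding Dockerfile steps shown as comments.

```python
# serve.py -- minimal inference server baked into a container image.
# Hypothetical stack: scikit-learn model serialized with joblib, served via Flask.
#
# A matching Dockerfile (sketch) might look like:
#   FROM python:3.10-slim
#   COPY requirements.txt model.joblib serve.py /app/
#   RUN pip install -r /app/requirements.txt
#   CMD ["python", "/app/serve.py"]
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the serialized model once at container start-up, not per request.
model = joblib.load("/app/model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"instances": [[1.0, 2.0], [3.0, 4.0]]}.
    payload = request.get_json(force=True)
    predictions = model.predict(payload["instances"])
    return jsonify({"predictions": predictions.tolist()})

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the server is reachable from outside the container.
    app.run(host="0.0.0.0", port=8080)
```

In this kind of setup, the question of "who builds the image and when" usually comes down to whether the data science team owns the Dockerfile or hands a packaged model artifact to a DevOps-owned build pipeline; tools like chassis.ml aim to automate that packaging step.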