
MLOps Critiques

Posted May 26, 2022 | Views 785
# Data Platform
# AI MLFlow
# DataOps
# Xccelerated.io
# Xccelerated
SPEAKERS
Matthijs Brouns
CTO @ Xccelerated

Matthijs is a Machine Learning Engineer, active in Amsterdam, The Netherlands. His current work involves training MLEs at Xccelerated.io. This means Matthijs divides his time between building new training materials and exercises, giving live trainings, and acting as a sparring partner for the Xccelerators at their partner firms, as well as doing some consulting work on the side.

Matthijs has spent a fair amount of time contributing to the open scientific computing ecosystem through various means. He maintains open source packages (scikit-lego, seers) and co-chairs the PyData Amsterdam conference and meetup.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

David Aponte
Senior Research SDE, Applied Sciences Group @ Microsoft

David is one of the organizers of the MLOps Community. He is an engineer, teacher, and lifelong student. He loves to build solutions to tough problems and share his learnings with others. He works out of NYC and loves to hike and box for fun. He enjoys meeting new people so feel free to reach out to him!

SUMMARY

MLOps is too tool-driven. Don't let FOMO drive you to pick the latest feature/model/evaluation store; pay closer attention to what you actually need in order to release more safely and reliably.

TRANSCRIPT

Quotes

“Most of the time, people don’t know that much tooling coming in.”

“If we have a cool idea for something that might improve the model, what will hold us back from getting that? For me, that is probably the most important thing about MLOps as a movement or as a culture. How do we get rid of barriers to doing that? I think a very large barrier to releasing very often is not being able to do it safely.”

“You see some tooling here and there, but I never really found it good enough; it never offered me the flexibility that I want, that I need to do it in the way that I would want to, so you often end up building your own.”

“One feature that I always want out of these ML inference tools is something that a lot of web deployment tools will just never support: the auto-batching of requests coming in. That should almost be a default in every ML inference tool. I would generally want that more on the load balancer level than on the application level to make sure that you don’t have unneeded latency.”
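
The auto-batching point in this quote can be made concrete with a minimal sketch: incoming requests are queued and flushed to the model in small batches, bounded by a batch-size limit and a wait deadline so latency stays predictable. The model.predict(batch) call and the limits below are illustrative assumptions, not the API of any particular serving tool.

    # Minimal request micro-batching sketch (illustrative, not tied to a specific serving tool).
    import asyncio

    MAX_BATCH = 32      # flush when this many requests are queued
    MAX_WAIT_MS = 10    # or when the oldest queued request has waited this long

    queue: asyncio.Queue = asyncio.Queue()

    async def handle_request(features):
        """Called once per incoming request; resolves with that request's prediction."""
        fut = asyncio.get_running_loop().create_future()
        await queue.put((features, fut))
        return await fut

    async def batcher(model):
        """Background task: drain the queue into batches and run one predict call per batch."""
        while True:
            items = [await queue.get()]
            deadline = asyncio.get_running_loop().time() + MAX_WAIT_MS / 1000
            while len(items) < MAX_BATCH:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:
                    items.append(await asyncio.wait_for(queue.get(), timeout))
                except asyncio.TimeoutError:
                    break
            predictions = model.predict([features for features, _ in items])  # one batched call
            for (_, fut), pred in zip(items, predictions):
                fut.set_result(pred)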

“We do have the tendency to reinvent some of these infrastructure components for specifically ML which is a bit of a shame.”

“It’s interesting to see these cyclical things in terms of how we tend to solve problems.”

“Tools offer just enough to get you hooked. That’s the issue and then migrating out of them is a nightmare.”

“Let me profess that you don’t want to build everything yourself for sure but in general, when I evaluate a tool, I do evaluate how easy it is to get out of it again.”

“In general, I don’t like frameworks, I like libraries. Frameworks tie you into a lot of things and moving out for just a part of it can be quite difficult whereas if you have a bunch of libraries that don’t overlap too much, you can generally just rip one part out and bring one part back in.”

“Nowadays you see bias or model fairness tooling that also handles your deployments, and that feels wrong.”

“Monitoring is also one of these areas of this domain that really needs some fleshing out I think.”

“We don’t have a lot of standardization with our clients, unfortunately, because nothing is ever greenfield.”

“The interfaces between different data jobs, or different teams even, are very often not really clearly defined, and there is definitely not a deprecation path set up.”

“[Teams] consist of data scientists who just don’t necessarily have experience with infrastructure or with running things in production anyway. It might be that things have to be rewritten a lot, which is always a horrendous thing that needs to be changed as soon as possible, I think. That might be the biggest reason for a lot of larger companies at least.”

“Automate as much as possible. Everything that’s automated is safe for handover because it’s written down in code, and that code gets run very often. If code gets run very often, you never really end up in a state where suddenly it’s completely fubar.”

“You don’t always end up in a situation where you can validate your model automatically a hundred percent, and you shouldn’t even want that most of the time, especially when you’re starting up.”

“By automating parts of the ML workflow the impact that you can have on people’s working life is pretty great.”

Blog

MLOps Critiques Recap

MLOps Community Coffee Session #100 Takeaways: MLOps Critiques

“ML is only such a small part of the picture and there is so much software around it… And for some reason, all the MLOps monitoring tools forget that. They forget that this software stack around it exists and also needs to be monitored. And if there is something I don’t want, it is monitoring in different tools. That seems horrendous!” – Matthijs Brouns https://www.linkedin.com/in/mbrouns/

TL;DR

Working in the ML consultancy and training field gave Matthijs Brouns first-hand experience with a multitude of use cases as well as a plethora of MLOps tools. He sat with us to share his thoughts on productionizing ML in a low-risk manner, building your MLOps stack without reinventing the wheel, and keeping vendor lock-in away for good. Ultimately, nobody wants a million moving parts or tools, so simplicity and clear value are key for a successful ML pipeline.

https://youtu.be/SS2_jQN3sG0

Matthijs’ Day-to-Day

Matthijs Brouns is a Machine Learning Engineer and currently the CTO at http://xccelerated.io/, a training/consulting firm. Matthijs divides his time between building new training materials and exercises for MLEs, giving live training sessions, and acting as a sparring partner for the Xccelerators at their partner firms, as well as doing some consulting work on the side. [06:35]

Matthijs has spent a fair amount of time contributing to the open scientific computing ecosystem through various means. He maintains open source packages (scikit-lego, seers) and co-chairs the PyData Amsterdam conference and meet-up.

What’s Hot Right Now?

When it comes to skills that are hot on the market, some of the top picks are neural-net-based work at medical companies, Kafka, Spark, and deployment and orchestration tooling. [08:24]

“How do we get rid of barriers to getting things into production today or tomorrow?”

Low-Risk Releases

One of the biggest barriers Matthijs sees there is not being able to do this safely and reliably. There is some tooling here and there, but he has not found anything good enough that offers enough flexibility, so in most cases, you end up building your own tool. [13:20] The market is wide open for creating such tools.

On Picking MLOps Tooling

Matthijs is one of the many ML practitioners who agree that you don't want to build everything yourself when it comes to tooling, but he is also not a fan of vendor lock-in. When he evaluates a tool, one of the core things he considers is whether it is a core component of his product and how easy it would be to migrate away in case it no longer serves him down the road. This is one of the reasons he opts for libraries instead of frameworks.
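
One way to keep that exit path open, sketched here under assumptions: hide the tracking tool behind a small interface of your own so the rest of the codebase never imports it directly. MLflow (one of the tags on this episode) is used only as an example backend through its standard start_run/log_param/log_metric calls; the ExperimentTracker protocol and the StdoutTracker fallback are illustrative names, not part of any library.

    # Thin adapter around a tracking backend so it stays easy to swap out (illustrative sketch).
    from typing import Protocol

    class ExperimentTracker(Protocol):
        def log_param(self, key: str, value: str) -> None: ...
        def log_metric(self, key: str, value: float) -> None: ...

    class MlflowTracker:
        """Backend using MLflow's standard logging calls."""
        def __init__(self, run_name: str):
            import mlflow
            self._mlflow = mlflow
            self._run = mlflow.start_run(run_name=run_name)

        def log_param(self, key, value):
            self._mlflow.log_param(key, value)

        def log_metric(self, key, value):
            self._mlflow.log_metric(key, value)

    class StdoutTracker:
        """Drop-in replacement, e.g. for local runs or after migrating away from the tool."""
        def log_param(self, key, value):
            print(f"param {key}={value}")

        def log_metric(self, key, value):
            print(f"metric {key}={value}")

    def train(tracker: ExperimentTracker):
        # Training code only ever sees the small interface, never the vendor library.
        tracker.log_param("model", "gradient_boosting")
        tracker.log_metric("auc", 0.87)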

Most MLOps tools are not very mature and become bloatware very fast because they try to solve too many problems at once. This makes them a headache to extend and maintain, and makes it hard to see their value clearly.

For example, when it comes to the monitoring space, it seems like most providers forget that ML is only a small part of the product and that the software stack around it exists and also needs to be monitored. Monitoring each part of your product in a different tool is a concept no engineer would find pleasing. [24:02]
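
As a rough sketch of what monitoring everything in one place could look like, the plain service metrics and the ML-specific ones can go through the same exporter. The example below uses the prometheus_client library; the metric names and the drift_detector object are illustrative assumptions.

    # One exporter for both service metrics and ML metrics (illustrative sketch).
    from prometheus_client import Counter, Histogram, Gauge, start_http_server

    REQUESTS = Counter("inference_requests_total", "Total inference requests")
    LATENCY = Histogram("inference_latency_seconds", "End-to-end request latency")
    DRIFT = Gauge("feature_drift_score", "Rolling drift score for the input features")

    def serve(model, drift_detector, port: int = 8000):
        start_http_server(port)                         # one /metrics endpoint for everything

        def predict(features):
            REQUESTS.inc()
            with LATENCY.time():                        # plain software metric
                prediction = model.predict([features])[0]
            DRIFT.set(drift_detector.score(features))   # ML-specific metric, same tool
            return prediction

        return predict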

Technology Recommendations For a Chaotic Space

As we all know, the MLOps space is far from offering standardized solutions yet; the only piece Matthijs sees coming close is Kubernetes, which is well adopted amongst his customers. Usually, when Matthijs gives tooling recommendations to his clients, he doesn't think about the highest payoff but rather goes for what will cause the least regret down the line. [30:25]

Big Barriers in Production

Productionizing ML is still a struggle, and even before getting to deployment, you face barriers such as the regular need for manual validation steps: people checking the output, going through the code and making sure it works, and checking in with stakeholders who might be unavailable for long periods. From then on, common challenges are lack of process, no way to roll back if something breaks, and missing communication interfaces between different team members as well as between whole teams. [35:52]
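
As a hedged illustration of closing two of those gaps (manual validation and missing rollback), a release step can gate promotion on an automated comparison against the currently deployed model, so the previous version always remains the fallback. The AUC metric, margin, and registry interface below are illustrative assumptions, not Matthijs' specific setup.

    # Automated promotion gate before release (illustrative sketch).
    from sklearn.metrics import roc_auc_score

    def should_promote(candidate, current, X_holdout, y_holdout, margin=0.01):
        """Promote only if the candidate is at least as good as what is live."""
        candidate_auc = roc_auc_score(y_holdout, candidate.predict_proba(X_holdout)[:, 1])
        current_auc = roc_auc_score(y_holdout, current.predict_proba(X_holdout)[:, 1])
        return candidate_auc >= current_auc - margin

    def release(candidate, current, X_holdout, y_holdout, registry):
        # `registry` is a hypothetical model registry with a push() method.
        if should_promote(candidate, current, X_holdout, y_holdout):
            registry.push(candidate)   # the previous version stays available as the fallback
        else:
            # No promotion: the current model keeps serving, which is the rollback path.
            print("Candidate rejected; keeping the current model in production.")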

Good Automation vs Bad Automation

When developing something for a customer, the time to leave and hand over the project always comes. Matthijs and his team have found a bit of a cheat to leave the client in a good position: some of the people who worked on the project stay with the client permanently.

In general, his advice would be to try to automate the software as much as possible before handing it over. Still, you always need to be mindful that in some cases too much automation might become a pain. Make sure you always sit with the stakeholders and see what they want to know before leaving them. [37:57]

Listen to the episode on Apple Podcasts (https://podcasts.apple.com/gb/podcast/mlops-critiques-matthijs-brouns-mlops-coffee-sessions-100/id1505372978?i=1000564184156), Spotify (https://open.spotify.com/episode/0fflAQWFiDSGLJ69alZtJs), Anchor (https://anchor.fm/mlops/episodes/MLOps-Critiques--Matthijs-Brouns--MLOps-Coffee-Sessions-100-e1iudpu), or Listen Notes (https://www.listennotes.com/podcasts/mlopscommunity/mlops-critiques-matthijs-xm9TEFdMHWl/), or watch the interview on YouTube: https://youtu.be/SS2_jQN3sG0

