MLOps Community

I Am Once Again Asking "What is MLOps?"

Posted Apr 22, 2025
# MLOps
# AI
# Model

SPEAKERS

Oleksandr Stasyk
Engineering Manager, ML Platform @ Synthesia

For the majority of my career I have been a "full stack" developer with a leaning towards DevOps and platforms. In the last four years or so, I have worked on ML platforms. I find that applying good software engineering practices is more important than ever in this AI-fueled world.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building Lego houses with his daughter.


SUMMARY

What does it mean to do MLOps now? Everyone is trying to make a killing from AI; everyone wants the freshest technology to show off as part of their product. But what impact does that have on the "journey of the model"? Do we still think about how an idea makes its way to production to make money? How can we get better at it? Maybe the answer lies in the ancient "non-AI" past...


TRANSCRIPT

Oleksandr Stasyk [00:00:00]: Hello, I'm Sash. I'm an engineering manager at Synthesia and I take my coffee out of the machine and then drink it.

Demetrios [00:00:09]: Folks, welcome back to another MLOps Community podcast. I'm your host, Demetrios, and Sash schooled me on so many things in this upcoming conversation. He has truly done MLOps in many different places. Right now he is creating the ML platform at Synthesia, and as you probably know, the requirements and expectations are quite different when you're creating a video generation type of product versus what he did at his last job around chemistry, creating an ML platform for a chemistry-led ML initiative. He broke down the different parts of this and where he has seen initiatives fall flat. He also talked through how much of a cultural thing this is, not just a technical thing, which we harp on here many, many times. This really felt like the man has gone through the trenches, he has some scars, and he is sharing his wisdom with us. Hope you enjoy.

Demetrios [00:01:17]: And if you are listening on one of the podcast player platforms, not on YouTube, I've got a little treat for you, because all these folks that are joining the community are giving me some incredible recommendations when it comes to music, and I get to pass that on to you. This next song is from a band called The Voidz and the name is Leave It in My Dreams. I literally just posted something that was more or less a meme saying, pour one out for the homies who thought they could vibe code their way into a million-dollar product. And it really strikes me as funny how much I'm seeing folks talk about how now you don't need to be a software engineer to become a millionaire with a SaaS product, because you can vibe code your way into a SaaS product and all fun stuff.

Oleksandr Stasyk [00:02:56]: Yeah, sometimes I guess people have to learn the hard way.

Demetrios [00:02:58]: Yeah. But even so, I do respect that there are a lot of great things that you can now do if you understand software engineering.

Oleksandr Stasyk [00:03:08]: I 100% agree. To me, it almost feels like knowing when the vibe coding has to stop, that's the key skill. Vibe coding is all good, but you need to understand when what was generated is unmaintainable. And that's when you kind of say, it's enough of the vibes for now.

Demetrios [00:03:28]: Getting too many vibes. Yep, I gotta stop with the vibes. Bad vibes, bad vibes. Yeah, that's so funny, because for me, I laugh because when you watch videos of folks saying, oh, I'm just going to vibe code it, the prompts are very much coding prompts. And if you know nothing about coding, you can't prompt your way into that type of thing. You're prompting with very coding-specific language. Even if it is in English, it still is.

Oleksandr Stasyk [00:04:03]: Like describe the algorithm.

Demetrios [00:04:05]: Yeah. Or describing the exact things that you want and understanding what you need and the requirements there. It's pretty difficult in my eyes to be able to do that. And the whole reason that I came up with this was because there's the meme going around of someone on Twitter that said something along the lines of: help me, I'm under attack. Since I've been sharing about how I vibe coded my product, now people are DDoSing it or they're finding security breaches, and I don't know what I created and I don't know how to make it more secure.

Oleksandr Stasyk [00:04:46]: Some problems should not be solved with code, I feel. And vibe coding kind of implies that you're writing code: it's text in, text out. And that's what developing software is all about, right? You just write some symbols and the computer interprets it and out comes value and, most importantly, Benjamins. But in reality, sometimes you go, aha, maybe we should solve this with infrastructure, or maybe we should just actually move the file over here and that'll make things a lot simpler, rather than brute forcing it with vibe coding. Sometimes you don't need code at all and just a spreadsheet will do.

Demetrios [00:05:21]: Yeah. Or you have to think, if someone has no idea about security best practices and they forget to ask, they're putting their secrets everywhere, or they aren't even thinking about secrets, and they aren't even thinking about the most basic common things that you learn as you're going through the trials and tribulations. You're going to learn the hard way.

Oleksandr Stasyk [00:05:47]: Yeah, yeah, yeah. It reminds me, once I asked a lawyer, you know, in a very sick way, I'm kind of interested in law. Like, how hard is it for me to do a cheeky little bit of lawyering without any of the background, any of the experience, any of the education? What could possibly go wrong? Right? Just read the book. The law, it's just like a giant document, right? And you go, yeah, there we go.

Demetrios [00:06:17]: And I'm sure the lawyer was very fond of that question.

Oleksandr Stasyk [00:06:20]: Yeah.

Demetrios [00:06:20]: I was like, yeah, give it a shot.

Oleksandr Stasyk [00:06:21]: And he just said, give it a shot, and call me immediately.

Demetrios [00:06:27]: Yeah, why would you ever need to pay a lawyer or a software engineer when you have ChatGPT?

Oleksandr Stasyk [00:06:32]: Absolutely.

Demetrios [00:06:32]: Vibe code or vibe law, vibe defend yourself, all of that. So we are.

Oleksandr Stasyk [00:06:39]: On the other hand, I think the bright side is it lowers the bar for people to actually explore. There is definitely a light side to it. I don't want to just completely dump on vibe coding. I think it gives people the opportunity to actually get into stuff, because the last thing you want to do is be a gatekeeper. Obviously, like, no vibe coding for you because you need to have four years in uni to study what

Oleksandr Stasyk [00:07:03]: O of N squared is and all the complexities. But this way, you know, people can get stuff going quite quickly, even though it might be wrong. I think guidance is always needed, but the fact that people are actually enabled to do this, that's almost a completely different world to live in. Sorry, I just wanted to bring a little bit of light to the situation.

Demetrios [00:07:25]: Yeah, we got to steel man it, because like I said, there is some incredible stuff that's happening with it. I think it just needs to be taken with a pinch of salt, and like you're saying, the ability to explore and hopefully pique your interest, and then you go deeper down the rabbit hole and become more and more familiar with when you should do things, when you shouldn't, how to explain yourself, how to know when this vibe coding is going off the rails.

Oleksandr Stasyk [00:07:56]: Yes. Yeah. The key, knowing when to stop and.

Demetrios [00:07:59]: Ask for help or knowing when to hire someone that can actually do it.

Oleksandr Stasyk [00:08:03]: Yep. Oh, but that's an even bigger situation. Vibe coding aside, knowing when you need to hire someone, that's a huge thing, isn't it?

Demetrios [00:08:13]: Yeah. Vibe hiring. ChatGPT, the test-driven development, doesn't tell you, hey, you should probably hire someone now. That's classic. But dude, I wanted to talk to you, and I know that you've also been interested in chatting, because you have this idea that MLOps is more important now than ever. And it's great to hear that, because with all the vibe coding, because of all the AI hype, you don't hear about how important it is to have these foundations.

Oleksandr Stasyk [00:08:52]: Good old best practices.

Demetrios [00:08:54]: Yeah, the best practices, the idea that whether it's Gen AI that you're putting into production or it's traditional ML, you still need those foundational pieces that are hopefully going to be a part of everyone's project. And so let's just start with that. I know you wrote an awesome blog post like two, three years ago, all about what is MLOps.

Oleksandr Stasyk [00:09:20]: It still holds in my opinion, but I could definitely write another one on a similar topic.

Demetrios [00:09:26]: Exactly. So let's just hear your thoughts on this and then we can dive into it.

Oleksandr Stasyk [00:09:32]: So the blog post specifically was incepted in my head. There was a scenario: I was given an MLOps team. I got told, go do MLOps, and that was happening. And in the meantime there was another team who was kind of on the side, and they were saying, oh, we would also like an MLOps team. And it was definitely not something that we could withhold. So it was definitely time to hire. And the interesting question was, and to be fair, this was me quite new to MLOps, et cetera, et cetera.

Oleksandr Stasyk [00:10:02]: There's more details in the article behind it, but the interesting question was, what is the skill set? What do we put on the job description of, you know, you need to have X, Y, and Z to be MLOps. And boy oh boy, I sat there stunned. I don't have an answer to that, because if you list everything, you're never going to find that person. Like, good luck. So the blog post kind of reflects upon that idea. So, big confession: I myself don't have a PhD. I don't have a researcher background, so I sit on a chair of imposter syndrome.

Oleksandr Stasyk [00:10:38]: And I see all these people with their chevrons and their masters and their professorships. But on the other hand, I can claim that I got some things working in production. I definitely know pods are failing and I got them fixed, and I looked at logs, and I know tracing, and I know Grafana is pretty sweet, so I have that to support me. So I imagined that this skill set of an MLOps engineer kind of takes the idea of a T-shaped skill set, where you have one long, deep specialization in something, and kind of divides it. So one of the legs becomes your foundation, where you came from, be it either a software engineer or a researcher, and it kind of drives you towards the other side. So you start off with a solid software engineering background, and then you get told, make some models work in prod and get them served, and then you kind of dive into that world and immediately get hit with cross-validation and you go, what are these names? I don't understand. But slowly but surely, hopefully you're surrounded by people who can help you learn that this is a model, this is a feature, this is drift, this is what inference does.

Oleksandr Stasyk [00:11:50]: And then you can see, ah, I can see there's a problem here in scaling. Let me just do some microservice magic and make that scalable, right? And there's the vice versa, which I think is almost just as beautiful, because the combination of these works really well together: if you're a researcher and you go, right, I've got a super cool model in my Jupyter notebook. I get my 99% accuracies. This is super sweet, but boy, oh boy, how do I give it to my customers? You can't just walk around with your laptop letting people use it to run inference on your model, right? So you've got to do the deploys, right? And then you go Google, how do I deploy things? And then all of a sudden Docker, Kubernetes, and everything starts falling onto you and you kind of go, whoa.

Oleksandr Stasyk [00:12:32]: But then, what I think the beauty is, is when those two skill sets, two little pi-shaped engineers of MLOps, meet. And that's when the real magic happens, because they can support each other, right? There's also arguably a third little leg between them. No innuendo here, please, this is a family friendly channel. It's the actual domain specialization, right? So for example, yes, you have AI and yes, you have software engineering, but what are you doing? Are you doing chemistry, are you doing computer vision, are you doing finance? That's the little thing that is the genuine domain expertise. So you can argue that you could be super duper knowledgeable about the domain and then grow the other way. You can stretch this analogy left, right, and center, obviously. But the main idea is that when you say MLOps, you kind of go, what does that actually mean? Who is this MLOps unicorn that can do all these things together? But in my opinion, it's not about who; the strength is in the team.

Oleksandr Stasyk [00:13:36]: Because then you have economy of scale; you have the sum being greater than the parts. And that's when the real beauty happens. It's about your 10x environment.

Demetrios [00:13:53]: Yeah. The other role or specialization that I would think about is a data engineer, being able to get the data to where it needs to go. And it's interesting that you're talking about it in this way, because it's very much, in my eyes, something where you would potentially need more of these specialties in the traditional ML world. Do you feel like now, in the Gen AI world, you can have a bit less, or is it just that it's different?

Oleksandr Stasyk [00:14:30]: Well, that's a good question. I do have some opinions about this current Gen AI world. But what I do think the real winner is, is money making. Because money making is a much more important thing now; you want to get that stuff going faster.

Demetrios [00:14:49]: Yeah.

Oleksandr Stasyk [00:14:49]: VCs, please check out my cool video that I posted on LinkedIn. Right, yeah. You have to sacrifice some things, basically. And it's pragmatism at the end of the day. Right.

Oleksandr Stasyk [00:15:02]: So when it comes down to, yo, by the way, did you know data is important? Like, that is a foundation. How to put it: it's essentially tech debt that you have to pay down at some point, and if you don't pay it down, there's going to be bad news bears. And I think you could definitely take steps towards it. But as you just mentioned, data, aside from MLOps. Oh yes. My personal opinion, and it's almost like a mantra that I always hold in the corner of my heart and my head, is that you can't have AI without good data. It's just that sometimes you get lucky enough with data to get going, but other times you have to pay down the debt and get better at it, or get better data.

Demetrios [00:15:52]: Essentially you can MVP something really quickly, and it goes back to what we were talking about with vibe coding. You can MVP something by vibe coding your way into the MVP, having some one-shot or zero-shot prompting, and you're using AI with your product and it's just hitting an API. And then, when you want to go to a deeper level of production, you want to validate first; you don't want to over-invest in getting there. And I imagine you've seen this meme where it was something along the lines of, well, my product has seven concurrent users, so I guess we need to switch to Kubernetes now.
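
(To make that "just hitting an API" MVP concrete, here is a minimal sketch, assuming a hypothetical hosted-model endpoint; the URL, payload shape, and response format are illustrative, not any specific vendor's API.)

```python
import os
import requests

# Hypothetical hosted-model endpoint: the whole MVP is one HTTP call.
API_URL = "https://api.example-llm.test/v1/generate"  # illustrative, not a real vendor

def answer(prompt: str) -> str:
    """Zero-shot prompt against a hosted model: no training, no pipeline, just an API hit."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['MODEL_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": 200},  # payload shape is assumed
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # response shape is assumed too

if __name__ == "__main__":
    print(answer("Summarize this support ticket for a busy engineer: ..."))
```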

Oleksandr Stasyk [00:16:38]: Yes.

Demetrios [00:16:39]: And so you don't want to over-engineer before you really have to.

Oleksandr Stasyk [00:16:45]: Absolutely. Yeah. Yep. Inside of you there's two wolves. One of them wants to do the right thing, the other one wants to do the thing right.

Demetrios [00:16:51]: Yeah, exactly. And the thing that is constantly going through my head is like, okay, well, you MVP it, you get something out as quickly as possible, you test it, and then once you see, right on, there's legs here, let's go for it. Then you're doing what you're talking about, very much like we're building the team with their specific specialties so that we can leverage each piece of the puzzle. And I'm just thinking back to what you told me before we hit record, and that is: MLOps is more important now than it's ever been. And so I want to explore that idea a little bit more. Why now, more than before?

Oleksandr Stasyk [00:17:46]: Vibe coding, vibe organization building, if you will allow me to expand the analogy. And the question is, what is MLOps at the end of the day? Nietzsche, right?

Demetrios [00:18:01]: Yeah.

Oleksandr Stasyk [00:18:02]: One of those questions. I'm sure we get asked that every day, but I'll put it straight here rather than being mysterious about it: it's no different than essentially DevOps, but with extra things, right? So you have to understand that you need to break down the walls. You need to be able to think about your feedback loops in your organization. It's, as you said, validating what you're doing. The more organizational barriers you put in place, the worse a time you're going to have in terms of getting your feedback faster. And that is ten times as true for ML, because now you have a little bit of research going on over here and a little bit of production going on over here.

Oleksandr Stasyk [00:18:46]: And if you let that grow into two separate things, what you'll end up with is two different companies, essentially. One is publishing papers; the other one is saying, yeah, we've got a model that we made originally, but we haven't updated it since. So MLOps to me is essentially trying to undo that. To create tunnels, to create bridges, to create windows, but ultimately to melt away the distinction completely. Even though sometimes it's very, very hard, because researchers got to research and, you know, not everything in production is about ML. But what's most important is your value stream map, from an idea, in terms of "I would like to predict something", to: does it actually give us the money? Right.

Oleksandr Stasyk [00:19:38]: And then the shorter you can make that feedback loop, the better off your business will be, for sure. So it's about being able to iterate on your ideas, being able to validate them, as close to the customer as possible, essentially. And then make those feedback loops tighter and tighter and tighter, bring teams together, melt the concepts of research and engineering together if you need to. Depending on the skill sets you have, you might need to have a product organization step. You might need to rewrite some code, for sure. Refactorings. Maybe the Kubernetes is managed by DevOps, right? But you can try and build your organization in such a way that the process is actually streamlined.

Oleksandr Stasyk [00:20:25]: Right, so this is kind of my spiel in terms of what I think MLOps is about. It's about enabling your business to actually iterate on your machine learning efficiently.

Demetrios [00:20:36]: Yeah, efficiently and quickly. And I think that you point out something that is inherent in these projects, especially the bigger the company gets, which is how you need to break down those walls between the functions. We've said it for so long, but it does feel like it needs repeating: this is just as much cultural, company culture, organizational culture, as it is technical. Even if you have the coolest tech, if each team is siloed, like you said, you're going to end up creating two different companies, where you have one model that's been in production and hasn't been updated for a while, and meanwhile you have another team that is just creating research papers.

Oleksandr Stasyk [00:21:42]: Yep, indeed. And recognizing that this is where the vibe organization building has to be put on pause. You can go, right, okay, now we might think of a reorg, we might think of this, we might think of that. But all of these little decisions come from, you know, sometimes arbitrary creation of a team even, right? Say, like, oh, we need to build a team that does this. And it's little by little, these little actions and these little decisions, that can solidify this difference, essentially.

Demetrios [00:22:14]: Yeah. That was where I wanted to go next: the idea of the organization and the team structure, and what you've seen work well and not work well. When it comes to platforms, platform engineering: do you need an ML platform? Is it a data platform? Is it just a platform engineer? And how does that look organizationally? Are you embedding teams? Are you centralizing these platforms? Are you trying to do the hub and spoke model? Have you tried it all?

Oleksandr Stasyk [00:22:52]: Each of those questions is almost worth a podcast in itself. I'll start with what I think works and what doesn't. I think what's important is the skill sets that you hire. And this is very hard to speak about, what's good and what's not. But imagine, let's say, you only hire backend devs and only hire frontend devs, right? If you just keep going like that, what does your organization look like after, if you extrapolate in time? You have a whole bunch of people who can do this and a whole bunch of people who do that. And then hopefully you have GraphQL to solve the hard problem between them, right? And I see an almost-equivalent now: you don't just have backend and frontend devs, now you have AI researchers and you have this, and you have DevOps and you have that. So I think, and this is where I try myself, I don't like to use words like full stack dev and all that kind of stuff, because I'm crap at frontend, I'll put my hands up, right? I do try, but I get laughed at every time my friends see what I do.

Oleksandr Stasyk [00:24:04]: I'm a classic backend-dev-doing-frontend scenario. But what's more important is that even if you might not have the skills on the other side, what you must at least have is the empathy, the empathy to see things from that side, right? What are the problems that the frontend devs face when you do stuff in the backend? That's the first step to be able to actually work together. Same thing with research versus product engineering, et cetera, et cetera. You need to understand, hey, researchers value their freedom to be able to experiment. And that's very important, because you need to drive that innovation, because if they're not researching, they're not really doing their job. So you need to enable that.

Oleksandr Stasyk [00:24:50]: This is where the platform stuff comes in. Hard to say in terms of what's good and bad. I did ask one of my good friends that I work with, what is MLOps? One of those philosopher questions. And his answer I still hold close to my heart, which is: it could be anything, and every organization is different. And I think there's just so much context in there. Yes, you can kind of vaguely see these things you should do, these things you shouldn't; I've already kind of described them. But when it comes down to actually practical things, what do you actually do? How do you actually organize your teams? Yes.

Oleksandr Stasyk [00:25:31]: Sometimes you need people who look after infra, and maybe you call them ML infra people, right? But what's more important, I think, as a person who is in charge of hiring and building this organization, is that you really, really, really need to know your people well: what their skill sets are, what they like, what their preferences are, and understand this dynamic. Because if you just kind of go, here comes an ML platform team and they will do this, and here comes a research team and you'll do that, you just miss so much nuance. That kind of goes out the window, and it's very easy to get things wrong that way. So you have to be very careful and do it step by step, and definitely don't be afraid of a reorganization just in case, because you might need to change things. Sometimes when you build a Lego house, you do need to go back a few steps, right?

Oleksandr Stasyk [00:26:27]: In terms of what not to do, it's also kind of hard to say, because sometimes things genuinely just work well even though they don't represent best practices.

Demetrios [00:26:39]: Yeah, they shouldn't. But somehow it's working.

Oleksandr Stasyk [00:26:41]: Yeah, yeah, yeah. I'm sorry, I don't have any clean answers here. Something, something, it depends. And please pay my consultancy fee for that.

Demetrios [00:26:54]: Are there specific things that in your experience haven't worked?

Oleksandr Stasyk [00:27:00]: I don't have any glaring examples of, please don't ever do this kind of thing, but I can definitely say, this could be improved. I think it's which skill sets you inject at which stages that can improve interaction. To give another, non-MLOps, example: somebody wants to make a microservice and they need a database, right? Do they make a ticket to DevOps to say, please can I have a Postgres instance, blah, blah, blah? And then DevOps go, oh, that's nice, please join the queue, and we'll see you in 20 days. And you just want a database, right? But imagine you're enabled to write the Terraform to actually create your database. That little nuance there can unlock massive, massive innovations. So when you have two teams trying to do something together, and you have enough skill set to cover all of these gaps, and people with enough initiative to actually follow up on that, then you can definitely have things progress.
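
(A sketch of what that self-service flow might look like: a hypothetical helper that renders a Terraform module call into the team's own repo, so provisioning a database becomes a pull request instead of a ticket. The module source, fields, and pipeline are assumptions for illustration.)

```python
from pathlib import Path

# Hypothetical self-service flow: the product team commits this file and a
# platform-owned pipeline runs `terraform apply` -- a pull request, not a ticket.
POSTGRES_MODULE = """
module "{name}" {{
  source = "git::https://example.internal/platform/terraform-postgres"  # assumed module
  name   = "{name}"
  size   = "{size}"
  team   = "{team}"
}}
"""

def request_database(name: str, team: str, size: str = "small") -> Path:
    """Render a Terraform module call into the team's own infra directory."""
    path = Path("infra") / f"{name}.tf"
    path.parent.mkdir(exist_ok=True)
    path.write_text(POSTGRES_MODULE.format(name=name, size=size, team=team))
    return path

if __name__ == "__main__":
    print(f"Wrote {request_database('orders_db', team='checkout')} -- open a PR, skip the queue.")
```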

Oleksandr Stasyk [00:28:14]: Because sometimes it's not necessarily that the things are complicated to achieve; it's that there'll be a blocker here because somebody just doesn't know how to do something, and the right people are not in the conversation. So at the end of the day, the answer here is probably: have the right people, the right skill sets, and the right conversations.

Demetrios [00:28:31]: Oh, I like that. Because you're looking at things in this dimension of time, and also in the maturity, and as it matures, you want to make sure that the right person is there.

Oleksandr Stasyk [00:28:44]: Yep.

Demetrios [00:28:46]: And I also like the idea of understanding through empathy that you called out. I know that one of my friends, Chad Sanderson, talks about this a lot, and how when software engineering folks make changes to data, it affects so many different things downstream. And it's usually the data engineer that has to deal with that. And there's products being built by the analysts and by the ML engineers or the data scientists that are not working. And so they are putting pressure on the data engineer, like, hey, why did this not work? My model is throwing all kinds of errors now. What is happening?

Oleksandr Stasyk [00:29:36]: My heart goes out to my data friends. They'll probably be watching this.

Demetrios [00:29:39]: Exactly.

Oleksandr Stasyk [00:29:40]: I have all the empathy in the world for them.

Demetrios [00:29:44]: And it comes down to what you're saying: if you can empathize with what is happening so far downstream, just as much as an ML engineer can empathize with the software engineer for why those changes need to continuously happen, then maybe there's some way that you can create a very low-friction process where, when changes are made, they get double checked downstream.

Oleksandr Stasyk [00:30:18]: Yep. And you never know how much time you might save someone else by just doing a little check.

Demetrios [00:30:24]: Oh, my God, that is such a great point. You just think about all those wasted hours because of some change that, oh, yeah, this event. We don't need that anymore, do we? Yeah, let's get it out of here. Meanwhile.

Oleksandr Stasyk [00:30:41]: And sometimes it could be. So that's an example of YAGNI, right? You ain't gonna need it, for people who don't know. Don't repeat yourself is also one of my favorites: somebody trying to do the due diligence and refactoring something and saying, yo, let's just abstract this together. And then downstream, team-wise, downstream, time-wise, downstream, code-wise, for example, all of a sudden you go, oh my God, we're coupled now and we can't do X, Y, and Z.

Oleksandr Stasyk [00:31:17]: Because reason, because reason, because reason. Because this DRYing has happened.

Demetrios [00:31:22]: You just got into a marriage you had no business being in.

Oleksandr Stasyk [00:31:27]: Sorry. Romeo and Juliet, your great, great, great great grandparents were at war. And that's just how things are.

Demetrios [00:31:33]: That's how it works. And you probably don't even realize it until two hours later or five days later, when you recognize: because this, because this, because this. And then you track down the root cause and you go, oh.

Oleksandr Stasyk [00:31:48]: As the victim, unfortunately. As the culprit, you might not be in the company anymore.

Demetrios [00:31:55]: That's the classic scenario: hey, wait a minute, why did this. Oh, I should start looking for a job too.

Oleksandr Stasyk [00:32:04]: Now let's cheer up. Let's talk about something fun.

Demetrios [00:32:09]: Yeah, exactly. Let's talk about the good stuff too. But that is a great point. It's not necessarily all doom and gloom. It is talking about this so that hopefully it becomes more and more commonplace to recognize that it is good practice: if changes are being made upstream, how can you track those changes so that downstream products are not affected by them?

Oleksandr Stasyk [00:32:34]: Absolutely. Sometimes all you need is a cheeky little test in CI to shift things left and make people aware.
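
(A cheeky little test of that kind might look like the sketch below: a consumer-side schema check that runs in CI and fails an upstream PR before a dropped field ever reaches the model. The field names and schema loader here are hypothetical.)

```python
# A consumer-side contract test: the ML team pins the event fields its model
# trains on, and the upstream repo runs this in CI. Names are illustrative.
EXPECTED_FIELDS = {"user_id", "item_id", "timestamp", "price"}

def load_current_schema() -> set[str]:
    """Stand-in for reading the real schema registry or event definition."""
    return {"user_id", "item_id", "timestamp", "price"}

def test_event_schema_still_has_model_features():
    missing = EXPECTED_FIELDS - load_current_schema()
    assert not missing, (
        f"This change drops fields the recommender depends on: {missing}. "
        "Talk to the ML team before merging."
    )
```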

Demetrios [00:32:41]: Yeah, that shift left. That's such a good point. And it's not only traditional ML that is affected by this, because if you have this data and you're creating Gen AI products from it, you can still get into the same scenario. And I think that's why the idea of MLOps being more important now than ever, and hopefully bringing those barriers down, is part of what we're talking about today. So this is fun, actually. You mentioned to me before we hit record that we're almost living in this post-Agile, post-DevOps world. Why do you think that?

Oleksandr Stasyk [00:33:30]: Oh, I feel quite strongly about it, because in a way, as I grew up in my career, it was all about Agile good, DevOps good, and these buzzwords, and we almost religiously, cult-like, followed it: but this is Agile, but this is DevOps, right? And maybe sometimes it worked out, maybe it didn't. Maybe sometimes you got hit in the face with the capital-A Agile and you had to go work out story points. DevOps, not so much. But I think with DevOps it's a little bit more sad than painful, because you kind of go, oh, DevOps is just Terraform and AWS, cool. But what I've experienced is that people see Agile as only ever having existed with a capital A. They've never experienced a good Agile with a small a. And there were some blog posts like "Agile is dead", you know, nice and clickbaity. But in my opinion, the spirit still lives on.

Oleksandr Stasyk [00:34:39]: I actually worked with quite a senior engineer who was very much anti-buzzword across the board. But when I talked to him about Agile, I just said, you know, I don't get this Agile thing. It just means go do the thing. He literally described it as: go do the thing. And I'm like, you know what, that's literally it, right? Don't mess around with this. Definitely don't waterfall, for sure. But go do the thing.

Oleksandr Stasyk [00:35:03]: It just needs a little bit of feedback on the way back, and there you go. That's all Agile is, right? Just your very frequent feedback loops. But when I say post-Agile, I see people saying, first and foremost, that Agile is dead, but also saying, we don't need any meetings, right? Just delete all meetings and have one sync once a week and say, how are things? They're not delivered yet? Get them delivered. Right, that's it, that's your meetings.

Demetrios [00:35:33]: I'm blocked on xyz.

Oleksandr Stasyk [00:35:35]: Yeah, exactly. So in a way it's kind of like this rejection of all these ceremonies. No stand-ups, no sprint plannings. Sprint plannings have very little defense against that, right? But sometimes it's: no to Jira. And I look at it and go, yeah, you know, I can't say I love Jira. Ain't nobody here gonna stand up and say I love Jira. I'm sorry, Atlassian, if you're watching this.

Demetrios [00:36:06]: But Jira, just a little bit of a side drag on that. One time, one of the ML engineers that works at Atlassian came into the community and introduced themselves and said, yeah, I'm working on Jira, and then put in parentheses, sorry everyone. So they know.

Oleksandr Stasyk [00:36:23]: Fair enough. But look at my recent team: they begged to get Jira, because we had nothing, right? How do you track work? How do you represent your process discipline without some sort of kanban-ish tracker, for example? You can do it on a whiteboard if you want. You can literally type it in Slack every day: today I did this, this is in progress, this is to do, and this is completed, right? But then why not just use Jira? It's up to you how you configure it. But it's almost kind of like this big reset: oh no, no, no, no, no.

Oleksandr Stasyk [00:37:02]: We're gonna go super fast because we're just not gonna do all those things that people did, right? So I think it's kind of like this revival. But the heart of it, the heart of Agile, still lives on, because it's not like you waterfall things, which is the main alternative. If you don't waterfall things, that just means you do small things quickly, and it's just the size of these things that varies. But at the end of the day, I think this post-Agileness is mostly about rejecting the capital-A Agile.

Demetrios [00:37:37]: You think it's because there's too much friction that comes with the capital-A Agile?

Oleksandr Stasyk [00:37:42]: Absolutely. So first one was, yes. But it's the survivors of this friction. It's almost like the capital-A Agile apocalypse has happened, and now we see all the mammals that survived the meteorite, and they're all kind of like, it's a whole new world now, right? We're not doing Agile. Right? We're not. Even though, you know, life goes on.

Oleksandr Stasyk [00:38:06]: But I do find it quite interesting, because a lot of concepts are reinvented just under different names. And I remember speaking to some senior engineers back in the day who just said, you know, things go in cycles. Docker containers were a thing back in the day; they were just called something else. Things just keep going. It's very similar with DevOps. DevOps obviously has a different impact, because, yeah, DevOps equals AWS and Terraform. But DevOps could literally just equal reorganization, right? You could just say, it's not quite right, these teams should work like this, right? Boom.

Oleksandr Stasyk [00:38:49]: DevOps, you know, out of nowhere. Or, how about we hire a platform team and not call it DevOps? That's already a step forward, right? This concept of platform teams is just the same thing, just different names. So it's not like concepts disappear. You don't go, oh, DevOps is dead, so we're not doing DevOps. Feedback loops still exist. And once people figure out that, oh, God damn, these feedback loops are good, there you go, there's your DevOps culture of feedback. So I think it's more a case of people having had enough of buzzwords, and new people coming in who are not familiar with this whole education from back in the day. Because, as I said at the start, it was almost like, pray to the Agile gods and pray to the DevOps, and this is the way things are, and that means good.

Oleksandr Stasyk [00:39:46]: At least that's how I felt, right? And now we have this new generation who are like, you know, DevOps is for those boomers. We have new stuff, right? We just get shit done.

Demetrios [00:39:56]: We have vibe coding.

Oleksandr Stasyk [00:39:57]: Yeah, vibe coding and getting shit done, you know. So I feel like I'm getting to the stage of that senior engineer I spoke to so many years ago, who said, I've seen all this before. And slowly but surely I'm like, oh, God damn, I'm starting to get the same feeling. It's obviously slightly different, but I'm kind of like, why? This was a thing already, you know. Sometimes it's just a case of learn from history, if that makes sense.

Demetrios [00:40:33]: Well, have I got news for you on the hype train, man. You want to come into an area where there is more hype and buzzwords than any other area?

Oleksandr Stasyk [00:40:45]: Oh yes, oh yes.

Demetrios [00:40:48]: I know just where to point you to. But at the end of the day, the idea or the concept of how do we create really fast feedback loops seems like a huge thread that you're pulling on right now. And you're saying, hey, look, I don't care if you are a full stack dev, if you're a data scientist, if you are a data engineer: how can you create as small iteration loops, as fast feedback loops as possible, so that you can know you're going in the right direction? Whether we call it DevOps, whether we call it Agile, whether we call it MLOps, it doesn't really matter. We can call it LLMOps if you want; that's another one of those buzzy words. But at the end of the day, what you really want to do, and I really like how you said it, so I'll reiterate it, is break down those communication barriers and recognize how to get folks talking when they need to be talking, in the right ways.

Oleksandr Stasyk [00:42:03]: Yeah. And having like important conversations earlier.

Demetrios [00:42:06]: Yeah. And whether you do that with Jira or you do it with ClickUp, or you do it in Slack, it doesn't really matter. It's whatever works for you. And hopefully you and your team and your company have found ways to lower friction. I know there is probably an AI project at every company, internally, that is trying to figure out how we can use AI to get rid of Jira. I can almost guarantee you that every company over 500 people has an internal team working on that. And I can just about guarantee that nobody's been successful so far.

Oleksandr Stasyk [00:42:46]: Vibe coding yourself into a whole new Jira.

Demetrios [00:42:49]: Yeah. And you would think, you kind of imagine, oh, it can't be that hard, right? Can we just get an AI notetaker to hang out on all of our standups? Can we get an AI whatever? Can we install a Slack bot so that it's privy to all of our Slack conversations? And then can we have it read Jira or ClickUp or whatever, and hopefully it can just start doing what it needs to do? You always think it's going to be that easy. And then you talk to people who have tried it, and it's like, yeah, there's a few problems with that approach.

Oleksandr Stasyk [00:43:34]: There's always the: no plan survives first contact with the enemy. Or getting punched in the face.

Demetrios [00:43:43]: The enemy always has a say.

Oleksandr Stasyk [00:43:44]: Yeah. But this world that you described prior to this problem, it's almost like, you know, if we could just use AI, we could all live like the people on the WALL-E ship, just sitting there, you know, barely. Like, we'll just use AI to not even say our standups, right? It'll just reach into the brain. But yeah, sometimes you just have to do the work, and that's what makes you stronger.

Oleksandr Stasyk [00:44:12]: Like, it's over-reliance on vibe coding again. When does the vibe coding end? I think that's a cool title for the talk: when should you stop vibe coding?

Demetrios [00:44:22]: Yeah, well, it's funny, because there's the product folks who are probably thinking it can be done. We can get rid of Jira. We can get rid of this friction if we have a better approach, and we try and think AI-first instead of bringing AI into Jira. How can we make it so that our agile, for lack of a better word, approach is now going to be AI-first? And I was talking with my friend Floris about this on one of the podcasts that we did a month or two ago, and he was saying, because they tried the whole, hey, let's bring an LLM to Jira. We've got this Jira agent. It's going to not only understand the context and everything and understand where we are with projects, but once certain things get updated and checked in and the PRs get merged, it's going to just update the Kanban board itself. What they found is that the LLMs do not understand the context on these projects. Because when you're a human writing for another human, you write with the least amount of information possible so that the other human can understand, because you're on the same wavelength. You've been working on this project maybe for the last two weeks or two months, who knows? And so you don't need to go and explain each little ticket and what you've done and how you've updated it.

Demetrios [00:45:58]: That makes it very hard for the AI agent to understand what the hell is going on. And so I asked him, well, what would you do differently? Would you try and create some kind of knowledge graph in Slack so it can get all of this context? Yeah, of course, man. This is a problem that for sure can just be solved by more tech.

Oleksandr Stasyk [00:46:20]: How else would we get paid?

Demetrios [00:46:22]: Yeah, you know, this project needs more AI. That's what we need on this project. So if you throw a few more LLM calls at it, it's going to work, right? And he said, I think about this a lot, and what I would do is I would probably try to have each engineer be interviewed by an LLM notetaker, and they get asked all the questions for all the context on the project, so that it can now be in the loop. And he was saying, this is something that we think about a lot: are we trying to jam AI into an existing product, or are we trying to reinvent the workflow with AI, because it has now fundamentally changed?

Oleksandr Stasyk [00:47:15]: In this AI hype, obviously, everybody started doing it. I was just thinking, yeah, you can't even tie your shoelaces without AI now, can you? And to be honest, again, it's genuinely very cool. And even I have some of that. So, just a little hobby aside: I'm very, very deep into Warhammer, and I'm like, can I just AI some Warhammer? Almost doesn't matter what it is, but it would be cool to LLM this and do that. But at the end of the day you just kind of sit down and go, right, no, I'm just going to do the human thing. Just bash scripts, man. Yeah, basically.

Demetrios [00:47:56]: Someone wrote a blog post that I was reading this morning, and it was talking about how more LLM calls or more work doesn't mean better work. Just because you are creating the illusion of doing more by prompting something and then getting output, if the output is mediocre, and at the end of the day you have to do a lot of work on that output, or scrap it all after giving up and saying, I don't think it's going to get there, then that is not progress.

Oleksandr Stasyk [00:48:35]: That sounds exactly like my project last night. I was just talking about writing this parser, trying to vibe code as far as I can, and just realizing I have to stop, you know, and just do it the proper way.

Demetrios [00:48:48]: Yeah. And so it's funny, just going back to the idea of, hey, can we use AI to help us break down friction, help us break down these communication gaps that there may be, or help us create faster iteration loops. That's an interesting one to think about. But the thing that it comes back to, in my eyes, is: you can't substitute doing the work.

Oleksandr Stasyk [00:49:28]: That's true. In a way you can. I was just thinking, is it just the case that our brains are so much more multimodal than any LLM could possibly be, that there's no LLMing this? Because you say, how do we use AI to solve our organizational problems? I'm like, well, do you need to have a little pet on your shoulder that watches your everyday work life and goes, oh, there's a problem there? A little Clippy that suggests, have you considered a reorganization and bringing these teams together? But first of all, how the hell are you going to train it? How do you find the data to actually tell it that this is what should be done? On the other hand, yeah, the multimodality is going to break. But as you said yourself, sometimes you just have to do the work yourself, as a human.

Demetrios [00:50:19]: Yeah. How are you going to give it that reward function? How are you going to create evaluation sets?

Oleksandr Stasyk [00:50:25]: Cool, right? So I'll see you tomorrow. We're going to make a new startup. Surely going to get a lot of funding for this idea.

Demetrios [00:50:32]: I think we could go pretty far with this. Even though it's complete vaporware and it will never work. It sounds like a really good idea.

Oleksandr Stasyk [00:50:42]: Oh, yeah.

Demetrios [00:50:44]: It would sell to at least a few companies and you would have that overarching LLM that can see into all of the different channels and see when folks are getting stuck and then suggest, you should talk to John from the platform team.

Oleksandr Stasyk [00:51:01]: Might work. We never know.

Demetrios [00:51:06]: We're never going to know until we vibe.

Oleksandr Stasyk [00:51:08]: Absolutely. Yeah, that's true. Yep.

Demetrios [00:51:11]: We gotta just get that MVP out real quick. Yeah. So before we go, I think I would be doing a disservice not to talk about problems that you've seen in going to production with ML and AI. And maybe you have some confessions. We started doing these confessional stories in the MLOps newsletter that goes out, and it's hilarious what people are writing in with. They are talking about how one time, you know, they took down the Etsy homepage because of the recommender system: they just kind of ported over some code from one of the smaller subpages, and it exhausted the CPUs, and all of these low-level APIs were just getting bombarded.

Oleksandr Stasyk [00:51:59]: I wish I had something as dramatic. The ones I can think of, especially MLOps-related, were only as bad as: oops, the machine's gone, let me reboot it, so please use a new IP address. Unfortunately, it's very, very mundane in terms of confessions. Obviously, very painful for researchers, right?

Demetrios [00:52:22]: Yeah.

Oleksandr Stasyk [00:52:22]: But not as epic as that, in fact. Yeah.

Demetrios [00:52:30]: Another one that I heard was from Flavio talking about how for 18 days straight, their recommender system was recommending the same item to every single person that went on to the webpage and they did not catch it until 18 days later and they realized, ooh, yeah, we may have lost out on a couple hundred thousand dollars worth of product sold.
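
(Not Flavio's actual stack, but a sketch of the kind of cheap monitoring check that would have caught a stuck recommender on day one instead of day 18: alert when a single item starts dominating every user's recommendations. All names and thresholds are illustrative.)

```python
from collections import Counter

def top_item_share(recommendations: list[list[str]]) -> float:
    """Fraction of all recommendation slots filled by the single most common item."""
    counts = Counter(item for recs in recommendations for item in recs)
    total = sum(counts.values())
    return max(counts.values()) / total if total else 0.0

def check_diversity(recommendations: list[list[str]], threshold: float = 0.5) -> None:
    share = top_item_share(recommendations)
    if share > threshold:
        # In a real system this would page someone, not print.
        print(f"ALERT: one item fills {share:.0%} of rec slots -- possible stuck model")

# A day like the incident described: every user gets the same item.
check_diversity([["sku-42"], ["sku-42"], ["sku-42"]])
```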

Oleksandr Stasyk [00:52:54]: Actually, I do have a chemistry one. So in Pandas world and all that tabular goodness, it's very favorable to do things columnarly. So you've got your labels and you've got your values, right? And when you're jumping across umpteen microservices here and there, you better hope that those labels and values are in the right order. And in chemistry it's beautiful, because you have no idea if this is good or not, right? You're like, this compound, I have no idea what this compound is. And this number for this property, like, how acidic is this random compound? You're not even a chemist; you barely know, oh, acidity, probably something like lemons, right? So you kind of go, this number, kind of, maybe, okay. But you never know. So imagine, I'm gonna say, like a million of them, right? And then you go: please, please, please, microservice

Oleksandr Stasyk [00:53:58]: Microservice hops, do not change the order of these. Because if you do, nobody will ever know. So of course there were some instances where, through these umpteen hops, through all these microservices, out pops the answer for all of these compounds, for all of these properties, and the numbers, we don't know if they're in the right order or not. And at some point the chemists kind of go, something's not right, because they obviously know if this should be acidic or not. But the way for them to prove it is that they'd have to reverse engineer the AI, to say, this is what we predict it should actually be. And they go, this doesn't align. And then one of the software engineers kind of goes, oh, we've.

Oleksandr Stasyk [00:54:45]: We've sorted it by accident when we shouldn't have.
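
(The failure mode is easy to reproduce. When labels and values travel as separate, order-dependent sequences, one innocent sort silently misaligns every prediction. A minimal pandas sketch, with made-up compounds and numbers:)

```python
import pandas as pd

# Fragile pattern: labels and values travel as separate, order-dependent lists.
compounds = ["citric acid", "baking soda", "water"]
acidity = [3.1, 8.3, 7.0]  # made-up property values, not real chemistry

# One hop innocently sorts the values (say, to compute percentiles)...
acidity = sorted(acidity)
broken = pd.DataFrame({"compound": compounds, "acidity": acidity})
# ...and now water "measures" as basic and baking soda as neutral,
# and nobody downstream can tell anything happened.

# Robust pattern: weld label and value into one record, so sorting across
# microservice hops can never misalign them.
records = [{"compound": "citric acid", "acidity": 3.1},
           {"compound": "baking soda", "acidity": 8.3},
           {"compound": "water", "acidity": 7.0}]
safe = pd.DataFrame(records).sort_values("acidity")  # rows move together
```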

Demetrios [00:54:49]: Oh, no.

Oleksandr Stasyk [00:54:50]: Yep. So that's definitely pretty painful, as you can probably imagine. In terms of my current place, it's all video generation, and the AI problems themselves, oh man. There, it's almost like you hope that something goes wrong, because the results are absolutely hilarious. Literally today I've seen the wrong skeleton being applied to a render, and somebody just got a Chad chin, and it's almost like, why don't we just ship that as a product?

Demetrios [00:55:22]: Just these random creative hallucinations.

Oleksandr Stasyk [00:55:26]: Yeah, exactly. Yeah.

Demetrios [00:55:28]: That's the premium community that you can sell. They get access to all of the ones that you.

Oleksandr Stasyk [00:55:38]: Yeah. I think one last message I do have is that we talk about MLOps and organizational practices, but software engineering best practices are just as applicable. So imagine pre-MLOps, almost pre-DevOps things, like continuous integration and continuous delivery. These things aren't just Jenkins, right? A lot of the time people go, do you do CI/CD? And they go, yes, we have GitHub Actions. So I'm a big fan of returning to thinking about what continuous delivery actually is, and that is so crucial in terms of implementing these feedback loops. This is all about the shifting left.

Oleksandr Stasyk [00:56:23]: This is all about thinking about when the sweet spot for this automation is. Obviously not straight away, because you have to prove things, but at the same time, if you leave it for too long, you'll grow way past that moment. Now, this might be a whole other podcast, so I don't want to delay things too much, but I feel like that's almost the foundation of all of this. It doesn't matter if it's MLOps or DevOps at this point or whatever; these things have their place, and doing them too early could be very damaging, just as much as doing them too late could be very damaging.

Demetrios [00:56:58]: Have you seen signs that it may be too late or it may be too early?

Oleksandr Stasyk [00:57:05]: Well, it's never too late, as they say, but I've seen signs of the lack. So in a way, if you have some code that isn't tested in a production-like environment for too long, it might not even be integratable. You might even just have to say, we can't ship this, because the effort of actually productionizing it is way too high.

Demetrios [00:57:34]: And that's how, you know, okay, we need to do something here.

Oleksandr Stasyk [00:57:38]: And this is where shifting left is such a key activity.

Demetrios [00:57:42]: And can you explain what shift left means for you?

Oleksandr Stasyk [00:57:47]: Well, first and foremost, what does shifting left mean to begin with? What is left and what is right? So imagine right is production, and left is your head, and then hands, and then keyboard, and then commits, and then pushes. So shifting left, hopefully you can visualize it, is some sort of initiative, check, or test that is moved closer to your head. The most immediate, best shift left is, you know, you talk with someone and you say, let me rubber-ducky an idea with you. And that gives you a way to say, wait a minute, this was a bad idea all along, or this was a good idea all along, right? Next thing is you write some code and you have type checking. Type checking is one of the next shift-left things that can happen: bad type, please don't do this, please do that, right? And I think usually what happens is that the further right you go, the more value you derive from these checks. So at the end of the day you go to production and go, where's my money? And either people start going, yes, this is a great product, or you go, oops, it didn't work out. So this is the rightest, the most right check you can possibly have. So the idea is to shift as many of these checks to the left as possible. But obviously some of them you have to pay for in cost of time.

Oleksandr Stasyk [00:59:15]: The best, most practical example is integration tests, right? There's this seesaw of how much you test versus how much time it takes. You obviously don't want to implement the most end-to-end test possible, which takes three hours, and then every time you want to push the smallest change, it's a three-hour CI job, and you kind of go, well, at least I know it works, right? So you have to be very careful in terms of the balance of value versus time. And you can extrapolate: it's not just integration tests, it's conversations, or it's anything further down the right. Let me try and think; sorry, it left my head. For example, some sort of scientific check. You can say, this model produces something that's not good. You can't quite always check that in CI, for example.
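
(One common way to manage that value-versus-time seesaw, sketched with pytest: tag the expensive end-to-end checks with a custom marker so the everyday loop stays fast while CI still runs everything on merge. The marker name is a convention you register yourself, not a pytest built-in.)

```python
import pytest

def test_feature_scaling_is_deterministic():
    # Cheap, far-left check: runs on every push in seconds.
    values = [1.0, 2.0, 3.0]
    mean = sum(values) / len(values)
    assert [v - mean for v in values] == [-1.0, 0.0, 1.0]

@pytest.mark.slow  # custom marker (register it in pytest.ini), not a built-in
def test_end_to_end_inference_pipeline():
    # Expensive, far-right check: run on merge or nightly, not on every commit.
    ...  # e.g. spin up the service, send a request, assert on the response

# Fast local loop:  pytest -m "not slow"
# Full suite in CI: pytest
```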

Demetrios [01:00:08]: It's interesting that you talk about that because I've seen it relayed as the closer you get to production, the more expensive it is to catch a problem.

Oleksandr Stasyk [01:00:19]: Yes.

Demetrios [01:00:21]: And the most expensive is when it's in production and as you start going left, it gets less and less expensive. If you catch that problem, if you catch it right at its source, that's great. You can quickly iterate and hopefully fix it. If you catch it downstream, then it's going to be a little bit more expensive.

Oleksandr Stasyk [01:00:42]: Sometimes things are not as easy to catch as well, so there's that dimension too. Of course, if you could catch the fact that nobody's going to buy your product and you've just spent like 10 billion in AWS bills, if you can write a unit test for that, that would be some unit test.

Demetrios [01:00:58]: That is the product that we need to create. We need to vibe code that product into existence. Because I can tell you what, if we could unit test whether or not someone is going to buy your product, and sell that as our magic snake oil, talk about a billion-dollar company.

Oleksandr Stasyk [01:01:16]: Oh yes, easy. Yeah.

Demetrios [01:01:20]: And it's cool too that you're talking about shifting left, because I've seen that idea gain more popularity. I'm actually going to be hosting the Shift Left Data conference, and so it's bringing the idea and the fundamentals of shifting left to your data pipelines and your data CI/CD, whether your data has analytics built on top of it or ML built on top of it. It's that same idea of: can we catch problems at their source? But also, how do we balance, on one hand, I don't want to have a three-hour CI/CD check every time I make the smallest change, versus, I don't want to just not have anything, be willy-nilly, and kind of pray every time I put something out there.

Oleksandr Stasyk [01:02:13]: It depends.

Demetrios [01:02:17]: Yeah, well, what in your eyes would be different when it comes to shifting left for code versus shifting left for data?

Oleksandr Stasyk [01:02:29]: Oh boy, oh boy. Once again, my heart goes out to my data friends. Recently enough I had to do quite a lot in terms of setting up an entire data platform, and that just gave me so much respect for that area. The best thing I could say is that it's a lot harder. A lot harder. Because code is stateless, first and foremost; data is state, and that in itself is already super hard. I kind of struggle to explain it from first principles, but please trust me.

Oleksandr Stasyk [01:03:13]: Data is harder than code. So because of that, how do you test data? Well, you can have data validation and integrity tests and all that kind of stuff, but implementing that and running it and operating that, that is so much harder than, for example, testing if, yo, my inference works. That's great, right? And there's also the sheer amount of possible knock-on effects. Because if you're going to dive into data as a holistic view, take it from a bird's eye view: here's my source, this is the entry, right? And this once again comes down to how much value you want to derive from your test. How far down the stream of all the possible ways your data can go are you going to test things? And then there's the nature of your data: a lot of the time when we speak about data, we implicitly assume tabular, right?

Oleksandr Stasyk [01:04:19]: In my current place I don't have such luxury. There is tabular metadata, for sure, but boy oh boy, the majority of the actual data is media. So testing gets even harder, because working with media is not the same as having something you can fit a lot of in a container. So I'm sorry to give you a very hand-wavy answer, but shifting left in terms of data just seems so much harder than in code. In code, I don't want to say it's easy, absolutely not, but it's a lot easier to visualize, because you can kind of see: this is right or this is not right. Your unit tests pass because 2 + 2 does not equal 5, and you can see red unit tests.
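
(For the tabular case at least, the data analogue of a red/green unit test is doable in plain pandas. A minimal sketch, with illustrative column names and bounds:)

```python
import pandas as pd

def validate_events(df: pd.DataFrame) -> list[str]:
    """Data-integrity checks: the data analogue of a red/green unit test."""
    problems = []
    if df["user_id"].isna().any():
        problems.append("null user_id rows")
    if df.duplicated(subset=["event_id"]).any():
        problems.append("duplicate event_id")
    if not df["price"].between(0, 1_000_000).all():
        problems.append("price out of plausible range")
    return problems

events = pd.DataFrame({
    "event_id": [1, 2, 2],
    "user_id": ["a", None, "c"],
    "price": [9.99, 19.99, -5.0],
})
# All three checks fire on this toy frame.
assert validate_events(events) == [
    "null user_id rows", "duplicate event_id", "price out of plausible range",
]
```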

Oleksandr Stasyk [01:05:11]: Right. And you can kind of extrapolate and extrapolate and extrapolate. With data, it's just so many different cauliflower-level fractal problems that you can expand into that I'm finding it hard to visualize. Sorry, maybe I just suck at data. Maybe that's it.

Demetrios [01:05:29]: No, it's also true that with code it's a much more mature field and these ideas have been iterated on for...

Oleksandr Stasyk [01:05:39]: Sorry, I've got a guest.

Demetrios [01:05:40]: Yeah, what a cutie. For anybody that's not watching, we have a dog that just jumped on screen. Really good looking dog. So data is 100%. I forgot where I was going with this. But yeah, one thing is for sure. Yeah, the dog. I'm looking at the dog, so chill.

Demetrios [01:06:05]: So one thing is for sure, though: data has had the idea of shifting left for much longer. So it's almost like this mature idea, or it's much more mature than the idea of shifting left. Did I say data? I meant code.

Oleksandr Stasyk [01:06:22]: Code. Yeah, yeah, I assumed it. Yeah.

Demetrios [01:06:25]: So one thing is for sure: code shifting left is a much more mature idea, because it's been around for so long, and data shifting left is a newer idea. So it needs those iterations, and it needs diverse minds to come at it and attack the problem from different directions, so that it also can become a bit more mature.

Oleksandr Stasyk [01:06:51]: And in my own experience, I definitely see that, like when you just try and tackle that from a code perspective, like, that's when things just stop lining up quite as neatly because it's almost like a different mindset you have to have as well.

Demetrios [01:07:05]: Oh, that's good. Yeah, that's a framing. A very interesting one.
