MLOps Community

ML Security: Why Should You Care?

Posted Aug 16, 2021 | Views 560
# Machine Learning Security
# AI Product Development
# Machine Learning Engineer
# SAS Institute
# SAS.com
SPEAKERS
Sahbi Chaieb
Customer Advisor, Data Scientist @ SAS Institute

Sahbi Chaieb is a Senior Data Scientist at SAS who has spent the past five years designing, implementing, and deploying Machine Learning solutions across various industries. Sahbi graduated with an Engineering degree from Supélec, France, and holds an MS in Computer Science with a specialization in Machine Learning from Georgia Tech.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

Vishnu Rachakonda
Data Scientist @ Firsthand

Vishnu Rachakonda is the operations lead for the MLOps Community and co-hosts the MLOps Coffee Sessions podcast. He is a machine learning engineer at Tesseract Health, a 4Catalyzer company focused on retinal imaging. In this role, he builds machine learning models for clinical workflow augmentation and diagnostics in on-device and cloud use cases. Since studying bioengineering at Penn, Vishnu has been actively working in the fields of computational biomedicine and MLOps. In his spare time, Vishnu enjoys suspending all logic to watch Indian action movies, playing chess, and writing.

SUMMARY

Sahbi, a senior data scientist at SAS, joined us to discuss the various security challenges in MLOps. We went deep into the research describing various threats that he surveyed for a recent article he wrote. We also discussed tooling emerging from companies like Microsoft and Google to address this problem.

TRANSCRIPT

0:00 Demetrios

Welcome back to another edition of our MLOps Coffee Sessions podcast. Today, we are joined by none other than Sahbi. I brought him on the show because I wanted to talk about the article that he wrote some time ago now. It's all about machine learning systems and the security of these machine learning systems. For those of you who have been listening for a while, you know that security is something high on my list of priorities when it comes to machine learning – that and a few other things. But today we are getting into security specifically. I will tell you that if you are not in our security and privacy channel on the MLOps Community Slack, and you enjoy these kinds of topics – I guess that is the first important piece, if you enjoy them or think they are important – then jump into that channel, because there are smart people sharing all that they can about security and privacy. One of the great pieces that has been shared in there is the article from Sahbi. So with me, as always, is Vishnu. Welcome, Sahbi. I'm excited to talk to you about this. Maybe we can start by getting a bit of background information about yourself.

1:25 Sahbi

Yes, thanks a lot, Demetrios and Vishnu, for having me here. I just want to also thank you all for all the work that you're doing with the MLOps community. It's really a community that I love. There is so much good content out there – a lot of experts sharing really interesting things, a lot of subjects that we talk about in the community. You're always bringing superstars into your weekly meetups. So thanks a lot for all of your work. So, to present myself – I'm Sahbi. I'm a data scientist. I started working six years ago in a startup in Paris. We were actually a bit early – six years ago, we were already trying to automate machine learning, put models in production, and explain these models. After that, I moved to SAS, which is a much bigger company. We've also got a lot of powerful tools to manage this data science world, in order to have the best platform for data scientists.

2:55 Demetrios

Awesome. So I think the logical question here is, “Why did you write this paper? What inspired you to write the article?”

3:09 Sahbi

Yeah. Let me start with a story. All data scientists experience this kind of thing – it's what I call the “now you put it into production” kind of situation. It's not directly related to security, but it happened in my early days as a data scientist. I worked on a huge project with an international customer. We spent a lot of time working with the customer on this project. We went abroad to China, and the customer came to France to work with us. We did a really good job of building the best model in terms of performance. When this work was finished, my boss came to me and said, “Great! Now you put it into production!” [chuckles] And that's where you start to think about all the things that you maybe could have done differently if you had known you would put your model into production. Maybe you used some data that you would need to query in real time, but your model will run offline – so this doesn't work. Maybe you built a super complex architecture combining different models in some way, and it's too difficult to deploy.

4:53 Sahbi

So there are a lot of choices that you make when you're building your model – choices that you would make differently if you had the foresight about what you're going to do and what you're going to put in production. And I think that security is one of the choices that you could make differently if you're aware of the implications of what you do. Recently I started to think about this. As a data scientist, of course, I want to have the best performance for my models. Most of the time, I'm going to fetch a lot of data in order to have more data from different sources. For example, it can be scraping some websites, using some open data, using a pre-trained model from GitHub, etc. Actually, when you think about it, each one of these choices is actually a security threat. As soon as you go and fetch something from the internet – some external data or models – there is a risk. This kind of data could be malicious or could contain something that is a threat to the security of the model. So that's how I started to think about the subject. At the time, I tried to look for information about the different kinds of threats that we could have in machine learning.

6:59 Sahbi

I didn't actually find any source with a comprehensive list of threats, so I decided to do this research and to look at the subject myself. I discovered that it’s actually an area that is still being researched, which is why we don't have that much content about the subject. In research, what you have is a lot of really bright people designing attacks against machine learning models, and others designing defenses against these attacks. Then someone else is going to design another attack that breaks that defense – so it's kind of a game. But there was nothing that could help me, as a data scientist working on a business project with a customer, to make my model safer and have a clear idea of what's happening. So that's how the idea of this article came about. I did some research and ended up writing this piece.

8:19 Demetrios

Well, I love how you went from needing it… First of all, I love this story about “now let's put it into production” because I think everyone can relate to that. Probably half of the people in the MLOps community are there because they've had that very same experience. So it's good to hear that you had that and you were a little bit discombobulated with, “Well, what do I do now?” and “Oh, maybe I shouldn't have optimized for those things that I optimized for. Maybe I was over-engineering things a little here.” So it's really nice that you were able to see that and learn from it. And then, the security part, where you realized that the majority of what you were reading out there was research and it applied to research – it didn't really apply to your specific use case and situation, where you needed tangible things you could do to make your model more secure. So it's great that you tackled this pain point by working through, on your own, the different ways you can make a model secure. And then you became an “expert” on it. So I love that journey. Maybe we can talk now about some of these reasons why we should even care about keeping our models secure? I imagine that a lot of people out there, especially data scientists, are thinking in the way that you used to think – they're just trying to create the best model with the best accuracy score and not really thinking about production yet and everything that comes with production. So maybe there are a few things – I know in the article you talked about, “What are things we should care about?” or “Why should we care about these different things?”

10:20 Sahbi

Yeah. So when we're talking about machine learning security, first, we have to be aware that machine learning systems are actually pieces of software. In terms of security, you're going to have your software security/cybersecurity part, but in addition to that, you're going to have some security threats specific to your machine learning model. So there is a part that is already covered – not solved, but you already have a lot of resources available out there about the software part – but not that much about the machine-learning-specific part. When we're talking about machine learning models, the kinds of threats that we have concern the data or the model. The model is actually something that you build out of data, so if your model is subject to attacks, the data used to build it is a threat surface as well. The kinds of threats that we are going to have are data extraction – the ability to extract some specific data from your model – and we know that this could be really problematic in terms of privacy, first and foremost. Then there is another type of problem that affects the model by corrupting it – making it generate false predictions. This could be used by malicious people, for example, to evade certain filters.

12:36 Sahbi

For example, let's say on social media there is a fake news detector. By manipulating your input in some way, you're going to evade that filter. And there is also the problem of model stealing. Just by providing an API – so that somebody can query your model and get a response – a lot of the time, an attacker is actually able to make a copy of this model. This is a problem, for example, if your service is a paid service, because some people can copy your model and make it free or use it without you. Or a competitor could steal your model as well. So there are different kinds of threats related to models and data, and each kind of threat is going to have some sensitivity. Of course, not all the projects that we work on every day are sensitive models, but something that we should do for each project is think about it and assess the risks of these models.
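To make the model-stealing threat concrete, here is a minimal sketch (not from the article) of how an attacker might copy a model that is only exposed through a prediction API: query it with many inputs and train a local surrogate on the returned labels. The endpoint, response format, and `query_victim_api` helper are hypothetical placeholders.

```python
# Hypothetical sketch of model stealing ("model extraction") through a prediction API.
# The API URL and response format are assumptions, not a real service.
import numpy as np
import requests
from sklearn.linear_model import LogisticRegression

API_URL = "https://example.com/predict"  # placeholder endpoint

def query_victim_api(x):
    """Send one feature vector to the victim's API and return its predicted label."""
    resp = requests.post(API_URL, json={"features": x.tolist()})
    return resp.json()["label"]

# The attacker generates (or collects) unlabeled inputs and lets the victim label them.
rng = np.random.default_rng(0)
X_query = rng.normal(size=(1000, 20))
y_stolen = np.array([query_victim_api(x) for x in X_query])

# A local surrogate trained on these labels mimics the victim model's behavior.
surrogate = LogisticRegression(max_iter=1000).fit(X_query, y_stolen)
```

Rate limiting, query monitoring, and returning only coarse labels rather than full probability scores are among the usual ways to make this kind of copying harder.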

14:13 Vishnu

Yeah, that makes a ton of sense, Sahbi. So this framework that you've provided for security, and the different threats that exist, it's something that sounds very similar to what we discussed with Diego Oppenheimer, the CEO of Algorithmia, who joined us to talk a lot about what MLSecOps looks like with their platform and how they've been thinking about this challenge. As a data scientist, now that you know some of these challenges and some of these threats that are posed to machine learning systems, how have you changed your development practices to mitigate those threats during the course of model development and productionization?

14:58 Sahbi

Yeah. So, when we talk about security – we talked about assessing the risk of our models – I think that each person in the organization has a role in avoiding these big problems. As a data scientist, I have to be aware, when I build my model, of the security threats associated with it. If I'm going to fetch data from unreliable sources, this is a security threat. If I'm going to fetch a model from a model zoo, where people go to use pretrained models, I have to make sure that it's a reliable source – because if that model is corrupted, my model is going to be corrupted as well. During modeling, depending on how I am going to use the data and expose my model, as a data scientist I should know where the threats are. Then you're going to work with a machine learning engineer and with your manager, and at each stage there is some responsibility. Here, there is another unanswered question about accountability: “Who is accountable for that?” For example, as a product owner, I should be able to evaluate the different risks associated with this model. Working with data scientists, we now know where the risks are and we can assign some importance to these risks. Then, as a manager and decision maker, I can decide whether I put this model in production as is, or whether I make some tweaks to it – tweaks that often come at the cost of accuracy or other things.

17:23 Vishnu

Got it. So it sounds very logical, which is – as a professional, it's incumbent upon you to really understand some of the day-to-day threats that may emerge. Everybody has to play a role in that. And then to also work as part of a larger team to really take security in almost as a first consideration – to make that as part of your development practices. And that makes a lot of sense. One of the things I really enjoyed about your article that I would love for us to cover are some of the ways that we can actually secure models on a more almost tactical level. You mentioned things like differential privacy and homomorphic encryption. Could you talk to us a little bit about what some of those... Obviously, as a data scientist, I would love to be completely responsible with all of my security practices – as you said, with Model Zoo and looking at other external sources – and take that into account. But assuming I've done all of that, what are the other tools in the toolbox that you covered in the article? And could you give us an introduction to those?

18:35 Sahbi

Yes. For each one of the threats that you mentioned, there are some defenses that exist. When we talk about the data extraction problem, for example, we have the fact that for large models we are able to extract data. This affects very large language models in particular – these kinds of models are very susceptible to this kind of threat. For example, take the algorithms used in Gmail’s Smart Compose – some researchers have shown that by prompting the model with the right words, they were able to extract very specific personal data. They were able to extract the name, phone number, email address, and physical address of a real person from these kinds of models. Luckily for us, those researchers also work at Google, so Google is doing the right things to secure these kinds of models. But the risk is there. In order to avoid this problem in large models, we have to understand it – why are we able to extract this data? It’s because the model is so complex that it's able to memorize some specific details in the data. So what can we do to defend against that? There is a technique called ‘differential privacy’ which puts constraints on the model in some way in order to control how much it memorizes the data. Then we have a knob called ‘epsilon’, and depending on where I put the knob, I can decide whether I care about privacy first or accuracy first. So it's going to be a choice of where I set this. So this is differential privacy. But… [cross-talk]
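As a rough illustration of the mechanism behind differentially private training, here is a minimal DP-SGD-style sketch: clip each example's gradient to a fixed norm and add Gaussian noise before the update, so no single record can dominate what the model memorizes. The model, data, and hyperparameters are toy placeholders; a real project would more likely use a dedicated library and track the resulting epsilon.

```python
# Minimal DP-SGD sketch: per-example gradient clipping plus Gaussian noise.
# Model, data, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.05

X = torch.randn(64, 10)                 # toy batch of features
y = torch.randint(0, 2, (64,))          # toy labels

summed_grads = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(X, y):                # per-example gradients (microbatching)
    model.zero_grad()
    loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, clip_norm / (total_norm.item() + 1e-12))  # clip this example's gradient
    for s, g in zip(summed_grads, grads):
        s += g * scale

with torch.no_grad():
    for p, s in zip(model.parameters(), summed_grads):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p -= lr * (s + noise) / len(X)  # noisy averaged update

# Lower noise favors accuracy; higher noise favors privacy (the "epsilon knob" trade-off).
```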

21:27 Demetrios

Sorry, before we jump to the next one – do you have any other stories or ‘occurrences’ like this Google one, where data was leaked in that fashion? Do you have any other ones?

21:40 Sahbi

There’s a much more recent example from Microsoft, who recently released Copilot for GitHub, which enables you, as a developer, to get smart suggestions while you're coding. The first thing some people did was just type “API key =” and see what the model outputs. There you're going to have, of course, some serious privacy and security problems, because it can output some real keys. Sometimes it also outputs headers of websites with names inside of them. The thing is, Copilot is trained on public data – it's all open source – but you also have personal details and sensitive data in public data. The problem that you mentioned also exists in GPT-2, the model developed by OpenAI. In these kinds of models, you're going to have these kinds of threats. But there are defenses – for each type of model, you're going to have some defense. One of the best-known threats is adversarial examples, which work by tweaking some inputs – images or text, for example – so that the model outputs another prediction. It could be in spam filters or fake news detectors; social media platforms also have harmful content filters. And if your model is not secured, it's possible to design inputs that are going to get through these filters. The problem here is that the defense has to be implemented before the attack: you anticipate the kinds of transformations an attacker might make and train your model with that data augmentation, so that your model is more robust to these kinds of attacks.
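Below is a small sketch, in the spirit of the defense Sahbi describes, of crafting an adversarial example with the classic fast gradient sign method (FGSM) and then training on it alongside the clean input (adversarial training). The model and data are toy placeholders, not anything from the episode.

```python
# FGSM adversarial example plus one adversarial-training step (toy sketch).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon = 0.1                          # perturbation budget

x = torch.rand(32, 784)                # toy "images" in [0, 1]
y = torch.randint(0, 10, (32,))

# 1) Craft adversarial examples: nudge inputs in the direction that increases the loss.
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# 2) Adversarial training step: fit on clean and perturbed inputs together.
opt.zero_grad()
loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
loss.backward()
opt.step()
```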

24:45 Demetrios

In these things – I'm just wondering, as you're talking about this – since you're in France, you probably have heard and looked at the new proposed regulation from the EU, and how they talk about this robustness. Do you feel like this kind of strategy will help those who now have to, all of a sudden, take this new regulation into account?

25:14 Sahbi

Yes, exactly. Actually, what's good about the new regulation is that it puts a framework in place and also defines what a sensitive AI application is. Because that's what we're talking about – it's really these kinds of systems that should be robust, and we should be sure that they're not threatened by such attacks. The EU regulation defines a “sensitive application” as something that could be psychologically or physically harmful to people. If we define it that way, then, for example, if some people are able to put harmful content on social media, that is harmful and it shouldn't happen – the consequences are fines, etc. And when we're talking about autonomous vehicles, they could be physically harmful, so that's a sensitive system as well. So I think it allows us to better define these kinds of sensitive applications.

26:48 Demetrios

There's something else that I wanted to talk about that you mentioned. This is concerning the dangers of federated learning. I know that federated learning is quite popular these days in the community. So maybe we can go down that route? Why is that different from just normal machine learning?

27:12 Sahbi

Yep. Yes, so federated learning – just as a very quick introduction – is, let's say, distributed learning. It's the ability to use data that lives in different locations in order to train one model on all of this data without aggregating it in the same place. It's used, for example, in smartphones – on Android or iPhones, there is some federated learning being used. In theory, if I'm a hacker and I have a smartphone, I'm able to put anything I want on this device. I have total control over the data on this device, and this data is going to be used to learn a model. This is actually a threat for federated learning, because a hacker is able to inject any data he wants into the final model. And it's even more sensitive because federated learning is also used a lot in medical settings – between hospitals – because we have patient data in different hospitals and we want to use all of the data in all the hospitals around the world so that we have the best AI for detecting some disease. But we don't want to put all the data in the same place, and it's difficult to share data between hospitals.
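For readers who have not seen it, the sketch below shows the basic federated averaging loop in a toy form: each client trains locally on its own private data and only the model weights travel to the server, which averages them. The averaging line is exactly where the poisoning risk Sahbi describes lives, since a malicious client could send manipulated weights. The data and the linear-regression objective are illustrative only.

```python
# Toy federated averaging (FedAvg) sketch with NumPy: raw data never leaves a client.
import numpy as np

def local_train(w, X, y, lr=0.01, epochs=5):
    """A few steps of linear-regression gradient descent on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
global_w = np.zeros(3)

for _ in range(10):                    # communication rounds
    local_weights = [local_train(global_w.copy(), X, y) for X, y in clients]
    # The server only sees weights, never data. A poisoned update from a
    # compromised client would directly skew this average.
    global_w = np.mean(local_weights, axis=0)
```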

29:17 Sahbi

So federated learning is a solution for this. But for medical data, we have to be really hyper secure when we’re manipulating this data. Luckily, I think that some of the brightest people right now are working on this subject, and that's why we have some really interesting techniques and defenses that were designed for federated learning. We can talk about homomorphic encryption, which is when you encrypt your data in some way, do the computation on the encrypted data, get results from it, and then decrypt them to get the final result. So it's doing computation on encrypted data. There are also other methods, such as secure multi-party computation. So there are different techniques designed mostly for federated learning, but you also have to be aware that it's still a research area. It's improving every day. We don't yet have the perfect framework, but we do have different frameworks, each with some advantages and some problems.
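Homomorphic encryption itself needs a specialized library, but the related secure-aggregation idea from secure multi-party computation can be shown with a toy additive secret-sharing sketch: each client splits its update into random shares, so no single party ever sees an individual update in the clear, yet the shares still add up to the true total. This is an illustrative sketch under simplified assumptions, not a production protocol.

```python
# Toy additive secret sharing: the aggregate of client updates can be recovered
# without any party seeing an individual client's update in the clear.
import numpy as np

rng = np.random.default_rng(42)
client_updates = [rng.normal(size=3) for _ in range(3)]   # private per-client vectors

def split_into_shares(value, n_shares):
    """Split a vector into random shares that sum back to the original value."""
    shares = [rng.normal(size=value.shape) for _ in range(n_shares - 1)]
    shares.append(value - sum(shares))
    return shares

# Each client distributes one share to each party; no single share reveals the update.
all_shares = [split_into_shares(u, n_shares=3) for u in client_updates]

# Each party sums the shares it holds, then the partial sums are combined.
partial_sums = [sum(shares[i] for shares in all_shares) for i in range(3)]
aggregate = sum(partial_sums)

assert np.allclose(aggregate, sum(client_updates))
```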

30:55 Vishnu

Yeah, that's a great answer. It really highlights the state of federated learning, I think. There's a lot of excitement about federated learning in a lot of different industries, because a lot of people see data as a strategic asset, unlocking the value of data has been a challenge because of that, and federated learning has been posited as a solution to that. But what you're highlighting is that it's not necessarily a panacea when you consider the security challenges that are involved. I think that's a great point.

31:31 Demetrios

There’s a vocab word there “panacea”. [laughs] You've been studying! Word of the day, huh? There we go.

31:38 Vishnu

[laughs] Yeah, it is. So… after Demetrios just blew up my train of thought – I’m trying to reconstitute it. I think listening to your descriptions of some of the challenges that are inherent to model development – and the security challenges inherent to model development – it seems that there's almost this process of model interrogation, right? Which is in some ways similar to what we call “interpretability” but it's a little bit more directed in the sense that, for example, what these Google researchers did to unpack the smart compose models was a very focused exercise that wasn't just dependent on one analysis like interpretability. With interpretability, I think people are using SHAP scores or LIME and very specific techniques. In interrogation, there's almost an entire process. And to me, I think, when I hear you go through it, it seems like there are almost three legs to the stool of model interrogation. You have your operational component, where there are different teams – security, ML engineering, data science – that can help us. There's a regulatory piece, which is the regulators themselves, who are saying, “Have you answered these questions?” And the third leg of the stool, that I feel like I haven't had a good grasp on, is the tooling around this. What are the tools that allow us to interrogate models and to answer some of the security practice challenges that we face, that we just talked about? Where do you think the current status of tooling is? And are there any tools that you found useful in terms of ensuring model security?

33:25 Sahbi

Yes. What's exciting about this is that if you had asked this question when I wrote the article, I think there were none. But a few months later, some have emerged. Actually, we're starting to see something really interesting in some of the biggest companies. Where you have these kinds of sensitive systems, these companies are now building teams that they call AI Red Teams. This name comes from the security area, or gaming as well – we have the red team and the blue team. The blue team focuses on defensive strategies, and the red team focuses on trying to attack their own systems in order to be aware of the risks and to defend against them better in the end. So you have AI Red Teams in some big companies like Microsoft, for example, and Facebook, and these teams have just started to develop tools that they are open sourcing right now. For example, the Facebook team developed AugLy. This is a tool that helps make models more robust to adversarial attacks – it tries a lot of augmentations on the input data in order to make the model more robust to these kinds of attacks. You have Microsoft, who developed a tool called Counterfit, which looks interesting because the idea is to evaluate the threats to your model. It's based on this matrix of threats that we just discussed, and it looks at these different threats in order to evaluate the risk of the model. So these are tools we didn't have some months ago, but they are the tools we need in order to make our models more secure. And if you look at it, it's actually the same story again. We had the data scientists, and then, with more tools, we shifted to machine learning engineering in order to be more production-focused. For security, the same thing is happening – it's a research area, but as soon as we have tools that are developed, we're going to have more profiles who are able to pick this up and work on the security of these kinds of models.
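The real AugLy and Counterfit projects have their own APIs, so rather than guess at them, here is a hypothetical sketch of the kind of check such tools automate: apply a battery of simple input perturbations and measure how often the model's prediction flips. The `model.predict` interface is an assumed scikit-learn-style convention, not the API of either tool.

```python
# Hypothetical robustness check in the spirit of tools like AugLy and Counterfit:
# perturb inputs and count prediction flips. Not the real API of either tool.
import numpy as np

def perturbations(x, rng):
    """Yield a few perturbed copies of one input vector."""
    yield x + rng.normal(scale=0.05, size=x.shape)   # small additive noise
    yield x * 1.1                                    # rescaling
    yield np.flip(x)                                 # feature reordering

def flip_rate(model, X, seed=0):
    """Fraction of inputs whose prediction changes under at least one perturbation."""
    rng = np.random.default_rng(seed)
    flips = 0
    for x in X:
        base = model.predict(x.reshape(1, -1))[0]
        if any(model.predict(p.reshape(1, -1))[0] != base for p in perturbations(x, rng)):
            flips += 1
    return flips / len(X)
```

A high flip rate under such harmless-looking perturbations is a hint that deliberately crafted adversarial inputs would be even easier to find.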

36:41 Vishnu

That's awesome. It's really, really promising to hear that there are so many of these tools coming out now. If you look at the history of MLOps to date, so far it's been that ML was happening at uniquely talented enterprises like Google, Uber, Pinterest, etc. Then people who built those systems came out and started MLOps tooling companies, and that has made it easier for all other companies to adopt the best practices those companies developed. And it sounds like what you're saying is that a similar sort of thing is occurring in ML security, where the leading enterprises are developing new best practices – and hopefully those professionals will come out and turn them into tools and knowledge that others will then be able to access in order to implement ML security best practices. Does that sound like a fair summary?

37:41 Sahbi

Yeah, I think it's exactly that. That's how science works and that's why we have research. And I think that this domain is really exciting, because in the end, it’s the fusion of two great domains: AI and cybersecurity. When you look at a research paper and the problems, sometimes you really have to dig into the inner workings of the model, down to the mathematical implications, in order to design tools like that.

38:26 Vishnu

Yeah. It's very interesting to see. I think you highlighted it exactly right. It's not just that there's this research happening in the ML world or in security for ML – cybersecurity itself is a vast field, and it feels like there's so much there, with the advances in cryptography and the advances in thinking about threat points for different cybersecurity systems. In fact, there's also using machine learning for cybersecurity – it's crazy to think about all the combinations of fields. It's all these different intersections that are happening.

39:04 Sahbi

Yeah, that's true. We've been hearing about machine learning for cybersecurity, but the cybersecurity of machine learning is actually completely different. It's a subject on its own as well.

39:26 Demetrios

I wanted to just mention something that was posted in the security and privacy channel on Slack. It’s from Diego, actually, who we also spoke to about this a few weeks ago. When we talked to him, he mentioned how vulnerable it makes you – how bad of a practice it is – to just go and download anything off of PyPI, bring it into your Jupyter Notebook, and then just import whatever from it. This article that he shared talks about how Python developers are being targeted with malicious packages on PyPI. Since we are talking about this and the greater ecosystem and the ways you can potentially be vulnerable, I wanted to mention that – anyone who wants to dig into it more, we can link to that article in the description, and/or you can go into the security and privacy channel. So my next question for you, Sahbi, is more along the lines of synthetic data. I've heard a bit about synthetic data and I'm wondering where you stand on it. Maybe you can just give us your two cents on how you feel this can be helpful? Or is it not really that important? What would it be used for? Is it a tool that we can use as data scientists to help us create more robust systems?

41:01 Sahbi

Yeah. Synthetic data is mainly what you generate using generative models. It could be images, for example – we see a lot of use cases in art, like making paintings that resemble a particular painter’s work. It's also used in music now, where we have some generation of music. But when we talk about security, it's not really synthetic data, it's data augmentation. The difference is that you take your inputs and apply a lot of transformations to them. So we are synthesizing some examples, but we're augmenting existing data in order to make the model more robust to attacks. For images, that's flipping or rotating them; for text, we're going to introduce some extra characters or different words, for example. So it's more data augmentation. But yeah – I'm trying to think whether data synthesis could be used as well, but right now, I don't have examples of it.
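As a tiny illustration of the character-level text augmentation Sahbi mentions, the sketch below swaps or drops a couple of characters in a string; augmented copies like this can be mixed into training data to make a text classifier less brittle. The specific transformations are illustrative choices.

```python
# Tiny character-level text augmentation sketch for robustness training.
import random

def augment_text(text, n_edits=2, seed=0):
    """Return a copy of `text` with a few characters swapped or dropped."""
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_edits):
        if len(chars) < 2:
            break
        i = rng.randrange(len(chars) - 1)
        if rng.random() < 0.5:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]   # swap adjacent characters
        else:
            del chars[i]                                      # drop a character
    return "".join(chars)

print(augment_text("this headline is definitely not fake news"))
```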

42:46 Vishnu

Yeah, that makes sense from a synthetic data standpoint. Just to kind of bring this whole conversation full circle – we've talked about so many different new privacy methods, so to speak, or new security defense tactics, and so many potential threats for data scientists to take into account as they develop models. As you sit here in 2021, what is your prediction on where this whole question of ML security will be in 5-10 years? I'm kind of curious how you see things looking going forward. Will we have almost automated security practices, the same way that we have in other places like infrastructure as code, or do you think that this will develop in a different way?

43:45 Sahbi

Yeah. I think that when you look at the big companies – which reflect where most companies are going as they mature – by creating these AI Red Teams, the idea is actually even broader than just model security, because there are so many things at stake. You have privacy, you have security, but you also have the bias of the algorithms. With all of these things, you have to ensure that your models are always secure and robust to these kinds of threats, especially with MLOps and the automation of how our models are put into production. Tomorrow, I might have an algorithm in place that scrapes some kind of comments from a website, and one day or another, a competitor or a malicious person is going to inject some data in there – I have to be aware that this can happen and what the implications are for my company. So I think that we're going to see more and more of these teams in other companies – teams that are really focused on the machine learning security part. For the moment, I think scientific skills are what's needed most, because it's still a research area. But as we have seen with data science, with more and more tools that are going to be developed (and I hope they will be), we may have more software-oriented people and skills that can also take on these kinds of subjects.

46:05 Demetrios

I love that you're talking about this, because when you look at different companies that are heavily relying upon machine learning for their bottom line, you need to know the implications of what it means to have data poisoning or these different threats that are coming at you – you need to have that very clear, because that could be a heavy, heavy hit to your company and the bottom line, like we were talking about. The last thing I want to ask you, which is a bit of a tangent, but I would love to go down this little rabbit hole with you. As you were writing this and as you were trying to figure out, “What are some ways that my models are vulnerable? What are some threats to them? How can I fix these threats?” Where did you encounter the biggest pain points? What were some parts where… maybe it was in implementing some of these security measures and you realized it was very difficult, or maybe it was just in finding the information on how the model is a threat and then having to try and lock down your model in that way? What were the pain points that you encountered?

47:27 Sahbi

Yeah. The thing is, when I wrote the article, content about security was not that widely available. I think we're seeing more and more content out there today, luckily for us. So I think one of the biggest hurdles is maybe the level of science that you need in order to understand the attacks and the defenses, because it's really linked to how the model and the learning work. For now, you need that level of understanding of your models in order to understand how this works. This could also be an obstacle for hackers, because in order to design new methods – well, maybe there are easier ways to attack a machine learning system than to design a whole new way to corrupt gradient descent. So it's good news from one point of view. But what I also liked when I started to do research, especially with these new tools that are coming – for example, there is a tool that was released by NVIDIA, I think, which is an environment that allows anyone to actually try to hack a machine learning model. So it's an environment made for attacks, in order to get some hands-on experience and try to understand how it works. I think this is a good area and it's good that we’re starting to see these kinds of tools. There is another thing that is now organized, I think by Microsoft – it's actually a challenge. It's like the Kaggle challenges for data science, but here it's not about getting the best performance; instead it's about attacking an existing model, trying to divert it, and making attacks against this model. And there’s also a defense component, where the competitors try to best defend a model and make it robust to these attacks. So I hope that we're going to see more and more initiatives like this, because such initiatives make more and more people aware of and knowledgeable about how this works.

50:43 Demetrios

Yeah. And that's one of the reasons why I love talking to people about security at this point in time, because it feels like it is one of those pieces of machine learning that is a little bit put off to the side. Or, like you talked about at the beginning, “Who owns this problem? Who is the one that is responsible for the security piece?” And I really liked what you said at the beginning on “Each person has to do their job, whatever that may be. And you really need to look at trying to make your piece of the puzzle as secure as possible and then once it's handed off, the next person is also trying to make their piece as secure as possible, so that you don't have any problems in the future and you don't have these hacks.” So, with that being said, this has been an excellent conversation. I really, really appreciate talking with you Sahbi. It has been a long time coming. I know we wanted to do this months ago and we finally got it done. Hopefully, everyone enjoyed it. If you are out there still listening, the way that you can show your support is by giving us a like or commenting, or subscribing, or whatever the cool kids do these days for the podcast and YouTube videos. That's all we got for today. Thank you everyone. Thank you, Sahbi. Thank you, Vishnu. And the word of the day is… what was it? Pansia? Panacea?

52:09 Vishnu

Panacea. [chuckles]

52:12 Demetrios

There we go. Hopefully you all learned something.

