MLOps Community
+00:00 GMT
Sign in or Join the community to continue

Security and Privacy

Posted Mar 06, 2024 | Views 185
# Security
# Privacy
# AI Technologies
Share
SPEAKERS
Diego Oppenheimer
Diego Oppenheimer
Diego Oppenheimer
Co-founder @ Guardrails AI

Diego Oppenheimer is a serial entrepreneur, product developer and investor with an extensive background in all things data. Currently, he is a Partner at Factory a venture fund specialized in AI investments as well as a co-founder at Guardrails AI. Previously he was an executive vice president at DataRobot, Founder and CEO at Algorithmia (acquired by DataRobot) and shipped some of Microsoft’s most used data analysis products including Excel, PowerBI and SQL Server.

Diego is active in AI/ML communities as a founding member and strategic advisor for the AI Infrastructure Alliance and MLops.Community and works with leaders to define AI industry standards and best practices. Diego holds a Bachelor's degree in Information Systems and a Masters degree in Business Intelligence and Data Analytics from Carnegie Mellon University.

+ Read More

Diego Oppenheimer is a serial entrepreneur, product developer and investor with an extensive background in all things data. Currently, he is a Partner at Factory a venture fund specialized in AI investments as well as a co-founder at Guardrails AI. Previously he was an executive vice president at DataRobot, Founder and CEO at Algorithmia (acquired by DataRobot) and shipped some of Microsoft’s most used data analysis products including Excel, PowerBI and SQL Server.

Diego is active in AI/ML communities as a founding member and strategic advisor for the AI Infrastructure Alliance and MLops.Community and works with leaders to define AI industry standards and best practices. Diego holds a Bachelor's degree in Information Systems and a Masters degree in Business Intelligence and Data Analytics from Carnegie Mellon University.

+ Read More
Ads Dawson
Ads Dawson
Ads Dawson
Senior Security Engineer @ Cohere

A mainly self-taught, driven, and motivated proficient application, network infrastructure & cyber security professional holding over eleven years experience from start-up to large-size enterprises leading the incident response process and specializing in extensive LLM/AI Security, Web Application Security and DevSecOps protecting REST API endpoints, large-scale microservice architectures in hybrid cloud environments, application source code as well as EDR, threat hunting, reverse engineering, and forensics.

Ads have a passion for all things blue and red teams, be that offensive & API security, automation of detection & remediation (SOAR), or deep packet inspection for example.

Ads is also a networking veteran and love a good PCAP to delve into. One of my favorite things at Defcon is hunting for PWNs at the "Wall of Sheep" village and inspecting malicious payloads and binaries.

+ Read More

A mainly self-taught, driven, and motivated proficient application, network infrastructure & cyber security professional holding over eleven years experience from start-up to large-size enterprises leading the incident response process and specializing in extensive LLM/AI Security, Web Application Security and DevSecOps protecting REST API endpoints, large-scale microservice architectures in hybrid cloud environments, application source code as well as EDR, threat hunting, reverse engineering, and forensics.

Ads have a passion for all things blue and red teams, be that offensive & API security, automation of detection & remediation (SOAR), or deep packet inspection for example.

Ads is also a networking veteran and love a good PCAP to delve into. One of my favorite things at Defcon is hunting for PWNs at the "Wall of Sheep" village and inspecting malicious payloads and binaries.

+ Read More
Katharine Jarmul
Katharine Jarmul
Katharine Jarmul
Principal Data Scientist @ Thoughtworks

Katharine Jarmul is a privacy activist and data scientist whose work and research focuses on privacy and security in data science workflows. She recently authored Practical Data Privacy for O'Reilly and works as a Principal Data Scientist at Thoughtworks. Katharine has held numerous leadership and independent contributor roles at large companies and startups in the US and Germany -- implementing data processing and machine learning systems with privacy and security built in and developing forward-looking, privacy-first data strategy.

+ Read More

Katharine Jarmul is a privacy activist and data scientist whose work and research focuses on privacy and security in data science workflows. She recently authored Practical Data Privacy for O'Reilly and works as a Principal Data Scientist at Thoughtworks. Katharine has held numerous leadership and independent contributor roles at large companies and startups in the US and Germany -- implementing data processing and machine learning systems with privacy and security built in and developing forward-looking, privacy-first data strategy.

+ Read More
David Haber
David Haber
David Haber
CEO @ Lakera

David has started and grown several technology companies. He developed safety-critical AI in the healthcare space and for autonomous flight. David has educated thousands of people and Fortune 500 companies on the topic of AI. Outside of work, he loves to spend time with his family and enjoys training for the next Ironman.

+ Read More

David has started and grown several technology companies. He developed safety-critical AI in the healthcare space and for autonomous flight. David has educated thousands of people and Fortune 500 companies on the topic of AI. Outside of work, he loves to spend time with his family and enjoys training for the next Ironman.

+ Read More
Demetrios Brinkmann
Demetrios Brinkmann
Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

+ Read More

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

+ Read More
SUMMARY

Diego, David, Ads, and Katharine, bring to light the risks, vulnerabilities, and evolving security landscape of machine learning as we venture into the AI-driven future. They underscore the importance of education in managing AI risks and the critical role privacy engineering plays in this narrative. They explore the legal and ethical implications of AI technologies, fostering a vital conversation on the balance between utility and privacy.

+ Read More
TRANSCRIPT

Security and Privacy

AI in Production

Demetrios 00:00:05: Oh, look who it is. Mr. Diego Oppenheimer. I conned you into being part of one of these conferences yet another time.

Diego Oppenheimer 00:00:20: I am always happy to do it. Can you hear me all right?

Demetrios 00:00:23: I hear you loud and clear, man. It's great to have you. We've also got some other amazing people joining this next panel that we're about to do. We've got Mr. Ads Dawson, great to have you back. We've also got Catherine, how you doing? And of Gandalf fame, Mr. David Aber. So I get to jump off stage and let Diego run the show.

Demetrios 00:00:52: On this one. We're going to be talking all about security privacy in the LLM land. I'll let you take it over, Diego.

Diego Oppenheimer 00:01:01: Sounds good. Thanks, Demetrius. Hi, everyone. Super excited to talk with this panel. Has some amazing folks. I do laugh about the data kind of privacy and security kind of talk about it in a good way because I remember talking with Demetrius a long, long time ago about how there's all this security attached to models, yet we still allow our data scientists to kind of essentially import all of know in every single time of their scripts that they use. And one day somebody's going to exploit that. And then I think over the last two years we've been sharing back and forth text around know, hey, here.

Diego Oppenheimer 00:01:37: It happened again and happened again. So that was kind of pretty funny. So I'm Diego Oppenheimer. I'm a co founder at Guardrails AI, which is an organization that focuses on helping get the behavior you want out of llms and LLM workflows. I have an amazing panel today of absolute experts, and I'm going to let them introduce themselves. So I'm going to start with ads and let you introduce a little bit about your background to the audience.

Ads Dawson 00:02:06: Hey, everyone. Pleasure to be here. Thanks for having me. I'm a senior security engineer at Cohere. I primarily work on our red teaming operations, our application security, as well as protecting our foundation models. My background, I started as a network engineer, kind of moved into network pen testing and then towards sort of rest APIs and now forked into the crazy lm gentle AI space. Over to you. I'll pop on it to David.

David Haber 00:02:36: Hey, guys. Good to see you all. And it's great to see you again as well. So, David, co founder of Lakera, we work with some of the largest organizations out there and most exciting organizations out there in helping them adopt Gen AI securely. And as Demetrius mentioned, we also the company band Gandalf, which I'm sure some of you have enjoyed playing over the last couple of months.

Katharine Jarmul 00:03:05: Katharine, thanks. I'm Katharine Jarmul. I'm a principal data scientist at Thoughtworks. I've been working in NLP for more than ten years. So I saw the rise of llms coming a long time, and I moved to privacy in machine learning. Privacy, particularly natural language processing, about seven years ago. And I am the author of the book that got released last year, well received, called practical data privacy, about building data privacy into machine learning.

Diego Oppenheimer 00:03:35: Amazing. Well, I'll get right into it and I'll pick on you, Catherine. First, help the audience define for us security versus privacy in this context. Right. And in AI, when we think about certain practices around security, what should we think about? When we think about certain practices in privacy, what should we think about? Given your book, this seems to be very relevant.

Katharine Jarmul 00:03:57: Yeah. I've worked in the field of cryptography as well, so working on encrypted machine learning, which is where kind of like you can think of security and privacy meeting. But essentially we can think of privacy as both a social, a political, and a legal concept that is quite separate from security. GDPR is a privacy law, and it's probably the most relevant law for most of the things that we do around machine learning right now at scale. And it by default isn't a cybersecurity law. So it's a privacy law because there are special legal rights to privacy both within Europe and internationally. Sometimes privacy and security support each other. That's the world I want to be in.

Katharine Jarmul 00:04:41: Sometimes they actually don't support each other. So we can think of security surveillance systems like a doorbell camera as something that we might argue supports security. But in this case, it would be a huge violation of privacy. And we have kind of some of the same things happening in llms, I think, where, depending on how you surveil prompts and how you want to say, dangerous behavior could be a violation of privacy.

Diego Oppenheimer 00:05:10: To get really tactical on a couple of the privacy, so you mentioned one with around the prompts when thinking privacy in the world of llms, which is kind of what we're all looking at, what are some of the top two, three things that people need to be aware of today from a practitioner's, I.

Katharine Jarmul 00:05:29: Mean, I would expect that there might be some changes in the LLM landscape based on the New York Times case. In case you haven't heard of it, the New York Times is suing for copyright abuse. Of course, it could also be any type of ownership abuse, as well as private data abuse. For OpenAI, if you've been following any of the work of Google research. It's been long proven now, over the course of four or five years that llms memorize part of their training data. About 30% of repeated text is memorized. This is a feature, not a bug. It has to do with the way that the kernels in particular areas function.

Katharine Jarmul 00:06:07: So if you want to think of it as a large multidimensional space, we can say that the LLMs, especially the over parameterized models of any type, end up memorizing some of the training data in order to generalize for sparse areas of the data set. And so this, by default means if you're using a large over parameterized model, LLM or otherwise, it probably memorized some of the training data. There's some ways we can combat this. It's not all ways that we can combat this, but you should be aware that there may or may not be changes to the landscape because of that. And then, of course, there was some very interesting research released today by Google Research saying that they had received very good with differential privacy training of on device text data. So I think, thinking through how you actually train models from the beginning with privacy built in, which might mean differential privacy, it might mean federated learning, might mean encrypted computation, or encrypted machine learning, which are all, of course, covered in the book, that might be something that becomes quite relevant for this space.

Diego Oppenheimer 00:07:21: And I'll open this up to anyone. This is something that I've struggled, not struggled, but kind of like thinking through it philosophically, which is, I really like that you bring up kind of memory. Anybody who signed an NDA or actually have worked with kind of like contracts with mutual NDAs knows that essentially part of the exclusion is like something that you can learn, that you can learn and learn into memory while you're doing it. So you can't take papers, you can't take pictures. But if you learn into memory, there's this exclusion almost from legal frameworks around kind of what that looks like. Should we have a different paradigm for computers? Should shi be outside of that? As humans, we expect humans to be able to learn and use that information in every day. It's actually really hard to unlearn something, especially if you're in a room or kind of exclude yourself from it. Should we have a different paradigm for AI? If so, why? And it's not.

Diego Oppenheimer 00:08:21: Again, I'm kind of curious, and I'll let it to anybody, but I realize it's a little bit of a philosophical. I'll get a lot more tactical after this. But I've always kind of had that in the back of my head. And since you brought it up, that memory thing is a really interesting problem, I think. David, you're smiling. So I think you have something to say about it.

David Haber 00:08:39: I'm just thinking through this, and I wonder what ads would say. So, I mean, I mean, the short answer should certainly be yes. We probably need a new paradigm for what we are going through now. Some of the things that we are spending a lot of time thinking about is not only sort of the current systems that we are seeing out there, humans interacting with a typical LLM sort of application, but how do things look like as we take this into a world of agents, and we've got more and more machine to machine interactions, so now we are talking, not a human sort of learning things and memorizing things anymore. And sort of the analogy to the NDAs that we are signing today, that's going to be factored out very quickly, right? Like bits move faster than atoms, and there will be more machine to machine communication. These things will not only make decisions on our behalf, but they will also learn on our behalf and memorize things. So I think there's definitely a need for not only, I guess, the concept of an NDA, but more importantly, so a question around how we can trust these interactions and trust the learning that happens there and the memorization that happens there. These things are going to make all sorts of decisions for us.

Diego Oppenheimer 00:10:07: Right.

David Haber 00:10:07: And people typically, it depends who you talk with, but people typically overestimate the timeline towards a highly agent driven world. This is going to come sooner than we think. And I love the NDA example. I'll still think of a better answer, but I think there's just many, many pieces that we have to revisit and put in place for a highly agent driven, driven world.

Katharine Jarmul 00:10:34: I just want to chime in, you're still legally liable if you repeat something under NDA to somebody that hasn't signed the NDA, just in case anybody doesn't know.

Diego Oppenheimer 00:10:43: Yeah, I think in practicality, if you look at enough, this is something I was always been curious about. In practicality, if you look at it, it's very hard to prove those. Again, in terms of, again, you're right. I'm going to start with that. You're absolutely right where you're saying that. But there's not a lot of cases of that actually having played out or being able to have shown that going into kind of like, more of the practical. So this brings up this kind of, like, pursuit of privacy versus utility. Right.

Diego Oppenheimer 00:11:13: Like, what are the potential benefits of llms? Do they outweigh the privacy risks that we are talking about? What is the concept of private? I think there's a bunch of laws, especially in Europe, that have been quite successful in terms of helping with privacy. But some would argue that the illusion of privacy on the Internet is an illusion to a certain degree. So how do we have the utility versus kind of like benefits versus kind of risks of what we're doing right now with AI? Any frameworks that you've thought about or any interesting kind of work that you can point the audience.

Katharine Jarmul 00:11:52: Mean, I think Apple's approach is genius. So Apple's just been going around and signing license agreements with a bunch of content providers and several other AI startups are doing the same thing. So obviously, if you have license to use content and that license is extended to whoever produces the content, you find yourself in a very different legal standing. And I wouldn't be surprised if the LLMs that Apple tries to introduce, which are also a little bit more task specific and content specific, end up outperforming many of the others when they release them.

Diego Oppenheimer 00:12:29: Sorry, I seem to have lost my headset here. I can't hear anything. Can you guys still hear me?

David Haber 00:12:35: We can hear you perfectly. Yeah.

Diego Oppenheimer 00:12:38: Hello, can you hear me?

David Haber 00:12:40: Yeah, sorry, we lost.

Demetrios 00:12:44: Stop working, ringleader. But that's.

Ads Dawson 00:12:48: LLM has him now.

David Haber 00:12:50: Yeah.

Demetrios 00:12:53: While he's debugging that. I love what you're talking about, Catherine, because the idea here is that you're saying, oh, you know what? Apple is making a play not just for these bonds to be between Apple and selling New York Times on your iOS subscription, but also now if they can deepen that relationship, it can be used in their models. And so in case. Diego, you missed that. Are you back with us, Diego?

Diego Oppenheimer 00:13:24: I am back. Sorry about that. I don't know what happened. My Bluetooth headset decided that. But you're on top of it. Thanks for jumping in, like right there.

Demetrios 00:13:32: That's what I do. You caught me in the middle of eating a passion fruit, so I'm going to go back to eating the passion fruit. I'll let you guys get back to.

Diego Oppenheimer 00:13:41: Great. Yeah, thanks for that. So I'm going to one more time think on kind of privacy, and then I'm going to switch over to kind of like some of the security questions. Totally, Gregory, with you on the kind of licensing content and taking approach and really taking a serious kind of look at like, look, this is content generated. There's authors, we need to kind of give that now what know, we see a lot of data being shared across, especially here in the US. You get a credit card, they're sharing your data with everybody. You sign up for this, you're sharing your data with everywhere. So in those cases where it's a little bit hard for you as an individual to kind of control where all your data is being shared and how it's being shared, does that change somewhat? Your answer, Catherine? Because there is people licensing that data, I would argue it's technically legal for them to license that data because there's all the permissions, but they're really de anonymizing you from the Internet, and you are starting to show up in a bunch of how does this tie to the right to be forgotten when these models get trained, right? And how difficult that is to be achieved? Or David or anybody wants to kind of answer that? Does the question clear a little bit?

Katharine Jarmul 00:14:56: Yeah, I get what you're asking. So there's like a few different paradigms at play. So one is copyright, right, which is a little bit separate than privacy, but we tend to use those same arguments in machine learning, because with generative AI, at least the arguments line quite. Then you know you have the copyright owner, and the copyright owner legally has a copyright to the material. And that's why if you read the New York Times lawsuit that was filed, it's quite interesting because they were able to retrieve full article text from OpenAI models and so forth. So the argument here is like, if I can retrieve the full article text from a model and that article is copyright underneath somebody's thing, you either need to pay for that content or you have to untrain. From what I've heard, copilot has put some filters on the same type of problems. So copilot has the same problems, right.

Katharine Jarmul 00:15:53: Many of these over parameterized models will have the same problems, because if you follow Vitali Feldman's research, then you will see that it's mathematically impossible actually to not memorize data if your model is over parameterized. So that's fine. Okay, what does that mean? Some of the research, the research I referred to today was about Google's research and federated learning. Google's been doing federated learning on device for keyword prediction. Also, Apple does the same. And they received. Today, they announced, the research team announced they had received, like, within ten percentage points accuracy using differential privacy during federated training, which has kind of been unheard of. I haven't read the paper yet.

Katharine Jarmul 00:16:37: I haven't had a chance. It was just today. But one thing that I advise folks and why I wrote the book, is to start to familiarize yourself with privacy technologies and also with how they can be used in combination with stuff that you already do. And this is because when you use them correctly and with swag, Google and Apple do it, you're often still compliant under strong privacy laws. Now, we could have a whole other argument about do you want to go above and beyond compliance? But that's probably for a different time.

Diego Oppenheimer 00:17:13: Awesome. Thanks very much. I'm going to switch it over to a little bit of the kind of on the security side. So from a governance and kind of side I've seen personally, because I've worked in the space for quite a bit, in terms of how governance and kind of risk management of ML models from what's called classical ML has switched over to kind of like the new generation. What I'm not, because I'm not an expert on this, is how has the security landscape changed? So, as I was going to ask you, this type of stuff that you were working on from a security perspective on kind of like, I guess we call it classical ML now. Is that the official term to make us all feel old? But how's that shifted? What has changed? Is it mostly the same? Do you look at it in the same way? Really curious to understand, kind of like that landscape, and since you're right in the middle of it, yeah, I'd say.

Ads Dawson 00:18:07: It'S at least twofold, to be honest. You have what you would define when you say traditional machine learning, you've got your traditional kind of ML sec ops, vulnerabilities, threats, risks. But like we've been saying, this is kind of becoming agent driven. Large language models are being embedded within applications at the speed of light right now. Like a common one, for example, is chat bots. So whilst we have our traditional application security controls, as well as deep down in our ecosystem, our infrastructure and network controls, we also have what's kind of developed in terms of what we've been developing on the side of the new OS. Top ten for LM applications project is where those two kind of overlap. So where classic machine learning vulnerabilities may introduce and may be a lot more prevalent when embedded within an LM application, helping us define those additional trust boundaries in areas for risk where that could be exploited, where it can't be in a traditional environment.

Ads Dawson 00:19:15: There are overlaps between the two for sure, but they do share common vulnerabilities within them. Example. Right, let's take model data poisoning, something like front running or split view. Data poisoning is more like the ML Sec ops perspective. But then that's not relevant to LM application security per se, right. Because this is before the model is even produced and before it's put in the application. This is way over in the kind of machine learning cycle. But then obviously they share their obvious common things, right? Like we always try and monitor and try and remediate dependency vulnerabilities in our SDLC.

Ads Dawson 00:20:01: That's really obviously even more prevalent now when we're considering our machine learning lifecycle as well. So they definitely do overlap, but is creating a new landscape, new companies. AI red teaming is now kind of becoming its own thing, penetration testing. So, yeah.

Diego Oppenheimer 00:20:27: Awesome.

Ads Dawson 00:20:27: Yeah.

Diego Oppenheimer 00:20:28: So I think in the audience we have a lot of people building stuff with AI and it's interesting because if you're using for any of us who've built SaaS products and launched them to the world, we've all gone through a security review, right. We've all had to kind of present ourselves as saying, hey, this is the stuff that we've done and either have sock two or ISO whatever, someone in the standards and all these compliance and controls. And it's really interesting. What should we expect from our model provider vendors so that we can actually go sell again, assuming we're consuming models from other folks, right. If I'm building my own company and building my SaaS, what should I be asking from a security perspective from my vendor so that I can provide that to my clients, right. So that they get happy? I think that's a core, core question that we all have as we do that. And so I don't know if ads, you have some thought. I know David, you're going to have a lot of thoughts on this.

Diego Oppenheimer 00:21:31: I'm going to let answer that. If he has specific questions, then I'm going to move over to David.

Ads Dawson 00:21:36: No, for sure. I think part of what we've tried to do with the OS project is that there is definitely a gap. Traditional machine learning doesn't count for things like traditional application security, whereas all these kind of like model based vulnerabilities. So there's definitely like a kind of emerge that's happening here. There's definitely a gap to educate one another. But yeah, I'll let David, if he has anything to say on that.

Diego Oppenheimer 00:22:07: Okay, so David, you're building an area that's near and dear to my heart, which know I've said for a long time now that the number one barrier for AI adoption is that people don't understand how to measure and understand the risk associated with know. In general, building businesses is about taking risk, in financial services is about taking risk. It's all about taking and understanding risk. But measuring risk here is really hard. You've built a company that's attacking this space to a certain degree. So maybe tell us 2 seconds about your company, but also how should people think about and privacy and security go into that risk vector? How should we think about AI risk and what are the things that we can do about it?

David Haber 00:22:52: Yeah, I think just first of all, everything that ad said, and I'll add to that, I think we see a lot of companies now that are looking to the work that ads and group over at OAS have done to ultimately understand what are first and foremost the main risks that we should be looking at. It's from our perspective, one of the most helpful resources out there that companies use to educate themselves. And then I would say sort of the traditional risk management side is in a transition phase right now where they're used to working with traditional software, non gen AI software. Things are well oiled sort of on that end. And all of a sudden they have tens or hundreds of product teams that are running faster than anyone can think. And they're putting not only security teams under pressure, but they're also putting traditional risk management under pressure. Right. What does it mean now? What does it mean to pull in an OpenAI model into our stack and work with that? What does it mean to start sort of fine tuning our own models or even build our own models from scratch? I don't think we have templates for how to execute on that at this point that are sort of universally applicable.

David Haber 00:24:14: We fortunately get to speak to some really exciting companies out there and we see some mature sort of concepts evolve, and then it really comes down to ultimately, what kind of application are you building? More and more we see companies in the news that I guess somewhat prematurely deployed their geni application into the world and they're now sort of paying for that. Some companies are restricting use and sort of building genai applications to internal applications to mitigate some of the risk that comes from the outside world. Others are building publicly facing applications. So they all come with different risk profiles. And again, just coming back to what I said initially, OA is a great resource to learn more about this. I literally point every person to that. And there's a lot more work in the pipeline, I know, on multiple fronts to bring more sort of boilerplate execution templates into the world as it comes to risk management. And obviously of course as it comes to securing these gen AI applications prior to deployment, but then also in.

Ads Dawson 00:25:31: To. Sorry to interrupt, also wanted to ask, like David said, nailed it. One of the real things that is key primitive and is that education and is just basic elements that we go back to, like threat modeling for example, is like at a high level.

Diego Oppenheimer 00:25:51: Is.

Ads Dawson 00:25:51: A way that you kind of use a framework or develop your own framework to evaluate and assess risk by looking at your environment. Like David said, a lot of these vulnerabilities are not even relevant to your scenario. So about identifying just through basic things like threat modeling where you can identify risk, identify trust boundaries, and then look at your strengths and weaknesses of that particular trust boundary and effectively go back to even old school network and infrastructure security controls can really make key differences here.

David Haber 00:26:25: Yeah, I agree. Sorry, maybe just a follow on thought, because I fully agree. And I also think it's interesting to take a step back and think about what is really sort of changing the landscape here. And I think that's where it really pays off to go back to the basics that ads alluded to just now. But fundamentally, what we're doing here is we are bringing universal capabilities with universal interfaces into the world that literally like every twelve year old can play with. Gandalf is the best example of that. And you now push these products out that there's no tomorrow. So I think what our advice typically is to sort of take a step back and think about what are the trust boundaries? What does the threat model look like? Go back to the basics and you're in a good position to push applications out into the world.

Ads Dawson 00:27:19: Yeah, a lot of the vulnerabilities that have been made public, from what you can see from public reports on LM application security, for example, a lot of them are traditional things like cross ex scripting, which are resolved through things like content security policies or even API to API authorization. So like implicit trust, human in the loop intervention as well. Sorry, that's the last time.

Diego Oppenheimer 00:27:48: No, that was great.

Katharine Jarmul 00:27:50: Can I interrupt and chime in one additional thing, please, go for it. Yeah, so thoughtworks has been helping some large companies deploy llms in production settings. And one of the things that we've found that I'll just give as a tip for some of these that went live in big systems is develop your evaluation criteria early and then update it quite often with your beta testers. Because a the model behavior will change even on small version changes, and b if you don't have evaluation criteria for unexpected behavior, users will find ways to create unexpected behavior and will teach you to build your evaluation sets. So, yeah, evaluation is extremely important and you should probably define your own rather than use a generic one.

Diego Oppenheimer 00:28:43: I love that a lot of this stuff is extremely practical. So one of the kind of frustrations in the space, which I think is just a normal of how it works, is it was very philosophy driven for a while, right? We were talking about AI. Safety was mostly all defined about ethics, and everybody wanted to talk about ethics. And then once you double click into that, you're like, well, ethics change based on population and civilization and who you are and what. These are not concrete things that we can actually address, but now we're actually applying real security principles to kind of like same work. Ultimately, these are probabilistic workflows, right. That's kind of like the change, right. But it is code and it is machines, and we are working with them and we have a lot of the traditional stuff that's being applied.

Diego Oppenheimer 00:29:27: I think we also hit a bunch of problems. I will always remember being on a call with a CIO of a very large insurance company and him telling me that they would not be able to deploy our software, our ML, unless the scans came back with 95% vulnerabilities addressed from the generic scan. And I was like, well, I hate to break this to you, but this means you need to shut off all your software across the entire company because no software comes back with that kind of level of here's your framework and here's how you address each one of those risks and what you can be exploited versus what you can't be exploited. But if you're just purely scanning around this. And so I'm glad that a lot of these security engineers, like odds are kind of coming in and kind of bringing that knowledge of how this is done, because I think some of the same exact principles are going to apply. I'm going to kind of. We only have a couple of minutes left, so this is going to be a little bit of a crossfire. So when we think about kind of the black box nature of a lot of these, well, neural nets in general.

Diego Oppenheimer 00:30:36: Right. But on top of that and the fact that we're consuming models via an API, is there things that we need to be. What do we think that the golden state where we need to get to. Right. For essentially, ultimately, I think the objective function is to be comfortable with risk. I'm not even going to say reduce risk. I think we just be comfortable with the risk that we're taking. So what do we think as an industry needs to happen for that to happen.

Diego Oppenheimer 00:31:00: Right. For that to be true, we're going to assume that neonness will continue being black boxes to a certain degree. We are going to continue using them because the kind of positive nature of them is the benefits. We're seeing it every day. What do we need to do to make ourselves comfortable with the risk that we're taking there?

David Haber 00:31:20: Yeah, I have very strong opinions here, and I will not hijack the complete conversation. I only have one or two minutes here. But I would even add to what you said, Diego. I think it's about controlling risks. It's not about mitigating or measuring. It's about controlling, being in control. And I think one thing we've learned over the last decades, building some of the most complex systems out there, is that almost none of them are fully explainable. So this conversation around black box versus not black box and explainability, that's good, but it's also distracting in many ways.

David Haber 00:31:58: And so I think ultimately, what we need to get to here is a world where the world of AI meets systems and safety engineering to ultimately bring in principles into our development workflow and operational workflows that allow us to control the risk that is out there. If you think of a rocket that takes us to the moon, you've got a rocket that is not explainable at all, but has flown a million times and, you know, sort of what it's doing. That's a pretty good rocket to get on versus a rocket that is fully explainable and has never flown at all. That's probably not the one you want to get. Yeah, I think there's a lot to be said here, but I think it's really about controlling risks. So I love what you said there. Diego.

Diego Oppenheimer 00:32:46: Catherine adds, any last thoughts? Because we have to start wrapping this up. Sadly.

Ads Dawson 00:32:53: No 100% control risk. Accept the residual risk that is left over, whether that's an accepted risk for whatever reason. 100%. I would definitely agree with that statement, which you identify through things like threat modeling.

Katharine Jarmul 00:33:11: Now, I would say privacy engineering is another thing to look at. We're still kind of in the infancy compared to security engineering, but it's an exciting field full of lots of fun math. So if you got into machine learning for math, I welcome you to the field of privacy engineering.

Diego Oppenheimer 00:33:29: Well, I want to thank you all for being on this panel. I think it was awesome. I also want to thank you personally. Ads, Catherine, I believe this, deep in my heart, you're working on, really, the problem that needs to be worked on. Because I think this is the gateway to adopting what the greatest technology we will see in our lifetime is. Right. And I very much deeply believe that. I think we're working on this problem, and it's about how do we adopt that technology? And you're all working on it.

Diego Oppenheimer 00:33:55: So thanks for working on that. This was awesome. Thanks for having me. And Demetrius with floor is yours again.

Demetrios 00:34:02: Incredible stuff, everybody. That was super cool. As you know, Diego, I love just geeking out about this, and I really appreciate that we got all of you on a virtual room to geek out about it all at the same time. So it's really fun. And thanks again. I'm going to keep it moving. So that means you all got to say goodbye, and we will talk later. Hopefully I'll get to see you close.

+ Read More
Sign in or Join the community

Create an account

Change email
e.g. https://www.linkedin.com/in/xxx
I agree to MLOps Community’s Code of Conduct and Privacy Policy.

Watch More

25:44
Posted Apr 27, 2023 | Views 923
# LLM
# LLM in Production
# Data Privacy
# Security
# Rungalileo.io
# Snorkel.ai
# Wandb.ai
# Tecton.ai
# Petuum.com
# mckinsey.com/quantumblack
# Wallaroo.ai
# Union.ai
# Redis.com
# Alphasignal.ai
# Bigbraindaily.com
# Turningpost.com
32:46
Posted Jul 21, 2022 | Views 667
# DevOps
# Security
# Observability
# tryhelix.ai
53:24
Posted Jul 02, 2021 | Views 347
# MLOps Security Practices
# Ethical AI