MLOps Community

Building Trust Through Technology: Responsible AI in Practice

Posted Mar 25, 2025
# Responsible AI
# Transparency and Privacy
# Lumiera

SPEAKERS

Allegra Guinan
Co-founder @ Lumiera

Allegra is a technical leader with a background in managing data and enterprise engineering portfolios. Having built her career bridging technical teams and business stakeholders, she's seen the ins and outs of how decisions are made across organizations. She combines her understanding of data value chains, passion for responsible technology, and practical experience guiding teams through complex implementations into her role as co-founder and CTO of Lumiera.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building Lego houses with his daughter.


SUMMARY

Allegra joins the podcast to discuss how Responsible AI (RAI) extends beyond traditional pillars like transparency and privacy. While these foundational elements are crucial, true RAI success requires deeply embedding responsible practices into organizational culture and decision-making processes. Drawing from Lumiera's comprehensive approach, Allegra shares how organizations can move from checkbox compliance to genuine RAI integration that drives innovation and sustainable AI adoption.


TRANSCRIPT

Allegra: My name is Allegra Guinan. I am the co-founder and CTO of Lumiera, a boutique advisory firm focused on responsible AI, and I take my coffee as a flat white with oat milk.

Demetrios: What is happening, good people of Earth? I thoroughly enjoyed this conversation with Allegra, all about the nuts and bolts and practicalities of responsible AI. I've talked a lot over the years about responsible AI because, some of you may not know this, I had a whole other podcast that was all about AI ethics, responsible AI, all of those buzzwords that seem to get more important every day we move forward. This was pre-ChatGPT, back when I was running that podcast. And sometimes I felt like it was a lot of talk when we would get into the topics of responsible AI or ethical AI. In this conversation with Allegra, it did not feel like that at all. So let's stop wasting time and jump in.

Demetrios: All right, let's talk about responsible AI. That feels like a good place to start. It is something I know you've been diving into. What do you mean when you say responsible AI?

Allegra: Yeah, that's a good question, because there is no one definition of responsible AI. When I think about it, it's an approach to the design, the development, the deployment, the use, and the regulation of AI that focuses on a set of key principles related to responsibility. And these range; when people list these principles, it can go from three to ten. A few definitely come to mind: fairness, accountability, transparency, explainability, privacy and safety, reliability and robustness. I know that's a lot, but they're all really important, and there are many others that are important as well. But the concept of responsible AI is an approach that is holistic to every part of the lifecycle.

Demetrios: As you were listing all these different parts of responsible AI, it reminded me that everything ended in a Y. I'm like, so basically any word in the English dictionary that ends with a Y.

Allegra: Yeah, that's in there. Those are all the principles. Any word you can think of, they're always in there. But that's the whole thing, right? There are so many different ways to think about it. And all these different organizations are putting together frameworks around it. NIST, for example, the National Institute of Standards and Technology, has a risk management framework for AI that's more about transparency when they talk about it. There's the International Association of Privacy Professionals; they also have an approach. And then each company that's out there, any major company playing in this field, comes up with their own definition and principles and puts it out there. And that's what makes it really challenging to measure and to talk about, because nobody's really talking about the same thing, and it's up for interpretation by whoever's talking about it at that time.

Demetrios: Yeah, for me, the words that you're using can be highly subjective. Even on transparency: what exactly does transparency mean? One corporation is going to interpret transparency as one thing, and then maybe a nonprofit is going to interpret it as another thing. And so you're constantly getting different shades of transparency, even though we're all saying, oh, we agree that transparency is important.

Allegra: Yeah.
And not even everybody agrees that it's important, and yeah, it does change. Explainability and transparency seem very closely related; all of these things can really be close. Transparency, when I think about it, is having visibility into what is going on when you're building out a system. You can see how they thought through design, you can see what kinds of decisions were made. But explainability is really understanding: a human should be able to understand how we got to this output, why those decisions were made. Those things are different, and quite difficult a lot of the time as well. And so if a company decides explainability is not their top priority, then obviously they're going to build in a very different way and move in a very different way. And I think that's because these words are not really clearly defined and we're not aligned on them. It also makes it hard as a user to understand what questions you can be asking, what you should be thinking about. Or, if you're looking into buying an AI product or solution for your company, how do you even figure out what you're supposed to be looking for? That's why it's really important for us to have these kinds of conversations and open up the floor to how we define them.

Demetrios: Okay. So if I'm understanding that correctly: if your company does not give you a clear guiding light or guiding principles on how they see responsible AI, then you will feel the ripple effects as you're going out there trying to buy something from the market. You don't know what questions you need to be asking of that provider, because you're not quite clear on it. Yeah, we want something responsible, we're really big into responsible AI. And it's like, okay, what does that mean? You're like, I don't know, it's just a box that we have to tick.

Allegra: Yeah, exactly. And that's another way that I think about it quite differently. It's not just a checkbox. It's not just compliance. These are two different things. It's one thing to have regulation, and obviously there's more and more regulation coming out, so more companies will be trying to just check off that box: yes, we are compliant, even if it's bare minimum compliance. But when we talk about responsible AI, it should really be an organizational change, a cultural change that's coming from leadership. But it's also the responsibility of everybody in an organization. Even if you're just building something on your own, there should be this element of understanding the intent of what you're building, why you're putting something out there for people to use. I think a lot of the time the privacy team or legal team is maybe doing the reviews of a vendor if you're working in an organization, or you have engineering and they're suddenly the ones building everything, especially in this space where things are moving so fast, and product managers or whoever are like, yeah, just go build this as fast as you can, put it out there, and leaders want to see that. So suddenly it's an engineer's responsibility to understand this term and to put it out there. And if you don't have alignment on that within an organization, within a region, whatever it is, as big or small as the scale you want it to be, then you're not going to end up with a result that everybody agrees is fair and ethical and all of these different things.
Demetrios: Do you feel like this is something that is more pertinent for larger organizations?

Allegra: I don't think so, because there are a ton of small organizations that are building things that could end up getting a ton of use and reach. And there's a snowball effect: they put something out there and somebody else builds on top of it, or whatever. I think it really should just be a part of the culture of development. I don't think it really matters what scale the organization is. Even if it's just an individual, like I said, I think you should have that understanding of what the intent is, why you're actually building something. Do you have a positive intent? Is it negative? Are you keeping track of the decisions you made, even just for yourself? Do you understand why the system that you're building is making the decisions it's making? And if you don't, I think that's really a time to step back and reflect on why you're doing it, which is hard, because people are super excited and they want to build really fast, which makes sense. But I think it's easy to get carried away and then end up somewhere that is really far from what you originally intended.

Demetrios: And do you have a set of questions that you tend to encourage people to ask? Like just what you were mentioning: what's the intent of it? Why are you building it? How are decisions being made? That feels like a great start. It also feels like something that anyone who is involved in the process should be doing. Now, whether or not they do it is another question. Potentially that's because there's a lot of friction in answering those questions, or I just answer it in my head as I'm coding my function, and I'm like, yeah, of course I know why I'm doing this, but that doesn't necessarily codify it for the whole organization. So have you seen best practices on questions to be asking, and then how to make those questions discoverable for the whole organization?

Allegra: Yeah. I think actually one step before that is who is in the room when you're coming up with the kinds of questions to ask. And I'll just back up for a second. I'm the co-founder of an advisory firm that's focused on responsible AI, and so we are helping organizations make better decisions around their AI: the way that they're building it, the way they're using it. One of the first things we do when we're starting to speak with leaders at an organization is understand who's in the room around decision-making. One of our core values is perspective density. When you think about density in the regular sense, it's how much you can fit into a certain area; perspective density is how many different perspectives you can have in a given space, in a given room. So if you enter a room and you're starting to ask these kinds of questions that you just brought up, who do you have there in that space? If you only have a very small group of people who are from the same side of the organization or have the same background, then the kinds of questions you ask will not end up in a very fair or responsible place. So I think before you even come up with that, it should be: do we have everybody in the room that we think is important to be here? Probably not at first, and that's fine.
So understanding where the gaps are, and how you can bring in more views so that you can come up with the set of questions that will lead you to the kinds of answers that you need.

Demetrios: I like this perspective density, and recognizing that you probably do not have all the right voices in the room. Or maybe it's that you need to socialize some piece of writing to a broader group of folks to get that perspective density.

Allegra: Yeah, definitely. We only know what we know. We only have our own lived experiences, and those can be really diverse within ourselves; we could have had a ton of different lifetimes lived in various places or whatever, but it's still just ours. So as you're starting to work on some sort of product or solution that will have far-reaching impact, and potentially be more out of your control than anything we've previously had in technology, how can you bring in more views to make sure that you're building something that can be used in a fair way by a lot of different people?

Demetrios: So my instinct instantly is: oh, I've got to bring in more stakeholders. This is going to make things so much slower. Or somebody, of course, is going to want to be like a dog peeing on a fire hydrant, and maybe what they say doesn't actually add much value, but they have to feel like they're contributing in one way or another. I think we've all worked with that person before. How do you keep the conversation productive while bringing in those different stakeholders?

Allegra: I think there are a couple of different ways. One is this cultural shift as well. It shouldn't feel like a burden; it's really a change of mindset about how you build. If you are under a lot of pressure to build really quickly, for example, and then you're also given what may feel like roadblocks, where you have to have more conversations, you need to create more artifacts, whatever it is, then that will go against what you feel needs to be the top priority, because that's what you've been told previously. So I do think there has to be an understanding of whether you're working in an organization or on a team where it's okay to take more time to do things correctly. And I think you can apply that to anything, right? Even before responsible AI: you want to work on foundational things, you want to work on your tech debt, you can't just keep building faster and faster and not worry about all these other things. So it should be that same mindset: you're just taking a little bit more time to build things out correctly so that they are more reliable, scalable, and robust, and then you can build faster after that. If you have this set of principles up front and you know what you're supposed to be doing, the kinds of decisions you should be making, what to consider, then you can be faster later. So giving it a bit more time at the beginning is one thing, along with that shift in mindset. The other is creating technical requirements that are really easy to work with, rather than just having a bunch of policies and a lot of back-and-forth conversation, which I do think needs to happen. But maybe the engineers don't need to be involved in all of it, although I do think engineers need to be present a lot more in business conversations.
Allegra: But for example, the EU AI Act technical requirements that the folks from LatticeFlow built out, COMPL-AI or however you say it, I think that was a really good example of how to make something really tactical, rather than this massive regulation that you're not really sure how to break down and use every day. So I think if we can do more of that, and go more in the direction of how you just build in checks, have metrics that you all align on that are related to these concepts, and they're just part of your workflow, then I think it makes it a lot less daunting to have as an engineer.

Demetrios: And the beauty of that is you've got something that you can almost check against. There's a side-by-side where you can say, all right, are we doing it or are we not? And if we're not, what are the reasons for not doing it? Or if we are, why are we doing it? It's explaining your rationale and your reasoning. Solution maybe isn't the right word, but it is a very pointed style of documenting your work. Have you seen other areas where you can, or no, maybe that's not the question I want to ask. I think I want to ask something along the lines of: how have you seen business requirements being translated into those technical requirements, or legal documents being translated into technical documents, such as what happened with COMPL-AI?

Allegra: Yeah, I think we're not seeing a ton of it, to be honest. That was the first technical breakdown of the EU AI Act, which is obviously a very new piece of regulation. It's not something that I've seen used a lot, and not just that specific use case, but in general, technical requirements that are directly tied to something like that, some piece of legislation or whatever it may be. I think if you've been working in development for any period of time, you do see this lack of connection between what people want originally and what ends up being built. I've been a product manager and a technical program manager; I very clearly see that disconnect all the time. So we're not even good at doing it to begin with, let alone in this kind of new area. So I think there's a lot of opportunity there for sure, and it's definitely something I want to see a lot more of. And that's also why it's important to have engineers in the room, because then you get this perspective as well: this is what would be easy for me to understand, if it meets your needs; these are the kinds of questions that would come up for me when I'm building, to know if I'm doing this the way I should be, or the way you want it to be. So I think having those conversations and figuring out how to format these requirements into technical requirements is something that should be happening more. We do have metrics, obviously, that are available out there. I don't think they're as widely used as some performance metrics, but there are some for bias and toxicity. You still have to define what your threshold is. So again, there has to be a conversation between whoever's building and the organization about what that means for you, and what is the line where you say, this is not appropriate, we will not move forward. But they exist out there, and I'm hoping that we see more, and that they become more socialized and regularly used in everything, because right now
I'm seeing a lot of: yes, the context was accurate; yes, there was low latency. But these are performance things. It's not actually saying whether this was harmful in the end, or even whether it was useful to whoever was on the other end. So I think we need to change a bit how we think about metrics, really making that a core part of the planning process up front, and incorporating more of the concepts of responsible AI into those metrics so we can measure them.

Demetrios: It almost feels like some of this stuff is playing on a different level than the actual technical part of it, right? You were talking about performance metrics there, but "is it harmful" or "is it useful" is more of a product metric in my eyes. And those are almost bigger questions to be asking, and hopefully you're asking them earlier in the conversation than when you've got the AI deployed and then you recognize it's not really that valuable, which we've all seen happen plenty of times, just because someone wants to be using AI, or at least say they're using AI.

Allegra: Yeah, they are different, definitely. I think right now we're seeing a lot of engineers driving development, because they're the ones who are the most literate in the space. Companies and organizations are realizing that they have people on their team who are able to build what they think is necessary, and so a lot is being put on the engineers. And I think naturally it leans more toward performance, because that's what you're thinking about as you're building: okay, yes, this is exactly what I wanted, it's the output I expected, however you're defining it for yourself. It's not really the same mindset as with product management in general, or building out a product, where you have this whole planning phase, you define your success metrics, and those are impact ones and product-related and performance. You have a ton of artifacts, it goes through a process. None of that is really happening right now. People are not thinking about AI as a product. So much is put on engineers to just build and put it out there. That's what I'm calling out: there's a lot of focus on performance metrics, just naturally because of how things are evolving, and I think there needs to be a shift where we are considering other elements, other concepts, and bringing those into the conversation as well. I did the MLOps Community meetup on responsible AI on the tactical side, on the engineering side, and I was asking: if a company decides they care about responsible AI and they put out all these things, and then they ask the engineers to work nonstop, put things out all the time, is that really responsible? They're impacting their workforce. They're not giving their engineers the tools they need. They're not bringing them into the conversation. They're saying they're doing responsible AI, but I think there's so much pressure on engineers right now, and they're really the only ones driving a lot of the conversations that I'm seeing forward on the tactical side. There are a lot of theoretical conversations and regulation and things like that, but at the end of the day, I'm not seeing a ton of connection between that and the engineers.
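To make the kind of workflow check Allegra describes concrete, here is a minimal sketch of a release gate that evaluates responsible-AI metrics alongside the usual performance metrics. The metric names, thresholds, and the score_toxicity / score_bias helpers are hypothetical placeholders rather than anything prescribed in the conversation; in practice you would plug in whichever evaluation library and limits your team has aligned on.

```python
# Minimal sketch of a release gate that checks responsible-AI metrics
# (toxicity, bias) alongside performance metrics (accuracy, latency).
# All names and thresholds below are hypothetical placeholders; swap in
# the evaluation library and limits your organization has agreed on.
from dataclasses import dataclass


@dataclass
class EvalResult:
    name: str
    value: float
    threshold: float
    higher_is_better: bool

    @property
    def passed(self) -> bool:
        # Pass when the value is on the right side of the agreed threshold.
        return self.value >= self.threshold if self.higher_is_better else self.value <= self.threshold


def score_toxicity(outputs: list[str]) -> float:
    """Share of outputs flagged as toxic (0.0 to 1.0). A trivial keyword check
    stands in here for a real toxicity classifier."""
    flagged = [o for o in outputs if "hate" in o.lower()]
    return len(flagged) / max(len(outputs), 1)


def score_bias(outputs: list[str]) -> float:
    """Bias score where lower is better. Placeholder value; replace with a real
    evaluation such as a demographic-parity gap measured on your own test set."""
    return 0.0


def release_gate(outputs: list[str], accuracy: float, p95_latency_ms: float) -> bool:
    """Run every check, print the rationale, and block the release if any fail."""
    results = [
        EvalResult("accuracy", accuracy, threshold=0.85, higher_is_better=True),
        EvalResult("p95_latency_ms", p95_latency_ms, threshold=800.0, higher_is_better=False),
        EvalResult("toxicity_rate", score_toxicity(outputs), threshold=0.01, higher_is_better=False),
        EvalResult("bias_gap", score_bias(outputs), threshold=0.05, higher_is_better=False),
    ]
    for r in results:
        print(f"{r.name}: {r.value:.3f} (limit {r.threshold}) -> {'PASS' if r.passed else 'FAIL'}")
    return all(r.passed for r in results)


if __name__ == "__main__":
    sample_outputs = ["Your order ships tomorrow.", "Here is the refund policy."]
    approved = release_gate(sample_outputs, accuracy=0.91, p95_latency_ms=420.0)
    print("release approved" if approved else "release blocked")
```

The specific numbers matter less than the fact that the thresholds come out of a conversation with the wider organization and that the check runs as routinely as any performance test.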
Demetrios: Which I think is where a lot of the backlash against the responsible AI movement in general comes from: it is a lot of talk, but when the rubber hits the pavement, what do you actually do? So it's nice that you're calling it out and saying, look, engineers are being forced to create things, and then you have other folks saying, yeah, we want it done responsibly, but they're not clearly defining what responsible means, and they're not really giving the engineers the space to create the responsible AI that they're looking for.

Allegra: Yeah, it's exactly that. And the literacy part is so big here for responsibility. There's a lot of technical literacy on the engineering side of things, and I think there can be areas of literacy related to AI in teams like privacy, who have to be really aware of what's going on in terms of regulation. But then everywhere else, even at the C-suite level, maybe even especially depending on where you are, you're not seeing a ton of time put in to really understand what they're asking for, and to understand the capabilities, the limitations, who they have on their team right now and who they don't have, in order to make the kinds of decisions that move the organization forward in the way they have in mind. I think there's a huge onus on the teams outside of engineering to be part of this conversation, to create these definitions, to create that culture, and then put a framework into place that makes sense for everybody, and not just put it onto one team or another and say, go figure this out. I want something transparent, but I won't tell you what that means. Good luck. That's where we are right now in terms of the conversations I'm seeing, and I think there needs to be a huge shift.

Demetrios: It's good. It's like the Charlie Munger quote: tell me where I'm going to die so I can never go there. You're telling us everything that we shouldn't be doing, so hopefully we do the opposite. Which begs the question: what are the teams that you feel are doing this well actually doing? Basically the opposite of what you're talking about, but maybe you've seen certain strategies, even, that have proven to be very effective.

Allegra: Yeah, I think having a strategy to begin with is quite effective, not just going out there and trying to do way too much without a plan. Maybe I just love a plan, but I do think it's helpful to have some sense of where you're going and why you're going there. Starting small, I think, is quite helpful: in an area where you have a lot of people who are engaged and understanding and who are going to work together on building something that's impactful in a responsible way, try to just go after one area and then build from there, and be like, okay, this worked, we proved our value. This is not any new kind of information that I'm sharing, but we're still not necessarily seeing it. One of the default modes is to just put AI in everything, and that can also happen because maybe you have a lot of applications that you use and suddenly they all have AI in them, so it happened to you. But just starting small and focused, building up, and seeing where you actually have value being delivered is one thing. Another is shared learning.
So I think organizations that are finding the people who are on the team already and super excited about things, who have started a Slack channel or whatever it is, who are starting to host weekly sessions and getting really excited; those people have a lot of knowledge and are really dialed in, and that can be really helpful for spreading literacy across an organization. Once you come up with whatever your principles are, whatever responsible AI means to you, those people can be so helpful in getting that out to everybody else. I think that's under-leveraged right now as well, but companies and organizations that have that do a lot better, I think, in terms of moving things along in the way that they have in mind. And across the board, something that isn't utilized enough or celebrated enough, really, are the people who are champions for new things, who naturally start the kinds of conversations you would want. It doesn't have to be forced; there will be people who will drive that forward.

Demetrios: Yeah, the evangelists who are doing that. It's such a great call-out. When you talk about a plan, what is a successful plan? What does that look like? And, on the other hand, what about when we've already got something going and now we're trying to retroactively go back and make it responsible?

Allegra: Yeah, it obviously depends on what success means to you and what that plan ends up looking like. So maybe the most important part is how you define what success is. Is it overall adoption? Is it increased literacy across your team? Is it to be called out as one of the innovators in this space? Do you want to build something that's never been built before? What are you trying to do here, and do you understand what that is? That's the very first thing. Again, not just saying "we want AI" and that's it; that's not enough of a plan. I don't think you need to know every single detail. It doesn't need to be prescriptive in the sense of "these are the exact solutions that we're trying to put out there," but having an understanding of what success would mean for you, I think, is the best place to start. And then I forget what the second question was.

Demetrios: If you already have a product out there and you're retroactively trying to fit some responsibility in.

Allegra: Ah, yeah. So obviously you can make updates to things; once you put something out there, it doesn't need to be the only version that ever exists. I hope they have some sort of iteration process in mind. This is actually something that we do as well. A lot of people have POCs out there that are failing. They didn't deliver the ROI that they expected, for so many different reasons: because they didn't define success, because they didn't roll it out in a way that makes sense to their workforce or to their customers, because they didn't come up with the right considerations, whatever it may be. So I actually think it's a really good exercise to have something that you already pushed out that isn't necessarily delivering what you imagined. And hopefully you didn't choose something that your entire business depends on, and it was quite small and controlled.
So you can go back and say, okay, if we could have done this differently, what's not going super well? Where are we seeing outputs that potentially don't align with our values or our vision or whatever it may be? And let's make changes around those specific things. And now we know we need to have experimentation in place, or we need to have some sort of framework for evaluation in place that we didn't have before. So I think it can actually be really helpful if that's the position you're in, because you'll see very clearly and very quickly where those gaps are and where you're starting to fail on things that are quite important.

Demetrios: Yeah, that is such a great call to leverage these learnings, leverage the failures as a learning, and then recognize, all right, next time we do this, how are we going to do it differently?

Allegra: Yeah. This concept of infallibility is so interesting. I don't know if you are reading or did read Harari's Nexus. He is the author who wrote Sapiens, which was a book that a lot of people enjoyed, and Nexus is the latest one. He's talking about information as a concept, and he ends up talking about AI as well. There's this whole concept of infallibility: if you think that you are infallible and you don't plan for failure at all, then when you do fail, and it's inevitable, you'll have nothing to fall back on, and you will ultimately be destroyed. He talks about it in terms of really large institutions and organizations, like the church, for example, which consider themselves infallible. And this applies to technology; it applies to small systems and big systems. If you don't plan that something will go wrong and that you will have to iterate, and you think everything will be perfect and this is the be-all-end-all perfect solution, ultimately it won't work. It will fail, and you'll have nothing after that. So I think it's a really interesting concept that we should think a lot about when it comes to technology, when we're building. It very closely translates into robustness as well, which is one of those principles that's quite important for responsible AI, and different from reliability. Robustness is: okay, there's a path, but I'm going to create more paths, because if this one doesn't work, then I'll still be able to handle any sort of situation that comes my way and pivot and be flexible, adaptable. That's also super important for anything you're putting out there now. So when you're building, you should plan to iterate. It should not be the last version you ever put out there, because we're going to make mistakes. We're going to miss something. You're not going to have thought of everything, because you didn't have everybody in the room, because you were moving quickly. Just naturally, it's going to happen. So accept that, first of all, and build in a process to handle it when it happens.

Demetrios: What does that look like with an actual product? Let's take an example. Maybe you've seen some AI products that have been put out there; what does robustness equal for those products, besides just iterating on the product?

Allegra: Yeah. I'm not sure I've seen a ton of really good examples. I'm definitely seeing examples where things are failing, and I'm also seeing things being rolled back. I'm sure you are as well.
We've all seen these stories that have come out this entire year of major companies' large language models doing crazy things and putting out text that is completely off the rails. And then things ultimately get rolled back, and I'm sure changes are made and apologies are made and then things are fixed. It's hard to tell when it's not that; I think it would probably be more subtle and wouldn't necessarily make the news for a lot of people, especially if you're not the biggest organization. I think the most important thing is to plan for that, and it's fine to iterate. This is also where transparency comes into play, right? If you're working with AI and you put something out there into the world, having people understand that they are interacting with AI is really helpful, and I think it highlights responsibility as well. Like: these answers may be incorrect; they are based on data up until this point in time; we have these limitations. It doesn't need to be specific about the dataset. I think we should be calling that out more than we already see, which is quite limited at the moment, partly because a lot of people are building on foundation models that they have really limited visibility into. But being a lot more explicit about the gaps that we do know about means users understand: okay, this may not be perfect, I'm going to adjust my expectations a bit. Then it's fine if this ends up being rolled back and iterated on; everybody's still okay, with the understanding that that was going to happen. So I think it's just preparing, being adaptable, and understanding that you're not going to get it right the first time.
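As a small illustration of the disclosure Allegra is describing, here is a hedged sketch of how a team might surface that users are interacting with AI, along with its known limitations and data cutoff. The AIDisclosure structure and its wording are hypothetical, not a legal template or an EU AI Act compliance mechanism; adapt the fields to your own product and whatever regulation applies to you.

```python
# Hypothetical sketch of a user-facing AI disclosure: state that the user is
# interacting with AI, when its data ends, and what its known limitations are.
from dataclasses import dataclass


@dataclass
class AIDisclosure:
    system_name: str
    knowledge_cutoff: str          # e.g. "June 2024"
    known_limitations: list[str]   # plain-language gaps users should know about

    def banner(self) -> str:
        limits = "; ".join(self.known_limitations)
        return (
            f"You are interacting with {self.system_name}, an AI system. "
            f"Answers may be incorrect and are based on data up to "
            f"{self.knowledge_cutoff}. Known limitations: {limits}."
        )


disclosure = AIDisclosure(
    system_name="our support assistant",
    knowledge_cutoff="June 2024",
    known_limitations=["cannot access your account", "may miss recent policy changes"],
)
print(disclosure.banner())
```

The point is simply that the limitations are stated up front, in language a general user can act on, rather than discovered after something goes wrong.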
Demetrios: Yeah, it's funny: in the EU, I think one of the things they put in the EU AI Act is that if it's AI, it has to explicitly say so. And talking with a few folks who have built voice agents in the US, they say how redundant that is. I asked specifically, do you have the voice agent identify itself as AI when it's talking on the phone with people? They said no, because people figure it out pretty quickly. They understand there's something not quite right with this, so saying it doesn't really accomplish much. Since people already know something's up and know it's not a human, you don't get that far by saying, "I'm an AI voice agent, okay, let's get into our conversation." All that happens, going back to the metrics, is that if you say that as the first thing, when you identify yourself as a voice agent, people just hang up, and that doesn't get people to have the experience with your voice agent, right?

Allegra: I think that is going to be a hard argument to hang on to, because obviously these agents will get better and better. Quality is going to be a lot higher, and there will be a lot more personalization as well, so a bot will know how to talk to a particular person in a way that really resonates with them. So I don't know how long that argument will hold up. But I also think this is where the literacy part comes back into play. The general public still has quite a misunderstanding; there's a large gap in understanding the limitations of these systems, what they do, how they work. A lot of it is just, okay, yeah, I know ChatGPT exists. Or, yes, that sounds like a bot, but what does that actually mean, and what can I trust? Should I trust this to be 100 percent accurate? We talk about hallucinations in general; I know you talk about it, it's on your merch. It's actually been less this year, I think a couple of years ago it was more the topic, but people don't even understand that concept. With the general public, we're in such a bubble, especially in engineering; it's an even smaller bubble within this whole thing. So a random person isn't really going to understand where those lines are, and I don't think it's fair to make that assumption. And again, when you're building and designing, deciding this is how we're going to deploy this, this is what the experience will be like, if you don't have an understanding of what your general user thinks or how they interpret that, then you're going to make a decision that is not aligned with reality. Maybe it's your assumption that they would hang up the phone if they're talking to a bot, but do you know that? Have you actually surveyed your user base? Do you understand how they feel about AI? Maybe you'll make different decisions if you understand that they're skeptical, or if you hear from them that they want a different kind of experience; maybe you can build that instead of just building something based on an assumption, and then an assumption about how somebody would respond to that, and so on. I think that's part of the problem, right? We're just making a ton of assumptions and not actually understanding who's on the other side of things.

Demetrios: Yeah, because I instinctively think, and this is my assumption, that if somebody is interacting with a voice agent, they just want to get done what they called in for. If it's a human that helps me get what I'm calling for, great. If it's a voice agent that helps me get it, great. If it's me having to bang a few numbers on the telephone and it's completely robotic, great. Just as long as it gets done and gets me off the phone as fast as possible.

Allegra: Yeah. In this example where somebody has built a voice agent, if they really understand their user base, and the main thing people care about is just getting the thing done as fast as possible, and they have done a survey, the team did their due diligence and they really understand that people don't care whether they're talking to a bot or not, then it doesn't matter if you make it transparent and announce that you're talking to a bot. Maybe the bot can say, I am an AI agent, but I will get this done in one minute, guaranteed...

Demetrios: Faster than...

Allegra: ...a human, whatever it is. You can make it fit what your user base wants. I don't think that's an excuse not to follow this kind of regulation or to understand the purpose of it. I think it's a bit of a deflection.

Demetrios: Yeah, it makes sense. And it is fascinating to think about how certain areas of the world are really putting guardrails on from the beginning, like the EU AI Act making it mandatory for AI to call out that it is AI, and then other parts of the world, in the US, it's, yeah, do whatever you want, wild west, whatever works best. It's almost like saying the free market will decide, in a way.

Allegra: Yeah, people talk a lot about these differences, and I know that there's a perception, and it might be real as well,
that regulation is getting in the way of innovation and things moving quickly. The EU obviously has a very different approach to this than the US, and the rate at which things are moving reflects that, of course. But yeah, I still support it. I still support having this idea of understanding intent, making sure that it's not harmful, taking steps toward creating the tools that you need in order to put things into place so that you can actually build in a way that doesn't put humanity at risk. That's just my stance on it. I know not everybody feels that way, but I think it's important. The mission of Lumiera, this company that I co-founded, is a future equipped for humanity. So rather than adjusting the way that we are to fit technology as it comes at us, this is an opportunity for us to shape technology to support and protect everything that is human and that we want to maintain. And that shift in perspective is where you also have this question: if you're thinking about responsible AI as something that's getting in the way of advancing technology as fast as possible, are you still considering the human aspect of it, and what you will have left of humanity once you've reached this goal of getting your AI system out there as quickly as possible and expanding to X amount of people? What have you actually accomplished for humanity if you've completely wiped us off the map, or whatever the potential negative outcome is once it's out there? So that's where I think the difference is. Obviously not everybody agrees, and there will be some people who don't think the future needs to be equipped for humanity, but it's something that I really believe in.

Demetrios: When you say that, it makes me think, okay, presumably by saying that, you are stating there are certain things about our human experience that we want to preserve, or that we want to make sure, as you mentioned, the future keeps intact. What are those things in your eyes, the human aspects of it?

Allegra: Yeah. One thing that we talk about a lot is friction, which I know is against everything that people want to hear in the tech space. But I really think it is something that makes us human. We're heading in this direction of hyper-personalization, where everybody is seeing something that is so tailored to them, and it's as fast as possible, and they have to jump through zero hoops to get there. You never have to really think anymore, and you're never in a challenging situation. And I think that is something that makes us really human: having to be in an uncomfortable place, take a step, be in a position where everything takes a bit longer. If you've been in a different country and you don't speak the language and you don't have a translate app readily available, being able to read the signs or get by or ask a stranger something and figure it out, I think that is such a powerful and life-shaping experience. And it doesn't have to be that one, it could be anything, but when you make everything frictionless, you really remove
And so that's something that I really associate with being human and there are many, but I think that one's a really important one. Demetrios: You saying that reminded me of a podcast that I was just listening to I can't remember probably two weeks ago with [00:40:00] this doctor who has studied over 10, 000 near death experiences, and one of the things that he said that Struck me to the bone was how a lot of folks, when they have the, their near death experiences, they have the typical thing that we think of they go into a void and it's just bliss and peace and all the good emotions and feelings you could think of. Demetrios: And then they come back and it's back to this world, which is a bit more dense and there is conflict. And one thing that the doctor said, or the person, I'll get the name real fast and tell you exactly what the name was, but one thing that he said was, there [00:41:00] is a, Uniqueness about us being here on this plane and having to deal with conflict that you don't get when you are in this heavenly state, because it is all just bliss and the positive emotions. Demetrios: And so what I realized is that. Us having conflict and the adversarial nature that you get in the human experience, that's almost like the beauty of being human, being able to go through that. And then ideally, this is where it gets a little bit more flowers in the pride eye for my personal belief set is that. Demetrios: You learn to love through the conflict and you learn to be able to have those positive emotions Despite all the adversary. Allegra: Yeah, this concept of like you can't can only experience [00:42:00] Light when you have dark and you only know when things are good because you've also experienced bad if you've only ever had this perfectly curated frictionless Experience where nothing ever goes wrong and you never have to employ critical thinking. Allegra: Then it's not really the most exciting and interesting experience. And it's also not a robust one, back to that concept where it's, you're not really understanding that there could be something that gets in the way of You and what you want and so when you start to face that you will not have the tools to be able to handle them in a human way, like maybe you have an app to handle it, but let's imagine that your app breaks and you as a human really handle life anymore if your entire World has been curated and made perfect by these solutions. Allegra: Quote unquote solutions to everything you might need. Yeah. So I think yeah, this idea of having a [00:43:00] chance right now for as many people as you. Possible again, back to perspective density to contribute to the conversation of what our future will look like to shape technology in a way that protects what we care about the kinds of experiences that we actually enjoy to have that I think that's, it's really important right now. Demetrios: So the guy's name that I was talking about is Dr. Bruce Grayson, just as an FYI, and I heard him on the Tim Ferriss podcast. I just looked it up. And it's studying over a thousand near death experiences. I think I said 10, 000, actually just a thousand. Allegra: That's a lot still. Demetrios: Still quite a big number of people who have died and come back. Demetrios: And it is great to think about this idea of not dying. But the idea of just having as many people in the room giving [00:44:00] their Opinions on the outcome that you're trying to shape and having that perspective density is very cool to think about. 
Allegra: Yeah, this idea of being a leader in this time is really related to that. For myself, curiosity is such a big part of the kind of leader that I am. Because I work with a lot of engineers and a lot of technical teams, I'm in this position where I can be super curious and ask a lot of questions, and I end up at a result and an understanding that is so much richer and denser, because I've spoken to so many people and really sat and listened: how does this work? How are you working on it? How would you think through this problem? It's not just me having this idea of, this is the best solution, I want to do this, just do it. I'm not like that at all, because I don't know a lot of things. There are a lot of things I do know, but there are a lot of things I don't know, and so I really hang on to that: how can I go into this room assuming I know nothing, and come out of it with a much stronger understanding because I've really taken the time to talk to all these different people? So I think right now what we really need is a kind of leadership that is built on curiosity and on interacting with a lot of different folks. One of our other values, besides perspective density, is intellectual generosity. I think that's another huge element of this new era, for sure. If you know something, being able to share it with others; if you are in a space and you have the opportunity to bring other intelligent people into the room and have them share what they're working on, that's massive. It's also why I really love the MLOps Community meetups. I'm just getting to bring people together and be like, hey, how are you thinking about this problem? What are you working on? Here's the stage, talk through it, and now we can all learn from you. I think that's such a transformational, important part of this time that we're in.

Demetrios: Yes, it is such a good excuse to get to talk with and hang out with incredible people. You've learned the trick of why I do this podcast: I get to talk with people like you.

Allegra: Yeah.

Demetrios: Totally. I will also give a shout-out to your newsletter. I think it is awesome, and more people should be subscribed to it. So if anyone wants to subscribe, we will leave a link in the description. I think you're doing great work. Thank you for coming on here.

Allegra: Yeah, thank you so much for that shout-out. We just hit one month of the newsletter, and I'm so proud, I'm so happy about it. It has really been the most exciting year of research and intellectual generosity. So thank you so much for the opportunity.

