
Cleric AI SRE: Towards Self-healing Autonomous Software // Willem Pienaar // Agents in Production

Posted Nov 15, 2024 | Views 936
# SRE Agents
# Cleric AI
# Agents in Production
SPEAKERS
Willem Pienaar
Co-Founder & CTO @ Cleric

Willem Pienaar, CTO of Cleric, is a builder with a focus on LLM agents, MLOps, and open-source tooling. He is the creator of Feast, an open-source feature store, and contributed to the creation of both the feature store and MLOps categories.

Before starting Cleric, Willem led the open-source engineering team at Tecton and established the ML platform team at Gojek, where he built high-scale ML systems for the Southeast Asian decacorn.

Adam Becker
IRL @ MLOps Community

I'm a tech entrepreneur and I spent the last decade founding companies that drive societal change.

I am now building Deep Matter, a startup still in stealth mode...

I was most recently building Telepath, the world's most developer-friendly machine learning platform. Throughout my previous projects, I had learned that building machine learning powered applications is hard - especially hard when you don't have a background in data science. I believe that this is choking innovation, especially in industries that can't support large data teams.

For example, I previously co-founded Call Time AI, where we used Artificial Intelligence to assemble and study the largest database of political contributions. The company powered progressive campaigns from school board to the Presidency. As of October 2020, we helped Democrats raise tens of millions of dollars. In April of 2021, we sold Call Time to Political Data Inc. Our success, in large part, is due to our ability to productionize machine learning.

I believe that knowledge is unbounded, and that everything that is not forbidden by laws of nature is achievable, given the right knowledge. This holds immense promise for the future of intelligence and therefore for the future of well-being. I believe that the process of mining knowledge should be done honestly and responsibly, and that wielding it should be done with care. I co-founded Telepath to give more tools to more people to access more knowledge.

I'm fascinated by the relationship between technology, science and history. I graduated from UC Berkeley with degrees in Astrophysics and Classics and have published several papers on those topics. I was previously a researcher at the Getty Villa where I wrote about Ancient Greek math and at the Weizmann Institute, where I researched supernovae.

I currently live in New York City. I enjoy advising startups, thinking about how they can make for an excellent vehicle for addressing the Israeli-Palestinian conflict, and hearing from random folks who stumble on my LinkedIn profile. Reach out, friend!

SUMMARY

Cleric is building an AI-powered Site Reliability Engineer capable of autonomously monitoring, diagnosing, and fixing issues in complex distributed systems. This talk will explore how Cleric's AI agent can handle novel situations, learn continuously from its environment, and respond to issues at speeds far beyond human capabilities. It will cover the technical approach behind creating an AI that can understand system architectures, reason about failures, and take appropriate actions without human intervention. The discussion will also examine how autonomous SRE agents could reshape the future of software operations and reliability.

TRANSCRIPT

Adam Becker [00:00:02]: And I believe we have Willem around. Willem, let's see. Are you around?

Willem Pienaar [00:00:06]: Hey. Hey, Adam.

Adam Becker [00:00:08]: Good to see you, man. How have you been?

Willem Pienaar [00:00:11]: It's been great. It's been, what, at least a year?

Adam Becker [00:00:14]: It's been. It's been some time. And since we last met, you've been building. I believe it was still in stealth mode last time I met you. And now, are you live with Cleric?

Willem Pienaar [00:00:28]: We are live at enterprises, yes.

Adam Becker [00:00:30]: Oh, awesome.

Willem Pienaar [00:00:31]: And if you want to try the product, you can reach out to us.

Adam Becker [00:00:34]: Nice. Actually, I just posted a job description for an SRE, so maybe I should play around with your tool first. Or perhaps equip whoever I hire with it. So very excited to hear what you have to share with us today. Let's see. I think that your screen is already up. Let me see.

Willem Pienaar [00:00:58]: Yeah.

Adam Becker [00:00:59]: Awesome. All right, so if folks have questions, I'm going to leave five minutes at the end for Q and A, so definitely write your questions there and I'll be back in about 20 minutes. Willem, take it away.

Willem Pienaar [00:01:13]: Cool. Thanks, Adam. Hey, everybody, I'm Willem. I'm the CTO of Cleric. Our mission at Cleric is really to free engineers up from the grunt work of the production environment. I started Cleric because I personally had this problem: we all pay the price of producing code and shipping it into the production environment, and the more you develop, the more you pay that price. So our mission at Cleric is really to free engineers up from that grunt work.

Willem Pienaar [00:01:45]: And if I had an AI agent by my side that could help me diagnose and debug issues faster and take that load off me, I could get back to developing software. So that's what I'm going to talk you through today: how do we go from being reactive to proactive, and how can AI help us? The agenda for today: I'm going to talk a little bit about the problem we're trying to solve; what Cleric, an AI-powered SRE, is and what that means; some of the lessons we've learned along the way, so there will be two lessons; then three focus areas we're looking at right now, some of the more gnarly challenges in the space; and then really our road to full autonomy and what's ahead longer term. But I just want to say this upfront: one of the things we've realized in building this product is that a lot of the complexities around LLMs and AI, those intricacies, are a little bit more solvable than the change-management and product challenges of practically deploying an agent into a team as an actual teammate.

Willem Pienaar [00:02:53]: So let's start with the infrastructure problem and why it is so vast and egregious. Here we have just a graph of infrastructure that we've mapped out. Infrastructure is always complex and always evolving; it's dynamic. This is a simple single Kubernetes cluster that we encountered at one of the enterprises we work with, and we try to model it on our side; Cleric can build essentially this graph of relationships. These are pods and deployments and other resources within a Kubernetes cluster. It's actually a very small cluster, but the number of connections here is already very expansive. So when something goes wrong, just understanding this and keeping it in your mind isn't feasible.
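
To make the scale concrete, here is a toy sketch of the kind of relationship graph being described. The resource names and relation labels are purely illustrative; the talk does not show Cleric's internal representation:

```python
import networkx as nx  # assumes networkx is installed

# Toy version of the cluster relationship graph described above:
# nodes are Kubernetes resources, edges are relations like "manages" or "calls".
g = nx.DiGraph()
g.add_edge("deployment/auth", "pod/auth-7f9c", relation="manages")
g.add_edge("pod/auth-7f9c", "service/postgres", relation="calls")
g.add_edge("deployment/payments", "pod/payments-1a2b", relation="manages")
g.add_edge("pod/payments-1a2b", "service/auth", relation="calls")

# Even a handful of resources yields an expansive web of connections; a real
# cluster has thousands of nodes, which is why no engineer can hold the whole
# graph in their head.
print(g.number_of_nodes(), "resources,", g.number_of_edges(), "relations")
```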

Willem Pienaar [00:03:40]: So most engineers have heuristics for how they process this information. They know which services to look at and which ones not to. But this doesn't really scale, and it's extremely complex. And I don't need to tell most people, as they understand the complexities of production systems. So dealing with that is hard, and it's only getting worse. We're shipping more into production environments, and this happened a lot during the ZIRP era, when we built all these microservices and all these clusters, and a lot of the people that originally built those systems aren't even there anymore. And the way humans scale is different from machines. We can add more people, but people also produce more.

Willem Pienaar [00:04:18]: And so there's also an incentive to create more production systems. Hiring your way out of this problem is costly, and it still doesn't solve the underlying problem. Secondly, if you're just adding scripts and runbooks, well, we've been trying that for years. You can't code and script your way out of this, because every situation is unique and requires human judgment to deal with. So that's a first-class problem as well. And finally, you can't just add more logs and metrics and observability; it's not going to solve the underlying problem. If you throw 500 Datadogs at this problem, you'll still be left with more dashboards and more information, but you need action. And this is going to come home to roost.

Willem Pienaar [00:04:59]: We've got a bunch of AI startups, many of them you'll be hearing from at this conference. We're producing more code, and it's coming fast and hot, and it's all going to land straight on the production environment, and it's going to be poorly understood. And that's fine, that's a good problem to solve, but we need a counterbalance to it. Code generation will accelerate this to a breaking point. And honestly, where we are today with the production environment is resignation: we've basically said, let's triage the biggest fires and ignore the small fires and wait for them to become larger. That's why engineering teams' backlogs just grow infinitely, essentially. So let me talk through what an AI-powered SRE, or Cleric, is and how it can help.

Willem Pienaar [00:05:49]: We think machines should monitor machines. In the production environment, of course, there are loads of events. Some of them are fine to ignore; they're just business as usual. But sometimes there are failures. And a system like Cleric, an AI agent, is really well positioned because of its ability to react instantly to those events, triage them, and then diagnose problems and, ideally, in the long term, present a resolution or a fix. It can scale out infinitely. Unlike people, it's always available. It never goes to sleep.

Willem Pienaar [00:06:22]: It learns from every interaction, so it gets better and better; and the more you invest in it, the more time you spend tailoring it to your organization, the better it gets. That's why we think Cleric, or AI agents in this space, is such a compelling use case, and it's the use case that I think I'll be spending a lot of the rest of my career focusing on. So let's talk about how Cleric is architected.

Willem Pienaar [00:06:46]: Internally, it's got a reasoning engine. This is really the brain of Cleric, and this is not really controversial; a lot of agentic systems have these kinds of reasoning loops. It has an ability to plan, to execute, and then to reflect on its executions. Cleric also has access to tools, and tools are really what make Cleric powerful. It's deployed in your VPC, in your production environment.

Willem Pienaar [00:07:11]: It has access to your existing observability tools, access to your knowledge bases, your logs, your metrics. So unlike a Datadog, it's not just receiving information and storing it; it can actually access live information from production systems. We don't really introduce new tools into that environment; Cleric can operate like an engineer on your team. Cleric also has memory, so it gets better and better over time. One part is its knowledge graph: it maintains this graph of relationships in your organization, your teams, your clusters, your VMs, how they connect, who's deployed what.

Willem Pienaar [00:07:47]: It always maintains that graph, and it uses it during its investigations and its operation within this environment to get to an answer quickly. It also has a learning module, and this learning module basically means that if it solves a problem, it remembers that, and if you teach it, it remembers that. This architecture is fairly common across agents, but we think it's extremely compelling in the production environment because of the vastness of the tools that are already available, the difficulty humans have keeping up with that, and the complexity of these graphs. Right? We've had service maps in the past, and now we're trying to remodel the same things, but in an agentic way. So let me talk you through quickly how Cleric responds to an alert, how it actually does an investigation. Imagine you get a login failure, and this login failure creates an alert.

Willem Pienaar [00:08:44]: Cleric can then get kicked off from this alert. Once kicked into action, it generates many hypotheses. These hypotheses are potential root causes that it wants to investigate. In this case, a login failure could be caused by an auth service issue, a database connection issue, or a deployment issue. I'm not going to get into the minutiae of a specific use case here; I just want to take you through the high-level flow so you get a gist of how Cleric does an investigation.

Willem Pienaar [00:09:08]: It branches out, and it can scale out as many investigative paths as you need, basically as many as you're willing to afford or allow it to get into. So say it thinks the database connection might be the problem. It does this by, at the first level, calling a tool, getting information, and reasoning about it. It also has pre-existing context from the knowledge graph. It goes through this iterative flow, going deeper and deeper through these steps of iteration, until eventually it reaches a conclusion. It has looked at many different systems, and it arrives at a diagnosis, and this diagnosis is then presented to the human. If you were to do this without Cleric, you'd open up kubectl, you'd open up your dashboards and your observability stack, you'd look at traces, you'd try to correlate across systems; you'd touch loads and loads of systems in order to get your root cause. Now we can do this elastically with an agent, concurrently, and give you a high-confidence answer.
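
As a rough illustration of the plan, execute, and reflect flow just described, here is a minimal, self-contained sketch. Every function, tool, and threshold is a hypothetical stand-in for LLM calls and real observability integrations, not Cleric's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    cause: str
    confidence: float = 0.5
    evidence: list = field(default_factory=list)

# Stand-ins for the LLM-backed steps; in a real agent each would be a model
# call grounded in the knowledge graph and live production tools.
def generate_hypotheses(alert):
    return [Hypothesis("auth service regression"),
            Hypothesis("database connection pool exhausted"),
            Hypothesis("bad deployment rollout")]

def plan_next_step(hypothesis):
    return "query_logs", hypothesis.cause        # pick a tool and a query

def reflect(hypothesis, finding):
    # Reason over the tool output and nudge confidence up or down.
    delta = 0.2 if "error" in finding else -0.2
    return max(0.0, min(1.0, hypothesis.confidence + delta))

TOOLS = {"query_logs": lambda q: f"error spike correlated with: {q}"}

def investigate(alert, max_depth=5):
    """Plan -> execute -> reflect over branching root-cause hypotheses."""
    hypotheses = generate_hypotheses(alert)
    for h in hypotheses:
        for _ in range(max_depth):
            tool, query = plan_next_step(h)      # plan
            finding = TOOLS[tool](query)         # execute
            h.evidence.append(finding)
            h.confidence = reflect(h, finding)   # reflect
            if h.confidence >= 0.9 or h.confidence <= 0.1:
                break                            # confirm or eliminate this path
    return max(hypotheses, key=lambda h: h.confidence)

print(investigate({"alert": "login failures spiking"}).cause)
```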

Willem Pienaar [00:10:06]: Cleric maintains a confidence score, and it reasons about the responses it gets back from infrastructure in order to know when it's right and when it's wrong. It's not perfect; agents do fail, they do hallucinate, but it maintains and tracks that confidence. And the second time it runs, it learns from the previous run. So in this case, a similar issue or alert was fired, and Cleric eliminates a certain path of investigation, goes down a different path, gets information from the infrastructure, drills in, and comes up with a different answer. It again presents this to the human and short-circuits a lot of investigative work, and really a lot of the context switching that engineers are dealing with today. So that's a high-level view of how Cleric operates.

Willem Pienaar [00:10:54]: And as a human, you're only seeing the tip of the iceberg. You're seeing this team member essentially going on a quest to find an answer for you, touching hundreds or thousands of systems, and within seconds, or sometimes minutes, it gives you a very concise answer right in context in Slack, where you're sitting. Again, this is not so different from what a lot of other agents are doing in different spaces, but for us, we see this as an incredibly compelling use case to free engineers from infrastructure. Now the fun part is really the learnings we've had along the way. And I just want to say this: there have been a lot of interesting product and UX challenges in introducing this kind of, quote unquote, teammate into a team. So I'd love to talk you through those. The first one is really trust but verify, and I'll stage this out for you in two slides. The first part is: if we're not sure, don't just throw information at your teammate or an engineer.

Willem Pienaar [00:12:04]: We have a self-assessment of our confidence, and I'll talk you through how we compute that confidence in a bit. But if we're not confident in an answer, we don't waste the time of an engineer. I want to linger on this point a little bit. We spent a lot of time trying out different structures of information. Sometimes we presented links to a web UI and asked an engineer to go to the web UI. Sometimes we gave them all the raw findings, metrics, dashboards, all the facts they need. What we found is that being very concise and leaving it up to the human was critical. That's the first one.

Willem Pienaar [00:12:40]: And the second one was: if you're not sure, don't even say anything. It's better not to waste an engineer's time, because ultimately having this member on your team needs to be a net value add. This is how humans operate as well. And there's progressive disclosure too: if an engineer doesn't know or isn't certain about your answer, give them a way to find out more. So you can open up a link to an investigation or a conclusion we came to, and you can drill into the underlying reasoning as well as the tools we called, to understand and verify our findings and our work. That gives you a lot more confidence in our answer and, honestly, confidence in our confidence score.

Willem Pienaar [00:13:25]: And so the next time this alert fires, you're going to look at the confidence score and say, okay, Cleric's confident, I trust it, because nine out of ten times it's been right when the confidence has been high. So we really want to just give you the crucial facts and not even pull you into the web UI. That's an exception flow; it's not the primary flow. And why is this important? Why do you care as an agent builder? Well, if you overwhelm humans, they're just going to ignore you, and they're not even going to give you any feedback. And the other part is we always have a follow-up action. In this case we have "rerun with feedback" or "propose solution" as buttons, and those are positive and negative signals that give us ground truth for training our agent. If humans aren't even interacting with your agent, then you're not going to get feedback, and you're not going to improve over time.
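
A minimal sketch of this confidence-gated surfacing, assuming a hypothetical `slack_client` wrapper (not the real Slack SDK) and an illustrative threshold:

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff, not Cleric's actual value

def surface_finding(diagnosis: str, confidence: float, slack_client, channel: str):
    """Only interrupt an engineer when confidence is high; always attach
    follow-up actions so every interaction yields a ground-truth signal."""
    if confidence < CONFIDENCE_THRESHOLD:
        return  # stay silent rather than waste an engineer's time
    slack_client.post(
        channel=channel,
        text=f"Likely root cause: {diagnosis} (confidence {confidence:.0%})",
        # Buttons double as positive/negative training labels.
        buttons=["Propose solution", "Rerun with feedback"],
        # Progressive disclosure: the full investigation is one click away.
        link="https://cleric.example/investigations/123",
    )
```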

Willem Pienaar [00:14:17]: So this is a key lesson for us. The second lesson is that engineers really like to teach. Engineers have processes: if they bring a new engineer onto their team, they know how to onboard that member, and they want to follow the same process when onboarding an agent into the team as well. So interacting with an agent like Cleric should be natural; you should be able to do it in context, in Slack. In this case, what you can see on screen is an engineer asking a follow-up question, and Cleric learns from those questions. If you click "rerun with feedback", we'll remember that. If they ask questions, we'll extract the services and the facts from that dialogue.

Willem Pienaar [00:15:06]: You should also be able to give Cleric guidance. We provide control surfaces that allow a team to say, locally in their environment: here are specific instructions that you should follow when operating on my services, on these clusters, under these conditions. That's just how engineers operate, and if you don't give them that ownership and that control, then they really feel like this is an external system, not a teammate they can steer and guide. But if you make it a team member, then you should provide them these control surfaces. It's also very important from an agent-builder perspective, because you can learn from engineers how they teach an agent, and generalize those learnings upstream into your prompts and into your product.
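
For instance, such per-service guidance might look something like the following; the schema and field names are purely illustrative, not Cleric's actual configuration format:

```python
# Hypothetical guidance an engineer registers for their own services.
GUIDANCE = {
    "service": "payments-api",
    "clusters": ["prod-us-east"],
    "instructions": [
        "Check connection-pool saturation before suspecting the database.",
        "Never restart pods in this namespace without human approval.",
    ],
    "conditions": {"applies": "only during incident investigations"},
}
```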

Willem Pienaar [00:15:57]: This is an incredibly important thing to watch for: if you see somebody building a custom tool, and they build the same custom tool across your deployments and across your customers, you should probably upstream that into the core product. So let me talk about some of the more gnarly challenges that we're working on right now and where we've got our lasers focused. One of them is tools. It's really all about tools. The production environment is just overflowing with APIs and CLIs and dashboards and information. Some of that is easy to process. LLMs are incredible at processing logs, finding a needle in a haystack, correlating information across code and documents.

Willem Pienaar [00:16:39]: But some sources are a little bit harder. Traces are a bit harder, but still tractable for an LLM, as are API calls and structured information like JSON. But others are harder still, like metrics. When you talk to engineering teams, a lot of them track metrics on their core services. These metrics are often used to debug issues in a causal way. They'll say: there's an alert firing, I'm going to go look at the dashboard on Grafana. Then they open up the dashboard with 500 services, with 500 CPU graphs and memory graphs and latency graphs, and they see which of those spiked up and which didn't.

Willem Pienaar [00:17:25]: They build causal relationships and links between their services, and then they infer how these things are connected and where the root cause could be. We can build a memory, a knowledge graph, of this infrastructure, but if we had to go and look at every single system, that would be a very costly operation. So we want to be efficient, and you really want to use the information that's available. You don't always want to model your process after a human, but in this case, if this information is all you have available, then you have to find a way to use it. For us, building tools around systems like metrics is critical, because it allows us to make that causal link and jump from one service to another and really reduce the search space. That's really what we're trying to do: reduce the search space to the relevant services, look at those, and then see which one was the underlying cause of a problem.
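
One plausible way to turn raw metrics into a search-space reducer is a simple per-service anomaly screen; this z-score heuristic is an assumption for illustration, not Cleric's actual method:

```python
import numpy as np

def spiking_services(metrics: dict, window: int = 5, z_thresh: float = 3.0):
    """Flag services whose recent metric values deviate sharply from their
    own baseline, shrinking the set of services worth inspecting."""
    suspects = []
    for service, series in metrics.items():
        series = np.asarray(series, dtype=float)
        baseline, recent = series[:-window], series[-window:]
        mu = baseline.mean()
        sigma = baseline.std() or 1.0  # avoid division by zero on flat series
        if ((recent - mu) / sigma).max() > z_thresh:
            suspects.append(service)
    return suspects

# e.g. spiking_services({"auth": cpu_auth, "db": cpu_db}) might return ["db"],
# so the agent only drills into that service instead of all 500.
```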

Willem Pienaar [00:18:15]: So tool building is really a first-class problem for us, and honestly one of the key values we bring in enabling an agent to operate successfully. A lot of the learnings from other domains, like code generation, have applied to our space as well: SWE-agent and OpenHands and those folks, where you see an ACI layer. That's been a very effective approach to giving agents a more uniform view of infrastructure, so that they can operate more effectively and not get overwhelmed by the vast amounts of information and tools that are available. The next one is that confidence comes from experience. This one is a little bit more complicated to explain, but I'm going to do my best. When an event or an alert comes in, the holy grail is effectively knowing when you've seen it before and knowing when you can solve that problem. If you just blindly act on every single issue or alert and do confidence scoring using an LLM, that's the worst thing you can do, because you're not using a grounded mechanism for knowing whether you have familiarity with solving that problem.

Willem Pienaar [00:19:35]: So what we do is we try to classify and enrich and tag and label that incoming alert, and then we store our successful memories, or learnings, from that event in our memory store. The next time we see it, we pull those out again, and then we say, hey, we've solved this problem 50 times, we're pretty confident we can do it again. So we look at those past runs and past memories of that issue. But the key challenge here is that you don't know along which dimensions you're good and bad, so you have to label these events or investigations along many different dimensions. Here's just a quick glance, a high-level overview of what it looks like: you basically slice and dice the incoming payload and then say, oh, we've got three memories, and based on those past memories and our resolutions, here's the confidence in our answer.
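
A toy sketch of this experience-grounded confidence, where the fingerprinting and memory store are simplified stand-ins for the richer multi-dimensional labeling described above:

```python
from collections import defaultdict

memory = defaultdict(list)  # past outcomes keyed by alert fingerprint

def fingerprint(alert: dict):
    # Classify/enrich/tag the incoming alert; a real system would label it
    # along many dimensions, this toy version slices just two fields.
    return (alert["service"], alert["type"])

def record_outcome(alert: dict, resolved: bool):
    memory[fingerprint(alert)].append(resolved)

def grounded_confidence(alert: dict, prior: float = 0.5) -> float:
    """Confidence from experience: the success rate on similar past issues,
    falling back to an uninformative prior when the alert is unfamiliar."""
    past = memory[fingerprint(alert)]
    return sum(past) / len(past) if past else prior

alert = {"service": "auth", "type": "login_failure"}
record_outcome(alert, resolved=True)
record_outcome(alert, resolved=True)
print(grounded_confidence(alert))  # 1.0 after two successful resolutions
```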

Willem Pienaar [00:20:32]: The final gnarly challenge we're working on is generalizing learnings. A lot of our learnings are localized inside a customer, inside a team: they learn a lot, they operate Cleric, and they improve the product over time with tools and context. But where you really want to get to is organizational, industry, or universal patterns. You want to glean learnings from the way one team operates and share them across all of your customers as a core part of your product. If some kind of zero-day vulnerability is solved in one place, it should benefit everybody else. So pushing the learnings down is a key part of what we're trying to do. Now, going forward, I want to wrap up by telling you where we are and what the next steps are for Cleric. We want to get to full autonomy.

Willem Pienaar [00:21:24]: You can slice and dice this as many ways as you want, but ultimately it's about closed-loop resolution. This is just a draft, one type of dimension, and these are not real numbers, but for different classes of problems that you find in the production environment, we essentially want to go to a depth of accurate plans, to accurate findings, to an accurate diagnosis, and then a fix. When you can check off each one of these classes, you know that you can confidently and reliably solve that class of problem. Now, of course, there are going to be black swan events and issues you just can't anticipate, but at least you know that certain classes of events or issues you can solve reliably. So our mission is really to dial in our performance along all of these dimensions towards autonomy. Right now we're really focused on diagnosis, in order to get to proactive infrastructure. That's the first step. The second step is remediation for different classes.

Willem Pienaar [00:22:23]: Can we get to a closed loop, and can we do that very confidently? And then we want to move away from alerting, away from observability, towards preventative actions: can we identify the failures before they happen? That's the dream state, or the nirvana state. With that, I'll open it up for questions.

Adam Becker [00:22:44]: That does sound like a nirvana state that I hope we get to very quickly. Willem, this is fascinating. Thank you very much for sharing this with us. We have a bunch of questions for you, and I have a few questions too, but let's see how far we can get. Let's start here. We have a question from Andres. He's asking: does Cleric run autonomously in the background, checking the logs for errors, or do you need to run it manually once some error or bug has occurred?

Willem Pienaar [00:23:15]: You don't run it manually. It triggers automatically, based on events in your infrastructure.

Adam Becker [00:23:23]: What model do you use to power Cleric? And have you done any fine tuning or any other type of customization?

Willem Pienaar [00:23:31]: I prefer not to share that information.

Adam Becker [00:23:33]: Yeah, I figured as much. They're trying. I mean, I'll give it to them. Why not?

Willem Pienaar [00:23:39]: They can try.

Adam Becker [00:23:40]: Yeah. How does Cleric learn? This is another one about fine-tuning.

Willem Pienaar [00:23:47]: On the fine-tuning point: we are able to use your existing models. We don't have to bring our own models; we have configurability and optionality there. But I don't want to get into the specifics of the fine-tuning and our evaluations. The learning is really driven primarily by the memories and the accumulation of data over time. That's what the knowledge graph and the learning and the memory bank are for.

Adam Becker [00:24:24]: Demetrios is asking: does this replace a Datadog, or do you see them ultimately as complementary? What is the relationship between this and that?

Willem Pienaar [00:24:33]: We see Cleric as a team member, and so it operates a system like Datadog. Datadog is the tool and Cleric is the operator.

Adam Becker [00:24:42]: Henrik here is asking: would this have been impossible to build before LLMs, in your mind?

Willem Pienaar [00:24:51]: I don't know if it was impossible, but I'm not aware of any other technology that could make it possible. So I guess yes is the answer.

Adam Becker [00:25:00]: Andres is asking, is Cleric able to autonomously resolve the issue? And I think you addressed that maybe in one of the last slides about remediation.

Willem Pienaar [00:25:11]: I think that will be a bit flip for us, but we currently gate that on human action, so we don't allow automatic resolution.

Adam Becker [00:25:19]: Yeah. Okay. So we have a question here from Mustafa: "Great work, Cleric team. Optimization of systems in production depends on the tuning of your LLM systems through their hyperparameters. Not just the model, but your RAG, guardrails, prompt tuner, etc. Does Cleric perform automatic Bayesian optimization of these knobs, say, chunking size, embedding models of your RAG, temperature of models, and so on?"

Willem Pienaar [00:25:49]: Yeah. So we've experimented with different approaches. Most of that currently happens in our offline evaluation benchmark, which is pretty vast, but we haven't done any of that in an online setting. The hyperparameters are as much a problem, or a challenge, as the actual models. And yeah, it's everything around that, right?

Adam Becker [00:26:10]: And then last here, Jan is asking: how do you measure the performance of Cleric? And I suspect that might go back to the first category.

Willem Pienaar [00:26:20]: Yeah. So this is actually a very big challenge for a lot of agent builders. You can do this at the planning stage: how aligned are you to good plans? How accurate are your findings? How accurately do you diagnose? How effectively do you resolve? These are the many different KPIs we track.

Adam Becker [00:26:38]: I want to now ask you a question that's sort of on my mind. I think it's interesting that you're conceptualizing this with respect to infrastructure, but is there something unique to infrastructure that isn't, let's say, unique to other software systems? For example, I think I've seen some people do some type of debugging or diagnosing of just software failures, not infrastructure. Yet you're depending on infrastructure, which in my mind just means lots of different systems and subsystems that react to one another; there's more complexity there. But is there something categorically different, in your mind, between infrastructure and, let's say, software just running on its own?

Willem Pienaar [00:27:28]: Well, I mean, it's distributed and stateful. I think one of the key challenges, or differences, is that there are no external verifiers or oracles. With software, you have tests, and you kind of know when something is working or not, so the ground truth is often there for you. With infrastructure, we have to source that in different ways. You can't just easily re-stage Facebook, or some large tech company, on your VM. You have to address these problems piecemeal. It's a challenging domain for that reason.

Adam Becker [00:28:03]: I see another question here: "I wonder if any hyperscaler cloud is working on a similar service." I'm not sure what to make of that; that might be a comment more than a question. Andres, if you'd like to refine it, I can ask it differently. And Willem, thank you very much for sharing. You were saying you guys are hiring. What kinds of roles are you looking to hire for?

Willem Pienaar [00:28:30]: All across the board, but especially good software engineers, people obsessed with the space like we are, who can join the quest.

Adam Becker [00:28:37]: Nice. And I guess you're already in enterprises, right? Are you looking for more enterprises that might be interested in trying it out?

Willem Pienaar [00:28:47]: Yes. Reach out to us. Come to cleric.io and ping us.

