MLOps Community

LangChain: Enabling LLMs to Use Tools

Posted Apr 23, 2023 | Views 1.9K
# LLM
# LLM in Production
# LangChain
# Rungalileo.io
# Snorkel.ai
# Wandb.ai
# Tecton.ai
# Petuum.com
# mckinsey.com/quantumblack
# Wallaroo.ai
# Union.ai
# Redis.com
# Alphasignal.ai
# Bigbraindaily.com
# Turningpost.com
SPEAKERS
Harrison Chase
CEO @ LangChain

Harrison Chase is the CEO and co-founder of LangChain, a company formed around the open-source Python/TypeScript packages that aim to make it easy to develop language model applications. Prior to starting LangChain, he led the ML team at Robust Intelligence (an MLOps company focused on testing and validation of machine learning models), led the entity linking team at Kensho (a fintech startup), and studied stats and CS at Harvard.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

SUMMARY

This talk covers everything related to getting LLMs to use tools. It will discuss why enabling tool use is important, different types of tools, popular prompting strategies for using tools, and what difficulties still exist.

TRANSCRIPT

Link to slides

Awesome, thank you. So, yeah, there's a lot to potentially talk about. LangChain's been really focused on prototyping for the most part, but now we're starting to think a lot about what it takes to actually run these chains and agents and everything that people are building in production. There were a lot of different things I considered talking about, but the one I settled on is basically how to enable LLMs to use tools, because I think it covers a lot of different components that, even in Will's talk just before this, he mentioned are really relevant here.

This is also very top of mind with all of the AI plugins, the ChatGPT plugins, and all the AutoGPT and BabyAGI stuff going on, so I figured this would be a fun conversation for today.

So I'm going to try to talk in ten minutes about why tool use is important and the different types of tools. Then the main chunk of this will be about how to get language models to use tools, some pitfalls that occur, and common fixes for them. And then there'll be a little bit of a spotlight on some of the OpenAPI tools we've been working on over the past few weeks, again driven largely by the ChatGPT plugins announcement.

So why is tool use important? I think there are a bunch of different reasons, but the two main ones are, first, allowing the model to retrieve relevant context: getting information about current events, pulling in proprietary data, navigating complex data structures (tool usage itself can actually be used to interact with data structures). And second, allowing the language model to interact with the outside world, by which I really mean taking actions, whether that's pushing something to a database or more complex things like that. Basically, language models are themselves just text in, text out, or roughly that, so hooking them up to different tools can enable a lot of different and really cool capabilities.

On that note, different types of tools. This is still, I think, a really interesting area to keep adding to, but the main ones we've seen are: search engines, which are really relevant for getting current information; calculators, since LLMs aren't necessarily great at math and adding a calculator can help with that; retrieval systems, which again are about pulling in proprietary data; coding agents, where there's been some really cool stuff around having language models write code and interact with a Python REPL or other REPLs; arbitrary functions, which are a generic catch-all, since you can really make anything a tool; and then APIs, and that's bolded on the slide because, with the ChatGPT plugins, it's been top of mind for a lot of folks.
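As a rough illustration of a couple of these tool types, here is a minimal sketch of wiring a search engine and a calculator into a LangChain agent. It assumes the 2023-era Python API (initialize_agent, load_tools) and a SerpAPI key; exact names may differ in newer versions of the library.

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# "serpapi" is a search engine tool, "llm-math" is a calculator backed by the LLM.
# Requires OPENAI_API_KEY and SERPAPI_API_KEY to be set in the environment.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# The agent decides, at each step, whether to call a tool and with what input.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("Who won the most recent Formula 1 race, and what is 18% of 230?")
```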

So how do you get language models to use tools? Like everything with language models, you just kind of tell them to. You tell them when to use them, you tell them how to use them, and then you want to tell them what they return as well. This is a bit oversimplistic, obviously, but I think it underscores a big part of it, and I'll talk more in depth about a lot of this in just a second. "You tell them to" really is the honest answer for how you get language models to do things: it's about what you put in the prompt, how you put it in the prompt, and then how you use the output.

So "you tell them to" is the quick and short answer, but there are a lot of challenges with this. Some of the most common ones we see people running into in LangChain are: how to get language models to use tools in the right scenario; how to get them not to always use tools (oftentimes you have a conversational bot and you want it to have the option to use a tool, but it's also totally fine if it just converses with you, and striking that balance is really tricky); and the third one we've seen a bunch is parsing the LLM output into a specific tool invocation. I'm going to deep dive on these for the majority of this presentation.

For the first challenge, getting them to use tools in the right scenario, there are a few different tips, tricks, and techniques that we've discovered, heard about, or seen used.

One is making the instructions really clear, whether that's instructions in the prompt or a system message for the new chat-based models. You have to tell them to use tools, tell them what tools they have available, and tell them what the tools do. That gets to the second point, around tool descriptions: telling them when to use tools in certain scenarios is really useful. So if you give a language model a search engine, which is really good for current events (I think that's the main use case for search engines), I've found that you want to put in the description, "hey, use this for current events"; otherwise it will do some guessing and it won't always be perfect. So for a lot of the questions along the lines of "the language model isn't using my tool in the right way," the answer is basically: beef up the tool description and tell it how the tool should be used.
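To make the "beef up the description" advice concrete, here is a hedged sketch of a custom tool whose description spells out when it should and shouldn't be used. The Tool wrapper and SerpAPIWrapper are from the 2023-era LangChain Python API; the description text itself is just an illustration, not taken from the talk.

```python
from langchain.agents import Tool
from langchain.utilities import SerpAPIWrapper

# Requires SERPAPI_API_KEY in the environment.
search = SerpAPIWrapper()

# The description is what the agent reads when deciding whether to call this tool,
# so it states explicitly when the tool is (and is not) appropriate.
search_tool = Tool(
    name="current-events-search",
    func=search.run,
    description=(
        "A search engine. Use this ONLY for questions about current events "
        "or facts that may have changed recently. Do not use it for math, "
        "for general conversation, or for questions about the user's own data."
    ),
)
```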

Repeating the instructions at the end, especially for some of the older models, has also been really helpful. I've observed, purely anecdotally, that if you put the instructions at the beginning, by the time the model gets to the end it kind of forgets about them a little bit. So my general technique is to put instructions at the beginning ("hey, you have these tools, you can use them, you should use them in this way," et cetera) and then, right at the end, add something like "remember to format the output in the correct way" or "remember to only use tools if you need to."
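Here is a rough sketch of that pattern using the 2023-era ZeroShotAgent.create_prompt helper, which puts the main instructions in a prefix and the reminder in a suffix; the prompt wording and the placeholder tool are illustrative assumptions, not from the talk.

```python
from langchain.agents import Tool, ZeroShotAgent

# A placeholder tool just so the prompt has something to describe.
dummy_search = Tool(
    name="search",
    func=lambda q: "search results for: " + q,
    description="Use this for questions about current events.",
)

prefix = (
    "You are an assistant with access to the following tools. "
    "Use a tool only when it is needed to answer the question."
)
suffix = (
    "Remember: only use a tool if you need to, and always respond "
    "in the exact format described above.\n\n"
    "Question: {input}\n"
    "{agent_scratchpad}"
)

# create_prompt interpolates the tool names and descriptions between the
# prefix and the suffix, so the reminder sits right at the end of the prompt.
prompt = ZeroShotAgent.create_prompt(
    tools=[dummy_search],
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "agent_scratchpad"],
)
```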

And so a little reminder at the end goes a long way.

The fourth technique is a bit newer, and I think Will touched on it a little earlier: using embeddings or semantic similarity to decide which tools to use. This is really useful when you have a lot of tools. If you have, say, a hundred tools, you probably can't put them all in the prompt and ask the language model to choose between them. So one thing we've seen be helpful is to do a tool retrieval step first: retrieve the relevant tools, then put, say, the top five in the prompt and ask the model to decide between those. So that's around getting them to use tools in the right scenario.
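A minimal sketch of that tool retrieval step, assuming the 2023-era LangChain Python API (FAISS, OpenAIEmbeddings): embed the tool descriptions, then look up the top few for a given query. The two dummy tools stand in for what would really be a list of a hundred or so.

```python
from langchain.agents import Tool
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import FAISS

# In practice this would be the full list of ~100 tools; two dummies keep the sketch short.
all_tools = [
    Tool(name="weather", func=lambda q: "sunny", description="Look up the current weather for a city."),
    Tool(name="calculator", func=lambda q: "42", description="Evaluate a math expression."),
]

# One document per tool, keyed back to its position in all_tools.
docs = [
    Document(page_content=tool.description, metadata={"index": i})
    for i, tool in enumerate(all_tools)
]
vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = vector_store.as_retriever(search_kwargs={"k": 5})

def get_relevant_tools(query: str):
    """Return only the tools whose descriptions best match the query."""
    relevant = retriever.get_relevant_documents(query)
    return [all_tools[doc.metadata["index"]] for doc in relevant]

# Only these few tools (rather than all of them) go into the agent prompt.
tools_for_prompt = get_relevant_tools("what's the weather in Berlin right now?")
```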

The second challenge we've seen is that they don't always need to use tools. Again, telling them that in the system message and repeating the instructions at the end both help. And then the third interesting thing is adding a tool which is itself basically just "respond to the user." We've seen this be really helpful, because the instructions sometimes aren't enough by themselves, and explicitly having a tool, quote unquote, that the model can call to send a response to the user has worked well there.
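One way to express that "respond to the user" tool, sketched against the 2023-era Tool wrapper; the tool name and behavior here are illustrative assumptions, not LangChain built-ins.

```python
from langchain.agents import Tool

def respond_to_user(message: str) -> str:
    # In a real app this might send the message back over a chat channel;
    # here it simply returns it so the agent loop treats it as the answer.
    return message

respond_tool = Tool(
    name="respond-to-user",
    func=respond_to_user,
    description=(
        "Use this when no other tool is needed and you just want to reply "
        "to the user directly. The input is the message to send."
    ),
    return_direct=True,  # stop the agent loop and return this output as-is
)
```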

And then the third challenge we've seen is parsing LLM outputs into a tool invocation. The solutions we've seen for this are basically using more structured response types, like Willem was talking about; JSON and TypeScript have been really good ones. And then we have a bunch of concepts in LangChain for doing this explicitly and easily, namely output parsers.

Jumping through some examples really quickly: this is the base output parser example. You actually define the schema in Pydantic, so it's very familiar to those who know Python (in the JavaScript library we have a different kind of schema definition), and we can automatically convert the Pydantic object into an output parser. You can then use that output parser: if you look here, we call parser.get_format_instructions(), which uses the schema to generate format instructions (these are TypeScript/JSON), and we put those format instructions into the prompt itself. That's how we tell the language model to use a specific format. Then we can parse the output explicitly back into the Pydantic object.
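A condensed sketch of what those slides show, assuming the 2023-era PydanticOutputParser API; the ToolCall schema and prompt text are illustrative stand-ins for whatever was on the actual slides.

```python
from pydantic import BaseModel, Field
from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate

class ToolCall(BaseModel):
    tool: str = Field(description="name of the tool to invoke")
    tool_input: str = Field(description="input string to pass to the tool")

parser = PydanticOutputParser(pydantic_object=ToolCall)

# The format instructions generated from the schema are injected into the prompt.
prompt = PromptTemplate(
    template=(
        "Decide which tool to call for the user's request.\n"
        "{format_instructions}\n"
        "Request: {query}\n"
    ),
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

llm = OpenAI(temperature=0)
output = llm(prompt.format(query="What's 37 times 412?"))

# Parse the raw completion back into a typed ToolCall object.
tool_call = parser.parse(output)
```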

There's the obvious question, as Willem was mentioning: what happens when that fails? We can see here an example of output that's misformatted, and we have some stuff in LangChain to fix that, by passing it to a language model and asking it to fix, in this case, the JSON decoding errors.

There's an interesting nuance here, though, which is that this doesn't fix all the issues that could possibly arise. If we go to the next slide, we can see that the response at the top is actually invalid not because it's bad JSON, but because it's missing an argument. If we ask another language model to just fix that, it won't actually know how it should be fixed, because the missing second argument has some specific correct meaning. So here it just puts in a blank string, which is not what it should be. At the bottom, when we use this other type of output parser, it retries with the original prompt, so now it has all the information. And I know I'm running a bit low on time.
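A rough sketch of the two parser variants being described, assuming the 2023-era OutputFixingParser and RetryWithErrorOutputParser APIs and reusing the parser and prompt from the previous sketch; the misformatted strings are invented for illustration.

```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, RetryWithErrorOutputParser

# `parser` and `prompt` are the PydanticOutputParser and PromptTemplate defined above.
llm = ChatOpenAI(temperature=0)

# OutputFixingParser only sees the broken output, so it can repair bad JSON
# but has to guess at genuinely missing information.
fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=llm)
fixed = fixing_parser.parse('{"tool": "calculator"')  # truncated, invalid JSON

# RetryWithErrorOutputParser is also given the original prompt, so it can
# re-ask the model for the missing argument instead of filling in a blank.
retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=llm)
prompt_value = prompt.format_prompt(query="What's 37 times 412?")
retried = retry_parser.parse_with_prompt('{"tool": "calculator"}', prompt_value)
```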

With the last slide, I just wanted to spotlight how this all comes together for some of the OpenAPI tools, which are these ChatGPT plugins. We can use some of the prompting and output parsers to communicate the JSON and TypeScript parameters for each endpoint. And then, as a bit of a tangent: we've seen that language models still struggle with some of the more complex parameters and function definitions, so we've found it really effective to wrap each endpoint in its own chain and have that chain be responsible for a single endpoint, because then it can learn how to interact with that endpoint and its complex parameters better. We then have the agent, basically a router language model, communicate with each chain independently, and the input to those chains is just a natural language string, so it's way simpler to tell the agent how to interact with those chains.
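A very rough sketch of that shape, not the actual LangChain OpenAPI implementation: each endpoint gets its own LLMChain that turns a natural language request into that endpoint's parameters, and the router agent only ever sees one simple string-in, string-out tool per endpoint. The "createEvent" endpoint and its fields are hypothetical.

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# One chain per endpoint: it alone is responsible for turning a plain English
# request into the (possibly complex) parameters for that endpoint.
create_event_prompt = PromptTemplate(
    template=(
        "Turn the request into JSON arguments for the createEvent endpoint, "
        "with fields 'title', 'start_time', and 'attendees'.\n"
        "Request: {request}\nJSON:"
    ),
    input_variables=["request"],
)
create_event_chain = LLMChain(llm=llm, prompt=create_event_prompt)

# The router agent only sees a simple natural-language tool per endpoint.
tools = [
    Tool(
        name="create-calendar-event",
        func=create_event_chain.run,
        description="Create a calendar event. Input is a plain English description of the event.",
    ),
]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("Set up a 30 minute sync with Alice tomorrow at 3pm.")
```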

That is all I have for today. I think I went a little bit over, so I hope you're not too mad at me, Demetrios. Thanks again for having me.
