docs: add tool-calling agent (#20328)
baskaryan authored and hinthornw committed Apr 26, 2024
1 parent a7732aa commit d68af26
Showing 4 changed files with 355 additions and 36 deletions.
5 changes: 3 additions & 2 deletions docs/docs/modules/agents/agent_types/index.mdx
@@ -33,8 +33,9 @@ Our commentary on when you should consider using this agent type.

| Agent Type | Intended Model Type | Supports Chat History | Supports Multi-Input Tools | Supports Parallel Function Calling | Required Model Params | When to Use | API |
|--------------------------------------------|---------------------|-----------------------|----------------------------|-------------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------|
| [OpenAI Tools](./openai_tools) | Chat | ✅ | ✅ | ✅ | `tools` | If you are using a recent OpenAI model (`1106` onwards) | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_tools.base.create_openai_tools_agent.html) |
| [OpenAI Functions](./openai_functions_agent)| Chat | ✅ | ✅ | | `functions` | If you are using an OpenAI model, or an open-source model that has been finetuned for function calling and exposes the same `functions` parameters as OpenAI | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_agent.base.create_openai_functions_agent.html) |
| [Tool Calling](/docs/modules/agents/agent_types/tool_calling) | Chat | ✅ | ✅ | ✅ | `tools` | If you are using a tool-calling model | TODO: Ref |
| [OpenAI Tools](./openai_tools) | Chat | ✅ | ✅ | ✅ | `tools` | [Legacy] If you are using a recent OpenAI model (`1106` onwards). Generic Tool Calling agent recommended instead. | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_tools.base.create_openai_tools_agent.html) |
| [OpenAI Functions](./openai_functions_agent)| Chat | ✅ | ✅ | | `functions` | [Legacy] If you are using an OpenAI model, or an open-source model that has been finetuned for function calling and exposes the same `functions` parameters as OpenAI. Generic Tool Calling agent recommended instead. | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_agent.base.create_openai_functions_agent.html) |
| [XML](./xml_agent) | LLM | ✅ | | | | If you are using Anthropic models, or other models good at XML | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.xml.base.create_xml_agent.html) |
| [Structured Chat](./structured_chat) | Chat | ✅ | ✅ | | | If you need to support tools with multiple inputs | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.create_structured_chat_agent.html) |
| [JSON Chat](./json_agent) | Chat | ✅ | | | | If you are using a model good at JSON | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.json_chat.base.create_json_chat_agent.html) |
4 changes: 2 additions & 2 deletions docs/docs/modules/agents/agent_types/openai_tools.ipynb
@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"sidebar_position: 0.1\n",
"---"
]
},
@@ -252,7 +252,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.1"
}
},
"nbformat": 4,
306 changes: 306 additions & 0 deletions docs/docs/modules/agents/agent_types/tool_calling.ipynb
@@ -0,0 +1,306 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"sidebar_label: Tool calling\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tool calling agent\n",
"\n",
"[Tool calling](/docs/modules/model_io/chat/function_calling) allows a model to detect when one or more tools should be called and respond with the inputs that should be passed to those tools. In an API call, you can describe tools and have the model intelligently choose to output a structured object like JSON containing arguments to call these tools. The goal of tools APIs is to more reliably return valid and useful tool calls than what can be done using a generic text completion or chat API.\n",
"\n",
"We can take advantage of this structured output, combined with the fact that you can bind multiple tools to a [tool calling chat model](/docs/integrations/chat/) and\n",
"allow the model to choose which one to call, to create an agent that repeatedly calls tools and receives results until a query is resolved.\n",
"\n",
"This is a more generalized version of the [OpenAI tools agent](/docs/modules/agents/agent_types/openai_tools/), which was designed for OpenAI's specific style of\n",
"tool calling. It uses LangChain's ToolCall interface to support a wider range of\n",
"provider implementations, such as [Anthropic](/docs/integrations/chat/anthropic/), [Google Gemini](/docs/integrations/chat/google_vertex_ai_palm/), and [Mistral](/docs/integrations/chat/mistralai/)\n",
"in addition to [OpenAI](/docs/integrations/chat/openai/).\n",
"\n",
"## Setup\n",
"\n",
"Any models that support tool calling can be used in this agent. [TODO ADD WHICH]\n",
"\n",
"This demo uses [Tavily](https://app.tavily.com), but you can also swap in any other [built-in tool](/docs/integrations/tools) or add [custom tools](/docs/modules/tools/custom_tools/).\n",
"You'll need to sign up for an API key and set it as `process.env.TAVILY_API_KEY`.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
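{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the key is not already exported in your shell, one way to set it from inside the notebook is sketched below (standard library only; adapt it to however you manage secrets):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Prompt for the Tavily API key only if it isn't already set in the environment.\n",
"if \"TAVILY_API_KEY\" not in os.environ:\n",
"    os.environ[\"TAVILY_API_KEY\"] = getpass.getpass(\"Tavily API key: \")"
]
},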
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Tools\n",
"\n",
"We will first create a tool that can search the web:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"tools = [TavilySearchResults(max_results=1)]"
]
},
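{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can sanity-check that your model emits the standardized tool calls this agent relies on by binding the tools to it directly (the agent constructor does this binding for you, so this cell is purely illustrative). The sketch below assumes your chat model integration implements `bind_tools` and populates `AIMessage.tool_calls`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: bind the tools and inspect the provider-agnostic tool calls.\n",
"llm_with_tools = llm.bind_tools(tools)\n",
"msg = llm_with_tools.invoke(\"What's the weather in San Francisco today?\")\n",
"\n",
"# Each tool call is a dict with a name, an args dict, and an id,\n",
"# regardless of which provider produced it.\n",
"print(msg.tool_calls)"
]
},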
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Agent\n",
"\n",
"Next, let's initialize our tool calling agent:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Make sure to use the tavily_search_results_json tool for information.\",\n",
" ),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" (\"placeholder\", \"{agent_scratchpad}\"),\n",
" ]\n",
")\n",
"\n",
"# Construct the Tools agent\n",
"agent = create_tool_calling_agent(llm, tools, prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent\n",
"\n",
"Now, let's initialize the executor that will run our agent and invoke it!"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/partners/anthropic/langchain_anthropic/chat_models.py:347: UserWarning: stream: Tool use is not yet supported in streaming mode.\n",
" warnings.warn(\"stream: Tool use is not yet supported in streaming mode.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `tavily_search_results_json` with `{'query': 'LangChain'}`\n",
"responded: [{'id': 'toolu_01QxrrT9srzkYCNyEZMDhGeg', 'input': {'query': 'LangChain'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://github.com/langchain-ai/langchain', 'content': 'About\\n⚡ Building applications with LLMs through composability ⚡\\nResources\\nLicense\\nCode of conduct\\nSecurity policy\\nStars\\nWatchers\\nForks\\nReleases\\n291\\nPackages\\n0\\nUsed by 39k\\nContributors\\n1,848\\nLanguages\\nFooter\\nFooter navigation Latest commit\\nGit stats\\nFiles\\nREADME.md\\n🦜️🔗 LangChain\\n⚡ Building applications with LLMs through composability ⚡\\nLooking for the JS/TS library? ⚡ Building applications with LLMs through composability ⚡\\nLicense\\nlangchain-ai/langchain\\nName already in use\\nUse Git or checkout with SVN using the web URL.\\n 📖 Documentation\\nPlease see here for full documentation, which includes:\\n💁 Contributing\\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\\n What can you build with LangChain?\\n❓ Retrieval augmented generation\\n💬 Analyzing structured data\\n🤖 Chatbots\\nAnd much more!'}]\u001b[0m"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/partners/anthropic/langchain_anthropic/chat_models.py:347: UserWarning: stream: Tool use is not yet supported in streaming mode.\n",
" warnings.warn(\"stream: Tool use is not yet supported in streaming mode.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3mLangChain is an open-source Python library that helps developers build applications with large language models (LLMs) through composability. Some key features of LangChain include:\n",
"\n",
"- Retrieval augmented generation - Allowing LLMs to retrieve and utilize external data sources when generating outputs.\n",
"\n",
"- Analyzing structured data - Tools for working with structured data like databases, APIs, PDFs, etc. and allowing LLMs to reason over this data.\n",
"\n",
"- Building chatbots and agents - Frameworks for building conversational AI applications.\n",
"\n",
"- Composability - LangChain allows you to chain together different LLM capabilities and data sources in a modular and reusable way.\n",
"\n",
"The library aims to make it easier to build real-world applications that leverage the power of large language models in a scalable and robust way. It provides abstractions and primitives for working with LLMs from different providers like OpenAI, Anthropic, Cohere, etc. LangChain is open-source and has an active community contributing new features and improvements.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'what is LangChain?',\n",
" 'output': 'LangChain is an open-source Python library that helps developers build applications with large language models (LLMs) through composability. Some key features of LangChain include:\\n\\n- Retrieval augmented generation - Allowing LLMs to retrieve and utilize external data sources when generating outputs.\\n\\n- Analyzing structured data - Tools for working with structured data like databases, APIs, PDFs, etc. and allowing LLMs to reason over this data.\\n\\n- Building chatbots and agents - Frameworks for building conversational AI applications.\\n\\n- Composability - LangChain allows you to chain together different LLM capabilities and data sources in a modular and reusable way.\\n\\nThe library aims to make it easier to build real-world applications that leverage the power of large language models in a scalable and robust way. It provides abstractions and primitives for working with LLMs from different providers like OpenAI, Anthropic, Cohere, etc. LangChain is open-source and has an active community contributing new features and improvements.'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create an agent executor by passing in the agent and tools\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"agent_executor.invoke({\"input\": \"what is LangChain?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{=mdx}\n",
":::tip\n",
"[LangSmith trace](https://smith.langchain.com/public/2f956a2e-0820-47c4-a798-c83f024e5ca1/r)\n",
":::\n",
"```\n",
"\n",
"## Using with chat history\n",
"\n",
"This type of agent can optionally take chat messages representing previous conversation turns. It can use that previous history to respond conversationally. For more details, see [this section of the agent quickstart](/docs/modules/agents/quick_start#adding-in-memory)."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/partners/anthropic/langchain_anthropic/chat_models.py:347: UserWarning: stream: Tool use is not yet supported in streaming mode.\n",
" warnings.warn(\"stream: Tool use is not yet supported in streaming mode.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3mBased on what you told me, your name is Bob. I don't need to use any tools to look that up since you directly provided your name.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"what's my name? Don't use tools to look this up unless you NEED to\",\n",
" 'chat_history': [HumanMessage(content='hi! my name is bob'),\n",
" AIMessage(content='Hello Bob! How can I assist you today?')],\n",
" 'output': \"Based on what you told me, your name is Bob. I don't need to use any tools to look that up since you directly provided your name.\"}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"agent_executor.invoke(\n",
" {\n",
" \"input\": \"what's my name? Don't use tools to look this up unless you NEED to\",\n",
" \"chat_history\": [\n",
" HumanMessage(content=\"hi! my name is bob\"),\n",
" AIMessage(content=\"Hello Bob! How can I assist you today?\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{=mdx}\n",
":::tip\n",
"[LangSmith trace](https://smith.langchain.com/public/e21ececb-2e60-49e5-9f06-a91b0fb11fb8/r)\n",
":::\n",
"```"
]
},
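{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd rather not assemble `chat_history` by hand, the history can also be tracked for you. The cell below is a minimal sketch that wraps the executor in `RunnableWithMessageHistory` with an in-memory `ChatMessageHistory` (both assumed to be available in your installed `langchain-core` and `langchain-community` versions); see the agent quickstart linked above for the full walkthrough."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"# Keep per-session histories in memory, keyed by session id.\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> ChatMessageHistory:\n",
"    if session_id not in store:\n",
"        store[session_id] = ChatMessageHistory()\n",
"    return store[session_id]\n",
"\n",
"\n",
"agent_with_history = RunnableWithMessageHistory(\n",
"    agent_executor,\n",
"    get_session_history,\n",
"    input_messages_key=\"input\",\n",
"    history_messages_key=\"chat_history\",\n",
")\n",
"\n",
"agent_with_history.invoke(\n",
"    {\"input\": \"hi! my name is bob\"},\n",
"    config={\"configurable\": {\"session_id\": \"demo\"}},\n",
")"
]
},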
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
