AsyncCallbackManagerForToolRun improperly casts on_tool_end to string #20372

Open

adreo00 opened this issue Apr 12, 2024 · 0 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature · 🔌: openai Primarily related to OpenAI integrations · Ɑ: Runnables Related to Runnables

adreo00 commented Apr 12, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from typing import Any, AsyncIterator, List, Sequence, cast
from langchain_core.runnables.schema import StreamEvent
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
)
import langchain_core
import typing
from langchain_core.documents import Document
from langchain_core.tools import tool


def foo(x: int) -> dict:
    """Foo"""
    return {"x": 5}


@tool
def get_docs(x: int) -> list[Document]:
    """get_docs"""
    return [Document(page_content="hello")]


def _with_nulled_run_id(events: Sequence[StreamEvent]) -> List[StreamEvent]:
    """Removes the run ids from events."""
    return cast(List[StreamEvent], [{**event, "run_id": ""} for event in events])


async def _collect_events(events: AsyncIterator[StreamEvent]) -> List[StreamEvent]:
    """Collect the events and remove the run ids."""
    materialized_events = [event async for event in events]
    events_ = _with_nulled_run_id(materialized_events)
    for event in events_:
        event["tags"] = sorted(event["tags"])
    return events_


prompt_obj = {
    "name": None,
    "input_variables": ["agent_scratchpad", "input"],
    "input_types": {
        "chat_history": typing.List[
            typing.Union[
                langchain_core.messages.ai.AIMessage,
                langchain_core.messages.human.HumanMessage,
                langchain_core.messages.chat.ChatMessage,
                langchain_core.messages.system.SystemMessage,
                langchain_core.messages.function.FunctionMessage,
                langchain_core.messages.tool.ToolMessage,
            ]
        ],
        "agent_scratchpad": typing.List[
            typing.Union[
                langchain_core.messages.ai.AIMessage,
                langchain_core.messages.human.HumanMessage,
                langchain_core.messages.chat.ChatMessage,
                langchain_core.messages.system.SystemMessage,
                langchain_core.messages.function.FunctionMessage,
                langchain_core.messages.tool.ToolMessage,
            ]
        ],
    },
    "output_parser": None,
    "partial_variables": {},
    "metadata": {
        "lc_hub_owner": "hwchase17",
        "lc_hub_repo": "openai-tools-agent",
        "lc_hub_commit_hash": "c18672812789a3b9697656dd539edf0120285dcae36396d0b548ae42a4ed66f5",
    },
    "tags": None,
    "messages": [
        SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="You are a helpful assistant")),
        MessagesPlaceholder(variable_name="chat_history", optional=True),
        HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=["input"], template="{input}")),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ],
    "validate_template": False,
}

prompt = ChatPromptTemplate.parse_obj(prompt_obj)
tools = [get_docs]
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0)

# Construct the OpenAI Tools agent
agent = create_openai_tools_agent(llm, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
events = await _collect_events(
    agent_executor.astream_events({"input": "call get_docs."}, version="v1", include_names=["get_docs"])
)
assert events == [
    {
        "event": "on_tool_start",
        "name": "get_docs",
        "run_id": "",
        "tags": [],
        "metadata": {},
        "data": {"input": {"x": 5}},
    },
    {
        "event": "on_tool_end",
        "name": "get_docs",
        "run_id": "",
        "tags": [],
        "metadata": {},
        "data": {"input": {"x": 5}, "output": [Document(page_content="hello")]},
    },
]
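
Replacing the failing assertion with a quick inspection makes the mismatch visible. The printed repr below is an assumption based on the behaviour described in this issue; the key point is that data["output"] for the on_tool_end event arrives as a str instead of the original list of Document objects:

# Hypothetical inspection in place of the assert above; the exact repr is an
# assumption, but the observed type is str rather than list[Document].
for event in events:
    if event["event"] == "on_tool_end":
        print(type(event["data"]["output"]))  # <class 'str'>
        print(event["data"]["output"])        # e.g. "[Document(page_content='hello')]"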

Error Message and Stack Trace (if applicable)

AssertionError:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[4], line 96
     92 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
     93 events = await _collect_events(
     94     agent_executor.astream_events({"input": "call get_docs."}, version="v1", include_names=["get_docs"])
     95 )
---> 96 assert events == [
     97     {
     98         "event": "on_tool_start",
     99         "name": "get_docs",
    100         "run_id": "",
    101         "tags": [],
    102         "metadata": {},
    103         "data": {"input": {"x": 5}},
    104     },
    105     {
    106         "event": "on_tool_end",
    107         "name": "get_docs",
    108         "run_id": "",
    109         "tags": [],
    110         "metadata": {},
    111         "data": {"input": {"x": 5}, "output": [Document(page_content="hello")]},
    112     },
    113 ]

AssertionError

Description

When a tool is called through an agent executor, I expect the on_tool_end event to contain the tool's actual output, rather than that output cast to a string.

This bug was originally raised earlier and partially fixed in a previous PR, but that fix does not cover the agent executor case.

A comment on #18932 (comment) shows the cause of the issue.
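
For illustration, the behaviour described in that comment boils down to something like the following minimal sketch. This is not the actual LangChain source, only a reconstruction of the suspected call path; the method name and arguments are assumptions:

# Minimal sketch of the suspected call path (not the actual LangChain source):
# the async tool run stringifies the observation before handing it to the
# callback manager, so astream_events consumers only ever see a str.
async def arun_sketch(self, tool_input, run_manager):
    observation = await self._arun(tool_input)                        # e.g. [Document(...)]
    await run_manager.on_tool_end(str(observation), name=self.name)   # <- cast to str (the bug)
    return observation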

System Info

System Information
------------------
> OS:  Linux
> OS Version:  #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version:  3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:35:26) [GCC 10.4.0]

Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.38
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph
> langserve
dosubot bot added the Ɑ: Runnables, 🔌: openai, and 🤖:bug labels on Apr 12, 2024
eyurtsev pushed a commit that referenced this issue May 13, 2024
…ject to string (#20374)

- **Description:** Stops `AsyncCallbackManagerForToolRun` from converting the output to str
- **Issue:** #20372
- **Dependencies:** None
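
Based on that PR description, a hedged sketch of the direction of the change (not the exact diff from #20374): the raw observation is forwarded to the callback instead of its string form.

# Before (sketch): callbacks receive the tool output already stringified.
await run_manager.on_tool_end(str(observation), name=self.name)
# After (sketch): the original object is passed through; any string conversion
# is left to the individual callback handlers.
await run_manager.on_tool_end(observation, name=self.name)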