experimental: LLMGraphTransformer add missing conditional adding restrictions to prompts for LLM that do not support function calling #22793
Conversation
cc @tomasonjo
I don't really understand what was changed. The new prompt seems identical to the previous one?
What changed is that the restriction sentences are now only included in the prompt if the user actually provides the entity-type and relationship-type variables. The current version includes these restrictions even if the user does not provide them.
Ok, looks great. Please fix the linting errors and we can merge it in.
@jordyantunes ping
I'm sorry for the delay. I'll fix the linting errors today.
Thanks! Ping @ccurme
This PR changes `create_unstructured_prompt` (which is called for LLMs that do not support function calling) by adding conditional checks that verify whether restrictions on entity types and relationship types should be added to the prompt. If the user provides a sufficiently large text, the current prompt may fail to produce results with some LLMs. I first saw this issue when I implemented a custom LLM class that did not support function calling and used Gemini 1.5 Pro, but I was able to replicate it using OpenAI models.

By loading a sufficiently large text
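The conditional described above can be sketched as follows. This is a simplified illustration, not the PR's exact code: `build_restrictions`, `node_labels`, and `rel_types` are hypothetical names chosen for the sketch; the point is that the restriction sentences are appended only when the caller supplies allowed types.

```python
def build_restrictions(node_labels=None, rel_types=None):
    """Sketch of the conditional prompt change (illustrative, not the PR's code).

    Restriction sentences are only appended when the caller actually
    provides allowed node labels or relationship types, so an LLM
    without function calling is not confused by empty restrictions.
    """
    parts = ["Extract entities and relationships from the text below."]
    if node_labels:
        parts.append(f"Only use the following node labels: {node_labels}.")
    if rel_types:
        parts.append(f"Only use the following relationship types: {rel_types}.")
    return " ".join(parts)
```

With no arguments the prompt carries no restriction sentences at all, which is the behavior the PR adds for users who pass no allowed types.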
and using the chat class (which supports function calling), it works. But if you use the non-chat LLM class (which does not support function calling), it falls back to the problematic prompt and sometimes produces no result at all.

After implementing the changes, I was able to use both classes more consistently.
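The chat vs. non-chat distinction can be illustrated with a small sketch. The class names and the `choose_prompt_path` helper below are hypothetical stand-ins, not langchain's actual code; the assumption is only that a transformer falls back to the unstructured prompt when a model exposes no structured-output (function calling) interface.

```python
class ChatModel:
    """Stands in for a chat model that supports function calling."""

    def with_structured_output(self, schema):
        # Real chat models would bind the schema for function calling here.
        return self


class CompletionModel:
    """Stands in for a plain completion LLM without function calling."""


def choose_prompt_path(llm):
    """Pick the extraction path based on the model's capabilities.

    Models exposing a structured-output interface use function calling;
    everything else falls back to the unstructured prompt this PR fixes.
    """
    if hasattr(llm, "with_structured_output"):
        return "function_calling"
    return "unstructured_prompt"
```

Only the fallback path is affected by this PR, which is why the issue never shows up when using the chat class.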
The results are still a little inconsistent because the GPT-3.5 model may produce incomplete JSON when it hits the token limit, but that could be solved (or mitigated) by checking for complete JSON when parsing.
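One way to implement that check, as a minimal sketch (this mitigation is suggested in the comment above but is not part of the PR; `parse_complete_json` is a hypothetical helper):

```python
import json


def parse_complete_json(raw):
    """Return the parsed object only if `raw` is complete, valid JSON.

    Output truncated at the token limit fails to parse, so the caller
    can detect the incomplete response and retry instead of silently
    using a partial result.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None
```

A caller would retry (or raise) whenever this returns `None` rather than passing a truncated graph downstream.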