
[Experimental]: Async agenerate method for OllamaFunctions #21682

Merged

Conversation

keenborder786 (Contributor)


dosubot added the size:M label (This PR changes 30-99 lines, ignoring generated files) on May 14, 2024
keenborder786 (Author):

@baskaryan, @efriis, @eyurtsev, @hwchase17 please review

dosubot added the 🤖:improvement label (Medium size change to existing code to handle new use-cases) on May 14, 2024
keenborder786 and others added 4 commits on May 15, 2024, including "…generate method"
```diff
@@ -354,6 +357,86 @@ def _generate(
        generations=[ChatGeneration(message=response_message_with_functions)]
    )

async def _agenerate(
```
Collaborator:
Isn't this duplicating the implementation of the sync method? I thought BaseModel already takes care of this automatically under the hood?
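
For context, unless a subclass overrides it, langchain-core's BaseChatModel falls back to running the sync `_generate` in a thread executor for async calls. A rough sketch of that fallback (simplified from memory, not the exact library source):

```python
# Rough sketch of the default async fallback in langchain-core's
# BaseChatModel; simplified, not the exact library source.
from typing import Any

from langchain_core.runnables.config import run_in_executor


class FallbackSketch:
    def _generate(
        self, messages: Any, stop: Any = None, run_manager: Any = None, **kwargs: Any
    ) -> Any:
        ...  # provider-specific sync implementation

    async def _agenerate(
        self, messages: Any, stop: Any = None, run_manager: Any = None, **kwargs: Any
    ) -> Any:
        # The sync method is shipped off to a thread executor, so the
        # provider's native async client is never used on this path.
        return await run_in_executor(
            None, self._generate, messages, stop=stop, run_manager=run_manager, **kwargs
        )
```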

eyurtsev (Collaborator) left a comment:

I can't tell if this code is doing anything that the default base implementation isn't doing already.

It looks like the original issue has to do with serialization and that something is not serializable. Have you verified that adding an async implementation makes a difference?

```python
system_message = system_message_prompt_template.format(
    tools=json.dumps(functions, indent=2)
)
response_message = await super()._agenerate(
```
keenborder786 (Author):
@eyurtsev this is the difference: I am calling the parent's async generate method, which solves the serialization issue.
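
Pieced together from the fragment above, the override looks roughly like the sketch below; the class scaffolding, the prompt-template attribute, and the elided parsing step are assumptions rather than the verbatim merged code:

```python
# Sketch of the _agenerate override discussed in this thread, assembled
# from the diff fragment above; details are assumptions, not the merged code.
import json
from typing import Any, List, Optional

from langchain_community.chat_models import ChatOllama
from langchain_core.callbacks import AsyncCallbackManagerForLLMRun
from langchain_core.messages import BaseMessage
from langchain_core.outputs import ChatResult
from langchain_core.prompts import SystemMessagePromptTemplate


class OllamaFunctionsSketch(ChatOllama):
    # Assumed prompt template field; the real class defines its own default.
    tool_system_prompt_template: str = "You have access to these tools:\n{tools}"

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        functions = kwargs.get("functions", [])
        system_message_prompt_template = SystemMessagePromptTemplate.from_template(
            self.tool_system_prompt_template
        )
        # Render the tool definitions into the system message, as _generate does.
        system_message = system_message_prompt_template.format(
            tools=json.dumps(functions, indent=2)
        )
        # The key line: await the parent's native async implementation instead
        # of letting the default executor-based fallback run the sync path.
        response_message = await super()._agenerate(
            [system_message] + list(messages), stop=stop, run_manager=run_manager
        )
        # ...then parse the JSON reply into a function call, mirroring _generate.
        return response_message
```

The difference from the default fallback is the `await super()._agenerate(...)` call, which stays on the parent's native async path end to end.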

Also, I tested the async method on my machine, and it solves the issue: #2222

You can try it on your machine as well.
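
A hypothetical script for trying the async path end to end, assuming a local Ollama server; the model name and function schema are illustrative placeholders, not taken from the PR:

```python
# Hypothetical end-to-end check, assuming a local Ollama server is running;
# the model name and function schema are illustrative placeholders.
import asyncio

from langchain_experimental.llms.ollama_functions import OllamaFunctions


async def main() -> None:
    llm = OllamaFunctions(model="llama3").bind(
        functions=[
            {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city, e.g. San Francisco",
                        }
                    },
                    "required": ["location"],
                },
            }
        ],
        function_call={"name": "get_current_weather"},
    )
    # Before this PR, the async path raised a serialization error; with
    # _agenerate implemented it should return an AIMessage whose
    # additional_kwargs carry the function call.
    message = await llm.ainvoke("What is the weather in Singapore?")
    print(message)


asyncio.run(main())
```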

keenborder786 (Author):
@eyurtsev please see


eyurtsev merged commit 7fcef25 into langchain-ai:master on Jun 5, 2024 (24 checks passed)
falmanna (Contributor):

What is the release schedule for this?

hinthornw pushed a commit that referenced this pull request on Jun 20, 2024:

- **Description:** Added an async generate method for OllamaFunctions, which was missing and was raising errors for users.
- **Issue:** #21422