Allow LLMs async streaming to fallback on sync streaming #18920
Labels
good first issue - Good for newcomers
help wanted - Good issue for contributors
🤖:improvement - Medium size change to existing code to handle new use-cases
Ɑ: models - Related to LLMs or chat model modules
Comments
eyurtsev added the help wanted, Ɑ: models, and good first issue labels on Mar 11, 2024.
dosubot (bot) added the 🤖:improvement label on Mar 11, 2024.
@eyurtsev Trying to implement this.
maximeperrindev pushed a commit to maximeperrindev/langchain that referenced this issue on Mar 12, 2024.
Is this closed? I've been diligently learning LangChain recently.
Thank you! I'm off to code right now.
Changes are merged now, with a nice simplification done in #19332.
rahul-trip pushed a commit to daxa-ai/langchain that referenced this issue on Mar 27, 2024:

…langchain-ai#18960)
- **Description:** Handling fallbacks when calling async streaming for an LLM that doesn't support it.
- **Issue:** langchain-ai#18920
- **Twitter handle:** @maximeperrin_
Co-authored-by: Maxime Perrin <mperrin@doing.fr>
bechbd pushed the same commit to bechbd/langchain on Mar 29, 2024.
gkorland pushed the same commit to FalkorDB/langchain on Mar 30, 2024.
Privileged issue
Issue Content
Goal
When using `astream()`, LLMs should fall back to sync streaming if an async streaming implementation is not available.

Context
Implementations of LLMs often include a sync streaming implementation but are missing an async one.
LLMs currently do not fall back on the sync streaming implementation.
For reference, here's the BaseLLM implementation.
The current fallback sequence is:
1. `_astream` if implemented, otherwise
2. `ainvoke` (no streaming; the full result is returned at once).

The fallback sequence should be:
1. `_astream` if implemented, otherwise
2. `_stream` if implemented, otherwise
3. `ainvoke`.
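The desired fallback order can be sketched roughly as below. This is a hypothetical, simplified model, not the actual BaseLLM code: `BaseLLMSketch`, `SyncOnlyLLM`, and the single-string prompt signature are assumptions for illustration. The key idea is detecting whether a subclass overrode `_astream`/`_stream`, and driving the sync generator off-thread so the event loop is not blocked:

```python
import asyncio
from typing import AsyncIterator, Iterator

class BaseLLMSketch:
    """Minimal sketch (not the real BaseLLM) of the desired fallback order."""

    def _stream(self, prompt: str) -> Iterator[str]:
        raise NotImplementedError

    async def _astream(self, prompt: str) -> AsyncIterator[str]:
        raise NotImplementedError
        yield  # unreachable; makes this an async generator function

    async def astream(self, prompt: str) -> AsyncIterator[str]:
        if type(self)._astream is not BaseLLMSketch._astream:
            # 1. Native async streaming, when the subclass provides it.
            async for chunk in self._astream(prompt):
                yield chunk
        elif type(self)._stream is not BaseLLMSketch._stream:
            # 2. Fall back to sync streaming, driven in an executor thread
            #    so the sync generator does not block the event loop.
            loop = asyncio.get_running_loop()
            iterator = self._stream(prompt)
            sentinel = object()
            while True:
                chunk = await loop.run_in_executor(None, next, iterator, sentinel)
                if chunk is sentinel:
                    break
                yield chunk
        else:
            # 3. No streaming at all: a real implementation would fall back
            #    to ainvoke() and yield the full result as one chunk.
            raise NotImplementedError("no streaming implementation available")

class SyncOnlyLLM(BaseLLMSketch):
    """Implements only sync streaming; astream() should still work."""

    def _stream(self, prompt: str) -> Iterator[str]:
        yield from prompt.split()

async def demo() -> list:
    return [chunk async for chunk in SyncOnlyLLM().astream("hello async world")]

print(asyncio.run(demo()))  # -> ['hello', 'async', 'world']
```

The `type(self)._method is not Base._method` comparison is one common way to detect an override without calling the method; the real fix in #18748 / #19332 may structure this differently.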
This PR shows how the same problem was fixed for chat models: #18748
Acceptance criteria
This PR will not be accepted without unit tests, since this is critical functionality!
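A unit test for this criterion could look something like the sketch below. `FakeStreamingLLM`, `collect`, and the fallback body inside `astream` are all hypothetical stand-ins, not LangChain's actual test helpers; the point is simply to assert that an LLM exposing only a sync `_stream` still streams through `astream()`:

```python
import asyncio
from typing import AsyncIterator, Iterator

class FakeStreamingLLM:
    """Hypothetical LLM with only a sync _stream; astream must fall back."""

    def _stream(self, prompt: str) -> Iterator[str]:
        yield from prompt.split()

    async def astream(self, prompt: str) -> AsyncIterator[str]:
        # Assumed fallback behavior under test: drive the sync generator
        # in an executor thread instead of blocking the event loop.
        loop = asyncio.get_running_loop()
        iterator = self._stream(prompt)
        sentinel = object()
        while True:
            chunk = await loop.run_in_executor(None, next, iterator, sentinel)
            if chunk is sentinel:
                break
            yield chunk

async def collect(llm: FakeStreamingLLM, prompt: str) -> list:
    return [chunk async for chunk in llm.astream(prompt)]

def test_astream_falls_back_to_sync_stream() -> None:
    chunks = asyncio.run(collect(FakeStreamingLLM(), "a b c"))
    assert chunks == ["a", "b", "c"]

test_astream_falls_back_to_sync_stream()
print("ok")
```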