
Allow LLMs async streaming to fallback on sync streaming #18920

Closed · 1 task done
eyurtsev opened this issue Mar 11, 2024 · 5 comments
Labels
good first issue (Good for newcomers) · help wanted (Good issue for contributors) · 🤖:improvement (Medium size change to existing code to handle new use-cases) · Ɑ: models (Related to LLMs or chat model modules)

Comments

@eyurtsev
Collaborator

Privileged issue

  • I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.

Issue Content

Goal

When using astream(), LLMs should fall back to the sync streaming implementation if an async streaming implementation is not available.

Context

Implementations of LLMs often include a sync streaming implementation but are missing an async one.

LLMs currently do not fall back on the sync streaming implementation.

For reference, here's the BaseLLM implementation.

The current fallback sequence is:

  1. If _astream is defined, use it.
  2. If _astream is not defined, fall back on ainvoke.

The fallback sequence should be:

  1. If _astream is defined, use it.
  2. If _stream is defined, fall back to it.
  3. Finally, if neither _astream nor _stream is defined, fall back to ainvoke.
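The intended sequence can be sketched in plain Python. This is a simplified illustration, not the actual BaseLLM code: the class, method names, and the `type(self)._astream is not BaseLLM._astream` override check are assumptions mirroring the pattern used in the chat-model fix; the real implementation streams richer chunk objects and handles callbacks.

```python
import asyncio
from typing import AsyncIterator, Iterator


class BaseLLM:
    """Minimal stand-in for the real BaseLLM, for illustration only."""

    def _stream(self, prompt: str) -> Iterator[str]:
        raise NotImplementedError

    async def _astream(self, prompt: str) -> AsyncIterator[str]:
        raise NotImplementedError
        yield  # unreachable; makes this an async generator

    async def _ainvoke(self, prompt: str) -> str:
        raise NotImplementedError

    async def astream(self, prompt: str) -> AsyncIterator[str]:
        # 1. Prefer a native async streaming implementation, if the subclass has one.
        if type(self)._astream is not BaseLLM._astream:
            async for chunk in self._astream(prompt):
                yield chunk
        # 2. Otherwise fall back on sync streaming, pulling each chunk in a
        #    worker thread so the event loop is not blocked.
        elif type(self)._stream is not BaseLLM._stream:
            iterator = iter(self._stream(prompt))
            done = object()  # sentinel marking iterator exhaustion
            while True:
                chunk = await asyncio.to_thread(next, iterator, done)
                if chunk is done:
                    break
                yield chunk
        # 3. Finally, with neither streaming method defined, fall back on ainvoke.
        else:
            yield await self._ainvoke(prompt)


class SyncOnlyLLM(BaseLLM):
    """An LLM that only implements sync streaming."""

    def _stream(self, prompt: str) -> Iterator[str]:
        yield from prompt.split()


async def main() -> list[str]:
    return [chunk async for chunk in SyncOnlyLLM().astream("hello async world")]


print(asyncio.run(main()))  # ['hello', 'async', 'world']
```

Checking `type(self)._astream` against the base class attribute (rather than calling and catching NotImplementedError) lets the dispatcher pick a branch before any work starts, which is the shape the chat-model fix in #18748 used.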

This PR shows how the same problem was fixed for chat models: #18748

Acceptance criteria

  • Fallback sequence is correctly implemented
  • Unit-tests confirm that the fallback sequence works correctly (see the PR for the unit-tests)

This PR will not be accepted without unit-tests since this is critical functionality!

@eyurtsev added the "help wanted", "Ɑ: models", and "good first issue" labels Mar 11, 2024
@dosubot bot added the "🤖:improvement" label Mar 11, 2024
@maximeperrindev
Contributor

@eyurtsev Trying to implement this.

@devkan

devkan commented Mar 14, 2024

Is this closed? I've been diligently learning LangChain recently.

@maximeperrindev
Contributor

Is this closed? I've been diligently learning LangChain recently.

@devkan the PR is open (#18960) and waiting for review. Feel free to check the code and tell me what you think.

@devkan

devkan commented Mar 14, 2024

Thank you. I'm off to check the code right now.

eyurtsev pushed a commit that referenced this issue Mar 15, 2024
…#18960)

- **Description:** Handling fallbacks when calling async streaming for a
LLM that doesn't support it.
- **Issue:** #18920 
- **Twitter handle:** @maximeperrin_

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
@eyurtsev
Collaborator Author

Changes are merged now, with a nice simplification done in #19332.

rahul-trip pushed a commit to daxa-ai/langchain that referenced this issue Mar 27, 2024 (…langchain-ai#18960)
bechbd pushed a commit to bechbd/langchain that referenced this issue Mar 29, 2024 (…langchain-ai#18960)
gkorland pushed a commit to FalkorDB/langchain that referenced this issue Mar 30, 2024 (…langchain-ai#18960)
hinthornw pushed a commit that referenced this issue Apr 26, 2024 (…#18960)