
community: Remove model limitation on Anyscale LLM #17662

Merged 3 commits on Feb 26, 2024
18 changes: 7 additions & 11 deletions libs/community/langchain_community/llms/anyscale.py
@@ -24,10 +24,7 @@
 from langchain_community.utils.openai import is_openai_v1
 
 DEFAULT_BASE_URL = "https://api.endpoints.anyscale.com/v1"
-DEFAULT_MODEL = "Meta-Llama/Llama-Guard-7b"
-
-# Completion models support by Anyscale Endpoints
-COMPLETION_MODELS = ["Meta-Llama/Llama-Guard-7b"]
+DEFAULT_MODEL = "mistralai/Mixtral-8x7B-Instruct-v0.1"
Collaborator:
Changing the default model is a breaking change; would it be better to add a warning that Llama Guard is deprecated?

Contributor (author):

Changing the default model does not really affect behavior; the LLM will still generate responses for the prompt. On the Anyscale backend we can see that the previous Llama Guard model has very low traffic, so the change won't be breaking for users (that is also why we are deprecating this model from public hosting).



def update_token_usage(
@@ -113,12 +110,6 @@ def validate_environment(cls, values: Dict) -> Dict:
             "MODEL_NAME",
             default=DEFAULT_MODEL,
         )
-        if values["model_name"] not in COMPLETION_MODELS:
-            raise ValueError(
-                "langchain_community.llm.Anyscale ONLY works \
-                with completions models.For Chat models, please use \
-                langchain_community.chat_model.ChatAnyscale"
-            )

try:
import openai
@@ -135,7 +126,12 @@ def validate_environment(cls, values: Dict) -> Dict:
                 # "default_query": values["default_query"],
                 # "http_client": values["http_client"],
             }
-            values["client"] = openai.OpenAI(**client_params).completions
+            if not values.get("client"):
+                values["client"] = openai.OpenAI(**client_params).completions
+            if not values.get("async_client"):
+                values["async_client"] = openai.AsyncOpenAI(
+                    **client_params
+                ).completions
         else:
             values["openai_api_base"] = values["anyscale_api_base"]
             values["openai_api_key"] = values["anyscale_api_key"].get_secret_value()
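The new `if not values.get("client")` guards also make client construction idempotent: a client the caller supplied is no longer overwritten, and an async client is created alongside the sync one. The pattern can be sketched in isolation; `make_client` below is a stand-in for the real `openai.OpenAI(**client_params).completions` call, so this is an illustration of the control flow, not the actual implementation:

```python
from typing import Any, Dict


def make_client(**params: Any) -> object:
    """Stand-in factory for openai.OpenAI(**client_params).completions."""
    return object()


def ensure_clients(values: Dict[str, Any]) -> Dict[str, Any]:
    """Create sync/async clients only when the caller did not supply them."""
    client_params = {"api_key": values.get("anyscale_api_key")}
    if not values.get("client"):
        values["client"] = make_client(**client_params)
    if not values.get("async_client"):
        values["async_client"] = make_client(**client_params)
    return values
```

Because the guards check before assigning, calling `ensure_clients` twice, or passing a pre-built client in `values`, leaves the existing client untouched.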