OpenAI flavor #8155
Conversation
```python
    :param path: Local filesystem path to the MLflow Model with the ``openai`` flavor.
    """
    wrapper_cls = _TestOpenAIWrapper if _MLFLOW_OPENAI_TESTING.get() else _OpenAIWrapper
    return wrapper_cls(_load_model(path))
```
Could not find a good way to mock requests in the UDF.
Makes sense :)
Force-pushed from 8e00ffd to 70af162
Verified basic functionality works as intended. This is awesome, @harupy ! https://e2-dogfood.staging.cloud.databricks.com/?o=6051921418418893#mlflow/experiments/469643393101378/runs/3991bf0fa56f4cf0ba8864bfc3a60a47
LGTM! Great work @harupy
Force-pushed from 6934854 to bee4e70
Signed-off-by: harupy <hkawamura0130@gmail.com>
mlflow/openai/__init__.py
Outdated
```python
    _save_example(mlflow_model, input_example, path)
    if metadata is not None:
        mlflow_model.metadata = metadata
    model_data_subpath = "model.json"
```
```diff
- model_data_subpath = "model.json"
+ model_data_subpath = "model.yaml"
```
@jinzhang21 maybe we should use yaml here as well?
Right, but @sunishsheth2009 mentioned it doesn't work with YAML because Langchain doesn't serialize / format it properly. Does it work with OpenAI? I'd prefer to use YAML for consistency here.
> Langchain doesn't serialize / format it properly

@sunishsheth2009 Can you elaborate on this?
Yes!! I think it's a bug in Langchain, but this is how it stores the YAML file:
```yaml
_type: !!python/object/apply:langchain.agents.agent_types.AgentType
- zero-shot-react-description
allowed_tools:
- Search
- Calculator
llm_chain:
  _type: llm_chain
  llm:
    _type: openai
    best_of: 1
    frequency_penalty: 0
    logit_bias: {}
    max_tokens: 256
    model_name: text-davinci-003
    n: 1
    presence_penalty: 0
    request_timeout: null
    temperature: 0.0
    top_p: 1
  memory: null
  output_key: text
  prompt:
    _type: prompt
    input_variables:
    - input
    - agent_scratchpad
    output_parser: null
    partial_variables: {}
    template: 'Answer the following questions as best you can. You have access to
      the following tools:
```
but JSON is stored like this:
```json
{
  "llm_chain": {
    "memory": null,
    "verbose": false,
    "prompt": {
      "input_variables": [
        "input",
        "agent_scratchpad"
      ],
      "output_parser": null,
      "partial_variables": {},
      "template": "Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: {input}\nThought:{agent_scratchpad}",
      "template_format": "f-string",
      "validate_template": true,
      "_type": "prompt"
    },
    "llm": {
      "model_name": "text-davinci-003",
      "temperature": 0,
      "max_tokens": 256,
      "top_p": 1,
      "frequency_penalty": 0,
      "presence_penalty": 0,
      "n": 1,
      "best_of": 1,
      "request_timeout": null,
      "logit_bias": {},
      "_type": "openai"
    },
    "output_key": "text",
    "_type": "llm_chain"
  },
  "allowed_tools": [
    "Search",
    "Calculator"
  ],
  "_type": "zero-shot-react-description"
}
```
Note that the `_type` is incorrect in the YAML but correct in the JSON. Maybe we can file a bug report with Langchain if we decide to go with YAML for the Langchain agent. 🤔
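For what it's worth, this looks less like a Langchain bug and more like default PyYAML behavior: `yaml.dump` has no representer for `Enum` members, so it falls back to the unsafe `!!python/object/apply` tag via `__reduce__`, whereas `json` serializes a `str`-subclass enum as its plain string value. A minimal repro, using a stand-in enum rather than the real `langchain.agents.agent_types.AgentType` (which I believe is also a `str`/`Enum` mixin, though that is an assumption here):

```python
# Repro of the YAML-vs-JSON discrepancy with a stand-in enum
# (not the real langchain.agents.agent_types.AgentType).
import enum
import json

import yaml  # PyYAML


class AgentType(str, enum.Enum):
    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"


data = {"_type": AgentType.ZERO_SHOT_REACT_DESCRIPTION}

# yaml.dump has no representer for Enum members, so it falls back to
# the Python-specific !!python/object/apply tag via __reduce__:
print(yaml.dump(data))

# json sees a str subclass and serializes the plain string value:
print(json.dumps(data))

# yaml.safe_dump refuses to serialize the enum at all, which at least
# fails loudly instead of emitting an unsafe tag:
try:
    yaml.safe_dump(data)
except yaml.representer.RepresenterError as exc:
    print("safe_dump:", exc)
```

So a YAML serializer would need to convert enum members to their string values (as the JSON path effectively gets for free) before dumping.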
Thanks! `_type: !!python/object/apply:langchain.agents.agent_types.AgentType` does seem incorrect.