Invert the default output parsing for TextGenerator subtypes #8279
Conversation
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
mlflow/transformers.py (Outdated)
    for to_replace, replace in replacements.items():
        data_out = data_out.replace(to_replace, replace)
What if a user doesn't want the prompt, but wants to preserve \n in the output?
I can add an additional kwargs entry for that
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
mlflow/transformers.py (Outdated)
    ):
        """
        Parse the output from instruction pipelines to conform with other text generator
        pipeline types and remove line feed characters and other confusing outputs
        """
-       replacements = {"\n\n": " "}
+       replacements = {"\n\n": " ", "\n": " "}
I'd use a regular expression here (`\n+`) to shrink consecutive newline characters into a space. I think that conveys the intention more clearly than running the replacement twice.
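The reviewer's suggestion could look like this (a minimal standalone sketch; the helper name is illustrative, not the actual MLflow code):

```python
import re

def shrink_newlines(text: str) -> str:
    # Collapse each run of one or more newline characters into a single space;
    # one substitution expresses the intent instead of chained replace() calls.
    return re.sub(r"\n+", " ", text)

print(shrink_newlines("first\n\nsecond\nthird"))  # first second third
```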
good point. I'll update!
great point. Updated and added a spacing collapse for a weird edge case that can happen
docs/source/models.rst (Outdated)
    saving or logging the model: `"include_prompt": False`. To remove the newline characters from within the body
    of the generated text output, you can add the `"remove_newlines": True` option to the `inference_config` dictionary.
How about an option name like `shrink_newlines`? To me, `remove_newlines` sounds like `replace("\n", "")`.
Other candidates (rejected by me):
- `replace_newlines`: makes me wonder "replace with what?"
- `replace_newlines_with_space`: clear but too long
I like it :) changing!
    include_prompt = (
-       self.inference_config.pop("include_prompt", False) if self.inference_config else False
+       self.inference_config.pop("include_prompt", True) if self.inference_config else True
Does `include_prompt` need to be popped out?
Unfortunately, yes.
transformers pipeline execution includes validation for submitted kwargs. If we leave that inference kwarg entry in (by using `self.inference_config.get(...)`), we get:

E ValueError: The following `model_kwargs` are not used by the model: ['include_prompt'] (note: typos in the generate arguments will also show up in this list)
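To illustrate the point (a standalone sketch with a made-up config dict, not the MLflow source): pop() both reads the flag and removes it, so the remaining dict can be forwarded to the pipeline without tripping transformers' kwarg validation, whereas get() would leave the key behind.

```python
# Hypothetical inference_config dict; "include_prompt" is consumed by MLflow
# itself and must not reach the transformers pipeline's kwarg validation.
inference_config = {"max_new_tokens": 64, "include_prompt": True}

# pop() reads the flag AND removes it from the dict, so what remains is
# safe to forward as model kwargs.
include_prompt = (
    inference_config.pop("include_prompt", True) if inference_config else True
)

print(include_prompt)    # True
print(inference_config)  # {'max_new_tokens': 64}
```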
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
    ):
        """
        Parse the output from instruction pipelines to conform with other text generator
        pipeline types and remove line feed characters and other confusing outputs
        """
-       replacements = {"\n\n": " "}
+       replacements = {"\n+": " ", "\\s+": " "}
Is `\\s+` a newline? If the flag name is `shrink_newlines`, we should just shrink newlines.
Shrinking multiple spaces also has a risk. For example, you ask Dolly to give Python code and Dolly produces the following code:
a = " "
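The risk can be demonstrated concretely (a quick sketch; the response string is made up): collapsing all whitespace with `\s+` corrupts spacing inside a generated string literal, while collapsing only `\n+` leaves it intact.

```python
import re

# A made-up model response where spacing inside a string literal matters.
response = 'a = "   "\nprint(a)'

# Collapsing ALL whitespace runs corrupts the three-space literal:
print(re.sub(r"\s+", " ", response))  # a = " " print(a)

# Collapsing only newline runs preserves it:
print(re.sub(r"\n+", " ", response))  # a = "   " print(a)
```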
btw why do we need to replace `\\s+`? Does Dolly produce a response like `I am<tab>Dolly`?
If we're not declaring the match condition as a raw string (i.e., `r"\s+"`, which looks odd as a dict key), then the literal escape sequence is equivalent using either a single `\s+` or a double `\\s+`.
example:
import re

data = "Just\n\n testing\n something\n\n out\n\n\nhere.\n\n"
print("raw:")
print(data)
data = re.sub("\n+", " ", data)
print("remove newlines:")
print(data)
data_single = re.sub("\s+", " ", data)
print("remove extra spaces:")
print(data_single)
data_double = re.sub("\\s+", " ", data)
print("remove extra spaces double escape:")
print(data_double)
assert data_single == data_double
outputs:
raw:
Just
testing
something
out
here.
remove newlines:
Just testing something out here.
remove extra spaces:
Just testing something out here.
remove extra spaces double escape:
Just testing something out here.
LGTM! Thanks @BenWilson2 !
LGTM once we address what we discussed in the standup!
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
…8279) * Invert the default output parsing for TextGenerator subtypes Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com> * PR feedback Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com> * PR feedback Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com> * final feedback on naming Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com> --------- Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com> Signed-off-by: Larry O’Brien <larry.obrien@databricks.com>
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Inverts the output parsing default configuration, making newline removal and prompt stripping opt-in via an inference_config setting.