
prefer_splitting_right_hand_side_of_assignments preview style #8943

Merged
merged 1 commit into main from prefer_splitting_right_hand_side_of_assignments on Dec 13, 2023

Conversation

@MichaReiser (Member) commented Dec 1, 2023

Summary

This PR implements Black's prefer_splitting_right_hand_side_of_assignments preview style.

The gist of the new style is to prefer breaking the value before breaking the target or type annotation of an assignment:

aaaa["long_index"] = some_type

Before

aaaa[
    "long_index"
] = some_type

New

aaaa["long_index"] = (
    some_type
)

Closes #6975

Details

It turned out that there are a few more rules involved than just preferring to split the value before the targets. For example:

  • The first target never gets parenthesized, even if it is the only target
  • If the target right before the value (or the type annotation, or the type parameters) splits, avoid parenthesizing the value, because that leads to unnecessary parentheses
  • For call expressions: prefer breaking after the call expression's opening parenthesis and only parenthesize the entire call expression if that's insufficient (see the sketch after this list)
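
To make the call-expression rule concrete, here is a minimal, hypothetical sketch of how I'd expect it to apply with the default 88-character line width. All names (config, build_configuration, and so on) are invented for illustration and are not taken from the PR's tests:

```python
# Hypothetical input: a subscript target assigned the result of a call; the whole
# statement exceeds the line width.
config["primary_backend"] = build_configuration(connection_settings, retry_policy, defaults)

# Preferred: break after the call's opening parenthesis, because the target, the callee,
# and the opening parenthesis still fit on the first line.
config["primary_backend"] = build_configuration(
    connection_settings, retry_policy, defaults
)

# Only when even the target, the callee, and the opening parenthesis no longer fit does
# the entire value get parenthesized instead.
configuration_registry["primary_backend_connection_settings"] = (
    build_default_backend_configuration(connection_settings)
)
```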

I added extensive documentation to FormatStatementsLastExpression:

/// Formats the last expression in statements that start with a keyword (like `return`) or in which the value follows an operator (assignments).
///
/// The implementation avoids parenthesizing unsplittable values (like `None`, `True`, `False`, Names, a subset of strings)
/// if the value won't fit even when parenthesized.
///
/// ## Trailing comments
/// Trailing comments are inlined inside the `value`'s parentheses rather than formatted at the end
/// of the statement for unsplittable values if the `value` gets parenthesized.
///
/// Inlining the trailing comments prevents situations where the parenthesized value
/// still exceeds the configured line width but parenthesizing only helped to make the trailing comment fit.
/// Instead, the `value` only gets parenthesized if doing so makes both the `value` and the trailing comment fit.
/// See [PR 8431](https://github.com/astral-sh/ruff/pull/8431) for more details.
///
/// The implementation formats the statement's and value's trailing end of line comments:
/// * after the expression if the expression needs no parentheses (none are necessary, or `expand_parent` makes the group never fit).
/// * inside the parentheses if the expression exceeds the line-width.
///
/// ```python
/// a = loooooooooooooooooooooooooooong # with_comment
/// b = (
///     short # with_comment
/// )
/// ```
///
/// Which gets formatted to:
///
/// ```python
/// # formatted
/// a = (
///     loooooooooooooooooooooooooooong # with_comment
/// )
/// b = short # with_comment
/// ```
///
/// The long name gets parenthesized because it exceeds the configured line width, and the trailing comment of the
/// statement gets formatted inside (instead of outside) the parentheses.
///
/// No parentheses are added for `short` because it fits into the configured line length, regardless of whether
/// the comment exceeds the line width or not.
///
/// This logic isn't implemented in [`place_comment`] by associating trailing statement comments to the expression because
/// doing so breaks the suite empty lines formatting that relies on trailing comments to be stored on the statement.
pub(super) enum FormatStatementsLastExpression<'a> {
    /// Prefers to split what's to the left of `value` before splitting the value itself.
    ///
    /// ```python
    /// aaaaaaa[bbbbbbbb] = some_long_value
    /// ```
    ///
    /// This layout splits `aaaaaaa[bbbbbbbb]` first, assuming the whole statement exceeds the line width, resulting in
    ///
    /// ```python
    /// aaaaaaa[
    ///     bbbbbbbb
    /// ] = some_long_value
    /// ```
    ///
    /// This layout is preferred over [`RightToLeft`] if the left is unsplittable (a single keyword like `return` or a Name)
    /// because it has better performance characteristics.
    LeftToRight {
        /// The right side of an assignment or the value returned in a return statement.
        value: &'a Expr,
        /// The parent statement that encloses the `value` expression.
        statement: AnyNodeRef<'a>,
    },
    /// Prefers parenthesizing the value before splitting the left side. Specific to assignments.
    ///
    /// Formats what's to the left of `value` together with the assignment operator and the assigned `value`.
    /// This layout prefers parenthesizing the value over parenthesizing the left (target or type annotation):
    ///
    /// ```python
    /// aaaaaaa[bbbbbbbb] = some_long_value
    /// ```
    ///
    /// gets formatted to...
    ///
    /// ```python
    /// aaaaaaa[bbbbbbbb] = (
    ///     some_long_value
    /// )
    /// ```
    ///
    /// ... regardless of whether the value fits or not.
    ///
    /// The left side only gets parenthesized if it exceeds the configured line width on its own,
    /// is forced to split because of a magic trailing comma, or contains comments:
    ///
    /// ```python
    /// aaaaaaa[bbbbbbbb_exceeds_the_line_width] = some_long_value
    /// ```
    ///
    /// gets formatted to
    ///
    /// ```python
    /// aaaaaaa[
    ///     bbbbbbbb_exceeds_the_line_width
    /// ] = some_long_value
    /// ```
    ///
    /// The layout avoids parenthesizing the value when the left splits, because the extra
    /// parentheses, as shown in the example below, reduce readability.
    ///
    /// ```python
    /// aaaaaaa[
    ///     bbbbbbbb_exceeds_the_line_width
    /// ] = (
    ///     some_long_value
    /// )
    /// ```
    ///
    /// ## Non-fluent Call Expressions
    /// Non-fluent call expressions in the `value` position are only parenthesized if the opening parenthesis
    /// already exceeds the configured line length. The layout prefers splitting after the opening parenthesis
    /// if the `callee` expression and the opening parenthesis fit on the line.
    RightToLeft {
        /// The expression that comes before the assignment operator. This is either
        /// the last target, or the type annotation of an annotated assignment.
        before_operator: AnyBeforeOperator<'a>,
        /// The assignment operator. Either `Assign` (`=`) or the operator used by the augmented assignment statement.
        operator: AnyAssignmentOperator,
        /// The assigned `value`.
        value: &'a Expr,
        /// The assignment statement.
        statement: AnyNodeRef<'a>,
    },
}

Differences to Black

Black doesn't seem to implement this behavior for type alias statements. We apply it to them anyway to ensure all assignment-like statements are formatted the same (see the sketch below).
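
As a hypothetical sketch of what that means, assuming the alias below exceeds the configured line width and all names are invented, the preview style parenthesizes the value instead of splitting the alias's type parameters:

```python
# Input (too long for one line):
type AliasWithAVeryLongGenericName[SomeFairlyLongTypeParameter] = SomeExtremelyLongUnderlyingImplementationType

# Expected preview output: the type parameters stay on the first line and the value is
# parenthesized, matching how other assignment-like statements are handled.
type AliasWithAVeryLongGenericName[SomeFairlyLongTypeParameter] = (
    SomeExtremelyLongUnderlyingImplementationType
)
```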

Performance

The new right-to-left layout is more expensive than our existing layout because it requires using BestFitting (which allocates and needs to try multiple variants). The good news is that this layout is only necessary when the assignment has:

  • a target or type annotation that can split (e.g. a subscript)
  • multiple targets
  • type parameters

This is rare compared to most assignments, which are of the form a = b or a: b = c (a few hypothetical examples are sketched below).
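
For illustration, here are a few hypothetical assignments and the layout I'd expect them to take based on the list above (this is my reading of the heuristic, not an exhaustive specification):

```python
a = some_value                   # simple name target: existing left-to-right layout
a: SomeType = some_value         # unsplittable annotation: existing layout
x: dict[str, int] = some_value   # splittable (subscripted) annotation: right-to-left layout
table["key"] = some_value        # splittable (subscript) target: right-to-left layout
first = second = some_value      # multiple targets: right-to-left layout
type Alias[T] = dict[T, str]     # type parameters: right-to-left layout
```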

I checked Codspeed, and our micro-benchmarks only regress by 1-2%.

Test Plan

I added new tests and reviewed the Black-related preview style tests (which we now match, except for comment handling).

The poetry compatibility score improves from 0.96208 to 0.96224.

@MichaReiser force-pushed the prefer_splitting_right_hand_side_of_assignments branch from 79b44e8 to e6977a2 on December 1, 2023 07:23
@MichaReiser added the formatter (Related to the formatter) and preview (Related to preview mode features) labels on Dec 1, 2023
@MichaReiser force-pushed the prefer_splitting_right_hand_side_of_assignments branch from e6977a2 to 9c4e2b1 on December 1, 2023 07:43

github-actions bot commented Dec 1, 2023

ruff-ecosystem results

Formatter (stable)

✅ ecosystem check detected no format changes.

Formatter (preview)

ℹ️ ecosystem check detected format changes. (+753 -797 lines in 101 files in 41 projects)

PostHog/HouseWatch (+10 -10 lines across 1 file)

ruff format --preview

housewatch/clickhouse/backups.py~L36

     for shard, node in nodes:
         params["shard"] = shard
         if base_backup:
-            query_settings[
-                "base_backup"
-            ] = f"S3('{base_backup}/{shard}', '{aws_key}', '{aws_secret}')"
+            query_settings["base_backup"] = (
+                f"S3('{base_backup}/{shard}', '{aws_key}', '{aws_secret}')"
+            )
         final_query = query % (params or {}) if substitute_params else query
         client = Client(
             host=node["host_address"],

housewatch/clickhouse/backups.py~L123

     TO S3('https://%(bucket)s.s3.amazonaws.com/%(path)s', '%(aws_key)s', '%(aws_secret)s')
     ASYNC"""
     if base_backup:
-        query_settings[
-            "base_backup"
-        ] = f"S3('{base_backup}', '{aws_key}', '{aws_secret}')"
+        query_settings["base_backup"] = (
+            f"S3('{base_backup}', '{aws_key}', '{aws_secret}')"
+        )
     return run_query(
         QUERY,
         {

housewatch/clickhouse/backups.py~L178

                 TO S3('https://%(bucket)s.s3.amazonaws.com/%(path)s', '%(aws_key)s', '%(aws_secret)s')
                 ASYNC"""
     if base_backup:
-        query_settings[
-            "base_backup"
-        ] = f"S3('{base_backup}', '{aws_key}', '{aws_secret}')"
+        query_settings["base_backup"] = (
+            f"S3('{base_backup}', '{aws_key}', '{aws_secret}')"
+        )
     return run_query(
         QUERY,
         {

RasaHQ/rasa (+107 -107 lines across 14 files)

ruff format --preview

rasa/cli/utils.py~L132

 
         # add random value for assistant id, overwrite config file
         time_format = "%Y%m%d-%H%M%S"
-        config_data[
-            ASSISTANT_ID_KEY
-        ] = f"{time.strftime(time_format)}-{randomname.get_name()}"
+        config_data[ASSISTANT_ID_KEY] = (
+            f"{time.strftime(time_format)}-{randomname.get_name()}"
+        )
 
         rasa.shared.utils.io.write_yaml(
             data=config_data, target=config_file, should_preserve_key_order=True

rasa/core/policies/rule_policy.py~L774

         trackers_as_actions = rule_trackers_as_actions + story_trackers_as_actions
 
         # negative rules are not anti-rules, they are auxiliary to actual rules
-        self.lookup[
-            RULES_FOR_LOOP_UNHAPPY_PATH
-        ] = self._create_loop_unhappy_lookup_from_states(
-            trackers_as_states, trackers_as_actions
+        self.lookup[RULES_FOR_LOOP_UNHAPPY_PATH] = (
+            self._create_loop_unhappy_lookup_from_states(
+                trackers_as_states, trackers_as_actions
+            )
         )
 
     def train(

rasa/core/policies/ted_policy.py~L1264

             )
             self._prepare_encoding_layers(name)
 
-        self._tf_layers[
-            f"transformer.{DIALOGUE}"
-        ] = rasa_layers.prepare_transformer_layer(
-            attribute_name=DIALOGUE,
-            config=self.config,
-            num_layers=self.config[NUM_TRANSFORMER_LAYERS][DIALOGUE],
-            units=self.config[TRANSFORMER_SIZE][DIALOGUE],
-            drop_rate=self.config[DROP_RATE_DIALOGUE],
-            # use bidirectional transformer, because
-            # we will invert dialogue sequence so that the last turn is located
-            # at the first position and would always have
-            # exactly the same positional encoding
-            unidirectional=not self.max_history_featurizer_is_used,
+        self._tf_layers[f"transformer.{DIALOGUE}"] = (
+            rasa_layers.prepare_transformer_layer(
+                attribute_name=DIALOGUE,
+                config=self.config,
+                num_layers=self.config[NUM_TRANSFORMER_LAYERS][DIALOGUE],
+                units=self.config[TRANSFORMER_SIZE][DIALOGUE],
+                drop_rate=self.config[DROP_RATE_DIALOGUE],
+                # use bidirectional transformer, because
+                # we will invert dialogue sequence so that the last turn is located
+                # at the first position and would always have
+                # exactly the same positional encoding
+                unidirectional=not self.max_history_featurizer_is_used,
+            )
         )
 
         self._prepare_label_classification_layers(DIALOGUE)

rasa/core/policies/ted_policy.py~L1307

         # Attributes with sequence-level features also have sentence-level features,
         # all these need to be combined and further processed.
         if attribute_name in SEQUENCE_FEATURES_TO_ENCODE:
-            self._tf_layers[
-                f"sequence_layer.{attribute_name}"
-            ] = rasa_layers.RasaSequenceLayer(
-                attribute_name, attribute_signature, config_to_use
+            self._tf_layers[f"sequence_layer.{attribute_name}"] = (
+                rasa_layers.RasaSequenceLayer(
+                    attribute_name, attribute_signature, config_to_use
+                )
             )
         # Attributes without sequence-level features require some actual feature
         # processing only if they have sentence-level features. Attributes with no
         # sequence- and sentence-level features (dialogue, entity_tags, label) are
         # skipped here.
         elif SENTENCE in attribute_signature:
-            self._tf_layers[
-                f"sparse_dense_concat_layer.{attribute_name}"
-            ] = rasa_layers.ConcatenateSparseDenseFeatures(
-                attribute=attribute_name,
-                feature_type=SENTENCE,
-                feature_type_signature=attribute_signature[SENTENCE],
-                config=config_to_use,
+            self._tf_layers[f"sparse_dense_concat_layer.{attribute_name}"] = (
+                rasa_layers.ConcatenateSparseDenseFeatures(
+                    attribute=attribute_name,
+                    feature_type=SENTENCE,
+                    feature_type_signature=attribute_signature[SENTENCE],
+                    config=config_to_use,
+                )
             )
 
     def _prepare_encoding_layers(self, name: Text) -> None:

rasa/engine/graph.py~L107

         nodes = {}
         for node_name, serialized_node in serialized_graph_schema["nodes"].items():
             try:
-                serialized_node[
-                    "uses"
-                ] = rasa.shared.utils.common.class_from_module_path(
-                    serialized_node["uses"]
+                serialized_node["uses"] = (
+                    rasa.shared.utils.common.class_from_module_path(
+                        serialized_node["uses"]
+                    )
                 )
 
                 resource = serialized_node["resource"]

rasa/engine/recipes/default_recipe.py~L150

             else:
                 unique_types = set(component_types)
 
-            cls._registered_components[
-                registered_class.__name__
-            ] = cls.RegisteredComponent(
-                registered_class, unique_types, is_trainable, model_from
+            cls._registered_components[registered_class.__name__] = (
+                cls.RegisteredComponent(
+                    registered_class, unique_types, is_trainable, model_from
+                )
             )
             return registered_class
 

rasa/graph_components/validators/default_recipe_validator.py~L294

         Both of these look for the same entities based on the same training data
         leading to ambiguity in the results.
         """
-        extractors_in_configuration: Set[
-            Type[GraphComponent]
-        ] = self._component_types.intersection(TRAINABLE_EXTRACTORS)
+        extractors_in_configuration: Set[Type[GraphComponent]] = (
+            self._component_types.intersection(TRAINABLE_EXTRACTORS)
+        )
         if len(extractors_in_configuration) > 1:
             rasa.shared.utils.io.raise_warning(
                 f"You have defined multiple entity extractors that do the same job "

rasa/nlu/classifiers/diet_classifier.py~L1446

         # everything using a transformer and optionally also do masked language
         # modeling.
         self.text_name = TEXT
-        self._tf_layers[
-            f"sequence_layer.{self.text_name}"
-        ] = rasa_layers.RasaSequenceLayer(
-            self.text_name, self.data_signature[self.text_name], self.config
+        self._tf_layers[f"sequence_layer.{self.text_name}"] = (
+            rasa_layers.RasaSequenceLayer(
+                self.text_name, self.data_signature[self.text_name], self.config
+            )
         )
         if self.config[MASKED_LM]:
             self._prepare_mask_lm_loss(self.text_name)

rasa/nlu/classifiers/diet_classifier.py~L1468

                 DENSE_INPUT_DROPOUT: False,
             })
 
-            self._tf_layers[
-                f"feature_combining_layer.{self.label_name}"
-            ] = rasa_layers.RasaFeatureCombiningLayer(
-                self.label_name, self.label_signature[self.label_name], label_config
+            self._tf_layers[f"feature_combining_layer.{self.label_name}"] = (
+                rasa_layers.RasaFeatureCombiningLayer(
+                    self.label_name, self.label_signature[self.label_name], label_config
+                )
             )
 
             self._prepare_ffnn_layer(

rasa/nlu/featurizers/sparse_featurizer/lexical_syntactic_featurizer.py~L336

 
                 token = tokens[absolute_position]
                 for feature_name in self._feature_config[window_position]:
-                    token_features[
-                        (window_position, feature_name)
-                    ] = self._extract_raw_features_from_token(
-                        token=token,
-                        feature_name=feature_name,
-                        token_position=absolute_position,
-                        num_tokens=len(tokens),
+                    token_features[(window_position, feature_name)] = (
+                        self._extract_raw_features_from_token(
+                            token=token,
+                            feature_name=feature_name,
+                            token_position=absolute_position,
+                            num_tokens=len(tokens),
+                        )
                     )
 
             sentence_features.append(token_features)

rasa/nlu/selectors/response_selector.py~L430

         self, message: Message, prediction_dict: Dict[Text, Any], selector_key: Text
     ) -> None:
         message_selector_properties = message.get(RESPONSE_SELECTOR_PROPERTY_NAME, {})
-        message_selector_properties[
-            RESPONSE_SELECTOR_RETRIEVAL_INTENTS
-        ] = self.all_retrieval_intents
+        message_selector_properties[RESPONSE_SELECTOR_RETRIEVAL_INTENTS] = (
+            self.all_retrieval_intents
+        )
         message_selector_properties[selector_key] = prediction_dict
         message.set(
             RESPONSE_SELECTOR_PROPERTY_NAME,

rasa/nlu/selectors/response_selector.py~L793

             (self.text_name, self.config),
             (self.label_name, label_config),
         ]:
-            self._tf_layers[
-                f"sequence_layer.{attribute}"
-            ] = rasa_layers.RasaSequenceLayer(
-                attribute, self.data_signature[attribute], config
+            self._tf_layers[f"sequence_layer.{attribute}"] = (
+                rasa_layers.RasaSequenceLayer(
+                    attribute, self.data_signature[attribute], config
+                )
             )
 
         if self.config[MASKED_LM]:

rasa/shared/core/domain.py~L1496

                 if not response_text or "\n" not in response_text:
                     continue
                 # Has new lines, use `LiteralScalarString`
-                final_responses[utter_action][i][
-                    KEY_RESPONSES_TEXT
-                ] = LiteralScalarString(response_text)
+                final_responses[utter_action][i][KEY_RESPONSES_TEXT] = (
+                    LiteralScalarString(response_text)
+                )
 
         return final_responses
 

rasa/shared/nlu/training_data/formats/rasa_yaml.py~L529

             )
 
             if examples_have_metadata or example_texts_have_escape_chars:
-                intent[
-                    key_examples
-                ] = RasaYAMLWriter._render_training_examples_as_objects(converted)
+                intent[key_examples] = (
+                    RasaYAMLWriter._render_training_examples_as_objects(converted)
+                )
             else:
                 intent[key_examples] = RasaYAMLWriter._render_training_examples_as_text(
                     converted

rasa/utils/tensorflow/model_data.py~L735

         # if a label was skipped in current batch
         skipped = [False] * num_label_ids
 
-        new_data: DefaultDict[
-            Text, DefaultDict[Text, List[List[FeatureArray]]]
-        ] = defaultdict(lambda: defaultdict(list))
+        new_data: DefaultDict[Text, DefaultDict[Text, List[List[FeatureArray]]]] = (
+            defaultdict(lambda: defaultdict(list))
+        )
 
         while min(num_data_cycles) == 0:
             if shuffle:

rasa/utils/tensorflow/model_data.py~L888

         Returns:
             The test and train RasaModelData
         """
-        data_train: DefaultDict[
-            Text, DefaultDict[Text, List[FeatureArray]]
-        ] = defaultdict(lambda: defaultdict(list))
+        data_train: DefaultDict[Text, DefaultDict[Text, List[FeatureArray]]] = (
+            defaultdict(lambda: defaultdict(list))
+        )
         data_val: DefaultDict[Text, DefaultDict[Text, List[Any]]] = defaultdict(
             lambda: defaultdict(list)
         )

rasa/utils/tensorflow/models.py~L324

                 # We only need input, since output is always None and not
                 # consumed by our TF graphs.
                 batch_in = next(data_iterator)[0]
-                batch_out: Dict[
-                    Text, Union[np.ndarray, Dict[Text, Any]]
-                ] = self._rasa_predict(batch_in)
+                batch_out: Dict[Text, Union[np.ndarray, Dict[Text, Any]]] = (
+                    self._rasa_predict(batch_in)
+                )
                 if output_keys_expected:
                     batch_out = {
                         key: output

rasa/utils/tensorflow/rasa_layers.py~L442

         for feature_type, present in self._feature_types_present.items():
             if not present:
                 continue
-            self._tf_layers[
-                f"sparse_dense.{feature_type}"
-            ] = ConcatenateSparseDenseFeatures(
-                attribute=attribute,
-                feature_type=feature_type,
-                feature_type_signature=attribute_signature[feature_type],
-                config=config,
+            self._tf_layers[f"sparse_dense.{feature_type}"] = (
+                ConcatenateSparseDenseFeatures(
+                    attribute=attribute,
+                    feature_type=feature_type,
+                    feature_type_signature=attribute_signature[feature_type],
+                    config=config,
+                )
             )
 
     def _prepare_sequence_sentence_concat(

rasa/utils/tensorflow/rasa_layers.py~L851

                 not signature.is_sparse for signature in attribute_signature[SEQUENCE]
             ])
             if not expect_dense_seq_features:
-                self._tf_layers[
-                    self.SPARSE_TO_DENSE_FOR_TOKEN_IDS
-                ] = layers.DenseForSparse(
-                    units=2,
-                    use_bias=False,
-                    trainable=False,
-                    name=f"{self.SPARSE_TO_DENSE_FOR_TOKEN_IDS}.{attribute}",
+                self._tf_layers[self.SPARSE_TO_DENSE_FOR_TOKEN_IDS] = (
+                    layers.DenseForSparse(
+                        units=2,
+                        use_bias=False,
+                        trainable=False,
+                        name=f"{self.SPARSE_TO_DENSE_FOR_TOKEN_IDS}.{attribute}",
+                    )
                 )
 
     def _calculate_output_units(

Snowflake-Labs/snowcli (+4 -4 lines across 1 file)

ruff format --preview

src/snowcli/app/commands_registration/command_plugins_loader.py~L78

             )
             return None
         self._loaded_plugins[plugin_name] = loaded_plugin
-        self._loaded_command_paths[
-            loaded_plugin.command_spec.full_command_path
-        ] = loaded_plugin
+        self._loaded_command_paths[loaded_plugin.command_spec.full_command_path] = (
+            loaded_plugin
+        )
         return loaded_plugin
 
     def _load_plugin_spec(

apache/airflow (+27 -27 lines across 3 files)

ruff format --preview

tests/jobs/test_backfill_job.py~L875

         dag_maker.create_dagrun(state=None)
 
         executor = MockExecutor(parallelism=16)
-        executor.mock_task_results[
-            TaskInstanceKey(dag.dag_id, task1.task_id, DEFAULT_DATE, try_number=1)
-        ] = State.UP_FOR_RETRY
-        executor.mock_task_results[
-            TaskInstanceKey(dag.dag_id, task1.task_id, DEFAULT_DATE, try_number=2)
-        ] = State.UP_FOR_RETRY
+        executor.mock_task_results[TaskInstanceKey(dag.dag_id, task1.task_id, DEFAULT_DATE, try_number=1)] = (
+            State.UP_FOR_RETRY
+        )
+        executor.mock_task_results[TaskInstanceKey(dag.dag_id, task1.task_id, DEFAULT_DATE, try_number=2)] = (
+            State.UP_FOR_RETRY
+        )
         job = Job(executor=executor)
         job_runner = BackfillJobRunner(
             job=job,

tests/jobs/test_backfill_job.py~L903

         dr = dag_maker.create_dagrun(state=None)
 
         executor = MockExecutor(parallelism=16)
-        executor.mock_task_results[
-            TaskInstanceKey(dag.dag_id, task1.task_id, dr.run_id, try_number=1)
-        ] = State.UP_FOR_RETRY
+        executor.mock_task_results[TaskInstanceKey(dag.dag_id, task1.task_id, dr.run_id, try_number=1)] = (
+            State.UP_FOR_RETRY
+        )
         executor.mock_task_fail(dag.dag_id, task1.task_id, dr.run_id, try_number=2)
         job = Job(executor=executor)
         job_runner = BackfillJobRunner(

tests/providers/amazon/aws/executors/ecs/test_ecs_executor.py~L856

 
         os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.REGION_NAME}".upper()] = "us-west-1"
         os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.CLUSTER}".upper()] = "some-cluster"
-        os.environ[
-            f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.CONTAINER_NAME}".upper()
-        ] = "container-name"
-        os.environ[
-            f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.TASK_DEFINITION}".upper()
-        ] = "some-task-def"
+        os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.CONTAINER_NAME}".upper()] = (
+            "container-name"
+        )
+        os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.TASK_DEFINITION}".upper()] = (
+            "some-task-def"
+        )
         os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.LAUNCH_TYPE}".upper()] = "FARGATE"
         os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.PLATFORM_VERSION}".upper()] = "LATEST"
         os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.ASSIGN_PUBLIC_IP}".upper()] = "False"

tests/providers/amazon/aws/executors/ecs/test_ecs_executor.py~L872

         assert raised.match("At least one subnet is required to run a task.")
 
     def test_config_defaults_are_applied(self, assign_subnets):
-        os.environ[
-            f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.CONTAINER_NAME}".upper()
-        ] = "container-name"
+        os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.CONTAINER_NAME}".upper()] = (
+            "container-name"
+        )
         from airflow.providers.amazon.aws.executors.ecs import ecs_executor_config
 
         task_kwargs = _recursive_flatten_dict(ecs_executor_config.build_task_kwargs())

tests/providers/amazon/aws/executors/ecs/test_ecs_executor.py~L1078

 
         executor.ecs = ecs_mock
 
-        os.environ[
-            f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.CHECK_HEALTH_ON_STARTUP}".upper()
-        ] = "False"
+        os.environ[f"AIRFLOW__{CONFIG_GROUP_NAME}__{AllEcsConfigKeys.CHECK_HEALTH_ON_STARTUP}".upper()] = (
+            "False"
+        )
 
         executor.start()
 

tests/system/providers/papermill/conftest.py~L49

 
 @pytest.fixture(scope="session", autouse=True)
 def airflow_conn(remote_kernel):
-    os.environ[
-        "AIRFLOW_CONN_JUPYTER_KERNEL_DEFAULT"
-    ] = '{"host": "localhost", "extra": {"shell_port": 60316} }'
+    os.environ["AIRFLOW_CONN_JUPYTER_KERNEL_DEFAULT"] = (
+        '{"host": "localhost", "extra": {"shell_port": 60316} }'
+    )

aws/aws-sam-cli (+24 -24 lines across 6 files)

ruff format --preview

samcli/commands/_utils/options.py~L748

     def hook_name_processer_wrapper(f):
         configuration_setup_params = ()
         configuration_setup_attrs = {}
-        configuration_setup_attrs[
-            "help"
-        ] = "This is a hidden click option whose callback function to run the provided hook package."
+        configuration_setup_attrs["help"] = (
+            "This is a hidden click option whose callback function to run the provided hook package."
+        )
         configuration_setup_attrs["is_eager"] = True
         configuration_setup_attrs["expose_value"] = False
         configuration_setup_attrs["hidden"] = True

samcli/hook_packages/terraform/hooks/prepare/resource_linking.py~L158

     cfn_resource_update_call_back_function: Callable[[Dict, List[ReferenceType]], None]
     linking_exceptions: ResourcePairExceptions
     # function to extract the terraform destination value from the linking field value
-    tf_destination_value_extractor_from_link_field_value_function: Callable[
-        [str], str
-    ] = _default_tf_destination_value_id_extractor
+    tf_destination_value_extractor_from_link_field_value_function: Callable[[str], str] = (
+        _default_tf_destination_value_id_extractor
+    )
 
 
 class ResourceLinker:

samcli/lib/list/endpoints/endpoints_producer.py~L469

             resource.get(RESOURCE_TYPE, "") == AWS_APIGATEWAY_DOMAIN_NAME
             or resource.get(RESOURCE_TYPE, "") == AWS_APIGATEWAY_V2_DOMAIN_NAME
         ):
-            response_domain_dict[
-                resource.get(LOGICAL_RESOURCE_ID, "")
-            ] = f'https://{resource.get(PHYSICAL_RESOURCE_ID, "")}'
+            response_domain_dict[resource.get(LOGICAL_RESOURCE_ID, "")] = (
+                f'https://{resource.get(PHYSICAL_RESOURCE_ID, "")}'
+            )
     return response_domain_dict
 
 

tests/integration/buildcmd/test_build_terraform_applications.py~L80

             command_list_parameters["use_container"] = True
             command_list_parameters["build_image"] = self.docker_tag
             if self.override:
-                command_list_parameters[
-                    "container_env_var"
-                ] = "TF_VAR_HELLO_FUNCTION_SRC_CODE=./artifacts/HelloWorldFunction2"
+                command_list_parameters["container_env_var"] = (
+                    "TF_VAR_HELLO_FUNCTION_SRC_CODE=./artifacts/HelloWorldFunction2"
+                )
 
         environment_variables = os.environ.copy()
         if self.override:

tests/unit/commands/_utils/test_template.py~L286

                 self.expected_result,
             )
 
-            expected_template_dict["Resources"]["MyResourceWithRelativePath"]["Metadata"][
-                "aws:asset:path"
-            ] = self.expected_result
+            expected_template_dict["Resources"]["MyResourceWithRelativePath"]["Metadata"]["aws:asset:path"] = (
+                self.expected_result
+            )
 
             result = _update_relative_paths(template_dict, self.src, self.dest)
 

tests/unit/commands/deploy/test_auth_utils.py~L56

         ]
         # setup authorizer and auth explicitly on the event properties.
         event_properties["Auth"] = {"ApiKeyRequired": True, "Authorizer": None}
-        self.template_dict["Resources"]["HelloWorldFunction"]["Properties"]["Events"]["HelloWorld"][
-            "Properties"
-        ] = event_properties
+        self.template_dict["Resources"]["HelloWorldFunction"]["Properties"]["Events"]["HelloWorld"]["Properties"] = (
+            event_properties
+        )
         _auth_per_resource = auth_per_resource([Stack("", "", "", {}, self.template_dict)])
         self.assertEqual(_auth_per_resource, [("HelloWorldFunction", True)])
 

commaai/openpilot (+7 -7 lines across 1 file)

ruff format --preview

tools/replay/lib/ui_helpers.py~L246

 def get_blank_lid_overlay(UP):
     lid_overlay = np.zeros((UP.lidar_x, UP.lidar_y), "uint8")
     # Draw the car.
-    lid_overlay[
-        int(round(UP.lidar_car_x - UP.car_hwidth)) : int(round(UP.lidar_car_x + UP.car_hwidth)), int(round(UP.lidar_car_y - UP.car_front))
-    ] = UP.car_color
-    lid_overlay[
-        int(round(UP.lidar_car_x - UP.car_hwidth)) : int(round(UP.lidar_car_x + UP.car_hwidth)), int(round(UP.lidar_car_y + UP.car_back))
-    ] = UP.car_color
+    lid_overlay[int(round(UP.lidar_car_x - UP.car_hwidth)) : int(round(UP.lidar_car_x + UP.car_hwidth)), int(round(UP.lidar_car_y - UP.car_front))] = (
+        UP.car_color
+    )
+    lid_overlay[int(round(UP.lidar_car_x - UP.car_hwidth)) : int(round(UP.lidar_car_x + UP.car_hwidth)), int(round(UP.lidar_car_y + UP.car_back))] = (
+        UP.car_color
+    )
     lid_overlay[int(round(UP.lidar_car_x - UP.car_hwidth)), int(round(UP.lidar_car_y - UP.car_front)) : int(round(UP.lidar_car_y + UP.car_back))] = UP.car_color
     lid_overlay[int(round(UP.lidar_car_x + UP.car_hwidth)), int(round(UP.lidar_car_y - UP.car_front)) : int(round(UP.lidar_car_y + UP.car_back))] = UP.car_color
     return lid_overlay

demisto/content (+65 -79 lines across 8 files)

ruff format --preview --exclude Packs/ThreatQ/Integrations/ThreatQ/ThreatQ.py

Packs/Base/Scripts/DBotMLFetchData/DBotMLFetchData.py~L1098

         durations = []
     else:
         load_external_resources()
-        (
-            X,
-            exceptions_log,
-            short_text_indices,
-            exception_indices,
-            timeout_indices,
-            durations,
-        ) = extract_features_from_all_incidents(incidents_df, label_fields)
+        X, exceptions_log, short_text_indices, exception_indices, timeout_indices, durations = (
+            extract_features_from_all_incidents(incidents_df, label_fields)
+        )
 
     return {
         "X": X,

Packs/ExportIndicators/Integrations/ExportIndicators/ExportIndicators.py~L806

             ],
         )
         resp.cache_control.max_age = max_age
-        resp.cache_control[
-            "stale-if-error"
-        ] = "600"  # number of seconds we are willing to serve stale content when there is an error
+        resp.cache_control["stale-if-error"] = (
+            "600"  # number of seconds we are willing to serve stale content when there is an error
+        )
         return resp
 
     except Exception:

Packs/GoogleChronicleBackstory/Integrations/GoogleChronicleBackstory/GoogleChronicleBackstory.py~L2291

         return [], curatedrule_detection_to_process, curatedrule_detection_to_pull, pending_curatedrule_id, simple_backoff_rules
 
     # get curated rule detections using API call.
-    (
-        curatedrule_detection_to_process,
-        curatedrule_detection_to_pull,
-        pending_curatedrule_id,
-        simple_backoff_rules,
-    ) = get_max_fetch_curatedrule_detections(
-        client_obj,
-        start_time,
-        end_time,
-        max_fetch,
-        curatedrule_detection_to_process,
-        curatedrule_detection_to_pull,
-        pending_curatedrule_id,
-        alert_state,
-        simple_backoff_rules,
-        fetch_detection_by_list_basis,
+    (curatedrule_detection_to_process, curatedrule_detection_to_pull, pending_curatedrule_id, simple_backoff_rules) = (
+        get_max_fetch_curatedrule_detections(
+            client_obj,
+            start_time,
+            end_time,
+            max_fetch,
+            curatedrule_detection_to_process,
+            curatedrule_detection_to_pull,
+            pending_curatedrule_id,
+            alert_state,
+            simple_backoff_rules,
+            fetch_detection_by_list_basis,
+        )
     )
 
     if len(curatedrule_detection_to_process) > max_fetch:

Packs/GoogleChronicleBackstory/Integrations/GoogleChronicleBackstory/GoogleChronicleBackstory_test.py~L3063

     detection_to_pull = {"rule_id": "rule_1", "next_page_token": "next_page_token"}
     simple_backoff_rules = {}
     for _ in range(93):
-        (
-            detection_incidents,
-            detection_to_pull,
-            pending_rule_or_version_id,
-            simple_backoff_rules,
-        ) = get_max_fetch_curatedrule_detections(
-            client,
-            "st_dummy",
-            "et_dummy",
-            5,
-            [],
-            detection_to_pull,
-            pending_rule_or_version_id,
-            "",
-            simple_backoff_rules,
-            "CREATED_TIME",
+        detection_incidents, detection_to_pull, pending_rule_or_version_id, simple_backoff_rules = (
+            get_max_fetch_curatedrule_detections(
+                client,
+                "st_dummy",
+                "et_dummy",
+                5,
+                [],
+                detection_to_pull,
+                pending_rule_or_version_id,
+                "",
+                simple_backoff_rules,
+                "CREATED_TIME",
+            )
         )
 
     assert client.http_client.request.call_count == 93

Packs/GoogleChronicleBackstory/Integrations/GoogleChronicleBackstory/GoogleChronicleBackstory_test.py~L3115

 
     simple_backoff_rules = {}
     for _ in range(5):
-        (
-            detection_incidents,
-            detection_to_pull,
-            pending_rule_or_version_id,
-            simple_backoff_rules,
-        ) = get_max_fetch_curatedrule_detections(
-            client,
-            "st_dummy",
-            "et_dummy",
-            15,
-            [],
-            detection_to_pull,
-            pending_rule_or_version_id,
-            "",
-            simple_backoff_rules,
-            "CREATED_TIME",
+        detection_incidents, detection_to_pull, pending_rule_or_version_id, simple_backoff_rules = (
+            get_max_fetch_curatedrule_detections(
+                client,
+                "st_dummy",
+                "et_dummy",
+                15,
+                [],
+                detection_to_pull,
+                pending_rule_or_version_id,
+                "",
+                simple_backoff_rules,
+                "CREATED_TIME",
+            )
         )
 
 

Packs/HealthCheck/Scripts/HealthCheckSystemDiagnostics/HealthCheckSystemDiagnostics.py~L78

         elif dataSource == "bigTasks":
             taskId = re.match(r"(?P<incidentid>\d+)##(?P<taskid>[\d+])##(?P<pbiteration>-\d+|\d+)", entry["taskId"])
             if taskId is not None:
-                newEntry[
-                    "details"
-                ] = f"Playbook:{entry['playbookName']},\n TaskName:{entry['taskName']},\n TaskID:{taskId['taskid']}"
+                newEntry["details"] = (
+                    f"Playbook:{entry['playbookName']},\n TaskName:{entry['taskName']},\n TaskID:{taskId['taskid']}"
+                )
                 newEntry["size"] = FormatSize(entry["taskSize"])
                 newEntry["incidentid"] = entry["investigationId"]
                 newFormat.append(newEntry)

Packs/PAN-OS/Integrations/Panorama/Panorama.py~L10744

         """
         result = []
         if style == "device group":
-            commit_groups: Union[
-                List[DeviceGroupInformation], List[TemplateStackInformation]
-            ] = PanoramaCommand.get_device_groups(topology, resolve_host_id(device))
+            commit_groups: Union[List[DeviceGroupInformation], List[TemplateStackInformation]] = (
+                PanoramaCommand.get_device_groups(topology, resolve_host_id(device))
+            )
             commit_group_names = set([x.name for x in commit_groups])
         elif style == "template stack":
             commit_groups = PanoramaCommand.get_template_stacks(topology, resolve_host_id(device))

Packs/TOPdesk/Integrations/TOPdesk/TOPdesk.py~L586

                     elif isinstance(sub_value, dict):
                         capitalized_output[capitalize(field)][capitalize(sub_field)] = {}
                         for sub_sub_field, sub_sub_value in sub_value.items():
-                            capitalized_output[capitalize(field)][capitalize(sub_field)][
-                                capitalize(sub_sub_field)
-                            ] = sub_sub_value  # Support up to dict[x: dict[y: dict]]
+                            capitalized_output[capitalize(field)][capitalize(sub_field)][capitalize(sub_sub_field)] = (
+                                sub_sub_value  # Support up to dict[x: dict[y: dict]]
+                            )
         capitalized_outputs.append(capitalized_output)
 
     return capitalized_outputs

Tests/Marketplace/upload_packs.py~L1158

         f'{GCPConfig.versions_metadata_contents["version_map"][override_corepacks_server_version]["file_version"]} to'
         f'{override_corepacks_file_version}'
     )
-    GCPConfig.versions_metadata_contents["version_map"][override_corepacks_server_version][
-        "file_version"
-    ] = override_corepacks_file_version
+    GCPConfig.versions_metadata_contents["version_map"][override_corepacks_server_version]["file_version"] = (
+        override_corepacks_file_version
+    )
 
 
 def upload_server_versions_metadata(artifacts_dir: str):

fronzbot/blinkpy (+2 -2 lines across 1 file)

ruff format --preview

tests/test_sync_module.py~L32

         self.blink: Blink = Blink(motion_interval=0, session=mock.AsyncMock())
         self.blink.last_refresh = 0
         self.blink.urls = BlinkURLHandler("test")
-        self.blink.sync["test"]: (BlinkSyncModule) = BlinkSyncModule(
+        self.blink.sync["test"]: BlinkSyncModule = BlinkSyncModule(
             self.blink, "test", "1234", []
         )
         self.blink.sync["test"].network_info = {"network": {"armed": True}}

latchbio/latch (+4 -5 lines across 1 file)

ruff format --preview

latch_cli/centromere/ctx.py~L283

                 self.public_key = generate_temporary_ssh_credentials(self.ssh_key_path)
 
                 if use_new_centromere:
-                    (
-                        self.internal_ip,
-                        self.username,
-                    ) = self.provision_register_deployment()
+                    self.internal_ip, self.username = (
+                        self.provision_register_deployment()
+                    )
                 else:
                     self.internal_ip, self.username = self.get_old_centromere_info()
 

mlflow/mlflow (+39 -39 lines across 4 files)

ruff format --preview

mlflow/langchain/api_request_parallel_processor.py~L172

             status_tracker.complete_task(success=True)
             self.results.append((self.index, response))
         except Exception as e:
-            self.errors[
-                self.index
-            ] = f"error: {e!r} {traceback.format_exc()}\n request payload: {self.request_json}"
+            self.errors[self.index] = (
+                f"error: {e!r} {traceback.format_exc()}\n request payload: {self.request_json}"
+            )
             status_tracker.increment_num_api_errors()
             status_tracker.complete_task(success=False)
 

mlflow/pyspark/ml/init.py~L950

             )
             artifact_dict[param_search_estimator_name] = {}
 
-            artifact_dict[param_search_estimator_name][
-                "tuning_parameter_map_list"
-            ] = _get_tuning_param_maps(
-                param_search_estimator, autologging_metadata.uid_to_indexed_name_map
+            artifact_dict[param_search_estimator_name]["tuning_parameter_map_list"] = (
+                _get_tuning_param_maps(
+                    param_search_estimator, autologging_metadata.uid_to_indexed_name_map
+                )
             )
 
-            artifact_dict[param_search_estimator_name][
-                "tuned_estimator_parameter_map"
-            ] = _get_instance_param_map_recursively(
-                param_search_estimator.getEstimator(),
-                1,
-                autologging_metadata.uid_to_indexed_name_map,
+            artifact_dict[param_search_estimator_name]["tuned_estimator_parameter_map"] = (
+                _get_instance_param_map_recursively(
+                    param_search_estimator.getEstimator(),
+                    1,
+                    autologging_metadata.uid_to_indexed_name_map,
+                )
             )
 
         if artifact_dict:

mlflow/server/auth/init.py~L569

         len(response_message.registered_models) < request_message.max_results
         and response_message.next_page_token != ""
     ):
-        refetched: PagedList[
-            RegisteredModel
-        ] = _get_model_registry_store().search_registered_models(
-            filter_string=request_message.filter,
-            max_results=request_message.max_results,
-            order_by=request_message.order_by,
-            page_token=response_message.next_page_token,
+        refetched: PagedList[RegisteredModel] = (
+            _get_model_registry_store().search_registered_models(
+                filter_string=request_message.filter,
+                max_results=request_message.max_results,
+                order_by=request_message.order_by,
+                page_token=response_message.next_page_token,
+            )
         )
         refetched = refetched[
             : request_message.max_results - len(response_message.registered_models)

mlflow/store/tracking/sqlalchemy_store.py~L145

                 # inefficiency from multiple threads waiting for the lock to check for engine
                 # existence if it has already been created.
                 if db_uri not in SqlAlchemyStore._db_uri_sql_alchemy_engine_map:
-                    SqlAlchemyStore._db_uri_sql_alchemy_engine_map[
-                        db_uri
-                    ] = mlflow.store.db.utils.create_sqlalchemy_engine_with_retry(db_uri)
+                    SqlAlchemyStore._db_uri_sql_alchemy_engine_map[db_uri] = (
+                        mlflow.store.db.utils.create_sqlalchemy_engine_with_retry(db_uri)
+                    )
         self.engine = SqlAlchemyStore._db_uri_sql_alchemy_engine_map[db_uri]
         # On a completely fresh MLflow installation against an empty database (verify database
         # emptiness by checking that 'experiments' etc aren't in the list of table names), run all

mlflow/store/tracking/sqlalchemy_store.py~L1412

             )
             dataset_uuids = {}
             for existing_dataset in existing_datasets:
-                dataset_uuids[
-                    (existing_dataset.name, existing_dataset.digest)
-                ] = existing_dataset.dataset_uuid
+                dataset_uuids[(existing_dataset.name, existing_dataset.digest)] = (
+                    existing_dataset.dataset_uuid
+                )
 
             # collect all objects to write to DB in a single list
             objs_to_write = []

mlflow/store/tracking/sqlalchemy_store.py~L1423

             for dataset_input in dataset_inputs:
                 if (dataset_input.dataset.name, dataset_input.dataset.digest) not in dataset_uuids:
                     new_dataset_uuid = uuid.uuid4().hex
-                    dataset_uuids[
-                        (dataset_input.dataset.name, dataset_input.dataset.digest)
-                    ] = new_dataset_uuid
+                    dataset_uuids[(dataset_input.dataset.name, dataset_input.dataset.digest)] = (
+                        new_dataset_uuid
+                    )
                     objs_to_write.append(
                         SqlDataset(
                             dataset_uuid=new_dataset_uuid,

mlflow/store/tracking/sqlalchemy_store.py~L1451

             )
             input_uuids = {}
             for existing_input in existing_inputs:
-                input_uuids[
-                    (existing_input.source_id, existing_input.destination_id)
-                ] = existing_input.input_uuid
+                input_uuids[(existing_input.source_id, existing_input.destination_id)] = (
+                    existing_input.input_uuid
+                )
 
             # add input edges to objs_to_write
             for dataset_input in dataset_inputs:

mlflow/store/tracking/sqlalchemy_store.py~L1462

                 ]
                 if (dataset_uuid, run_id) not in input_uuids:
                     new_input_uuid = uuid.uuid4().hex
-                    input_uuids[
-                        (dataset_input.dataset.name, dataset_input.dataset.digest)
-                    ] = new_input_uuid
+                    input_uuids[(dataset_input.dataset.name, dataset_input.dataset.digest)] = (
+                        new_input_uuid
+                    )
                     objs_to_write.append(
                         SqlInput(
                             input_uuid=new_input_uuid,

pandas-dev/pandas (+31 -31 lines across 4 files)

ruff format --preview

pandas/core/ops/docstrings.py~L420

     if reverse_op is not None:
         _op_descriptions[reverse_op] = _op_descriptions[key].copy()
         _op_descriptions[reverse_op]["reverse"] = key
-        _op_descriptions[key][
-            "see_also_desc"
-        ] = f"Reverse of the {_op_descriptions[key]['desc']} operator, {_py_num_ref}"
-        _op_descriptions[reverse_op][
-            "see_also_desc"
-        ] = f"Element-wise {_op_descriptions[key]['desc']}, {_py_num_ref}"
+        _op_descriptions[key]["see_also_desc"] = (
+            f"Reverse of the {_op_descriptions[key]['desc']} operator, {_py_num_ref}"
+        )
+        _op_descriptions[reverse_op]["see_also_desc"] = (
+            f"Element-wise {_op_descriptions[key]['desc']}, {_py_num_ref}"
+        )
 
 _flex_doc_SERIES = """
 Return {desc} of series and other, element-wise (binary operator `{op_name}`).

pandas/core/reshape/melt.py~L122

     if frame.shape[1] > 0 and not any(
         not isinstance(dt, np.dtype) and dt._supports_2d for dt in frame.dtypes
     ):
-        mdata[value_name] = concat([
-            frame.iloc[:, i] for i in range(frame.shape[1])
-        ]).values
+        mdata[value_name] = (
+            concat([frame.iloc[:, i] for i in range(frame.shape[1])]).values
+        )
     else:
         mdata[value_name] = frame._values.ravel("F")
     for i, col in enumerate(var_name):

pandas/io/formats/style_render.py~L314

             max_cols,
         )
 
-        self.cellstyle_map_columns: DefaultDict[
-            tuple[CSSPair, ...], list[str]
-        ] = defaultdict(list)
+        self.cellstyle_map_columns: DefaultDict[tuple[CSSPair, ...], list[str]] = (
+            defaultdict(list)
+        )
         head = self._translate_header(sparse_cols, max_cols)
         d.update({"head": head})
 

pandas/io/formats/style_render.py~L329

         self.cellstyle_map: DefaultDict[tuple[CSSPair, ...], list[str]] = defaultdict(
             list
         )
-        self.cellstyle_map_index: DefaultDict[
-            tuple[CSSPair, ...], list[str]
-        ] = defaultdict(list)
+        self.cellstyle_map_index: DefaultDict[tuple[CSSPair, ...], list[str]] = (
+            defaultdict(list)
+        )
         body: list = self._translate_body(idx_lengths, max_rows, max_cols)
         d.update({"body": body})
 

pandas/io/formats/style_render.py~L776

             )
 
             if self.cell_ids:
-                header_element[
-                    "id"
-                ] = f"{self.css['level']}{c}_{self.css['row']}{r}"  # id is given
+                header_element["id"] = (
+                    f"{self.css['level']}{c}_{self.css['row']}{r}"  # id is given
+                )
             if (
                 header_element_visible
                 and (r, c) in self.ctx_index

pandas/tests/io/excel/test_writers.py~L1226

         }
 
         if PY310:
-            msgs[
-                "openpyxl"
-            ] = "Workbook.__init__() got an unexpected keyword argument 'foo'"
-            msgs[
-                "xlsxwriter"
-            ] = "Workbook.__init__() got an unexpected keyword argument 'foo'"
+            msgs["openpyxl"] = (
+                "Workbook.__init__() got an unexpected keyword argument 'foo'"
+            )
+            msgs["xlsxwriter"] = (
+                "Workbook.__init__() got an unexpected keyword argument 'foo'"
+            )
 
         # Handle change in error message for openpyxl (write and append mode)
         if engine == "openpyxl" and not os.path.exists(path):
-            msgs[
-                "openpyxl"
-            ] = r"load_workbook() got an unexpected keyword argument 'foo'"
+            msgs["openpyxl"] = (
+                r"load_workbook() got an unexpected keyword argument 'foo'"
+            )
 
         with pytest.raises(TypeError, match=re.escape(msgs[engine])):
             df.to_excel(

prefecthq/prefect (+138 -138 lines across 21 files)

ruff format --preview

src/prefect/_internal/pydantic/annotations/pendulum.py~L13

 
 
 class _PendulumDateTimeAnnotation:
-    _pendulum_type: t.Type[
-        t.Union[pendulum.DateTime, pendulum.Date, pendulum.Time]
-    ] = pendulum.DateTime
+    _pendulum_type: t.Type[t.Union[pendulum.DateTime, pendulum.Date, pendulum.Time]] = (
+        pendulum.DateTime
+    )
 
     _pendulum_types_to_schemas = {
         pendulum.DateTime: core_schema.datetime_schema(),

src/prefect/_vendor/fastapi/routing.py~L408

             methods = ["GET"]
         self.methods: Set[str] = {method.upper() for method in methods}
         if isinstance(generate_unique_id_function, DefaultPlaceholder):
-            current_generate_unique_id: Callable[
-                ["APIRoute"], str
-            ] = generate_unique_id_function.value
+            current_generate_unique_id: Callable[["APIRoute"], str] = (
+                generate_unique_id_function.value
+            )
         else:
             current_generate_unique_id = generate_unique_id_function
         self.unique_id = self.operation_id or current_generate_unique_id(self)

src/prefect/_vendor/fastapi/routing.py~L433

             # would pass the validation and be returned as is.
             # By being a new field, no inheritance will be passed as is. A new model
             # will be always created.
-            self.secure_cloned_response_field: Optional[
-                ModelField
-            ] = create_cloned_field(self.response_field)
+            self.secure_cloned_response_field: Optional[ModelField] = (
+                create_cloned_field(self.response_field)
+            )
         else:
             self.response_field = None  # type: ignore
             self.secure_cloned_response_field = None

src/prefect/_vendor/fastapi/utils.py~L39

     from .routing import APIRoute
 
 # Cache for `create_cloned_field`
-_CLONED_TYPES_CACHE: MutableMapping[
-    Type[BaseModel], Type[BaseModel]
-] = WeakKeyDictionary()
+_CLONED_TYPES_CACHE: MutableMapping[Type[BaseModel], Type[BaseModel]] = (
+    WeakKeyDictionary()
+)
 
 
 def is_body_allowed_for_status_code(status_code: Union[int, str, None]) -> bool:

src/prefect/blocks/core.py~L257

                                     type_._to_block_schema_reference_dict(),
                                 ]
                             else:
-                                refs[
-                                    field.name
-                                ] = type_._to_block_schema_reference_dict()
+                                refs[field.name] = (
+                                    type_._to_block_schema_reference_dict()
+                                )
 
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)

src/prefect/blocks/notifications.py~L24

     An abstract class for sending notifications using Apprise.
     """
 
-    notify_type: Literal[
-        "prefect_default", "info", "success", "warning", "failure"
-    ] = Field(
-        default=PREFECT_NOTIFY_TYPE_DEFAULT,
-        description=(
-            "The type of notification being performed; the prefect_default "
-            "is a plain notification that does not attach an image."
-        ),
+    notify_type: Literal["prefect_default", "info", "success", "warning", "failure"] = (
+        Field(
+            default=PREFECT_NOTIFY_TYPE_DEFAULT,
+            description=(
+                "The type of notification being performed; the prefect_default "
+                "is a plain notification that does not attach an image."
+            ),
+        )
     )
 
     def __init__(self, *args, **kwargs):

src/prefect/cli/_prompts.py~L482

                 import prefect_docker
 
             credentials_block = prefect_docker.DockerRegistryCredentials
-            push_step[
-                "credentials"
-            ] = "{{ prefect_docker.docker-registry-credentials.docker_registry_creds_name }}"
+            push_step["credentials"] = (
+                "{{ prefect_docker.docker-registry-credentials.docker_registry_creds_name }}"
+            )
         else:
             credentials_block = DockerRegistry
-            push_step[
-                "credentials"
-            ] = "{{ prefect.docker-registry.docker_registry_creds_name }}"
+            push_step["credentials"] = (
+                "{{ prefect.docker-registry.docker_registry_creds_name }}"
+            )
         docker_registry_creds_name = f"deployment-{slugify(deployment_config['name'])}-{slugify(deployment_config['work_pool']['name'])}-registry-creds"
         create_new_block = False
         try:

src/prefect/context.py~L137

     )
 
     # Failures will be a tuple of (exception, instance, args, kwargs)
-    _instance_init_failures: Dict[
-        Type[T], List[Tuple[Exception, T, Tuple, Dict]]
-    ] = PrivateAttr(default_factory=lambda: defaultdict(list))
+    _instance_init_failures: Dict[Type[T], List[Tuple[Exception, T, Tuple, Dict]]] = (
+        PrivateAttr(default_factory=lambda: defaultdict(list))
+    )
 
     block_code_execution: bool = False
     capture_failures: bool = False

src/prefect/deployments/deployments.py~L440

             )
         )
         if all_fields["storage"]:
-            all_fields["storage"][
-                "_block_type_slug"
-            ] = self.storage.get_block_type_slug()
+            all_fields["storage"]["_block_type_slug"] = (
+                self.storage.get_block_type_slug()
+            )
         if all_fields["infrastructure"]:
-            all_fields["infrastructure"][
-                "_block_type_slug"
-            ] = self.infrastructure.get_block_type_slug()
+            all_fields["infrastructure"]["_block_type_slug"] = (
+                self.infrastructure.get_block_type_slug()
+            )
         return all_fields
 
     # top level metadata

src/prefect/filesystems.py~L694

     def filesystem(self) -> RemoteFileSystem:
         settings = {}
         if self.azure_storage_connection_string:
-            settings[
-                "connection_string"
-            ] = self.azure_storage_connection_string.get_secret_value()
+            settings["connection_string"] = (
+                self.azure_storage_connection_string.get_secret_value()
+            )
         if self.azure_storage_account_name:
-            settings[
-                "account_name"
-            ] = self.azure_storage_account_name.get_secret_value()
+            settings["account_name"] = (
+                self.azure_storage_account_name.get_secret_value()
+            )
         if self.azure_storage_account_key:
             settings["account_key"] = self.azure_storage_account_key.get_secret_value()
         if self.azure_storage_tenant_id:

src/prefect/filesystems.py~L708

         if self.azure_storage_client_id:
             settings["client_id"] = self.azure_storage_client_id.get_secret_value()
         if self.azure_storage_client_secret:
-            settings[
-                "client_secret"
-            ] = self.azure_storage_client_secret.get_secret_value()
+            settings["client_secret"] = (
+                self.azure_storage_client_secret.get_secret_value()
+            )
         settings["anon"] = self.azure_storage_anon
         self._remote_file_system = RemoteFileSystem(
             ba...*[Comment body truncated]*

@MichaReiser force-pushed the prefer_splitting_right_hand_side_of_assignments branch 2 times, most recently from 0550d40 to 1a0e9e8 on December 1, 2023 10:19
Base automatically changed from refactor-last-statement-expression-comment-formatting to main December 4, 2023 05:12
@MichaReiser force-pushed the prefer_splitting_right_hand_side_of_assignments branch 8 times, most recently from e465bfc to deae320 on December 8, 2023 15:39
@MichaReiser force-pushed the prefer_splitting_right_hand_side_of_assignments branch from deae320 to 7a3c504 on December 11, 2023 08:51
@MichaReiser changed the title from "WIP: prefer_splitting_right_hand_side_of_assignments preview style" to "prefer_splitting_right_hand_side_of_assignments preview style" on Dec 11, 2023
@MichaReiser force-pushed the prefer_splitting_right_hand_side_of_assignments branch 3 times, most recently from 2d125ca to 1fe91c8 on December 11, 2023 09:26
@MichaReiser (Member, Author) commented:

I like the changes that I see in the ecosystem check

@MichaReiser marked this pull request as ready for review on December 11, 2023 09:40
@konstin (Member) left a comment:

Looks much better!

@@ -17,3 +17,10 @@ pub(crate) const fn is_hug_parens_with_braces_and_square_brackets_enabled(
) -> bool {
    context.is_preview()
}

/// Returns `true` if the [`prefer_splitting_right_hand_side_of_assignments`](https://github.com/astral-sh/ruff/issues/6975) preview style is enabled.
Member:

I missed that previously, but do we also want to do an enum here, and then have is_enabled(PreviewStyles::PreferSplittingRhsOfAssignments)?

MichaReiser (Member, Author):

I don't have an opinion. We could use f.context().is_enabled(PreviewStyle::LongName), which may be easier to document and add. Any approach that makes the call sites easy to identify works for me.

@MichaReiser force-pushed the prefer_splitting_right_hand_side_of_assignments branch from ed58d9b to 7c7f6b9 on December 12, 2023 03:26
@MichaReiser merged commit 45f6030 into main on Dec 13, 2023
17 checks passed
@MichaReiser deleted the prefer_splitting_right_hand_side_of_assignments branch on December 13, 2023 03:43