# [Breaking Change] Make the context for await expressions consistent (#55418)
Labels: `breaking-change-approved`, `breaking-change-request`
The context used by the compiler front end to perform type inference on the operand of `await` expressions will be changed to match the behavior of the analyzer. The change is as follows: when the context for the entire `await` expression is `dynamic`, the context for the operand of the `await` expression will be `FutureOr<_>`.

Although this is technically a breaking change, it's not expected to have any effect on real-world code.
## Background

When the compiler needs to perform type inference on an expression, it does so using a type schema known as the "context". A type schema is a generalization of the normal Dart type syntax, in which `_` (called the "unknown type") can appear where a type is expected.

In the analyzer, any time an expression would be analyzed with a context of `dynamic`, the context is coerced to `_` before performing the analysis; this causes contexts of `dynamic` and `_` to behave identically.[^1] In the compiler front end, `dynamic` is coerced to `_` when analyzing a generic invocation, but `dynamic` and `_` behave differently for a few expression types. This breaking change addresses one of those differences, which is in the behavior of `await` expressions.

The current behavior for `await` expressions is as follows. If the context for the `await` expression is `K`, then the operand of the `await` expression will be inferred using a context of `L`, where `L` is computed as follows:

1. If `K` is `FutureOr<P>` or `FutureOr<P>?` for some `P`, then `L` is `K`.
2. Otherwise, if `K` is `dynamic`, then in the compiler front end, `L` is `dynamic`; in the analyzer, `L` is `FutureOr<_>`.[^2]
3. Otherwise, `L` is `FutureOr<K>`.
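For concreteness, here is a small illustrative sketch (the function `fut` and these declarations are invented for this write-up, not taken from the original request):

```dart
import 'dart:async';

Future<int> fut() async => 0;

void main() async {
  // Rule 1: K is FutureOr<int>, so the operand fut() is inferred
  // with context L = FutureOr<int>.
  FutureOr<int> a = await fut();

  // Rule 3: K is int, so the operand is inferred with context
  // L = FutureOr<int>.
  int b = await fut();

  // Rule 2: K is dynamic. Today the compiler front end infers the
  // operand with L = dynamic, while the analyzer uses L = FutureOr<_>;
  // after this change, both will use FutureOr<_>.
  dynamic c = await fut();

  print([a, b, c]);
}
```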
## Intended Change
The above rules will be changed so that in the compiler front end, if `K` is `dynamic`, then `L` will be `FutureOr<_>`, as it is in the analyzer.

## Justification
Any difference in type inference behavior between the analyzer and the compiler front end is a bug. In this case the bug has a low user impact, because the type schemas `dynamic` and `FutureOr<_>` behave very similarly in type inference (see dart-lang/language#3649 for further discussion about this). To reduce the impact of the bug fix, it makes sense to standardize on either the analyzer's behavior or the compiler front end's behavior. In this case, standardizing on the analyzer's behavior is better, since the analyzer is more self-consistent (it always treats contexts of `dynamic` and `_` the same). Once this change is made, there will be only one remaining scenario in which the compiler front end treats `dynamic` and `_` contexts differently, which I plan to address in a future breaking change (see dart-lang/language#3650).

## Expected Impact
A prototype of this change caused zero test failures in Google's internal codebase, so the impact is expected to be extremely low for real-world code.
But it is theoretically possible for a program to behave differently with the change. Here is a contrived example:
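A sketch consistent with the walkthrough below (the declarations of `f`, `g`, `h`, and the `foo` extension are assumptions inferred from that walkthrough, not necessarily the original code):

```dart
import 'dart:async';

// Assumed declarations, reconstructed from the walkthrough below.
Future<num>? f;

Future<T> g<T>(T t) async => t;

T h<T>(T t) => t;

extension FutureFoo<T> on Future<T> {
  // Prints the type argument that T was bound to.
  void foo() => print(T);
}

void main() async {
  await h(f ?? (g(0)..foo()));
}
```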
Today, this program prints `num`; with the change, it will print `int`. Here's what happens today:

1. `await h(f ?? (g(0)..foo()))` is inferred using a context of `dynamic`.
2. `h(f ?? (g(0)..foo()))` is inferred using a context of `dynamic` (this is rule 2 above, compiler front end behavior).
3. Since `h` is a generic invocation, the context `dynamic` is changed to `_`, so `f ?? (g(0)..foo())` is inferred using a context of `_`.
4. When an if-null (`??`) expression is inferred using a context of `_`, the static type of the left hand side (`f`) is used as the context for inferring the right hand side (`g(0)..foo()`).
5. `g(0)..foo()` is inferred using a context of `Future<num>?` (the static type of `f`).
6. `g(0)` is inferred using a context of `Future<num>?`.
7. Since the return type of `g` is `Future<T>`, and that satisfies `FutureOr<num>?` only if `T <: num`, the type of `T` is set to `num` during downwards inference.
8. `g(0)` has static type `Future<num>`.
9. `..foo()` is invoked with the type parameter `T` bound to `num`.
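Step 4 relies on Dart's general if-null inference rule rather than anything `await`-specific; a minimal invented illustration:

```dart
List<num>? maybeList;

void main() {
  // With no surrounding context (`_`), the static type of the left
  // hand side (List<num>?) becomes the context for the right hand
  // side, so the list literal is inferred as List<num>, not List<int>.
  final xs = maybeList ?? [1, 2, 3];
  print(xs.runtimeType); // List<num>
}
```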
With the change, here's what will happen instead:
1. `await h(f ?? (g(0)..foo()))` will be inferred using a context of `dynamic`.
2. `h(f ?? (g(0)..foo()))` will be inferred using a context of `FutureOr<_>` (the changed rule 2).
3. `f ?? (g(0)..foo())` will be inferred using a context of `FutureOr<_>`.
4. When an if-null (`??`) expression is inferred using a context other than `_`, that context is propagated to the right hand side (`g(0)..foo()`).
5. `g(0)..foo()` will be inferred using a context of `FutureOr<_>`.
6. `g(0)` will be inferred using a context of `FutureOr<_>`.
7. Since the return type of `g` is `Future<T>`, and that satisfies `FutureOr<_>` for all `T`, downwards inference of `g(0)` won't constrain the type of `T`. So the type of `T` will be set to `int` during upwards inference.
8. `g(0)` will have static type `Future<int>`.
9. `..foo()` will be invoked with the type parameter `T` bound to `int`.

## Mitigation
In the unlikely event that some real-world customer code is affected, the effect will be limited to type inference. So the old behavior can be restored by supplying explicit types. For example, the above example could be changed to:
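A sketch under the same assumed declarations as the example above:

```dart
void main() async {
  // Supplying an explicit type argument to g pins T to num, restoring
  // the old behavior regardless of the surrounding context.
  // (Reuses the f, g, h, and foo declarations from the example above.)
  await h(f ?? (g<num>(0)..foo()));
}
```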
(Note that `g(0)` has been changed to `g<num>(0)`.)
## Footnotes
[^1]: This coercion doesn't happen when `dynamic` appears more deeply inside the context; for example, a context of `List<dynamic>` is not changed to `List<_>`.
[^2]: This is a consequence of the fact that the analyzer coerces `dynamic` to `_`, therefore rule 3 applies.

## Comments

> Sounds good to me; the change seems like an improvement.

> cc @leonsenft who will handle breaking change requests for ACX going forward.

> LGTM 👍

> lgtm

> @stereotype441 your breaking change request is approved!