Make cache API PUT limit configurable #1772
Conversation
@@ -1234,7 +1242,7 @@ jsg::Promise<kj::Maybe<IoOwn<kj::AsyncInputStream>>> IoContext::makeCachePutStre
   // This Cache API subrequest would exceed the total PUT quota. There used to be a limit on
   // individual puts, but now this is only used as a fast path for rejecting PUTs going beyond the
   // total quota early.
-  if (length > MAX_TOTAL_PUT_SIZE) {
+  if (length > initialPutQuota) {
Should this be incorporated into limitEnforcer instead? In other words, instead of initialPutQuota being a separate field, should this check just call into a method on limitEnforcer to check the limit?
That would be possible, but with that approach every class inheriting from LimitEnforcer would have to define a quota instead of being able to use kj::None for the default. By defining the default here and having initialPutQuota specify the actual initial quota, we only need to define the default once. We don't yet have a clear approach to how limits are handled in workerd, so rather than trying to settle that thorny question now, I kept this close to the existing implementation.
Another approach would be to define the default limit in a non-virtual function in LimitEnforcer, but since the class has been entirely virtual so far, I assumed that might have side effects.
Perhaps leave a TODO comment to encourage us to revisit this later to see if there's a reasonable way to move this into limitEnforcer?
I'm not familiar with this work, but if implementing a stateful limit for a request, it might be worth noting that all active requests for a DO share an IoContext. To allow DO requests to effectively share limits, we have a topUpActor() method in LimitEnforcer that is called to reset the limits each time a new request arrives.
👍 Added a TODO for initialPutQuota in src/workerd/io/io-context.h.
Force-pushed from 965dd9e to 115e189.
After further discussions within the team, we will be lifting the limit entirely as limits defined in the Cache backend will be sufficient. This enables significant cleanup, including the limit functions defined in #1772 and the existing cache stream setup.
Also see the downstream PR.