
Make cache API PUT limit configurable #1772

Merged: 1 commit merged into main on Mar 14, 2024
Conversation

@fhanau (Contributor) commented on Mar 6, 2024

Also see the downstream PR.

@fhanau fhanau requested review from a team as code owners March 6, 2024 14:48
@fhanau fhanau requested review from jasnell and vickykont March 6, 2024 14:48
@@ -1234,7 +1242,7 @@ jsg::Promise<kj::Maybe<IoOwn<kj::AsyncInputStream>>> IoContext::makeCachePutStre
   // This Cache API subrequest would exceed the total PUT quota. There used to be a limit on
   // individual puts, but now this is only used as a fast path for rejecting PUTs going beyond the
   // total quota early.
-  if (length > MAX_TOTAL_PUT_SIZE) {
+  if (length > initialPutQuota) {
@jasnell (Member) commented on Mar 6, 2024

Should this be incorporated into limitEnforcer instead? In other words, instead of initialPutQuota being a separate field, should this check just call into a method on limitEnforcer to check the limit?

@fhanau (Contributor, Author) replied:

That would be possible, but with that approach every class inheriting from LimitEnforcer would have to define a quota instead of being able to use kj::None for the default. By defining the default here and having initialPutQuota specify the actual initial quota, we only need to define the default once. We don't have a clear approach to how limits are handled in workerd, so rather than trying to settle that thorny question now, I sought to keep this close to the existing implementation.

Another approach would be to define the default limit in a non-virtual function in LimitEnforcer, but since the class is entirely virtual so far, I assumed this might have side effects.
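The trade-off being discussed can be sketched roughly as follows. This is a minimal illustration, not workerd's actual code: std::optional stands in for kj::Maybe, and the names (getCachePutQuota, MAX_TOTAL_PUT_SIZE, the quota value) are hypothetical. The point is that subclasses may decline to set a quota, and the default constant lives once at the call site rather than in every LimitEnforcer implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Sketch of the design discussed above. std::optional<uint64_t> stands in
// for kj::Maybe<uint64_t>; returning std::nullopt plays the role of kj::None.
struct LimitEnforcer {
  virtual ~LimitEnforcer() = default;
  // Implementations that don't care about the cache PUT quota can simply
  // return std::nullopt and inherit the caller-side default.
  virtual std::optional<uint64_t> getCachePutQuota() const { return std::nullopt; }
};

// An implementation that does configure its own quota (value is illustrative).
struct CustomEnforcer: LimitEnforcer {
  std::optional<uint64_t> getCachePutQuota() const override { return 1024; }
};

// Illustrative default, defined once at the call site (cf. io-context),
// not in every LimitEnforcer subclass.
constexpr uint64_t MAX_TOTAL_PUT_SIZE = 5ull << 30;

// Corresponds to the initialPutQuota value used by the check in the diff:
// the enforcer's quota if it set one, otherwise the shared default.
uint64_t initialPutQuota(const LimitEnforcer& enforcer) {
  return enforcer.getCachePutQuota().value_or(MAX_TOTAL_PUT_SIZE);
}
```

With this shape, the alternative jasnell raised (calling into limitEnforcer directly at the check site) and the PR's approach (caching the resolved value in an initialPutQuota field) differ only in where value_or is applied, which is why the author treated it as a question to revisit rather than settle here.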

@jasnell (Member) replied:

Perhaps leave a TODO comment to encourage us to revisit this later to see if there's a reasonable way to move this into limitEnforcer?

A reviewer (Contributor) commented:

I'm not familiar with this work, but if you are implementing a stateful limit for a request, it is worth noting that all active requests for a DO share an IoContext. To allow DO requests to effectively share limits, we have a topUpActor() method in LimitEnforcer that is called to reset the limits each time a new request arrives.
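The pattern described above can be sketched as follows. This is a hypothetical illustration of the idea, not workerd's implementation: the class name, field, and quota value are invented, and only topUpActor() corresponds to a method named in the comment. Because all of a Durable Object's in-flight requests share one IoContext, any budget tracked there would be consumed across requests unless it is topped back up on each arrival.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of a shared, stateful per-request budget for a DO.
// All requests share this one object (as they share an IoContext), so the
// budget is refilled via topUpActor() whenever a new request arrives.
struct ActorLimitEnforcer {
  static constexpr uint64_t PUT_QUOTA_PER_REQUEST = 1024;  // illustrative value

  uint64_t putQuotaRemaining = 0;

  // Called each time a new request reaches the actor, so long-lived actors
  // are not permanently starved by budget spent on earlier requests.
  void topUpActor() { putQuotaRemaining = PUT_QUOTA_PER_REQUEST; }

  // Attempt to spend part of the shared budget; fails once it is exhausted.
  bool tryConsume(uint64_t bytes) {
    if (bytes > putQuotaRemaining) return false;
    putQuotaRemaining -= bytes;
    return true;
  }
};
```

Under this sketch, a PUT that exhausts the budget blocks further PUTs only until the next request triggers topUpActor(), which is the behavior the reviewer is pointing out a stateful per-IoContext limit would need to account for.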

@fhanau (Contributor, Author) replied:

👍 Added a TODO for initialPutQuota in src/workerd/io/io-context.h.

Verified: this commit was signed with the committer's verified signature (alexeagle, Alex Eagle).
@fhanau force-pushed the felix/cache-put-limit branch from 965dd9e to 115e189 on March 8, 2024 16:02
@fhanau merged commit f8e8e7e into main on Mar 14, 2024 (10 checks passed) and deleted the felix/cache-put-limit branch on March 14, 2024 16:46.
@fhanau added a commit that referenced this pull request on Mar 19, 2024:

After further discussions within the team, we will be lifting the limit entirely, as limits defined in the Cache backend will be sufficient. This enables significant cleanup, including the limit functions defined in #1772 and the existing cache stream setup.
The same commit message appeared on further commits by @fhanau referencing this pull request on Mar 19, 20, and 21, 2024.
Labels: none · Projects: none · 3 participants