fix(resource): match discovery burst and qps for kubectl with upstream kubectl binary
#11603
Conversation
300/50 seems too high. I raised argoproj/argo-helm#2195, but 100/50 is only recommended from k8s 1.27; below 1.27 it's 20/10.
I think you are talking about rest client burst/QPS. This PR is about discovery client burst/QPS. 300/50 has been the default upstream value since Kubernetes v1.24. You can find the details in the description of the PR.
Force-pushed from 392b6ad to 5ee93b7
Makes me think the test failure is not related to this PR.
I think the PR is ready for review. I don't think the test failure is related to the changes in this PR. Happy to take another look if the reviewers think it is.
Should this be chore instead of fix?
I don't understand why this would be a chore.
I have outlined the issues this PR can fix in the description of the PR.
Force-pushed from 5ee93b7 to d9fa588
Force-pushed from 242a484 to b06ce13
Head branch was pushed to by a user without write access
Force-pushed from b06ce13 to 179923b
…eam kubectl binary

Signed-off-by: vadasambar <suraj.bankar@acquia.com>

docs: re-word the TODO with more details

Signed-off-by: vadasambar <suraj.bankar@acquia.com>
Force-pushed from 179923b to 45693d4
@terrytangyuan I think I might need approval with auto-merge or a manual merge from you. CI is passing.
…eam kubectl binary (#11603)

Signed-off-by: vadasambar <suraj.bankar@acquia.com>
…eam kubectl binary (argoproj#11603)

Signed-off-by: vadasambar <suraj.bankar@acquia.com>
Signed-off-by: Dillen Padhiar <dillen_padhiar@intuit.com>
This specific change would only help those issues if they were using many
May fix or alleviate issues like #11156 and #9934

Motivation

We migrated to using kubectl as a package starting v3.4.7. Before v3.4.7 we used the kubectl binary (ref1, ref2).

The kubectl binary uses higher values for discovery burst and QPS since v1.24. kubectl passes the new QPS and burst through NewDefaultKubectlCommand, while in Argo Workflows we use NewKubectlCommand instead, which uses a default burst of 100 (we are using v0.24.3 of the kubectl package at the time of writing this).

If there are 100 or more CRDs in the cluster, discovery can take time. This leaves less time for the pod to run, because of which the pod can exceed activeDeadlineSeconds (see #11156 and #9934). Related: kubernetes/kubectl#1126

Modifications
This PR matches the new discovery burst and QPS set in upstream kubectl.
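To see why the burst value matters here, the sketch below models a client-side token-bucket rate limiter (a simplified stand-in for client-go's limiter, not its actual implementation; `estimateWait` and the request count of 200 are illustrative). Discovery issues roughly one request per API group/version, so a cluster with many CRDs can exceed a burst of 100 and get throttled:

```go
package main

import "fmt"

// estimateWait returns the approximate client-side throttling delay (in
// seconds) for issuing n requests against a token bucket that starts full
// with `burst` tokens and refills at `qps` tokens per second. This is a
// simplified model for illustration, not client-go's real limiter.
func estimateWait(n, burst int, qps float64) float64 {
	if n <= burst {
		return 0 // everything fits in the initial burst
	}
	return float64(n-burst) / qps // remaining requests wait for refills
}

func main() {
	// A cluster with many CRDs can easily need ~200 discovery requests.
	const requests = 200

	fmt.Printf("burst=100, qps=50: ~%.1fs throttled\n",
		estimateWait(requests, 100, 50)) // ~2.0s
	fmt.Printf("burst=300, qps=50: ~%.1fs throttled\n",
		estimateWait(requests, 300, 50)) // no throttling
}
```

Under this model, raising the discovery burst from 100 to 300 removes the throttling delay entirely for this workload, which is time the pod gets back before hitting activeDeadlineSeconds.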
Related:
Note that in 1.26 (v0.26.x of the kubectl package), the default burst across all client-go clients was bumped up to 300 (with no increase in the default QPS), but we can't use this since we are on v0.24.3. Upgrading the kubectl package is an option, but I am not sure about the consequences of doing that. We might want to re-visit this PR in the future and remove lines which are already set in the upstream.

Verification