waitUntilCondition method throws KubernetesClientException after approximately one hour with sufficiently long timeout #5379
Comments
Later versions use informers, not just watches, which are maintained indefinitely.
To clarify, the behavior differs between v5.x and v6.x. Upgrading to v6.x should deal with your problem.
I'm seeing a similar issue. I'm going to remove the async Kubernetes client close when the access token expires to see what happens. From what I understand, a WebSocket does not require re-authentication once opened, so access token expiration should not be an issue, right?
In v6, since it's using Informers, everything is handled internally. So yes, the Informer is restarted from the new resource version (no events should be missed, except for intermediate events which leave the resource in the same state). (See lines 290 to 303 in 20be3e6.)
(See lines 126 to 132 in 20be3e6.)
Unless it's a known status error type, the client will try to reconnect using the retry/backoff settings (which default to 10 retries). Eventually, the informer will stop. (See lines 104 to 105 in 9b86adc.)
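To illustrate the retry/backoff behavior described above, here is a minimal sketch of an exponential backoff schedule. The 10-retry limit comes from the comment; the base interval and the doubling policy are assumptions for illustration, not values read from the fabric8 source.

```java
// Sketch of an exponential backoff schedule such as a client might apply
// between watch reconnect attempts. Base interval and doubling are assumed.
public class BackoffSketch {

    // Wait time before the given attempt: base * 2^attempt.
    static long backoffMillis(long baseMillis, int attempt) {
        return baseMillis * (1L << attempt);
    }

    public static void main(String[] args) {
        long base = 1000; // assumed 1-second base interval
        int retryLimit = 10; // default retry limit per the comment above
        for (int attempt = 0; attempt < retryLimit; attempt++) {
            System.out.println("attempt " + attempt + " -> wait "
                    + backoffMillis(base, attempt) + " ms");
        }
    }
}
```

With these assumed values, the waits grow from 1 s on the first retry to roughly 8.5 minutes before the tenth, after which the informer gives up and stops.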
If stopped due to a problem, the Informer completes exceptionally and propagates the failure. You can further customize this behavior at the Informer level by leveraging the exception handler. (See lines 183 to 191 in dfa920e.)
And by subscribing to the future that completes when the informer stops. (See lines 193 to 202 in dfa920e.)
Check the referenced lines for the details.
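The "subscribing to the future" pattern above can be sketched with a plain CompletableFuture standing in for whatever termination future the informer exposes (the informer API itself is not used here; only java.util.concurrent):

```java
import java.util.concurrent.CompletableFuture;

// Sketch of reacting to an informer's termination. A plain CompletableFuture
// stands in for the future the informer completes when it stops.
public class StoppedFutureSketch {

    // Returns a description of how the (already completed) future finished.
    public static String describe(CompletableFuture<Void> stopped) {
        StringBuilder result = new StringBuilder();
        stopped.whenComplete((v, t) -> {
            if (t != null) {
                // e.g. the informer shut down due to an unrecoverable error
                result.append("stopped exceptionally: ").append(t.getMessage());
            } else {
                result.append("stopped normally");
            }
        });
        return result.toString();
    }

    public static void main(String[] args) {
        CompletableFuture<Void> stopped = new CompletableFuture<>();
        stopped.completeExceptionally(new RuntimeException("too old resource version"));
        System.out.println(describe(stopped));
    }
}
```

In real code the callback would fire asynchronously when the informer actually stops; here the future is completed up front so the handler runs immediately.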
@manusa Thank you for your explanation. I have run additional tests and confirmed that when the access token expires, my HTTP token refresh interceptor is triggered.
This is related to #5327: the future that is waiting on the condition is aware of whether the informer starts properly, but it is not aware of the informer shutting down early. This issue could be reopened, or a new one created for that specifically.
Describe the bug
I encountered an issue while using the waitUntilCondition method from the Fabric8 Kubernetes Client library.
Code:
client.pods().inNamespace(namespace).withName(pod.getMetadata().getName())
    .waitUntilCondition(
        o -> o.getStatus().getPhase().equals(PodConstant.POD_FAILED_PHASE)
            || o.getStatus().getPhase().equals(PodConstant.POD_SUCCEEDED_PHASE),
        timeout, TimeUnit.SECONDS);
However, I observed that when the method is used with a sufficiently long timeout (approximately one hour or more), it throws a KubernetesClientException before the timeout elapses, even though the condition has not yet been met. This behavior is unexpected and prevents the method from working as intended.
My Kubernetes version is v1.19.
Fabric8 Kubernetes Client version
5.12.4
Steps to reproduce
Set up a scenario where a Pod's status will change over a longer period of time.
Use the waitUntilCondition method with a timeout value of approximately one hour or more.
Observe that the method throws a KubernetesClientException after approximately one hour, before the timeout elapses, even if the condition would eventually be met.
Expected behavior
The waitUntilCondition method should wait for the specified condition to be met within the provided timeout period. If the condition is met, the method should not throw an exception. If the condition is not met within the timeout period, then the method can throw an exception indicating that the condition was not met.
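For v5.x, a possible workaround is to re-invoke the wait call whenever it fails before an overall deadline, rather than trusting a single long timeout to survive watch reconnects. This is a hedged sketch: the retry helper below is hypothetical, and a plain Supplier stands in for the actual waitUntilCondition call (KubernetesClientException is a RuntimeException, so it would be caught here).

```java
import java.util.function.Supplier;

// Hypothetical workaround: retry a failing call until an overall deadline.
public class RetryUntilDeadline {

    // Re-invokes `call` on RuntimeException until it succeeds or the
    // deadline passes; rethrows the last failure if time runs out.
    static <T> T retry(Supplier<T> call, long deadlineMillis) {
        RuntimeException last = null;
        while (System.currentTimeMillis() < deadlineMillis) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // e.g. a KubernetesClientException from a dropped watch
            }
        }
        throw last != null ? last : new IllegalStateException("deadline reached");
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        // Stand-in for waitUntilCondition: fails twice, then "succeeds".
        String result = retry(() -> {
            if (++attempts[0] < 3) {
                throw new RuntimeException("watch closed");
            }
            return "Succeeded";
        }, System.currentTimeMillis() + 5_000);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

In real use you would add a pause between attempts to avoid hammering the API server; the loop is kept minimal here. Upgrading to v6.x, as suggested in the comments, remains the cleaner fix.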
Runtime
Kubernetes (vanilla)
Kubernetes API Server version
other (please specify in additional context)
Environment
Linux
Fabric8 Kubernetes Client Logs
Additional context
No response