[Bug]: log producer panics if no logs are produced when using podman #946
Labels: bug (An issue with the library)

Comments
martin-sucha added a commit to kiwicom/testcontainers-go that referenced this issue on Mar 21, 2023
martin-sucha added a commit to kiwicom/testcontainers-go that referenced this issue on Mar 21, 2023:
This removes the panic when the logs endpoint takes more than 5 seconds to respond. The panic happened at least with podman when no new logs appear while the follow and since parameters are used. We keep retrying until the context is canceled (the retry request would fail anyway with a canceled context) or the producer is stopped, whichever comes first. This makes the retry behavior consistent with the handling of closed connections. This should fix testcontainers#946.
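A minimal sketch of the retry shape that commit describes, assuming hypothetical names (`fetchLogs`, `produceLogs`, the `stop` channel) rather than the library's actual identifiers:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// fetchLogs stands in for the HTTP call that streams container logs.
// It is a hypothetical helper, not the library's real API; here it
// simply fails after a short delay to simulate a slow logs endpoint.
func fetchLogs(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(100 * time.Millisecond):
		return errors.New("timeout waiting for response headers")
	}
}

// produceLogs retries until the context is canceled or the producer
// is stopped, whichever comes first, instead of panicking when the
// logs endpoint is slow to respond.
func produceLogs(ctx context.Context, stop <-chan struct{}) {
	for {
		if err := fetchLogs(ctx); err != nil {
			select {
			case <-ctx.Done():
				return // a retry would fail anyway with a canceled context
			case <-stop:
				return // the producer was stopped explicitly
			default:
				// Transient failure (e.g. the endpoint took more than
				// 5 seconds to respond): retry instead of panicking.
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	produceLogs(ctx, make(chan struct{}))
	fmt.Println("producer exited cleanly after context cancellation")
}
```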
I have also encountered this issue when using podman and can confirm that your fix (#947) resolves the bug. Any chance we could get this merged into main soon?
I'm waiting for the author's feedback on #947 (comment), but I'll take the PR and try to resolve the conflicts myself.
martin-sucha added commits to kiwicom/testcontainers-go that referenced this issue on Apr 24, 2023
mdelapenya pushed a commit that referenced this issue on Apr 24, 2023
weeco pushed a commit to weeco/testcontainers-go that referenced this issue on Apr 24, 2023
Reopening for #1164.
martin-sucha added a commit to kiwicom/testcontainers-go that referenced this issue on Jun 8, 2023
martin-sucha added a commit to kiwicom/testcontainers-go that referenced this issue on Jun 8, 2023:
This removes the panic when the logs endpoint takes more than 5 seconds to respond. The panic happened at least with podman when no new logs appear while the follow and since parameters are used. We keep retrying until the context is canceled (the retry request would fail anyway with a canceled context) or the producer is stopped, whichever comes first. This makes the retry behavior consistent with the handling of closed connections. Outstanding HTTP calls for fetching logs are now interrupted when a producer is stopped; previously the consumer and StopProducer() waited for the HTTP call to complete. This should fix testcontainers#946.
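One way to get that interruption, sketched with hypothetical names (`logProducer`, `startLogProducer`, `stop` are illustrative, not the library's API): derive a cancellable context for the worker and cancel it from the stop path, so an in-flight call aborts instead of being waited on:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// logProducer ties the lifetime of outstanding log requests to one
// cancellable context (a hypothetical shape, not the library's API).
type logProducer struct {
	cancel context.CancelFunc
	done   chan struct{}
}

func startLogProducer(parent context.Context) *logProducer {
	ctx, cancel := context.WithCancel(parent)
	p := &logProducer{cancel: cancel, done: make(chan struct{})}
	go func() {
		defer close(p.done)
		// Stand-in for the loop issuing HTTP log requests with ctx:
		// ctx.Done() fires as soon as stop() cancels the context, so
		// an in-flight request aborts instead of running to completion.
		<-ctx.Done()
	}()
	return p
}

// stop interrupts any outstanding HTTP call by canceling the shared
// context, then waits for the worker goroutine to exit. Nothing here
// blocks on a slow logs endpoint.
func (p *logProducer) stop() {
	p.cancel()
	<-p.done
}

func main() {
	p := startLogProducer(context.Background())
	time.Sleep(10 * time.Millisecond)
	p.stop()
	fmt.Println("stopped without waiting for the HTTP call to complete")
}
```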
Testcontainers version: v0.19.0
Using the latest Testcontainers version? Yes
Host OS: Linux
Host arch: x86_64
Go version: go1.20
Docker version:
What happened?

The worker goroutine in DockerContainer.StartLogProducer panics. This is because:

- the request to fetch logs uses a hardcoded 5-second timeout,
- the since parameter is set to the current time, and
- podman does not return HTTP response headers until some log output is available.

So the context deadline expires and the code panics.

It seems that, instead of a hardcoded 5-second timeout, the context should be canceled when the producer is stopped. Unless this is a bug in podman, in which case we can open a bug report there.
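For illustration, the failure mode described above reduces to roughly this shape. This is a sketch against the Docker SDK of the time, not the library's actual code, and the container ID is a placeholder:

```go
package main

import (
	"context"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}

	// A fixed 5-second deadline on the logs request.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// since = now plus follow means there is nothing to return yet, and
	// podman sends no response headers until the first new log line.
	logs, err := cli.ContainerLogs(ctx, "some-container-id", types.ContainerLogsOptions{
		ShowStdout: true,
		ShowStderr: true,
		Follow:     true,
		Since:      time.Now().Format(time.RFC3339Nano),
	})
	if err != nil {
		// With a silent container the deadline expires first, err is
		// context.DeadlineExceeded, and escalating it to a panic
		// reproduces the reported crash.
		panic(err)
	}
	defer logs.Close()
}
```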
Relevant log output: No response
Additional information: No response