vert.x accepting thread causes memory leak #6709
Comments
The underlying Vert.x uses the Reactor pattern and creates multiple event-loop threads to handle the workload reactively. However, AFAIR Vert.x shares the resources across those instances. In any case, in general, I'd advise using a singleton client. Maybe @shawkins has some more clues about what's going on here. |
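For context, a minimal sketch (assuming Vert.x 4 on the classpath; class name hypothetical) of why the number of Vertx instances matters: every call to `Vertx.vertx()` creates its own set of event-loop threads, and each instance names its first event loop the same way, which matches the thousands of identically named threads in the report.

```java
import io.vertx.core.Vertx;

public class VertxInstanceDemo {
    public static void main(String[] args) throws InterruptedException {
        // Each Vertx.vertx() call creates its own event-loop thread pool;
        // both instances name their first event loop "vert.x-eventloop-thread-0",
        // which is why thousands of identically named threads can pile up.
        Vertx first = Vertx.vertx();
        Vertx second = Vertx.vertx();

        first.runOnContext(v -> System.out.println("first:  " + Thread.currentThread().getName()));
        second.runOnContext(v -> System.out.println("second: " + Thread.currentThread().getName()));
        Thread.sleep(200); // give the event loops a moment to run the tasks

        // The threads are non-daemon, so an un-closed Vertx keeps the JVM alive.
        first.close();
        second.close();
    }
}
```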
I need to dig deeper into the issue, but on the surface it looks like we're not really misbehaving: there's a single client that does all sorts of things on the cluster: listing, creating, updating, deleting K8S resources. After each of those calls the client is not explicitly closed or anything (I expect vert.x to handle those things). Elsewhere we do create a client every x minutes (this can be easily improved) and only use this client to load yaml. |
vert.x by default shares the same thread pool across Vertx instances, but each instance creates its own event-loop and acceptor threads. This points to a couple of possibilities: either many clients (and thus many Vertx instances) are being created, or a single instance is leaking threads.
The first is a usage error. The second would be indicative of a problem with fabric8 / vertx. |
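One hedged way to tell the two possibilities apart is to count the leaked threads and compare that with the number of clients the application actually creates. A rough JVM-only sketch (no extra dependencies assumed, class name hypothetical):

```java
import java.util.Map;
import java.util.stream.Collectors;

public class VertxThreadCount {
    public static void main(String[] args) {
        // Group live threads by name so repeated vert.x-eventloop-thread-0 /
        // vert.x-acceptor-thread-0 instances show up as a count.
        Map<String, Long> byName = Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith("vert.x-"))
                .collect(Collectors.groupingBy(Thread::getName, Collectors.counting()));
        byName.forEach((name, count) -> System.out.println(name + " x " + count));
    }
}
```

If the count grows roughly in step with client creations, it is the first possibility; if it grows with a single client, it is the second.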
I am affected by thread leaks since this upgrade as well. I use the following pattern a lot:

```kotlin
KubernetesClientBuilder().withConfig(config).build().use { client ->
    // interact with the client
}
```
|
The issue here is that the vertx http client does not close the vertx instance - there may have been an early assumption that code was reusing the same factory, but that is not the case for the default factory. Also we don't want to close the vertx instance when running on quarkus, so we'll need to delegate that to the factory - Quarkus has its own implementation. Alternatively we can make either the default factory or the Vertx instance referenced by the vertx factory static - but generally we've tried to avoid internal static state. In any case, usage scenarios where kubernetes clients are being repeatedly created / closed should be avoided. They are supposed to be used in a singleton pattern. Some of the docs / examples are probably leading people in the wrong direction. |
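To illustrate the singleton usage the comment recommends, a minimal sketch (class and method names are hypothetical, not part of the fabric8 API) that builds one client at application startup and closes it once at shutdown instead of per call:

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

// Hypothetical holder: one client for the whole application lifetime.
public final class ClusterAccess implements AutoCloseable {

    private final KubernetesClient client = new KubernetesClientBuilder().build();

    public KubernetesClient client() {
        return client;
    }

    @Override
    public void close() {
        // Closing once at shutdown releases the underlying HTTP client resources.
        client.close();
    }
}
```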
It seems to me that the integration with Vert.x is broken with regards to resource allocation; even using one singleton does not work correctly (see the Vert.x Core documentation). If I create a standard Spring Boot application and configure the option spring.main.web-application-type=none, the program will never exit. The close() call should completely clean up everything that has been allocated by the client - that's what the java.lang.AutoCloseable interface means. |
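A hedged standalone reproducer of the behaviour described here (assuming fabric8 7.0.0 with the default Vert.x HTTP client; class name hypothetical): the main method returns after close(), but the JVM does not exit because the non-daemon vert.x threads were never stopped.

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class NeverExits {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            System.out.println(client.getMasterUrl());
        }
        // main() ends here, but with the affected version the
        // vert.x-eventloop / vert.x-acceptor threads are non-daemon
        // and were never stopped, so the process keeps running.
        System.out.println("main finished");
    }
}
```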
This problem has been bothering me for a week, e.g. in https://github.com/apache/spark/pull/49159/files#diff-16671cf9d0bff95338158f5fe51d661c482abbcf0fba8515d920849effad66ebR114 and https://github.com/apache/spark/pull/49159/files#diff-d548b8df6c6e03f0b2c43538c1374a150f2fefb11e03bb36cfbc132a55605c7bR59 |
Unfortunately that's because the vertx client was predominantly being used by quarkus and/or as a singleton in containerized applications that would forcibly terminate, so how standalone usage behaved wasn't noticed before - especially compared to okhttp. Note that the jdk client doesn't have this issue either. It definitely seems good to mark the threads as daemon. We should have the vert.x folks comment as well on the handling of vertx instances - @vietj can you comment on how many vertx instances we should be creating? |
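If switching HTTP client implementations is an option, a hedged sketch of selecting the JDK-based client instead of Vert.x (this assumes the `kubernetes-httpclient-jdk` module is on the classpath; class name of the example is hypothetical):

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.jdkhttp.JdkHttpClientFactory;

public class JdkClientExample {
    public static void main(String[] args) {
        // Explicitly pick the JDK-based HTTP client implementation,
        // which does not create Vert.x event-loop or acceptor threads.
        try (KubernetesClient client = new KubernetesClientBuilder()
                .withHttpClientFactory(new JdkHttpClientFactory())
                .build()) {
            System.out.println(client.getConfiguration().getMasterUrl());
        }
    }
}
```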
I created #6726 as a proposal to fix this issue. I'd like to merge a fix as soon as possible so I can cut a v7.0.1 release since it seems quite a critical issue. |
May I ask when the v7.0.1 release will be available? |
Thursday at the latest, a couple of hours at the soonest :) |
yeah, thank you very much! ❤️ |
Since upgrading to version 7.0.0, my app gets OOMKilled quite rapidly. I attached VisualVM and see >10k vert.x-eventloop-thread-0 and vert.x-acceptor-thread-0 threads (so all of them with the same name). While trying to relate this to my code I see there's a method that creates a new K8S client (new KubernetesClientBuilder().build()) every minute or so. While this can obviously be reduced to a singleton, I wonder if this is where the issue stems from... The client is only used to parse some Yaml and can then be GC'd. Does the thread name ring a bell? Where does it get created?
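Since the client in the report is only used to parse YAML, a hedged alternative (assuming the fabric8 `Serialization` utility is available; class name and file path are just examples) that avoids building a KubernetesClient, and therefore any Vert.x threads, at all:

```java
import java.io.FileInputStream;
import java.io.InputStream;

import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.utils.Serialization;

public class ParseYamlOnly {
    public static void main(String[] args) throws Exception {
        // Parse a Kubernetes manifest without creating a client;
        // the file name is only illustrative.
        try (InputStream in = new FileInputStream("deployment.yaml")) {
            Deployment deployment = Serialization.unmarshal(in, Deployment.class);
            System.out.println(deployment.getMetadata().getName());
        }
    }
}
```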