We're randomly seeing vttablet lock up with errors like this: `PoolFull: skipped 60 log messages`
Once it gets into that state, health checks start failing and query serving never recovers until I restart the whole tablet pod.
This is a complete guess, but I wonder if this is happening in k8s: when vttablet is unhealthy, that gets reported to the k8s service, which then stops routing traffic to it, and the health check ends up waiting for new queries that never arrive before it can become healthy again.
We haven't seen any connection pool errors in ~5 years (it's possible we didn't notice the logs, but there was never an outage). The only flag that was removed when upgrading from v18 to v19 was `--queryserver-config-query-cache-size 100`. @deepthi pointed out this PR from @vmg (#14034), a major refactor of connection pools, as the first place to look.
Maybe we're hitting an edge case because of our usage of message tables. Until v15, there was a separate flag/pool, `--queryserver-config-message-conn-pool-size`, for messaging connections, so maybe those aren't accounted for?
I did a small review here, and I think the PoolFull is actually a red herring! The `PoolFull` identifier comes from a throttled logger that is only triggered on `RESOURCE_EXHAUSTED` messages. You can see the definition of these constants in the Vitess source.
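To make that concrete, here is a minimal sketch of how a throttled logger of this shape behaves; the type and names are illustrative (not Vitess's actual `logutil` code), but a suppression counter like this is what produces the `PoolFull: skipped 60 log messages` line:

```go
// Illustrative throttled logger: at most one message per interval,
// with a count of everything suppressed in between.
package main

import (
	"log"
	"sync"
	"time"
)

type ThrottledLogger struct {
	mu       sync.Mutex
	name     string        // prefix, e.g. "PoolFull"
	interval time.Duration // minimum time between emitted messages
	last     time.Time
	skipped  int
}

func (tl *ThrottledLogger) Errorf(format string, args ...interface{}) {
	tl.mu.Lock()
	defer tl.mu.Unlock()
	now := time.Now()
	if now.Sub(tl.last) < tl.interval {
		tl.skipped++ // suppress, but remember how many we dropped
		return
	}
	if tl.skipped > 0 {
		// This is the shape of the line from the report above.
		log.Printf("%s: skipped %d log messages", tl.name, tl.skipped)
		tl.skipped = 0
	}
	tl.last = now
	log.Printf(format, args...)
}

func main() {
	tl := &ThrottledLogger{name: "PoolFull", interval: 100 * time.Millisecond}
	for i := 0; i < 500; i++ {
		tl.Errorf("resource exhausted: attempt %d", i)
		time.Sleep(time.Millisecond)
	}
}
```

The point is that the `PoolFull` prefix only tells you which logger throttled the messages, not what the underlying errors were.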
Interestingly, after the changes I introduced in the new Pool PR, none of the possible `RESOURCE_EXHAUSTED` errors come from the connection pool. In fact, the new pool was designed so that it can no longer error out because of resource exhaustion: if the pool is too busy to serve a request, the request cancels on its own when its context deadline expires.
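As a hedged sketch of that design (a buffered channel standing in for the pool; this is not the actual Vitess pool code), the caller's context deadline is the only thing that bounds the wait:

```go
// Illustrative pool whose Get never returns a "pool full" error:
// callers wait until a connection frees up or their context expires.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

type Conn struct{ id int }

type Pool struct{ conns chan *Conn }

func NewPool(size int) *Pool {
	p := &Pool{conns: make(chan *Conn, size)}
	for i := 0; i < size; i++ {
		p.conns <- &Conn{id: i}
	}
	return p
}

// Get blocks until a connection is available or ctx is done; exhaustion
// surfaces as the caller's own deadline, never as RESOURCE_EXHAUSTED.
func (p *Pool) Get(ctx context.Context) (*Conn, error) {
	select {
	case c := <-p.conns:
		return c, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

func (p *Pool) Put(c *Conn) { p.conns <- c }

func main() {
	p := NewPool(1)
	busy, _ := p.Get(context.Background()) // hold the only connection
	defer p.Put(busy)

	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	if _, err := p.Get(ctx); errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("caller timed out waiting for a connection:", err)
	}
}
```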
What you're seeing here is an error packet coming directly from MySQL, which we've translated and misreported as a `PoolFull` error, but it is not one. Could you do some debugging on your side based on this information?
Potential culprits (there is a small check sketch after this list):
- `ERConCount` / `ERTooManyUserConnections`: do you have a connection limit on your mysqld instance that may be too tight?
- `ERDiskFull` / `EROutOfMemory` / `EROutOfSortMemory`: is your mysqld at capacity?
- `ERNetPacketTooLarge`: MySQL queries that are larger than usual?
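If it helps, here is a small sketch for pulling the mysqld limits that map to those error codes; the DSN is a placeholder and this assumes the `github.com/go-sql-driver/mysql` driver (the equivalent `SHOW` statements can of course be run directly in a mysql client):

```go
// Dump the mysqld limits that correspond to the candidate errors above.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL driver (assumed)
)

func main() {
	// Placeholder DSN: point this at the tablet's mysqld instance.
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	queries := []string{
		"SHOW GLOBAL VARIABLES LIKE 'max_connections'",      // ERConCount
		"SHOW GLOBAL VARIABLES LIKE 'max_user_connections'", // ERTooManyUserConnections
		"SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet'",   // ERNetPacketTooLarge
		"SHOW GLOBAL STATUS LIKE 'Max_used_connections'",    // high-water mark so far
	}
	for _, q := range queries {
		var name, value string
		if err := db.QueryRow(q).Scan(&name, &value); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s = %s\n", name, value)
	}
}
```

Comparing `Max_used_connections` against `max_connections` is a quick way to tell whether the `ERConCount` path is plausible.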
Please let us know what you find. The issue is clearly arising from mysqld, so you should hopefully see it in the logs there!