RedisCluster in multi-process applications #2491
PhpRedis doesn't understand anything about forking, so I would expect this sort of thing (instability, random failures, etc). It may be possible to work around this in the short term by manually "cloning" the RedisCluster object in the freshly forked child. A more robust solution would require changes to PhpRedis to keep track of the owning PID so we can reconnect when that changes.
@michael-grunder Thanks for the reply. This is exactly what I am doing: post-fork, in the freshly created process, I create a RedisCluster by simply calling new RedisCluster(...). The problem is that in different PIDs I am getting the same spl_object_hash for that fresh RedisCluster instance. As far as I understand, that means it is basically the same heap allocation and thus the same object, and because of what you mentioned previously, two processes are now accessing the same object. This doesn't happen every time; out of 8 processes created, I get only 2-3 identical RedisCluster objects.
If you are creating a new object in the forked child, it can't be the same heap allocation. Process memory is duplicated on fork using a copy-on-write scheme, so perhaps there is an edge case where the hash is reused. If you are using persistent connections with pooling enabled, the child could also be grabbing a connection cached by the parent. I've never really used Swoole. If you could provide a very simple example program that uses the mechanisms you're talking about, I can certainly take a look.
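For illustration, a minimal sketch of the pattern being discussed, assuming the pcntl extension and placeholder seed hosts (this is not code from this issue):

```php
<?php
// Construct the RedisCluster connection in the child, *after* the
// fork, instead of letting the child inherit the parent's socket.
$pid = pcntl_fork();
if ($pid === 0) {
    // Child: fresh, non-persistent connection (5th argument = false).
    $cluster = new RedisCluster(null, ["redis:6379", "redis1:6380"], 10, 10, false);
    $cluster->set("child:" . getmypid(), "ok");
    exit(0);
}
pcntl_waitpid($pid, $status); // parent reaps the child
```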
@michael-grunder Here you go! https://github.com/mrAndersen/swoole-phpredis-mutliprocess
Basically, when you run it you will see the same object hash printed for each RedisCluster instance in different PIDs, so as you increase the number of commands per process you will likely encounter random errors and other bugs. As far as I know, I should get a different spl_object_hash for different objects. To be more precise, I included Symfony, which is indeed used in production; however, I doubt that it has any impact on this problem.
Seems like spl_object_hash works per process. Even if it shows the same hashes, in reality those are different objects. You can take a look at the last commit, where, depending on the PID, I allocate extra objects before each new RedisCluster(), thus bumping PHP's internal object allocator, I guess. And therefore I am getting different spl_object_hash values in odd/even PIDs ;)
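A hypothetical reconstruction of that trick (not the actual commit; the modulus and hostname are made up). spl_object_hash is derived from the object's internal handle, and forked children inherit the same handle counter, so padding the counter changes the hash:

```php
<?php
// Allocate a PID-dependent number of throwaway objects so the
// subsequent RedisCluster lands on a different internal object handle
// in each child. Identical handles in different processes yield
// identical spl_object_hash() strings even for unrelated objects.
$padding = [];
for ($i = 0; $i < (getmypid() % 2); $i++) {
    $padding[] = new stdClass(); // bump the handle counter
}

$cluster = new RedisCluster(null, ["redis:6379"], 10, 10, false);
echo getmypid() . " => " . spl_object_hash($cluster) . PHP_EOL;
```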
So, in continuation: the exact errors I am getting are "Error processing response from Redis node!" and similar random connection failures.
Thanks. I'll run your container myself this evening. Those errors are what I would expect if the forked child was attempting to grab a connection actually cached by the parent, but I'll need to replicate locally to say for sure.
@michael-grunder Meanwhile, maybe I can somehow disable that caching mechanic? Maybe I can pass some specific $context into the RedisCluster instance? Everything really works fine except for those random errors. I can't replicate exactly when they occur, but surely this is due to some process-to-process problem. Maybe I can disable persistent connections? As far as I understood, I don't really need that feature: my processes simply live forever, and the connection will stay connected as long as the process lives even if it isn't marked persistent during object creation, because this is not php-fpm with its request-response dying pattern. (But I obviously do need keepalive up to the "timeout" from redis-server itself.) Anyway, thanks for the help.
You can specify whether or not to use persistent connections when constructing a RedisCluster object:

```php
// change `true` to false in the 5th argument
$this->pool[$pid] = new RedisCluster(
    null, ["redis:6379", "redis1:6380", "redis2:6381"], 10, 10, true, "");
```

There are INI settings involved as well:

```
redis.pconnect.pooling_enabled
redis.clusters.persistent
```

There is also the slot cache, although I'm not sure why that would cause an issue.
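For reference, a quick way to see what your build currently uses (a sketch; run this under the same SAPI as your app, since defaults can differ between PhpRedis versions):

```php
// Inspect the INI settings named above; ini_get() returns them as
// strings, e.g. "1" or "0".
var_dump(ini_get('redis.pconnect.pooling_enabled'));
var_dump(ini_get('redis.clusters.persistent'));
```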
Thanks for the advice. For the record: I managed to make it work by setting persistent = false during object creation and setting redis.pconnect.pooling_enabled = 0. The actual TCP connections established from each process are indeed not getting closed and keep working fine, so as I mentioned, I really don't need persistent set to true, because the processes are not dying anyway.
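For anyone landing here, a minimal sketch of the combination described above (hostnames and timeouts are placeholders; the INI value can also be set in php.ini rather than at runtime):

```php
<?php
// Disable PhpRedis connection pooling and use non-persistent sockets,
// so each long-lived worker process owns its own connections.
ini_set('redis.pconnect.pooling_enabled', '0');

$cluster = new RedisCluster(
    null,                                         // no named cluster from php.ini
    ["redis:6379", "redis1:6380", "redis2:6381"], // seed nodes
    10,                                           // connect timeout (seconds)
    10,                                           // read timeout (seconds)
    false                                         // persistent = false
);
```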
I have some strange problems creating RedisCluster() instances in different process environments.
I am using Swoole's addProcess() mechanism to create new processes inside my app, and I want a different RedisCluster object in each process to maintain separate connections. However, sometimes I get the same spl_object_hash for the RedisCluster instance created in different PIDs, and because of that I am getting random "Error processing response from Redis node!" errors.
Any advice on how I can force a new manager object every time, or maybe some hints on using cluster in multi-process environments?
Basically, what Swoole's addProcess() does is fork.
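For context, a rough sketch of that setup, assuming the Swoole extension and placeholder hostnames (an illustration, not the reporter's code). Each Process callback runs in its own fork, so the RedisCluster must be constructed inside the callback:

```php
<?php
use Swoole\Process;

// Spawn a few worker processes; each builds its own RedisCluster
// after the fork, inside its own callback.
for ($i = 0; $i < 4; $i++) {
    $worker = new Process(function (Process $proc) {
        $cluster = new RedisCluster(null, ["redis:6379"], 10, 10, false);
        echo $proc->pid . " => " . spl_object_hash($cluster) . PHP_EOL;
        // ... long-lived work loop using $cluster ...
    });
    $worker->start();
}

// Reap children as they exit; wait() returns false when none remain.
while ($ret = Process::wait(true)) { /* $ret holds pid/code/signal */ }
```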