4.6.0 - ConnectionError max number of clients reached #2831

Closed

nsteinmetz opened this issue Jul 3, 2023 · 37 comments
@nsteinmetz commented Jul 3, 2023

Version: 4.6.0

Platform: Python 3.10 / Docker or Linux

Server : Redis 7.0.11

Description:

We have an application that has been using redis in an async context for months. After upgrading redis-py from 4.5.x to 4.6.0, our app on the staging environment started failing with ConnectionError: max number of clients reached.

I first reviewed the code to add the await redis_conn.close() statements that were missing, but the issue still occurs.

It seems that we reach the 10,000 maxclients limit in roughly one hour, which never happened before. Unfortunately I don't track this metric on this environment.

On staging last week, I used a workaround and set a client timeout to get past the one-hour issue (config set timeout 60).
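The same workaround can also be applied from redis-py itself; a minimal sketch using the standard CONFIG SET / CONFIG GET commands (the URL is a placeholder):

import asyncio
from redis.asyncio import Redis


async def set_idle_timeout(seconds: int) -> None:
    # 60 drops clients idle for more than 60 s; 0 restores the default (disabled)
    async with Redis.from_url("redis://localhost:6379/0") as r:
        await r.config_set("timeout", seconds)
        print(await r.config_get("timeout"))


asyncio.run(set_idle_timeout(60))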

Doing it again a few minutes ago, the connection count dropped from ~8k to ~160 and then stayed stable around 160.

On the production environment (Python 3.9, redis-py 4.5.5), we have had 25 connected clients on average over the last 6 months, with activity similar to our staging environment. No timeout is configured there, so it is disabled (the default configuration).

On staging, after reverting redis-py to 4.5.5 with the timeout disabled (i.e. set to 0), the count stabilises around 20 connections after a few minutes, whereas it would reach 1000+ in the same amount of time with 4.6.0.

My code was basically:

from redis.asyncio import Redis
from app import settings

async def get_redis_connection():
    redis = Redis.from_url(settings.REDIS_URL)
    return redis

async def list_queued_job_ids():
    redis_conn = await get_redis_connection()
    ... do something with redis data and related code ...
    await redis_conn.close()
    return something

So I don't know whether it's a regression in 4.6.0 (maybe related to the abstract connection class?) or something to add or change in my code, but I didn't find any documentation describing changes that would need to be made.

Let me know if you need further details.

@nsteinmetz (Author)

hi @woutdenolf @dvora-h,

As you worked on and validated PR #2734 about extracting the abstract async connection class, does this issue make sense to you?

@woutdenolf (Contributor) commented Jul 18, 2023

Could you create a full example that reproduces the issue?

You can hit maxclients like this, but that's to be expected:

import asyncio
from redis.asyncio import Redis


async def get_redis_connection():
    redis = Redis.from_url("redis://localhost:6379/0")
    return redis


async def list_queued_job_ids():
    redis_conn = await get_redis_connection()
    result = await redis_conn.incr("counter")
    await asyncio.sleep(1)
    await redis_conn.close()
    return result


async def main(fail=False):
    aws = (list_queued_job_ids() for _ in range(10000 + int(fail)))
    print(max(await asyncio.gather(*aws)))


asyncio.run(main(fail=False))
asyncio.run(main(fail=True))

@nsteinmetz (Author) commented Jul 18, 2023

Hi,

I'll try before the start of my summer break at the end of the week, or at the latest once I'm back. I'll also test whether it's related to the backend/API or to one of our workers.

And again, on the production environment with the 4.5.5 release, I'm stable at ~20 connections. So our code is definitely not supposed to hit the 10,000 maxclients limit the way your sample above does ;-)

@woutdenolf (Contributor)

Issue #2814 reports connections not being released to their pool. Maybe related?

@nsteinmetz (Author) commented Jul 19, 2023

Issue #2814 reports connections not being released to their pool. Maybe related?

I'm not in a cluster context (single Redis instance), but who knows... and I think I have some pipelines in the code, btw.

@nsteinmetz (Author) commented Jul 19, 2023

OK, so I'm digging into the code right now and doing some tests.

So the workflow:

  • The IoT device sends some information to our MQTT backend
  • The MQTTListener task gets the message and creates an arq job for the MQTT worker
  • The MQTT worker sees the task and either processes it for some functions or creates a telemetry arq task for the telemetry worker
  • The telemetry worker sees the task to run, if any, and processes it
  • End of process

As the telemetry worker is the most active, I can see the issue with it.

The telemetry worker is structured as below:

async def device_callback_telemetry(session, device_id, payload):
    logging.debug("Received telemetry from device %s" % device_id)

    redis_conn = await get_redis_connection()

    device_infos = await get_device_infos(session, redis_conn, device_id)
    if device_infos.get("state") is not None and device_infos["state"] == DeviceState.ready:
        telemetry = Telemetry()
        telemetry.ParseFromString(payload)
        await process_telemetry(
            session,
            redis_conn,
            device_infos["id"],
            telemetry,
            device_infos["component"],
        )
    else:
        logging.warning("Received telemetry from %s but this device has not been initialised" % device_id)

    await redis_conn.close()

Then, for your information, the redis_conn is passed on to multiple child functions until the end of the process.

It seems that a new client is created every time a new telemetry message is sent, which was not the case with the 4.5.5 release. During my first try with 4.6.0 I also added the await redis_conn.close() calls that were missing in the code.

So I really wonder why this change happens and why the clients are not closed as expected.

@nsteinmetz (Author)

You can see the number of connected clients increase with the sample below:

from redis.asyncio import Redis
import asyncio

async def get_redis_connection():
    redis = Redis.from_url("redis://localhost:6379/0")
    return redis


async def do_something():
    redis_conn = await get_redis_connection()
    result = await redis_conn.incr("counter")
    await asyncio.sleep(1)
    await redis_conn.close()
    return result


async def main():
    while True:
        res = await do_something()
        print(res)

asyncio.run(main())

and on the other side:

watch redis-cli info clients
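The same check can also be scripted; a minimal sketch using redis-py's INFO command (the URL and the one-second interval are arbitrary):

import asyncio
from redis.asyncio import Redis


async def watch_clients(url: str = "redis://localhost:6379/0", interval: float = 1.0) -> None:
    # Poll the same data as `redis-cli info clients`
    async with Redis.from_url(url) as r:
        while True:
            info = await r.info("clients")
            print("connected_clients:", info["connected_clients"])
            await asyncio.sleep(interval)


asyncio.run(watch_clients())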

@nsteinmetz (Author) commented Jul 19, 2023

OK, so the issue is with from_url.

The connected-clients count is stable with:

from redis.asyncio import Redis
import asyncio

async def get_redis_connection():
    redis = Redis(host="localhost", port=6379, db=0)
    return redis


async def do_something():
    redis_conn = await get_redis_connection()
    result = await redis_conn.incr("counter")
    await asyncio.sleep(1)
    await redis_conn.close()
    return result


async def main():
    while True:
        res = await do_something()
        print(res)

asyncio.run(main())

I also noticed that:

  • When I Ctrl+C my script with from_url, clients may not be freed immediately. Some are freed right away, but only a part of them, and the remaining ones some time later. If I then re-run the script, it seems to create 20 new connections instead of 1 until around 100, then it drops back to 11 and increases progressively?!
  • When I Ctrl+C my script without from_url, the single connected client is immediately freed.

@woutdenolf (Contributor) commented Jul 19, 2023

Nice. I can reproduce it. I compared the behavior with the synchronous version and I noticed this:

If you make a sync equivalent of the code in #2831 (comment), the clients are disconnected when the connections are garbage collected:

# redis/connection.py

class AbstractConnection:

    def __del__(self):
        try:
            self.disconnect()
        except Exception:
            pass

There is no __del__ for the async version so disconnect never gets called. This means all connections you make stay connected.

@woutdenolf (Contributor) commented Jul 19, 2023

When I check out v4.5.5, the connection does have a __del__.

It was removed in #2755

@kristjanvalur Can you provide some insight into why __del__ was removed? The use case above needs it.

@kristjanvalur (Contributor) commented Jul 19, 2023

The reason was that the async version was doing all kinds of nasty stuff, including creating a new event loop, in order to perform a disconnect. __del__ is a synchronous method, whereas disconnect() is async.

The reasoning was also that if there is a __del__ call, there is no need to disconnect cleanly (i.e. a socket disconnect will happen when the socket object gets collected).
What I failed to take into account was that, in the case of connection pooling, this is not a disconnect but a return to the pool.
Connections need to return to the pool, not disconnect. We can do this more cleanly. I will create a PR which achieves this without having to invoke all the async machinery.

@kristjanvalur (Contributor)

Update: the difference in behaviour is that when Redis.from_url() is called, Redis.auto_close_connection_pool == False, whereas it is True otherwise.
I think this is a bug, since a private connection pool is indeed created, and it should be closed.

When it is False, the pool is simply garbage collected, along with the connections in it. For some reason, however, without an explicit disconnect, the connection stays open. I need to investigate how that happens.
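Until that is fixed, one possible workaround is to tear down the implicit pool yourself when you are done with a client created via from_url; a minimal sketch (connection_pool.disconnect() is the existing pool API, the URL is a placeholder):

from redis.asyncio import Redis


async def do_something() -> int:
    # Explicitly disconnect the private pool created by from_url(), since
    # auto_close_connection_pool is not honoured there in 4.6.0.
    redis_conn = Redis.from_url("redis://localhost:6379/0")
    try:
        return await redis_conn.incr("counter")
    finally:
        await redis_conn.close()
        await redis_conn.connection_pool.disconnect()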

@nsteinmetz (Author)

Hey @kristjanvalur

I also noticed that the from_url signature is not the same for the sync and async versions.

For async it's from_url(url, **kwargs), as stated in the docs: https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url

For sync, it's

def from_url(cls,
             url: str,
             *,
             host: str | None = ...,
             port: int | None = ...,
             db: int | None = ...,
             password: str | None = ...,
             socket_timeout: float | None = ...,
             socket_connect_timeout: float | None = ...,
             socket_keepalive: bool | None = ...,
             socket_keepalive_options: Mapping[str, int | str] | None = ...,
             connection_pool: ConnectionPool | None = ...,
             unix_socket_path: str | None = ...,
             encoding: str = ...,
             encoding_errors: str = ...,
             charset: str | None = ...,
             errors: str | None = ...,
             decode_responses: Literal[True],
             retry_on_timeout: bool = ...,
             ssl: bool = ...,
             ssl_keyfile: str | None = ...,
             ssl_certfile: str | None = ...,
             ssl_cert_reqs: str | int | None = ...,
             ssl_ca_certs: str | None = ...,
             ssl_check_hostname: bool = ...,
             max_connections: int | None = ...,
             single_connection_client: bool = ...,
             health_check_interval: float = ...,
             client_name: str | None = ...,
             username: str | None = ...) -> Redis[str]
Return a Redis client object configured from the given URL

Back to async version, if I try:

async def get_redis_connection():
    redis = Redis.from_url("redis://localhost:6379/0", auto_close_connection_pool=False)
    return redis

I would get:

TypeError: AbstractConnection.__init__() got an unexpected keyword argument 'auto_close_connection_pool'

whereas I would expect it to be the same as:

async def get_redis_connection():
    redis = Redis(host="localhost", port=6379, db=0, auto_close_connection_pool=False)
    return redis

@kristjanvalur (Contributor)

Interesting, I can have a look.

@nsteinmetz (Author)

Do you want me to open a new issue for that?

@kristjanvalur (Contributor)

So, no, I fixed this use case in the PR.

@nsteinmetz (Author)

Awesome, thanks !

@dvora-h (Collaborator) commented Aug 6, 2023

Can someone confirm that #2859 fixed it and we can close this issue?

@nsteinmetz (Author) commented Aug 6, 2023

As it's a rainy day, I just tested it and it works as expected.

❯ poetry add git+https://github.com:redis/redis-py.git#master

Updating dependencies
Resolving dependencies... (0.5s)

Package operations: 0 installs, 1 update, 0 removals

  • Updating redis (4.6.0 -> 5.0.0rc2 3e50d28)

Writing lock file

Then I ran the sample code below:

from redis.asyncio import Redis
import asyncio


async def get_redis_connection():
    redis = Redis.from_url("redis://localhost:6379/0")
    # redis = Redis(host="localhost", port=6379, db=0)
    return redis


async def do_something():
    redis_conn = await get_redis_connection()
    result = await redis_conn.incr("counter")
    await asyncio.sleep(1)
    await redis_conn.close()
    return result


async def main():
    while True:
        res = await do_something()
        print(res)


asyncio.run(main())

Both versions of the redis variable definition work as expected, with the same behaviour:

   redis = Redis.from_url("redis://localhost:6379/0")
   # or
   redis = Redis(host="localhost", port=6379, db=0)

So from my point of view, all the issues identified in this issue are fixed.

Thanks for your support, fixes and follow-up @kristjanvalur @woutdenolf @dvora-h !

@nsteinmetz (Author)

BTW, will there be a 4.6.1 release with this fix, or will it be 5.0 only?

@nsteinmetz (Author) commented Aug 16, 2023

Hi there,

It seems the issue is still present, even if less visible. I just deployed the code with the 5.0.0 release to my staging environment (the only change from the previous version), and the connection count keeps increasing, although much more slowly than before.

Deployment was 3 hours ago, and I have ~1100-1200 connected clients (vs. 1000 clients within a few minutes previously).

Could there be a remaining bug with the Redis.from_url("redis://localhost:6379/0") syntax?

nsteinmetz reopened this Aug 16, 2023
@kristjanvalur (Contributor)

What is the exact pattern you are using, i.e. is there a code loop which can repro this?

@kristjanvalur (Contributor)

Or, if this is a larger application, are there possibly code paths where you do not close your Redis object or use it in a context manager? In that case, you should be getting resource usage warnings in your logs.

@nsteinmetz (Author)

Hi @kristjanvalur

It's a FastAPI app with arq workers as well, so extracting the code for easy reproduction is not that easy 😢

I'll try to investigate further today on my local machine and see if I can find a culprit.

I already checked several times whether I forgot to close a connection, but maybe I missed one somewhere 🤔

I don't use a context manager AFAIK. Do you have a sample of a context manager somewhere (ideally in an async context)?

@kristjanvalur (Contributor) commented Aug 17, 2023

The proper pattern, with the small example you gave me, is:

async def do_something():
    async with Redis.from_url("redis://localhost:6379/0") as redis:
        await do_something_with_the_redis(redis)

Or for a worker:

async def do_something():
    redis = Redis.from_url("redis://localhost:6379/0")
    try:
        start_worker(work_handler, redis)
    except Exception:
        await redis.close()
        raise

async def work_handler(redis):
    async with redis:
        await do_something_with_the_redis(redis)

Basically, make sure that whenever you have created a Redis connection pool (implicit inside the Redis object), it is appropriately closed afterwards.

If you have a more complex pattern, such as a separately managed ConnectionPool (which is wise), then you similarly need to close it when you have stopped using it (using pool.disconnect(), curiously). It should have an aclose() method too, for consistency.
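For example, a minimal sketch of that shared-pool pattern (the URL, max_connections and the shutdown hook are placeholders):

from redis.asyncio import ConnectionPool, Redis

pool = ConnectionPool.from_url("redis://localhost:6379/0", max_connections=20)


async def handle_request() -> int:
    # The client borrows connections from the shared pool and, on exit,
    # closes itself without closing the pool it was given.
    async with Redis(connection_pool=pool) as r:
        return await r.incr("counter")


async def shutdown() -> None:
    # Close all pooled sockets once, when the application stops.
    await pool.disconnect()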

In async Python, doing any sort of IO-related cleanup during garbage collection is frowned upon. Instead, it is the responsibility of the programmer to ensure that resources are freed, typically using a context manager pattern as above. This is because __del__ handlers can be called at arbitrary places and are often very limited in what they can do.

Synchronous Python is more forgiving, in that blocking IO operations can generally be done anywhere.

The Python standard library generally does not do any cleanup from __del__. Synchronous redis-py did put safety belts in the sync code. The async library also had safety belts in place, but they were nasty: they would create a new event loop and do things that generally should not be done in __del__ methods.

@kristjanvalur (Contributor)

Interestingly, in async Python, the canonical method to use is aclose(). I'll probably create a PR to add those, so that stdlib context managers and other tools play well with redis.
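For example, once an aclose() alias exists (it landed in later redis-py releases), the stdlib helper contextlib.aclosing can manage the client directly; a minimal sketch assuming Python 3.10+ and a redis-py version with Redis.aclose():

import asyncio
from contextlib import aclosing

from redis.asyncio import Redis


async def main() -> None:
    # aclosing() awaits r.aclose() on exit, much like `async with r` does with close()
    async with aclosing(Redis.from_url("redis://localhost:6379/0")) as r:
        print(await r.ping())


asyncio.run(main())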

@Harry-Lees

It's a FastAPI app with arq workers as well, so extracting the code for easy reproduction is not that easy 😢

I haven't tried 5.0 in our production application, since I had issues with 4.6 that capped our Redis instance connections, but I also saw this issue on 4.6 with a FastAPI application. Our pattern was essentially:

async def get_redis():
    """
    Acquire a Redis connection.
    """
    if config.REDIS_URI is None:
        raise ValueError("REDIS_URI must be set in the environment")
    connection = redis.from_url(config.REDIS_URI, decode_responses=True)
    yield connection
    await connection.close()

which was then called in endpoints using FastAPI's dependency injection.

from fastapi import Depends

async def foo(conn = Depends(get_redis)):
   ...

@nsteinmetz (Author) commented Aug 17, 2023

@kristjanvalur it seems I found the last remaining culprits where the Redis connection was not correctly closed.

Thanks for your detailed explanation about context managers. It helped me fix a few of them this afternoon.

On my local machine it seems quite stable so far. I'll run it all night long to see if it changes, but it looks good this time 😄

@kristjanvalur (Contributor) commented Aug 18, 2023

I haven't tried 5.0 in our production application, since I had issues with 4.6 that capped our Redis instance connections, but I also saw this issue on 4.6 with a FastAPI application. Our pattern was essentially:

Right, @Harry-Lees, I'd rewrite that as:

    connection = redis.from_url(config.REDIS_URI, decode_responses=True)
    try:
       yield connection
    finally:
        await connection.close()

or

    async with redis.from_url(config.REDIS_URI, decode_responses=True) as connection:
       yield connection

This ensures that the connection is closed even if there is an error while using it.
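Put together as a complete dependency, that would look roughly like this (a sketch; the literal URL stands in for config.REDIS_URI from the earlier snippet, and the route is just for illustration):

from fastapi import Depends, FastAPI
from redis import asyncio as redis

app = FastAPI()


async def get_redis():
    # Closed (private pool included) when the request ends, even on errors
    async with redis.from_url("redis://localhost:6379/0", decode_responses=True) as connection:
        yield connection


@app.get("/counter")
async def counter(conn: redis.Redis = Depends(get_redis)):
    return await conn.incr("counter")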

@nsteinmetz (Author)

For the record, I deployed the fix on the staging and production environments with 5.0, and the number of connected clients is stable over time, as it was with 4.5.5.

So if you have such a case, you need:

  • to upgrade to a 5.0.0+ release
  • to review your code to ensure your connections are closed via a context manager or an explicit close statement (see the sketch below).
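For the simple pattern from the top of this issue, that review amounts to something like this (a sketch reusing the helper from the first comment; the llen call and key are just illustrative):

from redis.asyncio import Redis
from app import settings


async def get_redis_connection() -> Redis:
    return Redis.from_url(settings.REDIS_URL)


async def list_queued_job_ids():
    redis_conn = await get_redis_connection()
    # The context manager closes the client (and its private pool) on exit
    async with redis_conn:
        return await redis_conn.llen("queued_jobs")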

@DmitriyL02 commented Aug 19, 2023

Redis server version: 6.2
redis-py version: 4.6.0

Hi! I have the same problem when using Redis Sentinel. If I use the following code, the connected-clients counter increments every time I call a Redis command.

from redis.asyncio import Sentinel

sentinel = Sentinel(sentinels=config.REDIS_SENTINELS, password=config.REDIS_PASSWORD, client_name="local_test")

client = sentinel.master_for(config.REDIS_CLUSTER_NAME)

# After the context manager exits, the connection is still not closed
async with client as redis_master:
    await redis_master.set("key", "value")
    await redis_master.get("key")

# Same example without a context manager; close() does not help either

await client.set("key", "value")
await client.get("key")
await client.close()

I wrote my own context manager and it works, but I'm not really sure it is correct, because I want to release the used connection, whereas right now I close the connection. I tried to use auto_close_connection_pool, but with Sentinel it is not working (check __aexit__ in the Redis module).

def setup(self) -> None:
    if not self.sentinel: # class environment
        self.sentinel = Sentinel(
            sentinels=self.sentinels,
            password=self.__password,
            socket_timeout=self.timeout,
            client_name=self.client_name,
        )

@asynccontextmanager
async def initialize_master_connection(self):
    self.setup()
    redis = self.sentinel.master_for(self.service_name)
    try:
        yield redis
    except RedisError:
        logger.exception(
            f"Cannot create a connection to redis service_name: {self.service_name} sentinels: {self.sentinels} "
            f"client_name: {self.client_name}"
        )
    finally:
        await redis.connection_pool.disconnect(inuse_connections=False)

How can I solve this problem? Is it the same problem?

Petitoto pushed a commit to aeecleclair/Hyperion that referenced this issue Aug 19, 2023: Dependabot bump of redis from 4.6.0 to 5.0.0 (the 5.0.0 changelog includes "Fix #2831, add auto_close_connection_pool=True arg to asyncio.Redis.from_url()").
@kristjanvalur (Contributor)

I'll have a look at the Sentinel code, which I honestly very rarely touch.

@kristjanvalur (Contributor)

Indeed, the master_for and slave_for methods do not set up the resulting Redis object correctly. I'll create a PR to fix that.

@kristjanvalur (Contributor) commented Aug 20, 2023

Okay, I've created the above-mentioned PR.
@DmitriyL02, a workaround for you is to set the auto_close_connection_pool attribute manually, like this:

sentinel = Sentinel(sentinels=config.REDIS_SENTINELS, password=config.REDIS_PASSWORD, client_name="local_test")
client = sentinel.master_for(config.REDIS_CLUSTER_NAME)
client.auto_close_connection_pool = True

async with client:
    await client.set("key", "value")
    await client.get("key")

@kristjanvalur (Contributor)

@nsteinmetz could you re-open this defect? Until #2900 is merged, this is still an issue.

nsteinmetz reopened this Aug 23, 2023
@kristjanvalur (Contributor)

aaand, it got merged :)

@nsteinmetz (Author)

As several related bugs were fixed through this issue, I suggest that the next person open a new issue 😄

Thanks @kristjanvalur for all your fixes!

oleobal pushed a commit to Substra/substra-backend that referenced this issue Sep 11, 2023: Dependabot bump of redis from 4.5.4 to 5.0.0.
oleobal pushed a commit to Substra/substra-backend that referenced this issue Sep 11, 2023: Dependabot bump of redis from 4.6.0 to 5.0.0.