
[Bug] AWS Connection.py Using Private Botocore get_response Which Expects JSON But is Being Passed XML #1726

Closed
o-nikolas opened this issue May 16, 2023 · 17 comments


@o-nikolas

Hello Kombu folks!

tl;dr

Kombu uses an internal/private Botocore method, get_response, whose behaviour recently changed as part of Botocore's move to a new wire protocol for SQS, and that change breaks Kombu.
The change has been temporarily reverted, but Kombu cannot depend on this method behaving stably going forward, as the change will soon be reapplied.

Full Context/Explanation

The implementation of the AWS asynchronous connection currently uses the non-public botocore method get_response; here is an example:

    if response.status == self.STATUS_CODE_OK:
        _, parsed = get_response(
            service_model.operation_model(operation), response.response
        )
        return parsed

get_response is vendored in here:

"""Amazon boto3 interface."""
from __future__ import annotations
try:
import boto3
from botocore import exceptions
from botocore.awsrequest import AWSRequest
from botocore.response import get_response

Botocore recently switched to JSON for the wire protocol it uses to communicate with SQS here. (Note: the commit URL doesn't expand nicely; you'll need to follow the link.)

Kombu builds XML AWS Query API requests manually (instead of using something like the boto3 SDK client) to communicate with SQS:

    def make_request(self, operation, params_, path, verb, callback=None):  # noqa
        params = params_.copy()
        if operation:
            params['Action'] = operation
        signer = self.sqs_connection._request_signer
        # defaults for non-get
        signing_type = 'standard'
        param_payload = {'data': params}
        if verb.lower() == 'get':
            # query-based opts
            signing_type = 'presignurl'
            param_payload = {'params': params}
        request = AWSRequest(method=verb, url=path, **param_payload)
        signer.sign(operation, request, signing_type=signing_type)
        prepared_request = request.prepare()
        return self._mexe(prepared_request, callback=callback)
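
For context, with the Query protocol those parameters end up form-encoded in the request body. A minimal sketch of that serialization (illustrative parameter values; 2012-11-05 is the SQS Query API version string):

    from urllib.parse import urlencode

    params = {
        "Action": "ReceiveMessage",
        "MaxNumberOfMessages": 1,
        "WaitTimeSeconds": 0,
        "Version": "2012-11-05",  # SQS Query API version
    }
    # Form-encoded body of a hand-built Query request:
    print(urlencode(params))
    # Action=ReceiveMessage&MaxNumberOfMessages=1&WaitTimeSeconds=0&Version=2012-11-05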

The resulting XML responses are then passed for parsing to the aforementioned get_response, which, due to the Botocore change linked above, now expects JSON. In this situation, get_response returns a response object with an empty body, which Kombu reads as a legitimate empty result. This means that a request to get messages, for example, will always return empty; it fails silently.
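
To illustrate the silent-failure mode with a toy model (this is not botocore's actual parser): a parser that expects JSON, when handed an XML body, can degrade to an empty payload, which the caller then mistakes for "no messages":

    import json

    xml_body = b"<ReceiveMessageResponse><ReceiveMessageResult>...</ReceiveMessageResult></ReceiveMessageResponse>"

    def parse_json_protocol(body):
        """Hypothetical stand-in for a JSON wire-protocol parser."""
        try:
            return json.loads(body)
        except json.JSONDecodeError:
            return {}  # degrades to an empty payload instead of raising

    parsed = parse_json_protocol(xml_body)
    print(parsed.get("Messages", []))  # [] -- reads as "no messages", a silent failure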

Botocore has reverted the change here, but this is only a temporary fix, as JSON is the preferred wire protocol.

Possible Paths Forward

  1. Kombu could move to using the public boto3 SDK to interact with SQS (create queues, read messages, etc.) instead of hand-crafting AWS Query API requests (see the sketch after this list).
    pros: Seamless support from a public interface, which will handle the wire-protocol migration smoothly.
    cons: This is a large refactoring, and there is presumably some historical reason, unknown to us, why Kombu hand-crafts requests: it is a complex piece of code, and there was likely a reason for the effort to build it this way.
  2. Re-institute the XML parsing that was in place before the migration to get_response (which was done in this PR here).
    pros: Smaller scope, so the change could likely be completed sooner, and the code worked in the past.
    cons: More complex hand-crafted request/response handling remains in Kombu rather than simply using the boto3 SDK.
  3. Something else entirely? Please let us know what you think!
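
For reference, option 1 boils down to plain calls against the public boto3 SQS client, roughly like this (a minimal sketch; the queue name and region are placeholders, and error handling is omitted):

    import boto3

    # The public SDK client handles signing, the wire protocol, and response
    # parsing internally, so a protocol switch from Query to JSON inside
    # botocore is transparent to callers.
    sqs = boto3.client("sqs", region_name="us-east-1")  # placeholder region

    queue_url = sqs.get_queue_url(QueueName="my-queue")["QueueUrl"]
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=0,
    )
    for message in resp.get("Messages", []):
        print(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])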

Thanks in advance for your time 🙏

CC:


@auvipy
Member

auvipy commented May 17, 2023

  1. Kombu could move to using the public boto3 SDK to interact with SQS (create queues, read messages, etc.) instead of hand-crafting AWS Query API requests.
    pros: Seamless support from a public interface, which will handle the wire-protocol migration smoothly.

I am willing to lean toward the official public-API-based refactor. Is it possible to share the release notes?

@vincbeck

Nice :) We also think this is the "best" solution. I am not sure what you are asking for by "release notes", though. You can find the documentation of boto3 here. Boto3 interacts with all (or at least most) AWS services; you can find the documentation specific to SQS here. The documentation is pretty well done: you have specifications for all APIs, with examples. Feel free to ping us if you need any help/guidance.

@auvipy
Member

auvipy commented May 18, 2023

Nice :) We also think this is the "best" solution. I am not sure what you are asking for by "release notes", though. You can find the documentation of boto3 here. Boto3 interacts with all (or at least most) AWS services; you can find the documentation specific to SQS here. The documentation is pretty well done: you have specifications for all APIs, with examples. Feel free to ping us if you need any help/guidance.

I didn't know that we were using a non-public API!

@auvipy
Member

auvipy commented May 21, 2023

Can you AWS experts have a look at this issue? celery/celery#8015 (comment)

@vincbeck

Happy to help on that issue indeed, but it is not related to this one, right? At least, to me, it does not look related.

Have you, by any chance, started working on the get_response issue? Or is there any plan to address it?

@o-nikolas
Author

Can you AWS experts have a look at this issue? celery/celery#8015 (comment)

The above issue is not the same as this one, since the change that caused this issue was rolled back. However, after a quick look, the above does seem like another issue related to hand-crafting HTTP requests instead of simply using the boto3 SDK.

@auvipy auvipy added this to the 5.3 milestone May 24, 2023
@auvipy auvipy modified the milestones: 5.3, 5.3.x Jun 1, 2023
@shubham22

@auvipy - I noticed that this issue is marked for the 5.3.x release, which is due by June 30th. I'm curious whether the Kombu community is planning to address this in the next release. If not, is there anything we can do to get it prioritized?

@auvipy
Member

auvipy commented Jun 7, 2023

We have a lot on the plate already, and this is a big overhaul, but I will try. The 5.3.x milestone will have several bug-fix releases.

@auvipy auvipy self-assigned this Jun 7, 2023
@rafidka
Contributor

rafidka commented Jun 17, 2023

@auvipy , I am from the Amazon MWAA team. I experimented with this and managed to do a PoC of a solution by making kombu use the boto3 library instead of manually crafting requests (path no. 1 in @o-nikolas's original message). I confirmed it solves the issue, so I am happy to publish a PR for this. However, there are a couple of issues that I want to discuss with you first.

Basically, my solution involves updating the AsyncSQSConnection class to use boto3. One challenge, though, is that the boto3 library is not async (and I wonder whether that was a contributing factor in the original decision to craft the AWS requests manually). However, this can easily be addressed by employing kombu's event loop. Below is my re-implementation of the receive_message method employing it:

    def receive_message(
        self, queue_url, number_messages=1, visibility_timeout=None,
        attributes=('ApproximateReceiveCount',), wait_time_seconds=None,
        callback=None
    ):
        def make_request(
            queue_url, number_messages, wait_time_seconds, visibility_timeout,
            attributes, callback
        ):
            kwargs = {
                "QueueUrl": queue_url,
                "MaxNumberOfMessages": number_messages,
                "MessageAttributeNames": attributes,
                "WaitTimeSeconds": wait_time_seconds,
            }
            if visibility_timeout:
                kwargs["VisibilityTimeout"] = visibility_timeout
            resp = self.sqs_connection.receive_message(**kwargs)
            if callback:
                callback(resp)

        # Let kombu's event loop call `make_request` asynchronously.
        return self.hub.call_soon(
            make_request, 
            queue_url, number_messages, wait_time_seconds, visibility_timeout,
            attributes, callback,
        )

I tested this and confirmed it fixes the issue. However, one thing that still needs to be tested is behaviour under multi-threading and multi-processing. kombu's SQS broker uses a boto3 session to communicate with SQS. Unfortunately, boto3 sessions (and resources) are not thread-safe; only clients are. Furthermore, when it comes to multi-processing, none of them are safe to share.
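
As background on the thread-safety concern, a common mitigation pattern (a general sketch, not kombu's code) is to build one session and client per thread:

    import threading

    import boto3

    _local = threading.local()

    def get_sqs_client():
        # boto3 clients are generally safe to use across threads once created,
        # but sessions are not; creating both lazily per thread avoids sharing
        # either. None of these objects should be shared across processes, so
        # forked workers must build their own clients after the fork.
        if not hasattr(_local, "sqs"):
            _local.sqs = boto3.session.Session().client("sqs")
        return _local.sqs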

Based on my testing, the same process and thread execute both the call_soon and make_request methods, which makes sense since the SQS broker relies on the event loop to communicate with SQS, and my modified version of receive_message also relies on the same event loop. However:

  1. Is it possible that the same event loop might end up employing more than one thread or process for some reason? I quickly checked the implementation in hub.py, and that doesn't seem to be the case.

  2. Is it possible to have more than one event loop? That doesn't seem to be the case either, since the set_event_loop method uses a global variable, but I cannot be fully confident given my limited understanding of the Celery code base.

  3. Even if the answer to both questions above is "No", I am not very comfortable with SQS.py assuming that the event loop is unique: this is an implicit assumption and, worse, one that is very hard to discover without an in-depth understanding of how async I/O is implemented within kombu. So it is theoretically possible that multiple event loops are introduced in the future (assuming that is not the case already) to achieve more I/O concurrency.

Looking forward to hearing from you soon. I am happy to discuss this with you over a meeting to move things faster.

@auvipy
Member

auvipy commented Jun 18, 2023

@auvipy , I am from the Amazon MWAA team. I experimented with this and managed to do a PoC of a solution by making kombu use the boto3 library instead of manually crafting requests (path no. 1 in @o-nikolas's original message). […]

I think you have done a great job here. In the future we will switch to an asyncio-based approach, but I think we can start with what you have discussed so far. kombu/celery was designed back in 2009, when there was no standard async in Python; the async features here are mostly monkey-patched versions, which are not so reliable, so we would be better off with something more mainstream and standard. If you can come up with a draft PR, it will be easier for me to analyze it further alongside you.

@auvipy
Member

auvipy commented Jun 18, 2023

Also, this could be a great alternative for the future: https://github.com/terrycain/aioboto3
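
For illustration, polling SQS with aioboto3 would look roughly like this (a hedged sketch based on aioboto3's documented async context-manager usage; the queue URL is a placeholder):

    import asyncio

    import aioboto3

    async def poll_queue(queue_url):
        session = aioboto3.Session()
        # Clients are async context managers in aioboto3; awaiting the call
        # keeps the event loop free during a long poll instead of blocking it.
        async with session.client("sqs") as sqs:
            resp = await sqs.receive_message(
                QueueUrl=queue_url,
                MaxNumberOfMessages=1,
                WaitTimeSeconds=20,
            )
            for message in resp.get("Messages", []):
                print(message["Body"])

    asyncio.run(poll_queue("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"))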

@auvipy auvipy removed their assignment Jun 18, 2023
@rafidka
Contributor

rafidka commented Jun 19, 2023

@auvipy

I think you have done a great job here. In the future we will switch to an asyncio-based approach, but I think we can start with what you have discussed so far. kombu/celery was designed back in 2009, when there was no standard async in Python; the async features here are mostly monkey-patched versions, which are not so reliable, so we would be better off with something more mainstream and standard. If you can come up with a draft PR, it will be easier for me to analyze it further alongside you.

Thanks for the confirmation. I will publish a PR for this soon.

One question I would like to ask is with regard to cleanup. I see a lot of unnecessary functions in the SQS async client. I am willing to delete them, as opposed to spending time re-implementing them using boto3. I confirmed they are not being used within Celery, but can you confirm that external dependencies are not supposed to access those functions directly? (There is still the concern that some libraries might use them regardless, e.g. what happened with kombu accessing internal functions of botocore, but I still believe it is better to clean up the code.)

rafidka added a commit to rafidka/kombu that referenced this issue Jun 21, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Jun 22, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Jun 22, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Jun 22, 2023
@rafidka
Contributor

rafidka commented Jun 22, 2023

@auvipy , there you go #1759. Sorry for the delay.

@auvipy
Member

auvipy commented Jun 22, 2023

It is always better to use a public API. Also, it would be great to get help from AWS folks to improve SQS support in kombu & celery; we are currently lacking in that area.

rafidka added a commit to rafidka/kombu that referenced this issue Jun 23, 2023
@rafidka
Contributor

rafidka commented Jun 23, 2023

It is always better to use a public API. Also, it would be great to get help from AWS folks to improve SQS support in kombu & celery; we are currently lacking in that area.

I will need to discuss this with my manager before making any commitment. Do you have anything specific you want improved, or is it just in general?

rafidka added a commit to rafidka/kombu that referenced this issue Jun 26, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Jun 26, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Jun 26, 2023
@auvipy
Member

auvipy commented Jun 27, 2023

Do you have anything specific you want improved, or is it just in general?

If you could guide us regarding best practices for SQS and the related parts of kombu & celery, it would be great.

@auvipy auvipy closed this as completed in 862d0bc Jun 27, 2023
auvipy added a commit that referenced this issue Sep 26, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 6, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 9, 2023
TL;DR - The use of boto3 in celery#1759 resulted in relying on blocking
(synchronous) HTTP requests, which caused the performance issue reported
in celery#1783.

`kombu` previously crafted AWS requests manually, as explained in detail
in celery#1726, which resulted in an outage when botocore temporarily
changed the default protocol to JSON (before rolling back due to the
impact on celery and airflow). To fix the issue, I submitted celery#1759,
which changes `kombu` to use `boto3` instead of manually crafting AWS
requests. This way, when boto3 changes the default protocol, kombu won't
be impacted.

While working on celery#1759, I did extensive debugging to understand the
multi-threading nature of kombu. What I discovered is that there isn't
actual multi-threading in the true sense of the word, but an event loop
that runs on the same thread and process and orchestrates the
communication with SQS. As such, it didn't appear to me that there was
anything to worry about with my change, and the testing I did didn't
discover any issue. However, it turns out that while kombu's event loop
doesn't have actual multi-threading, its [reliance on
pycurl](https://github.com/celery/kombu/blob/main/kombu/asynchronous/http/curl.py#L48)
(and thus libcurl) meant that requests to AWS were being made
asynchronously. boto3 requests, on the other hand, are always made
synchronously, i.e. they are blocking requests.

The above meant that my change introduced blocking on kombu's event
loop. This is fine in most cases, since requests to SQS are pretty fast.
However, when long polling is used, a call to SQS's ReceiveMessage can
last up to 20 seconds (depending on the user configuration).

To solve this problem, I rolled back my earlier changes and, instead, to
address the issue reported in celery#1726, I borrowed the implementation
of `get_response` from botocore and changed the code so that the
protocol is hard-coded to `query`. This way, when botocore changes the
default protocol of SQS to JSON, kombu won't be impacted, since it
crafts its own requests and now parses them with a protocol that matches
those crafted requests.

This shouldn't be the final solution; it is more of a workaround that
does the job for now. There are two problems with this approach:

1. It doesn't address the fundamental problem discussed in celery#1726,
   which is that `kombu` relies on botocore internals, namely the
   `StreamingBody` class.
2. It still assumes that the communication protocol is the `query`
   protocol. While this is true, and likely to remain true for some
   time, in theory nothing stops SQS (the backend, not the client) from
   changing the default protocol, rendering the hard-coded `query`
   protocol in the new `get_response` method problematic.

As such, the long-term solution should be to rely completely on boto3
for any communication with AWS, and to ensure that all requests are
asynchronous (non-blocking). This, however, is a fundamental change that
requires a lot of testing, in particular performance testing.
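
To make the blocking concern concrete, here is a toy model (illustrative only, not kombu's actual hub) of why a single synchronous long-poll call starves a single-threaded event loop:

    import time

    class ToyHub:
        """Toy stand-in for a single-threaded event loop."""

        def __init__(self):
            self.ready = []

        def call_soon(self, fn, *args):
            self.ready.append((fn, args))

        def run(self):
            while self.ready:
                fn, args = self.ready.pop(0)
                fn(*args)  # a blocking callback stalls every other callback

    def blocking_receive():
        time.sleep(20)  # models a synchronous long-poll ReceiveMessage call

    hub = ToyHub()
    hub.call_soon(blocking_receive)
    hub.call_soon(print, "heartbeat")  # runs ~20 seconds late
    hub.run()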
rafidka added a commit to rafidka/kombu that referenced this issue Oct 9, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 9, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 9, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 9, 2023
auvipy added a commit that referenced this issue Oct 10, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 11, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 14, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 14, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 14, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 15, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 17, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 23, 2023
rafidka added a commit to rafidka/kombu that referenced this issue Oct 23, 2023
auvipy added a commit that referenced this issue Nov 16, 2023
* Use the correct protocol for SQS requests

TL;DR - The use of boto3 in #1759 resulted in relying on blocking
(synchronous) HTTP requests, which caused the performance issue reported
in #1783.

`kombu` previously crafted AWS requests manually, as explained in detail
in #1726, which resulted in an outage when botocore temporarily changed
the default protocol to JSON (before rolling back due to the impact on
celery and airflow). To fix the issue, I submitted #1759, which changes
`kombu` to use `boto3` instead of manually crafting AWS requests. This
way, when boto3 changes the default protocol, kombu won't be impacted.

While working on #1759, I did extensive debugging to understand the
multi-threading nature of kombu. What I discovered is that there isn't
actual multi-threading in the true sense of the word, but an event loop
that runs on the same thread and process and orchestrates the
communication with SQS. As such, it didn't appear to me that there was
anything to worry about with my change, and the testing I did didn't
discover any issue. However, it turns out that while kombu's event loop
doesn't have actual multi-threading, its [reliance on
pycurl](https://github.com/celery/kombu/blob/main/kombu/asynchronous/http/curl.py#L48)
(and thus libcurl) meant that requests to AWS were being made
asynchronously. boto3 requests, on the other hand, are always made
synchronously, i.e. they are blocking requests.

The above meant that my change introduced blocking on kombu's event
loop. This is fine in most cases, since requests to SQS are pretty fast.
However, when long polling is used, a call to SQS's ReceiveMessage can
last up to 20 seconds (depending on the user configuration).

To solve this problem, I rolled back my earlier changes and, instead, to
address the issue reported in #1726, I changed the `AsyncSQSConnection`
class so that it crafts either a `query` or a `json` request depending
on the protocol used by the SQS client. Thus, when botocore changes the
default protocol of SQS to JSON, kombu won't be impacted, since it
crafts its own requests using a protocol that matches the client's.

This shouldn't be the final solution; it is more of a workaround that
does the job for now. The final solution should be to rely completely on
boto3 for any communication with AWS, and to ensure that all requests
are asynchronous (non-blocking). This, however, is a fundamental change
that requires a lot of testing, in particular performance testing.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update kombu/asynchronous/aws/sqs/connection.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Asif Saif Uddin <auvipy@gmail.com>
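
Conceptually, the merged fix picks the request serialization from the protocol the SQS client reports, along these lines (a hedged sketch with illustrative names, not the actual kombu code; the header values follow common AWS conventions for the JSON and Query protocols):

    import json
    from urllib.parse import urlencode

    def serialize_sqs_request(protocol, operation, params):
        """Illustrative: pick the wire format from the client's protocol."""
        if protocol == "json":
            headers = {
                "Content-Type": "application/x-amz-json-1.0",
                "X-Amz-Target": f"AmazonSQS.{operation}",
            }
            return headers, json.dumps(params)
        # Default: the legacy Query protocol with a form-encoded Action=... body.
        headers = {"Content-Type": "application/x-www-form-urlencoded"}
        return headers, urlencode({"Action": operation, **params})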