Add handling of 503 Service Unavailable retries #1713

Merged: 8 commits merged into pycontribs:main on Dec 21, 2023

Conversation

gmainguet (Contributor)

Hi,

This is a follow-up to #925 and the associated PR #1364.

As mentioned in https://developer.atlassian.com/cloud/jira/platform/rate-limiting/, retries should not be limited to 429 responses.

503 responses are slightly different but should follow the same retry logic.

This PR implements the pseudo-code Atlassian provides on that page while keeping the existing behaviour for 429s.

Unit tests have been updated accordingly.

Note: 503 responses were in fact received from Jira Cloud (08-2023) and the library was unable to handle them, hence this PR.
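
For reference, the retry pattern described on that Atlassian page looks roughly like the sketch below. It is a minimal illustration of the documented approach rather than the code in this PR; the function name request_with_retries, the MAX_RETRIES cap and the backoff constants are all placeholders, and Retry-After is assumed to be given in seconds.

import random
import time

import requests

RECOVERABLE = {429, 503}
MAX_RETRIES = 4    # illustrative cap, not a value from this PR
BASE_DELAY = 10.0  # seconds; doubles on each attempt


def request_with_retries(session: requests.Session, method: str, url: str, **kwargs):
    """Retry 429/503 responses, honouring Retry-After and backing off otherwise."""
    for attempt in range(MAX_RETRIES + 1):
        response = session.request(method, url, **kwargs)
        if response.status_code not in RECOVERABLE or attempt == MAX_RETRIES:
            return response
        retry_after = response.headers.get("Retry-After")
        if retry_after is not None:
            # The server said how long to wait; do as told (assumes seconds).
            delay = float(retry_after)
        else:
            # No hint from the server: exponential backoff with a little jitter.
            delay = BASE_DELAY * 2**attempt * random.uniform(1.0, 1.5)
        time.sleep(delay)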

Comment on lines +294 to +296
suggested_delay = (
    -1
)  # Controls return value AND whether we delay or not, Not-recoverable by default


Right now this comment doesn't quite make sense.

Renaming this variable to recovery_delay would make it much more readable.

gmainguet (Contributor, Author)


What would you change the comment to? (It makes sense to me, but I'm OK with adding more meaning to it.)

I kind of agree with recovery_delay, but because of the max limit it won't always end up being the actual recovery delay.


Then you go with -1, right? Anyhow, this was just a minor suggestion, don't worry about it.
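
For readers following the thread: the comment under discussion describes a sentinel, where one value doubles as both the return value and the retry decision. A minimal sketch of that pattern (the names suggested_delay_for and maybe_wait_and_retry are illustrative, not from the PR):

import time


def suggested_delay_for(response) -> float:
    # The sentinel: -1 means "not recoverable, do not retry"; any value >= 0
    # is the number of seconds to wait before the next attempt.
    suggested_delay = -1
    retry_after = response.headers.get("Retry-After")
    if retry_after:
        suggested_delay = float(retry_after)
    return suggested_delay


def maybe_wait_and_retry(response) -> bool:
    # The caller reads both the decision and the delay from the single value.
    delay = suggested_delay_for(response)
    if delay < 0:
        return False      # not recoverable: give up
    time.sleep(delay)     # recoverable: wait, then the caller retries
    return True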

Comment on lines +317 to +319
if response.status_code in recoverable_error_codes:
    retry_after = response.headers.get("Retry-After")
    if retry_after:


This looks like a regression: 429 should still always be retried.

gmainguet (Contributor, Author)


429s are still retried, just with a different delay, as per the previous behaviour -- see line 322.
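
A rough sketch of the branch as the author describes it: both 429 and 503 honour an explicit Retry-After header, and a 429 without the header still gets a fallback delay, preserving the previous behaviour. The fallback formula and the counter argument below are assumptions, not a verbatim copy of the diff.

def suggested_delay(response, counter: int) -> float:
    recoverable_error_codes = [429, 503]
    delay = -1  # stays -1 => not recoverable, no retry
    if response.status_code in recoverable_error_codes:
        retry_after = response.headers.get("Retry-After")
        if retry_after:
            # Both 429 and 503 honour the server-provided Retry-After.
            delay = float(retry_after)
        elif response.status_code == 429:
            # Previous behaviour kept for 429 (the "line 322" referred to above):
            # retry even without Retry-After, here with an assumed exponential backoff.
            delay = 10 * 2**counter
    return delay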

@adehad (Collaborator) left a comment


Thank you for this high-quality contribution.

@adehad merged commit f6dfdc8 into pycontribs:main on Dec 21, 2023
10 of 12 checks passed