Don't retry network requests that fail with code 403 #373
Description
This adds a check to `should_retry()` that immediately raises an exception when a 403 code is received from Databricks, along with an associated test. Prior to this change, the connector would keep retrying until the retry counter was exhausted. I think this behavior has existed for some time but wasn't obvious to users until exponential backoff was implemented in #349: before #349, the connector would retry over and over until it ran out of retries and then raise an exception, but now that backoff is in force it takes much longer to exhaust the retry counter, so the failure looks like a hang.
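The fast-fail behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not the connector's actual code: the names `RetryPolicy`, `should_retry`, and `NonRetryableError` are stand-ins for the real implementation.

```python
class NonRetryableError(Exception):
    """Raised when a response indicates that retrying cannot succeed."""


class RetryPolicy:
    """Hypothetical sketch of a retry policy that fails fast on 403."""

    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self.attempts = 0

    def should_retry(self, status_code):
        # A 403 means the credentials were rejected (e.g. an invalid access
        # token). Retrying cannot fix that, so raise immediately instead of
        # sleeping through the whole exponential-backoff schedule.
        if status_code == 403:
            raise NonRetryableError(
                "Received 403 Forbidden from server: check your access token"
            )
        # For other (potentially transient) errors, retry until the
        # attempt budget is exhausted.
        self.attempts += 1
        return self.attempts < self.max_attempts
```

With this check in place, an invalid token surfaces as an exception on the first response rather than after the full retry/backoff cycle.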
Related Tickets & Documents
Fixes: databricks.sql.connect hangs in a long retrying loop when an invalid access token is used (#372)