pr-451 with additional tests and some documentation editing #631
Conversation
…ILLED Signed-off-by: John Shahid <jvshahid@gmail.com>
I'm -1 to retry. It's not safe.
DB.SetConnMaxLifetime() solves many problems, including this one. For example, it lets you safely do rolling updates of your DBs behind a proxy.
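A minimal sketch of how that lifetime cap is applied to the pool (the DSN and the 10-second value are placeholders, not values from this thread):

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN; point it at the proxy in front of the cluster.
	db, err := sql.Open("mysql", "user:password@tcp(proxy:3306)/dbname")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Connections older than this are closed instead of being reused, so
	// connections to a node that was restarted behind the proxy are
	// recycled quickly instead of lingering in the pool.
	db.SetConnMaxLifetime(10 * time.Second)
}
```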
Can you elaborate on why you think it is not safe, and also define exactly which action you think is unsafe? /cc @arnehormann for context on #451
See #302.
Or does the MariaDB documentation guarantee that an already-sent query doesn't take any effect? There are many special cases (e.g. queries which aren't transactional, multi-statement queries,
One example I can conceive of is a query killed while doing a filesort (I think MariaDB supports it):
Even if this case is safe to retry, I think we need a documented guarantee about
@methane I had a chat with
Does it make sense to make this an option in the connection string? In our use case the queries are idempotent and won't cause issues on retry.
I don't want to add options only for a specific use case. If your query is idempotent, why not retry at the application layer? Additionally, I recommend
As I explained previously, retrying the query on the client side is not sufficient. We have a pool of
Can you elaborate on how we can use this function to achieve smooth query execution after a server restart?
If you have a short enough MaxLifetime and a long enough sleep between retries, there is no need for 300 retries. Otherwise, you may need up to 300 retries in the end.
It closes all idle connections which were created more than 10 seconds ago. If the application and the MySQL server are on the same LAN (~5ms latency), even 1 second is efficient enough.
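A rough sketch of the retry-with-sleep pattern being suggested, assuming the query is idempotent; the function name, retry count, and timings are illustrative, not from this thread:

```go
package dbutil

import (
	"database/sql"
	"time"
)

// queryWithRetry retries an idempotent query a few times. With a short
// ConnMaxLifetime set on db and a sleep longer than that lifetime, stale
// pooled connections have expired before the next attempt, so the retry
// runs on a fresh connection instead of another dead one.
func queryWithRetry(db *sql.DB, query string, args ...interface{}) (*sql.Rows, error) {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		rows, err := db.Query(query, args...)
		if err == nil {
			return rows, nil
		}
		lastErr = err
		time.Sleep(2 * time.Second) // longer than the ConnMaxLifetime set on db
	}
	return nil, lastErr
}
```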
As was already illustrated, handling this on the driver side with
The question is what alternatives there are. The best I can come up with right now is to return a distinct error type in the driver, which can then be used on the client side to react appropriately, e.g. with an attempt to flush the connection pool (setting
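A sketch of the client-side reaction described above. The distinct error value is hypothetical (no such exported error exists in the driver today), and the idle-connection limits are placeholders:

```go
package dbutil

import (
	"database/sql"
	"errors"
)

// errServerClosedConn stands in for the distinct error the comment proposes
// the driver could return when the server closes the connection; it is a
// hypothetical placeholder, not an existing driver error.
var errServerClosedConn = errors.New("mysql: server closed the connection")

// flushAndRetry sketches the client-side reaction: on that distinct error,
// flush idle connections so stale ones are dropped, then retry once.
func flushAndRetry(db *sql.DB, query string) (*sql.Rows, error) {
	rows, err := db.Query(query)
	if err != nil && errors.Is(err, errServerClosedConn) {
		// An idle limit of 0 closes currently idle connections ("flushing"
		// the pool); then restore the limit the application normally uses.
		db.SetMaxIdleConns(0)
		db.SetMaxIdleConns(10)
		rows, err = db.Query(query)
	}
	return rows, err
}
```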
Description
This is another attempt to get the changes in #451 merged. We have preserved the author of the original commit, made the suggested documentation changes, and added a test case. I will attempt to explain what the issue is trying to fix:
We have a MariaDB cluster with a proxy routing traffic to one node in the cluster. During rolling updates, the active node will send an `ER_CONNECTION_CLOSED` err packet and close the connection. This causes the driver to return `ErrPktSync` to the client. Given that the driver is not returning `ErrBadConn`, golang will not retry the query (or close the connection). This pushes the responsibility of retrying onto the client, which is totally acceptable, except that if there are a few hundred connections open (in our case 300), any retry logic with a reasonable number of retries (in our case 3) isn't sufficient. What will end up happening is that golang will pick another connection from the pool of closed connections, and the client will get another `ErrPktSync`.

I'm happy to talk more about different solutions to the problem we are having.
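For context, the retry behavior being asked for hinges on `database/sql`'s handling of `driver.ErrBadConn`: when a driver returns it, the sql package discards that connection and retries the operation on another, or a freshly opened, connection. Below is a hedged sketch of the kind of driver-side mapping being requested, not the actual diff in this PR; the error-number constant is a placeholder:

```go
package dbutil

import (
	"database/sql/driver"
	"fmt"
)

// maybeBadConn is a sketch only: translate the server's "connection closed"
// error into driver.ErrBadConn so database/sql drops the connection from the
// pool and retries the statement on a different connection.
func maybeBadConn(errno uint16, msg string) error {
	const erConnectionClosed = 0xFFFF // placeholder value, not the real error number
	if errno == erConnectionClosed {
		return driver.ErrBadConn
	}
	return fmt.Errorf("error %d: %s", errno, msg)
}
```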
Checklist