I would like to request a feature (as far as I know this does not exist so far).
At the moment, when replication hostgroups are enabled, the following scenarios can happen:
If two servers accidentally end up with read_only=off, ProxySQL will send writes to both servers immediately.
If we are using a topology manager or in-house scripts, there can be scenarios where a network partition happened and the old master could not be changed to read_only=on. When the network comes back, ProxySQL will send traffic to both servers immediately.
These can cause serious problems.
I do not want ProxySQL to decide which is the right server or to make any logical choices.
What I would like is a configurable parameter that tells ProxySQL to stop sending any writes to the servers if there are "two masters". Something like:
write_multiple_masters=true/false
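Purely as an illustration of what the requested behaviour could look like today, here is a minimal sketch of an external watchdog run against ProxySQL's admin interface: if more than one server is ONLINE in the writer hostgroup, it takes the whole writer hostgroup offline so that no writes go anywhere. The admin port, credentials, and hostgroup id are assumptions for the example, not values from this issue; write_multiple_masters itself does not exist, it is only the parameter proposed above.

```python
# Hypothetical external guard, not an existing ProxySQL feature.
# Assumptions: admin interface on 127.0.0.1:6032 with admin/admin,
# writer hostgroup id 10 -- adjust for your own setup.
import pymysql

WRITER_HG = 10

conn = pymysql.connect(host="127.0.0.1", port=6032,
                       user="admin", password="admin",
                       autocommit=True,
                       cursorclass=pymysql.cursors.DictCursor)
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT hostname, port FROM runtime_mysql_servers "
            f"WHERE hostgroup_id = {WRITER_HG} AND status = 'ONLINE'"
        )
        writers = cur.fetchall()
        if len(writers) > 1:
            # "Two masters" detected: stop sending writes to any of them and
            # leave the decision about the real master to the operator.
            cur.execute("UPDATE mysql_servers SET status = 'OFFLINE_SOFT' "
                        f"WHERE hostgroup_id = {WRITER_HG}")
            cur.execute("LOAD MYSQL SERVERS TO RUNTIME")
            print("multiple writers detected, writer hostgroup disabled:",
                  ["%s:%s" % (w["hostname"], w["port"]) for w in writers])
finally:
    conn.close()
```

A built-in option like the proposed write_multiple_masters would of course avoid the detection window that any external script necessarily has.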
In my opinion and experience, a few minutes of downtime because of this is much more acceptable than starting to write to multiple servers and making the data inconsistent. Recovering from that is much harder and can take a lot of time.
This could save us from a lot of trouble.
Please let me know what you think about this, or if an option like this already exists, please tell me the name of the parameter.
I need this feature too. I also tried to submit PR#2019. According to my test, this can avoid writing to multiple servers.
Writing to multiple servers is very dangerous. The key point is to make ProxySQL keep writing to only one server as long as it is healthy. Even if it is not the right server, recovering the data is still much easier than after writing to multiple servers randomly.
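To make the "keep writing to only one server" point concrete, and without claiming this is how PR#2019 implements it, the sticky-writer rule can be expressed as a tiny pure function: the current writer keeps the role as long as it is healthy, and when there is no single unambiguous healthy candidate, no writer is chosen at all.

```python
# Illustration of the "stick to one writer" idea from this comment only;
# this is NOT the implementation of PR#2019.
def pick_writer(current_writer, candidates, is_healthy):
    """candidates: servers currently reporting read_only=0;
    is_healthy: callable returning True if a server passes health checks."""
    if current_writer in candidates and is_healthy(current_writer):
        return current_writer        # keep writing to the same server
    healthy = [s for s in candidates if is_healthy(s)]
    if len(healthy) == 1:
        return healthy[0]            # a single unambiguous failover target
    return None                      # zero or several candidates: stop writes
```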
We have stumbled upon this as well when implementing ProxySQL; the read_only condition seems too "soft" for us.
Since it is very easy to change a slave to read_only=0, another condition for electing write-eligible nodes could be made available: if there is a slave thread running on a server, it should not be possible to write to that node. This would increase confidence by adding the possibility of another "check".
We have a similar check in our HAProxy setup for MariaDB, but replacing it with ProxySQL will give us many other possibilities, and this would increase our confidence while moving towards that goal.
It should be possible to turn this on/off, as I can imagine scenarios where this is not a good idea.
Something like: write_running_slave=true/false # defaults to true
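As a sketch of what that check amounts to (the hostname and credentials below are placeholders, and write_running_slave is only the name proposed here, not an existing option): a node is treated as write-ineligible whenever SHOW SLAVE STATUS reports a running IO or SQL thread.

```python
# Hypothetical version of the proposed check: a backend is only write-eligible
# if it has no running replication (slave) threads.
import pymysql

def has_running_slave_thread(host, port=3306, user="monitor", password="monitor"):
    """True if SHOW SLAVE STATUS reports a running IO or SQL thread."""
    conn = pymysql.connect(host=host, port=port, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()     # empty result when the node is not a replica
            if not row:
                return False
            return (row.get("Slave_IO_Running") == "Yes"
                    or row.get("Slave_SQL_Running") == "Yes")
    finally:
        conn.close()

# With the proposed write_running_slave=true, a node for which this returns
# True would never be elected as a writer, even with read_only=0.
if has_running_slave_thread("db1.example.com"):
    print("db1 still has a slave thread running -> not write-eligible")
```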
I would write a patch, but this is rather complex and I s*ck at C/C++.