Limit number and lifetime of connections #208
Conversation
I don't think the max open/idle settings are going to do anything. We create a new connection handler for every scrape; there is no connection pooling/recycling involved. The max lifetime could be useful.
Yes, you are right... However, I ran into some issues where I continuously had 10 open idle connections and only a restart of the exporter helped. I looked into the code and couldn't find any obvious bug, so this is more of a precautionary step.
The only values that make sense there are:
The only way to prevent this problem is to use server-side enforcement with `MAX_USER_CONNECTIONS`.
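Server-side enforcement of that kind is a MySQL account resource limit; a sketch (the `exporter` user name is an assumption, syntax per MySQL 5.7+):

```sql
-- Cap the monitoring user at 3 concurrent connections on the server side
ALTER USER 'exporter'@'localhost' WITH MAX_USER_CONNECTIONS 3;
```

With this in place, the server rejects a fourth concurrent connection from that user regardless of what any client does.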
They all make sense if you want to avoid dev bugs affecting production, like this one: f3fb793
I read through the relevant bit of the DB driver; it seems like it will keep one connection open for up to 5 minutes before closing it when you set `SetConnMaxLifetime(5 * time.Minute)`.
Prometheus clients must be designed such that there may be more than one server polling them simultaneously.
There should be only one connection in use at a time. Also, there shouldn't be a need for a program to have an external requirement (or suggestion) to set `MAX_USER_CONNECTIONS`. So, if we consider that the exporter only uses one connection at a time, what would you say about per-connection limits and a global lifetime?
Because of the way we intend the exporter to work, we explicitly do want to open a new connection for every scrape. This allows each scrape to act as a full test of the MySQL server; otherwise you will get misleading results. Setting the per-connection limits as you propose is fine.
mysqld_exporter.go (Outdated)

db.SetMaxIdleConns(3)
// close idle connections after max lifetime
db.SetConnMaxLifetime(5 * time.Minute)
// by design exporter should use maximum one connection at any given time
Please capitalize the sentence and end it with a period. Also, maybe "one connection per request" might be more correct.
LGTM!
Right now we don't have control over the number of concurrent connections. We can run out of MySQL connections if multiple clients are scraping the exporter at the same time. I'm 👍 on avoiding multiple concurrent connections. Please correct me if I misunderstood something @SuperQ 🙇
@giaquinti There is no way to avoid multiple concurrent opens; this change is a noop for the number of connections, it only limits the connection lifetime. By definition, the way Prometheus works is that there may be more than one server asking for metrics from a target simultaneously. Every new HTTP request for metrics opens a new connection. This is the only way it can work without breaking the required operational semantics.
I might be missing something, but why can't we use a global connection limit? I think this exporter is really useful and I don't want to limit its usage to Prometheus scraping only. On the other hand, I can't increase the number of concurrent MySQL connections linearly with the number of concurrent HTTP requests to make it work 😞
The reason is Prometheus does not work that way.
The only proper way to fix this is to adjust `max_user_connections` on the MySQL server side.
I still don't get how we can support simultaneous scrapes if their number is directly dependent on an external setting not related to this tool. I'm also not sure if the "full test of a MySQL server" should be a requirement of the exporter. Anyway, I don't think a global limit is a bad idea.
I'm trying to hunt down some leaking-connection issues. While this doesn't nail the issue, it's something I was surprised not to see in the exporter.
If the user forgets to limit the number of connections on the DB side, we still won't create hundreds of them. And even if the user sets
MAX_USER_CONNECTIONS 3
the golang driver won't even try to create a new connection beyond the limit. Also,
SetConnMaxLifetime
gets rid of idle and stale/broken connections. Let me know what you think.
https://golang.org/pkg/database/sql/#DB.SetMaxIdleConns
https://golang.org/pkg/database/sql/#DB.SetMaxOpenConns
golang/go@0c516c1