
Investigate possibility of tools modifying persisted blocks #5

Closed
fabxc opened this issue Mar 14, 2017 · 14 comments


@fabxc
Contributor

fabxc commented Mar 14, 2017

There are various use cases for transforming, moving, or otherwise reworking older data, for example downsampling it or shipping it off to long-term storage (LTS). There are many ways to build such tools, and ideally they won't be a concern of the core tsdb.

In theory, tsdb should act sufficiently atomically on all file systems we aim to support that an external tool could do those things out of band. We would need a way to toggle compaction (cf. #4) and to trigger a reload of the file system state in applications using tsdb.

@krasi-georgiev
Contributor

@fabxc @Bplotka do you think this issue is now addressed with Thanos?
Not sure if there is an actual CLI in Thanos for this, but it seems to solve the particular problem described here.

@bwplotka
Contributor

Not really.

If I understand this right, we are talking about the multiple-readers, single-writer (or even multiple-writers!) problem on TSDB blocks in the filesystem.

This is related to thanos-io/thanos#206.

Because that issue is still open, we cannot use local compaction: we have no idea when Prometheus TSDB starts compaction and removes blocks while we are potentially in the process of uploading them to object storage.

This can be mitigated by the Snapshot API, but I think it requires enabling a Prometheus flag (the Admin API?). This is not ideal, but at least we have a workaround.
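For illustration, a minimal sketch of that snapshot workaround, assuming a Prometheus server on localhost:9090 started with `--web.enable-admin-api`; the endpoint and response shape are the Admin API's TSDB snapshot call, while the address and error handling are placeholders:

```go
// Hypothetical helper, not part of tsdb or Thanos: ask Prometheus for a TSDB
// snapshot via the Admin API so its blocks can be uploaded out of band while
// the live data directory keeps compacting.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Requires Prometheus to run with --web.enable-admin-api (assumed here).
	resp, err := http.Post("http://localhost:9090/api/v1/admin/tsdb/snapshot", "application/json", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The Admin API returns the snapshot directory name; the snapshot itself
	// lives under <data-dir>/snapshots/<name>.
	var out struct {
		Status string `json:"status"`
		Data   struct {
			Name string `json:"name"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("snapshot ready under <data-dir>/snapshots/%s\n", out.Data.Name)
}
```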

Without this workaround, we could add endpoints to turn off local compaction for a short time... not sure. It might be smarter to just stick to the workaround unless it causes more problems.

Any thoughts @fabxc @gouthamve ?

@bwplotka
Contributor

bwplotka commented Aug 31, 2018

I am fine with closing this for now; we can reopen it once we have a clear indication that this is a serious problem (i.e. the workaround is not sufficient).

@krasi-georgiev
Contributor

krasi-georgiev commented Aug 31, 2018

No, let's keep it open, as there are obviously some blockers here.

@krasi-georgiev
Contributor

@bwplotka tsdb already has an API to disable/enable compactions:
https://github.com/prometheus/tsdb/blob/407e12d051f7907e1bde47ccd59258eedbd10715/db.go#L743

So how about the following (sketched in code after the list):

  • disable compaction
  • upload the blocks needed
  • enable compaction
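A rough sketch of that flow, assuming an already-open *tsdb.DB; DisableCompactions/EnableCompactions are the methods linked above, while uploadBlockDirs is a hypothetical stand-in for whatever actually ships the block directories to object storage:

```go
package blockupload

import "github.com/prometheus/tsdb"

// uploadBlockDirs is a hypothetical placeholder for the external tool's upload
// logic, e.g. copying the block directories under dataDir to an object store.
func uploadBlockDirs(dataDir string) error { return nil }

// uploadWithCompactionPaused pauses the background compaction loop so blocks
// are not merged or deleted underneath us while they are being uploaded, then
// re-enables compaction afterwards.
func uploadWithCompactionPaused(db *tsdb.DB, dataDir string) error {
	db.DisableCompactions()
	defer db.EnableCompactions()
	return uploadBlockDirs(dataDir)
}
```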

@bwplotka
Contributor

🎉 Now the question is: when will Prometheus expose it as an API endpoint? (Or is that done already?)

@krasi-georgiev
Contributor

If you think this will work for you, I can open a PR.

@bwplotka
Contributor

That will definitely fix this issue!

@brancz
Contributor

brancz commented Nov 27, 2018

I’m not sure it’s that easy; we typically don’t have endpoints that mutate the configuration of the running process. I’m just wary of new use cases that could be argued for if we added this. Don’t get me wrong, I have use cases for this as well; I’m just trying to make sure we’re not making a mistake here.

@krasi-georgiev
Contributor

Adding it will be very simple, so why not open the PR and build a Docker image so you can test it, and we'll take it from there.

@brancz
Contributor

brancz commented Nov 27, 2018

I didn’t mean code-wise, I meant accepting such functionality maintenance-wise :)

@gouthamve
Collaborator

Yup, why can't you do this via the Snapshot API?

@bwplotka
Contributor

> I didn’t mean code-wise, I meant accepting such functionality maintenance-wise :)

Yup, hard to justify it.

As I said at the very beginning, yup, we can use the Snapshot API for it, as well as a compaction-disable API (if there were one), so it's not blocking. (:

@krasi-georgiev
Contributor

I think we can close this one, as it overlaps quite a lot with what we are discussing in #346.
