TSDB: Support GET and DELETE and doc versioning #82633
Conversation
This adds support for GET and DELETE and the ids query and Elasticsearch's standard document versioning to TSDB. So you can do things like:

```
POST /tsdb_idx/_doc?filter_path=_id
{
  "@timestamp": "2021-12-29T19:25:05Z",
  "uid": "adsfadf",
  "v": 1.2
}
```

That'll return `{"_id" : "22d7YQAAAABoMqcHfgEAAA"}` which you can turn around and fetch with

```
GET /tsdb_idx/_doc/22d7YQAAAABoMqcHfgEAAA
```

just like any other document in any other index. You can delete it too! Or fetch it.

The ID comes from the dimensions and the `@timestamp`. So you can overwrite the document:

```
POST /tsdb_idx/_bulk
{"index": {}}
{"@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2}
```

Or you can write only if it doesn't already exist:

```
POST /tsdb_idx/_bulk
{"create": {}}
{"@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2}
```

This works by generating an id from the dimensions and the `@timestamp` when parsing the document. The id looks like:

* 4 bytes of hash from the routing dimensions
* 4 bytes of hash from the non-routing dimensions
* 8 bytes of timestamp

All of that is base 64 encoded so that `Uid` can chew on it fairly efficiently. (There's a rough sketch of this layout in code at the end of this description.) When it comes time to fetch or delete documents we base 64 decode the id and grab the hash from the routing dimensions. We use that hash to pick the shard. Then we use the entire ID to perform the fetch or delete.

We don't implement update actions because we haven't written the infrastructure to make sure the dimensions don't change. It's possible to do, but it feels like more than we need now.

There are a *ton* of compromises with this. The long term sad thing is that it locks us into *indexing* the id of the sample. It'll index fairly efficiently because each time series will have the same first eight bytes. It's also possible we'd share many of the first few bytes of the timestamp as well. So, if we're lucky, we're really only paying, say, six bytes per document for this. But that's six bytes we can't give up easily.

In the short term there are lots of problems that I'd like to save for a follow up change:

1. We still generate the automatic `_id` for the document but we don't use it. We should stop generating it.
2. We generate the time series `_id` on each shard and when replaying the translog. It'd be the good kind of paranoid to generate it once on the primary and then keep it forever.
3. We have to encode the `_id` as a string to pass it around Elasticsearch internally. And Elasticsearch assumes that when an id is loaded we always store it as bytes encoded by `Uid` - which *does* have a nice encoding for base 64 bytes. But this whole thing requires us to make the bytes, base 64 encode them, and then hand them back to `Uid` to base 64 decode them into bytes. It's a bit hacky. And, it's a small thing, but if the first byte of the routing hash encodes to 254 or 255 then `Uid` spends an extra byte to encode it. One that'll always be a common prefix for tsdb indices, but still, it hurts my heart. It's just hard to fix.
4. We store the `_id` in tsdb indices. Now that we're building it from the dimensions and the `@timestamp` we really don't *need* to store it. We could recalculate it when fetching documents. This could save us a few bytes of storage per document. 6? 10? I dunno, it depends how well the compression of stored fields manages.
5. There are several error messages that try to use `_id` right now during parsing but the `_id` isn't available until after the parsing is complete. And, if parsing fails, it may not be possible to know the id at all. All of these error messages will have to change, at least in tsdb mode.

I've had to make some changes as part of this that don't feel super expected. The biggest one is changing `Engine.Result` to include the `id`. When the `id` comes from the dimensions it is calculated by the document parsing infrastructure, which happens in `IndexShard#prepareIndex`, which returns an `Engine.IndexResult`. To make everything clean I made it so `id` is available on all `Engine.Result`s and I made all of the "outer results classes" read from `Engine.Result#id`. Another option, I think, would have been to change the results objects produced by `IndexShard` into new objects that have the `id` in them. This may very well be the right thing to do. I'm not sure. Another option would have been to do a pass over the data to get the `id` first and then another to get the data. That feels like overkill though.

I've had to change the routing calculation for tsdb indices from something clever to something a little simpler to calculate from the parsed values. It's *possible* to keep the old routing algorithm, but it'd be complex and, frankly, the old algorithm felt a little too clever on reread and I didn't want to try to back into it. Another option would have been to store the routing as calculated on the coordinating node and read it on the primary when making the id. This felt pretty heavy. We'd have to add it to the `IndexRequest` or fake it into the `routing` field of the index request. Both just feel silly when the bytes are already available and already parsed; we just have to hash them.

I've opted to create two subclasses of `IdFieldMapper`, one for standard indices and one for tsdb indices. This feels like the right way to introduce the distinction, especially if we don't want tsdb to carry around its old fielddata support. Honestly, if we *need* to aggregate on `_id` in tsdb mode we have doc values for the `_tsid` and the `@timestamp` - we could build doc values for `_id` on the fly. But I'm not expecting folks will need to do this. Also! I'd like to stop storing tsdb's `_id` field (see number 4 above) and the new subclass feels like a good place to put that too.
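Here's the rough sketch of the id layout mentioned above, just to make the byte math concrete. It's Java, it's illustrative only, and the hash values, the helper name, and the byte order are made up for this sketch - it's not the actual mapper code, which differs in detail.

```java
import java.nio.ByteBuffer;
import java.util.Base64;

// Illustrative only: pack the two dimension hashes and the @timestamp into
// 16 bytes and base 64 encode them. The class, the byte order, and the
// sample values are assumptions for this sketch, not the real implementation.
public class TsdbIdSketch {
    static String composeId(int routingHash, int nonRoutingHash, long timestampMillis) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putInt(routingHash);      // 4 bytes of hash from the routing dimensions
        buf.putInt(nonRoutingHash);   // 4 bytes of hash from the non-routing dimensions
        buf.putLong(timestampMillis); // 8 bytes of timestamp
        return Base64.getUrlEncoder().withoutPadding().encodeToString(buf.array());
    }

    public static void main(String[] args) {
        // Hypothetical hash values for a document in the "adsfadf" time series
        System.out.println(composeId(0x1234abcd, 0x9876fedc, 1640805905000L));
    }
}
```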
run elasticsearch-ci/part-1
Pinging @elastic/es-analytics-geo (Team:Analytics)
if (entry.getValue().isRoutingDimension()) {
    routingHash = 31 * routingHash + thisHash;
} else {
    nonRoutingHash = 31 * nonRoutingHash + thisHash;
}
I talked with @henningandersen about this one and he linked me to https://preshing.com/20110504/hash-collision-probabilities/ . We had some brainstorming. In the worst case the `nonRoutingHash` and the `routingHash` are entirely correlated - imagine we route on `data_center` and use `ip` as the only other dimension. It's not great, but it could happen. In that case the only things saving us from unexpected `_id` collisions are the `nonRoutingHash` and the timer. And some folks are going to have garbage resolution timers.

So in the worst case the odds of an `_id` collision are entirely based on the odds of a collision on `nonRoutingHash`. And that's a birthday problem. Or, so says the link Henning shared. I buy that. That link has a handy table. The odds of any two hashes colliding, assuming this hashing algorithm is perfect, depend on the number of unique tsids and the number of bits in the hash. For the 32 bit hash I'm using here it takes about 30,000 unique tsids to get about 1:10 odds of any two colliding. That seems bad.

So I'll have to think more here. A 64 bit hash would be a lot better - 1:10 odds takes 1.97 billion unique tsids. Maybe that'd be enough.
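For anyone who wants to double check those numbers, here's a back-of-the-envelope sketch using the usual birthday-problem approximation p ≈ 1 - e^(-n(n-1)/2N). It's purely illustrative and assumes a perfectly uniform hash; the class and method names are made up for the sketch.

```java
// Back-of-the-envelope check of the collision odds quoted above.
public class HashCollisionOdds {
    static double collisionProbability(double uniqueTsids, double hashBits) {
        double space = Math.pow(2, hashBits); // number of distinct hash values
        return 1 - Math.exp(-uniqueTsids * (uniqueTsids - 1) / (2 * space));
    }

    public static void main(String[] args) {
        // ~10% chance of at least one collision with a 32 bit hash and 30,000 tsids
        System.out.printf("32 bit hash, 30,000 tsids:       %.3f%n", collisionProbability(30_000, 32));
        // ~10% chance with a 64 bit hash takes roughly 1.97 billion tsids
        System.out.printf("64 bit hash, 1.97 billion tsids: %.3f%n", collisionProbability(1.97e9, 64));
    }
}
```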
Instead of a hash, it'd be nice if we could use a lookup table to sequentially assign the tsids. Lucene already has a lookup table for global ordinals, but global ordinals are an expensive query time thing. It isn't really for this. It's in the wrong place and has the wrong constraints. We'd have to build something else. And that'd need replication semantics and all that jazz. So, possible, but I think the path of least resistance is to go with a 64 bit hash and live with the rare chance of collisions for a while. I'd love to replace it with something fancier, but that's a bigger project I think.
 * stored, but we need to keep it so that its FieldType can be used to generate
 * queries.
 */
public class TsdbIdFieldMapper extends IdFieldMapper {
Maybe we should call this TimeSeriesIdFieldMapper to keep it consistent?
Sure.
A `TimeSeriesIdFieldMapper` class already exists. It generates the `_tsid`. Naming is hard :(
Imagine my confusion when I tried to rename the class to that and it didn't work. I tried like three times before reading the error message. So I did rename the class, but I'm not super happy with the names. I'm happy to take suggestions.
I like the `TimeSeriesModeIdFieldMapper` approach. Maybe I would rename `StandardIdFieldMapper` to `StandardModeIdFieldMapper`. I know it looks too long, but it's more consistent.

Alternatively, I'd also like `TimeSeriesIndexIdFieldMapper` vs `DefaultIdFieldMapper` (I prefer `default` to `standard`).
I see that it failed in the `TsdbIdFieldMapper.postParse` method, at the line below:
I'm thinking hard about the `index.routing_path` setting. You added a new setting - is it needed to make the `index.routing_path` fields different from the dimension fields? As for how to deal with the initial dynamic mapping problem, I have an initial idea. And if the user has custom routing, they could use `index.routing_path` to configure the routing field; the routing field must be configured to support custom routing search:
Ooof. I see it, yeah. Our dots thing. And it lines up with the concerns you had earlier. Got it. I'll have a think.....
I think what you are proposing sounds a lot like my initial try. Like you said, the issue was dynamic mappings. And I think what you are proposing isn't something we want on the coordinating node because it's too heavy. That's most of the parsing stage. I really would like to replace … I don't feel too bad about mixing the …

Now! I see you are concerned about supporting explicit routing for tsdb data - that's something I'd mostly hoped we wouldn't have to do. My hope was that whatever folks were doing with custom routing before, they'd use a dimension field for in the future. I think we should talk more about that. Would it be ok to use a dimension to get similar behavior? Or are you relying on the grouping aspect more strongly?
I'm a little confused about this question. Does this mean that the routing should be one dimension and not many dimensions? tsdb not supporting `index.routing_partition_size` is not a problem for us, as we can increase the frequency of rollover to make …

I think the …

How to handle custom routing requirements? I thought of two ways:
Mixed clusters are not compatible.
@elasticmachine, update branch
@elasticmachine update branch
The failed release tests are #84698 - we'll ignore them for now.
@elasticmachine update branch
Sorry for the delay. LGTM. Left a few naming suggestions to clarify intent of some methods and classes for future readers.
assert autoGeneratedTimestamp == UNSET_AUTO_GENERATED_TIMESTAMP : "timestamp has already been generated!";
assert ifSeqNo == UNASSIGNED_SEQ_NO;
assert ifPrimaryTerm == UNASSIGNED_PRIMARY_TERM;
autoGeneratedTimestamp = Math.max(0, System.currentTimeMillis()); // extra paranoia
Huh?
It's copied from above. I think the paranoia is about never setting it to a negative number. I'll dig a little and leave a better message.
String uid = UUIDs.base64UUID();
id(uid);
}

public void autoGenerateId() {
This is a bit spooky. I think it might be cleaner if we moved id generation into `IndexRouting` with clear assignment, and generated the timestamp in a separate method called accordingly.
Yeah. I feel like it's better as a method on `IndexRequest` because it's mutating the guts of the `IndexRequest`. The whole `process` chain is a little spooky to be honest though.
 * stored, but we need to keep it so that its FieldType can be used to generate
 * queries.
 */
public class StandardIdFieldMapper extends IdFieldMapper {
Maybe we should call it NoopIdFieldMapper to better reflect what it does (nothing) instead of where it is currently used (standard index mode).
It exposes the `_id` for fetching and querying and sometimes provides field data for it. It isn't really a noop.
`NonGeneratingIdFieldMapper`? 😄
Now you are trolling! But, yeah, I don't like the name either. I'm saving this for a follow up.
Sorry, I wasn't trying to troll, I am just trying to brainstorm here. How about `ExistingIdFieldMapper`, `ProvidedIdFieldMapper`, or `SuppliedIdFieldMapper`?
I was sort of trolling. I know it's just brainstorming. `ProvidedIdFieldMapper` seems better. But, like, it has the fielddata stuff too.
The big difference is that it expects the `_id` to be generated by the coordinating node or handed off to us. And it can make fielddata.
We went with `ProvidedIdFieldMapper`. It isn't the best name, but it's better than "standard". It gives more of a hint about what makes it unique. Naming is hard.
 * stored, but we need to keep it so that its FieldType can be used to generate
 * queries.
 */
public class TimeSeriesModeIdFieldMapper extends IdFieldMapper {
Same here. Maybe we can rename this to something like `TsidExtractingIdFieldMapper`.
This I can get behind.
run elasticsearch-ci/release-tests
Congratulations! It finally merged!
Thanks! Getting something like this in does feel like an accomplishment! Now to do all the follow up changes I promised!
This adds support for GET and DELETE and the ids query and Elasticsearch's standard document versioning to TSDB. So you can do things like `POST /tsdb_idx/_doc?filter_path=_id` with a tsdb document. That'll return `{"_id" : "BsYQJjqS3TnsUlF3aDKnB34BAAA"}` which you can turn around and fetch with a standard GET, just like any other document in any other index. You can delete it too! Or fetch it.

The ID comes from the dimensions and the `@timestamp`. So you can overwrite the document with a bulk `index` action, or write only if it doesn't already exist with a bulk `create` action.

This works by generating an id from the dimensions and the `@timestamp` when parsing the document. The id is base 64 encoded so that `Uid` can chew on it fairly efficiently.

When it comes time to fetch or delete documents we base 64 decode the id and grab the routing from the first four bytes (there's a rough sketch of this step at the end of this description). We use that hash to pick the shard. Then we use the entire ID to perform the fetch or delete.

We don't implement update actions because we haven't written the infrastructure to make sure the dimensions don't change. It's possible to do, but feels like more than we need now.

There are a ton of compromises with this. The long term sad thing is that it locks us into indexing the id of the sample. It'll index fairly efficiently because each time series will have the same first eight bytes. It's also possible we'd share many of the first few bytes of the timestamp as well. In our tsdb rally track this costs 8.75 bytes per document. It's substantial, but not overwhelming.

In the short term there are lots of problems that I'd like to save for a follow up change:

1. We still generate the automatic `_id` for the document but we don't use it. We should stop generating it. (Included in this PR based on review comments.)
2. We generate the time series `_id` on each shard and when replaying the translog. It'd be the good kind of paranoid to generate it once on the primary and then keep it forever.
3. We have to encode the `_id` as a string to pass it around Elasticsearch internally. And Elasticsearch assumes that when an id is loaded we always store it as bytes encoded by `Uid` - which does have a nice encoding for base 64 bytes. But this whole thing requires us to make the bytes, base 64 encode them, and then hand them back to `Uid` to base 64 decode them into bytes. It's a bit hacky. And, it's a small thing, but if the first byte of the routing hash encodes to 254 or 255 then `Uid` spends an extra byte to encode it. One that'll always be a common prefix for tsdb indices, but still, it hurts my heart. It's just hard to fix.
4. We store the `_id` in Lucene stored fields for tsdb indices. Now that we're building it from the dimensions and the `@timestamp` we really don't need to store it. We could recalculate it when fetching documents. In the tsdb rally track this'd save us 6 bytes per document at the cost of marginally slower fetches. Which is fine.
5. There are several error messages that try to use `_id` right now during parsing but the `_id` isn't available until after the parsing is complete. And, if parsing fails, it may not be possible to know the id at all. All of these error messages will have to change, at least in tsdb mode.
6. If you specify an `_id` on the request right now we just overwrite it. We should send you an error. (Included in this PR after review comments.)
7. Because the `_id` is no longer auto generated we can't let Elasticsearch skip looking up the ids in lucene. This halves indexing speed. It's substantial. We have to claw that optimization back somehow. Something like sliding bloom filters or relying on the increasing timestamps.
8. Right now we build the routing hash while parsing fields. We should just build it from the parsed field values. It looks like that'd improve indexing speed by about 20%.
9. We encode the `@timestamp` little endian. This is likely bad for the prefix encoded inverted index. It'll prefer big endian. Might shrink it.
10. There's follow up work around the `_id` in `RecoverySourceHandlerTests.java` and `EngineTests.java`.

I've had to make some changes as part of this that don't feel super expected. The biggest one is changing `Engine.Result` to include the `id`. When the `id` comes from the dimensions it is calculated by the document parsing infrastructure, which happens in `IndexShard#prepareIndex`, which returns an `Engine.IndexResult`. To make everything clean I made it so `id` is available on all `Engine.Result`s and I made all of the "outer results classes" read from `Engine.Result#id`. I'm not excited by it. But it works and it's what we're going with.

I've opted to create two subclasses of `IdFieldMapper`, one for standard indices and one for tsdb indices. This feels like the right way to introduce the distinction, especially if we don't want tsdb to carry around its old fielddata support. Honestly, if we need to aggregate on `_id` in tsdb mode we have doc values for the `_tsid` and the `@timestamp` - we could build doc values for `_id` on the fly. But I'm not expecting folks will need to do this. Also! I'd like to stop storing tsdb's `_id` field (see number 4 above) and the new subclass feels like a good place to put that too.
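And here's the promised sketch of that fetch/delete routing step: decode the id, read the leading routing hash, and map it to a shard. It's illustrative only - the class name and the modulo-based shard choice are assumptions for the sketch, and the real logic lives in `IndexRouting`, not here.

```java
import java.nio.ByteBuffer;
import java.util.Base64;

// Sketch only: recover the routing hash from the first four bytes of the
// decoded _id and map it to a shard. The plain modulo is an assumption for
// illustration, not the actual IndexRouting implementation.
public class TsdbIdRoutingSketch {
    static int shardFor(String id, int numberOfShards) {
        byte[] decoded = Base64.getUrlDecoder().decode(id);
        int routingHash = ByteBuffer.wrap(decoded, 0, 4).getInt();
        return Math.floorMod(routingHash, numberOfShards);
    }
}
```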