Currently, there is an ongoing in-memory translog (per shard) holding all the operations performed between flushes. A memory monitor checks for memory pressure and triggers a flush to clear it (on the most expensive shards), and there is also an automatic flush every N (5000) operations. Still, this memory could be put to better use than the translog.
A file-system-based translog will hold the changes in a file. There is no need to flush / fsync it, since it is not really used for full-shutdown recovery (the gateway is there for that), so it should be fast enough.
It should also be used as the default for a better out-of-the-box experience.
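The idea above can be sketched as a minimal append-only log. This is an illustrative Python sketch, not the actual implementation: the class and method names are hypothetical, and it only demonstrates the key design point that operations are appended without fsync and the log is truncated on flush.

```python
import json


class FileTranslog:
    """Minimal sketch of a per-shard, file-backed translog.

    Operations are appended to a plain file between flushes. No fsync is
    issued, because full-shutdown recovery goes through the gateway, not
    this log, so OS-level buffering is acceptable.
    """

    def __init__(self, path):
        self.path = path
        self._file = open(path, "a", encoding="utf-8")

    def add(self, op):
        # Append one operation as a JSON line; deliberately skip fsync
        # for speed.
        self._file.write(json.dumps(op) + "\n")

    def flush(self):
        # A flush (e.g. after N operations) truncates the log, since the
        # operations are now persisted in the index itself.
        self._file.close()
        self._file = open(self.path, "w", encoding="utf-8")

    def operations(self):
        # Replay the operations recorded since the last flush.
        self._file.flush()
        with open(self.path, encoding="utf-8") as f:
            return [json.loads(line) for line in f]
```

A flush here mirrors the memory-based behavior: everything logged since the previous flush is dropped once the index has absorbed it.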
With this commit we add a new parameter `placement` to the race config
file that allows specifying per track which benchmark machine should be
targeted when the number of targeted machines is less than the number of
available ones. The motivation for this change is that single-node
benchmarks have historically always targeted the first machine in the
pool, leading to frequent disk failures. If we spread the load across
machines, the expected lifetime of the disks is more evenly distributed.
By only allowing the placement to be specified at the top level (i.e.
per track), we ensure that charts for the same track remain consistent
and don't suffer from machine-to-machine variation.
In this commit we also modify the race configurations for the existing
nightly benchmarks. We have assigned the placement roughly based on the
expected write load. It is not possible to spread the load completely
evenly, because some tracks dominate I/O load (for example `nyc_taxis`
for group-1), but spreading the load somewhat is still preferable to
having the disk on the first target machine always fail first.
Relates elastic#259
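A top-level placement assignment as described could look like the following hypothetical fragment of a race config file. The surrounding structure and field names other than `placement` are illustrative only; the actual file layout may differ. The value is the index of the targeted machine in the pool, so `0` means the first machine:

```json
{
  "tracks": [
    { "track": "nyc_taxis", "placement": 0 },
    { "track": "geonames",  "placement": 1 }
  ]
}
```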