Can't free up space because there's not enough space? ;) #9260
One of my nodes is 'low' on space, down to ~16 GB.
So I tried to run curator to remove older logs and I got the following message:

The actual error from curator is:
One thing I forgot to mention is that the nodes use ZFS for their storage... Maybe the error about free bytes exceeding total bytes is somehow related to ZFS compression?
The workaround:
Related to #9249, the workaround there will work for this also until it is fixed.
Thanks for the pointer!
dakrone added a commit to dakrone/elasticsearch that referenced this issue on Jan 26, 2015:

> Apparently some filesystems such as ZFS and occasionally NTFS can report filesystem usages that are negative, or above the maximum total size of the filesystem. This relaxes the constraints on `DiskUsage` so that an exception is not thrown. If 0 is passed as the totalBytes, `.getFreeDiskAsPercentage()` will always return 100.0% free (to ensure the disk threshold decider fails open).
> Fixes elastic#9249
> Relates to elastic#9260
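For illustration, here is a minimal sketch of the relaxed behaviour that commit describes, using a simplified stand-in for the `DiskUsage` class (class and field names here are placeholders, not the actual Elasticsearch source): construction no longer validates the reported values, and an unknown total size (0) is reported as 100% free.

```java
/**
 * Minimal sketch of the relaxed DiskUsage behaviour described above.
 * Not the actual Elasticsearch class; names and fields are simplified.
 */
public class DiskUsageSketch {
    private final String nodeId;
    private final long totalBytes;
    private final long freeBytes;

    public DiskUsageSketch(String nodeId, long totalBytes, long freeBytes) {
        // No validation here: ZFS (and occasionally NTFS) can report free space
        // that is negative or larger than the total, so constructing this
        // object must not throw.
        this.nodeId = nodeId;
        this.totalBytes = totalBytes;
        this.freeBytes = freeBytes;
    }

    /** Reports 100.0% free when the total size is unknown (0), so callers fail open. */
    public double getFreeDiskAsPercentage() {
        if (totalBytes == 0) {
            return 100.0;
        }
        return 100.0 * ((double) freeBytes / totalBytes);
    }

    public double getUsedDiskAsPercentage() {
        return 100.0 - getFreeDiskAsPercentage();
    }
}
```

Reporting 100% free for an unknown total means callers treat the node as having plenty of space instead of rejecting operations with an exception, which is the "fail open" behaviour the commit message refers to.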
dakrone added a commit that referenced this issue on Jan 26, 2015, with the same commit message (Fixes #9249, Relates to #9260).
dakrone added another commit that referenced this issue on Jan 26, 2015, with the same commit message.
dakrone added a further commit that referenced this issue on Jan 26, 2015, with the same commit message; this one notes merge conflicts in src/main/java/org/elasticsearch/cluster/DiskUsage.java and src/test/java/org/elasticsearch/cluster/DiskUsageTests.java.
mute pushed a commit to mute/elasticsearch that referenced this issue on Jul 29, 2015, with the same commit message.
mute pushed a second commit to mute/elasticsearch that referenced this issue on Jul 29, 2015, with the same commit message and the same merge-conflict note.
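To make the "fails open" wording concrete, here is a hypothetical, self-contained sketch (not the actual disk threshold decider) of a watermark check driven by that free-disk percentage; the watermark value, method names, and example numbers are assumptions for illustration only.

```java
/**
 * Hypothetical sketch of what "fails open" means for a disk-watermark check:
 * when the total size is unknown (reported as 0), free space is treated as
 * 100%, so the check never blocks the operation. Not the Elasticsearch
 * DiskThresholdDecider; names and values are illustrative.
 */
public class FailOpenThresholdSketch {

    /** Free-disk percentage, mirroring the relaxed DiskUsage behaviour above. */
    static double freeDiskAsPercentage(long freeBytes, long totalBytes) {
        if (totalBytes == 0) {
            return 100.0; // unknown total -> report fully free
        }
        return 100.0 * ((double) freeBytes / totalBytes);
    }

    /** True if free space is below a low watermark given as a percentage, e.g. 15.0. */
    static boolean belowWatermark(long freeBytes, long totalBytes, double lowWatermarkFreePercent) {
        return freeDiskAsPercentage(freeBytes, totalBytes) < lowWatermarkFreePercent;
    }

    public static void main(String[] args) {
        // Bogus filesystem report: total bytes come back as 0 -> never below the watermark.
        System.out.println(belowWatermark(16L << 30, 0L, 15.0));          // false: check fails open
        // Normal report: 16 GiB free out of 500 GiB total (~3.2% free) -> below the watermark.
        System.out.println(belowWatermark(16L << 30, 500L << 30, 15.0));  // true: check would trigger
    }
}
```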