Many NoSQL databases can produce responses in less than a millisecond. When this happens, YCSB places the result in the 0 millisecond slot of the histogram array and increments the operations variable, but the accumulated latency is not increased (since the latency is reported as zero). As a result, the data for some datastores is skewed. I suggest using System.nanoTime() and dividing by 1000 to get the time in microseconds. The histogram will still look the same, but the reported average latency will be much more accurate. The documentation could then be updated to say that when reading the histogram, each slot covers the range between that slot's time and the next slot's time. For example, if 1000 operations were recorded in the 0 slot of the histogram, we would interpret that as "1000 operations completed in less than 1 millisecond, but greater than 0 milliseconds."
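Here is a minimal sketch of what I have in mind, assuming a YCSB-style per-millisecond histogram; the class and field names (`BUCKETS`, `measure`, `totalLatencyUs`) are illustrative, not the actual YCSB API:

```java
// Sketch: keep microsecond precision in the sum while bucketing by millisecond.
public class MicrosecondHistogram {
  private static final int BUCKETS = 1000; // hypothetical bucket count
  private final long[] histogram = new long[BUCKETS];
  private long totalLatencyUs = 0; // running sum in microseconds
  private long operations = 0;

  public void measure(long latencyUs) {
    // Bucket index is still the whole-millisecond value,
    // so the histogram shape is unchanged...
    int bucket = (int) Math.min(latencyUs / 1000, BUCKETS - 1);
    histogram[bucket]++;
    // ...but the sum keeps microsecond precision, so sub-millisecond
    // operations no longer contribute zero to the average.
    totalLatencyUs += latencyUs;
    operations++;
  }

  public double averageLatencyUs() {
    return operations == 0 ? 0.0 : (double) totalLatencyUs / operations;
  }
}
```

At the call site, the measurement would simply become:

```java
long start = System.nanoTime();
// ... perform the database operation ...
long latencyUs = (System.nanoTime() - start) / 1000;
hist.measure(latencyUs);
```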
Also, for machines whose clocks cannot produce time results at microsecond resolution, we could log a warning that this granularity is not supported. I can submit a patch for this if you are open to the idea.
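The granularity check could look something like the following; the 1 microsecond threshold and the warning text are just placeholders:

```java
// Sketch: probe the smallest observable increment of System.nanoTime()
// and warn once at startup if it is coarser than one microsecond.
long minTickNs = Long.MAX_VALUE;
for (int i = 0; i < 1000; i++) {
  long t1 = System.nanoTime();
  long t2 = System.nanoTime();
  while (t2 == t1) { // spin until the clock advances
    t2 = System.nanoTime();
  }
  minTickNs = Math.min(minTickNs, t2 - t1);
}
if (minTickNs > 1_000) {
  System.err.println("WARNING: clock granularity is ~" + minTickNs
      + " ns; microsecond latency resolution is not supported on this machine.");
}
```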