[graph] Use 0 as the minimum memory/time in the color scale. #437
Conversation
@chihuahua Mind having a look at this?
Just passing by, but…

The cleanest way is probably to take advantage of the fact that:

```ts
const micros = this.hierarchy.getNodeMap().map(node => node.micros);
const minMicros = Math.min(...micros); // ES5: Math.min.apply(null, micros)
const maxMicros = Math.max(...micros); // ES5: Math.max.apply(null, micros)
```

Just for fun: if by “more efficient” you mean not “cleaner code” but “fewest comparisons,” then you will need to compute the min and max simultaneously. This requires (3/2)n − 2 comparisons (for even n) and is provably optimal. Here is a simple algorithm:
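The algorithm itself was not captured above; for reference, here is a sketch of the standard pairwise technique that achieves roughly (3/2)n − 2 comparisons. The `minMax` helper name is hypothetical, not from the thread.

```ts
// Illustrative sketch: simultaneous min/max with roughly (3/2)n - 2 comparisons.
// Elements are consumed in pairs: one comparison orders the pair, then the
// smaller element is compared against the running min and the larger against
// the running max (3 comparisons per pair instead of 4).
function minMax(values: number[]): [number, number] {
  if (values.length === 0) {
    throw new Error('minMax requires a non-empty array');
  }
  let i: number;
  let min: number;
  let max: number;
  if (values.length % 2 === 0) {
    // Even length: initialize min and max from the first pair.
    if (values[0] < values[1]) {
      min = values[0]; max = values[1];
    } else {
      min = values[1]; max = values[0];
    }
    i = 2;
  } else {
    // Odd length: initialize both from the first element.
    min = max = values[0];
    i = 1;
  }
  for (; i < values.length; i += 2) {
    const a = values[i];
    const b = values[i + 1];
    if (a < b) {
      if (a < min) { min = a; }
      if (b > max) { max = b; }
    } else {
      if (b < min) { min = b; }
      if (a > max) { max = a; }
    }
  }
  return [min, max];
}
```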
@ioeric, could you actually break this up into 2 PRs? Thanks! :) It's a bit cleaner that way. For instance, if we later find the compute time calculations to be broken, we won't have to also roll back the logic for naming nodes.
@chihuahua Done. Pulled the strict name change into #440.
@wchargin Thanks for the suggestion and analysis! It seems that using the `Math` functions would require filtering (both `nodeStats` and `micros` can be null) plus two passes of scanning. Maybe a hand-written loop isn't too bad after all?
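For what it's worth, a minimal sketch of such a hand-written single pass, assuming the nullable `nodeStats`/`micros` shape mentioned above; the `TimedNode` interface and `computeTimeExtent` helper are illustrative only:

```ts
// Sketch only: the node shape below (a nullable nodeStats carrying a nullable
// micros count) is assumed from the discussion, not copied from the codebase.
interface TimedNode {
  nodeStats: { micros: number | null } | null;
}

function computeTimeExtent(nodes: TimedNode[]): [number, number] | null {
  let min = Infinity;
  let max = -Infinity;
  for (const node of nodes) {
    if (node.nodeStats == null || node.nodeStats.micros == null) {
      continue;  // Skip nodes with no compute-time information.
    }
    min = Math.min(min, node.nodeStats.micros);
    max = Math.max(max, node.nodeStats.micros);
  }
  return min <= max ? [min, max] : null;  // null when no node had timing data
}
```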
LGTM
```ts
let computeTimeExtent = d3.extent(topLevelGraph.nodes(),
    (nodeName, index) => {
      let node = topLevelGraph.node(nodeName);
      // Find the maximum and minimum compute time in the whole graph.
```
Could you add a comment here noting that, furthermore, the total compute time for a metanode is the sum of the compute time of all nodes within it?
I thought it was the max compute time among all the nodes within it? At least that's what I saw in the graph I tested with.
The logic within the `_.each` block below finds the max `getTotalMicros` throughout the graph. In turn, `getTotalMicros` is the sum of the compute time for all nodes underneath it (for all group nodes):

> Sum of all children if it is a Group node. Null if it is unknown.
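To make that aggregation concrete, here is a rough, hypothetical sketch of such a sum; the `OpNode`/`GroupNode` shapes are invented for illustration and do not mirror the actual TensorBoard types.

```ts
// Hypothetical shapes, for illustration only.
interface OpNode { micros: number | null; }
interface GroupNode { children: Array<OpNode | GroupNode>; }

// Total compute time of a group node: the sum over everything beneath it,
// or null if no descendant reported timing data.
function getTotalMicros(node: OpNode | GroupNode): number | null {
  if (!('children' in node)) {
    return node.micros;  // Leaf op node: its own compute time (or null).
  }
  let total: number | null = null;
  for (const child of node.children) {
    const childMicros = getTotalMicros(child);
    if (childMicros != null) {
      total = (total ?? 0) + childMicros;
    }
  }
  return total;
}
```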
Come to think of it, I believe that means we don't need a complete graph traversal at all, because a top-level metanode must have greater compute time than any of its children.

Maybe we can ignore metanodes? i.e.,

```ts
if (node.type === NodeType.META) {
  return;
}
```
Actually, let's not ignore metanodes. They get colored too.
What you can simply do is iterate through the children of the root (the top-level nodes) because group nodes already include the compute times of their children (as before). I think this change could actually be reverted. Sorry, I didn't notice earlier.
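A sketch of that suggestion, reusing `topLevelGraph` and `d3.extent` from the diff above; the `node.stats.getTotalMicros()` accessor is an assumption based on the discussion, not a confirmed API:

```ts
// Sketch only: scanning just the root's children is enough for the maximum,
// because each group node already aggregates its children's compute time.
let computeTimeExtent = d3.extent(topLevelGraph.nodes(), (nodeName: string) => {
  let node = topLevelGraph.node(nodeName);
  // Assumed accessor; returns null when the node has no timing data,
  // which d3.extent ignores.
  return node.stats != null ? node.stats.getTotalMicros() : null;
});
```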
But we also need the minimum to calculate the extent, right? Does the top-level graph have this information too?
Ah, sorry. I was out for a while. I think the scale should just range from 0 to max for the sake of graphical integrity. Otherwise, the colors might vary a lot, but the range in compute time might actually be very small.
It's similar to the reason that I prefer bar charts to start at 0:
https://flowingdata.com/2015/08/31/bar-chart-baselines-start-at-zero/
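A minimal sketch of such a 0-to-max scale, assuming d3 v4+ naming and an arbitrary two-color ramp (neither is taken from the PR):

```ts
import * as d3 from 'd3';

// Anchor the color scale at 0 so that a narrow range of compute times does
// not get stretched across the entire color ramp.
const maxComputeTime = 42000;  // hypothetical max total micros in the graph

const computeTimeColor = d3.scaleLinear<string>()
    .domain([0, maxComputeTime])      // 0 → max, rather than min → max
    .range(['#fff5f0', '#cb181d']);   // assumed light-to-dark palette

console.log(computeTimeColor(21000)); // a mid-range color
```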
I don't really have a strong opinion as long as it works. I'll change the code to scale from 0.
Ping. As mentioned in the inline comment, I still think the minimum compute time is not calculated correctly.

PTAL
I tested, and it LGTM. Thank you for the fix.
Using 0 as the lower bound for the range seems more in line with graphical integrity.
When calculating the color scale for the compute time, consider all nodes in the graph instead of just the nodes in the top-level graph.
I have a graph whose top-level graph contains one single sub-graph that consumes all the compute time (say X ms), so the color scale becomes "X ms -> X ms".
Not sure about the original intention of the top-level graph; let me know if this is the right fix.
(I don't really know TypeScript. Please let me know if there is a more efficient way to get the extent of a map. :)