Color metanodes by function #467
Conversation
This change makes it so that when the user colors nodes by structure, metanodes are differentiated in color based on function. Previously, every function was assigned the 'unknown' color, because the library contains only one instance of each function.
This will be tested internally via integration tests. |
Have we looked into color blindness at any point, regarding these graph explorer colors? cc: @wchargin |
As far as I know, we haven't looked into it. The colors for non-expanded nodes appear to be as follows (the color list and the snippet that generates it are not preserved in this transcript). I'm not sure what the right metric is. Here's the (symmetric) contrast ratio matrix, using WCAG definitions:
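For reference, a minimal sketch of how such a matrix entry would be computed under the WCAG 2.x definitions mentioned above (the function names here are hypothetical, not TensorBoard code):

```python
def _linearize(channel_8bit):
    """Convert an sRGB channel value (0-255) to linear light per WCAG 2.x."""
    c = channel_8bit / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB color triple."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """Symmetric WCAG contrast ratio, ranging from 1 (identical) to 21."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white attains the maximum ratio of 21:
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Computing this for every pair of palette colors yields the symmetric matrix discussed here.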
(source: https://gist.github.com/wchargin/f0c73cf9d38cff7d31cf1fec9019a419) These are quite low contrast values—4.5 is the minimum for text-on-background—but it's not clear that contrast is the right thing to look at, as these are certainly not text-on-background. Probably a better metric is CIE L*a*b* distance:
This shows that the node colors are significantly closer together than the standard TensorBoard accessible palette: the median distance for node colors is less than the minimum distance in the standard palette. (source: https://gist.github.com/wchargin/a0625f9c972485f2d35edad2b829cd8d) Personally, I've had no trouble distinguishing the colors. But this anecdata should be qualified on two points: (a) my experience is mostly with small datasets for TensorBoard development, so the number of distinct colors used is small (I don't think I've ever seen past the third color), and (b) my deuteranomaly is, I think, pretty mild. Plus, it's anecdata. Thus, my tentative conclusion is: based on the CIE L*a*b* distances, it might not be a bad idea to use the standard TensorBoard palette instead, but I also don't have sufficient expertise to make a decisive suggestion. |
That's a thorough, quantitative analysis! Here are the colors juxtaposed. Each row is a palette. What about a less intense version of TensorBoard's? We also likely want a slightly bigger palette. |
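A "less intense" version of a palette, if done by reducing opacity over a white background, is equivalent to alpha-compositing each swatch against white. A minimal sketch of that blend (the sample color below is hypothetical, not an actual TensorBoard palette entry):

```python
def blend_over_white(rgb, alpha):
    """Composite an opaque color over white at the given alpha, yielding the
    flat color that a reduced-opacity swatch would display as."""
    return tuple(round(alpha * c + (1 - alpha) * 255) for c in rgb)

# A saturated orange at 60% opacity over white becomes noticeably softer:
print(blend_over_white((255, 112, 67), 0.6))  # (255, 169, 142)
```

Note that this blend pulls every color toward white, which shrinks the pairwise L*a*b* distances and is exactly the legibility concern raised below.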
An alternative to decreasing the opacity is to show white text on darker colors, as with our existing `tensorboard/components/tf_card_heading/util.js` (lines 35 to 45 at 5ebc2b9).
(jsbin: http://jsbin.com/ginopimupi/edit?html,css,js,output) In my opinion, it is notably easier to distinguish the colors than in the palette with reduced opacity: for instance, color pairs {1,3}, {5,6}, and {6,7}. Consider also that these colors will appear at a distance from one another, not directly juxtaposed. The reduced opacity is precisely the problem with the current palette, not so much the hues, so reintroducing it doesn't make much sense, in my opinion. The "smart text colors" option is perfectly legible to me, and is consistent with the rest of TensorBoard and with WAI. |
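The "smart text colors" idea discussed here, choosing black or white text per background, is typically implemented by thresholding the background's luminance. A hedged sketch of that logic (the threshold value and function names are assumptions, not the actual util.js implementation):

```python
def pick_text_color(bg_rgb, threshold=0.5):
    """Return 'white' for dark backgrounds and 'black' for light ones,
    using WCAG-style relative luminance. The 0.5 threshold is a guess."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    luminance = (0.2126 * lin(bg_rgb[0])
                 + 0.7152 * lin(bg_rgb[1])
                 + 0.0722 * lin(bg_rgb[2]))
    return 'white' if luminance < threshold else 'black'

print(pick_text_color((25, 118, 210)))  # 'white' on a dark blue
print(pick_text_color((255, 235, 59)))  # 'black' on a bright yellow
```

This keeps the palette fully saturated while preserving label legibility, which is why it avoids the distance-shrinking problem of reduced opacity.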
Do you have any evidence to suggest that people frequently need to differentiate among more than seven isomorphism classes of subgraphs? (The jump from ten to seven does not seem large.) |
The reason I note that is that each function now gets a color; we can address this later when functions become popular. Smart colors look good, although they require nuanced logic, i.e., sometimes the label is inside the node vs. outside it. Let me see if I can cover all the cases ... it helps that expanded nodes are lighter. |