SceneAlgo::hierarchyHash #6261
Conversation
Force-pushed from 85fac54 to c635c14.
Thanks Daniel! The SceneAlgo stuff is feeling pretty good to me - a few comments inline, but not much. Your comments indicate that the performance test isn't any good - would be good to sort that out. USDLayerWriter stuff also seems like it needs a bit of tidying up, and benchmarking against the original - could perhaps do that in a separate PR?
Force-pushed from c635c14 to 0de2dd6.
OK, I think I've now addressed all feedback, and added Changelog entries. I've also added a comment to the test, and I'll clarify a bit more here what I meant by the perf test being "not very good":

Force-pushed from 0de2dd6 to c8a2080.
I should flag one last change: I've also added an extra test for hierarchyHash that tries to ensure the internal parallelReduceLocations mechanism is functioning properly.
Thanks Daniel!
I'm feeling fairly good about this overall, and how it fits in with what we're planning for Instancer.
I don't really like the current structure of this code though - my initial implementation was pretty clean, but it incorrectly included hashes of the full paths rather than relative paths ( which would cause problems when we want to use this for Instancers with relative prototypes ). To fix that, I really want to do a parallel reduce, similar to what you needed in USDLayerWriter. I realized I can actually do a reduce using the way that functors are allocated by parallelProcessLocations, which is sort of neat, but the workaround isn't great for clarity. If we're not currently adding a parallelGatherLocations, I should probably just implement my own walk function, the same as is currently done in USDLayerWriter.
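For illustration, here is a minimal, self-contained sketch of the functor-splitting reduce pattern described above. It uses TBB's parallel_reduce directly rather than Gaffer's SceneAlgo traversal, and the names ( HashReducer, reducedHash ) are hypothetical, so treat it as a sketch of the idea rather than the actual implementation:

```cpp
#include "tbb/blocked_range.h"
#include "tbb/parallel_reduce.h"

#include <cstddef>
#include <cstdint>
#include <vector>

struct HashReducer
{
	HashReducer( const std::vector<uint64_t> &locationHashes )
		:	m_locationHashes( locationHashes ), result( 0 )
	{
	}

	// Splitting constructor : TBB makes copies of the functor for its
	// workers, which is the mechanism that lets a "process" style traversal
	// double as a reduce.
	HashReducer( HashReducer &other, tbb::split )
		:	m_locationHashes( other.m_locationHashes ), result( 0 )
	{
	}

	// Accumulate hashes for one chunk of locations.
	void operator()( const tbb::blocked_range<size_t> &range )
	{
		for( size_t i = range.begin(); i != range.end(); ++i )
		{
			// Commutative combine, so the result is independent of how the
			// range is split between workers.
			result += m_locationHashes[i];
		}
	}

	// Fold a worker's partial result back into this one.
	void join( HashReducer &other )
	{
		result += other.result;
	}

	const std::vector<uint64_t> &m_locationHashes;
	uint64_t result;
};

uint64_t reducedHash( const std::vector<uint64_t> &locationHashes )
{
	HashReducer reducer( locationHashes );
	tbb::parallel_reduce(
		tbb::blocked_range<size_t>( 0, locationHashes.size() ),
		reducer
	);
	return reducer.result;
}
```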
The constraint that we want identical subhierarchies to hash the same is an interesting one - we require different values to have different hashes, but we don't generally require the same value to have the same hash. I suspect there are a lot of ways of constructing identical hierarchies whose hashes would end up differing, but this code does now seem to work when the hashes of each element are consistent. ( When I first tested it, I was testing on the root location, and tried to get the hash of a variety of stuff at the root to match the same stuff under a group ... but I wasn't able to get it to match. I suspect this is because evaluating the objectPlug at the root is a special case, and its hash will never match another location's - even if that location doesn't have an object either. Anyway, for now I've just changed the test to compare two groups, one of which is nested an extra layer deep, and those do have matching hashes. )