Deep Support in Resample #5639

Merged: 9 commits into GafferHQ:main, Jan 30, 2024

Conversation

@danieldresser-ie (Contributor)

It was a bit of a scramble to get all the various options I explored for this cleaned up into something reviewable.

Hopefully this makes a fair bit of sense, though some of the comments were written pretty hastily.

The main known thing that needs adding is the mode selector to switch between proper filtering and forcing the "nearest" filter for deeps ( and possibly also the option of throwing an exception if you try to deep filter something without choosing "nearest" ). That should be pretty straightforward to add, and shouldn't affect the review much.

@danieldresser-ie force-pushed the deepResize branch 3 times, most recently from 8d5a843 to 4b588fb on January 23, 2024 01:23
@danieldresser-ie (Contributor, Author)

Cleaned up the missing test files, and added the "deepMode" plug.
I also split the Resample commit, so there's a first commit that adds "nearest" mode ( I didn't want to put too much work into splitting, so it also contains some general cleanup - mostly making sure that plugs that are supposed to be global, like boundingMode, are evaluated without the tileOrigin in the context ).
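
For context, a rough sketch of what I mean by evaluating a "global" plug without the tileOrigin - the helper and the context variable name here are just illustrative, not the exact code in the commit :

import Gaffer

def globalPlugValue( plug ) :
	# Evaluate a plug whose value must not vary per tile, by removing the
	# tile-specific context variable before calling getValue().
	with Gaffer.Context( Gaffer.Context.current() ) as c :
		c.remove( "image:tileOrigin" )  # assumed variable name for illustration
		return plug.getValue()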

The remaining test failure is a failed assertion in DeepState. While I have made a few modifications to DeepState, I suspect this failure is likely not new - ResampleTest.testDeepWithExceptionalValues hits a lot of corner cases in DeepState that were previously untested. I had already noted a test that I skipped because DeepState output completely unreasonable values ... the failed assertion is quite possibly related, and a good excuse to clean up DeepState.

If you want to visually inspect what's going on, there's a WIP version of the deep pixel inspector here: https://github.com/danieldresser-ie/gaffer/tree/deepResizeWithViz
It's full of construction sawdust, since I haven't even removed all the code from the earlier version, which predated official image comparison and worked quite differently, but it's already useful, and hopefully won't take too long to clean up.

@johnhaddon left a comment (Member)

Thanks Daniel!

I've run out of steam a bit on this review, but I think the reality is that even with a full head of steam, I'm not really equipped to review the finer details. So it's mostly a smattering of suggestions for tweaks to the presentation, in terms of both the API and the nodes. The most useful contribution is probably the snippet with the Resize node that triggers the "this would be a hang if it wasn't an exception" thing.

A couple of other points that I didn't comment on inline :

  • Cancellation can be excessively slow, with downsizing to a single tile showing the problem particularly badly. Worth tuning the frequency of checks for that case I think, as otherwise the UI can freeze (see the sketch after this list for the kind of periodic check I mean).
  • The fact that Default filter mode chooses a downsizing filter with negative lobes is vexing, since it leads straight to an error as soon as you turn on filtering. "Default" here meant "makes a sensible decision for you based on the operation", so I think we should interpret that to mean "chooses different filters suitable for deep" too.
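
A rough sketch of the periodic cancellation check I mean above, assuming the usual IECore.Canceller pattern; the batch size is just a placeholder to tune :

import IECore

def processSamples( samples, canceller ) :
	for i, sample in enumerate( samples ) :
		# Check for cancellation every so often rather than per sample, so the
		# check doesn't dominate the inner loop but the UI stays responsive.
		if i % 1024 == 0 :
			IECore.Canceller.check( canceller )
		# ... per-sample work goes here ...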

Cheers...
John

@johnhaddon (Member)

Oh, also spotted we have this failure in the debug CI :

testDeepWithExceptionalValues (GafferImageTest.ResampleTest.ResampleTest) ... gaffer: src/GafferImage/DeepState.cpp:326: IECore::FloatVectorDataPtr {anonymous}::alphaToLinearWeights(std::vector<float>&, const std::vector<int>&, const std::vector<int>&, const std::vector<float>&, const std::vector<int>&, bool): Assertion `contributionEnd != contributionStart' failed.

@danieldresser-ie (Contributor, Author)

Oh, also spotted we have this failure in the debug CI :

Yeah, that's the probably pre-existing bug in DeepState I mentioned in my comment when I updated.

The fact that Default filter mode chooses a downsizing filter with negative lobes is vexing, since it leads straight to an error as soon as you turn on filtering. "Default" here meant "makes a sensible decision for you based on the operation", so I think we should interpret that to mean "chooses different filters suitable for deep" too.

The part we can definitely agree on is that this is incredibly vexing. I'm really not sure what the right solution is, though. If you have a flat beauty, then you need to Resize it with a filter that matches your deep alpha. It seems like a pretty nasty trap if you resize both of them with Default and then get silhouettes that don't match.
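
To spell the trap out : if the flat beauty and the deep alpha go through separate Resize nodes, the safe thing today is to link their filter settings explicitly rather than leave both on Default. A sketch, assuming two Resize nodes; the filter name is just one example of an all-positive filter :

import GafferImage

flatResize = GafferImage.Resize( "FlatResize" )
deepResize = GafferImage.Resize( "DeepResize" )

# Driving both Resize nodes from one filter plug keeps the flat beauty and the
# deep alpha filtered identically, so their silhouettes stay matched.
deepResize["filter"].setValue( "blackman-harris" )
flatResize["filter"].setInput( deepResize["filter"] )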

@danieldresser-ie (Contributor, Author)

Thinking about how vexing it is to not be able to match a default downsize ... now that the code is structured this way, I think it would actually be easier to do something reasonable with negative filter lobes ( a result that matches anywhere the alpha is in range, and does something reasonable outside of that ) - I could probably get it working in a day.

I know you don't want to spend any more time on this, but it might be more useful than spending time trying to come up with an approximation without negative filter lobes.

@danieldresser-ie (Contributor, Author)

Added a bunch of little fix commits to address feedback ... there are a lot of little things, but I think it's fairly clear what each of them is doing, so it might be easier to review this way. Easy to squash if you prefer.

Left to do: useDeepVisibility ( you didn't actually confirm whether you think it's worth splitting this out into a ToIncandescent node ), and figuring out what to do with negative lobes.

@danieldresser-ie force-pushed the deepResize branch 2 times, most recently from 187d0ca to 90eed35 on January 27, 2024 02:25
@danieldresser-ie (Contributor, Author)

Add a few more commits, to handle negative lobes, and useDeepVisibility - Murray voted for just moving it to an advanced tab.

@johnhaddon (Member)

As discussed on Thursday, I've put together a small commit that adds a warning in the UI when filterDeep is on : 91c1fe0. If you could cherry-pick that over, that'd be great.
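
For anyone following along, that kind of UI warning usually boils down to attaching metadata to the plug. A rough, illustrative sketch only - the actual commit may word or implement it differently :

import Gaffer
import GafferImage

# Illustrative only : attach a cautionary note to the filterDeep plug's
# description metadata so it shows up in the plug's tooltip.
Gaffer.Metadata.registerValue(
	GafferImage.Resize, "filterDeep", "description",
	"Deep filtering is approximate, particularly for filters with negative lobes."
)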

Comment on lines +612 to +616
# be crammed into the last segment in order preserve the final result ... if the last
# segment has low visibility, we end up having to put extremely high RGB values in
# the last segment, resulting in poor precision. We could be a bit more specific in
# this test about where this poor precision happens, but currently we just use a large
# tolerance for everything.

@johnhaddon (Member)

This makes it sound like although we might be arriving at the expected flattened result, we could be generating deeps that don't make a great deal of sense in themselves, and might not combine as expected with other deeps. The next segment that we push things into could be a long way away from the original, right? Would that be fair to say? If so, I'm still wondering if it really makes sense to have a negative-lobed downsize be the default, and if allowing negative lobes at all might be a bit of a trap...

@danieldresser-ie (Contributor, Author), Jan 30, 2024

"The next segment that we push things into could be a long way away from the original, right?" - only if there are no samples in between that increase the alpha back to an increasing value. Most reasonable images don't have a lot of contribution from segments with low alpha, so it isn't a substantial error.

It seemed convincing when I described this and showed how it worked pretty well on a reasonable test image this morning, so I probably should have just left it there, but it did feel like I should quantify the worst case. That ended up with me slipping in this little fix, 9a69817, and putting together this test:

Test Script
import Gaffer
import GafferImage
import GafferOSL
import imath

Gaffer.Metadata.registerValue( parent, "serialiser:milestoneVersion", 1, persistent=False )
Gaffer.Metadata.registerValue( parent, "serialiser:majorVersion", 4, persistent=False )
Gaffer.Metadata.registerValue( parent, "serialiser:minorVersion", 0, persistent=False )
Gaffer.Metadata.registerValue( parent, "serialiser:patchVersion", 0, persistent=False )

__children = {}

__children["Constant"] = GafferImage.Constant( "Constant" )
parent.addChild( __children["Constant"] )
__children["Constant"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["OSLImage"] = GafferOSL.OSLImage( "OSLImage" )
parent.addChild( __children["OSLImage"] )
__children["OSLImage"]["channels"].addChild( Gaffer.NameValuePlug( "A", Gaffer.FloatPlug( "value", defaultValue = 1.0, ), True, "channel", Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic ) )
__children["OSLImage"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["OSLCode"] = GafferOSL.OSLCode( "OSLCode" )
parent.addChild( __children["OSLCode"] )
__children["OSLCode"]["out"].addChild( Gaffer.FloatPlug( "output1", direction = Gaffer.Plug.Direction.Out, defaultValue = 0.0, flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["OSLCode"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["DeepMerge1"] = GafferImage.DeepMerge( "DeepMerge1" )
parent.addChild( __children["DeepMerge1"] )
__children["DeepMerge1"]["in"].addChild( GafferImage.ImagePlug( "in2", flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["DeepMerge1"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["FlatToDeep"] = GafferImage.FlatToDeep( "FlatToDeep" )
parent.addChild( __children["FlatToDeep"] )
__children["FlatToDeep"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["Premultiply1"] = GafferImage.Premultiply( "Premultiply1" )
parent.addChild( __children["Premultiply1"] )
__children["Premultiply1"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["FlatToDeep1"] = GafferImage.FlatToDeep( "FlatToDeep1" )
parent.addChild( __children["FlatToDeep1"] )
__children["FlatToDeep1"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["SliceAfter"] = GafferImage.DeepSlice( "SliceAfter" )
parent.addChild( __children["SliceAfter"] )
__children["SliceAfter"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["DeepResize"] = GafferImage.Resize( "DeepResize" )
parent.addChild( __children["DeepResize"] )
__children["DeepResize"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["FlatResize"] = GafferImage.Resize( "FlatResize" )
parent.addChild( __children["FlatResize"] )
__children["FlatResize"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["Dot"] = Gaffer.Dot( "Dot" )
parent.addChild( __children["Dot"] )
__children["Dot"].setup( GafferImage.ImagePlug( "in", ) )
__children["Dot"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["ReasonableDeepImage"] = GafferImage.ImageReader( "ReasonableDeepImage" )
parent.addChild( __children["ReasonableDeepImage"] )
__children["ReasonableDeepImage"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["SliceBefore"] = GafferImage.DeepSlice( "SliceBefore" )
parent.addChild( __children["SliceBefore"] )
__children["SliceBefore"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["Constant"]["format"].setValue( GafferImage.Format( 200, 200, 1.000 ) )
Gaffer.Metadata.registerValue( __children["Constant"]["format"], 'formatPlugValueWidget:mode', 'custom' )
__children["Constant"]["color"].setValue( imath.Color4f( 0, 1, 1, 0 ) )
__children["Constant"]["__uiPosition"].setValue( imath.V2f( -228.642715, 38.4070778 ) )
__children["OSLImage"]["defaultFormat"].setValue( GafferImage.Format( 200, 200, 1.000 ) )
Gaffer.Metadata.registerValue( __children["OSLImage"]["defaultFormat"], 'formatPlugValueWidget:mode', 'custom' )
__children["OSLImage"]["channels"]["channel"]["value"].setInput( __children["OSLCode"]["out"]["output1"] )
__children["OSLImage"]["__uiPosition"].setValue( imath.V2f( -204.351334, 36.8719368 ) )
__children["OSLCode"]["code"].setValue( 'point q = point( u - 0.5, v - 0.5, 0  );\noutput1 = 1 - smoothstep( 0.48, 0.5, length( q ) );' )
__children["OSLCode"]["__uiPosition"].setValue( imath.V2f( -216.451874, 41.0258522 ) )
__children["DeepMerge1"]["in"][0].setInput( __children["FlatToDeep"]["out"] )
__children["DeepMerge1"]["in"][1].setInput( __children["FlatToDeep1"]["out"] )
__children["DeepMerge1"]["__uiPosition"].setValue( imath.V2f( -215.007034, 17.3797455 ) )
__children["FlatToDeep"]["in"].setInput( __children["Constant"]["out"] )
__children["FlatToDeep"]["zBackMode"].setValue( 1 )
__children["FlatToDeep"]["thickness"].setValue( 1.0 )
__children["FlatToDeep"]["__uiPosition"].setValue( imath.V2f( -228.642715, 30.4069881 ) )
__children["Premultiply1"]["in"].setInput( __children["OSLImage"]["out"] )
__children["Premultiply1"]["__uiPosition"].setValue( imath.V2f( -204.351334, 31.2078724 ) )
__children["FlatToDeep1"]["in"].setInput( __children["Premultiply1"]["out"] )
__children["FlatToDeep1"]["depth"].setValue( 0.800000011920929 )
__children["FlatToDeep1"]["__uiPosition"].setValue( imath.V2f( -204.351334, 25.543808 ) )
__children["SliceAfter"]["in"].setInput( __children["DeepResize"]["out"] )
__children["SliceAfter"]["farClip"]["enabled"].setValue( True )
__children["SliceAfter"]["farClip"]["value"].setValue( 6.956500053405762 )
__children["SliceAfter"]["__uiPosition"].setValue( imath.V2f( -215.007034, -10.7726736 ) )
__children["DeepResize"]["in"].setInput( __children["Dot"]["out"] )
__children["DeepResize"]["format"].setValue( GafferImage.Format( 100, 150, 1.000 ) )
Gaffer.Metadata.registerValue( __children["DeepResize"]["format"], 'formatPlugValueWidget:mode', 'custom' )
__children["DeepResize"]["filterDeep"].setValue( True )
__children["DeepResize"]["__uiPosition"].setValue( imath.V2f( -215.007034, 2.63339186 ) )
__children["FlatResize"]["in"].setInput( __children["SliceBefore"]["out"] )
__children["FlatResize"]["format"].setInput( __children["DeepResize"]["format"] )
Gaffer.Metadata.registerValue( __children["FlatResize"]["format"], 'formatPlugValueWidget:mode', 'custom' )
__children["FlatResize"]["filterDeep"].setValue( True )
__children["FlatResize"]["__uiPosition"].setValue( imath.V2f( -198.92955, -10.9095926 ) )
__children["Dot"]["in"].setInput( __children["DeepMerge1"]["out"] )
__children["Dot"]["__uiPosition"].setValue( imath.V2f( -209.648041, 9.96542358 ) )
__children["ReasonableDeepImage"]["fileName"].setValue( '${GAFFER_ROOT}/python/GafferImageTest/images/representativeDeepImage.exr' )
__children["ReasonableDeepImage"]["__uiPosition"].setValue( imath.V2f( -191.552353, 16.9537373 ) )
__children["SliceBefore"]["in"].setInput( __children["Dot"]["out"] )
__children["SliceBefore"]["farClip"].setInput( __children["SliceAfter"]["farClip"] )
__children["SliceBefore"]["flatten"].setValue( True )
__children["SliceBefore"]["__uiPosition"].setValue( imath.V2f( -198.92955, 2.63339233 ) )


del __children

This is intentionally constructed as an absolute worst case - the segments with color are uniform incandescent fog with no alpha, whereas the segments that drive the alpha have no color, and have a hard edge, in behind the fog. This worst case is indeed pretty ugly ... and looking at it now, I probably could come up with a way to give better results in this case of mixed incandescent/opaque with negative lobes. But if you swap to the image reader in the test script, you can also see how it gives reasonable results for a more reasonable image. Unless you want to roll back to just throwing an exception by default for downsampling, so that people know they need to change their flat filtering as well to match, I still think it's the right move to deploy like this - it gives good results in the vast majority of cases, and if people do end up running into issues specifically with downsampling mixtures of incandescent fog and opaque objects, there is probably room to improve this in the future.

@johnhaddon (Member)

Add a few more commits, to handle negative lobes, and useDeepVisibility - Murray voted for just moving it to an advanced tab.

Thanks Daniel. I've been through and resolved all my previous comments, bar one in Premultiply.cpp that seems unaddressed. Also added a query inline about the negative lobe stuff...

@danieldresser-ie force-pushed the deepResize branch 3 times, most recently from 5f1171f to b2d695c on January 29, 2024 22:32
@danieldresser-ie (Contributor, Author)

Squashed all prior fixes, added 337e0c9 and b2d695c

I've probably gone too verbose with the requested message about negative lobes - I'm never sure how much detail to include here ... I guess the important thing is that if you are getting weird results with an intense mixture of incandescent and non-incandescent samples, then try Blackman-Harris ... though currently the tooltip doesn't describe at all the issue with "curve shape error" that I discuss in the tests - the same mixture of incandescent and non-incandescent samples is likely to introduce some of that error, even without negative lobes.
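
For what it's worth, the workaround amounts to a single plug change on the Resize - a sketch, with blackman-harris as just one suitable all-positive filter :

import GafferImage

resize = GafferImage.Resize( "DeepResize" )
resize["filterDeep"].setValue( True )
# If an intense mixture of incandescent and non-incandescent samples gives odd
# results with the default negative-lobed downsizing filter, switch to an
# all-positive filter.
resize["filter"].setValue( "blackman-harris" )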

@johnhaddon (Member)

Thanks Daniel! I've squashed the latest fixups in and added changelog entries. Merging...

@johnhaddon merged commit c29efbd into GafferHQ:main on Jan 30, 2024
5 checks passed