Eric Hansen edited this page Jun 10, 2016 · 1 revision

The following, quoted from issue 7, helps explain how the gradient method checks parameter changes.


Here's an example of how things work. Maybe this will help you.

```python
changes = do_lstsq(ma, vb, radii=radii, cutoffs=cutoffs)
```

This first goes through do_method, since do_lstsq is wrapped by that function. If opt.OptError is raised, a warning is issued, and no changes are returned. This might be clearer if I added a `return None` after line 352, but that would also be an unnecessary extra line.
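The shape of that wrapper can be sketched as below. This is a minimal, hypothetical sketch, not the actual do_method from the codebase: the OptError class, the decorator form, and the do_lstsq stub are all assumptions made to illustrate how falling off the end of the except branch yields None.

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

class OptError(Exception):
    """Stand-in for opt.OptError (assumed name, per the text above)."""

def do_method(func):
    """Sketch: if the wrapped optimization step raises OptError,
    warn and fall off the end of the function, so the caller
    implicitly receives None (no changes)."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except OptError as exc:
            logger.warning('%s', exc)
            # No explicit return here, so the caller gets None.
    return wrapper

@do_method
def do_lstsq(ma, vb, radii=None, cutoffs=None):
    # Hypothetical stub that always fails, to show the OptError path.
    raise OptError('least-squares step failed')

changes = do_lstsq(None, None)
print(changes)  # None
```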

If opt.OptError isn't raised, the changes then go through check. It calculates radius, the radius of the parameter change, and then checks against either the maximum radii or the cutoffs.
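For concreteness, the radius of parameter change can be taken as the Euclidean norm of the change vector. That definition is an assumption here (the source doesn't spell it out), but it's the common choice for trust-region-style checks like this:

```python
import math

def calc_radius(par_changes):
    """Radius of parameter change, taken here as the Euclidean
    norm of the change vector (an assumed definition)."""
    return math.sqrt(sum(x * x for x in par_changes))

print(calc_radius([3.0, 4.0]))  # 5.0
```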

The maximum radii method essentially scales all parameter changes to fall within the radii given by the user, who can specify as many maximum radii as they want. If the calculated radius is larger than a user-specified maximum radius, the parameter changes are scaled down to meet that radius, each time generating another trial FF. The maximum radii are checked in order from smallest to largest. If at some point during this process the calculated radius is less than the user-specified maximum radius, the original, unscaled parameter changes are returned. Likewise, if the calculated radius starts out smaller than the smallest requested maximum radius, the original, unscaled trial parameter changes are simply returned. This check never returns None or 0; it will always give you something, assuming opt.OptError wasn't raised earlier.
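The loop above might be sketched as follows. The function name `check_radii` and the return shape (a list of candidate change vectors, one per trial FF) are assumptions for illustration, not the actual implementation:

```python
import math

def check_radii(par_changes, max_radii):
    """Hedged sketch of the max-radii check described above.
    Returns a non-empty list of candidate parameter-change
    vectors, one per trial FF -- never None or an empty result."""
    radius = math.sqrt(sum(x * x for x in par_changes))
    trials = []
    # Check the user-supplied maximum radii from smallest to largest.
    for max_radius in sorted(max_radii):
        if radius > max_radius:
            # Scale the changes down so their radius equals
            # max_radius, generating another trial set of changes.
            factor = max_radius / radius
            trials.append([x * factor for x in par_changes])
        else:
            # The calculated radius is already within this maximum;
            # keep the original, unscaled changes and stop.
            trials.append(list(par_changes))
            break
    return trials

# Radius is 5.0: scaled once to radius 1.0, then kept unscaled
# because 5.0 < 10.0.
print(check_radii([3.0, 4.0], [1.0, 10.0]))
```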

If you instead use check_cutoffs (which happens when the user doesn't specify any maximum radii but does specify cutoffs), and it returns False, then check and do_method return [] (None, False, and [] are all falsy in Python).
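A sketch of that path is below. The exact cutoff criterion is a guess here (the source only says check_cutoffs can return False); this version assumes the radius must fall within a user-supplied [low, high] pair:

```python
import math

def check_cutoffs(radius, cutoffs):
    """Hedged guess at the cutoffs check: accept the step only if
    the radius of parameter change falls within the user-supplied
    [low, high] pair. The exact criterion is an assumption."""
    low, high = min(cutoffs), max(cutoffs)
    return low <= radius <= high

def check(par_changes, cutoffs):
    """Sketch of the cutoffs branch of check described above."""
    radius = math.sqrt(sum(x * x for x in par_changes))
    if not check_cutoffs(radius, cutoffs):
        # A failed cutoffs check yields [], which is falsy --
        # the caller sees "no changes".
        return []
    return par_changes

print(check([3.0, 4.0], [0.1, 1.0]))  # []
```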

So both opt.OptError (troublesome when it happens) and a failed check_cutoffs (not troublesome; it's doing what the user asked for) can end up producing no changes.
