Add alternative parameterization to negative binomial distribution #4126 #4134
Conversation
Codecov Report

```diff
@@           Coverage Diff            @@
##           master    #4134   +/-   ##
=======================================
  Coverage   88.75%   88.75%
=======================================
  Files          89       89
  Lines       14037    14039       +2
=======================================
+ Hits        12458    12461       +3
+ Misses       1579     1578       -1
```
This is great -- thanks! Definitely tests are required, do you need help figuring that out?
Yeah, I would need some help with that. I didn't find anything in the developer's guide about testing, and the test code is a bit opaque to me.
Thanks for the PR @ricardoV94! For the tests, I think you can take inspiration from the tests of NegativeBinomial done with the current parametrization. Doing them with the new one should already be a good sample of use cases.
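For illustration, here is a minimal sketch of what such a test might look like, assuming the new parametrization exposes `n` and `p` keyword arguments as the PR description says; the values and comparison against `scipy.stats.nbinom` are illustrative, not taken from the actual test suite:

```python
import numpy as np
import scipy.stats as st
import pymc3 as pm

# Sketch only: check that the new (n, p) parametrization matches the
# log-pmf of scipy's negative binomial at a few support points.
dist = pm.NegativeBinomial.dist(n=5, p=0.3)
for value in [0, 1, 10]:
    np.testing.assert_almost_equal(
        dist.logp(value).eval(),
        st.nbinom.logpmf(value, 5, 0.3),
    )
```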
@twiecki @AlexAndorra Would the added test be enough?
IMO there should also be a test which hits the various error messages you've added, e.g. when you set both `alpha` and `n`.
@MarcoGorelli That makes sense. I am not sure how to do it. Could you direct me to a snippet where this type of test is implemented? For example, the Beta distribution also raises this type of ValueError, but I couldn't find whether it is being tested anywhere.
They do something similar in the tests for `starting`:

```python
with raises(ValueError, match=r"Some variables not in the model: \['x2', 'y2'\]"):
    starting.allinmodel([x2, y2], model1)
with raises(ValueError, match=r"Some variables not in the model: \['x2'\]"):
    starting.allinmodel([x2, y1], model1)
with raises(ValueError, match=r"Some variables not in the model: \['x2'\]"):
    starting.allinmodel([x2], model1)
```

where `raises` is `pytest.raises`.
@MarcoGorelli Thanks for the input. I added some tests that cover the new ValueError exceptions. What do you think?
@ricardoV94 nice work! I think this can be simplified a bit using `pytest.mark.parametrize`:

```python
@pytest.mark.parametrize(
    "mu, n, alpha, expected",
    [
        (5, None, None, "Incompatible parametrization. Must specify either alpha or n."),
        (None, 2, 2, "Incompatible parametrization Can't specify both alpha and n."),
        # other two test cases go here
    ],
)
def test_negative_binomial_init_fail(self, mu, n, alpha, expected):
    with Model():
        with pytest.raises(ValueError, match=expected):
            x = NegativeBinomial("x", mu=mu, n=n, alpha=alpha)
```
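For context on this suggestion: `pytest.mark.parametrize` generates a separate test case from each tuple in the list, so a failing parameter combination is reported on its own rather than stopping a single monolithic test at its first assertion.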
Review comment on pymc3/distributions/discrete.py (outdated):

```python
    )
elif n is not None:
    raise ValueError(
        "Incompatible parametrization Can't specify both alpha and n."
```
missing full-stop?

Suggested change:

```diff
-"Incompatible parametrization Can't specify both alpha and n."
+"Incompatible parametrization. Can't specify both alpha and n."
```
@MarcoGorelli Thanks for the suggestion. I refactored the tests with the suggested `parametrize` approach. I also added a couple of tests for the different permutations of missing/over-specified parameters, do you think it is too much?
Thanks @ricardoV94! Is this your first PR?
You're welcome. Yes it was the first one for me :)
Congratulations then! :)
Changes allow the specification of the NegativeBinomial distribution in terms of p (the probability of observing a success in each trial) and n (the target number of successes). This parametrization gives the likelihood of y, the number of failures observed until the target number of successes is reached.
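For reference, a sketch of the relation between the two parametrizations; this follows the standard negative binomial identities rather than code from the PR, and the helper name is hypothetical:

```python
def n_p_to_mu_alpha(n, p):
    """Hypothetical helper: convert the (n, p) parametrization to (mu, alpha).

    For y = number of failures before the n-th success, the mean is
    mu = n * (1 - p) / p, and alpha corresponds to n under pymc3's
    (mu, alpha) convention (variance mu + mu**2 / alpha).
    """
    mu = n * (1.0 - p) / p
    alpha = n
    return mu, alpha
```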
I changed the docstrings and distribution initialization in line with other distributions that have multiple parametrizations.
I haven't figured out yet how the tests work, so I did not implement them (hence the WIP)! This should be straightforward, however, since the scipy negative binomial distribution (`scipy.stats.nbinom`) is implemented in terms of n and p. Any guidance on how to proceed is much appreciated!
Pinging @twiecki
Ref #4126