Few digits of precision in N(). #10164
Comments
comment:1
Attachment: 10164.patch.gz
See also this sage-devel thread. Doug reports the following evil lines:
So a full patch would fix all of these, though we definitely want to keep the original author on. Also need doctests for these large cases, of course. I can't believe I never saw this ticket.
Changed keywords from N, digits, numerical approximation to N, digits, numerical approximation, beginner
comment:2
I'm on this one.
comment:3
Okay, I have a patch. Testing now. Turns out it's a little trickier to doctest than I thought, because some of the bugs can't be triggered yet because of other limitations. :-/
Attachment: trac_10164_round_up_log10_2.patch.gz
round up log(10,2)
comment:4
Attachment: trac_10164_round_up_log10_2.2.patch.gz
Okay, so I've changed all six instances of "3.32192" (which seems to be the most common approximation to log(10,2) in use). There are three other references matching 3.32* (in rings/arith.py, interfaces/gp.py, and libs/pari/gen.pyx) I left alone. Those all have at least 10 digits, so the problem would be pushed to higher levels even if it occurs, and in every case there was something I wasn't confident enough of to change. I don't know much about how pari's precision works, for example. I think it's a step in the right direction, anyhow.
Changed author from Robert Gerbicz to Robert Gerbicz, Douglas McNeil |
Reviewer: Karl-Dieter Crisman |
comment:5
Very clever testing. Very minor 'needs work' issues:
to
Try to shorten those puppies up to the 80-character limit (if I recall correctly) for fine terminal viewing. Because this has the potential to do weird things to doctests, I'm not going to give a positive review in any case until I run the long tests. But it looks good other than those tiny issues. Also, a question: I assume it's okay (and already the case, in fact) that we have MORE digits than "officially requested" by the user.
comment:6
I like this so far, but I have a maybe dumb question: why not use more digits? In "raw" Python,
comment:7
Long doctests are a-ok, so just the patch itself would be treated differently based on comments. Harald's comment seems quite reasonable. One could go either way, though, unless we are really trying to get exactly the 'right' number of digits, which apparently hasn't been the case (we treat it as a lower bound).
comment:8
Why not get exactly the correct number of digits? Certainly the added precision will fix the problem for much larger numbers. The question is how many digits we can feasibly expect people to want to calculate. 3.32193 works for 1000000 digits, 3.3219281 works for 10000000, but I don't see why we can't use 3.3219280948873627 to future-proof it. To calculate the precision error with different approximations of log(10, 2):
(floor((n + 1) * a) + 1) - (floor((n + 1) * log(10,2).n(100000)) + 1)
where n is the number of digits you want and a is the approximation; log(10,2).n(100000) is just used for comparison.
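The comparison above can be sketched in plain Python. This is an illustration, not the Sage code: `math.log2(10)` stands in for Sage's `log(10,2).n(100000)`, which is more than accurate enough here to see the sign and rough size of the error.

```python
import math

TRUE_LOG10_2 = math.log2(10)  # log(10)/log(2) ~ 3.3219280948873623

def prec_error_bits(n, a):
    """Bits of precision gained (positive) or lost (negative) when the
    approximation `a` is used instead of the true log(10, 2)."""
    return (math.floor((n + 1) * a) + 1) - (math.floor((n + 1) * TRUE_LOG10_2) + 1)

# At a million digits, the buggy 3.32192 undershoots by 8 bits,
# while the proposed replacements stay at or above the target.
for a in (3.32192, 3.32193, 3.3219281):
    print(a, prec_error_bits(10**6, a))
```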
comment:9
I simply followed the OP's suggestion, which works just fine: adding more digits would only change things by ~1e-7 or so. "Why not get exactly the correct number of digits?" Well, since I don't know if the last digits are guaranteed to be correct anyway, I'm not sure how important it is to be precise when we're not accurate, esp. given that we don't seem to have been in the past. Of the two, not using as much precision as requested is a far more serious problem than using a (very tiny) bit more. But it's probably easier for us simply to change it than to waste time arguing about the use case. :-) As for: "3.32193 works for 1000000 digits, 3.3219281 works for 10000000, but I don't see why we can't use 3.3219280948873627 to future-proof it." I think there's some confusion here. 3.32193 works for
comment:10
I see your point. I guess the way to do it would be to use the overestimate and slice to the correct number of digits, but this may be difficult since there doesn't seem to be an index method for RealNumber.
Changed keywords from N, digits, numerical approximation beginner to N, digits, numerical approximation |
comment:11
This has gone beyond beginner capacities, due to the extended discussion above.
comment:12
The patch needs review. |
comment:13
dsm, would you object to having a more precise overestimation, such as 3.32192809488736235?
comment:14
I don't really object, although putting the overestimate right at the end might cause problems for people with more digits than they can possibly have memory for, as it's hard to trust arithmetic down at the lsb level unless you think harder about things than this problem warrants. For example, Anyway, I'm fine with adding more digits, as long as it's still an overestimate w.r.t. math.log. If
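The lsb worry can be illustrated in plain Python (the original comment's example is lost above; this is a reconstruction of the general point, not dsm's code). A constant with a generous margin is a safe overestimate no matter how the decimal literal rounds to a double, while a literal that matches log(10,2) to ~18 digits lands within half an ulp of the true value, so whether it ends up above or below after rounding is not something to rely on.

```python
import math

true_value = math.log(10, 2)  # ~3.3219280948873626

# A margin of ~1e-5 dwarfs double rounding error (~1e-16 here),
# so these comparisons are trustworthy:
print(3.32193 > true_value)   # safe overestimate
print(3.32192 < true_value)   # the buggy constant really is below

# This literal agrees with log(10,2) to ~18 digits; after rounding to a
# double, which side of true_value it lands on is a coin flip, so no
# assertion is made about it:
print(3.32192809488736235 > true_value)
```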
comment:15
Sounds good! |
v3: use a more precise approx |
comment:16
Attachment: trac_10164_round_up_log10_2_v3.patch.gz
Okay; anyone want to have a look and make sure I didn't break anything while changing it?
comment:17
Oh, yeah. And unfortunately putting in the test from trac #12163 doubles the runtime of testing functional.py from ~9s to ~22s for me. I asked on IRC and the response was that we could probably get away without marking it up as a long test, but that's easily changed.
comment:18
I don't see why you couldn't just mark the second (huge) test as
comment:19
Apply trac_10164_round_up_log10_2_v3.patch (for patchbot) |
comment:21
The v3 patch looks very good. I added a reviewer patch that defined a constant
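The reviewer's comment is truncated above, but the idea is to name the magic number. A minimal sketch of what that looks like; the constant name and value here are assumptions modeled on what the current Sage source in sage/misc/functional.py uses, not necessarily the reviewer patch verbatim:

```python
import math

# A small overestimate of log(10, 2) = bits per decimal digit.
# Name/value assumed from the current Sage source, since the comment
# naming the constant is truncated in this thread.
LOG_TEN_TWO_PLUS_EPSILON = 3.321928094887363

# Sanity check: still an overestimate after rounding to a double.
assert LOG_TEN_TWO_PLUS_EPSILON >= math.log(10, 2)

def digits_to_bits(digits):
    """Binary precision needed to honor `digits` decimal digits,
    rounding the bits-per-digit factor up rather than down."""
    return int((digits + 1) * LOG_TEN_TWO_PLUS_EPSILON) + 1
```

With this, digits=400001 yields 1328778 bits, three more than the buggy 3.32192 constant produced.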
Changed reviewer from Karl-Dieter Crisman to Karl-Dieter Crisman, Benjamin Jones |
comment:22
Hello! Great that this patch is moving forward. Do you think you could just add a comment to the tests explaining that the decimal point adds an extra 1 to the length? Otherwise it seems inaccurate until further inspection. Thanks.
Attachment: trac_10164_reviewer.patch.gz
name the magic 3.32... number, remove trailing whitespace
comment:23
Thanks, eviatar. I added the comment to the docstring. I ran doctests on all the files that were touched and those passed, no problems. I'll wait for the patchbot's approval before giving a positive review. Patchbot, apply:
last two patches folded together, with whitespace removed
comment:24
Attachment: trac_10164_folder_sans_whitespace.patch.gz
Patchbot: apply only attachment: trac_10164_folder_sans_whitespace.patch
comment:25
Patchbot is refusing to cooperate. I'm running
comment:26
All tests pass, positive review. |
Merged: sage-5.1.beta3 |
For

N(1 + 10**-400000, digits=400001)

the displayed number of digits is only 400000: it prints 1. followed by 399999 zeroes. The reason is this line in functional.py:

prec = int((digits + 1) * 3.32192) + 1

However, log(10)/log(2) ~ 3.3219280948874 > 3.32192, so if digits is large, the precision used will fall short of the requested number of digits by a few digits. The suggestion is to use 3.32193 instead of 3.32192; see the trivial patch.
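To see the shortfall concretely, here is the arithmetic in plain Python (an illustration of the report above; float precision is ample for counting whole bits at this scale):

```python
import math

digits = 400001

# Precision in bits as computed by the buggy line in functional.py:
buggy_prec = int((digits + 1) * 3.32192) + 1

# Precision actually needed, using the true log(10)/log(2):
needed_prec = int((digits + 1) * math.log2(10)) + 1

print(buggy_prec)                # 1328775
print(needed_prec)               # 1328778
print(needed_prec - buggy_prec)  # 3 bits short -> the last decimal digit is lost
```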
Apply: attachment: trac_10164_folder_sans_whitespace.patch to Sage library
CC: @kcrisman
Component: misc
Keywords: N, digits, numerical approximation
Author: Robert Gerbicz, Douglas McNeil
Reviewer: Karl-Dieter Crisman, Benjamin Jones
Merged: sage-5.1.beta3
Issue created by migration from https://trac.sagemath.org/ticket/10164