decouple inference from Base #11274
Conversation
```diff
@@ -1164,7 +1164,7 @@ foo4129(a::Baz4129,b::Bar41291,args...) = foo4129(a,b.f,b,args...)
 foo4129(a::Baz4129,b::Bar41292,args...) = foo4129(a,b.f,b,args...)
 foo4129(a::Baz4129,args...) = foo4129(a,a.b,args...)

-@test isa(foo4129(Baz4129(Bar41291(Foo4129())),1,2), Tuple{Baz4129,Bar4129,Foo4129,Int,Int})
+#@test isa(foo4129(Baz4129(Bar41291(Foo4129())),1,2), Tuple{Baz4129,Bar4129,Foo4129,Int,Int})
```
nevermind, i made a silly typo
💯 Will make it much easier to make disruptive changes, where getting inference working can be the main bottleneck. I bet this speeds …
strange that julia-debug is segfaulting on the travis branch build but not the pr merge build?
Some seemingly unrelated changes to inlining costs here? Might be nice to not let …
i removed support for floating point numbers and rewrote the algorithm in terms of integers. it should give the same answers however.
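For readers following along, a minimal sketch of the idea (not the actual inference.jl code; COST_SCALE and the function here are made up for illustration):

```julia
# Pre-scale every weight by a fixed factor so cost accumulation becomes
# exact integer arithmetic instead of Float64 sums like `cost += 0.25`.
const COST_SCALE = 1000            # hypothetical scaling factor

# old style:  cost += 0.25   (Float64)
# new style:  cost += 250    (Int, pre-multiplied by COST_SCALE)

function inline_worthy(node_costs::Vector{Int}, budget::Int)
    total = 0
    for c in node_costs
        total += c                 # exact; no rounding drift across platforms
    end
    return total <= budget * COST_SCALE
end
```

Since the weights and the threshold are scaled by the same factor, every comparison comes out the same as before, which is why the answers should not change.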
good point. in general, we might want to actually distribute the inference.ji file. although bootstrapping off an existing sys.ji file may also work.
Oh. Right. Not so unrelated then.
Looks like this cuts a good 2.5 minutes off each AppVeyor build. Not too shabby.
Had a quick look. The inference changes look reasonable (i.e. no semantic changes). I'm mostly afraid of all the potential places where it could be assumed that whatever the top module is, it is at least always the same. The possibilities of more flexible inference and faster bootstrap are very enticing however.
Is there any way to know about the top module assumptions other than merging and seeing what happens? Btw, I've been in favor of changing the top module from …
```diff
@@ -1282,27 +1282,7 @@ jl_value_t *jl_static_eval(jl_value_t *ex, void *ctx_, jl_module_t *mod,
     jl_value_t *f = jl_static_eval(jl_exprarg(e,0),ctx,mod,sp,ast,sparams,allow_alloc);
     if (f && jl_is_function(f)) {
         jl_fptr_t fptr = ((jl_function_t*)f)->fptr;
-        if (fptr == &jl_apply_generic) {
```
What's up with this case getting removed?
this was support for a pre-0.1 deprecation from when ccall could only handle constant pointers. i removed jl_base_module, so i figured it was sensible to remove this special case also.
Ok, can this go on master to cut down the diff?
Sure. Almost all of this diff is valid independent of moving inference itself, so I could rearrange it into many commits, or leave it as one.
do you have a preference, or can i just merge this?
I'd rather this be split out as a separate commit to keep commits atomic.
how's this now, better?
i think so anyways.
Yes, thanks.
I quite like the granularity now, and the really descriptive commit messages are awesome too.
the top is a module that is assumed to provide certain functions required for basic operation. there is no documented list of these, but there are a few in inference.jl itself (to provide basic constant folding for some important operations) and a few in julia-parser (such as unsafe_cconvert). there is some inconsistency in that sometimes it is used to refer to functionality that is in Core, sometimes to functionality that must be provided by Base for the base language spec to make sense, and sometimes to crucial functions that inference needs to optimize. i think this change doesn't actually need to alter that situation very much. it does however allow for the existence of multiple independent top modules, which will hopefully make it clearer how to distinguish between base and top in the future.
(force-pushed from bd96e25 to 3218a0f)
travis broken because of 7ec501d6#commitcomment-11228450
Is …
Yes. And it looks like this (and #11343) are ready to merge.
+💯
Odd to have a warning during normal expected operation. That'll confuse people.
this was a deprecation to support efficient code generation for ccall(dlsym(x,dlopen(y)),...) from pre-0.1. this syntax is not encouraged and has not been in use for several years.
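For context, a hedged sketch of the two ccall styles in question (standard Julia; libm and cos are chosen only for illustration, and dlopen/dlsym live in Base.Libdl in the 0.4 era):

```julia
using Base.Libdl

# the discouraged pre-0.1 pattern the removed special case existed to
# optimize: compute a function pointer via dlsym/dlopen, then ccall it
p = dlsym(dlopen("libm"), :cos)
ccall(p, Float64, (Float64,), 1.0)

# the encouraged form: pass ccall the (symbol, library) pair directly,
# so code generation can resolve the pointer itself
ccall((:cos, "libm"), Float64, (Float64,), 1.0)
```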
making inference separate from base required modifying the definition of TopNode to be more general, since Base may not even exist at points and it is desirable for Core.Inference to be wholly separate from Base. rather than special-casing Base, this introduces the concept that any module can indicate that it is valid as a top module and will define appropriate functions, as expected by the julia parser, to make this legal. At the start of the module definition, modules can ccall jl_set_istopmod(isprimary) to indicate this. The primary top module will be used to resolve the top module for any code that is not contained in another top module. Only one module can be the primary top module at a time, typically it will be Base.
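As a concrete illustration of that mechanism, here is a minimal sketch assuming 0.4-era syntax (Void rather than Nothing) and the one-argument signature the commit message describes; the module name is hypothetical:

```julia
module MyTop

# declare this module a top module; `true` marks it as the *primary* one
# (typically only Base does this -- a non-primary top module such as
# Core.Inference would pass false)
ccall(:jl_set_istopmod, Void, (Bool,), true)

# ... definitions the parser expects every top module to provide ...

end
```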
…ed for floating point numbers in inference.
… and t_ffunc::Array (for builtin functions). this removes the dependence on being able to serialize ObjectIdDict from the inference code.
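A hedged sketch of what the dict-to-array change could look like (the entry layout and helper names are made up for illustration); the point is that an Array of tuples serializes trivially, while an ObjectIdDict keyed by function objects does not:

```julia
const t_ffunc = Any[]              # (f, minarg, maxarg, tfunc) entries

# register the argument-count bounds and type function for a builtin
function add_tfunc(f, minarg::Int, maxarg::Int, tf)
    push!(t_ffunc, (f, minarg, maxarg, tf))
end

# lookup is a linear scan over the (small) table of builtins
function tfunc_for(f)
    for entry in t_ffunc
        entry[1] === f && return entry
    end
    return nothing
end
```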
YOLO
YOMO! (You Only Merge Once)
We should really get rid of that warning.
I notice that when one is messing with files that are necessary for inference (e.g., …
Maybe if …
It's more correct, although rarely necessary (unless it impacts the results of inference). But I didn't altogether mind discouraging altering of those files, and even eventually shrinking them further :)
These were broken by JuliaLang#11274 but no one noticed. Fixes JuliaLang#13350.
having inference inside Base made it hard to modify and slow to compile the sysimg, since bootstrapping made two complete passes over the code. additionally, it made inference vulnerable to sabotage -- intentional or unintentional -- by the user's program.
to combat those issues, this creates a copy of the essential code required for running inference into Core.Inference. this allows any part of the system (esp. Base) around inference to change, and even be recompiled or monkey-patched, without impacting the core inference algorithms.
@carnaval this also implies that you can build sysimg.jl without Core.Inference present, and that you can change the inference module in use at runtime.
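As a rough illustration of the runtime-swap point, a sketch in 0.4-era syntax; jl_set_typeinf_func is a real entry point in the Julia runtime for registering the inference function, but whether this PR wires things up exactly this way is an assumption, and the module and function bodies here are made up:

```julia
module MyInference

function typeinf_ext(args...)
    # ... an alternative inference implementation ...
end

# tell the runtime to call this function whenever it needs type inference
ccall(:jl_set_typeinf_func, Void, (Any,), typeinf_ext)

end
```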
inspired by @ihnorton's subversive activities in #11148