Type inference depends on execution order but should not #45388

Open

schlichtanders opened this issue May 20, 2022 · 9 comments
Labels
compiler:inference Type inference

Comments

schlichtanders commented May 20, 2022

Hi there,

this issue was already discussed on Discourse. While that discussion brought some clarification, the problem could not be solved, so I am raising it as an issue here.

I am the author of ExtensibleEffects.jl and TypeClasses.jl and run into serious performance difficulties due to poor type inference. Since I have received multiple requests from the community asking whether ExtensibleEffects.jl could be made fast, I want to tackle these problems.

ExtensibleEffects.jl is an advanced functional package, so please bear with me if it is not obvious from the following example why you would ever want to do something like this. It is a minified version of the actual ExtensibleEffects code, and I am sorry that I was not able to simplify it any further so far. At least it is reproducible and fits into an issue.

My Motivation

If you want to understand the motivation for the example, this might help:

```julia
using ExtensibleEffects
using TypeClasses
using Test

vector_of_eff_of_vector = map(x -> noeffect([x]), [1, 20])
e1 = vector_of_eff_of_vector[1]
e2 = vector_of_eff_of_vector[2]

# some functional monadic helpers to work WITHIN the effects
mygoal(e1, e2) = @syntax_flatmap begin
    v1 = e1
    v2 = e2
    @pure [v1; v2]
end
```

```julia
julia> mygoal(e1, e2)
Eff(effectful=NoEffect{Vector{Int64}}([1, 20]), length(cont)=0)

julia> @inferred mygoal(e1, e2)
ERROR: return type ExtensibleEffects.Eff{NoEffect{Vector{Int64}}, Tuple{}} does not match inferred return type ExtensibleEffects.Eff

julia> @code_warntype mygoal(e1, e2)
# on the terminal this gives nice color output and will show that only the last step does not infer
```

The case boils down to something like the following:

```julia
function test_fails(e1, e2)
    combine(v1, v2) = [v1; v2]
    curried_combine(v1) = v2 -> combine(v1, v2)

    e1_f = map(curried_combine, e1)
    f_flatmap(f) = TypeClasses.map(v2 -> f(v2), e2)
    TypeClasses.flatmap(f_flatmap, e1_f)
end

@inferred test_fails(e1, e2)  # same as before
# ERROR: return type ExtensibleEffects.Eff{NoEffect{Vector{Int64}}, Tuple{}} does not match inferred return type ExtensibleEffects.Eff
@code_warntype test_fails(e1, e2)  # same as before
```

If this could be stabilized, much would be gained for ExtensibleEffects.jl.

Same code, two execution orders: one fails, the other infers

Here is the one which works:

```julia
using ExtensibleEffects
using TypeClasses
using Test

vector_of_eff_of_vector = map(x -> noeffect([x]), [1, 20])
e1 = vector_of_eff_of_vector[1]
e2 = vector_of_eff_of_vector[2]

function prepare_test(e1, e2)
    combine(v1, v2) = [v1; v2]
    curried_combine(v1) = v2 -> combine(v1, v2)

    e1_f = map(curried_combine, e1)
    f_flatmap(f) = TypeClasses.map(v2 -> f(v2), e2)
    f_flatmap, e1_f
end

f_flatmap, e1_f = prepare_test(e1, e2)
@inferred TypeClasses.flatmap(f_flatmap, e1_f)  # infers perfectly

function test_infers(e1, e2)
    f_flatmap, e1_f = prepare_test(e1, e2)
    TypeClasses.flatmap(f_flatmap, e1_f)
end

@inferred test_infers(e1, e2)  # infers perfectly
```

And here is the one which fails:

```julia
using ExtensibleEffects
using TypeClasses
using Test

vector_of_eff_of_vector = map(x -> noeffect([x]), [1, 20])
e1 = vector_of_eff_of_vector[1]
e2 = vector_of_eff_of_vector[2]

function prepare_test(e1, e2)
    combine(v1, v2) = [v1; v2]
    curried_combine(v1) = v2 -> combine(v1, v2)

    e1_f = map(curried_combine, e1)
    f_flatmap(f) = TypeClasses.map(v2 -> f(v2), e2)
    f_flatmap, e1_f
end

function test_infers(e1, e2)
    f_flatmap, e1_f = prepare_test(e1, e2)
    TypeClasses.flatmap(f_flatmap, e1_f)
end

@inferred test_infers(e1, e2)
# ERROR: return type ExtensibleEffects.Eff{NoEffect{Vector{Int64}}, Tuple{}} does not match inferred return type ExtensibleEffects.Eff

f_flatmap, e1_f = prepare_test(e1, e2)
@inferred TypeClasses.flatmap(f_flatmap, e1_f)
# ERROR: return type ExtensibleEffects.Eff{NoEffect{Vector{Int64}}, Tuple{}} does not match inferred return type ExtensibleEffects.Eff
```

It seems that a top-level call to TypeClasses.flatmap at the right place informs the compiler about things it usually does not have available (but should have available).


This drives me crazy 😄 I feel like a little child: 10 years of programming experience are not enough to solve this on my own. I am depending on you core Julia developers and hope someone recognizes what is going on here.

(tested on Julia 1.7.1 and Julia 1.8.0-beta3.4)

```julia
julia> versioninfo()
Julia Version 1.7.1
Commit ac5cc99 (2021-12-22 19:35 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, icelake-client)
```

```julia
julia> versioninfo()
Julia Version 1.8.0-beta3.4
Commit a4e69c5088 (2022-05-20 09:32 UTC)
Platform Info:
  OS: Linux (x86_64-unknown-linux-gnu)
  CPU: 8 × Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-13.0.1 (ORCJIT, icelake-client)
  Threads: 1 on 8 virtual cores
Environment:
  LD_LIBRARY_PATH = /run/opengl-driver/lib:/run/opengl-driver-32/lib:/usr/lib:/usr/lib32:/nix/store/0fih0yvy9lwxkaaci06gw0x1f5a5aqld-sane-config/lib/sane
```

Seelengrab commented May 20, 2022

It's probably due to Iterators.flatten usually making it very hard for the compiler to infer the result type with complex (nested?) iterators, which for your packages happens here. It potentially has to go arbitrarily deep on arbitrary iterators and just calls eltype on its wrapped iterator, so if any iterator in your chain of iterators is type unstable or the resulting union becomes too big, inference probably just gives up. Specifically, something like this kills inference:

```julia
julia> eltype((1, 2., ""))
Any
```

since that's what usually ends up in flatten as its iterator. map is eager and can thus be a little bit smarter, by widening the element type as it goes through the mapped iterators. As far as I can tell, that's the only difference between your two versions.

I had the same problem in PropCheck.jl with recursive flatten, which I "solved" by specifying the expected result type. It keeps any potential type instabilities localized and allows inference to keep functioning. Maybe a similar thing could work for you? I had the fortune that I could assume equal element types of all iterators and already knew what I expected ahead of time, which may not be the case for you, as I understand it.

Either way, inference is allowed to fail, so unless someone figures out how to make inference in general smarter for these cases or figures out how to fix Iterators.flatten in general, I don't think much will happen for this. 🤷

schlichtanders commented May 20, 2022

Hi @Seelengrab, that is definitely a good hint to keep in mind. However, as far as I understand the issue, it is not related to what is run here.

TypeClasses.flatmap is more like Base.map, with many different implementations, and the ones which use Iterators.flatten are not the ones active here.

The relevant implementation of TypeClasses.flatmap is the one for ExtensibleEffects.Eff
https://github.com/JuliaFunctional/ExtensibleEffects.jl/blob/v1.1.1/src/core.jl#L82

```julia
using ExtensibleEffects: Eff, Continuation
TypeClasses.flatmap(f, eff::Eff) = Eff(eff.effectful, Continuation(eff.cont.functions..., f))
```

Nothing to do with element types as far as I can see, just plain constructors and varargs.
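
For reference, a stripped-down, hypothetical sketch of the pattern in question (these are not the real ExtensibleEffects types): each step splats an existing tuple of functions into a new varargs constructor call, which is the kind of recursive constructor growth that inference's recursion limits can widen (see the recursion_relation workaround below).

```julia
# Hypothetical stand-in for Continuation, for illustration only.
struct Cont{Fs<:Tuple}
    functions::Fs
end
Cont(fs...) = Cont(fs)

# Each flatmap-like step re-wraps the old functions plus one more,
# producing a Cont with a longer tuple type parameter.
add_step(c::Cont, f) = Cont(c.functions..., f)
```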

schlichtanders (Author) commented:

@Seelengrab do you know someone who can help on this?

AriMKatz commented:

Maybe @Keno or @aviatesk?

N5N3 commented Jul 19, 2022

As a temporary fix, with Julia >= 1.7 you can define

```julia
which(ExtensibleEffects.Eff, Tuple{Any,ExtensibleEffects.Continuation}).recursion_relation = (@nospecialize(_...)) -> true
```

to turn off the recursion check and make inference succeed.
It's strange that TypeClasses.flatmap(f_flatmap, e1_f)'s inference fails if we run test_infers before, as IIRC we don't cache the inference result in this case.
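
A sketch of applying that workaround guarded by a version check (the recursion_relation field only exists on Julia 1.7+; whether to run this from a package's __init__ or a user script is left open here):

```julia
using ExtensibleEffects

@static if VERSION >= v"1.7"
    # Relax the recursion check for the Eff varargs constructor, as suggested above.
    m = which(ExtensibleEffects.Eff, Tuple{Any, ExtensibleEffects.Continuation})
    m.recursion_relation = (@nospecialize(_...)) -> true
end
```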

aviatesk (Member) commented:

Seems like it's the same issue as #35800 (is there any place where your package (explicitly or implicitly) uses Core.Compiler.return_type?)

schlichtanders (Author) commented:

In the example stated above I am using map, and map on Vectors implicitly uses Core.Compiler.return_type, I guess 👍
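
For context, a small illustration of where that implicit use shows up (Base.promote_op is essentially a wrapper around Core.Compiler.return_type; the exact call chain inside map is an implementation detail and an assumption here):

```julia
julia> map(x -> x + 1, Int[])   # element type of the empty result comes from inference
Int64[]

julia> Base.promote_op(x -> x + 1, Int)   # roughly the inference query behind it
Int64
```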

schlichtanders commented Jul 22, 2022

Also @inferred uses Core.Compiler.return_type I guess.

What I learned by reading through the issue you linked:
In general, using return_type before actual execution can lead to coarser inference. So if you can execute an example before using @inferred, that may actually help.
(And in Julia 1.8 the problem seems to be less pronounced anyway.)
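
In code, the observation amounts to something like this (no guarantee it changes the outcome on every Julia version):

```julia
using Test

mygoal(e1, e2)            # run the example once before querying inference
@inferred mygoal(e1, e2)  # per the observation above, this may now infer where a cold query did not
```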

@kshyatt kshyatt added the compiler:inference Type inference label Oct 5, 2022

schlichtanders (Author) commented:

I made another attempt at getting into the details of this problem and went a completely different route. Because it is unclear whether the underlying issue is the same or a different one, I created a new issue for it: #47694
