
"Refreshing" functions by re-evaling their definition leads to better performance #28683

Closed
KristofferC opened this issue Aug 16, 2018 · 6 comments
Labels
performance Must go faster

Comments

@KristofferC
Member

julia> function printfd(n)
           open("/dev/null", "w") do io
               for i = 1:n
                   print(io, i, UInt32(i+1))
               end
           end
       end
printfd (generic function with 1 method)

julia> @btime printfd(1000)
  511.730 μs (5494 allocations: 226.78 KiB)

julia> @btime printfd(1000)
  512.749 μs (5494 allocations: 226.78 KiB)

julia> @eval Base begin
       show(io::IO, n::Signed) = (write(io, string(n)); nothing)
       print(io::IO, n::Unsigned) = print(io, string(n))
       end
print (generic function with 52 methods)

julia> @btime printfd(1000)
  222.553 μs (5005 allocations: 219.14 KiB)

julia> @btime printfd(1000)
  221.671 μs (5005 allocations: 219.14 KiB)

This is consistent. The benchmarks were run with a source build of Julia.
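One way to narrow this down is to compare the optimized IR before and after the re-eval; if the compiled code is identical, the difference must come from the runtime. A minimal sketch using Julia's reflection macros (the exact call signature here is illustrative, not taken from the issue):

```julia
# Inspect the optimized, inferred IR for the inner print call.
# Running this before and after the `@eval Base` block and diffing
# the output shows whether the generated code itself changed.
@code_typed print(devnull, UInt32(1))
```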

@KristofferC KristofferC added the performance Must go faster label Aug 16, 2018
@MikeInnes
Member

I've noticed this particularly with generated functions. In my case, it seemed like type inference was happening but no inlining/optimisation, for some reason.

@KristofferC
Member Author

Could this be due to where things are placed in the method table and how fast the dynamic dispatch calls find them?
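The method-table placement can be inspected directly with Julia's built-in reflection (a sketch; the call used here is only an example, not from the benchmark above):

```julia
# `methods` lists the entries in the method table for `print`,
# showing where a re-evaled definition lands relative to the original.
methods(print)

# `@which` reports which method dynamic dispatch selects for a call.
@which print(devnull, UInt32(1))
```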

@mbauman
Member

mbauman commented Aug 24, 2018

Shot in the dark: if there are any type instabilities it might be that inference is getting a "luckier" first compilation sequence after redefinition and ending up with a better inferred result than it "should".

@KristofferC
Member Author

I don't notice any difference in the inferred code, so it seems this is something in the runtime...

@yuyichao
Contributor

yuyichao commented Aug 24, 2018 via email

@KristofferC
Member Author

This specific case seems fixed now...

julia> using BenchmarkTools

julia> function printfd(n)
           open("/dev/null", "w") do io
               for i = 1:n
                   print(io, i, UInt32(i+1))
               end
           end
       end

printfd (generic function with 1 method)

julia> @btime printfd(1000)
  197.087 μs (5005 allocations: 219.14 KiB)

julia> @btime printfd(1000)
  196.564 μs (5005 allocations: 219.14 KiB)

julia> @eval Base begin
       show(io::IO, n::Signed) = (write(io, string(n)); nothing)
       print(io::IO, n::Unsigned) = print(io, string(n))
       end

print (generic function with 52 methods)

julia> @btime printfd(1000)
  205.369 μs (5005 allocations: 219.14 KiB)

julia> @btime printfd(1000)
  205.492 μs (5005 allocations: 219.14 KiB)
