
Rethink Implicit Resolution Rules #5881

Closed · odersky opened this issue Feb 8, 2019 · 23 comments

@odersky
Contributor

odersky commented Feb 8, 2019

I have opened this issue as a place to discuss how we should adapt the rules for implicit resolution.

Contextual vs Type Scope Searches

The current spec says:

First, eligible are all identifiers x that can be accessed at the point of the method call without a prefix and that denote an implicit definition or an implicit parameter. An eligible identifier may thus be a local name, or a member of an enclosing template, or it may have been made accessible without a prefix through an import clause. If there are no eligible identifiers under this rule, then, second, eligible are also all implicit members of some object that belongs to the implicit scope of the implicit parameter's type, T.

This gives a priority: search the context first, and only if that fails, search the implicit scope of the type. Do we want to keep that? It prevents us from defining in a context a "fallback" rule that is less specific than the rules in the implicit scope of their types, since the "fallback" would be selected before the other implicits are even considered.
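As a concrete sketch of the problem (Scala 2 syntax; Show, Fallbacks and the instance names are illustrative, not from this issue):

trait Show[T]
object Show {
  // the "natural" instance, found in the implicit scope of Show[Int]
  implicit val showInt: Show[Int] = new Show[Int] {}
}

object Fallbacks {
  // a generic catch-all, intended only as a last resort
  implicit def showAny[T]: Show[T] = new Show[T] {}
}

object Demo {
  import Fallbacks._
  // The context is searched first, and showAny is eligible there, so it is
  // selected before Show.showInt in the implicit scope is even considered.
  val s = implicitly[Show[Int]] // resolves to showAny, not the more specific showInt
}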

Implicit Type Scope

The current spec includes the prefix p of a type reference p.T in the implicit scope of that type. This holds even for the package prefix, i.e. p could be a package with a top-level implicit in it. Referring to p.T would bring that implicit in scope. Do we want to keep that? Or is it too surprising / abusable as a mechanism?
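A sketch of the mechanism (Scala 2 syntax, where a package object stands in for a top-level implicit; all names are illustrative):

package p {
  class T
  trait Show[A]
}

package object p {
  // effectively a package-level implicit in p
  implicit val showT: Show[T] = new Show[T] {}
}

package client {
  object Client {
    // Note: no import of p._ anywhere in this scope.
    def render(t: p.T)(implicit s: p.Show[p.T]): String = "rendered"
    // Compiles: p is the prefix of the type reference p.T, so p's implicit
    // members (including showT) belong to the implicit scope of p.T.
    val result = render(new p.T)
  }
}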

Ranking

Do we want to change how implicits are ranked for disambiguation? If yes, how?

@odersky
Contributor Author

odersky commented Feb 9, 2019

/cc @milessabin

@odersky
Contributor Author

odersky commented Feb 11, 2019

See #5887. This applies the following changes to implicit resolution:

  1. nested implicits always take precedence over outer ones
  2. no more shadowing checks
  3. package prefixes are not considered.

(1) enables a simple solution to the local coherence / local consistency problem #4234 #2046 #4153.
Consider:

abstract class A {
  def show: String
}
class B extends A {
  def show = "B"
}
class C extends A {
  def show = "C"
}
object M {
  def f given B, C : String = {
    implied for A = the[B]
    the[A].show
  }
}

f has two givens for B and C which both extend A. So inferring an A inside f is ambiguous.
However, we can heal the ambiguity by defining a nested implied instance for A which takes precedence over the two parameters.

I believe this is quite workable in practice, and obviates the need for a more complex solution.
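For comparison, here is a sketch of the same example in the syntax that eventually shipped in Scala 3, where implied instances became given and the[...] became summon[...]:

abstract class A { def show: String }
class B extends A { def show = "B" }
class C extends A { def show = "C" }

object M {
  def f(using B, C): String = {
    given A = summon[B] // the nested given takes precedence over both parameters
    summon[A].show
  }
}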

@odersky
Contributor Author

odersky commented Feb 11, 2019

My tendency is to start with #5887 and refine it as follows:

  • Implicit search is scoped. Inner implicits take precedence over outer ones.
  • The scope corresponding to top-level imports(*) is special. If an implicit is searched
    for type T in this scope, we also consider all implicits in the implied scope of type T.
  • This replaces the previous sequence of contextual search, followed by type scope search.

That way, it would be possible to import low-priority fallback implicits that complement what is
found in the type scope.

(*) Note that this is not necessarily the outermost scope: There could be implicits defined directly in
enclosing packages.

@hrhino
Contributor

hrhino commented Feb 11, 2019

What's the motivation for no longer searching package prefixes?

A (I believe) legitimate use case is in mixed (Java/Scala) projects, where for whatever reason rewriting a Java class to Scala is infeasible, but one nevertheless wishes to put values in its implicit scope. Especially with the demise of package objects and the rise of top-level definitions (yay!), this is a useful technique.

@odersky
Contributor Author

odersky commented Feb 11, 2019

@hrhino The reason for no longer searching package prefixes is that implicits put into scope that way tend to be very surprising. A common use of a package-level implicit is as a global definition for a whole project, but it's generally intended to stay private to that project. Most people are very surprised to learn that the implicit leaks out with any type they define in the package.

The meta goal of many of the implicits related changes is to make it clearer to the user of a library where implicits are coming from.

@smarter
Member

smarter commented Feb 11, 2019

What's the motivation for no longer searching package prefixes?

See also scala/scala-dev#446

@abgruszecki
Contributor

abgruszecki commented Feb 12, 2019

Something which might be worth considering regarding local implicits being fallbacks to global ones: we already allow implicit (implied?) matches. If we added a similar mechanism that only searched the type scope, then it'd be possible to explicitly define a local fallback to a type-scope instance. That way, we also might end up with simpler resolution rules overall.

This hypothetical mechanism might look as follows:

inline implied localFallback: T = implied[type] match {
  case t: T => t
  case _ => default : T
}

@milessabin
Contributor

Implicit ranking should consider the implicit arguments of implicit methods when using specificity to order candidate implicits.

The relative ordering of a pair of candidate implicits should be computed as follows,

  1. The candidates are compared by their result type, the most specific by normal overload rules being selected (this is consistent with the current behaviour).
  2. If (1) is a tie then we compare the lengths of the implicit argument lists of the candidates (if any). The candidate with the longer argument list wins (this matches users' intuitions that methods with more implicit arguments are "more demanding", hence more specific).
  3. If (2) is a tie then we have a pair of candidates with implicit argument lists of equal length, both equally specific wrt their result type/the expected type. We now apply the full normal overloading resolution rules and choose the most specific.

This change makes combining implicits from multiple implicit scopes a great deal more straightforward, since we can now rely on more than just the result type of the implicit definition to guide implicit selection. With this change the following works as expected,

class Show[T](val i: Int)
object Show {
  def apply[T](implicit st: Show[T]): Int = st.i

  implicit val showInt: Show[Int] = new Show[Int](0)
  implicit def fallback[T]: Show[T] = new Show[T](1)
}

class Generic
object Generic {
  implicit val gen: Generic = new Generic
  implicit def showGen[T](implicit gen: Generic): Show[T] = new Show[T](2)
}

object Test extends App {
  assert(Show[Int] == 0)
  assert(Show[String] == 1)
  assert(Show[Generic] == 2) // showGen beats fallback due to longer argument list
}

In Scala 2 and Dotty the specificity of an implicit is computed only from its result type. This is partly responsible for a similar bug in each (Scala 2, Dotty) but, more importantly, is both against the spirit of the spec (which aspires to consistency with normal overload resolution) and contrary to most people's expectations: everyone I've asked expects that "more demanding" implicits should be preferred over "less demanding" implicits if they can be satisfied; many have been surprised that this isn't how scalac already behaves.

There's a pull request making this change to Scala 2 here, which gives some additional examples. It should be fairly straightforward to port to Dotty. Notably, the only thing in the Scala 2 community build that broke was shapeless, and the corresponding fix was very simple.

@milessabin
Contributor

@odersky

  • The scope corresponding to top-level imports(*) is special. If an implicit is searched
    for type T in this scope, we also consider all implicits in the implied scope of type T.
  • This replaces the previous sequence of contextual search, followed by type scope search.

That way, it would be possible to import low-priority fallback implicits that complement what is found in the type scope.

I'm very much in favour of this.

However, I don't think it's quite enough to deal with the imported low-priority fallback problem. The issue I see is that the problem is only solved if the import is at the top-level — any implicit imported at an inner scope will still override more specific implicits in the implicit scope. For example, in this case,

trait Show[T] {
  def show(t: T): String
}

class Foo
object Foo {
  implicit def showFoo: Show[Foo] = _ => "Pick me!"
}

object ThirdParty {
  implicit def showFallback[T]: Show[T] = _ => "Only if necessary"
}

If ThirdParty is imported at the top level then things will work as expected,

import ThirdParty._

object Test {
  implicitly[Show[Foo]].show(new Foo) // == "Pick me!"
}

because both showFoo and showFallback are found when searching the top-level scope, and showFoo beats showFallback on specificity. However, if the import is in an inner scope then showFallback will be selected,

object Test {
  import ThirdParty._

  implicitly[Show[Foo]].show(new Foo) // == "Only if necessary"
}

I don't think it's quite good enough to say "move the import to the top-level" in this sort of case, because it's reasonable to want to limit the scope in which such fallback implicits are visible.

We can fix this with a variant of import (import implicit), which imports implicits with the priority they would have if they were imported at the top-level. We could then write,

object Test {
  import implicit ThirdParty._ // implicits imported with top-level priority

  implicitly[Show[Foo]].show(new Foo) // == "Pick me!" (showFoo wins again)
}

and get the desired result.

There's a prototype of this against Scala 2 here. In addition to changing the priority of imported implicits, the prototype imports only implicits. I think that's actually quite desirable as well, but it's orthogonal to this comment.

@milessabin
Contributor

@odersky

  1. nested implicits always take precedence over outer ones
  2. no more shadowing checks
  3. package prefixes are not considered.

Big thumbs up to all of these.

Am I right in thinking that this means that the sets of available implicits, as you ascend from inner to outer scopes, are nested? If that's the case then I think we have a pretty good story for local consistency and scope-level instance caching.

@drdozer

drdozer commented Feb 13, 2019

I often run into issues where I have to add lots of extra typing or extra code to explicitly disambiguate implicits.

def withOrd[O] given(O: Ord[O]) = { ... }

Inside the body of withOrd, the only instance of Ord that will be found without extra help is O. To get other instances, you have to very carefully tell it what to do. This is compounded when there's an enclosing implicit-scoped Ord.

The behaviour I'd prefer (although I don't know if it is viable) is to first look at the things in the closest scope and attempt to typecheck with that instance. If the typechecking fails, back off and look in the wider scope.

@abgruszecki
Contributor

@drdozer you might want to experiment with using path-dependent types if the typical resolution does not work for you. For example:

class Ord {
  type T
}
def withOrd given (O: Ord): O.T = ???

This way, withOrd will always lock onto the lexically closest Ord. You can use blocks with explicit instances of Ord to select a specific instance for some scope. Note that it's still possible to search for an Ord with a specific T by using refinements:

def withIntOrd given (o: Ord { type T = Int }): Int = 0
val a = {
  implied for (Ord { type T = Int }) = new Ord { ... }
  {
    implied for (Ord { type T = String }) = new Ord { ... }
    (withOrd, withIntOrd) // (String, Int)
  }
}
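In the syntax that eventually shipped in Scala 3, the same trick might be sketched as follows (IntOrd, StringOrd and the given names are illustrative):

class Ord { type T }

type IntOrd    = Ord { type T = Int }
type StringOrd = Ord { type T = String }

def withOrd(using o: Ord): o.T = ???
def withIntOrd(using IntOrd): Int = 0

val a = {
  given intOrd: IntOrd = new Ord { type T = Int }
  {
    given stringOrd: StringOrd = new Ord { type T = String }
    // withOrd resolves to stringOrd (the more deeply nested given wins);
    // withIntOrd still finds intOrd, selected by the refinement type.
    (withOrd, withIntOrd) // (String, Int)
  }
}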

@odersky
Contributor Author

odersky commented Feb 14, 2019

The behaviour I'd prefer (although I don't know if it is viable) is to first look at the things in the closest scope and attempt to typecheck with that instance. If the typechecking fails, back off and look in the wider scope.

I believe that's what #5887 implements if you mean "typecheck the implicit candidate with the expected type".

@odersky
Contributor Author

odersky commented Feb 14, 2019

Am I right thinking that this means that the sets of available implicits as you ascend from inner to outer scopes are nested? If that's the case then I think we have a pretty good story for local consistency and scope level instance caching.

Yes, they are nested.

@odersky
Contributor Author

odersky commented Feb 14, 2019

I'm very much in favour of this.

However, I don't think it's quite enough to deal with the imported low-priority fallback problem. The issue I see is that the problem is only solved if the import is at the top-level — any implicit imported at an inner scope will still override more specific implicits in the implicit scope. For example, in this case,

I agree that it is restrictive. However, if we change that, we lose the nice nesting property of implicit contexts that helps caching. So I'd like to wait and see for the moment whether we hit use cases that can't be worked around that way.

@odersky
Contributor Author

odersky commented Feb 14, 2019

About specificity: It could be good to change the rules. However, we cannot appeal to analogy with overloading. In fact, if anything, overloading behaves in the opposite way to what is proposed.
Example:

  def foo[T](implicit gen: Generic): Show[T] = new Show[T](2)
  def foo[T]: Show[T] = new Show[T](1)

  println(foo[Int])

Here, both scalac and Dotty will say "ambiguous". Here's Dotty's error message:

23 |  println(foo[Int])
   |          ^^^
   |Ambiguous overload. The overloaded alternatives of method foo in object Test with types
   | [T] => Show[T]
   | [T](implicit gen: Generic): Show[T]
   |both match type arguments [Int] and expected type Any

If we change the implicit gen parameter to a normal parameter, both scalac and Dotty will pick the second version of foo. That means the shorter parameter list wins, not the longer one.
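A sketch of that variant (reusing the Show and Generic definitions from the earlier example; the wrapping object is assumed):

object Test {
  def foo[T](gen: Generic): Show[T] = new Show[T](2)
  def foo[T]: Show[T] = new Show[T](1)

  // Both scalac and Dotty pick the second foo here:
  // the alternative with the shorter parameter list wins.
  println(foo[Int])
}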

@odersky
Contributor Author

odersky commented Feb 14, 2019

#5925 contains changes to both overloading and implicit search that now take the number of implicit parameters into account.

@milessabin
Contributor

@odersky

I agree that it is restrictive. However, if we change that we lose the nice nesting properties of implicit contexts that helps caching.

I've pretty much convinced myself that this isn't the case. Can you show me an example where an import implicit would break nesting?

@drdozer

drdozer commented Feb 14, 2019

@odersky

I believe that's what #5887 implements if you mean "typecheck the implicit candidate with the expected type".

Yeah! It works now as I'd expect :) Wonderful! That will make it far easier to define instances by composing other instances.

@odersky
Contributor Author

odersky commented Feb 15, 2019

@milessabin

If an import implicit provides a fallback with a lower priority than implicits in the type scope, we can't simply go inside-out to find the possible implicits: inside-out goes from highest to lowest priority. Or am I misunderstanding what you propose?

@milessabin
Contributor

My proposal (i.e. the one I implemented in the scalac PR) is that import implicit imports as if at the outer level, from the point of view of the scope being imported into.

@odersky
Contributor Author

odersky commented Feb 21, 2019

My proposal (i.e. the one I implemented in the scalac PR) is that import implicit imports as if at the outer level, from the point of view of the scope being imported into.

So the lexical scoping is not the same as the conceptual scoping here?

@milessabin
Contributor

So the lexical scoping is not the same as the conceptual scoping here?

I'd say: the priority ordering is different from the lexical nesting. We have a precedent for that wrt specificity.

odersky closed this as completed Apr 5, 2022