Rethink Implicit Resolution Rules #5881
Comments
/cc @milessabin

---
See #5887. This applies the following changes to implicit resolution:

(1) enables a simple solution to the local coherence / local consistency problem #4234 #2046 #4153.

```scala
abstract class A {
  def show: String
}
class B extends A {
  def show = "B"
}
class C extends A {
  def show = "C"
}

object M {
  def f given B, C : String = {
    implied for A = the[B]
    the[A].show
  }
}
```

I believe this is quite workable in practice, and obviates the need for a more complex solution.

---
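For reference, the same pattern can be written as a runnable sketch in current Scala 3 syntax, where `given`/`summon` replace the older `implied`/`the` spellings used above. The explicit `(using b: B, c: C)` clause and the given's name `a` are assumptions about how the old syntax translates.

```scala
abstract class A {
  def show: String
}
class B extends A {
  def show = "B"
}
class C extends A {
  def show = "C"
}

object M {
  def f(using b: B, c: C): String = {
    // A locally defined given sits in a more nested scope than the
    // parameters, so it resolves the would-be B/C ambiguity for A.
    given a: A = b
    summon[A].show
  }
}
```

Calling `M.f(using new B, new C)` should then select the local given and return `"B"`.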
My tendency is to start with #5887 and refine it as follows:

That way, it would be possible to import low-priority fallback implicits that complement what is

(*) Note that this is not necessarily the outermost scope: There could be implicits defined directly in

---
What's the motivation for no longer searching package prefixes? A (I believe) legitimate use case is in mixed (Java/Scala) projects, where for whatever reason rewriting a Java class to Scala is infeasible, but one nevertheless wishes to put values in its implicit scope. Especially with the demise of package objects and the rise of top-level definitions (yay!), this is a useful technique.

---
@hrhino The reason for no longer searching package prefixes is that implicits put into scope that way tend to be very surprising. A common use of a package-level implicit is as a global definition for a whole project, but it's generally intended to stay private to that project. Most people are very surprised to learn that the implicit leaks out with any type they define in the package. The meta goal of many of the implicit-related changes is to make it clearer to the user of a library where implicits are coming from.

---
See also scala/scala-dev#446

---
Something which might be worth considering regarding local implicits being fallbacks to global ones: we already allow implicit (implied?) matches. If we added a similar mechanism that only searched the type scope, then it'd be possible to explicitly define a local fallback to a type-scope instance. That way, we also might end up with simpler resolution rules overall. This hypothetical mechanism might look as follows:

```scala
inline implied localFallback: T = implied[type] match {
  case t: T => t
  case _    => default : T
}
```

---
Implicit ranking should consider the implicit arguments of implicit methods when using specificity to order candidate implicits. The relative ordering of a pair of candidate implicits should be computed as follows,

This change makes combining implicits from multiple implicit scopes a great deal more straightforward, since we can now rely on more than just the result type of the implicit definition to guide implicit selection. With this change the following works as expected,

```scala
class Show[T](val i: Int)
object Show {
  def apply[T](implicit st: Show[T]): Int = st.i
  implicit val showInt: Show[Int] = new Show[Int](0)
  implicit def fallback[T]: Show[T] = new Show[T](1)
}

class Generic
object Generic {
  implicit val gen: Generic = new Generic
  implicit def showGen[T](implicit gen: Generic): Show[T] = new Show[T](2)
}

object Test extends App {
  assert(Show[Int] == 0)
  assert(Show[String] == 1)
  assert(Show[Generic] == 2) // showGen beats fallback due to longer argument list
}
```

In Scala 2 and Dotty the specificity of an implicit is computed only from its result type. This is partly responsible for a similar bug in each (Scala 2, Dotty) but, more importantly, is both against the spirit of the spec (which aspires to consistency with normal overload resolution) and contrary to most people's expectations: everyone I've asked expects that "more demanding" implicits should be preferred over "less demanding" implicits if they can be satisfied; many have been surprised that this isn't how scalac already behaves. There's a pull request making this change to Scala 2 here which gives some additional examples. It should be fairly straightforward to port to Dotty. Notably, the only thing in the Scala 2 community build that broke was shapeless, and the corresponding fix was very simple.

---
I'm very much in favour of this. However, I don't think it's quite enough to deal with the imported low-priority fallback problem. The issue I see is that the problem is only solved if the import is at the top-level — any implicit imported at an inner scope will still override more specific implicits in the implicit scope. For example, in this case,

```scala
trait Show[T] {
  def show(t: T): String
}

class Foo
object Foo {
  implicit def showFoo: Show[Foo] = _ => "Pick me!"
}

object ThirdParty {
  implicit def showFallback[T]: Show[T] = _ => "Only if necessary"
}
```

If the import is at the top level,

```scala
import ThirdParty._

object Test {
  implicitly[Show[Foo]].show // == "Pick me!"
}
```

but with the import at an inner scope,

```scala
object Test {
  import ThirdParty._
  implicitly[Show[Foo]].show // == "Only if necessary"
}
```

because both `showFoo` and `showFallback` are candidates and the one imported in the nearer scope takes priority. I don't think it's quite good enough to say "move the import to the top-level" in this sort of case, because it's reasonable to want to limit the scope in which such fallback implicits are visible. We can fix this with a variant of `import`,

```scala
object Test {
  import implicit ThirdParty._ // implicits imported with top-level priority
  implicitly[Show[Foo]].show   // == "Pick me!" // showFoo wins again
}
```

and get the desired result. There's a prototype of this against Scala 2 here. In addition to changing the priority of imported implicits, the prototype imports only implicits. I think that's actually quite desirable as well, but it's orthogonal to this comment.

---
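In current Scala 3 (where `import ThirdParty.given` imports only givens, similar in spirit to the "imports only implicits" behaviour described above, but without any priority change), the inner-import case can be reproduced as follows. This is a sketch with the names carried over from the example:

```scala
trait Show[T] {
  def show(t: T): String
}

class Foo
object Foo {
  given Show[Foo] = _ => "Pick me!"
}

object ThirdParty {
  given fallback[T]: Show[T] = _ => "Only if necessary"
}

object Test {
  import ThirdParty.given
  // The imported given is found in the lexical scope, which is searched
  // before Foo's companion, so the generic fallback wins here.
  def result: String = summon[Show[Foo]].show(new Foo)
}
```

That is, `Test.result` yields `"Only if necessary"`: the problem this comment describes is still visible with a plain `given` import.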
Big thumbs up to all of these. Am I right in thinking that this means that the sets of available implicits as you ascend from inner to outer scopes are nested? If that's the case then I think we have a pretty good story for local consistency and scope level instance caching.

---
I often run into issues where I have to add in lots of extra typing or extra code to explicitly disambiguate implicits.

```scala
def withOrd[O] given (O: Ord[O]) = { ... }
```

Inside the body of `withOrd`,

The behaviour I'd prefer (although I don't know if it is viable) is to first look at the things in the closest scope and attempt to typecheck with that instance. If the typechecking fails, back off and look in the wider scope.

---
@drdozer you might want to experiment with using path-dependent types if the typical resolution does not work for you. For an example:

```scala
class Ord {
  type T
}

def withOrd given (O: Ord): O.T = ???
```

This way,

```scala
def withIntOrd given (o: Ord { type T = Int }): Int = 0

val a = {
  implied for (Ord { type T = Int }) = new Ord { ... }
  {
    implied for (Ord { type T = String }) = new Ord { ... }
    (withOrd, withIntOrd) // (String, Int)
  }
}
```

---
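A runnable variant of this trick in current Scala 3 syntax. It is a sketch under a few assumptions: a `value` member is added so the methods can return something concrete, and named concrete subclasses (`IntOrd`, `StringOrd`) stand in for the anonymous `implied` instances above:

```scala
trait Ord {
  type T
  def value: T // added so the sketch can return a concrete result (assumption)
}

class IntOrd extends Ord {
  type T = Int
  def value = 1
}
class StringOrd extends Ord {
  type T = String
  def value = "s"
}

// The result type is path-dependent on whichever instance is resolved.
def withOrd(using o: Ord): o.T = o.value

// This variant only accepts instances whose T is Int.
def withIntOrd(using o: Ord { type T = Int }): Int = o.value

object Demo {
  val pair: (String, Int) = {
    given outer: IntOrd = IntOrd()
    {
      given inner: StringOrd = StringOrd()
      // withOrd picks the more nested StringOrd; withIntOrd can only
      // be satisfied by the IntOrd from the enclosing scope.
      (withOrd, withIntOrd)
    }
  }
}
```

So `Demo.pair` is `("s", 1)`: the refinement on `withIntOrd`'s parameter selects the matching instance even when a different `Ord` is closer in scope.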
I believe that's what #5887 implements if you mean "typecheck the implicit candidate with the expected type".

---
Yes, they are nested.

---
I agree that it is restrictive. However, if we change that we lose the nice nesting properties of implicit contexts that help caching. So I'd like to wait and see for the moment whether we hit use cases that can't be worked around that way.

---
About specificity: It could be good to change the rules. However, we cannot appeal to analogy with overloading. In fact, if anything, overloading behaves in the opposite way to what is proposed.

```scala
def foo[T](implicit gen: Generic): Show[T] = new Show[T](2)
def foo[T]: Show[T] = new Show[T](1)

println(foo[Int])
```

Here, both scalac and Dotty will say "ambiguous". Here's Dotty's error message:

If we change the implicit

---
#5925 contains changes to both overloading and implicit search that now take the number of implicit parameters into account.

---
I've pretty much convinced myself that this isn't the case. Can you show me an example where an

---
If an

---
My proposal (i.e. the one I implemented in the scalac PR) is that

---
So the lexical scoping is not the same as the conceptual scoping here?

---
I'd say: the priority ordering is different from the lexical nesting. We have a precedent for that wrt specificity.

---
I have opened this issue as a place to discuss how we should adapt the rules for implicit resolution.
Contextual vs Type Scope Searches
The current spec says:
This gives a priority: Search in context first, only if that fails search in the implicit scope of the type. Do we want to keep that? It prevents us from defining in a context a "fallback" rule that is less specific than rules in the implicit scope of their types, since the "fallback" would be selected before the other implicits are even considered.
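The priority in question can be seen in a small current-Scala 3 sketch (the names `Render`, `Box`, and `fallback` are illustrative): a less specific given in the context wins over a more specific one in the companion, because the companion is only consulted when the context search fails.

```scala
trait Render[T] {
  def apply(t: T): String
}

class Box
object Box {
  // In the implicit scope (companion object) of Box.
  given Render[Box] = _ => "companion"
}

object Demo {
  // A generic "fallback" in the context.
  given fallback[T]: Render[T] = _ => "context fallback"

  // The context search succeeds with fallback[Box], so the more
  // specific given in Box's companion is never considered.
  def result: String = summon[Render[Box]].apply(new Box)
}
```

Here `Demo.result` is `"context fallback"`, illustrating why a contextual fallback shadows everything in the type's implicit scope under the current rule.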
Implicit Type Scope
The current spec includes the prefix `p` of a type reference `p.T` in the implicit scope of that type. This holds even for the package prefix, i.e. `p` could be a package with a toplevel implicit in it. Referring to `p.T` would bring that implicit in scope. Do we want to keep that? Or is it too surprising / abusable as a mechanism?

Ranking

Do we want to change how implicits are ranked for disambiguation? If yes, how?