
Spec: Do a subtyping check when decoding #168

Merged (13 commits, Apr 22, 2021)

Conversation

@nomeata (Collaborator) commented Jan 20, 2021

this introduces a subtyping check

  • before even starting to decode
  • when decoding opt values.

An implementation can probably pre-compute something at first, and then
at each opt occurrence quickly look which way to go…

With these changes, the coercion relation never fails on well-typed
inputs, and it is anyways right-unique.
I am wondering if it makes sense to rewrite it as a family of functions,
indexed by the input and output types (like Andreas had it originally).

This fixes #141, I hope.

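A rough sketch of the "pre-compute something, then look up at each `opt` occurrence" idea from the description above, in Rust. All names and the type representation are invented for illustration; this is not the spec's algorithm nor any existing implementation's API. The idea: run the subtype check once over the two type tables before touching the value part, record a verdict for every reachable pair of type indices, and let the decoder's `opt` case consult that table in constant time.

```rust
use std::collections::HashMap;

// Hypothetical, pared-down representation (not the spec's): a type is an index
// into its type table, and one subtyping pass before decoding fills a verdict
// table for every (wire type, expected type) pair the decoder can reach.
type TypeIdx = usize;

struct SubtypeCache {
    verdicts: HashMap<(TypeIdx, TypeIdx), bool>,
}

impl SubtypeCache {
    fn holds(&self, wire: TypeIdx, expected: TypeIdx) -> bool {
        self.verdicts.get(&(wire, expected)).copied().unwrap_or(false)
    }
}

// The decoder's `opt` case then reduces to a single lookup:
//
//     if cache.holds(wire_payload, expected_payload) {
//         // decode the payload and wrap it as `opt …`
//     } else {
//         // skip the payload bytes and produce `null`
//     }
```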
@rossberg (Contributor) left a comment

Cool, thanks!

Yeah, at this point the functional specification would probably be preferable again -- more compact and easier to grok for non-PLT-initiates. But we can leave that for later.

Co-authored-by: Andreas Rossberg <rossberg@dfinity.org>
@nomeata (Collaborator, Author) commented Jan 20, 2021

I don’t think I’d mind rewriting it right away. OTOH, the current formulation does its job… we’ll see.

Are you fine with requiring a subtyping check for the whole message to begin with? This means that vec {} : vec int no longer decodes at type vec bool? (Probably a good thing.)

@rossberg (Contributor):

Seems reasonable, but would the alternative of doing a check on each empty vector and on variants be equivalent?

@nomeata (Collaborator, Author) commented Jan 20, 2021

Seems reasonable, but would the alternative of doing a check on each empty vector and on variants be equivalent?

Then you’d also have to do it on func and service references. Essentially, every time a constituent type of the type doesn't have a manifest value that it corresponds to.

@rossberg (Contributor):

Right, but specification-wise, maybe that is a more natural way to formulate it? Hm, and even practically, perhaps it might save you from running any check in many cases.

@nomeata (Collaborator, Author) commented Jan 20, 2021

I think I find it much easier to assume that the coercion function/relation only works on input that has already passed the subtyping check. And yes, better a single check than many (in the case of [[], [],[]] for example)

@rossberg (Contributor):

An implementation could of course cache the pairs of types already checked (which it wants to do anyway for recursion), so I think it should be less work rather than more. But I don't mind going the other way for the spec.

@nomeata (Collaborator, Author) commented Jan 26, 2021

I wasn't sure if we’d still need to fix #151, but since that issue said “so maybe
rewriting this as a partial coercion function would be helpful” I just rewrote this PR to make coercion a total function indexed by the actual and expected types. Seems simpler this way?

@nomeata (Collaborator, Author) commented Jan 27, 2021
nomeata added a commit that referenced this pull request Jan 27, 2021
this models #168 in Coq. So far so good.

The crucial bit will be connecting this to the IDL Soundness
formalization. That requires
 * adding a function reference type
 * proving a subtyping-compositionality lemma, namely
   ```
   If t1 <: t2 and s1 in t1 <: s2 in t2 then s1 <: s2.
   ```

That will be no small undertaking. I will probably remove the other
variants in the Coq file then?
@chenyan-dfinity (Contributor) left a comment

LGTM. Do we want to bump the spec version as well?

@nomeata (Collaborator, Author) commented Feb 2, 2021

I think a minor bump at least.

I am also a bit torn about the role of the test data – I consider them part of the spec, but if we update them now we break the rust CI (it’s the downside of monorepos enforcing lockstep development of components that are trailing each other). But I guess we can be pragmatic and update the test data when we update the rust implementation.

@chenyan-dfinity (Contributor):

if we update them now we break the rust CI

Ah, right. Maybe we can update the test in comments, and uncomment them when we are ready?

@nomeata (Collaborator, Author) commented Feb 3, 2021

The problem is that writing these tests by hand, without any implementation to cross-check them, will likely just cause them to be riddled with typos in the binary parts…

Let’s just merge this with a version bump, and maybe add a version comment to all tests?

@nomeata marked this pull request as ready for review, February 3, 2021 08:48
@nomeata requested a review from rossberg, February 3, 2021 08:48
@nomeata (Collaborator, Author) commented Feb 3, 2021

@rossberg would you mind re-reviewing? I rewrote it with a coercion function indexed by t1 <: t2.

@rossberg (Contributor) left a comment

Looks good modulo a few minor things.

I would like to suggest an alternative notation, however. Following the R(v : t) style that the doc uses for serialisation functions, we could write c[t <: t'](v) as C(v : t <: t').

spec/Candid.md Outdated

-------------------
null ~> null : null
c[<t> <: <t>](x) = x for every <t> ∈ <numtype>, bool, text, null
Contributor:

For hygiene, I think it's safer to expand the def for each case, so that you can limit the argument value to the right shapes as well.

```

* Higher-order soundness of subtyping
See <./IDL-Soundness.md>, with the following instantiations:
```
s1 ~> s2 ≔ s2 <: s1
t1 <: t2 ≔ (∀ v. v : t1 ⇒ v ~> _ : t2 )
t1 <: t2 ≔ t1 <: t2
Contributor:

I suppose you could just drop this line?

@nomeata (Collaborator, Author) commented Feb 3, 2021:

Don’t think so, the left hand side is “parameters of IDL-Soundness”, the right hand side is “stuff defined here”. The punning is mostly accidental (and if we were to distinguish “desired subtyping” and “value subtyping”, it might matter.)

@nomeata (Collaborator, Author) commented Feb 3, 2021

that the doc uses for serialisation functions, we could write c[t <: t'](v) as C(v : t <: t').

I’d really like to have a notation that shows the stage distinction, i.e. “for every t and t', with t <: t' you get a function t -> t'”. Or, phrased dependently-typed: c : forall t1 t2, t1 <: t2 -> t1 -> t2.

This matters because all the type-level stuff (including subtype checks) happens during computation of the value function; once we have partially applied c[t <: t'], there is no more type analysis needed.

That said, a nicer notation would be nice. If this were LaTeX I’d write C_{t <: t'} for the function, but I'm not sure that looks good in ASCII. [[ t1 <: t2 ]] maybe?
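For illustration, here is what that stage distinction could look like concretely, as a hedged Rust sketch with made-up miniature types (nothing below is the spec's notation or a real implementation, and the refined rules for opt, null and reserved discussed later in this thread are deliberately elided): the type analysis, including the subtype decision for the `opt` case, happens once while the coercion is built; the closure that comes back only ever inspects values.

```rust
// Made-up miniature types for illustration only. The point is the staging:
// `coercion` does all the type analysis once, and the closure it returns
// never looks at types again.

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Nat,
    Bool,
    Opt(Box<Ty>),
}

#[derive(Clone, Debug, PartialEq)]
enum Val {
    Nat(u64),
    Bool(bool),
    Null,
    Opt(Box<Val>),
}

/// Build C[t <: t'] up front; None means no coercion exists.
fn coercion(t: &Ty, t_exp: &Ty) -> Option<Box<dyn Fn(&Val) -> Val>> {
    match (t, t_exp) {
        (Ty::Nat, Ty::Nat) | (Ty::Bool, Ty::Bool) => Some(Box::new(|v: &Val| v.clone())),
        // opt/null/reserved on the left have their own (elided) rules.
        (Ty::Opt(_), _) => None,
        (actual, Ty::Opt(inner)) => match coercion(actual, inner) {
            // Decide *now* which way the opt case goes; the closures below do
            // no further type analysis.
            Some(c) => Some(Box::new(move |v: &Val| Val::Opt(Box::new(c(v))))),
            None => Some(Box::new(|_: &Val| Val::Null)),
        },
        _ => None,
    }
}

fn main() {
    let c = coercion(&Ty::Nat, &Ty::Opt(Box::new(Ty::Nat))).expect("nat <: opt nat");
    assert_eq!(c(&Val::Nat(5)), Val::Opt(Box::new(Val::Nat(5))));
}
```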

Co-authored-by: Andreas Rossberg <rossberg@dfinity.org>
@nomeata (Collaborator, Author) commented Mar 1, 2021

Here is, I think, the prototypical counterexample. Let m and n be co-prime (no common divisor besides 1), and let

t1 = vec vec vec … vec t1 = vec^n t1
t2 = vec vec vec … vec t2 = vec^m t2

The type tables have sizes n and m. To do the check t1 <: t2, you’ll end up comparing each vec in t1 with each vec in t2, so that’s O(n×m). But is that only time, or also space? I’m less sure how to make the argument there. But we can use these types to check concrete algorithms for their space usage.

I know that t1 and t2 are actually both equal to µ t. vec t, and if both were canonicalized, they’d have a type table of size one. But we can easily fix that:

Ta[t] = rec { nat; t }
Tb[t] = rec { int; t }
Tc[t] = rec { reserved; t }
t1 = Ta^{n-1}[ Tb [t1] ]
t2 = Tb^{n-1}[ Tc [t1] ]

The nat, int, reserved merely make sure we can’t have smaller type tables for the t1, t2, but otherwise don't affect the subtyping relation.

Also note that trying to store less in the memo table (e.g. having fewer memo points) will likely just trade heap space for stack space. So when assessing space complexity, one should write the algorithm as a loop without a stack.

Speaking of stack: it is rather straightforward to implement this subtyping check in a loop without a stack, using the standard work-list approach that one uses for most graph algorithms: keep a list or set of pairs of types (or type table indices, if both types are stored with type tables) that you still need to check. Until that is empty, pop one, add it to the “done” set, check that it makes sense locally, and then add all subchecks that are not done yet to the todo list.
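A minimal sketch of that work-list loop, assuming a hypothetical, heavily pared-down type-table representation (indices instead of type expressions; most Candid types, and the refined `opt` rules from this PR, are omitted):

```rust
use std::collections::HashSet;

// Hypothetical miniature type-table entries; real Candid has many more cases.
// (Under the rules in this PR, t <: opt t' never fails outright; that
// refinement is omitted here.)
enum Ty {
    Nat,
    Int,
    Vec(usize), // index into the same type table
}

/// Subtype check as a loop: no recursion, no stack, just a todo list and a
/// "done" set of index pairs. Inserting into `done` *before* checking the
/// constituents is what makes recursive types terminate.
fn subtype(t1: &[Ty], t2: &[Ty], root1: usize, root2: usize) -> bool {
    let mut todo = vec![(root1, root2)];
    let mut done: HashSet<(usize, usize)> = HashSet::new();
    while let Some((i, j)) = todo.pop() {
        if !done.insert((i, j)) {
            continue; // already checked (or currently assumed)
        }
        match (&t1[i], &t2[j]) {
            (Ty::Nat, Ty::Nat) | (Ty::Int, Ty::Int) => {}
            (Ty::Nat, Ty::Int) => {} // nat <: int
            (Ty::Vec(a), Ty::Vec(b)) => todo.push((*a, *b)),
            _ => return false, // locally inconsistent
        }
    }
    true
}

fn main() {
    // t1 = vec t1 (one table entry); t2 = vec vec t2 (two table entries).
    let t1 = vec![Ty::Vec(0)];
    let t2 = vec![Ty::Vec(1), Ty::Vec(0)];
    assert!(subtype(&t1, &t2, 0, 0));
}
```

On the vec^n / vec^m example above, the `done` set can grow to n×m pairs, which is the quadratic bound discussed in the following comments.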

@rossberg (Contributor) commented Mar 1, 2021

equivalence is a subrelation of subtyping

µx. record { x } is equivalent to empty, but µx. record { x } is not a subtype of empty. Is this the only exception?

They are not equivalent, they are merely isomorphic. Those are very different notions, with different implications. The type system doesn't magically equate type isomorphisms, just like Bool <> {#a; #b}. Maybe one day somebody will invent homotopy type theory with full path inference. :)

As @nomeata says, the worst case (space and time) complexity is quadratic. In practical cases, it typically is linear, though.

To be honest, this complexity has been worrying me quite a bit. That really is Candid's Achilles heel. I wonder whether, in practice, we couldn't get away with some form of iso-recursive interpretation, which would always be linear.

@chenyan-dfinity (Contributor):

Also note that trying to store less in the memo table (e.g. having fewer memo points) will likely just trade heap space for stack space.

This is a good trade. The memo table is global, and the stack is bounded by the depth of the recursion. For the vec^m, vec^n example, if we only memoize Var nodes, the size of the memo table is only n + m, as one component of each pair is always the single Var node while the other type unfolds. Similarly, the recursion depth would only increase linearly instead of quadratically. Overall, I think space is still linear.

I wonder whether, in practice, we couldn't get away with some form of iso-recursive interpretation, which would always be linear.

I hope by unfolding types along with the value structure, it is enough to rule out invalid types.

@nomeata (Collaborator, Author) commented Mar 2, 2021

Also note that trying to store less in the memo table (e.g. having fewer memo points) will likely just trade heap space for stack space.

This is a good trade. The memo table is global, and the stack is bounded by the depth of the recursion. For the vec^m, vec^n example, if we only memoize Var nodes, the size of the memo table is only n + m, as one component of each pair is always the single Var node while the other type unfolds. Similarly, the recursion depth would only increase linearly instead of quadratically. Overall, I think space is still linear.

Aren’t there n vars, if you represent the type in A-normal form (as it is in the type table, where every type is its own var)?

Additionally, I am certain I can construct an example where every subterm has to be its own var.

@rossberg (Contributor) commented Mar 2, 2021

Well, we don't need to guess, the results are well-known. See e.g. TAPL Ch. 21. Structural subtyping is quadratic (it is equivalent to deciding DFA inclusion) when you do memoization properly. Without it, it's exponential, as was the case for Amadio & Cardelli's original algorithm.

Comment on lines +934 to +935
C[<t> <: opt <t'>](<v>) = opt C[<t> <: <t'>](v) if not (null <: <t>) and <t> <: <t'>
C[<t> <: opt <t'>](_) = null if not (null <: <t>) and not (<t> <: <t'>)
@nomeata (Collaborator, Author) commented Apr 7, 2021:

Suggested change
C[<t> <: opt <t'>](<v>) = opt C[<t> <: <t'>](v) if not (null <: <t>) and <t> <: <t'>
C[<t> <: opt <t'>](_) = null if not (null <: <t>) and not (<t> <: <t'>)
C[<t> <: opt <t'>](<v>) = opt C[<t> <: <t'>](v) if not (null <: <t'>) and <t> <: <t'>
C[<t> <: opt <t'>](_) = null if not (null <: <t'>) and not (<t> <: <t'>)

These conditions need to be stricter, see https://dfinity.slack.com/archives/C016H8MCVRC/p1617755023039700, else C[bool <: fix opt](true) = fix opt, and we certainly don’t want infinite values here.

Contributor:

How about the empty record case?
C[ μx. record { x } <: μx. opt record { x } ](_) = opt C[μx. record { x } <: record { μx. opt record { x } } ]

Do we want to allow μx. record { x } to be subtype of opt t?

@nomeata (Collaborator, Author) commented Apr 8, 2021:

Every type is a subtype of opt t. The question is only whether it coerces to null or something else.

I think this case is ok: Note that the coercion function takes a value v : µx.record{x}, and no such value exists, so C[ μx. record { x } <: μx. opt record { x } ] is (independent of the equations) the empty function with an empty domain.

But you are right, this is a bit fishy, in the sense that implementations must be careful not to loop. But we already have similar issues with µx.record{x}, even in the value decoder, where we’d easily run into a loop if we are not careful.

@chenyan-dfinity (Contributor) commented Apr 10, 2021:

I'm not sure how to detect loops in this case. The usual trick is to count the depth of the continuous nesting of records. But here, µx.record{x} doesn't change, we are only reducing the expected type, i.e. C[ μx. record { x } <: μx. opt record { x } ] = opt C[ μx. record { x } <: record { μx. opt record { x } } ]. Operationally, the type on the left doesn't change at all, so we cannot count the depth.

Of course, there are more expensive ways to detect loops, e.g. memoization or decoding the value with the actual type first. But to rule out other possible implementations just because of this case seems a bit wasteful.

@nomeata (Collaborator, Author):

Is this really a new problem? The problem with µx.record{x} is about decoding, not coercion, and that isn’t changed by this PR? There are no values of that type anyways, so one wouldn’t ever get past the decoding stage before erroring out (or looping.)

In the Motoko decoder, I bump a counter whenever I go from record { t1, … } to t1, and reset it whenever I actually consume a byte from the input stream. If that counter exceeds the table size, I have recognized a loop.

In the Haskell decoder, I pre-process the type table and replace any empty type like µx. record { …, x, … } with empty.
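As an illustration only (the names and structure below are invented; just the counting idea is taken from the comment above), such a guard could look like this:

```rust
// Count how many type-table steps the decoder takes without consuming any
// input byte. More idle steps than there are table entries means we are
// cycling through an uninhabited type like `µx. record { x }`.
struct Decoder<'a> {
    input: &'a [u8],
    pos: usize,
    table_size: usize,
    idle_steps: usize, // type steps since the last byte was consumed
}

impl<'a> Decoder<'a> {
    fn enter_constituent_type(&mut self) -> Result<(), &'static str> {
        self.idle_steps += 1;
        if self.idle_steps > self.table_size {
            return Err("type table cycles without consuming input");
        }
        Ok(())
    }

    fn read_byte(&mut self) -> Result<u8, &'static str> {
        let b = *self.input.get(self.pos).ok_or("unexpected end of input")?;
        self.pos += 1;
        self.idle_steps = 0; // progress was made, reset the guard
        Ok(b)
    }
}
```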

-------------------------
<v> ~> <v'> : opt <t>
C[<t> <: opt <t'>](<v>) = opt C[<t> <: <t'>](v) if not (null <: <t>) and <t> <: <t'>
C[<t> <: opt <t'>](_) = null if not (null <: <t>) and not (<t> <: <t'>)
Contributor:

Suggested change
C[<t> <: opt <t'>](_) = null if not (null <: <t>) and not (<t> <: <t'>)
C[<t> <: opt <t'>](_) = null otherwise

We don't have a rule for null <: t' then. I think this applies to the null <: t' case as well?

@nomeata (Collaborator, Author):

I think you are right. But let’s be explicit here, and not rely on fall-through semantics.

So it seems the formulation we want, so that each line is correct independently of its order, might be

C[<t> <: opt <t'>](<v>) = opt C[<t> <: <t'>](v)  if not (null <: <t'>) and <t> <: <t'>
C[<t> <: opt <t'>](_)   = null                   if not (null <: <t'>) and not (<t> <: <t'>)
C[<t> <: opt <t'>](_)   = null                   if null <: <t'> and not (null <: <t>)

(Remember that null <: t implies that t = null, t = opt … or t = reserved, and these cases have their own rules)

I think this needs a table:

| null <: t | null <: t' | t <: t' | Rule? |
| --- | --- | --- | --- |
| no | no | no | second rule |
| no | no | yes | first rule |
| no | yes | no | third rule |
| no | yes | yes | third rule |
| yes | no | no | second rule |
| yes | no | yes | impossible (<: is transitive) |
| yes | yes | no | see rules for t = null, t = opt … or t = reserved |
| yes | yes | yes | see rules for t = null, t = opt … or t = reserved |
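For what it's worth, the three rules and the table transcribe directly into a small decision function. This is only an illustration: the actual subtype checks are abstracted into boolean inputs, and the t = null, t = opt …, t = reserved cases are left to their own rules, as stated above.

```rust
// Direct transcription of the three rules above.
enum OptCoercion {
    CoercePayload, // first rule:  opt C[t <: t'](v)
    Null,          // second or third rule
}

fn coerce_into_opt(
    null_sub_t: bool,  // null <: t
    null_sub_t2: bool, // null <: t'
    t_sub_t2: bool,    // t <: t'
) -> Option<OptCoercion> {
    if !null_sub_t2 && t_sub_t2 {
        // first rule (the null <: t, not null <: t', t <: t' row of the table
        // cannot arise, since <: is transitive)
        Some(OptCoercion::CoercePayload)
    } else if !null_sub_t2 && !t_sub_t2 {
        Some(OptCoercion::Null) // second rule
    } else if null_sub_t2 && !null_sub_t {
        Some(OptCoercion::Null) // third rule
    } else {
        // null <: t and null <: t': covered by the rules for
        // t = null, t = opt …, t = reserved
        None
    }
}
```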


@chenyan-dfinity (Contributor) commented Apr 11, 2021

The variant rule is a bit unexpected. It's true that we can add tags via the opt rule, so opt variant { a } <: opt variant { a; b }. But because variant { a } is not a subtype of variant { a; b }, if the receiver expects opt variant {a;b} and the sender sends opt variant {a}, the receiver always gets null. I was hoping we could still decode opt variant {a}. Otherwise, adding tags is effectively a breaking change: until the sender upgrades the type, it is always null.

Never mind, I got the direction wrong. The problem still exists, but it is less severe: if the receiver upgrades opt variant {a;b} to opt variant {a}, then until the sender upgrades the type, the receiver only gets null from this sender.

Also, for the main service type, adding a new method is a breaking change?

@nomeata (Collaborator, Author) commented Apr 11, 2021

Also, for the main service type, adding a new method is a breaking change?

No, I don't think so. Why should it? Do you have an example?

@chenyan-dfinity (Contributor) commented Apr 11, 2021

Also, for the main service type, adding a new method is a breaking change?

No, I don't think so. Why should it? Do you have an example?

service { <methtype'>;* } <: service { }. This means we can only remove methods? Or we assume the main service is out-bound? At least, the spec didn't mention this.

@nomeata (Collaborator, Author) commented Apr 12, 2021

No, it means you can only add methods, and never remove any: You upgrade a service towards a sub-type (so that the new version can stand in where the old version is expected).

@chenyan-dfinity (Contributor) commented Apr 20, 2021

Everything here is implemented at #211. Should we merge this PR?

The only remaining item there is that there is a significant slowdown in decoding record/variant types (see the benchmark chart from the PR). Not sure where the slowdown comes from.

@nomeata (Collaborator, Author) commented Apr 20, 2021

Everything here is implemented at #211. Should we merge this PR?

I guess. How much breakage do you expect for users of the library? I wonder if it would make sense to hold off until after launch…

I see you expanded the test suite a bit, thanks. Are you content with the coverage? In other words: if someone who isn’t you implements this logic (say in another language), and they get the test suite to pass, would you trust that they had to think about and fix all the corner cases?

@chenyan-dfinity (Contributor):

Maybe we can merge this to beta branch for now. This enables us to implement the new spec in other languages by pulling in the test suites.

How much breakage do you expect for users of the library?

From a user's perspective, the breakage is minimal: deserialization now requires both the serialize and deserialize traits, and some traits move to another module.

For test coverage, I think it's reasonably good. This PR doesn't change much of the existing tests; out of 200+ tests, we only need to update around 5.

@chenyan-dfinity changed the base branch from master to beta, April 20, 2021 23:17
@nomeata (Collaborator, Author) commented Apr 21, 2021

Maybe we can merge this to beta branch for now. This enables us to implement the new spec in other languages by pulling in the test suites.

Good idea

For test coverage, I think it's reasonably good. This PR doesn't change much of the existing tests; out of 200+ tests, we only need to update around 5.

I’d rather conclude that the coverage is reasonably bad then, as we don’t have many tests that are affected by this change ;-)

I expect we need a fair amount of tests where we check if subtyping is really properly checked even in the absence of values (e.g. with a vec {} value, or in references.)

chenyan-dfinity added a commit that referenced this pull request Apr 22, 2021
Breaking changes:
* Deserialization requires both `Deserialize` and `CandidType` trait
* Implement #168
* `de::ArgumentDecoder`, `ser::ArgumentEncoder` moved to `utils::{ArgumentDecoder, ArgumentEncoder}`
* `types::subtype` returns `Result<()>` instead of `bool` for better error message
* Fix #214
@chenyan-dfinity (Contributor):

I expect we need a fair amount of tests where we check if subtyping is really properly checked even in the absence of values (e.g. with a vec {} value, or in references.)

Good catch for the vec {} case. I added tests for record, opt and opt variant already. Let's merge this to beta, and we can iterate.

@chenyan-dfinity merged commit 6e62bbf into beta, Apr 22, 2021
@chenyan-dfinity deleted the joachim/spec-subtype-on-decode branch, April 22, 2021 21:53
nomeata added a commit that referenced this pull request Apr 23, 2021
…ence (#171)

A revamp of the Coq development:

 * It models the subtype-checking on decoding (#168). Looks good
 * It connects MiniCandid to the IDL-Soundness theorem. The main work here is the subtyping-compositionality lemma.
   ```
   If t1 <: t2 and s1 in t1 <: s2 in t2 then s1 <: s2.
   ```
   With this in place, instantiating the “canonical subtyping” proof there works nicely.
 * It proves transitive coherence with regard to the relaxed relation as per #173
 * Mild coqdoc’ification. I’d like to eventually render these to HTML and host them somewhere.
   It’s very annoying that GitHub Actions artifacts, even if they are HTML, are not directly accessible from the browser.
   Maybe set up GitHub Pages?
   
It is still a Mini-Candid with a limited set of types, but I think it has all the interesting ones to cover the corner cases. Even adding vectors adds a lot of technical noise with little additional insight (see #154.)
nomeata added a commit that referenced this pull request Jun 16, 2021
this writes the rules out as described in
#168 (comment)

I believe that this is necessary for completeness, as shown by this
table, which checks that `t <: opt t'` is covered:

```
| null <: t | null <: t' | t <: t' | Rule?
| --- | --- | --- | --- |
| no | no | no | second rule
| no | no | yes | first rule
| no | yes | no | third rule
| no | yes | yes | third rule
| yes | no | no | second rule
| yes | no | yes | impossible (<: is transitive)
| yes | yes | no | see rules for  `t = null`, `t = opt …` or `t = reserved`
| yes | yes | yes | see rules for  `t = null`, `t = opt …` or `t = reserved`
```

Remember that `null <: t` implies that `t = null`, `t = opt …` or `t =
reserved`, and these cases have their own rules.

Fixes #239.
nomeata added a commit to nomeata/haskell-candid that referenced this pull request Jun 17, 2021
This implements the changes to the Candid spec from dfinity/candid#168, by introducing a notion of coercion.

This also required significant changes to the machinery around function and service types, and it now handles annotations properly.
chenyan-dfinity added a commit that referenced this pull request Aug 30, 2021
* Spec: Refine the opt rule

this writes the rules out as described in
#168 (comment)

I believe that this is necessary for completeness, as shown by this
table, which checks that `t <: opt t'` is covered:

```
| null <: t | null <: t' | t <: t' | Rule?
| --- | --- | --- | --- |
| no | no | no | second rule
| no | no | yes | first rule
| no | yes | no | third rule
| no | yes | yes | third rule
| yes | no | no | second rule
| yes | no | yes | impossible (<: is transitive)
| yes | yes | no | see rules for  `t = null`, `t = opt …` or `t = reserved`
| yes | yes | yes | see rules for  `t = null`, `t = opt …` or `t = reserved`
```

Remember that `null <: t` implies that `t = null`, `t = opt …` or `t =
reserved`, and these cases have their own rules.

Fixes #239.

* null <: t'

Co-authored-by: Yan Chen <yan.chen@dfinity.org>
Co-authored-by: Yan Chen <48968912+chenyan-dfinity@users.noreply.github.com>
Successfully merging this pull request may close these issues:

Still not sound: Gradual typing vs. opportunistic decoding