Add htlc dust ceiling #4837
Conversation
Force-pushed from a1e52b0 to ff38155
Force-pushed from ff38155 to 70db581
There seem to be a couple of failing tests here:
Hmm. These Elements test failures are a good illustration of how this dust-bucket limit interacts badly with MPP in high-feerate environments.
It might be worthwhile to take this into account when routing MPPs, so as not to route too many 'dusty' parts through the same peer (especially relevant during high-fee periods). I raised the dust-bucket limit for the test to avoid this, and changed the channel to "warn" instead of fail if a proposed feerate update would put you above the dust-bucket limit (though it's worth noting that this renders the channel unusable until feerates come back down). Not sure if this is better or worse than just dropping to chain, to be honest -- this problem is hard lol.
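For context, a sketch of the dust-bucket arithmetic being discussed (illustrative code, not c-lightning source; the HTLC-timeout weight is BOLT #3's pre-anchor value, and the amounts below are made up): a feerate rise can push every part of an MPP through the same channel into the dust bucket at once.

```python
# Illustrative sketch of dust-bucket exposure, not c-lightning source.
HTLC_TIMEOUT_WEIGHT = 663  # BOLT #3 HTLC-timeout tx weight, pre-anchor commitments

def dust_exposure_msat(htlc_amounts_msat, dust_limit_sat, feerate_perkw):
    """Sum the offered HTLCs that would be trimmed (uneconomical on-chain):
    an offered HTLC is dust when amount < dust_limit + HTLC-timeout fee."""
    # fee_sat = feerate_perkw * weight / 1000, so fee_msat = feerate_perkw * weight
    timeout_fee_msat = feerate_perkw * HTLC_TIMEOUT_WEIGHT
    threshold_msat = dust_limit_sat * 1000 + timeout_fee_msat
    return sum(a for a in htlc_amounts_msat if a < threshold_msat)

# Five 10k-sat MPP parts routed through the same channel:
parts = [10_000_000] * 5
low = dust_exposure_msat(parts, 546, 253)      # low feerate: no part is dust
high = dust_exposure_msat(parts, 546, 15_000)  # high feerate: every part is dust
```

At 253 sat/kw the dust threshold is ~714k msat, so 10k-sat parts are safe; at 15,000 sat/kw the threshold exceeds 10M msat and all 50M msat of parts count as dust exposure.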
Force-pushed from 70db581 to ae76ca2
Good point. Would it make sense to introduce a global limit on the total number of parts we're willing to create (i.e., a lower limit below which we stop splitting), or is this something that should be accounted for per-channel? Both are possible; the latter requires slightly more accounting, but could potentially be more permissive. Then again, we won't have as precise an idea of the real situation at the channel (i.e., others may also be using the channel and we might just be contributing). Might be worthwhile to open a new issue to discuss this.
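One way the "lower limit below which we stop splitting" could look, as a hedged sketch (function names and the floor value are hypothetical, not pay/MPP plugin code): cap the part count so no part falls below a dust-threshold floor.

```python
# Hypothetical MPP splitter with a per-part floor; not the actual pay plugin.
def capped_parts(amount_msat, desired_parts, floor_msat):
    """Cap the number of parts so that no part can drop below floor_msat
    (e.g. the dust threshold at a buffered feerate)."""
    affordable = max(1, amount_msat // floor_msat)
    return min(desired_parts, affordable)

def split_evenly(amount_msat, parts):
    """Split amount into `parts` near-equal shares that sum exactly."""
    base, rem = divmod(amount_msat, parts)
    return [base + (1 if i < rem else 0) for i in range(parts)]

# 100k sat payment, would like 16 parts, but floor each part at ~10.5M msat:
n = capped_parts(100_000_000, 16, 10_491_000)
shares = split_evenly(100_000_000, n)
```

With these numbers the splitter settles on 9 parts of roughly 11.1M msat each, all above the floor, instead of 16 dusty ones.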
By the way, in the elements case it'd be totally ok to just mark them as skippable; I just want to make sure we don't have remotely triggerable channel closures or faults 😉
Haha yeah, except in this case I think it's kind of a good idea to 'mark' the test as requiring a change to the max htlc exposure limit. Probably could have been more explicit by only varying it in the case of the liquid network, oh well. Started an issue to discuss strategies etc. around the MPP split + dust limit interactions.
Force-pushed from ae76ca2 to 6a5a94a
I disagree with the spec PR anyway...
Trying to dig into the issues here.
This is
The fix here is likely to increase the channel capacity, or to reduce the splits by reducing the amount; the highlighted issue is but a side-effect in this case.
Sorry, AFAICT the whole concept of this change is wrong. Maybe because the spec PR is an unreadable mess, but I've left comments there, too.
Turns out that the elements test is failing correctly: a feerate adjustment after the HTLCs have been added pushes them into the dust bucket (despite the 10% safety margin). So we should adjust the test. @rustyrussell, how would you solve this if we could do a complete greenfield implementation? @niftynei and I had been wondering about putting the channel in a drain mode, in which we fail HTLCs that we'd otherwise add to the dust bucket, and accept + fail + remove new HTLCs from the remote. This would allow us to be purely reactive and still drain HTLCs from the channel, thus avoiding stuck payments that could tear down the channel altogether.
Ok so per @rustyrussell's proposal to change the spec, lightning/bolts#919 (comment)
According to this, we simply would not send the
Ok posted to the other PR as well; if we use @rustyrussell's proposed rule we'd need
I don't think we've shipped 2 yet across the network. It's unclear to me how this offers protection to existing channels right now if we can't upgrade them.
To reduce the amount of a channel's balance that can be eaten up as htlc dust, we introduce a new config option `--max-dust-htlc-exposure-msat`, which sets the maximum amount of any channel's balance that can be added as dust.

Changelog-Added: config: new option --max-dust-htlc-exposure-msat, which limits the total amount of sats allowed as dust on a channel
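For reference, the option name comes from this PR; the value below is purely illustrative, not a documented default. It would go in the lightningd config file:

```
# Cap total dust-bucket exposure on any one channel at 50k sat (example value)
max-dust-htlc-exposure-msat=50000000
```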
Liquid's threshold for dust is a bit higher, so we bump up the max limit here so we can actually get the whole MPP'd payment sent over the wire.
If we're over the dust limit, we fail it immediately *after* committing it, but we need a way to signal this throughout the lifecycle, so we add it to the htlc_in struct and persist it through to the database. If it's supposed to be failed, we fail it after the commit cycle is completed.
For every newly added HTLC, check that adding it won't go over our 'dust budget' (which assumes a slightly-higher-than-current feerate, as this prevents sudden feerate changes from overshooting our dust budget). Note that if a feerate change surpasses the limits we've set, we immediately fail the channel.
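A hedged sketch of that per-HTLC check (illustrative Python, not the channeld C code; the 10% buffer matches the "10% safety margin" mentioned earlier in the thread, and all amounts are made up):

```python
# Illustrative per-HTLC dust-budget check; not c-lightning's channeld code.
HTLC_TIMEOUT_WEIGHT = 663  # BOLT #3 HTLC-timeout tx weight, pre-anchor commitments

def accept_htlc(htlc_msat, dust_total_msat, max_exposure_msat,
                dust_limit_sat, feerate_perkw, margin_pct=10):
    """Return (ok, new_dust_total). The dust test buffers the feerate
    upward by margin_pct so a modest feerate rise can't retroactively
    overshoot the budget."""
    buffered = feerate_perkw * (100 + margin_pct) // 100
    threshold_msat = dust_limit_sat * 1000 + buffered * HTLC_TIMEOUT_WEIGHT
    if htlc_msat >= threshold_msat:
        return True, dust_total_msat           # not dust; doesn't count
    if dust_total_msat + htlc_msat > max_exposure_msat:
        return False, dust_total_msat          # would bust the dust budget
    return True, dust_total_msat + htlc_msat   # dust, but within budget
```

Usage: with 49.8M msat of dust already pending against a 50M msat budget, a new 500k-msat (dusty) HTLC is refused, while a 1M-msat HTLC above the dust threshold is accepted without touching the bucket.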
And update some behavior to check both sides on receipt of an update_fee, as per the proposed spec, lightning/bolts#919.
Let's make this a softer launch by just warning on the channel until the feerates go back down. You can also 'fix' this by upping your dust exposure limit with the `max-dust-htlc-exposure-msat` config option.
Fails liquid-regtest otherwise; liquid tends to hit the dust limit earlier than non-liquid transactions, and MPP exacerbates this by divvying up payments into dusty bits and then attempting to shove them all through the same channel, hitting the dust max. The MPP then fails, as not all the parts were able to arrive at their destination.
Force-pushed from 6a5a94a to a8c061e
We were triggering the dust panic.
ACK 8f68dba
@niftynei: Shouldn't this PR have added
Right @whitslack, this 272fb72 should fix the
Implements lightning/bolts#919