lnd crash after 30 seconds. #1292
Applying this small patch will allow you to boot again:

```diff
diff --git a/contractcourt/chain_arbitrator.go b/contractcourt/chain_arbitrator.go
index c567dcf9..05ce17db 100644
--- a/contractcourt/chain_arbitrator.go
+++ b/contractcourt/chain_arbitrator.go
@@ -453,9 +453,11 @@ func (c *ChainArbitrator) Start() error {
 			return err
 		}
 	}
-	for _, arbitrator := range c.activeChannels {
+	for chanPoint, arbitrator := range c.activeChannels {
 		if err := arbitrator.Start(); err != nil {
-			return err
+			log.Warnf("unable to start channel arb for: %v",
+				chanPoint)
+			continue
 		}
 	}
```

We're aware of the greater issue and are working on a fix at the moment. The entries in the logs are in a sense benign: you simply have a bunch of tiny channels that can't be swept on chain at the current fee level, or that were just far too small.
@Roasbeef Is this issue specific to the update to v0.4.2-beta? I updated a few days ago, and with a restart today the daemon started crashing on start. After applying your patch above, it's working fine again.

Are there plans to mitigate this issue in an upcoming release? What's your advice for the moment regarding upgrades? I worry a bit about lots of people upgrading their nodes and lnd just stops working. Not everybody is able to patch & compile from source.
This isn't related to the update itself. If you had a channel in this state before the update, then upon a restart you may hit the issue. The bug that let channels enter this state has been fixed in lnd, so channels can no longer get into it. There's also a minimum accepted channel size, which will become increasingly important in the future to allow nodes to maintain a healthier set of inbound channels.
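As a side note on that minimum channel size: later lnd releases expose it as a configuration option. Assuming a version that supports it, something like the following `lnd.conf` fragment would reject inbound channels below a floor (the exact option name and its availability depend on your lnd version; check `lnd --help`):

```ini
[Application Options]
; Reject inbound channel opens smaller than 100,000 satoshis.
; The minchansize option is assumed here; verify it exists in your version.
minchansize=100000
```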
If I understand correctly, these are dust channels, with not enough balance to be closed, correct? If I already have a channel like this, is there anything one can do to preempt this lnd crash, without manually applying this patch? Or is this already merged into master?
There's a "forget channel" RPC that'll be merged soon, likely sometime next week. It'll let you drop dust channels like this.
Thanks for clarifying and all your incredible work!
Just upgraded to version 524291d; since then, lnd crashes shortly after start.
with bitcoind:
with btcd:
some other things in the log that looks bad but doesn't crash: