
thread 'tokio-runtime-worker' has overflowed its stack when target and finalized blocks are far apart #1854

Closed
jun0tpyrc opened this issue Oct 27, 2020 · 4 comments


jun0tpyrc commented Oct 27, 2020

2020-10-27 04:07:35  Accepted a new tcp connection from 127.0.0.1:50992.
2020-10-27 04:08:35  Accepted a new tcp connection from 127.0.0.1:51096.
2020-10-27 04:08:35  Accepted a new tcp connection from 127.0.0.1:51098.
2020-10-27 04:09:35  Accepted a new tcp connection from 127.0.0.1:51218.
2020-10-27 04:09:35  Accepted a new tcp connection from 127.0.0.1:51220.
2020-10-27 04:09:52  💤 Idle (0 peers), best: #2711501 (0x904f…b93c), finalized #1281257 (0xea83…e96a), ⬇ 0 ⬆ 0
2020-10-27 04:09:55  🔍 Discovered new external address for our node: /ip4/13.212.82.128/tcp/30333/p2p/12D3KooWBPiVVvEQ5spgp3hws6PneydovaJKGcL4JkdSMYEshn9w
2020-10-27 04:09:57  ⚙️  Syncing  0.0 bps, target=#2814195 (4 peers), best: #2711501 (0x904f…b93c), finalized #1281257 (0xea83…e96a), ⬇ 4.5kiB/s ⬆ 2.2kiB/s

thread 'tokio-runtime-worker' has overflowed its stack
fatal runtime error: stack overflow
2020-10-27 04:10:01  It isn't safe to expose RPC publicly without a proxy server that filters available set of RPC methods.
2020-10-27 04:10:01  It isn't safe to expose RPC publicly without a proxy server that filters available set of RPC methods.
2020-10-27 04:10:01  Parity Polkadot
2020-10-27 04:10:01  ✌️  version 0.8.25-5649229-x86_64-linux-gnu
2020-10-27 04:10:01  ❤️  by Parity Technologies <admin@parity.io>, 2017-2020
2020-10-27 04:10:01  📋 Chain specification: Westend
2020-10-27 04:10:01  🏷 Node name: undesirable-fiction-1384
2020-10-27 04:10:01  👤 Role: FULL
2020-10-27 04:10:01  💾 Database: RocksDb at /opt/data/chains/westend2/db
2020-10-27 04:10:01  ⛓  Native runtime: westend-46 (parity-westend-0.tx3.au2)
2020-10-27 04:10:02  🏷 Local node identity is: 12D3KooWBPiVVvEQ5spgp3hws6PneydovaJKGcL4JkdSMYEshn9w
2020-10-27 04:10:02  📦 Highest known block at #2711501
2020-10-27 04:10:02  〽️ Prometheus server started at 127.0.0.1:9615
2020-10-27 04:10:02  Listening for new connections on 0.0.0.0:9944.
2020-10-27 04:10:02  Accepted a new tcp connection from 127.0.0.1:51348.
2020-10-27 04:10:35  Accepted a new tcp connection from 127.0.0.1:51402.
2020-10-27 04:10:35  Accepted a new tcp connection from 127.0.0.1:51404.
2020-10-27 04:10:55  💤 Idle (0 peers), best: #2711501 (0x904f…b93c), finalized #1281257 (0xea83…e96a), ⬇ 0 ⬆ 0
2020-10-27 04:11:35  Accepted a new tcp connection from 127.0.0.1:51526.
2020-10-27 04:11:35  Accepted a new tcp connection from 127.0.0.1:51528.
2020-10-27 04:12:35  Accepted a new tcp connection from 127.0.0.1:51634.
2020-10-27 04:12:35  Accepted a new tcp connection from 127.0.0.1:51636.

(the gap between best and finalized is that large because of the earlier #1755)
image: parity/polkadot:master-0.8.25-5649229-ce19b4ec
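For context: "has overflowed its stack" followed by "fatal runtime error: stack overflow" is Rust's standard abort when a thread runs into its stack guard page, and tokio names every worker thread tokio-runtime-worker. A minimal sketch, assuming stock tokio and no polkadot internals, that reproduces the same class of failure through deep recursion on a worker thread:

```rust
// Sketch only, not polkadot code: deep, non-tail recursion with a large
// stack frame exhausts the worker's fixed-size stack after a few hundred
// frames and aborts the whole process with the message seen above.
fn recurse(depth: u64) -> u64 {
    let frame = [depth as u8; 16 * 1024]; // ~16 KiB frame, overflows quickly
    if depth == 0 {
        u64::from(frame[0])
    } else {
        recurse(depth - 1) + u64::from(frame[0]) // not a tail call
    }
}

#[tokio::main]
async fn main() {
    // The spawned task runs on a thread named "tokio-runtime-worker",
    // so the abort message matches the one in the report.
    let _ = tokio::spawn(async { recurse(1_000_000) }).await;
}
```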


bkchr commented Oct 27, 2020

Do you run this on macOS?


jun0tpyrc commented Oct 27, 2020

No, we run docker-compose on Amazon Linux. Outside Docker, ulimit -a gives:

[root@bstg-dot-1 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63450
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63450
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
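Worth noting: the 8192 KiB stack size in ulimit -a only governs the main thread. Threads the program spawns itself, including tokio's workers, default to Rust's 2 MiB thread stack regardless of that limit, so raising the ulimit alone would not help here. A sketch of how a node binary could enlarge its worker stacks, assuming the current tokio Builder API; polkadot exposes no flag for this, so this would mean patching the node itself:

```rust
use tokio::runtime::Builder;

fn main() {
    // Hypothetical patch, not a polkadot option: give each worker thread
    // a 16 MiB stack instead of the 2 MiB Rust/tokio default.
    let rt = Builder::new_multi_thread()
        .thread_stack_size(16 * 1024 * 1024)
        .enable_all()
        .build()
        .expect("failed to build tokio runtime");

    rt.block_on(async {
        // node service would be spawned here
    });
}
```

(Rust's std also honors the RUST_MIN_STACK environment variable for threads spawned without an explicit stack size, which may be a workaround that avoids rebuilding.)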


bkchr commented Oct 27, 2020

Can you try to run it with gdb to get a stack trace?

jun0tpyrc commented

Sorry, in order to recover quickly we restarted and synced from 0; I can only confirm that this works to get the node in sync again.
