As demonstrated in #966, keygen ceremonies require sending/receiving on the order of 100MB of data in some stages. This has caused buffer overflows before and likely leads to nodes timing out more often, resulting in more frequent ceremony failures. While the buffer overflow should be fixed in #1104 by batching broadcast messages (between CFE and SC), it won't reduce the total amount of data to be sent/received over the internet.
One way to reduce the bandwidth usage is to "compress" the data by hashing it during the broadcast verification stage. If all nodes are honest and send the same data, all participants will be able to verify the broadcast by simply comparing the hashes. The downside is that we will need additional logic to handle the (presumably rare) cases where some hashes don't match.
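A minimal sketch of what the happy path could look like, assuming SHA-256 and a simple map from party id to reported hash; the names (`verify_broadcast`, `PartyId`, etc.) are hypothetical and not taken from the actual CFE code:

```rust
// Hypothetical sketch, not the chainflip-backend implementation. Assumes the `sha2` crate.
use sha2::{Digest, Sha256};
use std::collections::BTreeMap;

type PartyId = u64;
type MessageHash = [u8; 32];

/// Hash the (potentially multi-megabyte) broadcast data down to 32 bytes.
fn hash_message(data: &[u8]) -> MessageHash {
    let mut out = [0u8; 32];
    out.copy_from_slice(&Sha256::digest(data));
    out
}

/// Happy path of broadcast verification: each party forwards only the hash of
/// what it received from the sender. If every reported hash matches our own,
/// the broadcast is consistent; otherwise return the disagreeing parties so a
/// fallback (e.g. exchanging the full data) can resolve the conflict.
fn verify_broadcast(
    own_data: &[u8],
    reported_hashes: &BTreeMap<PartyId, MessageHash>,
) -> Result<(), Vec<PartyId>> {
    let own_hash = hash_message(own_data);

    let disagreeing: Vec<PartyId> = reported_hashes
        .iter()
        .filter(|(_, hash)| **hash != own_hash)
        .map(|(id, _)| *id)
        .collect();

    if disagreeing.is_empty() {
        Ok(())
    } else {
        // Unhappy path: some hashes don't match, so additional logic is needed
        // to identify the faulty party or re-exchange the full data.
        Err(disagreeing)
    }
}
```

In the happy path each forwarded broadcast message shrinks from the full payload to 32 bytes; only when hashes disagree would nodes need to fall back to exchanging (or otherwise resolving) the full data.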
I was assured that bandwidth has not really been an issue (at least on testnet), so I'm giving this a low priority, as #1104 will fix the one issue we had on testnet (buffer overflow).
nakul-cf changed the title from "Consider optimising the happy path in broadcast verification" to "[SC-2885] Consider optimising the happy path in broadcast verification" on Jan 5, 2022
@msgmaxim upping the priority of this one since I think it will be a big benefit to our efforts in scaling the set sizes.
However, I do think the highest-priority item is making the p2p connections reliable. We should make a concerted effort to do that with libp2p for one iteration, and if we can't make progress on that front, we should ditch libp2p entirely.
morelazers changed the title from "[SC-2885] Consider optimising the happy path in broadcast verification" to "Consider optimising the happy path in broadcast verification" on May 24, 2022