
Increase the check interval #1082

Closed
wants to merge 1 commit

Conversation

@yeasy (Member) commented Apr 13, 2020

Type of change

  • Test update

Description

This patch increases the check interval from the default 1 ms to 1 s when
updating the channel config.

Previously, lots of useless checks were emitted while the channel was
temporarily not ready, wasting CPU.

Using 1 s as the checking period is good enough for the integration tests.
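For reference, here is a minimal sketch, not the diff in this PR, of how a single Gomega Eventually call can be given an explicit timeout and a 1 s polling interval. The fetchConfigBlockHeight helper is hypothetical and stands in for the peer channel fetch call the real integration test drives.

```go
// Illustrative only: override the polling interval on one Eventually call.
package example_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

func TestConfigUpdatePolling(t *testing.T) {
	g := NewWithT(t)

	// Poll for up to 2 minutes, checking once per second instead of the
	// Gomega default of every 10 ms.
	g.Eventually(func() int {
		return fetchConfigBlockHeight() // hypothetical helper
	}, 2*time.Minute, 1*time.Second).Should(BeNumerically(">", 1))
}

// fetchConfigBlockHeight is a stand-in for the CLI call the integration test
// actually makes; it is not part of Fabric's test framework.
func fetchConfigBlockHeight() int {
	return 2
}
```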

@yeasy requested a review from a team as a code owner on April 13, 2020 21:47
@sykesm (Contributor) left a comment


I understand why you want to make this change but given how narrowly scoped it is, it's not something that I'd support merging.

The eventually loops all use the default polling time from gomega. If you're willing to wait longer, ginkgo has functions that control the default times.

@yeasy (Member, Author) commented Apr 14, 2020

> I understand why you want to make this change but given how narrowly scoped it is, it's not something that I'd support merging.
>
> The eventually loops all use the default polling time from gomega. If you're willing to wait longer, ginkgo has functions that control the default times.

The method improved here is a little different from the others: it emits a deliver request to the OSN every 1 ms, and we noticed that the CPU is quite high during this polling.

note (@sykesm): I mistakenly edited this comment earlier instead of replying. Apologies. I believe I've restored the original comment from history. It was not my intent to change your words.

@sykesm (Contributor) commented Apr 15, 2020

> The method improved here is a little different from the others

I really don't see how. This Eventually drives CurrentConfigBlock with a delay of 10 ms between calls, which is the same polling interval used everywhere; it repeatedly calls peer channel fetch to get the height. The same pattern appears when peer lifecycle chaincode is used for checkcommitreadiness or queryinstalled, and when discover is used to find endorsers. They all turn into tight loops of CLI tools calling the servers to get data.

> it emits a deliver request to the OSN every 1 ms, and we noticed that the CPU is quite high during this polling.

10 ms, and yes, CPU is high during all polling in the integration tests. That is why I understand why you're suggesting the change, and why I'm saying the scope is too narrow to be helpful.

Simply changing random Eventually calls isn't going to help with maintenance, clarity, or overall CPU consumption; changes need to be less ad-hoc and more systematic.
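For illustration, a self-contained sketch (not Fabric code) of the tight loop being described: an Eventually with no explicit intervals uses Gomega's defaults, roughly a 10 ms polling interval and a 1 s timeout, so the polled function, which in these integration tests shells out to a CLI command, can run up to about a hundred times per second.

```go
// Illustrative only: count how often Gomega polls under its default interval.
package example_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

func TestDefaultPollingIsTight(t *testing.T) {
	g := NewWithT(t)

	start := time.Now()
	calls := 0
	// The condition becomes true after ~500 ms; with the default 10 ms
	// polling interval the function is invoked on the order of 50 times
	// before the matcher is satisfied.
	g.Eventually(func() bool {
		calls++
		return time.Since(start) > 500*time.Millisecond
	}).Should(BeTrue())
	t.Logf("polled %d times before the condition held", calls)
}
```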

@yeasy (Member, Author) commented Apr 15, 2020

> [quotes @sykesm's previous comment in full]

Well, I'm not sure it's a good idea to change the other code at the same time when this issue hasn't come up there.

But sure, we can modify the global default polling interval of the test suite.
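As a sketch of that approach, assuming a Ginkgo suite and purely illustrative values, Gomega's global defaults can be raised once in the suite setup so that every Eventually call inherits the slower polling interval:

```go
// Illustrative only: raise the defaults once for the whole suite.
package integration_test

import (
	"testing"
	"time"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestIntegration(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Integration Suite")
}

var _ = BeforeSuite(func() {
	// Every Eventually in the suite now waits up to 2 minutes and polls
	// once per second unless a call overrides these values explicitly.
	SetDefaultEventuallyTimeout(2 * time.Minute)
	SetDefaultEventuallyPollingInterval(time.Second)
})
```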

@yeasy changed the title from "Slow down the check interval" to "Increase the check interval" on Apr 15, 2020

Commit message:
This patch increase the check interval from the default 1ms to 1s when
updating channel config.

Originally, there will emit lots of useless checking when the channel is
not ready temporarily, and waste cpu utilization.

Using 1 s as the checking period is good enough for the integration test.

Change-Id: I59a8689faa019adac7ca097b7c370dadf0f6c80a
Signed-off-by: Baohua Yang <yangbaohua@gmail.com>
Signed-off-by: Baohua Yang <baohua.yang@oracle.com>
@sykesm (Contributor) commented Apr 16, 2020

Thanks for raising the concern via this PR. I've proposed #1111 to address this at a scope that covers the polling CLI operations without impacting other parts of the integration tests.

@sykesm closed this Apr 16, 2020