
Do I need to use common-pools to wrap ManagedChannel #1636

Closed

thefallentree opened this issue Apr 6, 2016 · 8 comments

Comments

@thefallentree

Hi,

Do we need to implement a pool of ManagedChannel on the client side?

We are currently sharing a single ManagedChannel to a single target across the whole application, but we are seeing some pretty weird behavior where the channel goes into a permanent DEADLINE_EXCEEDED state.

We haven't been able to replicate it locally, and we are trying anything we can.

Thanks
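
For reference, a minimal sketch of the setup described above, assuming one long-lived channel per target that is built once and shared by every stub in the application (the class name and address here are illustrative, not taken from this issue):

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// One channel per target for the whole application; ManagedChannel is
// thread-safe and designed to be long-lived and reused across stubs.
public final class SharedChannel {
    private static final ManagedChannel CHANNEL =
            ManagedChannelBuilder.forAddress("service.example.com", 443).build();

    private SharedChannel() {}

    public static ManagedChannel get() {
        return CHANNEL;
    }
}
```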

@buchgr (Collaborator) commented Apr 6, 2016

What gRPC version are you using?

"we are seeing some pretty weird behavior where the channel goes into a permanent DEADLINE_EXCEEDED state."

I don't understand. Could you please elaborate?

Also, we have a better chance of fixing your problem if you could share some code and additional details.

Thanks!

@buchgr (Collaborator) commented Apr 6, 2016

Also, is the garbage collector running a lot, and are the pause times similar to or longer than your call deadlines?

@thefallentree (Author)

We are using 0.13.2, the latest release. We don't seem to be triggering GC, and our call deadlines are pretty big, mostly 30s.

Cheers

@buchgr (Collaborator) commented Apr 6, 2016

"Do we need to implement a pool of ManagedChannel on the client side?"

No, that should not be necessary. Also, deadlines aren't per channel, but per client call.

Hmm, not sure we can help without additional details...
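
A minimal sketch of what per-call deadlines look like against a single shared channel, assuming a generated blocking stub; GreeterGrpc, HelloRequest, and HelloReply are hypothetical generated names, and SharedChannel is the illustrative holder from the earlier sketch:

```java
import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;

// The shared channel from the sketch above; it carries no deadline itself.
ManagedChannel channel = SharedChannel.get();

// withDeadlineAfter starts the clock when it is called, so set it on a
// fresh stub right before each RPC rather than on a long-lived stub.
HelloReply reply = GreeterGrpc.newBlockingStub(channel)
        .withDeadlineAfter(30, TimeUnit.SECONDS)
        .sayHello(HelloRequest.newBuilder().setName("grpc").build());
```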

@thefallentree (Author)

Thanks for the information; we will try to gather some more data on our side.

It's been pretty random; we have been hit by this phantom issue several times over the last few days.

@buchgr (Collaborator) commented Apr 6, 2016

@thefallentree

"we are seeing some pretty weird behavior where the channel goes into a permanent DEADLINE_EXCEEDED state."

What does that mean exactly? After some point in time, does every RPC fail with DEADLINE_EXCEEDED?

@ejona86 (Member) commented Apr 6, 2016

@thefallentree, what sort of networking environment are you in? Could it be possible that the network (NAT, load balancer, etc.) is breaking the TCP connection but not informing your client? We aren't currently doing manual keepalive, so there may still be some cases where it takes O(minutes) to detect the connection failure. (I'm working on fixing that, though.)

The only reason I would suggest having a pool of Channels is if it is difficult for your application to share Channels more precisely: your application doesn't actually know which Channels it wants to share, so you keep a common pool to reuse Channels as much as possible. I'd be suspicious of all other cases of pooling, in that either 1) it is working around a bug, or 2) there is a more appropriate place for the logic (like in a LoadBalancer).
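
For readers on later grpc-java releases: client-side keepalive was added to the channel builder after this issue and covers the silently-dropped-connection case described above. A hedged sketch, with illustrative values (these options did not exist in 0.13.2):

```java
import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

ManagedChannel channel = ManagedChannelBuilder
        .forAddress("service.example.com", 443)
        .keepAliveTime(1, TimeUnit.MINUTES)      // ping when the connection has been idle this long
        .keepAliveTimeout(20, TimeUnit.SECONDS)  // treat the connection as dead if the ping isn't acked
        .keepAliveWithoutCalls(true)             // probe even while no RPCs are active
        .build();
```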

@ejona86 (Member) commented Apr 19, 2016

Closing for now. Will reopen if we get some more information. May be partially related to #1648.

ejona86 closed this as completed Apr 19, 2016
hsaliak reopened this Apr 2, 2018
hsaliak closed this as completed Apr 2, 2018
lock bot locked as resolved and limited conversation to collaborators Sep 28, 2018