Interprocess alerts in transit grows. Publishing hangs. #578
I've got the same issue. Does anyone have any thoughts?
I've since migrated from NCHAN to Centrifugo, which hasn't given me any such trouble. It was a mostly painless migration, pretty close to a drop-in replacement. It was missing one feature, which I've decided to live without for now. It looks like they're implementing it, though!
Same issue. We pass a last_event_id that does not exist because we want to start BEFORE our first known message ID, so we can get all the data from the start. (If you pass the first actual last_event_id, it skips over that message.) Doing this eventually causes some channels to lock up while others keep going, and it also logs the warning "Missed message for websocket subscriber". Please fix.
Running into some strange behavior with a new server setup. Everything seemed fine at first, but sometimes publishing a message hangs, and then the "interprocess alerts in transit" counter keeps growing. Once this happens, it becomes impossible to publish to any channel. For example...
Here's the NCHAN portion of the NGINX config...
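The actual config wasn't captured above. For reference, a minimal NCHAN pub/sub location pair generally looks something like the sketch below; the paths, channel-id capture, and server block here are hypothetical and are not the reporter's actual configuration:

```nginx
# Hypothetical minimal NCHAN setup, not the reporter's actual config.
server {
    listen 80;

    # Publisher endpoint: POST a message body to /pub/<channel>
    location ~ /pub/(\w+)$ {
        nchan_publisher;
        nchan_channel_id $1;
    }

    # Subscriber endpoint: clients connect to /sub/<channel>
    # via EventSource, WebSocket, or long-polling
    location ~ /sub/(\w+)$ {
        nchan_subscriber;
        nchan_channel_id $1;
    }
}
```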
NGINX Version: nginx/1.17.10 (CentOS)
I think this might have something to do with the fact that we've migrated to a completely new server with a fresh install of NGINX and NCHAN. The issue only seems to happen after first subscribing to a channel using a ?last_event_id= query parameter and then trying to publish a message on that channel. I suspect we're sending event IDs saved from the OLD server, which don't exist at all in the new server's NCHAN store.
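To make the suspected trigger concrete, here's a small sketch of how such a stale-resume subscriber URL gets formed. The /sub/ path, the channel name, and the message-ID format shown (NCHAN message IDs look roughly like "&lt;unix-time&gt;:&lt;tag&gt;") are illustrative assumptions, not details taken from this server's setup:

```python
from urllib.parse import urlencode

def subscribe_url(base, channel, last_event_id=None):
    # Build a subscriber URL; last_event_id asks the server to resume
    # delivery from a previously seen message ID. An ID saved from an
    # old server won't exist in the new server's message store.
    url = f"{base}/sub/{channel}"
    if last_event_id is not None:
        url += "?" + urlencode({"last_event_id": last_event_id})
    return url

print(subscribe_url("http://localhost", "events", "1588888888:0"))
# → http://localhost/sub/events?last_event_id=1588888888%3A0
```

Resuming with an ID like the one above against a freshly installed server is exactly the "event ID from the OLD server" scenario described here.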
Do you think this could lead to the issue I'm describing? I can't imagine that's really it, since that would mean the whole pub/sub system could be brought down by one bad subscription request. Any thoughts?