Describe the bug
When the process_message function in lab_share_lib/processing/rabbit_message_processor.py returns False, the message must be dead lettered without re-queuing. The current behaviour is that the message is re-queued, leading to an endless loop of consuming the message (and re-queuing it).
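For context, the loop can be illustrated with a minimal pika consumer. This is a sketch, not the lab_share_lib code: the processing stub, connection details, and acknowledgement logic below are placeholders that only show why rejecting with requeue=True spins forever on a malformed message.

```python
# Minimal illustration of the re-queue loop, NOT the lab_share_lib code.
# Assumes a local broker on the default AMQP port; names are placeholders.
import pika


def process_message(body) -> bool:
    # Placeholder for the real processing step; a malformed body returns False.
    return False


def on_message(channel, method, properties, body):
    if process_message(body):
        channel.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # Current behaviour: requeue=True puts the message straight back on
        # the queue, so the same malformed message is consumed again at once.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="tls.volume-tracking", on_message_callback=on_message)
channel.start_consuming()
```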
RT Ticket Number
If applicable
To Reproduce
This is just one simple way of reproducing the issue, out of many. We will use tol-lab-share to reproduce it.
Setting Up
Stop all local RabbitMQ instances. You can check whether any are running with brew services ls.
Run ./docker/dependencies/up.sh in tol-lab-share. This will:
Spin up a local instance of a RedPanda schema store (backend only, which is what's required)
Spin up a RabbitMQ instance with a management portal
Navigate to http://localhost:8080/ and use admin/development as the username/password combination to check that the dependencies are up. No RabbitMQ components (vhosts, queues, exchanges, etc.) will have been created yet.
Run pipenv shell to enter the virtual environment.
Run pipenv install --dev to install the latest dependencies.
Run python setup_dev_rabbit.py. This will inject the required RabbitMQ components into the local broker.
Run ./schemas/push.sh http://localhost:8081. This will inject the required schemas into the local RedPanda backend.
Run the tol-lab-share consumers using pipenv run python main.py.
If you navigate to http://localhost:8080/ and log in, you should be able to see the Rabbit components populated in the management portal.
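If you prefer to confirm this programmatically, the RabbitMQ management HTTP API can list the queues. This is a quick sketch assuming the same http://localhost:8080 portal and admin/development credentials mentioned above.

```python
# Quick check via the RabbitMQ management API (same host/credentials as the portal).
import requests

resp = requests.get("http://localhost:8080/api/queues", auth=("admin", "development"))
resp.raise_for_status()
for queue in resp.json():
    print(queue["vhost"], queue["name"], queue.get("messages"))
```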
Reproducing the issue
We can send a malformed message to an exchange so that it errors in tol-lab-share, and observe the resulting behaviour. Let us use the tls.volume-tracking queue (a programmatic alternative to the UI steps below is sketched after these steps).
Go to the "Publish Message" section, and enter some malformed message.
Publish the message, and observe the terminal that is running tol-lab-share. It will endlessly spit out error messages (e.g. KeyError).
Observe the management portal's messaging behaviour chart.
Important
Note how it keeps redelivering the message to the queue. It stops erroring only after we purge the messages in the queue using the "Purge Messages" button.
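The malformed publish can also be done with pika instead of the management UI. This is a sketch only: the exchange name, routing key, AMQP port, and credentials below are assumptions, so read the actual binding for tls.volume-tracking from the management portal before using it.

```python
# Publish a malformed message programmatically; a sketch, not part of tol-lab-share.
import pika

credentials = pika.PlainCredentials("admin", "development")  # assumed to match the portal login
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", port=5672, credentials=credentials)
)
channel = connection.channel()
channel.basic_publish(
    exchange="tls.volume-tracking.exchange",  # placeholder: use the exchange bound to the queue
    routing_key="tls.volume-tracking",        # placeholder: use the queue's actual routing key
    body=b"not-a-valid-payload",              # anything the consumer fails to decode
)
connection.close()
```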
Expected behaviour
In the event of an error in the consumer, it should not re-queue the message; it must dead-letter it.
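A minimal sketch of the expected behaviour, assuming a plain RabbitMQ dead-letter setup rather than the actual lab_share_lib implementation: the queue is declared with a dead-letter exchange, and a failed message is rejected with requeue=False so the broker routes it to the dead-letter queue instead of putting it back. All names are placeholders.

```python
# Expected behaviour, sketched with pika: dead-letter instead of re-queue.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Dead-letter exchange and queue (placeholder names).
channel.exchange_declare(exchange="tls.dead-letters", exchange_type="fanout", durable=True)
channel.queue_declare(queue="tls.volume-tracking.dead-letters", durable=True)
channel.queue_bind(queue="tls.volume-tracking.dead-letters", exchange="tls.dead-letters")

# The work queue points at the dead-letter exchange. Note that re-declaring an
# existing queue with different arguments fails, so this argument has to be
# part of the queue's original definition.
channel.queue_declare(
    queue="tls.volume-tracking",
    durable=True,
    arguments={"x-dead-letter-exchange": "tls.dead-letters"},
)


def process_message(body) -> bool:
    return False  # placeholder for the real processing step


def on_message(ch, method, properties, body):
    if process_message(body):
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # requeue=False: the broker dead-letters the message instead of
        # re-queuing it, so the consume loop never starts.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
```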
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
Windows or Mac?
Browser Chrome, Firefox, Safari or other?
Browser version (use 'About' to look up)?
Additional context
Use ./docker/dependencies/down.sh to shut the dependencies down once you are done. It will remove the Docker containers that were started by up.sh.