Consumer can be rebalanced if poll is not called before max.poll.interval.ms #101
Thanks!
Closed
voidconductor added a commit to voidconductor/monix-kafka that referenced this issue on Sep 23, 2019
paualarco added a commit to paualarco/monix-kafka that referenced this issue on Jan 23, 2021 (Merged)
Avasil pushed a commit ("Async commit completed with callback (#94)") that referenced this issue on Mar 24, 2022
The README now says
But I cannot find that release on Maven Central (or the release tag here).
Starting from Kafka 0.10.1.0, there is a `max.poll.interval.ms` setting:

> The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
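For reference, the consumer settings involved (values shown are Kafka's defaults; adjust to your workload):

```properties
# Maximum delay allowed between poll() calls before the consumer is
# considered failed and its partitions are reassigned.
max.poll.interval.ms=300000
# Maximum number of records returned by a single poll(); lowering this
# shrinks the worst-case processing time per batch.
max.poll.records=500
# Separate concern since 0.10.1.0: session liveness is kept by the
# background heartbeat thread, not by poll() itself.
session.timeout.ms=10000
```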
Since monix-kafka backpressures until all the records have been processed, this can be a problem when processing takes a long time.
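A back-of-envelope check of how backpressure can trip the timeout (the per-record processing time here is an assumed illustrative number, not from this issue):

```python
# Why backpressuring a full batch can exceed max.poll.interval.ms.
max_poll_interval_ms = 300_000   # Kafka default: 5 minutes
max_poll_records = 500           # Kafka default batch size
per_record_ms = 700              # assumed slow downstream processing

# Worst case: poll() is not called again until the whole batch is done.
worst_case_batch_ms = max_poll_records * per_record_ms
print(worst_case_batch_ms)                          # 350000
print(worst_case_batch_ms > max_poll_interval_ms)   # True -> consumer is rebalanced away
```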
One solution could be to always poll every x seconds, and to pause the partitions while the consumer is still busy processing so that no new records are fetched (while each poll still resets the timer).
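A minimal sketch of that loop, using a plain-Python stand-in for the consumer (hypothetical names; this is not the monix-kafka or kafka-clients API, just the pause-then-heartbeat shape):

```python
import time

class FakeConsumer:
    """Stand-in for a Kafka consumer: poll() on paused partitions returns
    no records but still counts as liveness for max.poll.interval.ms."""
    def __init__(self):
        self.poll_times = []
        self.paused = False
    def pause(self):
        self.paused = True
    def resume(self):
        self.paused = False
    def poll(self):
        self.poll_times.append(time.monotonic())
        return [] if self.paused else ["record"] * 10

def consume_with_heartbeat(consumer, processing_s, heartbeat_s=0.05):
    records = consumer.poll()            # real fetch
    consumer.pause()                     # stop fetching while downstream is busy
    done_at = time.monotonic() + processing_s
    while time.monotonic() < done_at:    # downstream still processing
        time.sleep(heartbeat_s)
        consumer.poll()                  # heartbeat poll: no new records, timer reset
    consumer.resume()                    # backpressure released, fetch again
    return records

consumer = FakeConsumer()
consume_with_heartbeat(consumer, processing_s=0.3)  # pretend 0.3 s of processing
gaps = [b - a for a, b in zip(consumer.poll_times, consumer.poll_times[1:])]
print(len(consumer.poll_times), max(gaps) < 0.2)
```

The key property is that the largest gap between two `poll()` calls is bounded by the heartbeat rate rather than by the downstream processing time.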
Alpakka-kafka and fs2-kafka both do this:
https://github.com/ovotech/fs2-kafka/blob/master/modules/core/src/main/scala/fs2/kafka/internal/KafkaConsumerActor.scala#L332
https://github.com/akka/alpakka-kafka/blob/master/core/src/main/scala/akka/kafka/internal/KafkaConsumerActor.scala#L435
This will probably require some changes; in the meantime, we should just mention in the README to reduce `max.poll.records` if it hits the limit.
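The README workaround of lowering `max.poll.records` can be sized roughly like this (a hypothetical helper with illustrative numbers, not part of monix-kafka):

```python
# Rough sizing of max.poll.records so a full batch is processed
# well within max.poll.interval.ms (hypothetical helper).

def safe_max_poll_records(max_poll_interval_ms: int,
                          per_record_ms: float,
                          safety_factor: float = 0.5) -> int:
    """Largest batch size whose worst-case processing time stays
    within a safety margin of max.poll.interval.ms."""
    budget_ms = max_poll_interval_ms * safety_factor
    return max(1, int(budget_ms // per_record_ms))

# Default max.poll.interval.ms is 300000 (5 minutes); suppose each
# record takes ~700 ms to process end to end.
print(safe_max_poll_records(300_000, 700))  # 214
```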