Manual Offset Commit/Heartbeat Deadlock #11

Open
wbarnha opened this issue Mar 8, 2024 · 1 comment
Comments

wbarnha commented Mar 8, 2024

I believe that I have found the reason for the deadlock that has been alluded to in a few other issues on the board.

dpkp#2373
dpkp#2099
dpkp#2042
dpkp#1989

The offset commit appears to block by design, on the assumption that the operation will resume without issue once the underlying network problem has been resolved. The real issue appears to be that the consumer does not hold an exclusive client lock while it waits. This opens a race condition between the main thread and the heartbeat thread, caused by a failure to maintain a consistent lock ordering.

The order of operations is as follows:

  1. Consumer handles a message.
  2. Network error occurs.
  3. Consumer tries to commit offset. Commit blocks in an infinite loop and releases the KafkaClient lock on each attempt: https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/consumer.py#L512
  4. While trying to identify a new coordinator, BaseCoordinator takes the KafkaClient lock and then the internal coordinator lock: https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L245
  5. HeartbeatThread thread runs and only takes the coordinator lock: https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L958
  6. HeartbeatThread detects a consumer timeout and tries to shut down the coordinator, taking the client lock only after having already taken the coordinator lock in the previous step (the inverse of the order in which the main thread takes the locks during the blocking commit operation): https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L993 https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L766
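The inverted lock order in steps 4–6 can be reproduced with a minimal, self-contained sketch. The locks below are plain `threading.Lock` stand-ins for `KafkaClient._lock` and `BaseCoordinator._lock`, not the real kafka-python objects, and the events/barrier only exist to force the problematic interleaving deterministically (in the real client the window is rarely hit):

```python
import threading

# Hypothetical stand-ins for the two locks involved (not the real
# kafka-python objects): KafkaClient._lock and BaseCoordinator._lock.
client_lock = threading.Lock()
coord_lock = threading.Lock()

# Events and a barrier force the problematic interleaving deterministically.
main_has_client = threading.Event()
heartbeat_has_coord = threading.Event()
both_attempted = threading.Barrier(2)

deadlocked = []

def main_thread():
    # Step 4: the main thread holds the client lock, then wants the
    # coordinator lock.
    with client_lock:
        main_has_client.set()
        heartbeat_has_coord.wait()
        # Without a timeout this acquire would block forever.
        if coord_lock.acquire(timeout=0.5):
            coord_lock.release()
        else:
            deadlocked.append("main")
        both_attempted.wait()  # keep client_lock held until both time out

def heartbeat_thread():
    # Steps 5-6: the heartbeat thread holds the coordinator lock, then
    # wants the client lock (the inverse order).
    with coord_lock:
        heartbeat_has_coord.set()
        main_has_client.wait()
        if client_lock.acquire(timeout=0.5):
            client_lock.release()
        else:
            deadlocked.append("heartbeat")
        both_attempted.wait()

threads = [threading.Thread(target=main_thread),
           threading.Thread(target=heartbeat_thread)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(deadlocked))  # ['heartbeat', 'main'] -- each thread starves the other
```

With the timeouts removed, both acquires block forever, which is exactly the hang reported in the linked issues.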

It is admittedly a very tight window for the race condition, but it does exist, based on my own experience as well as that of others in the community. The problem can be avoided either by giving the consumer exclusive access to the KafkaClient while it tries to commit the offset, or by ensuring that the heartbeat thread has exclusive access to the client while it performs its checks.

It should also be noted that, while I have only spelled out the race condition between the commit and heartbeat operations, I wouldn't be surprised if the heartbeat were also interfering with other operations for the same reason.

@xiaoguo1992

Hello,

I encountered this issue while using kafka-python version 2.0.2. I would like to know the potential risks of addressing the problem by swapping the order in which the two locks are acquired in base.py, as follows:

Modification approach:

  1. In the methods ensure_coordinator_ready, ensure_active_group, and maybe_leave_group, change to using with self._lock, self._client._lock:
  2. In the method _run_once, swap the lock acquisition order from with self.coordinator._client._lock, self.coordinator._lock: to with self.coordinator._lock, self.coordinator._client._lock:
Would such modifications pose any risks? Is there a better solution to resolve this issue?
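For what it's worth, the core of the proposed change can be illustrated with a generic sketch (again plain `threading.Lock` stand-ins, not the actual kafka-python classes): as long as every thread acquires the two locks in the same order, no wait cycle can form and neither thread can deadlock the other:

```python
import threading

# Hypothetical stand-ins for BaseCoordinator._lock and KafkaClient._lock.
coord_lock = threading.Lock()
client_lock = threading.Lock()

results = []

def worker(name):
    # Both threads now take the locks in the same order (coordinator lock
    # first, then client lock), so no circular wait is possible.
    for _ in range(1000):
        with coord_lock, client_lock:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,))
           for n in ("main", "heartbeat")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 2000 -- both threads complete without deadlocking
```

This only shows that a consistent ordering removes the deadlock in principle; whether reordering is safe inside kafka-python depends on what invariants each lock actually protects at those call sites, which the sketch does not capture.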

Thank you for your assistance.
