drone failed to mark backend as terminated #838
Conversation
```rust
move |state: &BackendState| {
    let timestamp = chrono::Utc::now();
    // ...
```
Assuming this was a bug that caused all events stored in the state store to have the timestamp of when this backend was created, not of when the callback was called.
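For illustration, a standalone sketch of the difference (not the project's code):

```rust
use chrono::{DateTime, Utc};

fn main() {
    // Captured once, when the closure is created: every call sees the
    // same (stale) value.
    let created_at: DateTime<Utc> = Utc::now();
    let stale = move || created_at;

    // Evaluated inside the closure body: each call records the time the
    // callback actually fired.
    let fresh = || Utc::now();

    std::thread::sleep(std::time::Duration::from_millis(10));
    assert_eq!(stale(), created_at); // still the creation time
    assert!(fresh() > created_at);   // reflects the call time
}
```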
```diff
@@ -220,16 +251,6 @@ impl Runtime for DockerRuntime {
             );
             Ok(false)
         }
-        Err(bollard::errors::Error::DockerResponseServerError {
-            status_code: 404, ..
-        }) => {
```
Moving this check up to the inspect container call because we expect to catch a 404 there first.
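A sketch of the resulting inspect-first flow, assuming the `inspect_container` call and the error shapes seen elsewhere in this PR (variable names are illustrative, not the verbatim PR code):

```rust
// Inspect the container before stop/kill so a missing container surfaces
// as a 404 here, not in the stop/kill call.
match self.docker.inspect_container(&container_id, None).await {
    Ok(details) => {
        // A container that exists but is not running needs no stop/kill.
        let running = details
            .state
            .and_then(|state| state.running)
            .unwrap_or(false);
        if !running {
            return Ok(false);
        }
    }
    // The 404 is now caught here, before any stop/kill attempt.
    Err(bollard::errors::Error::DockerResponseServerError {
        status_code: 404, ..
    }) => return Ok(false),
    Err(e) => return Err(e.into()),
}
// ...proceed to stop_container / kill_container for a running container.
```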
📝 Walkthrough

The pull request introduces modifications to the container termination logic in `plane/src/drone/runtime/docker/mod.rs`. In addition, changes are made to the event timestamp handling in `plane/src/drone/executor.rs`.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
plane/src/drone/executor.rs (2)
Line range hint `208-236`: **Critical: Add container status check before termination.**

The current implementation doesn't fully address the PR's main objective. When handling termination, we should explicitly check whether the container is still running before attempting termination, to prevent the reported issue where the drone continues to believe a stopped container is running.
Consider modifying the termination logic:
```diff
 let manager = {
     let Some(manager) = self.backends.get(backend_id) else {
         tracing::warn!(
             backend_id = backend_id.as_value(),
             "Backend not found when handling terminate action (assumed terminated)."
         );
-        // Terminate will only return an error on a docker error, not if the backend is already terminated.
-        self.runtime.terminate(backend_id, true).await?;
+        // Check container status first
+        if let Ok(true) = self.runtime.is_running(backend_id).await {
+            self.runtime.terminate(backend_id, true).await?;
+        }
         self.state_store
             .lock()
```
Line range hint `208-236`: **Consider implementing a hard-termination fallback.**

As mentioned in the PR objectives, termination loops should eventually transition to a "hard-terminating" state. Currently, there is no mechanism to handle persistent termination failures.
Consider implementing a retry counter or timeout that triggers hard termination:
- Track failed termination attempts in the backend state
- After N retries or timeout T, switch to hard termination mode
- Log these events for monitoring and debugging
Would you like me to provide a detailed implementation proposal?
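For illustration, a minimal sketch of such a fallback, reusing the `terminate(backend_id, hard)` call shape from the diff above (the attempt-counter helper and the threshold are hypothetical, not part of this PR):

```rust
// Illustrative threshold; not from the actual codebase.
const MAX_SOFT_TERMINATION_ATTEMPTS: u32 = 3;

async fn terminate_with_fallback(&self, backend_id: &BackendName) -> anyhow::Result<()> {
    // Hypothetical helper that bumps a per-backend counter in the state store.
    let attempts = self.increment_termination_attempts(backend_id);

    // After repeated soft-termination failures, escalate to hard termination.
    let hard = attempts >= MAX_SOFT_TERMINATION_ATTEMPTS;
    if hard {
        tracing::warn!(
            backend_id = backend_id.as_value(),
            attempts,
            "Soft termination keeps failing; escalating to hard termination."
        );
    }
    self.runtime.terminate(backend_id, hard).await?;
    Ok(())
}
```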
plane/src/drone/runtime/docker/mod.rs (2)
`224-226`: **Consider enhancing error context for debugging.**

The generic error conversion `e.into()` might lose valuable context. Consider wrapping the error with additional context about the container and the operation being performed.

```diff
-    Err(e) => return Err(e.into()),
+    Err(e) => return Err(anyhow::anyhow!(
+        "Failed to inspect container {}: {}", container_id, e)),
```
`203-214`: **Add debug logging for running containers.**

While we log warnings for non-running containers, adding debug logs for running containers would help with troubleshooting and provide a complete picture of container state transitions.

```diff
 if let Some(state) = details.state {
     if !state.running.unwrap_or(false) {
         tracing::warn!(
             %container_id,
             %backend_id,
             "Container could not be terminated, because it is not running."
         );
         return Ok(false);
     }
+    tracing::debug!(
+        %container_id,
+        %backend_id,
+        "Container is running, proceeding with termination."
+    );
 }
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📒 Files selected for processing (2)

- `plane/src/drone/executor.rs` (1 hunks)
- `plane/src/drone/runtime/docker/mod.rs` (1 hunks)
🔇 Additional comments (3)
plane/src/drone/executor.rs (2)
Line range hint `181-186`: **LGTM: Timestamp now accurately reflects event registration time.**

Moving the timestamp inside the closure ensures it captures the exact time of registration rather than the closure's creation time.
Line range hint `208-236`: **Verify termination state consistency across the system.**

Let's verify how termination state is handled across different components to ensure we're not missing any edge cases.
✅ Verification successful
Based on the shell script results, I can now generate the final response:
Termination state handling appears consistent and well-implemented
The verification shows a robust termination state handling system with:
- Clear state transitions (Ready -> Terminating/HardTerminating -> Terminated)
- Comprehensive termination reasons (Swept, External, KeyExpired, Lost, StartupTimeout, InternalError)
- Proper locking mechanisms to avoid deadlocks
- Consistent error handling and logging across components
The code in question correctly handles the edge case of a missing backend by:

- Safely terminating the runtime
- Properly updating the state store with the `Lost` reason
- Using appropriate locking patterns
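Spelled out, the states and reasons listed in these findings would correspond to enums of roughly this shape (reconstructed from the review output above; the actual definitions in plane may differ):

```rust
// Illustrative reconstruction from the findings above, not the real code.
enum BackendStatus {
    Ready,
    Terminating,     // soft termination in progress
    HardTerminating, // escalated termination
    Terminated,
}

enum TerminationReason {
    Swept,
    External,
    KeyExpired,
    Lost,
    StartupTimeout,
    InternalError,
}
```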
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for termination-related state transitions and error handling
echo "Checking termination state handling:"
rg -A 5 "to_terminated|to_hard_terminating|TerminationReason" --type rust

echo "\nChecking container status checks:"
ast-grep --pattern 'fn is_running($$$) -> $$$'

echo "\nChecking termination retry logic:"
rg -A 5 "attempt.*terminate|retry.*terminate" --type rust
```
Length of output: 18881
plane/src/drone/runtime/docker/mod.rs (1)
`197-226`: **LGTM! The container state inspection addresses the termination loop issue.**
The added check prevents the drone from attempting to terminate already stopped containers, which directly addresses the PR objective. This should resolve the issue of termination loops caused by Docker's behavior of not returning errors for stop/kill operations on stopped containers.
Let's verify the container state inspection behavior:
✅ Verification successful
The search results show that container stop/kill operations are used in two locations:

- Main implementation in `plane/src/drone/runtime/docker/mod.rs`
- Test code in `plane/plane-tests/tests/common/docker.rs`

Let's verify if the test code needs similar state handling:
**The container state inspection implementation is correctly scoped.**

The test code in `plane-tests` has a different use case for container state inspection: it checks exit codes after container completion, not termination handling. The new state inspection logic in the main implementation correctly addresses the termination loop issue without needing changes to the test code.
Key findings:
- Main implementation: Added proper state verification before termination
- Test code: Uses state inspection for exit code verification, which is a different concern
- No other instances of container termination that need similar handling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Search for patterns that might trigger similar container state issues.
# Look for other places where container state assumptions are made.

# Test 1: Search for direct container operations that might need state checks
rg -A 3 "stop_container|kill_container" --type rust

# Test 2: Look for potential race conditions in container state handling
ast-grep --pattern 'inspect_container($_, $_)'
```
Length of output: 1464
Script:
```bash
#!/bin/bash
# Check the context of container operations in tests
rg -B 10 -A 10 "stop_container" plane/plane-tests/tests/common/docker.rs

# Look for any container state checks in tests
rg "inspect_container|container.*state" plane/plane-tests/tests/common/docker.rs
```
Length of output: 1041
LGTM; left one style nit
We've observed failure modes where a container exit is not detected. In those cases, we expect that a termination request will see that the container is no longer running due to Docker returning an error when calling `stop_container`/`kill_container`. Docker doesn't error when calling `stop_container()` on an already stopped container, however. This has caused termination loops that never fully succeed, leaving the drone thinking the container is still running and re-renewing its key over and over. This PR fixes this issue by checking the container's status before calling `terminate()` on it.

This still doesn't address these two issues, however:

(1) we aren't detecting the container exit in some cases, which causes this failure mode in the first place

(2) we should make `terminating` loops eventually resort to `hard-terminating`
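As a quick illustration of the Docker behavior described above, using the CLI (per this PR, bollard's `stop_container` surfaces the same non-error for an already stopped container):

```bash
docker run -d --name demo alpine sleep 300
docker stop demo   # stops the container; exits 0
docker stop demo   # exits 0 again -- no error for an already stopped container
docker kill demo   # kill, by contrast, errors on a container that is not running
docker rm demo
```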