feat(v2): storage network with v2 data engine #3521
Conversation
Walkthrough

This pull request modifies how storage IP addresses are handled during instance creation. In the engine controller, it adds logic to retrieve the instance manager pod and use its storage IP when setting the engine instance's initiator and target addresses.
Sequence Diagram(s)

Engine Instance Creation Flow

```mermaid
sequenceDiagram
    participant EC as EngineController
    participant DS as DataStore
    participant Pod as Instance Manager Pod
    participant ES as EngineService
    EC->>DS: GetPod(instanceManagerName)
    DS-->>EC: Pod (with storage IP)
    EC->>ES: CreateEngineInstance(InitiatorAddress & TargetAddress = storage IP)
    ES-->>EC: Instance process created
```

Replica Instance Creation Flow

```mermaid
sequenceDiagram
    participant RC as ReplicaController
    participant ES as EngineService
    RC->>ES: CreateReplicaInstance(without IM IP)
    ES-->>RC: Instance process created
```
Force-pushed from 7bffa9e to f4b3312
Test result:
LGTM
Force-pushed from e3492b6 to d78b240
Hi @c3y1huang, there is only one place where the instance-manager IP may be needed. I'm not sure whether, in this case, we need to pass the storage network IP instead of the pod IP.
Force-pushed from 0c167dd to 7763b73
Actionable comments posted: 0
🧹 Nitpick comments (1)
controller/backing_image_controller.go (1)
1647-1653: LGTM! Consider improving error messages.

The implementation correctly retrieves the storage IP from the instance manager pod. However, the error messages could be more descriptive.
Consider enhancing error messages to include more context:
```diff
- return errors.Wrapf(err, "failed to get pod for instance manager %v", srcInstanceManager.Name)
+ return errors.Wrapf(err, "failed to get pod for source instance manager %v when syncing v2 backing image", srcInstanceManager.Name)
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- controller/backing_image_controller.go (1 hunks)
- controller/engine_controller.go (1 hunks)
- controller/replica_controller.go (0 hunks)
- engineapi/instance_manager.go (0 hunks)
💤 Files with no reviewable changes (2)
- engineapi/instance_manager.go
- controller/replica_controller.go
🚧 Files skipped from review as they are similar to previous changes (1)
- controller/engine_controller.go
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Build binaries
- GitHub Check: Summary
🔇 Additional comments (1)
controller/backing_image_controller.go (1)
1655-1655: LGTM! Storage IP integration looks good.

The code correctly uses the storage IP with proper host:port formatting for SPDK backing image creation.
Missed that. Thank you! I've updated it. PTAL.
LGTM. Need any updates for RWX volumes?
I don't think so. We're using Kubernetes Endpoints for the RWX volumes. Did I miss anything?
Force-pushed from 7763b73 to 0202a2c
No, I just confirmed nothing was missed. Thank you.
Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
longhorn/longhorn-6450 Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
Force-pushed from 0202a2c to 6fb5593
Actionable comments posted: 0
🔭 Outside diff range comments (1)
controller/engine_controller.go (1)
2087-2125: 💡 Verification agent, 🧩 Analysis chain
Consider storage IP handling during engine upgrades.
The upgrade logic doesn't handle storage IP changes. This might cause issues during live upgrades if the storage network configuration changes.
Consider updating the upgrade logic to handle storage IP changes:
- Verify if storage IP changes during upgrade
- Update engine configuration if needed
- Document any limitations or requirements for storage network during upgrades
Run this script to check for potential upgrade issues:
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check if there are any upgrade-related issues with storage IP

# Search for upgrade-related code that might need storage IP handling
rg -A 5 "UpgradeEngine|EngineUpgrade|ImageUpgrade"

# Check for any existing upgrade tests
rg -A 5 "TestUpgrade|test.*upgrade"
```

Length of output: 26121
Action Required: Enhance Engine Upgrade Logic to Handle Storage IP Changes
The current upgrade process in
controller/engine_controller.go
(lines 2087–2125) does not account for potential storage IP changes. This omission may lead to issues during live upgrades if the storage network configuration is altered. To address this, please:
- Detect Storage IP Changes: Incorporate logic to verify whether the storage IP has changed during an upgrade.
- Update Engine Configuration: If a change is detected, update the engine configuration accordingly to prevent live replica disruption.
- Documentation & Testing: Document any new limitations or requirements concerning storage IP handling during upgrades and consider adding tests to validate this scenario.
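The first bullet, detecting a storage IP change before proceeding with an upgrade, could look roughly like the sketch below. The types and field names here are hypothetical stand-ins; the actual upgrade path lives in controller/engine_controller.go.

```go
package main

import "fmt"

// EngineSpecAddrs is a hypothetical snapshot of the addresses an engine
// instance was created with (illustrative, not a Longhorn type).
type EngineSpecAddrs struct {
	InitiatorAddress string
	TargetAddress    string
}

// storageIPChanged reports whether the instance manager's current storage
// address no longer matches what the engine was configured with -- the
// condition an upgrade path would need to detect before a live upgrade.
func storageIPChanged(currentAddr string, spec EngineSpecAddrs) bool {
	return spec.InitiatorAddress != currentAddr || spec.TargetAddress != currentAddr
}

func main() {
	spec := EngineSpecAddrs{
		InitiatorAddress: "192.168.10.5:20001",
		TargetAddress:    "192.168.10.5:20001",
	}
	fmt.Println(storageIPChanged("192.168.10.5:20001", spec)) // unchanged network
	fmt.Println(storageIPChanged("192.168.20.9:20001", spec)) // reconfigured network
}
```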
🧹 Nitpick comments (1)
controller/engine_controller.go (1)
485-491: Validate storage IP availability.

The code retrieves the storage IP but doesn't validate if it's available. Consider adding validation and a fallback mechanism.
Add validation before using the storage IP:
```diff
 instanceManagerPod, err := ec.ds.GetPod(im.Name)
 if err != nil {
 	return nil, errors.Wrapf(err, "failed to get pod for instance manager %v", im.Name)
 }
 instanceManagerStorageIP := ec.ds.GetStorageIPFromPod(instanceManagerPod)
+if instanceManagerStorageIP == "" {
+	return nil, fmt.Errorf("storage IP not available for instance manager pod %v", im.Name)
+}
```
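Beyond the empty-string check suggested above, the retrieved value can also be verified to parse as an IP address before it is handed to the engine service. A self-contained stdlib sketch; the helper name is made up for illustration:

```go
package main

import (
	"fmt"
	"net"
)

// validateStorageIP returns an error if the storage IP is missing or not
// a parseable IP address, giving the caller a single failure point before
// the address is used to build the engine instance request.
func validateStorageIP(podName, ip string) error {
	if ip == "" {
		return fmt.Errorf("storage IP not available for instance manager pod %v", podName)
	}
	if net.ParseIP(ip) == nil {
		return fmt.Errorf("invalid storage IP %q for instance manager pod %v", ip, podName)
	}
	return nil
}

func main() {
	fmt.Println(validateStorageIP("instance-manager-abc", ""))
	fmt.Println(validateStorageIP("instance-manager-abc", "not-an-ip"))
	fmt.Println(validateStorageIP("instance-manager-abc", "192.168.10.5"))
}
```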
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- controller/backing_image_controller.go (1 hunks)
- controller/engine_controller.go (1 hunks)
- controller/replica_controller.go (0 hunks)
- engineapi/instance_manager.go (0 hunks)
💤 Files with no reviewable changes (2)
- engineapi/instance_manager.go
- controller/replica_controller.go
🚧 Files skipped from review as they are similar to previous changes (1)
- controller/backing_image_controller.go
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Build binaries
- GitHub Check: Summary
🔇 Additional comments (1)
controller/engine_controller.go (1)
500-501: LGTM! Storage IP is consistently used.

The implementation correctly uses the storage IP for both initiator and target addresses, which aligns with the storage network requirements.
Which issue(s) this PR fixes:
Issue longhorn/longhorn#6450
What this PR does / why we need it:
Use storage network for V2 data engine.
Special notes for your reviewer:
None
Additional documentation or context
None