ibm_is_lb: Total provision time too long #5380
Comments
@hkantare Please help advocate for development effort to address this very significant problem. See TL;DR below:
@ujjwal-ibm Can you look into this issue?
looking into it
@hkantare @ujjwal-ibm Please also be aware, this has a major impact on Red Hat OpenShift Container Platform (OCP) self-managed setup on IBM Cloud, using installer-provisioned infrastructure (IPI) executed by the Red Hat OpenShift Installer (which uses Terraform to provision resources as part of the IPI setup procedure).
Timing of resources, grouped by those which ran in parallel: (timing data not preserved)
Hi @sean-freeman, we are looking to optimise pool and pool member. The fix is currently being tested.
Community Note
Terraform CLI and Terraform IBM Provider Version
N/A
Affected Resource(s)
ibm_is_lb
Terraform Configuration Files
Expected Behavior
Terraform Resource `ibm_is_lb` should follow the API Specification and, upon `create`, allow data input via nested `pools` and `listeners`. Citation:
Actual Behavior
Terraform Resources are modular-only; there is no allowance for nested creation.
This means every end-user must use, in sequence:

- `ibm_is_lb` (5-10 minutes provision time)
- `ibm_is_lb_pool` (~4 minutes provision time, as a new pool will cause update/scan of the LB instance)
- `ibm_is_lb_pool_member` for HA Pair Node A (~4 minutes provision time, as a new pool member will cause LB update/scan)
- `ibm_is_lb_pool_member` for HA Pair Node B (~4 minutes provision time, as a new pool member will cause LB update/scan)
- `ibm_is_lb_listener` (~4 minutes provision time, as a new listener will cause update/scan of the LB instance)

== 5 + 16 minutes approximately using Terraform, versus 5 minutes from API/CLI/Web GUI, to create 1 listener (e.g. Port 443) and 1 pool (with 2 pool server members).
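For reference, a minimal sketch of that five-resource sequence, assuming an existing `ibm_is_subnet.example` and illustrative member addresses; argument names follow the provider documentation at the time of writing, but verify them against your provider version:

```hcl
# Step 1: the load balancer itself (5-10 minutes)
resource "ibm_is_lb" "example" {
  name    = "example-lb"
  subnets = [ibm_is_subnet.example.id] # assumes an existing subnet resource
}

# Step 2: the pool (~4 minutes, triggers an LB update/scan)
resource "ibm_is_lb_pool" "https" {
  name           = "https-pool"
  lb             = ibm_is_lb.example.id
  algorithm      = "round_robin"
  protocol       = "tcp"
  health_delay   = 60
  health_retries = 5
  health_timeout = 30
  health_type    = "tcp"
}

# Steps 3-4: one member per HA node (~4 minutes each, serialized)
resource "ibm_is_lb_pool_member" "node_a" {
  lb             = ibm_is_lb.example.id
  pool           = element(split("/", ibm_is_lb_pool.https.id), 1)
  port           = 443
  target_address = "10.240.0.4" # illustrative address of HA Pair Node A
}

resource "ibm_is_lb_pool_member" "node_b" {
  lb             = ibm_is_lb.example.id
  pool           = element(split("/", ibm_is_lb_pool.https.id), 1)
  port           = 443
  target_address = "10.240.0.5" # illustrative address of HA Pair Node B
}

# Step 5: the listener (~4 minutes, triggers an LB update/scan)
resource "ibm_is_lb_listener" "https" {
  lb           = ibm_is_lb.example.id
  port         = 443
  protocol     = "tcp"
  default_pool = element(split("/", ibm_is_lb_pool.https.id), 1)
}
```

Each resource here can only be created after the previous one completes the LB's update/scan cycle, which is where the serialized ~4-minute waits accumulate.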
This is a compounding problem, as there are very few cases that use such a simple Load Balancer configuration. A more reasonable expectation for an average setup would be:

- `ibm_is_lb` (5-10 minutes provision time)
- `ibm_is_lb_pool` (~20 minutes provision time, as each new pool will cause update/scan of the LB instance)
- `ibm_is_lb_pool_member` for HA Pair Node A (~20 minutes provision time, as each new pool member will cause LB update/scan)
- `ibm_is_lb_pool_member` for HA Pair Node B (~20 minutes provision time, as each new pool member will cause LB update/scan)
- `ibm_is_lb_listener` (~20 minutes provision time, as each new listener will cause update/scan of the LB instance)

== 5 + 80 minutes approximately using Terraform, versus 5 minutes from API/CLI/Web GUI, to create 5 listeners (e.g. Port 443) and 5 pools (each with 2 pool server members).
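For illustration, a sketch of how such an average setup might be declared, reusing the `ibm_is_lb.example` from the sketch above; the five services and ports are illustrative assumptions. Even though Terraform builds these nodes in parallel in its graph, the load balancer serializes each change behind its update/scan cycle, which is where the ~20-minute waits come from:

```hcl
locals {
  # five illustrative services, each needing a pool and a listener
  services = {
    https = 443
    api   = 6443
    app1  = 8081
    app2  = 8082
    app3  = 8083
  }
}

resource "ibm_is_lb_pool" "svc" {
  for_each       = local.services
  name           = "${each.key}-pool"
  lb             = ibm_is_lb.example.id
  algorithm      = "round_robin"
  protocol       = "tcp"
  health_delay   = 60
  health_retries = 5
  health_timeout = 30
  health_type    = "tcp"
}

# Pool members per service omitted for brevity; each adds another
# serialized LB update/scan, compounding the total further.
resource "ibm_is_lb_listener" "svc" {
  for_each     = local.services
  lb           = ibm_is_lb.example.id
  port         = each.value
  protocol     = "tcp"
  default_pool = element(split("/", ibm_is_lb_pool.svc[each.key].id), 1)
}
```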
I can appreciate how the modular-only approach would be considered the correct approach; it is logical for Terraform purposes, and there are no nested pools/members and listeners on the `update` [PATCH] API Endpoint.
A compromise needs to be found, as this is far too long for execution of a Load Balancer setup. This may mean `ibm_is_lb` logic needs to be expanded to mask complexity and handle multiple API calls:

- `create` as currently + call API with `pools` and `listeners`
- read the `pools` and `listeners` input and confirm no update to make to the Resource, as currently
- for each `pools`, call existing logic in the `ibm_is_lb_pool` Terraform Resource
- for each `pools.members`, call existing logic in the `ibm_is_lb_pool_member` Terraform Resource
- for each `listeners`, call existing logic in the `ibm_is_lb_listener` Terraform Resource
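To make the proposal concrete, below is a hypothetical sketch of what nested input on `ibm_is_lb` could look like. This schema does not exist in the provider today; the `pools`, `members`, and `listeners` block names and their arguments are illustrative assumptions mirroring the create API payload, not the provider's actual interface:

```hcl
# Hypothetical nested schema (not implemented): one apply, with the
# provider masking the multiple API calls behind the LB create.
resource "ibm_is_lb" "example" {
  name    = "example-lb"
  subnets = [ibm_is_subnet.example.id]

  # hypothetical nested pools block
  pools {
    name        = "https-pool"
    algorithm   = "round_robin"
    protocol    = "tcp"
    health_type = "tcp"

    members {
      port           = 443
      target_address = "10.240.0.4" # HA Pair Node A
    }
    members {
      port           = 443
      target_address = "10.240.0.5" # HA Pair Node B
    }
  }

  # hypothetical nested listeners block
  listeners {
    port         = 443
    protocol     = "tcp"
    default_pool = "https-pool"
  }
}
```

A single apply of such a resource would let the provider pass the nested definitions in the initial create call (or a masked sequence of calls), avoiding the serialized update/scan cycle per child resource.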