Allow running provisioner on existing resource #745
Comments
May I ask what your use case is for this? Provisioners are meant as a way to bootstrap nodes. We made the design decision early on to not support update provisioners because it is rather complicated (what causes a "diff" in the provisioner? is it idempotent to re-run?).
To add to this: we saw no issue in not supporting this because Terraform is meant to create/destroy infrastructure components. The runtime management of these components should be the responsibility of Chef, Consul, etc.
We've run into multiple cases where the provisioning script either doesn't run or doesn't run successfully, and/or we have existing machines that we want to run the 'bootstrap' on (which is heavily tied into Terraform, using a bunch of variables pulled out of the Terraform config). However, if the answer is "we don't want to support that", then that's the answer and we'll do the blow-away/recreate/etc. dance 😄
@teancom So, if it doesn't run or doesn't run successfully, Terraform should mark the resource as "tainted" and automatically destroy/create on the next run. Are you not seeing this?
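A minimal sketch of forcing this by hand from the CLI, assuming a hypothetical resource address `aws_instance.web`; the `taint` command marks the resource so the next apply destroys and recreates it, re-running its provisioners:

```sh
# mark the resource as tainted so the next apply recreates it
terraform taint aws_instance.web

# the recreated instance runs its provisioners again
terraform apply
```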
I have a relevant question. My provisioner (Ansible) pulls the latest code base and sets up my production environment. When I deploy new code, I just run my Ansible script to refresh the prod servers. How will that workflow fit in with Terraform? would
Yes, as long as your state file is in sync with the provisioned servers.
Closing this as I don't see an issue here. We also added
@mitchellh what if you have existing infrastructure that can't stop running? Or what if the existing infrastructure contains properties that are not Terraform-managed and have to be manually reset after recreation?
@kubek2k In those scenarios you'll have to run provisioners manually outside of TF.
@mitchellh wondering if that doesn't hamper the adoption of Terraform for many.
@mitchellh I am wondering if this is worth re-evaluating with the addition of the Chef provisioner, as it addresses the idempotency concern above. The reason I ask is our use case: we are migrating away from IronFan (which manages infrastructure lifecycle as well as a very tight coupling with Chef), and are looking to have Terraform be the replacement with its much looser coupling to its provisioners, namely Chef in our case, while still allowing us to use a single source of truth to determine what is running on our infrastructure. We are not able to achieve this single source of truth with the provisioners only running at creation, and looking at the tooling necessary to create a single source of truth that feeds Terraform at the beginning of life and then manages the config until end of life, it seems like a lot of unnecessary moving parts. Would it be possible to entertain an opt-in type flag for the Chef provisioner that would allow it to run if certain attributes of the provisioner changed? Or is there another alternative/project available that anyone may know about?
+1 it makes sense to have an alternative behaviour. I'm really puzzled by the fact that I have to recreate an existing node when I've just changed my cookbook and need to apply it to the node. You don't actually need to compute a diff; the cookbook's resources have to be idempotent anyway.
-1 the implementation would be fundamentally insecure. It's very common that you don't want the thing that created your architecture to have root access to it.
@mitchellh here's our use case: we use an AWS launch configuration with an auto scaling group. We have Terraform set up to always create a new launch configuration and then update the auto scaling group. On new launch configuration create, we'd like to run a script we've written to scale up the ASG to bring the new launch configuration into service, and then scale it back down again to eliminate the old instances running on the previous launch configuration. Attaching the provisioner to the launch configuration doesn't work, since the ASG hasn't been updated yet at that point in the plan/apply. It would be neat to attach the script we have to changes of the auto scaling group's launch configuration... but there's no way to do that at the moment.
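A minimal sketch of one way to approximate this with a `null_resource`, assuming hypothetical resource names and a hypothetical `./scripts/roll-asg.sh` script; the trigger ties the script to changes in the launch configuration the ASG points at:

```hcl
resource "aws_launch_configuration" "app" {
  name_prefix   = "app-"
  image_id      = "${var.ami_id}"
  instance_type = "t2.medium"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  name                 = "app"
  launch_configuration = "${aws_launch_configuration.app.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 2
  max_size             = 6
}

resource "null_resource" "roll_instances" {
  # re-runs whenever the ASG is pointed at a new launch configuration
  triggers {
    launch_configuration = "${aws_launch_configuration.app.name}"
  }

  provisioner "local-exec" {
    command = "./scripts/roll-asg.sh ${aws_autoscaling_group.app.name}"
  }
}
```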
I have a similar use case. I am doing something like this:
The network_interface I'm creating gives the host access to a subnet that Chef will need (or else it fails). I cannot figure out how to get the aws_network_interface applied before the instance's provisioners run.
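A minimal sketch of one possible workaround, assuming hypothetical resource names and using a `remote-exec` provisioner as a stand-in for the omitted Chef provisioner settings: moving the provisioner onto a `null_resource` that `depends_on` the network interface makes it run only after the interface is attached:

```hcl
resource "aws_instance" "app" {
  ami           = "${var.ami_id}"
  instance_type = "t2.medium"
  key_name      = "${var.key_name}"
}

resource "aws_network_interface" "extra" {
  subnet_id = "${var.private_subnet_id}"

  attachment {
    instance     = "${aws_instance.app.id}"
    device_index = 1
  }
}

resource "null_resource" "provision" {
  # only runs once the extra interface has been created and attached
  depends_on = ["aws_network_interface.extra"]

  connection {
    host        = "${aws_instance.app.public_ip}"
    user        = "ubuntu"
    private_key = "${file(var.private_key_path)}"
  }

  provisioner "remote-exec" {
    inline = [
      "echo 'interface attached, safe to run configuration management here'",
    ]
  }
}
```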
@woodhull did you ever figure out a solution for this? I suppose you could use
We wrap all of our Terraform execution in a custom Ruby script that does this and many other tasks before and after every Terraform run.
You can see the solution to this issue here: And also here: To run commands on resources that have already been created, you need to create a `null_resource`, as sketched below.
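A minimal sketch of that pattern, assuming a hypothetical `aws_instance.cluster` with `count` set and a hypothetical `bootstrap-cluster.sh` script; the trigger re-creates the `null_resource` (and so re-runs its provisioner) whenever the set of instance IDs changes:

```hcl
resource "null_resource" "cluster" {
  # re-created (and re-provisioned) whenever the cluster membership changes
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
  }

  connection {
    host = "${element(aws_instance.cluster.*.public_ip, 0)}"
  }

  provisioner "remote-exec" {
    inline = [
      "bootstrap-cluster.sh ${join(" ", aws_instance.cluster.*.private_ip)}",
    ]
  }
}
```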
A big thanks to @ydaetskcor for the solution. http://stackoverflow.com/users/2291321/ydaetskcor
@cohenaj194, I am using the same solution. Basically, I use null_resource quite a few times with my Ansible + Terraform setup, whenever I need to copy something, create a dynamic inventory for Ansible, etc. The issue for me is rerunning null_resources on changes. For instance, I have a step that adds the EC2 instance IPs to a group in the inventory file:
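A minimal sketch of the kind of step being described, assuming hypothetical resource names and a local `inventory` file:

```hcl
resource "null_resource" "inventory" {
  # regenerate the inventory whenever the set of instance IPs changes
  triggers {
    instance_ips = "${join(",", aws_instance.web.*.private_ip)}"
  }

  provisioner "local-exec" {
    command = "echo '[webservers]' > inventory && echo '${join("\n", aws_instance.web.*.private_ip)}' >> inventory"
  }
}
```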
So, whenever I change the number of instances, my inventory will be updated. Now I need to copy this new inventory to the ansible "master" host, so I use null_resource again:
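A minimal sketch of that copy step, assuming a hypothetical `aws_instance.ansible_master` and key path; note that without `triggers` there is nothing to make it re-run:

```hcl
resource "null_resource" "copy_inventory" {
  connection {
    host        = "${aws_instance.ansible_master.public_ip}"
    user        = "ubuntu"
    private_key = "${file(var.private_key_path)}"
  }

  provisioner "file" {
    source      = "inventory"
    destination = "/home/ubuntu/inventory"
  }
}
```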
but it doesn't appear during the plan phase. One solution I found is to monitor the number of instances by adding a trigger to the null_resource:
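A minimal sketch of that trigger, extending the hypothetical copy step sketched above:

```hcl
resource "null_resource" "copy_inventory" {
  # re-created (and the inventory re-copied) whenever the instance count changes
  triggers {
    instance_count = "${length(aws_instance.web.*.id)}"
  }

  connection {
    host        = "${aws_instance.ansible_master.public_ip}"
    user        = "ubuntu"
    private_key = "${file(var.private_key_path)}"
  }

  provisioner "file" {
    source      = "inventory"
    destination = "/home/ubuntu/inventory"
  }
}
```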
But I also have the Ansible files themselves, which are not related to infrastructure changes (I mean role definitions). Do you have an idea how to track local file changes? How do I trigger the null_resource if I update my role, and how do I make the resource always run? @mitchellh, considering this case, do you think it would make sense to add something like a "tainted: always" option to null_resources? I am new to Terraform, but I feel like the connection between Terraform and configuration management tooling is missing a bit.
@mlushpenko this is a pretty old issue. You'll probably have better luck opening another one that is specific to
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
There is currently no way to run a provisioning script on an existing resource. Adding provisioner sections to an existing (already provisioned) aws_instance is not something that Terraform notices as a 'change', so the provisioner is not run during the next `apply`. The only way to run the provisioner is to destroy the instance and let Terraform create it again, which may be non-optimal.
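A minimal sketch of the situation being described, with hypothetical resource names; adding the provisioner block to an instance that already exists in the state produces no diff, so the script never runs:

```hcl
resource "aws_instance" "web" {
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"

  # added after the instance was first created: Terraform sees no change to
  # the instance itself, so this provisioner does not run on the next apply
  provisioner "remote-exec" {
    inline = [
      "echo 'this only runs when the instance is (re)created'",
    ]
  }
}
```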