--no-destroy-on-error like Vagrant #409
This sounds reasonable. I think the
I would find the option to continue until an error and not destroy the VM extremely useful. If someone can give me some pointers on where to start looking to implement this, I may be able to put some time into adding this as an option.
This would be very useful for me, too. @timmow you may need to modify each builder's instance-creation cleanup step to do nothing if a certain flag is set (for example https://github.com/mitchellh/packer/blob/master/builder/amazon/common/step_run_source_instance.go#L122). It would be a certain amount of work to comb through all of the steps and figure out where it would be appropriate to take no action. An idea I just had: add a flag that makes packer wait for user input before processing any cleanup step. That way you could perform your debugging, hit enter, and packer would take care of the cleanup. Feel free to ping me here if I can offer any help.
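A minimal sketch of what that guard could look like, assuming a hypothetical keep_on_error flag placed in the multistep state bag (the flag and error key names are assumptions; only the multistep types are real):

```go
// Sketch only: a builder cleanup step that skips teardown when the
// build errored and a (hypothetical) keep_on_error flag was set.
package common

import "github.com/mitchellh/multistep"

type StepRunSourceInstance struct {
	instanceId string
}

func (s *StepRunSourceInstance) Cleanup(state multistep.StateBag) {
	keep, _ := state.GetOk("keep_on_error") // assumed flag name
	_, errored := state.GetOk("error")      // assumed key for a recorded failure
	if errored && keep == true {
		// Leave the instance running so it can be inspected.
		return
	}
	// ... normal cleanup: terminate the source instance ...
}
```

Every builder step would need the same guard, which is why combing through all of the steps is the bulk of the work.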
FYI, that cleanup is done in a FILO manner here: https://github.com/mitchellh/multistep/blob/master/basic_runner.go#L71. You may need to extend the basic runner (debuggable_runner?).
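For context, the FILO ordering falls out of Go's defer semantics. A stripped-down illustration of the pattern (not the runner's actual code):

```go
package runner

import "github.com/mitchellh/multistep"

// Each step's Cleanup is deferred as the step runs, so when the loop
// exits the deferred calls fire in reverse order (FILO). A "debuggable"
// runner could wrap each Cleanup to pause for input or honor a
// keep-on-error flag before tearing anything down.
func run(steps []multistep.Step, state multistep.StateBag) {
	for _, step := range steps {
		defer step.Cleanup(state)
		if step.Run(state) == multistep.ActionHalt {
			break
		}
	}
}
```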
It'd be great to add some sort of step-skipping functionality lower down, which would basically skip cleanup steps for this.
Similar to debug "pausing", I think an option like
Hi, I see that this issue was fixed by commit 2ed7c3f, but I don't see the change in the HEAD of the master branch. Where and why did it disappear?
I really need this as well.
Is there any hope of getting this feature? What needs to be done to have 2ed7c3f, or some variation of it, merged?
Yeah, I could also use this option. I see it was committed but then disappeared. Is there any update on this?
I would really love this too. I can't tell you how much time I've wasted trying to debug problems, having to go through a lengthy VM creation process to get to the error again and again. Being able to keep the VM around would be a huge win.
Is there an ETA for when this (or similar functionality) will be merged into main? I'm trying to use Packer to build a VM with Visual Studio installed as part of the base Vagrant box, and I really need it to not destroy the VM before I've had a chance to look at why the steps are failing. Having to acknowledge each step via --debug is not acceptable.
Another vote for this one, as the
I'm blowing so much time trying to debug the final state of the machine before it fails. The -debug switch doesn't cut it: I want it to run through the normal process, then leave the working folder intact after failure so I can diagnose with logs and state. Really looking forward to some sort of preserve-working-state switch.
Another +1 for this feature; it would be immensely helpful.
+1. Running into similar issues where it would be nice to debug the final state, tweak some provisioning scripts, and then run the build again to see if that fixed the process, rather than manually hitting enter on every debug step.
Another +1 for this feature. It would be nice to know what happened to this; no one from the team answered. Go ahead, step up to the plate, it doesn't hurt. LOL! I am totally new to Packer and was at the tail end of a 1.5-hour ISO build when this happened. Testing and debugging should be paramount to bringing a totally sweet application full steam.
+1 here as well. We create our images headless, so having --debug require manual stepping-through is no good for us, but being able to inspect the faulty image would be great.
👍 I'd like to have this feature too.
+1 This feature would be great!
For those who share my goal of compiling the latest packer dev release while also integrating orivej's earlier fix that pauses on the first failure of packer build, here are the steps I took that worked for me.
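The steps themselves aren't reproduced here. Purely as an illustration, a merge-and-build flow of roughly this shape might be what's meant (the branch name and make target are assumptions, not confirmed by the thread):

```sh
# Hypothetical reconstruction: build packer from master with orivej's
# fix merged in. Branch name and make target are assumed.
git clone https://github.com/mitchellh/packer.git
cd packer
git remote add orivej https://github.com/orivej/packer.git
git fetch orivej
git merge orivej/debug-on-error   # assumed branch name
make dev                          # assumed target for building a dev binary
```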
I can confirm that this worked for me, and provisioning only paused when there was an error. I was not able to successfully merge https://github.com/orivej/packer/tree/debug-on-error-2. I'm curious: I'm fairly new to packer, git, and this issue; is there some other way people have been implementing orivej's fixes than how I have described? I may be missing something very obvious, so please clue me in if that is the case.
Just checking on the state of this issue. Is it that @orivej's changes address this issue and a pull request needs to be made? Or does this still need to be addressed?
+1
It would be really useful; right now I'm using an inline shell with
Imho
@noose - I don't sit and watch the build; there are some very long-running sections (like installing SQL Server) that I wouldn't want it to hold up on for user input. I would like to kick off a test build and, when I come back to it, have something I can debug with minimal effort.
IMHO the -debug flag is totally useless. I'm running complicated builds, and I really don't have the patience to press enter a thousand times until I get to the issue.
@henris42 while I agree with you on the uselessness of
@noose, I automate the packer build in a Jenkins job (which pulls the config/scripts and Ansible playbooks from Git). Using packer this way, an interactive mode is not useful; post-failure analysis is much more useful.
Seems like everyone needs this. Building these AMIs is error-prone, and this feature would make them less time-consuming to troubleshoot.
I agree with @worstadmin. In the case of building Vagrant boxes, you can tackle the problem from multiple angles (e.g. keep the virtual machine around, try things with the null provisioner, etc.), whereas Amazon images are a special breed and very tiresome to debug when there is an issue. Combined with #1687 this would be great. Additionally, it is often helpful to ignore errors from the provisioners and let the build continue, especially during the early stages of developing an image.
Almost 3 years later... and still almost nothing. I've spent the last few days smashing my head on a keyboard trying to do complex Windows builds which arbitrarily and randomly fail executing powershell scripts with no output, and because of the auto-cleanup I can't jump onto the instance. When I run with -debug enabled, the extra "pauses" introduced by requiring manual entry seem to make the problem not occur. You'd think that means adding a ton of sleeps into my powershell scripts to simulate this would help, but it does not. Not even lying, I'll PayPal someone a bounty of $100 if they can seriously make a --no-destroy-on-error feature ASAP and get the ball rolling on a PR for this. I (and it seems like hundreds of others) need this feature, especially considering that packer is usually used with automation in mind (via CI/CD/etc). So here's my long +1 and plea.
Hey, there could be a workaround for a shell provisioner; I have no idea about other provisioners though. 😿
I had it almost working today, but, still learning Go, I didn't expect to land in metaprogramming hell again, chasing the interface through several files to find the implementation :(
Check out my current proposal at #3885, which already looks good to me!
As a workaround until there's a new packer release which contains #3885:
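The exact snippet isn't preserved here; one plausible shape for it is a final inline shell command that sleeps on failure (the script path is a placeholder, and 14400 seconds matches the four-hour window mentioned next):

```sh
# Last inline command of a shell provisioner (sketch, not the exact
# workaround from this thread): on failure, hold the VM open for four
# hours, then still fail the build.
/tmp/provision.sh || { echo "failed; holding VM open for debugging"; sleep 14400; exit 1; }
```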
You then have 4 hours to ssh into the still-running VM and poke around. |
What the hell is going on here?
Are we to conclude that the US government has gag-ordered HashiCorp and told them not to fix this, or something? I'm having a hard time coming up with alternative explanations. I've had the impression that HashiCorp's tools are a good choice for DevOpsy stuff overall, but now I'm having second thoughts. Seriously. Are we all missing something obvious here, or is HashiCorp just being super shady?
The reason this ticket is closed is that the problem has already been fixed: add the -on-error=abort flag, and you can then ssh into the VM and poke around.
@peterlindstrom234, this has already been implemented. You can use "-on-error=abort" and packer shouldn't perform any cleanup when an error occurs.
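For reference, the flag goes on the packer build command line (the template name here is illustrative):

```sh
# Abort without cleanup on error, leaving everything in place to inspect.
packer build -on-error=abort template.json
```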
Alright, my bad. It sure took strangely long though.
@peterlindstrom234 it took long because of the US-gov't gag order
It would appear that an error exit code from postinstall.sh is enough to totally wipe out the generated boxes. It would be useful to keep them around to manually manipulate while working on them. The -debug switch can be used for this, but it's not really ideal since you basically have to know the appropriate step (stepCreateVM) to wait at.

See also: hashicorp/vagrant#2011