Terraform nomad_job throwing "job stanza not found" error during terraform plan when we have made no code change #92

Comments
Hi @wfeng-fsde, thanks for the report. Was there an upgrade (of Terraform or the Nomad provider) that might have caused this?
@cgbaker Teammate of @wfeng-fsde here. We ran into this issue with Terraform 0.12.18 first, I believe. Then we tried the latest Terraform 0.12.20, but ran into the same issue with both versions. On the Nomad provider side, since the provider version spec is "~> 1.4", I am not sure whether we were using the same version earlier, but I now see we're pulling v1.4.2 of the Nomad Terraform provider.
I'm not getting the same error as you, but I'm getting something similar. Using the following versions:

and a shortened template based on the one included above, I get the following error:

The reason is that the [...]

There are a few things I don't understand: why you're getting a different error message, and why this worked before. Is that the entire template pasted above (perhaps the copy above is missing the first line)? If not, can you provide the full template?
I also ran into this bug with Terraform v0.12.24 + provider.nomad v1.4.5. We are able to reproduce it 100% of the time. Here is my job template:

job "docs" {
datacenters = ["test"]
group "example" {
meta {
date_time = "${deploy_timestamp}"
}
task "server" {
driver = "raw_exec"
template {
destination = "local/sample.conf"
data = "sample"
}
}
}
}

and this is main.tf:

locals {
template_vars = {
deploy_timestamp = formatdate("DD-MM-YY hh-mm ZZZ", timestamp())
}
}
resource "nomad_job" "test" {
jobspec = templatefile("${path.module}/job.hcl.tpl", local.template_vars)
deregister_on_destroy = true
deregister_on_id_change = true
}

It works fine on first apply, but after nomad_job is added to the state file, the refresh of the state fails to parse the job spec with the error "'job' stanza not found".
Tried replacing timestamp with random_id (via the random provider) and got the same result. Moved deploy_timestamp from meta into the template and also got the same result. It seems that any dynamic value that changes between apply commands triggers the bug. At the same time, if deploy_timestamp is set via a Terraform variable that we change between applies, it works fine; see the sketch below.
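For illustration, a minimal sketch of that variable-based workaround, assuming the same file layout as the reproduction above (the variable name here is illustrative, not from the thread):

# Pass the timestamp in as a variable whose value is known at plan time,
# instead of computing it with timestamp() during the plan.
variable "deploy_timestamp" {
  type = string
}

locals {
  template_vars = {
    deploy_timestamp = var.deploy_timestamp
  }
}

resource "nomad_job" "test" {
  jobspec                 = templatefile("${path.module}/job.hcl.tpl", local.template_vars)
  deregister_on_destroy   = true
  deregister_on_id_change = true
}

It can then be applied with, for example, terraform apply -var "deploy_timestamp=$(date '+%d-%m-%y %H-%M %Z')".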
Thank you for the detailed reproduction. I will check this out again.
Even if the generated value doesn't change 🤔 [...]
jobspec = templatefile("${path.module}/test.nomad", {
date = formatdate("YYYY-MM-DD", timestamp())
})
}

So the generated value today was [...]
@cgbaker So I quickly checked and here: As soon as there is a [...] I tried to add

if !d.NewValueKnown("jobspec") {
  return nil
}

just before [...] and [...]
~ jobspec = <<~EOT
job "docs" {
datacenters = ["dc1"]
group "example" {
meta {
date = "2020-04-29 48"
}
task "server" {
driver = "docker"
config {
image = "nginx"
}
}
}
}
EOT -> (known after apply)
[...]

Which indeed says "(known after apply)". The problem is that in this case, there will be a diff on every apply. But I think there's no choice, actually...
I believe this is fair, and in my case, this was expected and desired behaviour.
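As an aside that does not come from this thread: where such a perpetual diff is unwanted, Terraform's lifecycle meta-argument can suppress it, at the cost of also hiding genuine edits to the rendered jobspec. A minimal sketch:

resource "nomad_job" "test" {
  jobspec = templatefile("${path.module}/job.hcl.tpl", local.template_vars)

  lifecycle {
    # Caution: this also ignores real changes to the rendered jobspec,
    # so the job would then only be updated by tainting or recreating
    # the resource.
    ignore_changes = [jobspec]
  }
}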
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Terraform Version
Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade, because your issue may have already been fixed. This command will also output the provider version; please include that as well.
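With the versions mentioned earlier in this thread, the output would look something like the following (illustrative, from an initialized working directory):

Terraform v0.12.24
+ provider.nomad v1.4.5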
Nomad Version
Run nomad server members on your target node to view which version of Nomad is running. Make sure to include the entire version output.

Provider Configuration
Which values are you setting in the provider configuration?
Environment Variables
Do you have any Nomad-specific environment variables set on the machine running Terraform?
Nothing.
Affected Resource(s)
Please list the resources as a list, for example:
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
Terraform Configuration Files
[...] and the template is: [...]
Debug Output
Please provide a link to a GitHub Gist containing the complete debug output (see https://www.terraform.io/docs/internals/debugging.html). Please do NOT paste the debug output in the issue; just paste a link to the Gist.
During terraform plan, I get the following error:

Panic Output
If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior
What should have happened?
We have not changed this code for quite a long time, and our infra has been up to date with this resource. So I expect terraform plan to pass with no change in this resource and no error output.

Actual Behavior
What actually happened?
We came across this just recently: while we made some changes to some other resources that are totally unrelated, terraform plan gave us this error. So I'm suspecting that this is a provider bug.

Steps to Reproduce
Please list the steps required to reproduce the issue, for example:

terraform apply

This was during terraform plan, and we have not made any code change to this resource.
Is there anything atypical about your accounts that we should know? For example: Do you have ACL enabled? Multi-region deployment?
No
References
Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example: