
Terraform prompts several times to enter provider.aws.region when a certain number of nested modules is reached #8680

Closed
martin-cossy-atwork opened this issue Sep 6, 2016 · 11 comments

Comments

martin-cossy-atwork commented Sep 6, 2016

EDIT: To reproduce this issue, check out https://github.com/martin-flaregames/terraform-aws-region-prompt and follow the instructions in its README.md.

EDIT: (!) Please note that the prompt for provider.aws.region only occurs if your ~/.aws credentials configuration is missing the default region entry. Otherwise, Terraform may silently read and use those credentials instead of the ones declared in the AWS provider.

Terraform is prompting me to manually specify provider.aws.region although it is already configured and working properly. This occurs when my project uses a sufficiently deep or complex module structure: a module that uses a module that uses a module, and so on.
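
For reference, the provider configuration amounts to a standard block along these lines (the region value here is only a placeholder for the real one):

    provider "aws" {
      region = "eu-west-1"    # placeholder; the actual configuration uses a valid region
    }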

If I do enter the region, there is a second problem: the provider fails with an error saying that a specific VPC ID cannot be found, although it exists and was created by the provider itself only seconds earlier.

Oddly, the problem disappears if I reduce the number of nested modules. See below.

Terraform Version

Terraform v0.7.2
Terraform v0.7.3
Terraform v0.7.4
Terraform v0.7.5
Terraform v0.7.6

Affected Resource(s)

  • provider aws?

Expected Behavior

The provider should be configured once and work in all child modules.

Actual Behavior

It fails once a few levels of nested modules include other modules.

Steps to Reproduce

  1. rm -rf .terraform/modules
  2. terraform get
  3. terraform plan
  4. Here I get the prompts to enter the region; if I do, the plan works all right
  5. terraform apply
  6. Again the prompts to enter the region; if I do, Terraform starts applying
  7. Terraform then reports that a VPC ID cannot be found

Steps to Reproduce (detailed)

The relevant part of the project folder structure looks like this

/
    security/
        group/
        ingress/
        allow/
        realm/
    opsworks/
        stack/
        app/
        layer/
            mongo/
    main/

The main test module is located in /main/ and the nested module calls occur in this order

from main/

  • module "mongolayer" { source = "../opsworks/layer/mongo" }

from opsworks/layer/mongo/

  • module "realm" { source = "../../../security/realm" }

from security/realm/

  • module "securitygroup" { source = "../group" }
  • module "allow" { source = "../allow" }
  • module "allowself" { source = "../allow" }

The number of times the region input is requested matches the number of module blocks used in the "realm" module.
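
Condensed into a single sketch (file names and layout are assumed; all other arguments are omitted), the chain looks like this:

    # main/ (root test module)
    module "mongolayer" { source = "../opsworks/layer/mongo" }

    # opsworks/layer/mongo/
    module "realm" { source = "../../../security/realm" }

    # security/realm/
    module "securitygroup" { source = "../group" }
    module "allow"         { source = "../allow" }
    module "allowself"     { source = "../allow" }

The three module blocks inside security/realm/ line up with the three region prompts shown below.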

As described above, when I run rm -rf .terraform/modules; terraform get; terraform plan, Terraform asks for the provider region several times:

Get: file:///home/martin/devops/git/fg-infrastructure/devops/stackname
Get: file:///home/martin/devops/git/fg-infrastructure/devops/vpc/autoaddr
Get: file:///home/martin/devops/git/fg-infrastructure/devops/opsworks/stack
Get: file:///home/martin/devops/git/fg-infrastructure/devops/elb/https
Get: file:///home/martin/devops/git/fg-infrastructure/devops/security/group
Get: file:///home/martin/devops/git/fg-infrastructure/devops/opsworks/layer/mongo
Get: file:///home/martin/devops/git/fg-infrastructure/devops/vpc
Get: file:///home/martin/devops/git/fg-infrastructure/devops/elb/2listeners
Get: file:///home/martin/devops/git/fg-infrastructure/devops/security/group
Get: file:///home/martin/devops/git/fg-infrastructure/devops/route53/elbalias
Get: file:///home/martin/devops/git/fg-infrastructure/devops/security/ingress
Get: file:///home/martin/devops/git/fg-infrastructure/devops/alarm/commonelbalarms
Get: file:///home/martin/devops/git/fg-infrastructure/devops/alarm/elbalarm
Get: file:///home/martin/devops/git/fg-infrastructure/devops/alarm/elbalarm
Get: file:///home/martin/devops/git/fg-infrastructure/devops/alarm/elbalarm
Get: file:///home/martin/devops/git/fg-infrastructure/devops/alarm/elbalarm
Get: file:///home/martin/devops/git/fg-infrastructure/devops/security/realm
Get: file:///home/martin/devops/git/fg-infrastructure/devops/security/group
Get: file:///home/martin/devops/git/fg-infrastructure/devops/security/allow
Get: file:///home/martin/devops/git/fg-infrastructure/devops/security/allow
provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: 

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: 

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: 

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

module.elb.elb.data.template_file.ping_protocol: Refreshing state...
module.vpc.vpc.data.template_file.indexed_vpc_address: Refreshing state...
module.vpc.vpc.data.template_file.cidr_block_mask: Refreshing state...
...
...
...

The plan works as expected. After executing terraform apply I get the prompts again:

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: 

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: 

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: 

module.elb.elb.data.template_file.ping_port: Refreshing state...
module.elb.elb.data.template_file.ping_protocol: Refreshing state...
...
...
...

However, the provider still fails:

...
...
..
module.elb.elb.route53aliases.aws_route53_record.record_AAAA: Still creating... (30s elapsed)
module.elb.elb.route53aliases.aws_route53_record.record_A: Still creating... (30s elapsed)
module.elb.elb.route53aliases.aws_route53_record.record_A: Creation complete
module.elb.elb.route53aliases.aws_route53_record.record_AAAA: Creation complete
Error applying plan:

1 error(s) occurred:

* aws_security_group.security_group: Error creating Security Group: InvalidVpcID.NotFound: The vpc ID 'vpc-8df7a3ea' does not exist
    status code: 400, request id: d75bfd5e-316d-48d1-959e-84bbf8f027a2

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Yet that VPC had been created by the AWS provider only moments earlier.

Current workaround

If I copy the contents of security/realm/ directly into the opsworks/layer/mongo/ module, everything runs fine. In other words, reducing the number of nested modules makes it work again.
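
A rough sketch of the flattened version (relative paths adjusted for the new location; the actual copied contents may differ):

    # opsworks/layer/mongo/ now instantiates the realm's children directly
    module "securitygroup" { source = "../../../security/group" }
    module "allow"         { source = "../../../security/allow" }
    module "allowself"     { source = "../../../security/allow" }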

References

Seems to be related to #4443.

@martin-cossy-atwork (Author)

Well, this is strange. I just upgraded from 0.7.2 to 0.7.3 and the problem disappeared. I cannot tell why, even after looking at the changelog.

I will keep working on my "complex" modules for a few hours, and if the problem does not occur again I will close this ticket.

@martin-cossy-atwork (Author)

The problem is still there. Right now I'm alternating between two almost identical modules (redis/memcached): one of them makes Terraform prompt for the region and the other does not. I cannot see any difference between them. I will try to gather more information in order to create a test case.

@martin-cossy-atwork (Author)

I now have a configuration in which I can reproduce the problem, but I cannot tell what the cause is. I'll try to produce a simplified test case soon. This may take time because as of tomorrow I will be on vacation for two weeks, so I won't be able to work on this until then.

@martin-cossy-atwork (Author)

I have an example that reproduces this problem; please check https://github.com/martin-flaregames/terraform-aws-region-prompt.

@mitchellh (Contributor)

This is fixed on master (0.8), and we added a number of tests around it. :)

@martin-cossy-atwork (Author)

Well, the test provided at github.com/martin-flaregames/terraform-aws-region-prompt is still failing; I guess I must open another ticket with a more focused description.

@mitchellh (Contributor)

Sorry @martin-flaregames, I'm pretty sure I ran your example and it passed, so I must have missed something. I apologize! I'll look at the new issue and move the discussion over there.

@papetti23

Any update on this? I'm using the same folder structure to modularize but it keeps asking me for the region as well.


joshma commented Apr 11, 2017

Currently on v0.9.1 and seeing the same issue. terraform plan passes, after prompting me twice (once per module that uses the provider). However, terraform apply fails:

2 error(s) occurred:

* module.A.module.B.provider.aws.ohio: "region": required field is not set
* module.A.module.C.provider.aws.ohio: "region": required field is not set
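
For context, a stripped-down sketch of the kind of layout involved (module names are taken from the errors above; the resource, paths and region are placeholders):

    # root module: the aliased provider is configured here
    provider "aws" {
      alias  = "ohio"
      region = "us-east-2"
    }

    module "A" { source = "./A" }    # module A instantiates modules B and C

    # inside modules B and C, resources reference the alias
    resource "aws_sns_topic" "example" {
      name     = "example"
      provider = "aws.ohio"    # per the errors above, each nested copy of
                               # aws.ohio ends up with no region set
    }

Presumably, repeating the aliased provider block inside each child module (or passing the region in as a variable) would work around this, at the cost of duplicated configuration.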

@martin-cossy-atwork (Author)

@joshma This ticket is closed, as is the follow-up ticket #10722, which according to my bug report is correctly resolved. Please provide a simple example so that the devs at HashiCorp can understand and reproduce your problem, then open a new issue explaining how to reproduce the error.


ghost commented Apr 14, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 14, 2020