
Add support for AWS instance metadata v2 #8062

Merged 3 commits into master on Jan 10, 2020

Conversation

@tyrannosaurus-becks (Contributor) commented on Dec 20, 2019

Relates to #7924

This PR adds support for using AWS instance metadata service v2 (IMDSv2) in the AWS auth agent. It introduces a utility called InstanceMetadataService that centralizes and encapsulates Vault's knowledge of the service, making it easier to update in the future.

@joelthompson (Contributor) left a comment


Hey @tyrannosaurus-becks -- it's great that you're doing this, but I wonder if the SDK doesn't provide functionality for some of this out of the box so you don't have to worry about managing and maintaining it?

@tyrannosaurus-becks (Contributor, Author) commented on Dec 20, 2019

@joelthompson thanks for looking at this!

Hey @tyrannosaurus-becks -- it's great that you're doing this, but I wonder if the SDK doesn't provide functionality for some of this out of the box so you don't have to worry about managing and maintaining it?

I don't see anything exporting information about the EC2 metadata service, like the base URL, here. Can you elaborate on the approach you're suggesting?

@tyrannosaurus-becks tyrannosaurus-becks marked this pull request as ready for review December 20, 2019 17:40
@tyrannosaurus-becks tyrannosaurus-becks requested a review from a team December 20, 2019 17:40
@tyrannosaurus-becks (Contributor, Author) commented on Jan 9, 2020

Test failures appear unrelated. Here's additional test output:

=== RUN TestAWSEndToEnd
2020-01-09T13:55:16.542-0800 [DEBUG] storage.cache: creating LRU cache: size=0
2020-01-09T13:55:16.543-0800 [DEBUG] storage.cache: creating LRU cache: size=0
2020-01-09T13:55:16.543-0800 [DEBUG] storage.cache: creating LRU cache: size=0
2020-01-09T13:55:16.543-0800 [ERROR] core: no seal config found, can't determine if legacy or new-style shamir
2020-01-09T13:55:16.543-0800 [ERROR] core: no seal config found, can't determine if legacy or new-style shamir
2020-01-09T13:55:16.543-0800 [ERROR] core: no seal config found, can't determine if legacy or new-style shamir
2020-01-09T13:55:16.543-0800 [INFO]  core: security barrier not initialized
2020-01-09T13:55:16.543-0800 [INFO]  core: security barrier initialized: stored=1 shares=3 threshold=3
2020-01-09T13:55:16.543-0800 [DEBUG] core: cluster name not found/set, generating new
2020-01-09T13:55:16.543-0800 [DEBUG] core: cluster name set: name=vault-cluster-4d062359
2020-01-09T13:55:16.543-0800 [DEBUG] core: cluster ID not found, generating new
2020-01-09T13:55:16.543-0800 [DEBUG] core: cluster ID set: id=72214e17-52a6-98a3-089a-c2ac0c642ce3
2020-01-09T13:55:16.543-0800 [DEBUG] core: generating cluster private key
2020-01-09T13:55:16.553-0800 [DEBUG] core: generating local cluster certificate
2020-01-09T13:55:16.561-0800 [INFO]  core: post-unseal setup starting
2020-01-09T13:55:16.561-0800 [DEBUG] core: clearing forwarding clients
2020-01-09T13:55:16.561-0800 [DEBUG] core: done clearing forwarding clients
2020-01-09T13:55:16.570-0800 [INFO]  core: loaded wrapping token key
2020-01-09T13:55:16.570-0800 [INFO]  core: successfully setup plugin catalog: plugin-directory=
2020-01-09T13:55:16.570-0800 [INFO]  core: no mounts; adding default mount table
2020-01-09T13:55:16.571-0800 [INFO]  core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-01-09T13:55:16.571-0800 [INFO]  core: successfully mounted backend: type=system path=sys/
2020-01-09T13:55:16.571-0800 [INFO]  core: successfully mounted backend: type=identity path=identity/
2020-01-09T13:55:16.572-0800 [INFO]  core: successfully enabled credential backend: type=token path=token/
2020-01-09T13:55:16.572-0800 [INFO]  core: restoring leases
2020-01-09T13:55:16.572-0800 [INFO]  rollback: starting rollback manager
2020-01-09T13:55:16.573-0800 [DEBUG] expiration: collecting leases
2020-01-09T13:55:16.573-0800 [DEBUG] expiration: leases collected: num_existing=0
2020-01-09T13:55:16.573-0800 [INFO]  expiration: lease restore complete
2020-01-09T13:55:16.574-0800 [DEBUG] identity: loading entities
2020-01-09T13:55:16.574-0800 [DEBUG] identity: entities collected: num_existing=0
2020-01-09T13:55:16.574-0800 [INFO]  identity: entities restored
2020-01-09T13:55:16.574-0800 [DEBUG] identity: identity loading groups
2020-01-09T13:55:16.574-0800 [DEBUG] identity: groups collected: num_existing=0
2020-01-09T13:55:16.574-0800 [INFO]  identity: groups restored
2020-01-09T13:55:16.574-0800 [INFO]  core: post-unseal setup complete
2020-01-09T13:55:16.574-0800 [INFO]  core: root token generated
2020-01-09T13:55:16.574-0800 [INFO]  core: pre-seal teardown starting
2020-01-09T13:55:16.574-0800 [DEBUG] expiration: stop triggered
2020-01-09T13:55:16.574-0800 [DEBUG] expiration: finished stopping
2020-01-09T13:55:16.574-0800 [INFO]  rollback: stopping rollback manager
2020-01-09T13:55:16.574-0800 [INFO]  core: pre-seal teardown complete
2020-01-09T13:55:16.575-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.575-0800 [DEBUG] core: cannot unseal, not enough keys: keys=1 threshold=3 nonce=69b317f0-3f8d-5bf9-0a8e-4021d54e140f
2020-01-09T13:55:16.575-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.575-0800 [DEBUG] core: cannot unseal, not enough keys: keys=2 threshold=3 nonce=69b317f0-3f8d-5bf9-0a8e-4021d54e140f
2020-01-09T13:55:16.575-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.575-0800 [DEBUG] core: starting cluster listeners
2020-01-09T13:55:16.575-0800 [INFO]  core.cluster-listener: starting listener: listener_address=127.0.0.1:0
2020-01-09T13:55:16.575-0800 [INFO]  core.cluster-listener: serving cluster requests: cluster_listen_address=127.0.0.1:44089
2020-01-09T13:55:16.575-0800 [INFO]  core: vault is unsealed
2020-01-09T13:55:16.575-0800 [INFO]  core: entering standby mode
2020-01-09T13:55:16.575-0800 [INFO]  core: acquired lock, enabling active operation
2020-01-09T13:55:16.575-0800 [DEBUG] core: generating cluster private key
2020-01-09T13:55:16.583-0800 [DEBUG] core: generating local cluster certificate
2020-01-09T13:55:16.593-0800 [INFO]  core: post-unseal setup starting
2020-01-09T13:55:16.593-0800 [DEBUG] core: clearing forwarding clients
2020-01-09T13:55:16.593-0800 [DEBUG] core: done clearing forwarding clients
2020-01-09T13:55:16.593-0800 [INFO]  core: loaded wrapping token key
2020-01-09T13:55:16.593-0800 [INFO]  core: successfully setup plugin catalog: plugin-directory=
2020-01-09T13:55:16.593-0800 [INFO]  core: successfully mounted backend: type=system path=sys/
2020-01-09T13:55:16.593-0800 [INFO]  core: successfully mounted backend: type=identity path=identity/
2020-01-09T13:55:16.593-0800 [INFO]  core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-01-09T13:55:16.594-0800 [INFO]  core: successfully enabled credential backend: type=token path=token/
2020-01-09T13:55:16.594-0800 [INFO]  core: restoring leases
2020-01-09T13:55:16.594-0800 [INFO]  rollback: starting rollback manager
2020-01-09T13:55:16.594-0800 [DEBUG] expiration: collecting leases
2020-01-09T13:55:16.594-0800 [DEBUG] expiration: leases collected: num_existing=0
2020-01-09T13:55:16.594-0800 [DEBUG] identity: loading entities
2020-01-09T13:55:16.594-0800 [DEBUG] identity: entities collected: num_existing=0
2020-01-09T13:55:16.594-0800 [INFO]  identity: entities restored
2020-01-09T13:55:16.594-0800 [DEBUG] identity: identity loading groups
2020-01-09T13:55:16.594-0800 [DEBUG] identity: groups collected: num_existing=0
2020-01-09T13:55:16.594-0800 [INFO]  identity: groups restored
2020-01-09T13:55:16.594-0800 [DEBUG] core: request forwarding setup function
2020-01-09T13:55:16.594-0800 [DEBUG] core: clearing forwarding clients
2020-01-09T13:55:16.594-0800 [DEBUG] core: done clearing forwarding clients
2020-01-09T13:55:16.595-0800 [DEBUG] core: leaving request forwarding setup function
2020-01-09T13:55:16.595-0800 [INFO]  core: post-unseal setup complete
2020-01-09T13:55:16.594-0800 [INFO]  expiration: lease restore complete
2020-01-09T13:55:16.596-0800 [INFO]  core: successful mount: namespace= path=secret/ type=kv
2020-01-09T13:55:16.596-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.596-0800 [DEBUG] core: cannot unseal, not enough keys: keys=1 threshold=3 nonce=c9436bf5-267b-f91a-f0a0-e163664e0a28
2020-01-09T13:55:16.596-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.596-0800 [DEBUG] core: cannot unseal, not enough keys: keys=2 threshold=3 nonce=c9436bf5-267b-f91a-f0a0-e163664e0a28
2020-01-09T13:55:16.596-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.596-0800 [DEBUG] core: starting cluster listeners
2020-01-09T13:55:16.596-0800 [INFO]  core.cluster-listener: starting listener: listener_address=127.0.0.1:0
2020-01-09T13:55:16.597-0800 [INFO]  core.cluster-listener: serving cluster requests: cluster_listen_address=127.0.0.1:34733
2020-01-09T13:55:16.597-0800 [INFO]  core: vault is unsealed
2020-01-09T13:55:16.597-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.597-0800 [INFO]  core: entering standby mode
2020-01-09T13:55:16.597-0800 [DEBUG] core: cannot unseal, not enough keys: keys=1 threshold=3 nonce=630a8068-05ea-d898-4215-7a861281e167
2020-01-09T13:55:16.597-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.597-0800 [DEBUG] core: cannot unseal, not enough keys: keys=2 threshold=3 nonce=630a8068-05ea-d898-4215-7a861281e167
2020-01-09T13:55:16.597-0800 [DEBUG] core: unseal key supplied
2020-01-09T13:55:16.597-0800 [DEBUG] core: starting cluster listeners
2020-01-09T13:55:16.597-0800 [INFO]  core.cluster-listener: starting listener: listener_address=127.0.0.1:0
2020-01-09T13:55:16.597-0800 [INFO]  core.cluster-listener: serving cluster requests: cluster_listen_address=127.0.0.1:37669
2020-01-09T13:55:16.597-0800 [INFO]  core: vault is unsealed
2020-01-09T13:55:16.597-0800 [INFO]  core: entering standby mode
2020-01-09T13:55:18.597-0800 [TRACE] core: found new active node information, refreshing
2020-01-09T13:55:18.597-0800 [DEBUG] core: parsing information for new active node: active_cluster_addr=https://127.0.0.1:44089 active_redirect_addr=https://127.0.0.1:43915
2020-01-09T13:55:18.597-0800 [DEBUG] core: refreshing forwarding connection
2020-01-09T13:55:18.597-0800 [DEBUG] core: clearing forwarding clients
2020-01-09T13:55:18.597-0800 [DEBUG] core: done clearing forwarding clients
2020-01-09T13:55:18.597-0800 [DEBUG] core: done refreshing forwarding connection
2020-01-09T13:55:18.597-0800 [TRACE] core: found new active node information, refreshing
2020-01-09T13:55:18.597-0800 [DEBUG] core: parsing information for new active node: active_cluster_addr=https://127.0.0.1:44089 active_redirect_addr=https://127.0.0.1:43915
2020-01-09T13:55:18.597-0800 [DEBUG] core: refreshing forwarding connection
2020-01-09T13:55:18.597-0800 [DEBUG] core: clearing forwarding clients
2020-01-09T13:55:18.597-0800 [DEBUG] core: done clearing forwarding clients
2020-01-09T13:55:18.597-0800 [DEBUG] core: creating rpc dialer: host=fw-def35221-e34d-7e1b-ffae-342bc41623d1
2020-01-09T13:55:18.597-0800 [DEBUG] core: done refreshing forwarding connection
2020-01-09T13:55:18.598-0800 [DEBUG] core: creating rpc dialer: host=fw-def35221-e34d-7e1b-ffae-342bc41623d1
2020-01-09T13:55:18.598-0800 [DEBUG] core.cluster-listener: performing server cert lookup
2020-01-09T13:55:18.600-0800 [INFO]  core: enabled audit backend: path=noop/ type=noop
2020-01-09T13:55:18.607-0800 [DEBUG] auth.aws.auth_aws_c570980f.initialize: starting initialization
2020-01-09T13:55:18.607-0800 [INFO]  core: enabled credential backend: path=aws/ type=aws
2020-01-09T13:55:18.608-0800 [INFO]  auth.aws.auth_aws_c570980f.initialize: an upgrade was performed during initialization
2020-01-09T13:55:18.648-0800 [DEBUG] core.cluster-listener: performing client cert lookup
2020-01-09T13:55:18.683-0800 [DEBUG] core.request-forward: got request forwarding connection
2020-01-09T13:55:18.683-0800 [DEBUG] core.cluster-listener: performing server cert lookup
2020-01-09T13:55:18.711-0800 [DEBUG] core.cluster-listener: performing client cert lookup
2020-01-09T13:55:18.738-0800 [DEBUG] core.request-forward: got request forwarding connection
2020-01-09T13:55:19.194-0800 [INFO]  auth.handler: starting auth handler
2020-01-09T13:55:19.194-0800 [INFO]  auth.handler: authenticating
2020-01-09T13:55:19.194-0800 [TRACE] auth.aws: beginning authentication
2020-01-09T13:55:19.195-0800 [INFO]  sink.file: creating file sink
2020-01-09T13:55:19.195-0800 [TRACE] sink.file: enter write_token: path=/tmp/auth.tokensink.test.125709150
2020-01-09T13:55:19.195-0800 [TRACE] sink.file: exit write_token: path=/tmp/auth.tokensink.test.125709150
2020-01-09T13:55:19.195-0800 [INFO]  sink.file: file sink configured: path=/tmp/auth.tokensink.test.125709150 mode=-rw-r-----
2020-01-09T13:55:19.195-0800 [INFO]  sink.server: starting sink server
2020-01-09T13:55:20.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:20.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:21.122-0800 [DEBUG] identity: creating a new entity: alias="id:"af2d5745-8e48-2e0a-9472-6d0b8f7d619a" canonical_id:"bc4bb1fa-a8d1-a13a-29b0-ba0e945118a3" mount_type:"aws" mount_accessor:"auth_aws_c570980f" mount_path:"auth/aws/" name:"ca6ee915-820d-7b7d-6678-29d024409ee5" creation_time:<seconds:1578606921 nanos:122220127 > last_update_time:<seconds:1578606921 nanos:122220127 > namespace_id:"root" "
2020-01-09T13:55:21.124-0800 [INFO]  auth.handler: authentication successful, sending token to sinks
2020-01-09T13:55:21.124-0800 [INFO]  auth.handler: starting renewal process
2020-01-09T13:55:21.130-0800 [TRACE] sink.file: enter write_token: path=/tmp/auth.tokensink.test.125709150
2020-01-09T13:55:21.130-0800 [INFO]  sink.file: token written: path=/tmp/auth.tokensink.test.125709150
2020-01-09T13:55:21.130-0800 [TRACE] sink.file: exit write_token: path=/tmp/auth.tokensink.test.125709150
2020-01-09T13:55:21.131-0800 [INFO]  auth.handler: renewed auth token
2020-01-09T13:55:21.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:21.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:22.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:22.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:23.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:23.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:24.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:24.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:25.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:25.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:26.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:26.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:27.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:27.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:28.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:28.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:29.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:29.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:30.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:30.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:31.130-0800 [INFO]  expiration: revoked lease: lease_id=sys/wrapping/wrap/hed82c3b8e53e0fc5dba6c36eb597e589222b3952867123801666a3e6ed5179b0
2020-01-09T13:55:31.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:31.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:32.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:32.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:33.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:33.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:34.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:34.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:35.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:35.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:36.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:36.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:37.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:37.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:38.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:38.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:39.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:39.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:40.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:40.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:41.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:41.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:42.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:42.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:43.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:43.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:44.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:44.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:45.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:45.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:46.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:46.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:47.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:47.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:48.195-0800 [TRACE] auth.aws: checking for new credentials
2020-01-09T13:55:48.195-0800 [TRACE] auth.aws: credentials are unchanged and still valid
2020-01-09T13:55:48.613-0800 [INFO]  auth.handler: shutdown triggered, stopping lifetime watcher
2020-01-09T13:55:48.613-0800 [INFO]  auth.handler: auth handler stopped
2020-01-09T13:55:48.613-0800 [TRACE] auth.aws: shutdown triggered, stopping aws auth handler
2020-01-09T13:55:48.613-0800 [INFO]  sink.server: sink server stopped
2020-01-09T13:55:48.613-0800 [INFO]  TestAWSEndToEnd: cleaning up vault cluster
--- PASS: TestAWSEndToEnd (33.72s)
    aws_end_to_end_test.go:131: output: /tmp/auth.tokensink.test.125709150
PASS

Process finished with exit code 0

@tyrannosaurus-becks (Contributor, Author) commented

@joelthompson

Hey @tyrannosaurus-becks -- it's great that you're doing this, but I wonder if the SDK doesn't provide functionality for some of this out of the box so you don't have to worry about managing and maintaining it?

You were right! I'd thought all the SDK methods returned objects rather than raw bodies, but GetDynamicData turns out to return the body as a string. Thanks for the tip; I'd been unaware of that method.

@kalafut (Contributor) left a comment


LGTM! Nice to see the manual HTTP calls getting replaced.

@tyrannosaurus-becks tyrannosaurus-becks merged commit 820dfaf into master Jan 10, 2020
@tyrannosaurus-becks tyrannosaurus-becks deleted the use-aws-instance-metadata-v2 branch January 10, 2020 17:31