
Connection Issues with "aws_ssm.py" when S3 bucket is newly created #705

Closed
jkritzen opened this issue Sep 7, 2021 · 8 comments
Labels
bug, module, plugins, python3

Comments


jkritzen commented Sep 7, 2021

Summary

When I create an S3 bucket via Terraform and then run Ansible over AWS SSM, the playbook fails because the SSM connection plugin uses the global S3 bucket DNS name, whose DNS propagation can take up to 24 hours. The boto3 client does not follow the resulting redirect; instead, the redirect breaks the Ansible playbook.
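
For illustration, the redirect can be observed by sending requests to the global and the regional endpoint without following redirects. This is a minimal sketch, not part of the plugin: the bucket name and region are placeholders, and the exact status codes depend on the bucket's permissions, but a 307 with a Location header on the global endpoint indicates the behaviour described above.

# Minimal sketch: compare the global and the regional S3 endpoint for a
# freshly created bucket (bucket name and region are placeholders).
import requests

bucket = "my-fresh-bucket"  # placeholder, not a real bucket name
urls = [
    f"https://{bucket}.s3.amazonaws.com/",               # global endpoint
    f"https://{bucket}.s3.eu-central-1.amazonaws.com/",  # regional endpoint
]

for url in urls:
    # allow_redirects=False keeps a 307 visible instead of silently following it
    resp = requests.head(url, allow_redirects=False)
    print(url, resp.status_code, resp.headers.get("x-amz-bucket-region"))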

Issue Type

Bug Report

Component Name

aws_ssm.py (AWS Session Manager connection plugin)

Ansible Version

ansible 2.9.24
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/jkritzen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.9.6 (default, Jul 16 2021, 00:00:00) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)]

Collection Versions

AWS SDK versions

Name: boto3
Version: 1.17.112
Summary: The AWS SDK for Python
Home-page: https://github.com/boto/boto3
Author: Amazon Web Services
Author-email: None
License: Apache License 2.0
Location: /home/jkritzen/.local/lib/python3.9/site-packages
Requires: jmespath, botocore, s3transfer
Required-by: cloudsplaining, checkov, aws-sam-translator
---
Name: botocore
Version: 1.21.35
Summary: Low-level, data-driven core of boto 3.
Home-page: https://github.com/boto/botocore
Author: Amazon Web Services
Author-email: None
License: Apache License 2.0
Location: /home/jkritzen/.local/lib/python3.9/site-packages
Requires: urllib3, python-dateutil, jmespath
Required-by: s3transfer, cloudsplaining, boto3, awscli

Configuration

$ ansible-config dump --only-changed

OS / Environment

CentOS 7 and 8, Fedora 33, Amazon Linux 2 (the issue is independent of the OS)

Steps to Reproduce

  1. Create an S3 bucket for use with the Ansible AWS collection
  2. Run the following Ansible playbook against the fresh S3 bucket:
- name: check the basics - linux
  hosts: all
  gather_facts: False
  tags: smoke

  vars:
    ansible_connection: community.aws.aws_ssm
    ansible_aws_ssm_region: eu-central-1
    ansible_aws_ssm_bucket_name: "Your Bucket Name"
    content: 'Some test content'
    test_filename: 'test.txt'
    remote_file: "/tmp/{{ test_filename }}"
    local_file: "/tmp/{{ inventory_hostname }}.txt"
  tasks:
  - name: ping
    ping:
    tags: ping
  - name: facts
    setup:
    tags: facts
  - name: command
    command: 'uname -a'
    changed_when: False
    tags: command

Which results in:

TASK [ping] ************************************************************************************
fatal: [i-056d93f5ac0e2e73e]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "", "module_stdout": "  File \"/home/ssm-user/.ansible/tmp/ansible-tmp-1630996611.7029357-2571004-39322398271826/AnsiballZ_ping.py\", line 1\r\r\n    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\r\n    ^\r\r\nSyntaxError: invalid syntax\r\r", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

Expected Results

The "aws_ssm.py" connection module uses a not recommended config wich causes a "HTTP 307 Temporary Redirect response" from the S3 URL:

More info:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingRouting.html#TemporaryRedirection
https://aws.amazon.com/de/premiumsupport/knowledge-center/s3-http-307-response/

To avoid the 307 Temporary Redirect response, send requests only to the Regional endpoint in the same Region as your S3 bucket: https://boto3.amazonaws.com/v1/documentation/api/1.9.42/guide/s3.html

In general, the SDK will handle the decision of what style to use for you, but there are some cases where you may want to set it yourself. For instance, if you have a CORS configured bucket that is only a few hours old, you may need to use path style addressing for generating pre-signed POSTs and URLs until the necessary DNS changes have time to propagate.

Within the SSM connection plugin (aws_ssm.py) it can be fixed by using the "path" addressing style, changing:

        client = session.client(
            service,
            config=Config(signature_version="s3v4")
        )
        return client

to

        client = session.client(
            service,
            config=Config(signature_version="s3v4", s3={'addressing_style': 'path'})
        )
        return client
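
For reference, the same configuration can be exercised outside the plugin to confirm which endpoint the generated URLs are built on. This is a minimal standalone sketch; the bucket name, region, and object key are placeholders and not taken from the plugin:

# Standalone sketch: with path-style addressing the presigned URL is built on
# the regional endpoint rather than the global virtual-hosted one.
import boto3
from botocore.client import Config

session = boto3.Session(region_name="eu-central-1")  # placeholder region
client = session.client(
    "s3",
    config=Config(signature_version="s3v4", s3={"addressing_style": "path"}),
)

url = client.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-fresh-bucket", "Key": "some/key"},  # placeholders
    ExpiresIn=3600,
)
print(url)  # expected form: https://s3.eu-central-1.amazonaws.com/my-fresh-bucket/some/key?...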

Actual Results

With current aws_ssm.py:

<i-056d93f5ac0e2e73e> EXEC curl 'https://s3-777-test-advanced-ssm.s3.amazonaws.com/i-056d93f5ac0e2e73e//home/ssm-user/.ansible/tmp/ansible-tmp-1630996682.7521243-2571157-4138486885801/AnsiballZ_ping.py
fatal: [i-056d93f5ac0e2e73e]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "module_stderr": "",
    "module_stdout": "  File \"/home/ssm-user/.ansible/tmp/ansible-tmp-1630996871.0271783-2571514-241697094330155/AnsiballZ_ping.py\", line 1\r\r\n    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\r\n    ^\r\r\nSyntaxError: invalid syntax\r\r",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

With addressing style "path":

TASK [ping] *******************************************************************************************************************************************************************************************************************************************
<i-056d93f5ac0e2e73e> EXEC curl 'https://s3.eu-central-1.amazonaws.com/s3-777-test-advanced-ssm/i-056d93f5ac0e2e73e//home/ssm-user/.ansible/tmp/ansible-tmp-1630996924.4223547-2571701-237546745523938/AnsiballZ_ping.py?
ok: [i-056d93f5ac0e2e73e] => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "ping": "pong"
}

Code of Conduct

  • I agree to follow the Ansible Code of Conduct
@ansibullbot

Files identified in the description:

If these files are inaccurate, please update the component name section of the description or use the !component bot command.


@ansibullbot added the bug, module, needs_triage, plugins, and python3 labels on Sep 7, 2021
@116davinder
Contributor

116davinder commented Sep 7, 2021

@jkritzen, can you provide a bit more detail, such as why you need to define the ansible_connection: community.aws.aws_ssm variable?

Because once I remove the ansible_connection variable, everything works fine for me.
If I define this variable as you did, I get a different error.

TASK [ping] ************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "failed to find the executable specified /usr/local/bin/session-manager-plugin. Please verify if the executable exists and re-try."}

Ansible supported connections:
https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/connection

@jkritzen
Author

jkritzen commented Sep 8, 2021

@116davinder:
The ansible_connection: community.aws.aws_ssm setting establishes the connection via AWS Session Manager instead of SSH:
https://docs.ansible.com/ansible/latest/collections/community/aws/aws_ssm_connection.html

It uses the S3 bucket for transferring files to the remote VM.

Requirements:

  • The remote EC2 instance must be running the AWS Systems Manager Agent (SSM Agent).
  • The control machine must have the AWS Session Manager plugin installed.
  • The remote EC2 Linux instance must have curl installed.

As mentioned, the connection plugin uses the global URL of the S3 bucket (which needs up to 24 hours to propagate); with the boto3 addressing style "path", the regional URL is generated instead.

I can provide you the patched "aws_ssm.py" file.

Kind regards,
Jörg

@116davinder
Contributor

Hi @jkritzen,
Honestly, I am confused; I will try to recheck this in a day or so with a fresh mind.

If possible, please provide a bit more detail about hosts: all and what your inventory file looks like.

I have tried various known options and I am unable to reproduce your error.
I am using the below-mentioned inventory file to connect with the EC2 instance.
ref:
https://docs.ansible.com/ansible/latest/collections/amazon/aws/aws_ec2_inventory.html
https://clarusway.com/ansible-working-with-dynamic-inventory-using-aws-ec2-plugin/

---
# Fetch all hosts in us-east-1, the hostname is the public DNS if it exists, otherwise the private IP address
plugin: 'aws_ec2'
regions:
  - 'us-east-1'
filters:
  # All instances with their `Project` tag set to match the wildcard
  tag:Project:
    - '*XXXX*'
  # Add only instances managed by Terraform
  tag:ManagedBy: '*Terraform*'
  # Add only Preprod instances
  tag:Environment: 'Preprod'
# Note: I(hostnames) sets the inventory_hostname. To modify ansible_host without modifying
# inventory_hostname use compose (see example below).
hostnames:
  - 'private-ip-address'
  - 'public-ip-address'
  - 'dns-name'
# keyed_groups may be used to create custom groups
strict: false
keyed_groups:
  # Add hosts to tag_Name_Value groups for each Name/Value tag pair
  - prefix: 'tag'
    key: 'tags'
  # Create a group for each value of the AnsibleGroup tag
  - key: 'tags.AnsibleGroup.split(",")'
    separator: ''
# Set individual variables with compose
compose:
  # Use the private IP address to connect to the host
  # (note: this does not modify inventory_hostname, which is set via I(hostnames))
  ansible_host: 'public_ip_address is defined | ternary(public_ip_address, private_ip_address)'

Error:

TASK [ping] *****************************************************************************************************************************************************************
fatal: [10.72.13.189]: FAILED! => {"msg": "failed to find the executable specified /usr/local/bin/session-manager-plugin. Please verify if the executable exists and re-try."}

@jkritzen
Author

Your local Ansible controller doesn't fulfill the requirements:
Requirements

  • The below requirements are needed on the local controller node that executes this connection.
  • The remote EC2 instance must be running the AWS Systems Manager Agent (SSM Agent).
  • The control machine must have the AWS Session Manager plugin installed.
  • The remote EC2 Linux instance must have curl installed.

You are missing the AWS Session Manager plugin:
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html#install-plugin-linux

The inventory file contains the AWS instance IDs instead of IP addresses:

[jump]
i-0d289a8b7c58b0c20

@gillg

gillg commented Jan 9, 2022

Should be fully resolved by #786 and #854

@jkritzen
Author

Thanks.
