Add asg refresh and info modules #425
Conversation
recheck
Thank you, @pabelanger, I'll definitely re-check on my end. However, upon a rebuild, most of the errors I'm seeing are "max retries" failures on other, existing modules.
test /rebuild_failed
Force-pushed 89cd027 to 467a287
RequestLimitExceeded means that you're hitting the API limits. We have a decorator which should help with that. Short example:
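A minimal sketch of the AWSRetry pattern being referred to, assuming the amazon.aws module_utils layout of the time (the import paths, retry count, and argument spec here are illustrative, not the reviewer's exact example):

```python
from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry


def main():
    # Illustrative argument spec; a real module would define its full set of options here.
    module = AnsibleAWSModule(argument_spec=dict(name=dict(type='str', required=True)))

    # retry_decorator wraps the boto3 client so any call made with aws_retry=True
    # is retried with jittered exponential backoff when AWS throttles the request
    # (e.g. RequestLimitExceeded), instead of failing immediately.
    client = module.client('autoscaling', retry_decorator=AWSRetry.jittered_backoff(retries=10))

    result = client.describe_auto_scaling_groups(
        AutoScalingGroupNames=[module.params['name']],
        aws_retry=True,
    )
    module.exit_json(changed=False, auto_scaling_groups=result['AutoScalingGroups'])


if __name__ == '__main__':
    main()
```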
We've generally found that the default boto3 retry behaviour is insufficient. If you need to use pagination, there's more information available on that as well.
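The pagination reference itself isn't reproduced here; as a rough illustration, a common pattern in the collection is to combine AWSRetry with a boto3 paginator (the helper name and retry count below are illustrative):

```python
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry


# Illustrative helper: fetch every page of describe_auto_scaling_groups results,
# retrying throttled calls; build_full_result() merges all pages into one dict.
@AWSRetry.jittered_backoff(retries=10)
def describe_all_asgs(client, **params):
    paginator = client.get_paginator('describe_auto_scaling_groups')
    return paginator.paginate(**params).build_full_result()
```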
Force-pushed b37ac83 to 1a7385a
Thanks for taking the time to write these modules. Some suggestions inline.
Additionally, some of your documentation is a little terse (or possibly includes copy-and-paste artifacts). Please see
https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_documenting.html#documentation-block for more information about what's expected in the documentation blocks.
Force-pushed 059a6e9 to 1a7385a
@tremble Thank you very much for your feedback on these two modules. Apologies, I initially only saw your comment about the 'retries' issue, and only addressed that in my recent rebuilds. I'll take your other suggestions into account as well. I'm also seeing a lot of other folks' builds failing in intermittent ways, and wasn't sure whether Shippable is running up against its own AWS account's API limits. In my own builds, I've also noted instances in Shippable where a simple true/false assertion task would take over five minutes, or where a Docker teardown would cause a build to hang for a while (in addition to the retries-related issues for existing modules).
When the tests are run in Shippable, they're running in an AWS account managed by the Ansible team. ansible-test tries to be clever and only run tests against code that's changed, but when changes are made to module_utils or things like the groups list, this triggers a full CI run. That in turn can trigger around 24 parallel sets of tests and starts bumping up against the AWS account's API limits. In general we see four types of flake.
Yeah, that's just the downside of running in an environment we don't fully control. In a perfect world the CI nodes would have the Docker containers pre-downloaded. The Ansible Cloud team has plans to move to a Zuul instance controlled entirely by the wider Ansible team (see also the big warning at the top of the Shippable pages: Shippable is being decommissioned).
Thank you for all the details, @tremble. Good to know some of the limitations of the current testing setup. I've been going over your code-review feedback and am working on getting a passing build.
Force-pushed dd081bb to 9076255
Hi @tremble. Thank you again for your code review and feedback. I thought your comments/fixes were pretty straightforward, so I implemented them and then tried to rebuild several times over the course of the last week. However, I'm still getting some unexpected failures, which all seem to be related to retries in tests for existing modules, or simply to some sections timing out past the 45-minute mark (e.g. https://app.shippable.com/github/ansible-collections/community.aws/runs/1708/summary/console). I'd still love to get these two modules into the collection, but I'm not sure what else I should do on my end to facilitate the process. I don't want to keep clogging up the Shippable builds with my repeated failures, hoping one will make it under the 45-minute mark. However, I'm not sure what needs to be done in relation to these new asg-refresh modules or their related tests, as all the failures seem to be coming from other, existing modules (unless I'm mistaken?).
Force-pushed b0126fc to 39d0cd9
Force-pushed 39d0cd9 to 2703899
@tremble My apologies, I'm just coming back to this code after some time. I think I may have also made a mistake in trying to rebuild without replying to each code change you mentioned in your "changes requested" review; I'd just changed the code itself, and I'm not sure whether that was sufficient. Anyhow, I'm still getting some intermittent failures, and wasn't sure if it would be easier to close this PR and open a new, cleaner one. Please let me know what you think would be best.
Hi @tremble. I see my builds are still failing. Not sure if I have a regression. Also, due to my mistakes in the review process, I think it's better if I clean up my code a bit and open a new PR. Thank you again for the review, I will incorporate it into a cleaner, new PR. Hopefully that will pass the build. |
SUMMARY
Adds the ec2_asg_instance_refresh module and the related ec2_asg_instance_refreshes_info module. These modules are intended to be used together: the first starts or cancels an EC2 Auto Scaling group (ASG) instance refresh, and the second tracks the subsequent progress using the returned InstanceRefreshId. The *_info module can also retrieve multiple pages of refresh history using the NextToken.
ISSUE TYPE
New Module Pull Request
COMPONENT NAME
ec2_asg_instance_refresh
ec2_asg_instance_refreshes_info
ADDITIONAL INFORMATION
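For context, a rough sketch of the underlying boto3 Auto Scaling calls that modules like these would wrap (not taken from this PR; the ASG name, strategy, and polling loop below are illustrative). The manual NextToken loop corresponds to the paging behaviour described for the *_info module in the SUMMARY.

```python
import boto3

client = boto3.client('autoscaling')
asg_name = 'my-asg'  # illustrative ASG name

# Start a refresh; the returned InstanceRefreshId can be used to track progress.
refresh_id = client.start_instance_refresh(
    AutoScalingGroupName=asg_name,
    Strategy='Rolling',
)['InstanceRefreshId']

# Collect the refresh history, following NextToken to fetch every page.
refreshes = []
params = {'AutoScalingGroupName': asg_name}
while True:
    page = client.describe_instance_refreshes(**params)
    refreshes.extend(page['InstanceRefreshes'])
    if 'NextToken' not in page:
        break
    params['NextToken'] = page['NextToken']

# An in-progress refresh can be cancelled by ASG name.
client.cancel_instance_refresh(AutoScalingGroupName=asg_name)
```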