Load issue: pillar failed to render #51575

Closed · cesarl333 opened this issue Feb 11, 2019 · 2 comments
@cesarl333

Description of Issue/Question

Using salt-cloud to deploy VMware VMs, I have a reactor on /salt/cloud/*/created to finish the configuration.
Because of VLAN requirements, I have to move the VM's network adapter to another portgroup after deploying it in an initial one. For security reasons, my cloud master isn't the target master for every minion.
From my point of view, orchestration is necessary to allow requisites and to make sure that my network and salt-minion configuration states are applied before the network adapter is moved and the VM is powered off.
Then, when the VM powers on, the new network and minion configs are loaded and the VM is connected to the target VLAN, so the minion connects to the dedicated syndic.
So my reactor runs a runner.state.orchestrate which performs the sequence described above.
It works great if I deploy my VMs one by one.
Unfortunately, when I deploy many VMs simultaneously, the orchestrate fails for some of them due to a pillar rendering issue: Jinja variables set in my top.sls reportedly cannot be imported into other pillar files because they are not exported.
It looks like a load issue caused by an excessive number of pillar rendering requests (I say that because the same configuration works fine when the VMs are deployed one by one).
I don't have any issue when applying states, even globally.
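
As an illustration of the ordering I need, here is a minimal sketch with hypothetical step names (my real orchestrate file is shown under Setup below; system.poweroff just stands in for the final power-off step):

{# Illustration only: hypothetical step names showing the requisite
   ordering; the real orchestrate file is shown under Setup #}
{% set VM_Name = data['data']['name'] %}

configure_minion:
  salt.state:
    - tgt: {{ VM_Name }}
    - sls:
      - salt.config

power_off_vm:
  salt.function:
    - name: system.poweroff
    - tgt: {{ VM_Name }}
    - require:
      - salt: configure_minion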
I wonder if there is a way to:

  • retry pillar rendering?
  • retry applying a state inside the orchestrate (see the retry sketch after this list)?
  • or, better, avoid such rendering errors altogether?
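
On the retry question: Salt states accept a global retry argument, so something like the following might at least retry the failed state run (a sketch I haven't validated, assuming retry also applies to salt.state inside an orchestrate):

{# Sketch: the deploy_salt step from deploy_bootstrap.sls with a retry
   block added; VM_Name is set at the top of that file (see Setup) #}
deploy_salt:
  salt.state:
    - tgt: {{ VM_Name }}
    - sls:
      - salt.config
    - retry:
        attempts: 3
        interval: 30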

Setup

Reactor reactor.sls:

reactor_bootstrap:
  runner.state.orchestrate:
    - mods: deploy_bootstrap
    - pillar:
        event_data: {{ data | json() }}

Orchestrate deploy_bootstrap.sls:

{% set VM_Name = data['data']['name'] %}
deploy_salt:
  salt.state:
    - tgt: {{ VM_Name }}
    - sls:
      - salt.config
[...]

State salt/config.sls:

/etc/salt/minion.d/config.conf:
  file.managed:
    - source: salt://salt/files/config.conf
    - template: jinja
    - mode: 0644
    - user: root
    - group: root
    - makedirs: True

File salt://salt/files/config.conf:

master: {% for server in pillar.salt.config.vlan_master %}
  - {{ server }}
{%- endfor %}
master_type: str
retry_dns: 0

Pillar top.sls:

{% set VM_Name = grains['id'].split('.')[0] %}
base:
  '*':
    - salt

Pillar salt/config.sls:

{%- from "top.sls" import VM_Name %}
[... generates pillar.salt.config.vlan_master based on VM_Name]
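
One idea to avoid the cross-file import entirely, which would sidestep the rendering error (an untested sketch; the vlan_master value below is hypothetical):

{# Recompute VM_Name locally instead of importing it from top.sls #}
{% set VM_Name = grains['id'].split('.')[0] %}
salt:
  config:
    vlan_master:
      - syndic-{{ VM_Name }}.example.com    {# hypothetical value #}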

Steps to Reproduce Issue

Export a Jinja variable in top.sls on which other pillar files depend.
Call a salt.state targeting the minion from an orchestrate that is launched many times simultaneously.

2019-02-07 16:58:27,324 [salt.pillar :741 ][CRITICAL][854] Rendering SLS 'salt.config' failed, render error:
Jinja variable the template 'top.sls' (imported on line 1) does not export the requested name 'VM_Name'
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/pillar/__init__.py", line 736, in render_pstate
    **defaults)
  File "/usr/lib/python2.7/dist-packages/salt/template.py", line 93, in compile_template
    ret = render(input_data, saltenv, sls, **render_kwargs)
  File "/usr/lib/python2.7/dist-packages/salt/renderers/jinja.py", line 70, in render
    **kws)
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 170, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 399, in render_jinja_tmpl
    buf=tmplstr)
SaltRenderError: Jinja variable the template 'top.sls' (imported on line 1) does not export the requested name 'VM_Name'

Versions Report

Master:

Salt Version:
Salt: 2018.3.2

Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 2.2
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.9.4
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.2
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.9 (default, Jun 29 2016, 13:08:31)
python-gnupg: 0.3.6
PyYAML: 3.11
PyZMQ: 14.4.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.0.5

System Versions:
dist: debian 8.10
locale: UTF-8
machine: x86_64
release: 3.16.0-5-amd64
system: Linux
version: debian 8.10

Minion:

Salt Version:
Salt: 2018.3.3

Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 2.2
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.9.4
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.2
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.9 (default, Jun 29 2016, 13:08:31)
python-gnupg: 0.3.6
PyYAML: 3.11
PyZMQ: 14.4.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.0.5

System Versions:
dist: debian 8.10
locale: UTF-8
machine: x86_64
release: 3.16.0-5-amd64
system: Linux
version: debian 8.10

@Ch3LL
Contributor

Ch3LL commented Feb 11, 2019

Looks like this is a duplicate of #30353 and #23373, and should most likely be fixed by #50655.

@Ch3LL added the Duplicate (duplicate of another issue or PR - will be closed) and info-needed (waiting for more info) labels Feb 11, 2019
@Ch3LL Ch3LL added this to the Blocked milestone Feb 11, 2019
@cesarl333
Author

Hi,
Sorry for the delay; a production incident prevented me from trying this earlier.
I've applied #50655 and it seems to solve the issue.
Thanks!
