
reactor/orchestrate race condition on salt['pillar.get'] #23373

Closed
tnypex opened this issue May 5, 2015 · 22 comments
tnypex commented May 5, 2015

Hi all,

With 2015.2.0rc2, I have the following reactor SLS that calls an orchestrate runner:

{% set fun = data.get('fun', {}) %}
{% set id = data.get('id', {}) %}
{% if fun == "test.ping" %}
orchestrate_test_{{id}}:
  runner.state.orchestrate:
    - mods: orchestration.test
    - pillar: { target_server: {{id}} }
{% endif %}

When running test.ping on multiple minions (3 in this case), the reactor SLS is rendered correctly 3 times:

2015-05-05 15:52:40,038 [salt.template                              ][DEBUG   ][25815] Rendered data from file: /srv/salt/base/reactor/jobs_ret.sls:


orchestrate_test_minion1:
  runner.state.orchestrate:
    - mods: orchestration.test
    - pillar: { target_server: minion1 }
2015-05-05 15:52:40,128 [salt.template                              ][DEBUG   ][25815] Rendered data from file: /srv/salt/base/reactor/jobs_ret.sls:


orchestrate_test_minion2:
  runner.state.orchestrate:
    - mods: orchestration.test
    - pillar: { target_server: minion2 }
2015-05-05 15:52:40,261 [salt.template                              ][DEBUG   ][25815] Rendered data from file: /srv/salt/base/reactor/jobs_ret.sls:


orchestrate_test_minion3:
  runner.state.orchestrate:
    - mods: orchestration.test
    - pillar: { target_server: minion3 }

However, the orchestrate runners will not always get the correct pillar data (sometimes an empty value, sometimes another orchestrate runner's pillar data, sometimes the expected value):

The orchestrate sls:

{% set tgs = salt['pillar.get']('target_server') %}

test:
  salt.function:
    - name: test.arg
    - tgt: {{ tgs }}
2015-05-05 15:52:45,024 [salt.template                              ][DEBUG   ][25815] Rendered data from file: /var/cache/salt/master/files/base/orchestration/test.sls:


test:
  salt.function:
    - name: test.arg
    - tgt: minion1
2015-05-05 15:52:45,025 [salt.template                              ][DEBUG   ][25815] Rendered data from file: /var/cache/salt/master/files/base/orchestration/test.sls:


test:
  salt.function:
    - name: test.arg
    - tgt: minion1
2015-05-05 15:52:45,275 [salt.template                              ][DEBUG   ][25815] Rendered data from file: /var/cache/salt/master/files/base/orchestration/test.sls:


test:
  salt.function:
    - name: test.arg
    - tgt: minion3

There are also sometimes RuntimeError: Set changed size during iteration exceptions raised, such as:

2015-05-05 15:52:48,917 [salt.loader                                ][ERROR   ][25815] Failed to read the virtual function for states: xmpp
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/salt/loader.py", line 1098, in process_virtual
    virtual = mod.__virtual__()
  File "/usr/lib/python2.6/site-packages/salt/states/xmpp.py", line 25, in __virtual__
    return 'xmpp' if 'xmpp.send_msg' in __salt__ else False
  File "/usr/lib64/python2.6/_abcoll.py", line 342, in __contains__
    self[key]
  File "/usr/lib/python2.6/site-packages/salt/utils/lazy.py", line 85, in __getitem__
    if self._load(key):
  File "/usr/lib/python2.6/site-packages/salt/loader.py", line 1024, in _load
    ret = _inner_load(mod_name)
  File "/usr/lib/python2.6/site-packages/salt/loader.py", line 1013, in _inner_load
    if self._load_module(name) and key in self._dict:
  File "/usr/lib/python2.6/site-packages/salt/loader.py", line 990, in _load_module
    Depends.enforce_dependencies(self._dict, self.tag)
  File "/usr/lib/python2.6/site-packages/salt/utils/decorators/__init__.py", line 81, in enforce_dependencies
    for module, func, fallback_function in dependent_set:
RuntimeError: Set changed size during iteration
@jfindlay jfindlay added Bug broken, incorrect, or confusing behavior Core relates to code central or existential to Salt severity-high 2nd top severity, seen by most users, causes major problems P4 Priority 4 labels May 6, 2015
@jfindlay jfindlay added this to the Approved milestone May 6, 2015

jfindlay commented May 6, 2015

@tnypex, thanks for the report.

@cachedout

My first thought here is that we might be hitting an issue with threading in the reactor, though it isn't quite clear how that might be affecting things. We'll need to carefully step through the code here to see what's happening.

@cachedout

@tnypex There should be some lines in your debug log that start with ThreadPool executing func: .... Can you paste those? They'll be right after those rendering lines you already pasted.


tnypex commented May 7, 2015

Here they are:

2015-05-05 15:52:40,066 [salt.utils.process                         ][DEBUG   ][25815] ThreadPool executing func: <bound method RunnerClient.low of <salt.runner.RunnerClient object at 0x7fbaf0c47a90>> with args:('state.orchestrate', {'mods': 'orchestration.test', 'name': 'orchestrate_test_minion1', 'pillar': OrderedDict([('target_server', 'minion1')]), 'state': 'runner', '__id__': 'orchestrate_test_minion1', '__user__': 'Reactor', '__sls__': '/srv/salt/base/reactor/jobs_ret.sls', 'order': 1, 'user': 'Reactor'}) kwargs{}
2015-05-05 15:52:40,180 [salt.utils.process                         ][DEBUG   ][25815] ThreadPool executing func: <bound method RunnerClient.low of <salt.runner.RunnerClient object at 0x7fbaf0c47a90>> with args:('state.orchestrate', {'mods': 'orchestration.test', 'name': 'orchestrate_test_minion2', 'pillar': OrderedDict([('target_server', 'minion2')]), 'state': 'runner', '__id__': 'orchestrate_test_minion2', '__user__': 'Reactor', '__sls__': '/srv/salt/base/reactor/jobs_ret.sls', 'order': 1, 'user': 'Reactor'}) kwargs{}
2015-05-05 15:52:40,333 [salt.utils.process                         ][DEBUG   ][25815] ThreadPool executing func: <bound method RunnerClient.low of <salt.runner.RunnerClient object at 0x7fbaf0c47a90>> with args:('state.orchestrate', {'mods': 'orchestration.test', 'name': 'orchestrate_test_minion3', 'pillar': OrderedDict([('target_server', 'minion3')]), 'state': 'runner', '__id__': 'orchestrate_test_minion3', '__user__': 'Reactor', '__sls__': '/srv/salt/base/reactor/jobs_ret.sls', 'order': 1, 'user': 'Reactor'}) kwargs{}


tnypex commented May 7, 2015

I don't really know what I'm doing :), but it looks like the following fixes this issue. Maybe it will ring some bells:

diff -u reactor.py_orig reactor.py
--- reactor.py_orig     2015-05-07 15:56:28.748606678 +0200
+++ reactor.py  2015-05-07 15:44:34.142703862 +0200
@@ -216,7 +216,7 @@
         if 'runner' not in self.client_cache:
             self.client_cache['runner'] = salt.runner.RunnerClient(self.opts)
         try:
-            self.pool.fire_async(self.client_cache['runner'].low, args=(fun, kwargs))
+            self.pool.fire_async(self.client_cache['runner'].async, args=(fun, kwargs))
         except SystemExit:
             log.warning('Attempt to exit in reactor by runner. Ignored')
         except Exception as exc:

rallytime pushed a commit to rallytime/salt that referenced this issue May 20, 2015
Depends.enforce_dependency() class method fires unsuccessfully. There appears
to be no synchronization within the Depends decorator class with respect to the
class-global dependency_dict, which results in incomplete population of any
loader instantiation occurring at the time of one of these exceptions.

This would mitigate the immediate effects of saltstack#23839 and saltstack#23373.
@DmitryKuzmenko DmitryKuzmenko self-assigned this Jul 9, 2015
@DmitryKuzmenko DmitryKuzmenko added the Confirmed Salt engineer has confirmed bug/feature - often including a MCVE label Sep 29, 2015
@meggiebot meggiebot modified the milestones: B 11, Approved Sep 30, 2015

basepi commented Oct 7, 2015

@jacksontj We've tracked this down to a race condition due to your addition of threading in the reactor system. @DmitryKuzmenko's evaluation is below:

The problem with the reactor race condition is an inconsistency between the loader/module design and the reactor design:

  1. The lazy loader loads modules and injects the globals __opts__ and __pillar__ into them. This global-based module configuration assumes modules will not be used in parallel inside one process with different configuration (pillar data).
  2. The reactor uses worker threads that can access the same module while expecting different pillar data inside it.

How it looks: when 5 minions respond to test.ping, the reactor starts to execute 5 jobs in 5 threads. Each thread needs to load the pillar module to call pillar.get (this is from the example in issue #23373).

The loader loads the module and injects pillar data containing the proper minion id for the thread that initiated loading. Another thread then overwrites the pillar data and both use it, so the reactor gets the same pillar data for different jobs.

One possible solution, as I see it, is to modify the reactor workers to work like the master workers: based on processes instead of threads.
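The race described above can be sketched in a few lines of plain Python (illustrative only, not Salt code): pillar data injected into a module-level global is overwritten by a sibling worker thread before the first job reads it back. The names PILLAR and run_job are invented for the sketch; the barrier just makes the unlucky interleaving deterministic.

```python
import threading

PILLAR = {}                    # stands in for the injected __pillar__ global
results = {}
barrier = threading.Barrier(3)

def run_job(minion_id):
    global PILLAR
    PILLAR = {'target_server': minion_id}  # "inject" pillar for this job
    barrier.wait()                         # let every thread write first...
    # ...so by now another thread may have replaced PILLAR
    results[minion_id] = PILLAR['target_server']

threads = [threading.Thread(target=run_job, args=(m,))
           for m in ('minion1', 'minion2', 'minion3')]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All three jobs read whichever pillar was injected last, so at most
# one of them sees its own value.
print(results)
```

This is exactly the "same pillar data for different jobs" symptom from the original report, reduced to a shared global plus worker threads.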

@jacksontj Do you have any ideas on how to make this work with pillar? We may have to revert your threading work there, or at least config-gate it.


basepi commented Oct 7, 2015

We're expecting the best workaround will actually be to just implement a multiprocessing implementation next to the threading implementation, and have a config value to choose (defaulting to multiprocessing).
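That config-gated choice can be sketched with concurrent.futures (an illustration, not Salt's actual implementation; the config key reactor_worker_pool and the low stand-in are invented here): the pool type is chosen from config, and a process pool isolates each job's module globals in its own interpreter.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import os

def low(fun, kwargs):
    # Stand-in for RunnerClient.low; in a process pool each job runs in
    # its own interpreter, so injected module globals cannot leak.
    return (os.getpid(), fun, kwargs['pillar']['target_server'])

def make_pool(opts):
    # 'reactor_worker_pool' is a hypothetical config key for illustration,
    # defaulting to multiprocessing as proposed above.
    if opts.get('reactor_worker_pool', 'process') == 'process':
        return ProcessPoolExecutor(max_workers=4)
    return ThreadPoolExecutor(max_workers=4)

if __name__ == '__main__':
    # Using the thread pool here so the demo runs anywhere; swap the
    # config value to 'process' for per-job isolation.
    with make_pool({'reactor_worker_pool': 'thread'}) as pool:
        jobs = [pool.submit(low, 'state.orchestrate',
                            {'pillar': {'target_server': m}})
                for m in ('minion1', 'minion2', 'minion3')]
        for job in jobs:
            print(job.result())
```

The trade-off mentioned later in the thread applies: a process pool changes performance characteristics (one core vs. up to N) in addition to fixing the shared-global problem.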

@jacksontj

Hmm, that would work but we'll probably need another way around this as well. With the move to tornado-- the plan is to make more things happen concurrently (using coroutines). This means that more things will be running in parallel within the same process-- not to mention the fact that we support threads (multiprocessing = False). So we'll need to figure this one out.. although this is a fun one ;)

We've actually run into this problem in a few different variants recently. The basic issue is we are injecting a bunch of metadata specific to that run into the globals() space-- which means the next caller (or in this case a concurrent caller) gets access to those as well. Ideally we'd have some sort of per-request storage space (similar to flask's request context). From a little looking around it seems that tornado already has this functionality more-or-less builtin.

I found a gist which shows something we can mess with. I will probably have some time to play around with this today. It seems to all work fine-- although it looks like it might be an API change (although I have some ideas for some magic to make it compatible)-- we'll see.

That being said-- making a processpool option should be very simple-- note that it will have different performance characteristics (today it can use 1 core, in processpool it can use up to N).


basepi commented Oct 9, 2015

I know @DmitryKuzmenko has a fix for this he's testing, but I don't know the details yet...

@jacksontj

I've actually been hacking most of the day on a RequestContext thing. It won't be complete today, but I should be able to push up some code to get some feedback.

@jacksontj

@basepi I have a branch where I'm messing around with this (https://github.com/jacksontj/salt/tree/concurrent) and it seems to be working fine. I still have to add tests-- and make it so in the reactor-- but it's making progress :)

@DmitryKuzmenko

My approach is close to @jacksontj's, but I hadn't heard about Tornado's contexts, so I implemented my own DictWrapper that maps a thread name to its actual dict data.

My PR also contains two bugfixes. One is for salt-master: it doesn't always exit properly on a break signal. The second is for the highstate outputter. The issue is reproduced with the described case: the outputter expects highstate data to contain only key-to-dict mappings, but gets {'args': [], 'kwargs': {...}}.
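A minimal sketch of what such a DictWrapper could look like (an assumption about the shape, not the PR's actual code): storage is keyed by the current thread's name, so each reactor worker thread sees its own "global" data.

```python
import threading

class DictWrapper:
    """Sketch of a dict whose backing storage is keyed by the current
    thread's name, so concurrent workers don't see each other's data."""
    def __init__(self):
        self._store = {}

    def _data(self):
        # one private dict per thread name
        return self._store.setdefault(threading.current_thread().name, {})

    def __setitem__(self, key, value):
        self._data()[key] = value

    def __getitem__(self, key):
        return self._data()[key]

    def get(self, key, default=None):
        return self._data().get(key, default)

pillar = DictWrapper()
seen = {}

def job(minion_id):
    pillar['target_server'] = minion_id   # write lands in this thread's bucket
    seen[minion_id] = pillar.get('target_server')

threads = [threading.Thread(target=job, args=(m,), name='worker-%s' % m)
           for m in ('minion1', 'minion2')]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(seen)  # each thread reads back its own value
```

Python's threading.local offers the same per-thread isolation built in; keying by thread name, as sketched here, additionally lets one inspect other workers' buckets.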

@DmitryKuzmenko

It's also a good question: if one of these approaches fixes the threading behavior, do we still need a multiprocessing reactor implementation?

jacksontj added a commit to jacksontj/salt that referenced this issue Oct 23, 2015
Inspired by saltstack#23373

The basic issue we ran into is that the loader is injecting globals directly into the global namespace. This means that these injected values are not thread or coroutine safe-- meaning we can never do more than one thing at a time. Instead of multiprocessing everything to death-- we can simply use a stack_context to nicely handle this down in the core. As far as the module authors/users are concerned nothing has changed-- but the storage behind the scenes is now per-JID. This same set of classes can easily be used to store additional data (next candidates are reactors, master MWorker tasks, etc.).
@DmitryKuzmenko DmitryKuzmenko modified the milestones: B 10, Approved Oct 26, 2015
@cachedout

So now that we have the ContextDict implementation merged, I believe this should be resolved.

@tnypex Are you willing to test this at the head of develop?

@cachedout cachedout added the fixed-pls-verify fix is linked, bug author to confirm fix label Nov 17, 2015
@cachedout

We didn't hear back here, so I'm going to go ahead and close this since it does appear to be fixed. If not, please leave a comment here and we'll happily re-open it.


tnypex commented Nov 26, 2015

Hi,

Sorry for the delay, I tested from develop but the issue is still present in orchestrate runners started by the reactor.

@basepi basepi reopened this Nov 30, 2015
@DmitryKuzmenko

@jacksontj, will you look at this, or should I take care of it?

@jacksontj

From my local testing I can't reproduce the same changed size during iteration error, but I do run into some issues getting your SLS files to run. The first one seems to be an oddity with pillar.get from jinja. For example, with the following SLS:

{% set tgs = salt['pillar.get']('target_server') %}

test:
  salt.function:
    - name: test.arg
    - tgt: {{ tgs }}

if "target_server" is not in the pillar, I'd expect it to come back as None or an empty string-- but for some reason it's coming back as an empty dict and I'm getting exceptions like:

[ERROR   ] Failed matching available minions with glob pattern: {}
Traceback (most recent call last):
  File "/home/jacksontj/src/salt/salt/utils/minions.py", line 626, in check_minions
    minions = check_func(expr, greedy)
  File "/home/jacksontj/src/salt/salt/utils/minions.py", line 193, in _check_glob_minions
    return fnmatch.filter(files, expr)
  File "/home/jacksontj/.virtualenvs/salt/lib64/python2.7/fnmatch.py", line 50, in filter
    if not pat in _cache:
TypeError: unhashable type: 'dict'

Although that looks ugly-- it's not actually what's broken. After looking some more I see that the minion is throwing an exception:

Which means it's choking on the pub job. I added some prints and it seems that the pub job looks like:

{
    'tgt_type': 'glob',
    'jid': '',
    'cmd': 'publish',
    'tgt': '*',
    'kwargs': {
        'delimiter': ': ',
        'show_timeout': True,
        'show_jid': False
    },
    'ret': '',
    'user': 'sudo_jacksontj',
    'key': '<SOMEKEY>',
    'arg': [

    ],
    'fun': 'test.ping'
}

And 'ret' is definitely not valid (it's supposed to be the returners in the pub job, not the return). After tracking that down I found that there was some variable shadowing in states.saltmod (d968d69).
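The effect of that kind of shadowing can be shown with a contrived sketch (not the actual states.saltmod code; the publish helper is invented): reusing the name ret for the state's return value clobbers the returner field in the pub job.

```python
# Contrived illustration of variable shadowing, not the real saltmod code.
def publish(fun, tgt, returners=''):
    ret = returners                        # 'ret' should carry the returner list
    ret = {'result': True, 'changes': {}}  # ...but reusing the name clobbers it
    return {'fun': fun, 'tgt': tgt, 'ret': ret}

pub = publish('test.ping', '*')
print(pub['ret'])  # a state-return dict where a returner string belonged
```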

I've opened a PR with fixes (#29397); can you try with that? And if you do still see issues, mind pasting logs/errors/tracebacks as well?

cachedout pushed a commit that referenced this issue Dec 3, 2015
@DmitryKuzmenko DmitryKuzmenko modified the milestones: B 6, B 10 Dec 14, 2015
@cachedout

Since we have a fix in and no reply, I am closing this. If it turns out not to be fixed, please comment and we will re-open this.


tnypex commented Jan 12, 2016

Hi,

Sorry, I should have been more precise about the issue that remains: on the develop branch I still experience mixed-up pillar data in orchestrate runners started by the reactor (no more tracebacks).

the reactor sls is:

{% set fun = data.get('fun', {}) %}
{% set arg = data.get('arg', {}) %}
{% set minions = data.get('minions', {}) %}


{% if fun == "test.ping" %}
{% for minion in minions %}
orchestrate_handle_test_{{minion}}:
  runner.state.orchestrate:
    - mods: orchestration.test.test
    - pillar: {target_server: {{minion}}}

{% endfor %}
{% endif %}

and the orchestrate sls:

notify_test_{{target_server}}:
  salt.function:
    - name: cmd.run
    - tgt: minion1
    - arg:
      - '/tmp/test.ksh  {{ target_server }}'
      - '/tmp/'
      - '/dev/null'
      - 'myid'

When running test.ping against multiple minions (salt '*' test.ping), the reactor SLS is correctly rendered, but the following orchestrate runners get mixed/inconsistent pillar data; full debug logs pasted below:

[DEBUG   ] LazyLoaded local_cache.prep_jid
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'_stamp': '2016-01-12T11:00:22.995250', 'minions': ['minion1', 'minion2', 'minion3', 'minion4']}
[DEBUG   ] Sending event - data = {'tgt_type': 'glob', 'jid': '20160112120022993175', 'tgt': '*', '_stamp': '2016-01-12T11:00:22.996043', 'user': 'root', 'arg': [], 'fun': 'test.ping', 'minions': ['minion1', 'minion2', 'minion3', 'minion4']}
[DEBUG   ] Gathering reactors for tag 20160112120022993175
[DEBUG   ] Gathering reactors for tag salt/job/20160112120022993175/new
[DEBUG   ] Compiling reactions for tag salt/job/20160112120022993175/new
[DEBUG   ] compile template: /srv/salt/base/reactor/jobs.sls
[INFO    ] User root Published command test.ping with jid 20160112120022993175
[DEBUG   ] Published command details {'tgt_type': 'glob', 'jid': '20160112120022993175', 'tgt': '*', 'ret': '', 'user': 'root', 'arg': [], 'fun': 'test.ping'}
[DEBUG   ] Jinja search path: '['/var/cache/salt/master/files/base']'
[PROFILE ] Time (in seconds) to render '/srv/salt/base/reactor/jobs.sls' using 'jinja' renderer: 0.0269289016724
[DEBUG   ] Rendered data from file: /srv/salt/base/reactor/jobs.sls:

orchestrate_handle_test_minion1:
  runner.state.orchestrate:
    - mods: orchestration.test.test
    - pillar: {target_server: minion1}


orchestrate_handle_test_minion2:
  runner.state.orchestrate:
    - mods: orchestration.test.test
    - pillar: {target_server: minion2}


orchestrate_handle_test_minion3:
  runner.state.orchestrate:
    - mods: orchestration.test.test
    - pillar: {target_server: minion3}


orchestrate_handle_test_minion4:
  runner.state.orchestrate:
    - mods: orchestration.test.test
    - pillar: {target_server: minion4}

[DEBUG   ] LazyLoaded config.get
[DEBUG   ] Results of YAML rendering: 
OrderedDict([('orchestrate_handle_test_minion1', OrderedDict([('runner.state.orchestrate', [OrderedDict([('mods', 'orchestration.test.test')]), OrderedDict([('pillar', OrderedDict([('target_server', 'minion1')]))])])])), ('orchestrate_handle_test_minion2', OrderedDict([('runner.state.orchestrate', [OrderedDict([('mods', 'orchestration.test.test')]), OrderedDict([('pillar', OrderedDict([('target_server', 'minion2')]))])])])), ('orchestrate_handle_test_minion3', OrderedDict([('runner.state.orchestrate', [OrderedDict([('mods', 'orchestration.test.test')]), OrderedDict([('pillar', OrderedDict([('target_server', 'minion3')]))])])])), ('orchestrate_handle_test_minion4', OrderedDict([('runner.state.orchestrate', [OrderedDict([('mods', 'orchestration.test.test')]), OrderedDict([('pillar', OrderedDict([('target_server', 'minion4')]))])])]))])
[PROFILE ] Time (in seconds) to render '/srv/salt/base/reactor/jobs.sls' using 'yaml' renderer: 0.0187299251556
[DEBUG   ] ThreadPool executing func: <bound method RunnerClient.low of <salt.runner.RunnerClient object at 0x23d8f10>> with args:('state.orchestrate', {'mods': 'orchestration.test.test', 'name': u'orchestrate_handle_test_minion1', 'pillar': OrderedDict([('target_server', 'minion1')]), 'state': 'runner', '__id__': 'orchestrate_handle_test_minion1', '__user__': 'Reactor', '__sls__': '/srv/salt/base/reactor/jobs.sls', 'order': 1, 'user': 'Reactor'}) kwargs{}
[DEBUG   ] ThreadPool executing func: <bound method RunnerClient.low of <salt.runner.RunnerClient object at 0x23d8f10>> with args:('state.orchestrate', {'mods': 'orchestration.test.test', 'name': u'orchestrate_handle_test_minion2', 'pillar': OrderedDict([('target_server', 'minion2')]), 'state': 'runner', '__id__': 'orchestrate_handle_test_minion2', '__user__': 'Reactor', '__sls__': '/srv/salt/base/reactor/jobs.sls', 'order': 1, 'user': 'Reactor'}) kwargs{}
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] ThreadPool executing func: <bound method RunnerClient.low of <salt.runner.RunnerClient object at 0x23d8f10>> with args:('state.orchestrate', {'mods': 'orchestration.test.test', 'name': u'orchestrate_handle_test_minion4', 'pillar': OrderedDict([('target_server', 'minion4')]), 'state': 'runner', '__id__': 'orchestrate_handle_test_minion4', '__user__': 'Reactor', '__sls__': '/srv/salt/base/reactor/jobs.sls', 'order': 1, 'user': 'Reactor'}) kwargs{}
[DEBUG   ] ThreadPool executing func: <bound method RunnerClient.low of <salt.runner.RunnerClient object at 0x23d8f10>> with args:('state.orchestrate', {'mods': 'orchestration.test.test', 'name': u'orchestrate_handle_test_minion3', 'pillar': OrderedDict([('target_server', 'minion3')]), 'state': 'runner', '__id__': 'orchestrate_handle_test_minion3', '__user__': 'Reactor', '__sls__': '/srv/salt/base/reactor/jobs.sls', 'order': 1, 'user': 'Reactor'}) kwargs{}
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[DEBUG   ] Reading configuration from /etc/salt/master
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[DEBUG   ] Reading configuration from /etc/salt/master
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'fun': 'runner.state.orchestrate', 'jid': '20160112120024318919', 'user': 'Reactor', '_stamp': '2016-01-12T11:00:24.326944'}
[DEBUG   ] LazyLoaded state.orchestrate
[DEBUG   ] Error loading runners.nacl: libnacl import error, perhaps missing python libnacl package
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'fun': 'runner.state.orchestrate', 'jid': '20160112120024365367', 'user': 'Reactor', '_stamp': '2016-01-12T11:00:24.385043'}
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'fun': 'runner.state.orchestrate', 'jid': '20160112120024412357', 'user': 'Reactor', '_stamp': '2016-01-12T11:00:24.428243'}
/usr/lib/python2.6/site-packages/salt/client/mixins.py:333: DeprecationWarning: kwargs must be passed inside the low under "kwargs"
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'fun': 'runner.state.orchestrate', 'jid': '20160112120024506091', 'user': 'Reactor', '_stamp': '2016-01-12T11:00:24.523897'}
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Guessing ID. The id can be explicitly set in /etc/salt/minion
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[DEBUG   ] Reading configuration from /etc/salt/master
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[DEBUG   ] Reading configuration from /etc/salt/master
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[DEBUG   ] Reading configuration from /etc/salt/master
[INFO    ] Found minion id from generate_minion_id(): saltmaster
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded state.sls
[DEBUG   ] LazyLoaded saltutil.is_running
[DEBUG   ] LazyLoaded state.sls
[DEBUG   ] LazyLoaded saltutil.is_running
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded grains.get
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded state.sls
[DEBUG   ] LazyLoaded saltutil.is_running
[DEBUG   ] LazyLoaded roots.envs
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded roots.envs
[DEBUG   ] LazyLoaded grains.get
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] Could not LazyLoad roots.init
[DEBUG   ] Updating roots fileserver cache
[DEBUG   ] LazyLoaded state.sls
[DEBUG   ] LazyLoaded saltutil.is_running
[DEBUG   ] Could not LazyLoad roots.init
[DEBUG   ] Updating roots fileserver cache
[DEBUG   ] LazyLoaded roots.envs
[DEBUG   ] LazyLoaded grains.get
[DEBUG   ] Could not LazyLoad roots.init
[DEBUG   ] Updating roots fileserver cache
[DEBUG   ] LazyLoaded roots.envs
[DEBUG   ] Could not LazyLoad roots.init
[DEBUG   ] Updating roots fileserver cache
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] In saltenv 'base', looking at rel_path 'orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/master/files/base/orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[DEBUG   ] LazyLoaded jinja.render
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://orchestration/test/test.sls'
[DEBUG   ] compile template: /var/cache/salt/master/files/base/orchestration/test/test.sls
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] Jinja search path: '['/var/cache/salt/master/files/base']'
[DEBUG   ] In saltenv 'base', looking at rel_path 'orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/master/files/base/orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[DEBUG   ] LazyLoaded pillar.get
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orchestration/test/test.sls' using 'jinja' renderer: 0.0154371261597
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://orchestration/test/test.sls'
[DEBUG   ] Rendered data from file: /var/cache/salt/master/files/base/orchestration/test/test.sls:



notify_test_minion1:
  salt.function:
    - name: cmd.run
    - tgt: minion1
    - arg:
      - '/tmp/test.ksh minion1'
      - '/tmp/'
      - '/dev/null'
      - 'myid'



[DEBUG   ] compile template: /var/cache/salt/master/files/base/orchestration/test/test.sls
[DEBUG   ] Jinja search path: '['/var/cache/salt/master/files/base']'
[DEBUG   ] LazyLoaded config.get
[DEBUG   ] Results of YAML rendering: 
OrderedDict([('notify_test_minion1', OrderedDict([('salt.function', [OrderedDict([('name', 'cmd.run')]), OrderedDict([('tgt', 'minion1')]), OrderedDict([('arg', ['/tmp/test.ksh minion1', '/tmp/', '/dev/null', 'myid'])])])]))])
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orchestration/test/test.sls' using 'jinja' renderer: 0.0174221992493
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orchestration/test/test.sls' using 'yaml' renderer: 0.0254518985748
[DEBUG   ] Rendered data from file: /var/cache/salt/master/files/base/orchestration/test/test.sls:



notify_test_minion1:
  salt.function:
    - name: cmd.run
    - tgt: minion1
    - arg:
      - '/tmp/test.ksh minion1'
      - '/tmp/'
      - '/dev/null'
      - 'myid'



[DEBUG   ] LazyLoaded config.option
[DEBUG   ] Results of YAML rendering: 
OrderedDict([('notify_test_minion1', OrderedDict([('salt.function', [OrderedDict([('name', 'cmd.run')]), OrderedDict([('tgt', 'minion1')]), OrderedDict([('arg', ['/tmp/test.ksh minion1', '/tmp/', '/dev/null', 'myid'])])])]))])
[DEBUG   ] LazyLoaded salt.function
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orchestration/test/test.sls' using 'yaml' renderer: 0.0123250484467
[INFO    ] Running state [cmd.run] at time 12:00:26.056375
[DEBUG   ] LazyLoaded salt.function
[INFO    ] Executing state salt.function for cmd.run
[INFO    ] Running state [cmd.run] at time 12:00:26.061644
[INFO    ] Executing state salt.function for cmd.run
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] LazyLoaded saltutil.cmd
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/master', 'saltmaster_master', 'tcp://127.0.0.1:4506', 'clear')
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/master', 'saltmaster_master', 'tcp://127.0.0.1:4506', 'clear')
[DEBUG   ] LazyLoaded local_cache.prep_jid
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'_stamp': '2016-01-12T11:00:26.319562', 'minions': ['minion1']}
[DEBUG   ] Sending event - data = {'tgt_type': 'glob', 'jid': '20160112120026315063', 'tgt': 'minion1', '_stamp': '2016-01-12T11:00:26.323314', 'user': 'root', 'arg': ['/tmp/test.ksh minion1', '/tmp/', '/dev/null', 'myid'], 'fun': 'cmd.run', 'minions': ['minion1']}
[DEBUG   ] Gathering reactors for tag 20160112120026315063
[DEBUG   ] Gathering reactors for tag salt/job/20160112120026315063/new
[DEBUG   ] Compiling reactions for tag salt/job/20160112120026315063/new
[INFO    ] User root Published command cmd.run with jid 20160112120026315063
[DEBUG   ] Published command details {'tgt_type': 'glob', 'jid': '20160112120026315063', 'tgt': 'minion1', 'ret': '', 'user': 'root', 'arg': ['/tmp/test.ksh minion1', '/tmp/', '/dev/null', 'myid'], 'fun': 'cmd.run'}
[DEBUG   ] compile template: /srv/salt/base/reactor/jobs.sls
[DEBUG   ] LazyLoaded local_cache.prep_jid
[DEBUG   ] Jinja search path: '['/var/cache/salt/master/files/base']'
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'_stamp': '2016-01-12T11:00:26.342095', 'minions': ['minion1']}
[DEBUG   ] LazyLoaded local_cache.get_load
[DEBUG   ] Sending event - data = {'tgt_type': 'glob', 'jid': '20160112120026336813', 'tgt': 'minion1', '_stamp': '2016-01-12T11:00:26.346361', 'user': 'root', 'arg': ['/tmp/test.ksh minion1', '/tmp/', '/dev/null', 'myid'], 'fun': 'cmd.run', 'minions': ['minion1']}
[DEBUG   ] get_iter_returns for jid 20160112120026315063 sent to set(['minion1']) will timeout at 12:00:31.345095
[INFO    ] User root Published command cmd.run with jid 20160112120026336813
[DEBUG   ] Published command details {'tgt_type': 'glob', 'jid': '20160112120026336813', 'tgt': 'minion1', 'ret': '', 'user': 'root', 'arg': ['/tmp/test.ksh minion1', '/tmp/', '/dev/null', 'myid'], 'fun': 'cmd.run'}
[PROFILE ] Time (in seconds) to render '/srv/salt/base/reactor/jobs.sls' using 'jinja' renderer: 0.0436570644379
[DEBUG   ] Rendered data from file: /srv/salt/base/reactor/jobs.sls:










[DEBUG   ] LazyLoaded local_cache.get_load
[DEBUG   ] Results of YAML rendering: 
{}
[PROFILE ] Time (in seconds) to render '/srv/salt/base/reactor/jobs.sls' using 'yaml' renderer: 0.00289511680603
[DEBUG   ] get_iter_returns for jid 20160112120026336813 sent to set(['minion1']) will timeout at 12:00:31.380916
[DEBUG   ] Gathering reactors for tag 20160112120026336813
[DEBUG   ] Gathering reactors for tag salt/job/20160112120026336813/new
[DEBUG   ] Compiling reactions for tag salt/job/20160112120026336813/new
[DEBUG   ] compile template: /srv/salt/base/reactor/jobs.sls
[DEBUG   ] Jinja search path: '['/var/cache/salt/master/files/base']'
[PROFILE ] Time (in seconds) to render '/srv/salt/base/reactor/jobs.sls' using 'jinja' renderer: 0.0241320133209
[DEBUG   ] Rendered data from file: /srv/salt/base/reactor/jobs.sls:










[DEBUG   ] Results of YAML rendering: 
{}
[PROFILE ] Time (in seconds) to render '/srv/salt/base/reactor/jobs.sls' using 'yaml' renderer: 0.00102591514587
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] In saltenv 'base', looking at rel_path 'orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/master/files/base/orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://orchestration/test/test.sls'
[DEBUG   ] compile template: /var/cache/salt/master/files/base/orchestration/test/test.sls
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] Jinja search path: '['/var/cache/salt/master/files/base']'
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded pillar.get
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orchestration/test/test.sls' using 'jinja' renderer: 0.00727510452271
[DEBUG   ] In saltenv 'base', looking at rel_path 'orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[DEBUG   ] Rendered data from file: /var/cache/salt/master/files/base/orchestration/test/test.sls:



notify_test_minion3:
  salt.function:
    - name: cmd.run
    - tgt: minion1
    - arg:
      - '/tmp/test.ksh minion3'
      - '/tmp/'
      - '/dev/null'
      - 'myid'



[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/master/files/base/orchestration/test/test.sls' to resolve 'salt://orchestration/test/test.sls'
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://orchestration/test/test.sls'
[DEBUG   ] compile template: /var/cache/salt/master/files/base/orchestration/test/test.sls
[DEBUG   ] LazyLoaded config.get
[DEBUG   ] Results of YAML rendering: 
OrderedDict([('notify_test_minion3', OrderedDict([('salt.function', [OrderedDict([('name', 'cmd.run')]), OrderedDict([('tgt', 'minion1')]), OrderedDict([('arg', ['/tmp/test.ksh minion3', '/tmp/', '/dev/null', 'myid'])])])]))])
[DEBUG   ] Jinja search path: '['/var/cache/salt/master/files/base']'
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orchestration/test/test.sls' using 'yaml' renderer: 0.0101249217987
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orchestration/test/test.sls' using 'jinja' renderer: 0.00700879096985
[DEBUG   ] LazyLoaded config.option
[DEBUG   ] Rendered data from file: /var/cache/salt/master/files/base/orchestration/test/test.sls:



notify_test_minion3:
  salt.function:
    - name: cmd.run
    - tgt: minion1
    - arg:
      - '/tmp/test.ksh minion3'
      - '/tmp/'
      - '/dev/null'
      - 'myid'
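
The tail of the log shows the race directly: the second orchestrate run renders `test.sls` as `notify_test_minion3` even though its reactor invocation passed `pillar: { target_server: minion1 }` — the concurrent run's pillar leaked into `salt['pillar.get']`. Below is a minimal, self-contained sketch of this class of hazard (concurrent renders reading one shared mutable pillar mapping); it is an illustration only, all names (`shared_pillar`, `render_shared`, `render_isolated`) are hypothetical and not Salt internals:

```python
import threading

# Stand-in for pillar state shared across concurrent orchestrate renders.
shared_pillar = {}
results = {}
barrier = threading.Barrier(2)

def render_shared(run_id, target):
    # Each run writes its own pillar into the shared mapping...
    shared_pillar['target_server'] = target
    # ...but both runs reach the render step before reading it back,
    # so whichever write landed last wins for every reader.
    barrier.wait()
    results[run_id] = shared_pillar['target_server']

threads = [
    threading.Thread(target=render_shared, args=('run1', 'minion1')),
    threading.Thread(target=render_shared, args=('run2', 'minion3')),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both renders observe the same (last-written) value, so at least
# one of them rendered with the other run's pillar.
assert results['run1'] == results['run2']

# Giving each render its own pillar copy removes the race entirely:
fixed_results = {}

def render_isolated(run_id, target):
    local_pillar = {'target_server': target}  # per-run copy, never shared
    fixed_results[run_id] = local_pillar['target_server']

threads = [
    threading.Thread(target=render_isolated, args=('run1', 'minion1')),
    threading.Thread(target=render_isolated, args=('run2', 'minion3')),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(fixed_results)
```

With isolated per-run pillar data, each orchestrate render would see only the `target_server` its own reactor invocation supplied, matching the behavior the reactor SLS in this issue expects.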


@basepi (Contributor) commented Jan 12, 2016:

@tnypex I'm thinking that since we've fixed part of this issue we should open a new issue for the remaining piece and link back to this issue, so "what's left" is clearer. Thoughts?
