Event-Driven Processing #143
Comments
This is an extremely good write-up covering both pros and cons. Very good! Though an important thing here is that everything you mention seems to be triggered and set up from the plug-in source code. Nevertheless, I think the real power comes when the user can drag and drop those dependencies in a UI (like a node graph). From a node graph, I think the Slots and Events could respectively be seen as a node's inputs and outputs, correct?
I agree, and I believe the source of the … Here's how I'd imagine it working currently.
To then load this data: 1-3 remain the same.
I can't be sure, but I'd imagine so, yes.

+1. I found, even with just a small number of validators, that chaining them … If this gets implemented, would we also be able to discard the …? Also, "chains" might not be a good term, as branching could definitely …
In theory, yes, I think so. Though I wouldn't throw out our current concepts right away as I think they are still valuable from a non-technical standpoint when considering how to structure your plug-ins.
Chaining is a good description of how it works currently! But yeah, this should allow for any arbitrary order to be defined by you, including circular and self-referencing ones. It might even lay the groundwork for something like #133, in which I'd imagine we need a continuous stream of validations to take place based on what the user is doing. I'm very excited about this and have already gotten a few ideas about how to visualize things. I'd be happy to take on suggestions about this too. We've spoken before about nodes and graphs (#41); and now we have something concrete to build it upon! What I'd like to see is practical thought about how the above proposal could be used when drawing a graph; I haven't got much experience with it and quite frankly wouldn't know where to even start.
Webhooks
Here's an idea from brainstorming with @tokejepsen; hook into events coming from an external source.

```python
pyblish.register_webhook("customEvent", "http://mystudio.ftrack.com/et19trisdsnj83u8fdhjr84y")


class MyPlugin(...):
    on = "customEvent"
```
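As a rough illustration of how such a registry could behave, here is a minimal sketch; `register_webhook` does not exist in Pyblish, and all names and the example URL are assumptions:

```python
# Hypothetical webhook registry; a real implementation would POST to
# each registered URL when the event fires.
_webhooks = {}


def register_webhook(event, url):
    """Associate an external URL with a named event."""
    _webhooks.setdefault(event, []).append(url)


def emit(event):
    """Return the URLs that would be notified for this event."""
    return _webhooks.get(event, [])


register_webhook("customEvent", "http://example.com/hook")
print(emit("customEvent"))  # ['http://example.com/hook']
```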
I've just run into this issue and just want to say that it sounds greatly useful. It would essentially turn Pyblish into an extremely strong platform, not just for publishing but also for many other pipeline tasks that need to be automated.
Glad you like it. I've been looking into various implementations of this, but have yet to find (or understand) any. Help is welcome! Here's some related reading I've encountered so far.

Maybe interesting as well: https://github.com/LumaPictures/pflow

Another link: https://github.com/honix/Pyno

Ooo, Pyno has some very nice visuals!

Had a closer look at Pyno. In short, it's an unlikely fit as-is, but interesting as reference.

Another inspirational source: https://github.com/circuits/circuits. I don't think anything will quite hit the requirements we have, so we might be looking at a custom solution.
Had a bit of a play:

```python
import inspect

from pyblish import api

connections = [
    ("PluginA.finished", "PluginB.process"),
    ("PluginB.finished", "PluginC.process"),
]


class PluginA(api.ContextPlugin):
    def emit(self, signal, connections):
        signal = "{0}.{1}".format(self.__class__.__name__, signal)

        plugins_dict = {}
        for plugin in api.registered_plugins():
            plugins_dict[plugin.__name__] = plugin

        for connection in connections:
            if signal == connection[0]:
                plugin_name, method_name = connection[1].split(".")
                plugin = plugins_dict[plugin_name]()
                members = inspect.getmembers(plugin, predicate=inspect.ismethod)
                for name, method in members:
                    if name == method_name:
                        method([])

    def process(self, context):
        print("Processing PluginA")
        self.emit("finished", connections)


class PluginB(PluginA):
    def process(self, context):
        print("Processing PluginB")
        self.emit("finished", connections)


class PluginC(PluginA):
    def process(self, context):
        print("Processing PluginC")
        self.emit("finished", connections)


api.register_plugin(PluginA)
api.register_plugin(PluginB)
api.register_plugin(PluginC)

p = PluginA()
p.process([])
```
Played some more :) The main goal here was to expose what signals are available on a plugin, and how Context and Instance plugins would function together. Currently I don't like that we have to register the signals, and emit them as well. (Obviously "finished" and other built-in signals would not need to be emitted by the user.)

```python
import inspect

from pyblish import api, logic

# These connections should be registered similar to how we register plugins.
# For a visual connector we could use https://github.com/LeGoffLoic/Nodz
registered_connections = [
    ("PluginA.finished", "PluginB.process"),
    ("PluginB.finished", "PluginC.process")
]


class Signal(object):
    def __init__(self, signal):
        # Prefix the signal name with the name of the enclosing class
        # body in which this Signal is instantiated.
        curframe = inspect.currentframe()
        calframe = inspect.getouterframes(curframe, 2)
        self.signal = "{0}.{1}".format(calframe[1][3], signal)

    def emit(self, context):
        plugins_dict = {}
        for plugin in api.registered_plugins():
            plugins_dict[plugin.__name__] = plugin

        for connection in registered_connections:
            if self.signal == connection[0]:
                plugin_name, method_name = connection[1].split(".")
                plugin = plugins_dict[plugin_name]()
                members = inspect.getmembers(
                    plugin, predicate=inspect.ismethod
                )
                for name, method in members:
                    if name == method_name:
                        # Hacked processing
                        if issubclass(plugins_dict[plugin_name], api.ContextPlugin):
                            method(context)
                        if issubclass(plugins_dict[plugin_name], api.InstancePlugin):
                            for instance in logic.instances_by_plugin(context, plugin):
                                method(instance)


class PluginA(api.ContextPlugin):
    signals = {"finished": Signal("finished")}

    def process(self, context):
        print("Processing PluginA")
        context.create_instance(name="InstanceA1")
        context.create_instance(name="InstanceA2")
        self.signals["finished"].emit(context)


class PluginB(api.InstancePlugin):
    signals = {"finished": Signal("finished")}

    def process(self, instance):
        print("Processing PluginB")
        print(instance.data["name"])
        self.signals["finished"].emit(instance.context)


class PluginC(api.ContextPlugin):
    signals = {"finished": Signal("finished")}

    def process(self, context):
        print("Processing PluginC")
        self.signals["finished"].emit(context)


api.register_plugin(PluginA)
api.register_plugin(PluginB)
api.register_plugin(PluginC)

p = PluginA()
p.process(api.Context())

print("Signals:")
for plugin in api.registered_plugins():
    print(plugin)
    print(plugin.signals)
```
Would we need "Group Events"? If we solve the dependency graph on plugins with orders, which will be a straight line of synchronous plugins executing, are there other use-cases for grouping nodes together? Maybe visually, in a node graph, it might be nice to group, but that shouldn't mean anything for how we process the plugins. Unless we mean that we want to support waiting to process a plugin until two or more connections have been received?

I don't think group event triggers are what we are looking for to support the existing CVEI workflow. The situation we have is that we want to support a CVEI workflow, but without ordering the plugin executions with a numerical variable.
And here is a way the validation could work, using only plugins as well:

Don't have much to add at the moment, except nice work and keep it up. :) For visualising and potentially editing these, there is also this:

My current issue is dealing with how a plugin can wait for all inputs to have been processed. This won't be standard practice when making plugins, because plugins normally don't need the whole context processed, but we will need this for the breakpoint plugins, i.e. …
This is what I had in mind for those group plug-ins; where a series of plug-ins associate themselves with a group such as "collectors", and then another plug-in awaits a signal from this group, rather than from any one particular plug-in.

Getting this waiting framework up and running will probably allow us to explore both options. Will need to investigate how other frameworks tackle this.

Hmm, interesting: https://github.com/baffelli/pyperator
I think this is basically what we need to solve.

```python
def A(b, c):
    return b + c


def B(x, y):
    return x + y


def C(y, z):
    return y + z


def x():
    return 1


def y():
    return 1


def z():
    return 1
```

From the above methods we need a way of describing this:
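One candidate for describing it is plain data: a mapping from each function's arguments to the upstream function that produces them. This is a sketch; the exact wiring of the elided diagram is an assumption (here `A` consumes the outputs of `B` and `C`):

```python
# Hypothetical description of the wiring as data; each entry maps a
# function's keyword arguments to the name of its upstream producer.
connections = {
    "A": {"b": "B", "c": "C"},
    "B": {"x": "x", "y": "y"},
    "C": {"y": "y", "z": "z"},
}

print(connections["A"])  # {'b': 'B', 'c': 'C'}
```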
In order to have flow-based processing, we need to solve some problems, mainly creating a DAG framework. Here is the problem I think we should try to solve:

```python
def A(y, z):
    return y * z


def B(x, y):
    return x + y


def C(y, z):
    return y + z


def x():
    return 1


def y():
    return 2


def z():
    return 3
```

We need to have a way of describing the connections between the methods, to end up executing this:

I have experimented with solving the problem here, but this does not accommodate the requirements, since it evaluates the arguments as it solves the methods, and the methods cannot be reused.
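To make the requirement concrete, here is a minimal sketch of a DAG evaluator over the functions above, with deferred evaluation (nothing runs until requested) and reusable nodes (each runs once per evaluation, via a cache). The edge wiring is an assumption, not the thread's actual experiment:

```python
def A(y, z):
    return y * z

def B(x, y):
    return x + y

def C(y, z):
    return y + z

def x():
    return 1

def y():
    return 2

def z():
    return 3

nodes = {"A": A, "B": B, "C": C, "x": x, "y": y, "z": z}

# Each entry maps a node's keyword arguments to its upstream producer.
edges = {
    "A": {"y": "B", "z": "C"},
    "B": {"x": "x", "y": "y"},
    "C": {"y": "y", "z": "z"},
}

def evaluate(name, cache=None):
    """Resolve upstream nodes recursively; arguments are only
    computed when the result is actually requested."""
    if cache is None:
        cache = {}
    if name in cache:  # each node runs once; its result is reused
        return cache[name]
    kwargs = {arg: evaluate(upstream, cache)
              for arg, upstream in edges.get(name, {}).items()}
    cache[name] = nodes[name](**kwargs)
    return cache[name]

print(evaluate("A"))  # B = 1 + 2 = 3, C = 2 + 3 = 5, A = 3 * 5 = 15
```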
Have updated the experiments here, with a solution that solves the two previous problems: deferred evaluation and reusable methods. I think the next issues to tackle are:
The current version here solves the previous two issues: keyword arguments and class methods. I did some quick prototyping of trying to use the system for Pyblish, and it worked well until getting to the previously mentioned issue; waiting for a collection of plugins to finish. Following the system's mentality of "everything is just a method", we need a way of describing a method with an unknown number of inputs. So the next issue is to support …
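If, following that mentality, fan-in is expressed as a function with variadic inputs, a minimal sketch could look like this (all names are hypothetical):

```python
# Sketch: a fan-in node is just a function taking *args, so the
# number of upstream inputs need not be known in advance.
def wait_for_all(*results):
    """Conceptually, runs only once every upstream value exists."""
    return list(results)


upstream = [lambda: 1, lambda: 2, lambda: 3]
combined = wait_for_all(*(f() for f in upstream))
print(combined)  # [1, 2, 3]
```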
I have updated the experiments here. This addresses the … Without knowing very much about event-driven processing, I'm pretty sure this system is not event driven. What it is, however, is a system for chaining plugins together in any arbitrary way, similar to a DAG. So the question is: would this be what we want, or are we looking for event-driven processing? And if we are looking for event-driven processing, what do we want out of it that chaining plugins together can't solve?
I can't answer your first question, but an event-driven framework would be able to choose which plug-in to process next based on what happens during processing.

For example, right now we have a system where if validation fails, extraction doesn't begin. In a way, that's event driven, because the subsequent plug-in is based on this one event. But that's all it is: a single hard-coded response to an expected event.

An arbitrary system would be able to listen to any event, and emit any event. Like Signals and Slots in Qt. Some plug-ins may fire events, and some may listen. But nobody has to listen, and there is never any guarantee that anyone will.

Therein also lies the danger of this approach; we can't know when the system is finished, and we can't know whether it ever will finish. You can forget about a progress bar, for example. Furthermore, it becomes impossibly difficult to debug such a system without corresponding tools to inspect why a particular route was chosen and in response to what event. Much like debugging signals in Qt, except Qt isn't relying on events being emitted or on them reaching their destinations; its primary processing loop (from my understanding) is a fixed series of events that run predictably, with optional events coming in and out of it. It doesn't hang, for example, if a particular signal is forgotten or broken. It works more akin to how Pyblish works right now: a fixed event loop, with optional events being emitted. That is, neither Qt nor Pyblish is event driven in this regard.

For a truly event-driven system to work, in order to develop and understand it, I expect we'll need a graphical view over it; something to highlight what is running and why, sort of like Nuke and how it highlights lines as they are transferring information.

Sorry, I haven't actually run most of your experiments, so maybe you've already solved most of this, in which case that is most impressive!
Yup, that was my thinking as well, and the experimental system can handle this. It's based on the same end goal: to emulate Nuke's graph or Maya's Node Editor graph with Python methods.
In that case I wouldn't put too much weight on the "event driven"-ness of your system; a regular old DAG will do. A DAG is very much solvable and predictable, much like the current linear system of orderings. Maybe open a separate issue about a DAG version of Pyblish and leave this one as-is (to rot), as it's probably not the way forward anyway.
Goal
To provide for a customisable flow of control.

Currently, the execution model in Pyblish is linear; each group of plug-ins is assigned an `order` ranging from 0-3 and is processed sequentially in this order. Additionally, validations `A` and `B` both run, but their order relative to each other is undefined; i.e. you can't know whether A will finish before B, and can therefore not define any behaviour based on it.

This proposal aims to solve that.
Table of contents
Related
Implementation
Any implementation should allow for branches of execution to be defined.
Per-plugin events
In which each plug-in may have any number of interesting events, even custom ones similar to Qt Signals and Slots.
Multiple events
There may be many events associated with any one plug-in, some built-in, some custom.
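A minimal sketch of what per-plug-in events with multiple signals could look like; the `Signal` class, the plug-in, and all event names here are hypothetical, not part of Pyblish:

```python
class Signal:
    """A minimal Qt-style signal: slots subscribe, the owner emits."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)


class ExtractAlembic:
    def __init__(self):
        self.started = Signal()              # a built-in event...
        self.finished = Signal()             # ...another built-in...
        self.cachePointsWritten = Signal()   # ...and a custom one

    def process(self):
        self.started.emit()
        self.cachePointsWritten.emit("/tmp/points.abc")
        self.finished.emit()


received = []
plugin = ExtractAlembic()
plugin.cachePointsWritten.connect(received.append)
plugin.process()
print(received)  # ['/tmp/points.abc']
```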
Group events
Groups of plug-ins may also trigger events. This will be necessary in order to retain how processing is currently triggered; i.e. Extraction comes after Validation which comes after Selection.
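One way group events could work is for the group to fire only once every member has reported in, so a downstream plug-in waits on the group rather than on any individual plug-in. A sketch, with all names hypothetical:

```python
class GroupSignal:
    """Fires once all member plug-ins have finished."""
    def __init__(self, members):
        self.pending = set(members)
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def member_finished(self, name):
        self.pending.discard(name)
        if not self.pending:  # every member done; fire the group event
            for slot in self._slots:
                slot()


fired = []
selection = GroupSignal(["SelectA", "SelectB"])
selection.connect(lambda: fired.append("validation may begin"))

selection.member_finished("SelectA")   # still waiting on SelectB
selection.member_finished("SelectB")   # group complete; event fires
print(fired)  # ['validation may begin']
```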
As you can see, linking processes via events also opens up interesting graphical potential!
Qt
Inspired by Qt, we could have a look at defining a simple observer pattern.
One disadvantage here is that plug-ins triggering events must be known before other plug-ins may subscribe to them; something which may not be possible due to Pyblish being driven by plug-ins that are unknown until after discovery.
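The disadvantage shows up in a direct observer-pattern sketch: to subscribe, you need the emitting object in hand, which you may not have before discovery. All names here are hypothetical:

```python
subscribers = {}


def subscribe(emitter, event, callback):
    # Note: the emitter object itself is required at subscription
    # time, which is the limitation described above.
    subscribers.setdefault((id(emitter), event), []).append(callback)


def publish(emitter, event, payload=None):
    for callback in subscribers.get((id(emitter), event), []):
        callback(payload)


class CollectInstances:
    pass


collector = CollectInstances()

log = []
subscribe(collector, "finished", lambda _: log.append("validator triggered"))
publish(collector, "finished")
print(log)  # ['validator triggered']
```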
CSS3
Taking inspiration from JavaScript and CSS3, we can work around the above disadvantage by resolving dependencies during plug-in discovery.
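A sketch of the idea with a hypothetical plug-in: because the subscription is a plain string, it can be resolved during discovery, even before the emitting plug-in is known:

```python
# Hypothetical plug-in; the `on` value is just a string naming an
# event, so no reference to CollectInstances is needed at write time.
class ValidateNamingConvention:
    on = "CollectInstances.finished"


print(ValidateNamingConvention.on)  # CollectInstances.finished
```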
An attribute called `on` is added to a plug-in, which determines upon which event it is to be processed.

Additional potential
In addition to plain names, we can also take advantage of CSS3 pseudo-selectors.

In which `name` is the name of a particular event or object, `class` represents a component of said object, and `element` represents a particular state; either `before` or `after`. Terminology borrowed from CSS3 pseudo-classes and pseudo-elements.
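The `name.class:element` string could be parsed with a small pattern; this is a sketch, and the grammar is an assumption based on the description above:

```python
import re

# Hypothetical parser for the proposed "name.class:element" syntax.
PATTERN = re.compile(
    r"^(?P<name>\w+)"            # event or object name
    r"(?:\.(?P<cls>\w+))?"       # optional pseudo-class (a component)
    r"(?::(?P<element>before|after))?$"  # optional pseudo-element (state)
)

match = PATTERN.match("Collector.instance:after")
print(match.groupdict())
# {'name': 'Collector', 'cls': 'instance', 'element': 'after'}
```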
Built-in Events
Some events may be built-in, such as `finished` and `failed`, along with system events such as `logged` or `written`.

Custom Event Handling
Specifying that a plug-in is to be triggered `onFinished` by another plug-in means to trigger the plug-in's `process()` function.

But there may be other events of interest; here's how it may look when handling other types of events, some triggered by more specific events, some by a GUI such as Pyblish QML, and others by you.
Syntax
Each plug-in provides an attribute `on` containing the exact name of an event.

However, as an event may require further definition, additional data requires a well-defined syntax, so as to provide for both built-in and custom events:

- separate variable
- dot-notation
- pseudo-class
- pseudo-class and pseudo-element

Pseudo-elements add additional syntactical possibilities.
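The four candidates side by side, as hypothetical class attributes (none of these names exist in Pyblish):

```python
# 1. Separate variables: the event and its source kept apart.
class ViaSeparateVariables:
    on = "finished"
    on_source = "CollectInstances"   # hypothetical companion attribute


# 2. Dot-notation: source and event in one string.
class ViaDotNotation:
    on = "CollectInstances.finished"


# 3. Pseudo-class: narrow down to a component of the source.
class ViaPseudoClass:
    on = "CollectInstances.instance"


# 4. Pseudo-class and pseudo-element: add a state, before or after.
class ViaPseudoElements:
    on = "CollectInstances.instance:after"


print(ViaPseudoElements.on)  # CollectInstances.instance:after
```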
See pseudo-classes and pseudo-elements for reference.

Repair and Feedback
Events could potentially replace current and proposed methods of repairing and providing interactive feedback, as mentioned here.
Explicit handling
Implicit handling
A note on backwards compatibility
An event corresponding to how processes are triggered currently is provided as default, thus not breaking backwards compatibility.
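One way the default could be derived is from the existing numeric order, so plug-ins without an explicit `on` keep today's sequential behaviour. A sketch; the fallback scheme and event names are assumptions:

```python
# Hypothetical fallback: a plug-in without an explicit `on` derives
# one from its numeric order, emulating the current linear model.
def resolve_event(plugin):
    explicit = getattr(plugin, "on", None)
    if explicit:
        return explicit      # an explicit `on` wins over the fallback
    return "order.{0}.finished".format(plugin.order - 1)


class Validate:
    order = 1                # runs once everything at order 0 finishes


class Extract:
    order = 2
    on = "Validate.finished"  # opted in to the event-driven syntax


print(resolve_event(Validate))  # order.0.finished
```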
Augmenting the process with customised behaviour that was previously impossible.
Conclusion
Triggering by events can be very powerful, but can also be the cause of difficult-to-find bugs. If we were able to visualise the network of triggers by means of a node graph, say, then we would be able to get a clearer understanding of a simple chain of events, whilst also being able to design larger and more interesting networks.
For example, here.
We can roughly see the order in which the plug-ins will be processed.
But beyond that, there isn't much we can get; not even from inspecting each individual file, as they aren't capable of expressing order beyond what is currently in this original design.
With an event-driven paradigm, we would be able to instead look at it like this.
It opens up both functional and graphical possibilities, whilst at the same time eliminating the current fixed-function pipeline-style of processing and leaving room for full-on event-driven programming!
This, together with Dependency Injection and In-memory plug-ins will make Pyblish both easier to learn and more lucrative for advanced uses.
References
Event-driven programming is common in JavaScript when used together with web design, in which a webpage responds to external events via subscription..
..or by attaching an event handler.
In the former case, structure is de-coupled from behaviour whereas in the latter, behaviour is embedded into the structure; one favouring readability whereas the other favours separation of concerns.
Links