mottosso edited this page Sep 28, 2014 · 4 revisions

Building on previous terminology, I'll rename a few things to make this workflow easier to explain, and add some extra definitions:

  • Filter: a Node.
  • Instance: a collection of data of a single type, which I'll refer to as an Attribute.
  • Graph: The overall scene/layout of a single publishing situation which the user can set up through the UI (or code) and save out to disk to reuse it throughout production.
  • GraphScene: The Graph stored on disk.
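As a minimal sketch of how these definitions relate (the class and attribute names here are my own illustrations, not an established API):

```python
# Hypothetical sketch of the terminology above; names are illustrative only.

class Instance(object):
    """A collection of data of a single type (an 'Attribute')."""
    def __init__(self, name, family):
        self.name = name
        self.family = family  # the single type this instance carries


class Filter(object):
    """A node in the Graph; subclasses implement process()."""
    def process(self, context):
        raise NotImplementedError


class Graph(object):
    """The overall layout of a publishing situation: an ordered set of Filters."""
    def __init__(self, filters=None):
        self.filters = list(filters or [])

    def run(self, context):
        # Feed each node's output into the next.
        for node in self.filters:
            context = node.process(context)
        return context
```

A `GraphScene` would then simply be this `Graph` serialised to disk.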

### References

A good reference for how this could work can be seen in this coral pipeline demo, especially since it's used in a relatively similar context.

Other node editors are good references for how the graph could behave, such as compositing in Nuke/Fusion, the node editor in Maya/Houdini, or ICE in Softimage.

Also, Depends seems to make good use of a node graph.

### Workflow

To set up Pyblish for your new publishing situation, you'll have to define the Graph that'll be run to Select, Validate and Extract. For example, this could be a simplified situation:

    Graph
      Selector:  Select Meshes
      Validator: Check Meshes
      Extractor: Export Meshes

After testing the Graph, we'll save it to disk as modeling.json. The next time we want to perform that series of Selections, Validations and Extractions, we can open the graph and run it.
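A save/load round-trip for such a Graph could be sketched like this; the JSON layout and plugin names are assumptions for illustration, not an existing Pyblish format:

```python
import json

# Hypothetical on-disk layout for modeling.json; the schema is illustrative.
graph = {
    "name": "modeling",
    "nodes": [
        {"type": "Selector",  "plugin": "SelectMeshes"},
        {"type": "Validator", "plugin": "CheckMeshes"},
        {"type": "Extractor", "plugin": "ExportMeshes"},
    ],
}

with open("modeling.json", "w") as f:
    json.dump(graph, f, indent=2)

# Later: re-open the GraphScene and run its nodes in order.
with open("modeling.json") as f:
    loaded = json.load(f)

for node in loaded["nodes"]:
    print("running %s (%s)" % (node["plugin"], node["type"]))
```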

### Encapsulation/Localisation

> But then again we can't entirely escape host-specific code, so it will just be a matter of where to take that hit.

Yeah, I think so too.

This is how I see it - the host is going to be involved no matter what, and it can either infect each step, like it is now:

                         ______  
                        |      |
                        | Host |
                        |______|
                           |                                  
     ______________________|____________________________
    |                  |                 |              |
 ___v______      ______v_____      ______v_____      ___v_____ 
|          |    |            |    |            |    |         |
| Selector |--->| Validation |--->| Extraction |--->| Conform |
|__________|    |____________|    |____________|    |_________|

Or its interaction with Publish could start and end with Selection.

    ______  
   |      |
   | Host |
   |______|
      |
      |
      |
 _____v____      ____________      ____________      _________ 
|          |    |            |    |            |    |         |
| Selector |--->| Validation |--->| Extraction |--->| Conform |
|__________|    |____________|    |____________|    |_________|

The latter is of course an ideal, and probably less practical, but I would at least aim for it.
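The second diagram could be sketched as follows. The function and plugin names are hypothetical; the point is only that host calls stop at selection, and everything downstream operates purely on the Context it was handed:

```python
# Hypothetical sketch: only the Selector talks to the host.

def select(host_api):
    """The only step that touches the host."""
    return {"instances": host_api()}

def validate(context):
    assert context["instances"], "nothing was selected"
    return context

def extract(context):
    context["extracted"] = list(context["instances"])
    return context

def conform(context):
    context["conformed"] = True
    return context

# Simulate a host by passing a plain function in place of a real API.
context = select(lambda: ["meshA", "meshB"])
for step in (validate, extract, conform):
    context = step(context)
```

Because `validate`, `extract` and `conform` never see the host, they can in principle run anywhere, which is the appeal of this layout.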

### Definitions

From Issue #50

I realised that there is some room for ambiguity about the number of connections per node. For example, having a single output doesn't necessarily mean it can't be plugged into many inputs; a single output can still facilitate branching.

Let me explain.

#### Single In, Single Out

*(image: single-in-single-out)*

This, similar to the SOP context within Houdini, or the majority of nodes in Nuke, only allows a single input and a single output, but the output can be plugged into multiple inputs on other nodes.

To us, this could mean plugging the output of SelectObjectSet into ValidateNamingConvention, and that into ExtractAsMa. They would each take what they give: a Context. This would certainly be a convenient and easy-to-read layout.

In Python, it could look like this:

```python
def single_input(value):
    """A single-in, single-out node."""
    return value + 1
```
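Chaining a few such single-in, single-out nodes then becomes ordinary function composition. The node names below mirror the hypothetical ones mentioned above:

```python
# Each hypothetical node takes a Context and returns a (possibly modified) one.

def select_object_set(context):
    context["instances"] = ["pCube1", "pSphere1"]
    return context

def validate_naming_convention(context):
    for name in context["instances"]:
        assert name[0].islower(), "%s breaks the convention" % name
    return context

def extract_as_ma(context):
    context["output"] = "export.ma"
    return context

result = extract_as_ma(validate_naming_convention(select_object_set({})))
```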

#### Multi In, Single Out

*(image: multi-in-single-out)*

Similar to the above, but allowing multiple inputs. Merge is a good example of where this is useful.

```python
def merge(a, b):
    """A multi-in, single-out node."""
    return a + b
```

Off the top of my head, I'm unable to see any of our nodes being mergeable, @BigRoy what are your thoughts on this?
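If a merge of two Contexts ever did make sense, it would presumably be a union of their instances. A speculative sketch, assuming instances are hashable:

```python
def merge_contexts(a, b):
    """Hypothetical Merge node: union of two Contexts' instances,
    preserving order and dropping duplicates."""
    seen = set()
    merged = []
    for instance in a["instances"] + b["instances"]:
        if instance not in seen:
            seen.add(instance)
            merged.append(instance)
    return {"instances": merged}
```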

#### Single In, Multi Out

*(image: single-in-multi-out)*

Now we're getting complicated. Consider the equation x + y + z = a. It takes three inputs - x, y and z - and produces a single output - a. Then consider the function:

```python
def add(x, y, z):
    return x + y + z
```

Again, three inputs and one output. This is probably what we're most familiar with.

Multiple outputs on the other hand:

```python
def advanced_func(x):
    # Two outputs derived from one input.
    y, z = x + 1, x * 2
    return y, z
```

To be honest, I'm having trouble imagining we'd ever get into a position where this is necessary. Maya does it, so it's certainly not unheard of. But it is rare.
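One place multiple outputs could conceivably appear for us is a node that splits a Context into passing and failing instances, feeding each into a different branch. A speculative sketch:

```python
def split_by_validation(context, is_valid):
    """Hypothetical two-output node: one Context of valid instances,
    one of invalid ones."""
    valid = {"instances": [i for i in context["instances"] if is_valid(i)]}
    invalid = {"instances": [i for i in context["instances"] if not is_valid(i)]}
    return valid, invalid
```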

#### Multi In, Multi Out

*(image: multi-in-multi-out)*

Similar to the above, but probably more common and the complexity added by multiple inputs is slight.


To your points.

  • Support for branching in the graph.

Branching would be a really great feature to have I think, and is possibly the thing separating nodal workflows, like Maya, from linear workflows, like After Effects. I think branching should be possible with any of these connectivity options.

  • Order of processing is very clear (best option is likely depth-first)

Interesting choice of depth-first, I would actually go the other way and say breadth-first. Consider the following graph:

                         -- ValidateA --
                        /               \
       SelectInstances ----> ValidateB ----> ExtractAsMa
                        \               /
                         -- ValidateC --

Depth-first would mean running SelectInstances, then ValidateA, then ExtractAsMa. I think we would expect all validations to complete before running extraction.
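Breadth-first evaluation of that graph can be sketched with a queue; the adjacency dict below encodes the diagram above:

```python
from collections import deque

# Adjacency list for the graph in the diagram above.
graph = {
    "SelectInstances": ["ValidateA", "ValidateB", "ValidateC"],
    "ValidateA": ["ExtractAsMa"],
    "ValidateB": ["ExtractAsMa"],
    "ValidateC": ["ExtractAsMa"],
    "ExtractAsMa": [],
}

def breadth_first(graph, start):
    """Visit nodes level by level, so all validators run before
    the extractor they feed into."""
    order, visited = [], {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph[node]:
            if child not in visited:
                visited.add(child)
                queue.append(child)
    return order

order = breadth_first(graph, "SelectInstances")
# order: SelectInstances, ValidateA, ValidateB, ValidateC, ExtractAsMa
```

Note that a real implementation would also need to wait until all of a node's inputs have finished before running it; for this particular graph, plain breadth-first happens to give that order.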

  • The Context would get a deep copy per branch used to further operate with.

That's an interesting point. I imagined the context to remain the same shared object throughout, but deep copying is probably unavoidable. Consider the following graph:

                         -- ValidateA --
                        /               \
       SelectInstances ----> ValidateB ----> ExtractAsMa
                        \
                         -- FilterSelection --> ValidateC --> ExtractAsObj

If FilterSelection alters the Context, say by removing a few instances, then it would have a side effect on the context as it entered ExtractAsMa. Thus, it would need its own copy of the context.

Is it possible that each node will have to get their own individual deep-copy?
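A per-branch deep copy could be as simple as `copy.deepcopy` at each fork. A sketch, assuming the Context is plain Python data and the instance names are illustrative:

```python
import copy

context = {"instances": ["meshA", "meshB", "camA"]}

# Fork: each branch gets its own copy, so FilterSelection's removal
# cannot leak into the branch that feeds ExtractAsMa.
branch_ma = copy.deepcopy(context)
branch_obj = copy.deepcopy(context)

# Hypothetical FilterSelection: drop cameras on the .obj branch only.
branch_obj["instances"] = [i for i in branch_obj["instances"]
                           if not i.startswith("cam")]
```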

  • Separating 'data' in the Context becomes unclear by just looking at the graph. For example: one (selector) node outputs the meshes (a list of objects) and another (selector) node outputs the cameras (also a list of objects). Both are lists of objects, so how do we know, further down in the graph, which list of objects a node operates on within the Context? We'd need to add dropdown menus (comboboxes) so we can select one of the created inputs that exist on the Context.

This may not necessarily be true. If two selectors follow each other, I think it would be reasonable to expect the output of the last node to reflect both operations, thus containing both cameras and meshes. Did I get this right?
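Accumulating selectors could be sketched like this, where each selector appends differently-labelled instances rather than replacing the previous ones; the `family` key is an assumption of mine:

```python
def select_meshes(context):
    context.setdefault("instances", []).extend(
        [{"name": "pCube1", "family": "mesh"},
         {"name": "pSphere1", "family": "mesh"}])
    return context

def select_cameras(context):
    context.setdefault("instances", []).extend(
        [{"name": "persp", "family": "camera"}])
    return context

# Chaining the two selectors yields a Context holding both families;
# downstream nodes filter by family instead of guessing which list is which.
context = select_cameras(select_meshes({}))
meshes = [i for i in context["instances"] if i["family"] == "mesh"]
```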
