We want assurance that the structure represented by a graph is valid: that it can be converted to a control recipe and run in a plant without starvation (task 2 waits for task 1, but task 1 never runs) or stuttering (task 1 runs and, as an unintended result, task 2 runs two or more times). We leverage work already done in the area of Procedure Function Charts (PFCs).
This paper presents an algorithm for determining whether a recipe graph is correct (i.e., exhibits neither starvation nor stuttering).
A span is the series of actions that take place between a branch (the start of the span) and a label (its termination). If the label precedes the branch, the span is the body of a loop; if the branch precedes the label, the span is a series of skippable tasks. If there is a multi-target branch, the span is the sequence of operations between the branch and the farthest label.
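The classification above can be sketched in code. This is a minimal illustration, assuming a hypothetical representation in which branches and labels are ordered by position (index) in the task sequence; the `Branch` record and `classify_span` helper are ours, not part of any PFC standard.

```python
# Sketch of span classification by branch/label ordering (hypothetical model).
from dataclasses import dataclass

@dataclass
class Branch:
    position: int    # index of the branch in the task sequence
    targets: list    # positions of the labels this branch can jump to

def classify_span(branch: Branch) -> tuple:
    """Return (kind, start, end) for the span induced by `branch`.

    kind is 'loop' when the label precedes the branch (loop body),
    'skip' when the branch precedes the label (skippable tasks).
    A multi-target branch spans to its farthest label.
    """
    farthest = max(branch.targets, key=lambda pos: abs(pos - branch.position))
    if farthest < branch.position:
        return ("loop", farthest, branch.position)  # label..branch is the loop body
    return ("skip", branch.position, farthest)      # branch..label is skippable

# A branch at position 7 jumping back to a label at position 2 is a loop;
# a branch at 3 with labels at 5 and 9 spans forward to the farthest label.
print(classify_span(Branch(position=7, targets=[2])))     # ('loop', 2, 7)
print(classify_span(Branch(position=3, targets=[5, 9])))  # ('skip', 3, 9)
```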
Two spans are considered correlated if they act on the same evaluation instance of the same branch condition. (We need to think about this definition to make sure there is no chicken-and-egg problem in it.)
There are two kinds of opportunities for starvation, pertaining to failures in synchronously and asynchronously correlated spans. In a synchronously correlated span, starvation occurs when task 2 cannot run until task 1 runs, but task 1 never runs. In an asynchronously correlated span, task 1 runs ‘n’ times per model (e.g., performing an async transfer) and task 2 must also run ‘n’ times but does not, although the two are not required to run at the same time. In the fully unconstrained variation of asynchronous correlation, task 2 might run its ‘n’ times only after task 1 has finished all ‘n’ of its executions. We propose a hybrid of the two that we call semi-synchronous correlation: the transfer out and the transfer in do not have to run at the same time, but the two loops must always be on the same iteration at the same time. Limiting ourselves to synchronous and semi-synchronous correlation allows us to place a synchronization primitive at the start of the span and adopt a much simpler analytical approach in version 1.2.
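The semi-synchronous discipline can be illustrated with a small sketch: a barrier placed at the start of each span forces the two loops to enter every iteration together, even though their bodies then run at their own pace. The task names and bodies below are placeholders, not part of the proposal.

```python
# Sketch of semi-synchronous correlation via a synchronization primitive
# (a barrier) at span entry: bodies are unordered within an iteration,
# but the two loops can never drift an iteration apart.
import threading

N = 3                            # number of correlated iterations
entry = threading.Barrier(2)     # synchronization primitive at span entry
log = []                         # (task name, iteration) in execution order

def task(name):
    for i in range(N):
        entry.wait()             # both loops enter iteration i together...
        log.append((name, i))    # ...then each body runs at its own pace

t1 = threading.Thread(target=task, args=("transfer_out",))
t2 = threading.Thread(target=task, args=("transfer_in",))
t1.start(); t2.start()
t1.join(); t2.join()

# Both entries for iteration i precede every entry for iteration i+1,
# because neither thread can pass the barrier for i+1 until the other
# has finished appending for i.
for i in range(N - 1):
    last_i = max(k for k, (_, j) in enumerate(log) if j == i)
    first_next = min(k for k, (_, j) in enumerate(log) if j == i + 1)
    assert last_i < first_next
```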
We need to decide whether true asynchronous correlation is a condition we expect to encounter, and therefore want to analyze for. On the assumption that it is not, we propose an analysis mechanism that focuses on synchronous correlation (as in a liquid transfer) and semi-synchronous correlation (as in a solid transfer) by requiring that the entries to correlated spans be synchronized. The analysis mechanism must accommodate multi-span scenarios (the ion-exchanger case with swing equipment can involve four correlated loops).
In general, the validation question can be stated as:
We must know that, for all values of all conditionals, all correlated operations will run exactly zero times or exactly one time. We must also be able to show that the send and receive operations in the graph will run an equal number of times. We ensure this by replacing synchronizers (including the one that frames a transfer operation pair) with the following construct:
Branch/label pairs must be in the same unit; a branch cannot transmit a train of execution to another unit. Since a span is a single train of execution, and a train of execution must occur in only one unit, transmitting that train to another unit is a meaningless concept, so the construct is illegal. This implies that branch and label tasks share the same parent (a unit, if they are operations), or else (if they are operation steps) their parents are dedicated branch or label operations.
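The same-unit rule lends itself to a simple structural check. This is a sketch under an assumed representation in which each task record carries a `unit` field naming its parent unit; the record shape and function name are hypothetical.

```python
# Sketch of the same-unit rule: flag any branch/label pair that would
# transmit a train of execution across unit boundaries (an illegal construct).
def check_same_unit(branch_label_pairs):
    """Return the branch/label pairs that illegally cross units."""
    return [(b, l) for (b, l) in branch_label_pairs
            if b["unit"] != l["unit"]]

pairs = [
    ({"name": "b1", "unit": "U1"}, {"name": "l1", "unit": "U1"}),  # legal
    ({"name": "b2", "unit": "U1"}, {"name": "l2", "unit": "U2"}),  # illegal
]
violations = check_same_unit(pairs)
print([b["name"] for (b, _) in violations])   # ['b2']
```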
With correlated spans, if there is a branch out of, or a branch into, one span, then (a) there must be a branch out of, or into, all spans, and (b) all such branches must be correlated.
We use pseudo-PFC notation to describe these rules. The following notation is used:
We will use a technique whereby portions of the graph that are both atomic and correct are eliminated from the graph in a series of reduction actions, so that the graph reduces to a null graph only if it is correct. Two scenarios will be irreducible and represent an invalid graph; we call them stuttering and deadlock.
Reduction rules are currently expected to include the following:
In addition, when we are otherwise unable to further reduce a graph, we will use a technique we are calling demultiplexing, under which a subgraph is decomposed into parallel ‘instance subgraphs’ depicting the separate paths that are taken under mutually-exclusive situations. The procedure for accomplishing this is as follows.
1.) Locate a serial divergence (an or-node with multiple outbound edges).
2.) Locate the serial convergence where all of the divergent paths rejoin. (In some cases, this may be the graph termination.) This identifies the “multiplexed subgraph,” which we will demultiplex.
3.) Demultiplexing involves traversing the subgraph depth-first from the divergence to the convergence, replicating each node encountered along a path into a new subgraph. Each subgraph thus created is called an instance subgraph; once all have been created, we replace the multiplexed subgraph with the separate demultiplexed, or instance, subgraphs.
4.) The instance subgraphs are then subjected to the same reduction rules shown above.
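The procedure above can be sketched as a depth-first enumeration of paths. This is an illustration under an assumed adjacency-list representation; the node names and the `demultiplex` helper are hypothetical.

```python
# Sketch of demultiplexing: enumerate each path from the serial divergence
# to the serial convergence; each path becomes one 'instance subgraph'
# replicating the nodes encountered along it.
def demultiplex(edges, divergence, convergence):
    """Return one instance subgraph (as a node list) per path from the
    divergence to the convergence, found by depth-first traversal."""
    instance_subgraphs = []

    def dfs(node, path):
        if node == convergence:
            instance_subgraphs.append(path + [node])  # replicate this path
            return
        for succ in edges.get(node, []):
            dfs(succ, path + [node])

    dfs(divergence, [])
    return instance_subgraphs

# Usage: divergence D with two mutually-exclusive paths rejoining at C.
edges = {"D": ["a", "b"], "a": ["C"], "b": ["x"], "x": ["C"]}
print(demultiplex(edges, "D", "C"))
# [['D', 'a', 'C'], ['D', 'b', 'x', 'C']]
```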
The overall approach is to first perform Task Elimination (since it needs to be performed only once, reducing the graph to its control-flow representation), followed by iterations of the other reductions in some heuristically determined order. When we are unable to reduce the graph further, we demultiplex a subgraph and try again. We repeat this algorithm until nothing is left, or until the graph can no longer be reduced.
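The control structure of that loop can be sketched as follows. Only the skeleton is ours to show here; the individual reduction rules, task elimination, and demultiplexing are passed in as hypothetical callables, since their details are given elsewhere in this document.

```python
# Skeleton of the overall validation loop: task elimination once, then
# reductions until stuck, then demultiplex and retry. An empty graph means
# the recipe is valid; an irreducible residue means stuttering or deadlock.
def validate(graph, eliminate_tasks, reduction_rules, demultiplex_one):
    """Reduce `graph` to null if correct; return the irreducible residue
    otherwise.

    eliminate_tasks  -- run once, yielding the control-flow representation
    reduction_rules  -- callables returning a smaller graph, or None
    demultiplex_one  -- splits one subgraph into instance subgraphs, or None
    """
    graph = eliminate_tasks(graph)      # performed only once
    while graph:                        # empty graph => valid recipe
        for rule in reduction_rules:    # heuristically ordered
            reduced = rule(graph)
            if reduced is not None:
                graph = reduced         # a rule applied; go around again
                break
        else:                           # no rule applied: try demultiplexing
            split = demultiplex_one(graph)
            if split is None:
                return graph            # irreducible: stuttering or deadlock
            graph = split
    return None                         # fully reduced: the graph is valid

# Toy usage: a 'graph' is a list, and the single rule strips one element.
drop_last = lambda g: g[:-1] if g else None
no_split = lambda g: None
print(validate([1, 2, 3], lambda g: g, [drop_last], no_split))  # None (valid)
print(validate([1], lambda g: g, [lambda g: None], no_split))   # [1] (residue)
```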
Wasim Sadiq and Maria E. Orlowska, “Analyzing Process Models Using Graph Reduction Techniques,” University of Queensland, Australia, 2000.
Charlotta Johnsson, “A Graphical Language for Batch Control,” Lund Institute of Technology, Sweden, 1999, Appendix B: Petri Net Reduction Techniques.