Lecture Notes on Petri Nets
Prof. Javier
1 Basic definitions 9
1.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Decision procedures 43
3.1 Decision procedures for 1-bounded Petri nets . . . . . . . . . . . 43
3.1.1 Complexity for 1-bounded Petri nets . . . . . . . . . . . . 44
3.2 Decision procedures for general Petri nets . . . . . . . . . . . . . 48
3.2.1 A decision procedure for Boundedness . . . . . . . . . . 48
3.2.2 Decision procedures for Coverability . . . . . . . . . . . 50
4 Semi-decision procedures 81
4.1 Linear systems of equations and linear programming . . . . . . . 81
4.2 The Marking Equation . . . . . . . . . . . . . . . . . . . . . . . 82
4.3 S- and T-invariants . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3.1 S-invariants . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3.2 T-invariants . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.4 Siphons and Traps . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.4.1 Siphons . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.4.2 Traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Sources
The main sources are:
The train examples of Chapter 2 belong to the Petri net folklore. They were
first introduced by H. Genrich.
Part I
Chapter 1
Basic definitions
1.1 Preliminaries
Numbers
N, Z, Q and R denote the natural, integer, rational, and real numbers, respectively.
Relations
Let X be a set and R ⊆ X × X a relation. R∗ denotes the transitive and reflexive
closure of R. R−1 is the inverse of R, that is, the relation defined by (x, y) ∈
R−1 ⇔ (y, x) ∈ R.
Sequences
A finite sequence over a set A is a mapping σ : {1, . . . , n} → A, denoted by the
string a1 a2 . . . an, where ai = σ(i) for every 1 ≤ i ≤ n, or the mapping ε : ∅ → A,
the empty sequence. The length of σ is n, and the length of ε is 0.
An infinite sequence is a mapping σ : N → A. We write σ = a1 a2 a3 . . . with
ai = σ(i).
The concatenation of two finite sequences or of a finite and an infinite sequence
is defined as usual. Given a finite sequence σ, we denote by σ ω the infinite con-
catenation σσσ . . ..
σ is a prefix of τ if σ = τ or σσ′ = τ for some sequence σ′.
The alphabet of a sequence σ is the set of elements of A occurring in σ. Given
a sequence σ over A and B ⊆ A, the projection or restriction σ|B is the result of
removing all occurrences of elements a ∈ A \ B in σ.
Complexity Classes
We recall some basic notions of complexity theory. Formal definitions can be found
in standard textbooks.
A program is deterministic if it only has one possible computation for each
input. A program is nondeterministic if it may execute different computations for
the same input.
A program (deterministic or not) runs in f (n)-time for a function f : N → N
if for every input of length n (measured in bits) every computation takes at most
f (n) time. Given a set C of functions N → N (for example, C can be the set
of all polynomial functions), a program runs in C-time if it runs in f (n) time for
some function f (n) of C. Often we speak of a “polynomial-time program” or
“exponential-time” program, meaning a program that runs in time f (n) for some
polynomial resp. exponential function f (n).
We have the chain of inclusions

P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ EXPSPACE.

It is widely believed that all these inclusions are strict. However, all we know for
sure is the (rather trivial) fact that P ⊊ EXPTIME. We also know:
1.2 Syntax
Definition 1.2.1 (Net, preset, postset)
A net N = (S, T, F ) consists of a finite set S of places (represented by circles),
a finite set T of transitions disjoint from S (squares), and a flow relation (arrows)
F ⊆ (S × T ) ∪ (T × S).
The places and transitions of N are called elements or nodes. The elements of
F are called arcs.
Given x ∈ S ∪ T, the set •x = {y | (y, x) ∈ F} is the preset of x, and
x• = {y | (x, y) ∈ F} is the postset of x. For X ⊆ S ∪ T we write
•X = ⋃_{x∈X} •x and X• = ⋃_{x∈X} x•.
S = {s1 , . . . , s6 }
T = {t1 , . . . , t4 }
F = {(s1 , t1 ), (t1 , s2 ), (s2 , t2 ), (t2 , s1 ),
(s3 , t2 ), (t2 , s4 ), (s4 , t3 ), (t3 , s3 ),
(s5 , t3 ), (t3 , s6 ), (s6 , t4 ), (t4 , s5 )}
[Figure 1.1: graphical representation of the net N]
[Figure: examples of subnets and non-subnets of N]
• T′ ⊆ T, and
Remarks:
(1) N is connected iff there are no two subnets (S1, T1, F1) and (S2, T2, F2) of
N such that
• S1 ∪ T1 ≠ ∅ and S2 ∪ T2 ≠ ∅;
• S1 ∪ S2 = S, T1 ∪ T2 = T, F1 ∪ F2 = F;
• S1 ∩ S2 = ∅, T1 ∩ T2 = ∅.
(2) A connected net is strongly connected iff for every (x, y) ∈ F there is a path
leading from y to x.
Proof. Exercise.
1.3 Semantics
Definition 1.3.1 (Markings)
Let N = (S, T, F) be a net. A marking of N is a mapping M : S → N. Given
R ⊆ S we write M(R) = Σ_{s∈R} M(s). A place s is marked at M if M(s) > 0. A
set of places R is marked at M if M(R) > 0, that is, if at least one place of R is
marked at M.
A transition t is enabled at M if M(s) > 0 for every s ∈ •t. If t is enabled at M,
then it can occur (fire), leading to the marking M′ defined by

M′(s) = M(s) − 1   if s ∈ •t \ t•
M′(s) = M(s) + 1   if s ∈ t• \ •t
M′(s) = M(s)       otherwise
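The firing rule translates directly into code. The following is a minimal Python sketch (the naming and representation are ours, not part of the notes): a net is given by the presets and postsets of its transitions, and a marking is a dictionary from places to token counts.

    # pre[t] and post[t] are the preset and postset of transition t (sets of places).
    def enabled(pre, M, t):
        # t is enabled at M if every input place carries at least one token
        return all(M[s] > 0 for s in pre[t])

    def fire(pre, post, M, t):
        # marking reached by firing t at M (assumes t is enabled at M)
        M2 = dict(M)
        for s in pre[t] - post[t]:
            M2[s] -= 1
        for s in post[t] - pre[t]:
            M2[s] += 1
        return M2

    # The net of Figure 1.1:
    pre  = {'t1': {'s1'}, 't2': {'s2', 's3'}, 't3': {'s4', 's5'}, 't4': {'s6'}}
    post = {'t1': {'s2'}, 't2': {'s1', 's4'}, 't3': {'s3', 's6'}, 't4': {'s5'}}

With the marking of Example 1.3.3 below, fire(pre, post, M, 't3') yields the marking (1, 0, 1, 0, 0, 1).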
Example 1.3.3 Let M be the marking of the net N in Figure 1.1 given by M (s1 ) =
M (s4 ) = M (s5 ) = 1 and M (s2 ) = M (s3 ) = M (s6 ) = 0. We denote this
marking by the vector (1, 0, 0, 1, 1, 0).
The marking enables transitions t1 and t3 , because • t1 = {s1 } and • t3 =
{s4 , s5 }. Transition t2 is not enabled, because M (s2 ) = 0. Transition t4 is not
enabled, because M (s6 ) = 0. We have
(1, 0, 0, 1, 1, 0) −t1→ (0, 1, 0, 1, 1, 0)
(1, 0, 0, 1, 1, 0) −t3→ (1, 0, 1, 0, 0, 1)
Example 1.3.5 Let N be the net of Figure 1.1 and let M = (1, 0, 0, 1, 1, 0) be a
marking of N . We have
(1, 0, 0, 1, 1, 0) −t1→ (0, 1, 0, 1, 1, 0) −t3→ (0, 1, 1, 0, 0, 1) −t2→ (1, 0, 0, 1, 0, 1) −t4→ (1, 0, 0, 1, 1, 0)
So M enables the finite sequence t1 t3 t2 t4 and the infinite sequence (t1 t3 t2 t4 )ω .
The following simple lemma plays a fundamental role in many results about
Petri nets.
(2): We show that every finite prefix of σ is enabled at M + L. The result then
follows from Proposition 1.3.6. By Proposition 1.3.6, every finite prefix of σ is
enabled at M. That is, for every finite prefix τ of σ there is a marking M′ such that
M −τ→ M′. By (1) we get (M + L) −τ→ (M′ + L), and we are done.
• There is an edge from M to M′ labeled by t iff M −t→ M′, that is, iff M
enables t and the firing of t leads from M to M′.
REACHABILITY-GRAPH((S, T, F, M0))
  (V, E, v0) := ({M0}, ∅, M0);
  Work := {M0};
  while Work ≠ ∅
    do select M from Work;
       Work := Work \ {M};
       for t ∈ enabled(M)
         do M′ := fire(M, t);
            if M′ ∉ V
              then V := V ∪ {M′};
                   Work := Work ∪ {M′};
            E := E ∪ {(M, t, M′)};
  return (V, E, v0)
The algorithm of Figure 1.3 computes the reachability graph. It uses two functions:
enabled(M), which returns the set of transitions enabled at M, and fire(M, t), which
returns the marking obtained by firing t at M.
The set Work may be implemented as a stack, in which case the graph is constructed
in a depth-first manner, or as a queue, in which case it is constructed breadth-first.
Breadth-first search finds a shortest transition path from the initial marking to a given
(erroneous) marking. Some applications require depth-first search.
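For concreteness, here is a Python sketch of the same algorithm (the names are ours; markings are stored as tuples so that they can be placed in a set, and the net representation matches the sketch given in Section 1.3):

    from collections import deque

    def reachability_graph(places, pre, post, M0):
        # places: ordered list of place names; markings are tuples of token counts
        idx = {s: i for i, s in enumerate(places)}
        def enabled(M):
            return [t for t in pre if all(M[idx[s]] > 0 for s in pre[t])]
        def fire(M, t):
            M2 = list(M)
            for s in pre[t] - post[t]:
                M2[idx[s]] -= 1
            for s in post[t] - pre[t]:
                M2[idx[s]] += 1
            return tuple(M2)
        V, E = {M0}, set()
        work = deque([M0])              # a queue: breadth-first construction
        while work:
            M = work.popleft()
            for t in enabled(M):
                M2 = fire(M, t)
                if M2 not in V:
                    V.add(M2)
                    work.append(M2)
                E.add((M, t, M2))
        return V, E, M0

On a bounded Petri net the loop terminates; on an unbounded one it does not, which is one of the problems addressed in Chapter 3.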
Chapter 2

Modelling with Petri nets
[Figure 2.1: Petri net model of the buffer with capacity 3, with places s1, . . . , s6 and transitions t1, . . . , t4]
The addition of a new item is modeled by the firing of t1 . The firing of transition
ti models moving the item in cell i − 1 to cell i. Firing tn+1 models removing one
item. Observe that the buffer is concurrent: there are reachable markings at which
transitions t1 and tn+1 can occur independently of each other, that is, an item can
be added while another one is being removed.
Figure 2.2 shows the reachability graph of the buffer with capacity 3. By in-
spection of the reachability graph we can see that the following properties hold:
• Consistency: no cell is simultaneously empty and full (that is, no reachable marking puts
tokens on both s_{2i−1} and s_{2i} for i = 1, 2, 3).
• 1-boundedness: every reachable marking puts at most one token in a given place.
[Figure 2.2: Reachability graph of the buffer with capacity 3]
• Deadlock freedom: every reachable marking has at least one successor marking.
Even more: every cell can always be filled and emptied again (every transition
can occur again).
• Capacity 3: the buffer has indeed capacity 3, that is, there is a reachable marking
that puts one token in s2 , s4 , s6 .
• The initial marking is reachable from any reachable marking (that is, it is always
possible to empty the buffer).
• Between any two reachable markings there is a path of length at most 6.
[Figure: a Petri net with places s1, . . . , s4, l1, . . . , l4 and transitions t1, . . . , t4, together with its reachability graph]
Since every reachable marking puts at most one token on a place, we denote a marking
by the set of places marked by it. For instance, we denote by {l1, s2, l3, s4} the
marking that puts a token on l1, s2, l3 and s4.
The Petri net of Figure 2.5 is a solution of the problem: The reader can con-
struct the reachability graph and show that the desired property holds. However,
the graph is pretty large!
• Can the philosophers starve to death (because the system reaches a dead-
lock)?
[Figure: Petri net model of the four dining philosophers, with places for thinking, eating, and the forks]
The wolf may eat the goat when the man is not around, and the goat may eat the cabbage
when unattended (see Figure 2.7).
Can the man bring everyone across the river without endangering the goat or
the cabbage? And if so, how?
We model the system with a Petri net. The puzzle mentions the following
objects: man, wolf, goat, cabbage, boat. Each of them can be on either side of the river. It
also mentions the following actions: crossing the river, wolf eats goat, goat eats
cabbage.
Objects and their states are modeled by places. (We can omit the boat, because
it is always going to be on the same side as the man.) Actions are modeled by
transitions. Figure 2.7 shows the transitions for the three actions.
The Petri net of Figure 2.8 models this algorithm. The variable mi is modeled
by the places mi = true and mi = false. A token on mi = true means that
at the current state of the program (marking) the variable mi has the value true
(so the Petri net must satisfy the property that no reachable marking puts tokens
on both mi = true and mi = false at the same time). Variable hold is modeled
analogously.
A token on p4 (q4 ) indicates that the left (right) process is in its critical section.
Mutual exclusion holds if no reachable marking puts a token on p4 and q4 . The
Petri net has 20 reachable markings.
[Figure 2.7: places ML, MR, WL, WR, GL, GR, CL, CR for the positions of man, wolf, goat and cabbage, and transitions (including WGL, WGR, CGL, CGR) for the crossings and the eating actions]
[Figure 2.8: Petri net model of Peterson's algorithm, with places p1, . . . , p4, q1, . . . , q4, m1 = true/false, m2 = true/false, hold = 1/2 and transitions u1, . . . , u6, v1, . . . , v6]
[Figure: Petri net of the first attempt, with places idle−l, idle−r, wait−l, wait−r, done−l, done−r and transitions request−lr, request−rl, answer−lr, answer−rl, action−l, action−r, reaction−l, reaction−r]
[Figure: Petri net of the next attempt, with abbreviated labels (i−l, i−r, w−l, w−r, d−l, d−r, r−lr, r−rl, a−lr, a−rl, r−l, r−r, a−l, a−r)]
[Figure: Petri net of a further attempt, additionally containing the crosstalk transitions ct−l and ct−r]
The final attempt (Figure 2.12) is both deadlock-free and fair. The protocol
works in rounds. A “good” round consists of a request and an answer. In a “bad”
round both processes issue a request and they reach a crosstalk situation. Such a
round continues as follows: both processes detect the crosstalk, send each other an
“end-of-round” signal, wait for the same signal from their partner, and then move
to their initial states.
The solution is not perfect. In the worst case there are only bad rounds, and no
requests are answered at all.
[Figure 2.12: Petri net of the final attempt, with additional transitions end−of−round−l and end−of−round−r]
A net with weighted arcs N = (S, T, W) consists of two disjoint sets S, T of places and
transitions and a weight function W : (S × T) ∪ (T × S) → N. A transition t is
enabled at a marking M of N if M(s) ≥ W(s, t) for every s ∈ S. If t is enabled,
then it can occur, leading to the marking M′ defined by M′(s) = M(s) − W(s, t) + W(t, s) for every s ∈ S.
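In code, the weighted rule only replaces the implicit weight 1 of ordinary arcs by W. A minimal Python sketch (our own naming, with missing arcs treated as weight 0):

    def weight(W, x, y):
        # W maps pairs (x, y) to arc weights; absent pairs have weight 0
        return W.get((x, y), 0)

    def enabled_w(W, places, M, t):
        return all(M[s] >= weight(W, s, t) for s in places)

    def fire_w(W, places, M, t):
        # M'(s) = M(s) - W(s, t) + W(t, s) for every place s
        return {s: M[s] - weight(W, s, t) + weight(W, t, s) for s in places}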
[Figure: Petri net model of a system with m reading and n writing processes]
Exercise: Modify the Petri net so that reading processes cannot indefinitely
prevent another process from writing.
The Black Ninjas. The Black Ninjas are an ancient secret society of warriors.
It is so secret that its members do not even know each other, nor how many they
are. When there is a matter to discuss, Sensei, the founder of the society, asks the
ninjas to meet at night, preferably during a storm as it minimizes the chance of
being surprised by the enemy.
As it happens, all ninjas have just received a note asking them to meet in a
certain Zen garden at midnight, wearing their black uniform, in order to decide
whether they should attack a nearby castle at dawn. The decision is taken by ma-
jority, and in the case of a tie the ninjas will not attack. All ninjas must decide their
vote in advance, the only purpose of the meeting is to compute the final outcome.
When the ninjas reach the garden in the gloomy night, dark clouds cover the
sky as rain pours vociferously. The weather is so dreadful that it is impossible to
see or hear anything at all. For this reason, voting procedures based on visual or
oral communication are hopeless. Is there a way for the ninjas to conduct their vote
in spite of these adverse conditions?
A first protocol. Sensei has foreseen this situation and made preparations. The
note sent to the ninjas contains detailed instructions on how to proceed. Each ninja
must wander randomly around the garden. Two ninjas that happen to bump into
each other exchange information using touch according to the following protocol.
Each ninja maintains two bits of information:
• the first bit indicates whether the ninja is currently active (A) or passive (P);
and
• the second bit indicates the current expectation of each ninja on the final
outcome of the vote: yes, we will attack (Y) or no, we will not attack (N).
This gives four possible states for each ninja: AY , AN , PY , PN . Initially the
ninjas set their first bit to A, i.e., they are all active, and their second bit to their
vote. State changes obey interaction rules or transitions of the form p, q ↦ p′, q′,
meaning that if the interacting ninjas are in states p and q, respectively, they move
to states p′ and q′. Sensei specifies two rules, shown in Table 2.1, with the implicit
assumption that for any combination of states not covered by the rules, the ninjas
simply keep their current states.
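Protocols of this kind are easy to simulate. The sketch below is our own, generic Python code; the concrete rule tables (Tables 2.1–2.3) are not reproduced here and have to be plugged in. It repeatedly picks a random pair of ninjas and applies a rule if one matches:

    import random
    from collections import Counter

    def simulate(rules, initial_states, steps=10_000):
        # rules: dict mapping a pair of states (p, q) to the pair (p', q')
        # initial_states: one entry per ninja, e.g. ['AY', 'AY', 'AN']
        agents = list(initial_states)
        for _ in range(steps):
            i, j = random.sample(range(len(agents)), 2)
            p, q = agents[i], agents[j]
            if (p, q) in rules:
                agents[i], agents[j] = rules[(p, q)]
            elif (q, p) in rules:                 # treat pairs as unordered
                agents[j], agents[i] = rules[(q, p)]
        return Counter(agents)

    # Hypothetical usage with one rule mentioned later in the text
    # (an active Y-ninja converts a passive N-ninja):
    print(simulate({('AY', 'PN'): ('AY', 'PY')}, ['AY', 'AY', 'AN', 'PN']))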
A second protocol. The protocol works fine for a time, but then disaster strikes.
At one gathering there is an equal number of Y-ninjas and N-ninjas. In this case —
and only in this case — the protocol is incorrect. There is an execution in which
the ninjas do not reach consensus, and after which the states of the ninjas cannot
change anymore. At dawn only some ninjas attack, they are decimated, and Sensei
commits harakiri.
The newly elected Sensei II analyzes the problem and quickly comes up with
a repair for the protocol. It is shown in Table 2.2: a new rule PY, PN ↦ PN, PN
is added. If all ninjas become passive, which can only happen in the case of a tie,
then the new rule guarantees that an N-consensus is eventually reached.
A third protocol. Again, the new protocol works fine . . . until it doesn’t. The
story repeats itself: At dawn no consensus has been reached, only some ninjas
attack, they are decimated. The successor, Sensei III, considers the general scenario
in which Y has a majority of only one ninja, and finds the following explanation:
In this situation, the protocol reaches with high probability a configuration with
one single ninja in state AY and many ninjas in state PN. There is now a struggle
between the single AY-ninja, who turns PN-ninjas to PY using the second rule,
and the many PN-ninjas, who turn PY-ninjas back to PN using the new rule.
The AY-ninja eventually “wins”, and consensus Y is reached, but only once she
has turned all PN-ninjas to PY before any of them is converted back to PN.
Sensei III wants a new protocol with a clean design. Since ties are the source
of all problems, she decides that the protocol should explicitly deal with them. So,
apart from being active or passive, ninjas can now have a more refined expectation
of the outcome: Y, N, and T (for “tie”). The protocol is shown in Table 2.3. When
two active ninjas meet, only one of them becomes passive, and both change their
expectation in the natural way. For example, if the expectations are Y and T, then
the ninja with expectation T changes it to Y. This explains rules 1 to 4. Rule 5 is
the usual one: passive ninjas adopt the expectation of active ninjas.
Figure 2.14: Weighted Petri nets for the first and second protocol. Transitions t
such that •t = t• are not shown.
Transitions t such that •t = t• (whose firing does not change the current marking) have been
omitted. Population protocols are designed to compute predicates ϕ : N^k → {0, 1}.
We first give an informal explanation of how a protocol computes a predicate,
and then a formal definition using Petri net terminology. A protocol for ϕ has a
distinguished set of input states {q1 , q2 , . . . , qk } ⊆ Q. Further, each state of Q,
initial or not, is labeled with an output, either 0 or 1. Assume for example k = 2.
In order to compute ϕ(n1 , n2 ), we first place ni agents in qi for i = 1, 2, and 0
agents in all other states. This is the initial configuration of the protocol for the
input (n1 , n2 ). Then we let the protocol run. The protocol satisfies that in every
fair run starting at the initial configuration (fair runs are defined formally below),
eventually all agents reach states labeled with 1, and stay in such states forever, or
they all reach states labeled with 0, and stay in such states forever. So, intuitively,
in all fair runs all agents eventually “agree” on a boolean value. By definition, this
value is the result of the computation, i.e., the value of ϕ(n1, n2).
Formally, and in Petri net terms, fix a Petri net N = (S, T, W) with |•t| =
2 = |t•| for every transition t. Further, fix a set I = {p1, . . . , pk} ⊆ S of input places,
and a function O : S → {0, 1}. A marking M of N is a b-consensus if M(p) > 0
implies O(p) = b. A b-consensus M is stable if every marking reachable from
M is also a b-consensus. A firing sequence M0 −t1→ M1 −t2→ M2 · · · of N is fair
if it is finite and ends at a deadlock marking, or if it is infinite and the following
condition holds for all markings M, M′ and t ∈ T: if M −t→ M′ and M = Mi
for infinitely many i ≥ 0, then Mj −tj+1→ Mj+1 = M −t→ M′ for infinitely many
j ≥ 0. In other words, if a fair sequence reaches a marking infinitely often, then
all the transitions enabled at that marking are fired infinitely often from that
marking. A fair firing sequence converges to b if there is i ≥ 0 such that Mj is
a b-consensus for every j ≥ i. For every v ∈ N^k with
A model of a biological system. Petri nets are often used to model biological
systems. In these applications, tokens represent molecules or cells, and transi-
tions correspond to chemical reactions or biological processes. Figure 2.15, taken
from the paper “Executable cell biology”, by J. Fisher and T.A. Henzinger (Nature
biotechnology, 2007), shows in part (a) a simple, standard weighted Petri net. Part
(b) shows a simplified logical regulatory graph for the biosynthesis of tryptophan
in E. coli. Each node of the regulatory graph represents an active component: tryp-
tophan (Trp), the active enzyme (TrpE) and the active repressor (TrpR). The node
marked by a rectangle accounts for the import of Trp from external medium. All
nodes are binary (that is, can take the value 0 or 1), except Trp, which is repre-
sented by a ternary variable (taking the values 0, 1, 2). Arrows represent activation
and bars denote inhibition (inhibitor arcs). Part (c) shows the Petri net of the Trp
regulatory network. Each of the four components of part (b) is represented by two
complementary places and all the different situations that lead to a change of the
state of the system are modeled by one of the nine transitions (t1 , . . . , t9 ).
A model of a flexible manufacturing system. Figure 2.16, taken from the paper
“Optimal Petri-Net-Based Polynomial-Complexity Deadlock-Avoidance Policies
for Automated Manufacturing Systems” by Xing et al. (IEEE Trans. on Systems,
Man, and Cybernetics, 2009) shows a flexible manufacturing cell which has four
machines, modeled by places p20 to p23 , and three robots, modeled by places p24
to p26 . Tokens model parts, and so, for example, a token at p20 means that the part
represented by the token is currently being processed at the first machine. Each
machine can hold two parts at the same time, and each robot can hold one part.
A model of a business process. Figure 2.17, taken from the paper “Business
process management as the “Killer App” for Petri nets” by van der Aalst (Software
Figure 2.16: Petri net model of a flexible manufacturing system
and Systems Modeling, 2014), shows a Petri net model of the life-cycle of a re-
quest for compensation. A transition may carry a label referring to some activity.
Transitions without a label are “silent”.
Proposition 2.10.2
(1) Liveness implies deadlock freedom.
Proof. (1) follows immediately from the definitions. (2) and (3) follow from the
definitions and from the fact that a Petri net has finitely many places.
Chapter 3

Decision procedures
Proposition 3.1.1 Let (N, M0 ) be a bounded Petri net. (N, M0 ) is live iff for
every bottom SCC of the reachability graph of (N, M0 ) and for every transition t,
some marking of the SCC enables t.
Proof. (⇒) Assume (N, M0 ) is live. Let M be a marking of a bottom SCC of the
reachability graph of (N, M0 ), and let t be a transition of N . By the definition of
liveness, some marking reachable from M enables t. By the definition of bottom
SCC, this marking belongs to the same bottom SCC as M .
(⇐) Assume that for every bottom SCC of the reachability graph of (N, M0 ) and
for every transition t, some marking of the SCC enables t. We show that (N, M0 )
is live. Let M be an arbitrary marking reachable from M0 , and let t be a transition.
By the definition of a bottom SCC, there is a bottom SCC such that every marking
of it is reachable from M . Since some marking of the SCC enables t, we are done.
The condition of Proposition 3.1.1 can be checked in linear time using Tarjan’s
algorithm, which computes all the SCCs of a directed graph in linear time. The
algorithm can be easily adapted to compute the bottom SCCs.
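A sketch of this check with the networkx library, taking the reachability graph as a set of labelled edges (the function names and graph representation are ours):

    import networkx as nx

    def is_live(vertices, edges, transitions):
        # edges: triples (M, t, M'); a marking enables t iff it has an outgoing t-edge
        enabled_at = {M: set() for M in vertices}
        G = nx.DiGraph()
        G.add_nodes_from(vertices)
        for (M, t, M2) in edges:
            G.add_edge(M, M2)
            enabled_at[M].add(t)
        C = nx.condensation(G)                 # DAG of strongly connected components
        for c in C.nodes:
            if C.out_degree(c) == 0:           # bottom SCC
                fired = set().union(*(enabled_at[M] for M in C.nodes[c]['members']))
                if fired != set(transitions):
                    return False
        return True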
Rule of thumb 1:
All interesting questions about the behaviour of 1-bounded Petri
nets are PSPACE-hard.
Notice that a rule of thumb is not a theorem. There are behavioral properties
of 1-bounded Petri nets that can be solved in polynomial time. For instance, the
question “Is the initial marking a deadlock?” can be answered very efficiently;
however, it is so trivial that hardly anybody would consider it really interesting.
So a more careful formulation of the rule of thumb would be that all questions
described in the literature as interesting are at least PSPACE-hard. Here are 14
examples:
• Is there a reachable marking that does not put a token in a given place?
• Is there a run which enables a transition infinitely often but contains it only
finitely often?
Turing machines. We use single-tape Turing machines with one-way infinite
tapes, i.e., the tape has a first but no last cell. For our purposes it suffices to consider
Turing machines starting on the empty tape, i.e., initially the tape contains only
blank symbols. So we define a Turing machine as a tuple M = (Q, Γ, δ, q0, F),
where Q is the set of states, Γ the set of tape symbols (containing a special blank
symbol), δ : (Q × Γ) → Q × Γ × {R, L} the transition function, q0 the initial
state, and F the set of final states. The size of a Turing machine is the number of
bits needed to encode its transition function.
The notion of simulation used here is very strong: a 1-bounded Petri net sim-
ulates a Turing machine if there is a bijection f between the configurations of the
machine and the markings of the net such that the machine can move from a con-
figuration c1 to a configuration c2 in one step if and only if the Petri net can move
from the marking f (c1 ) to the marking f (c2 ) through the firing of exactly one
transition.
Let A = (Q, Γ, Σ, δ, q0, F) be a linearly bounded automaton of size n. The
computations of A visit at most the cells c1, . . . , cn. Let C be this set of cells. The
simulating Petri net N(A) contains a place s(q) for each state q ∈ Q, a place s(c)
for each cell c ∈ C, and a place s(a, c) for each symbol a ∈ Γ and each cell
c ∈ C. A token on s(q) signals that the machine is in state q. A token on s(c)
signals that the machine reads the cell c. A token on s(a, c) signals that the cell c
contains the symbol a. The total number of places is |Q| + n · (1 + |Γ|).
The transitions of N(A) are determined by the transition relation of A. If
(q′, a′, R) ∈ δ(q, a), then we have for each cell c a transition t(q, a, c) whose input
places are s(q), s(c), and s(a, c) and whose output places are s(q′), s(a′, c) and
s(c′), where c′ is the cell to the right of c. If (q′, a′, L) ∈ δ(q, a) then we add a
similar set of transitions. The total number of transitions is at most 2 · |Q|² · |Γ|² · n,
and so O(n²), because the size of A is proportional to |Q|² · |Γ|².
¹ Notice that we deviate from the standard definition, which says that an automaton is f(n)-bounded
if it can use at most f(k) tape cells for an input word of length k. Since we only consider
bounded automata working on the empty tape, the standard definition is not appropriate for us.
The initial marking of N (A) puts one token on s(q0 ), on s(c1 ), and on the
place s(B, ci ) for 1 ≤ i ≤ n, where B denotes the blank symbol. The total size of
the Petri net is O(n2 ).
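The construction of N(A) can be written down almost literally. In the Python sketch below (an illustration with our own encoding, not the notes' code), delta maps a pair (q, a) to a set of triples (q', a', d) with d in {'L', 'R'}, and a place is represented by a tuple:

    def lba_to_net(Q, Gamma, delta, q0, n, blank='B'):
        cells = range(1, n + 1)
        places = ({('state', q) for q in Q}
                  | {('head', c) for c in cells}
                  | {('sym', a, c) for a in Gamma for c in cells})
        transitions = {}                   # name -> (preset, postset)
        for (q, a), moves in delta.items():
            for (q2, a2, d) in moves:
                for c in cells:
                    c2 = c + 1 if d == 'R' else c - 1
                    if c2 < 1 or c2 > n:   # the head never leaves the n cells
                        continue
                    transitions[(q, a, q2, a2, d, c)] = (
                        {('state', q), ('head', c), ('sym', a, c)},
                        {('state', q2), ('head', c2), ('sym', a2, c)})
        M0 = {('state', q0), ('head', 1)} | {('sym', blank, c) for c in cells}
        return places, transitions, M0    # M0 given as the set of initially marked places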
It follows immediately from this definition that each move of A corresponds to
the firing of one transition. The configurations reached by A along a computation
correspond to the markings reached along its corresponding run. These markings
put one token in exactly one of the places {s(q) | q ∈ Q}, in exactly one of the
places {s(c) | c ∈ C}, and in exactly one of the places {s(a, c) | a ∈ Γ} for each
cell c ∈ C. So N(A) is 1-bounded.
In order to answer a question about a linearly bounded automaton A we can
construct the net N(A), which is only polynomially larger than A, and solve the
corresponding question about the runs of N(A). For instance, the question “does some
computation of A terminate?” corresponds to “does the Petri net N(A) have a
deadlock?”
It turns out that most questions about the computations of linearly bounded
automata are PSPACE-hard. To begin with, the (empty tape) acceptance problem
is PSPACE-complete:
Many other problems can be easily reduced to the acceptance problem in poly-
nomial time, and so are PSPACE-hard too. Examples are:
• does A halt?,
Proof. By induction on k.
Basis: k = 1. Then the elements of A are just numbers. The set {A1, A2, · · · }
has a minimum, say c1. Choose i1 as some index (say, the smallest) such that
Ai1 = c1. Consider now the set {Ai1+1, Ai1+2, · · · }. This set has a minimum c2,
which by definition satisfies c1 ≤ c2. Choose i2 as the smallest index i2 > i1
such that Ai2 = c2, etc.
Step: k > 1. Given a vector Ai, let A′i be the vector of dimension k − 1 consisting
of the first k − 1 components of Ai, and let ai be the last component of Ai. We
write Ai = (A′i | ai).
Since the vectors of A′1 A′2 A′3 · · · have dimension k − 1, by induction hypothesis
there is an infinite subsequence A′i1 ≤ A′i2 ≤ A′i3 ≤ · · · . Consider now the sequence
ai1 ai2 ai3 · · · . By the base case (k = 1) there is a subsequence aj1 ≤ aj2 ≤ aj3 ≤ · · · .
But then we have Aj1 ≤ Aj2 ≤ Aj3 ≤ · · · , and we are done.
Theorem 3.2.3 (N, M0) is unbounded iff there are markings M and L such that
L ≠ 0 and M0 −→* M −→* (M + L).
Proof. (⇐): Assume there are such markings M, L. By the Monotonicity Lemma
we have
M −→* (M + L) −→* (M + 2 · L) −→* . . .
Since L ≠ 0, the set [M0⟩ of reachable markings is infinite and (N, M0) is unbounded.
(⇒) Assume (N, M0) is unbounded. Then the set [M0⟩ of reachable markings is
infinite. By König's Lemma there is an infinite firing sequence M0 −t1→ M1 −t2→
M2 . . . such that the markings M0, M1, M2, . . . are pairwise distinct. By Dickson's
Lemma there are indices i < j such that M0 −→* Mi −→* Mj and Mi ≤ Mj.
Choose M := Mi and L := Mj − Mi. Since Mi and Mj are distinct, we have
L ≠ 0.
Proof. We give an algorithm that always terminates and always returns the correct
answer: “bounded” or “unbounded”. The algorithm explores the reachability
graph of the input Petri net (N, M0) using breadth-first search. After adding a
new marking M′, the algorithm checks if the part of the graph already constructed
contains a sequence M0 −→* M −→* M′ such that M ≤ M′ (and M ≠ M′,
because M′ is new). The algorithm terminates if it finds such a sequence, in which
case it returns “unbounded”, or if it cannot add any new marking, in which case it
returns “bounded”.
If (N, M0) is bounded, then by Theorem 3.2.3 the algorithm never finds a new
marking M′ satisfying the condition above. So, since the Petri net has only finitely
many reachable markings, the algorithm terminates because it cannot find any new
marking, and correctly returns “bounded”.
If (N, M0) is unbounded, then there are infinitely many reachable markings,
and so the algorithm can never terminate by running out of new markings. On
the other hand, by Theorem 3.2.3 the algorithm eventually finds markings M and
M′ as above, and so it correctly answers “unbounded”.
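A Python sketch of this procedure (names ours; enabled(M) returns the transitions enabled at the tuple-marking M, and fire(M, t) the successor marking, as in the earlier sketches):

    from collections import deque

    def is_bounded(M0, enabled, fire):
        V, preds, work = {M0}, {M0: set()}, deque([M0])
        while work:
            M = work.popleft()
            for t in enabled(M):
                M2 = fire(M, t)
                if M2 not in V:
                    V.add(M2); preds[M2] = set(); work.append(M2)
                    # look for an already constructed M1 with M1 -->* M2 and M1 <= M2
                    seen, stack = set(), [M]
                    while stack:
                        M1 = stack.pop()
                        if M1 in seen:
                            continue
                        seen.add(M1)
                        if all(a <= b for a, b in zip(M1, M2)):
                            return False          # witness found: unbounded
                        stack.extend(preds[M1])
                preds[M2].add(M)
        return True                               # no new markings left: bounded

Termination follows from the two cases of the proof: in the bounded case the reachable markings are exhausted, in the unbounded case a witness pair is eventually found.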
Coverability graphs
We show how to construct a coverability graph of a Petri net (N, M0). The coverability
graph is always finite, and satisfies the following property: a marking M of N can be
covered from M0 (that is, some reachable marking is ≥ M) iff the coverability graph
contains an ω-marking M′′ ≥ M (Theorem 3.2.8).
[Figure: a sequence that leads from M to a marking M′ = M + ∆M ≥ M can be repeated, yielding M + 2∆M, M + 3∆M, . . .]
For the rest of the proof we start with a lemma. It states that if COVERABILITY-GRAPH
adds an ω-marking M′, say M′ = (ω, 2, 0, ω, 3, ω), then for every k, say
15, there is a reachable marking where the ω-components have at least the value 15;
for example, for 15 the marking could be (17, 2, 0, 234, 3, 15). In other words, if
the algorithm adds an ω-marking (ω, 2, 0, ω, 3, ω), it is possible to reach arbitrarily
large values for all ω-components simultaneously.
COVERABILITY-GRAPH((P, T, F, M0))
  (V, E, v0) := ({M0}, ∅, M0);
  Work := {M0};
  while Work ≠ ∅
    do select M from Work;
       Work := Work \ {M};
       for t ∈ enabled(M)
         do M′ := fire(M, t);
            M′ := AddOmegas(M, t, M′, V, E);
            if M′ ∉ V
              then V := V ∪ {M′};
                   Work := Work ∪ {M′};
            E := E ∪ {(M, t, M′)};
  return (V, E, v0)
ADDOMEGAS(M, t, M′, V, E)
  for M′′ ∈ V
    do if M′′ < M′ and M′′ −→*_E M
         then M′ := M′ + ((M′ − M′′) · ω);
  return M′
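A direct Python transcription of the two procedures, representing ω by math.inf (for which M + 1 = M − 1 = M and M > k hold, exactly as needed); enabled and fire are as in the earlier sketches but must accept ω-components, and all other names are ours:

    import math
    from collections import deque

    OMEGA = math.inf

    def add_omegas(M, M2, preds):
        # for every M'' with M'' -->* M in the constructed graph and M'' < M2,
        # set the components where M2 exceeds M'' to OMEGA
        M2 = list(M2)
        seen, stack = set(), [M]
        while stack:
            M1 = stack.pop()
            if M1 in seen:
                continue
            seen.add(M1)
            if all(a <= b for a, b in zip(M1, M2)) and any(a < b for a, b in zip(M1, M2)):
                M2 = [OMEGA if a < b else b for a, b in zip(M1, M2)]
            stack.extend(preds[M1])
        return tuple(M2)

    def coverability_graph(M0, enabled, fire):
        V, E, preds, work = {M0}, set(), {M0: set()}, deque([M0])
        while work:
            M = work.popleft()
            for t in enabled(M):
                M2 = add_omegas(M, fire(M, t), preds)
                if M2 not in V:
                    V.add(M2); preds[M2] = set(); work.append(M2)
                preds[M2].add(M)
                E.add((M, t, M2))
        return V, E, M0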
We can now finally define the marking Mk′. Choose a number ℓ large enough
to guarantee that
(2) the marking M̄ℓ given by Mℓ′′ −(σt)^{k+1}→ M̄ℓ satisfies M̄ℓ(s) > k for every
ω-place s of M′′.
Since the execution of σt adds at least one token to s0, after the execution of
(σt)^{k+1} the place s0 has at least k + 1 tokens, and so M̄ℓ(s0) > k. Since M̄ℓ
also satisfies (2), we can take Mk′ := M̄ℓ.
Theorem 3.2.8 Let (N, M0) be a Petri net and let M be a marking of N. There
is a reachable marking M′ ≥ M iff the coverability graph of (N, M0) contains an
ω-marking M′′ ≥ M.
Rackoff’s algorithm
The coverability graph allows us to decide coverability for every marking at once.
However, Coverability only asks whether one particular marking M can be covered.
The question is whether we can give a bound on the size of the fragment of the
coverability graph we need to construct to find an ω-marking covering M.
We consider Petri nets in which places may have a negative number of tokens.
Transitions can occur independently of the number of tokens in their input places.
G′(s) = G(s) − 1   if s ∈ •t \ t•
G′(s) = G(s) + 1   if s ∈ t• \ •t
G′(s) = G(s)       otherwise
We denote by G ↪t G′ that firing t at G leads to the g-marking G′. An integer
firing sequence of an integer net is a sequence G0 ↪t1 G1 ↪t2 · · · ↪tn Gn.
Every marking is also a g-marking, every Petri net is also an integer net, and
every firing sequence is also an integer firing sequence, but the converse does not
hold.
In the rest of the section we fix a net N with places {s1, . . . , sk}, and identify
g-markings with vectors of Z^k.
Theorem 3.2.11 [Rackoff 1978] Let M ∈ N^k be a marking of N, and let
n = 1 + Σ_{i=1}^{k} M(i). For every marking M0 ∈ N^k of N, if (N, M0) has a (k, M)-
covering sequence, then it has one of length at most (2n)^{(k+1)!} ∈ n^{2^{O(k log k)}}.
• The bound is polynomial in n (for Petri nets with a fixed number k of places,
the bound is of the form O(n^c) for some constant c), but doubly exponential
in k (for markings with a fixed number n of tokens, the bound is of the form
2^{2^{c·k·log k}} for some constant c).
• f(0) = 1, and
• f(i) = (n · f(i − 1))^i + f(i − 1) for every i ≥ 1.
of (N, Gα+1) of length at most f(i − 1), that is, ℓ ≤ f(i − 1). Since Gα+1(i) ≥
n · f(i − 1), and a sequence of length f(i − 1) can remove at most f(i − 1) tokens
from the place si, after the execution of the new sequence the number of tokens in
si is at least (n − 1) · f(i − 1) ≥ n − 1. By the definition of n, we have n − 1 ≥ G(i).
So the sequence
σ′ = G0 ↪t1 · · · ↪tα Gα ↪tα+1 Gα+1 ↪u1 H1 ↪u2 · · · ↪uℓ Hℓ
Proof of Theorem 3.2.11. Assume that (N, M0) has a (k, M)-covering sequence.
By Lemma 3.2.12, it has one of length at most f(k). So it suffices to prove f(k) ≤
(2n)^{(k+1)!}.
We prove by induction on i that f(i) ≤ (2n)^{(i+1)!} for every i ≥ 0. Recall
that n ≥ 1 holds by the definition of n. Further, it follows immediately from the
definition of f that f(i) ≥ 1 for every i ≥ 0.
Base: i = 0. Since n ≥ 1, we have f(0) = 1 ≤ 2n = (2n)^{1!}.
Step: i > 0. Assume f(i − 1) ≤ (2n)^{i!}. We prove f(i) ≤ (2n)^{(i+1)!}. We have:
f(i) = (n · f(i − 1))^i + f(i − 1)              (definition of f)
     ≤ (n · (2n)^{i!})^i + (2n)^{i!}            (ind. hyp.)
     ≤ (n · (2n)^{i!})^i + (n · (2n)^{i!})^i    (n ≥ 1, i ≥ 1)
     = 2 · (n · (2n)^{i!})^i
     = 2^{i·i!+1} · n^{i·i!+i}
     ≤ 2^{(i+1)!} · n^{(i+1)!}                   (n ≥ 1, i ≥ 1)
     = (2n)^{(i+1)!}
Finally, let us prove (2n)^{(k+1)!} ∈ n^{2^{O(k log k)}}. We first show (k + 1)! ∈ 2^{O(k log k)}.
Lemma 3.2.14 Every upward-closed set of markings has finitely many minimal
elements.
Proof. Assume M is upward closed and has infinitely many minimal markings
M1, M2, . . .. By Dickson's Lemma there are i ≠ j such that Mi ≤ Mj. But then
Mj is not minimal. Contradiction.
An important consequence of the lemma is that every upward-closed set can
be finitely represented by its set of minimal elements.
We define the set pre(M) of predecessors of a set of markings M, and the set
pre*(M) of markings from which one can reach some marking of M:
pre(M) = {M′ | M′ −t→ M for some M ∈ M and some t ∈ T}
and further
pre^0(M) = M
pre^{i+1}(M) = pre(pre^i(M)) for every i ≥ 0
pre*(M) = ⋃_{i=0}^{∞} pre^i(M)
Lemma 3.2.16 If M is upward closed, then pre(M) and pre ∗ (M) are also up-
ward closed.
Proof. We first show that pre(M) is upward closed. Let M′ ∈ pre(M). We have
to prove that M′ + M′′ ∈ pre(M) holds for every marking M′′.
Since M′ ∈ pre(M) there are M ∈ M and a transition t such that M′ −t→ M.
By the firing rule we have M′ + M′′ −t→ M + M′′ for every marking M′′. Since
M is upward closed, we have M + M′′ ∈ M. Since M′ + M′′ −t→ M + M′′, we
get M′ + M′′ ∈ pre(M).
Now we prove that pre*(M) is upward closed. By repeated application of the
first part of this lemma we obtain that pre^j(M) is upward closed for every j ≥ 0.
So pre*(M) is a union of upward-closed sets. But it follows immediately from
the definition of an upward-closed set that a union of upward-closed sets is also
upward closed.
Proof. By Lemma 3.2.16, pre*(M) is upward closed. By Lemma 3.2.14, the
set m* of minimal markings of pre*(M) is finite. Therefore, there exists an index
i such that m* ⊆ ⋃_{j=0}^{i} pre^j(M). Since this union is upward closed, we get
pre*(M) ⊆ ⋃_{j=0}^{i} pre^j(M). By the definition of pre*(M), we then have
pre*(M) = ⋃_{j=0}^{i} pre^j(M).
This theorem leads to the algorithm on the left of Figure 3.3. Observe that
the termination of the algorithm follows from Dickson's Lemma, and does not
require knowledge of Petri nets. In particular, the termination argument does not
provide information on the number of iterations of the while loop. However, using
Rackoff's theorem we can obtain an upper bound.
pre*(M) = ⋃_{j=0}^{(2n)^{(k+1)!}} pre^j(M)
Proof. Let M ∈ pre*(M). By the definition of pre*(M), there is a marking
M′ ∈ M such that M −→* M′, and so M′ ≥ Mi for some minimal marking
Mi. By Theorem 3.2.11 and the definition of n, there exists a firing sequence
M −σ→ M′′ ≥ Mi such that |σ| ≤ (2n)^{(k+1)!}. Since M is upward closed, we have
M′′ ∈ M, and so M ∈ pre^j(M) for j = |σ| ≤ (2n)^{(k+1)!}.
The algorithm on the left of Figure 3.3 is not yet directly implementable,
because it manipulates infinite sets. For each operation (union and pre) and for
each test (the tests M = OldM and M0 ∈ M of the while-loop), we have to
supply an implementation that uses only the finite representation of the set, that is,
its set of minimal elements. For the tests this is easy. Given a set M, let min(M)
denote the set of minimal elements of M. We have:
(1) M0 ∈ M iff there exists M′ ∈ min(M) such that M0 ≥ M′.
That is, the minimal markings of M that reverse-enable t are obtained by taking the
minimal markings of M, and computing their join with the marking R[t]. Putting
together (3)-(5) we obtain
(6) If M is upward closed, then

min(M ∪ pre(M)) = min( min(M) ∪ ⋃_{t∈T} pre(R[t] ∧ min(M)) )
⁵ Since M ∧ R[t] ≥ M, if M is upward closed we have M ∧ R[t] ∈ M for every M ∈ min(M).
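Putting (1)–(6) together gives the usual backwards coverability algorithm on sets of minimal markings. Below is a compact Python sketch for nets without arc weights; since the definition of R[t] is not visible in these notes, we assume it is the marking that puts one token on each place of t•, and we write the join ∧ as the component-wise maximum (all names are ours):

    def coverable(M0, M_target, pre, post, places):
        # pre[t], post[t]: sets of places; markings are tuples indexed like `places`
        def vec(S):
            return tuple(1 if s in S else 0 for s in places)
        def join(M, L):                               # component-wise maximum
            return tuple(max(a, b) for a, b in zip(M, L))
        def pre_t(M, t):                              # backward firing of t at M >= R[t]
            return tuple(m - r + p for m, r, p in zip(M, vec(post[t]), vec(pre[t])))
        def minimize(ms):
            return {M for M in ms
                    if not any(L != M and all(a <= b for a, b in zip(L, M)) for L in ms)}
        basis = {tuple(M_target)}                     # minimal elements of the current set
        while True:
            new = {pre_t(join(M, vec(post[t])), t) for M in basis for t in pre}
            updated = minimize(basis | new)
            if updated == basis:
                break
            basis = updated
        # M_target is coverable from M0 iff M0 lies in the computed upward-closed set
        return any(all(a <= b for a, b in zip(L, M0)) for L in basis)

Termination of the while loop is exactly the statement that the chain of upward-closed sets stabilizes.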
• The subtree order ⪯ on the set of finite trees over a finite alphabet Σ.
We say that t1 ⪯ t2 if there is an injective mapping from the nodes of tree
t1 into the nodes of t2 that preserves reachability: n′ is reachable from n
in t1 iff the image of n′ is reachable from the image of n in t2. Kruskal's
lemma states that every infinite sequence of trees contains an infinite chain
with respect to the subtree order.
Definition 3.2.20 Let A be a set and let ⪯ ⊆ A × A be a wqo. A set X ⊆ A is upward
closed if x ∈ X and x ⪯ y implies y ∈ X for every x, y ∈ A. In particular, given
x ∈ A, the set {y ∈ A | x ⪯ y} is upward-closed.
A relation → ⊆ A × A is monotonic if for every x → y and every x′ ⪰ x there
is y′ ⪰ y such that x′ → y′.
Given X ⊆ A, we define
pre(X) = {y ∈ A | y → x and x ∈ X}
Further we define:
pre^0(X) = X
pre^{i+1}(X) = pre(pre^i(X)) for every i ≥ 0
pre*(X) = ⋃_{i=0}^{∞} pre^i(X)
Observe that a semilinear set can be finitely represented as a set of pairs {(r1 , P1 ), . . . , (rn , Pn )}
giving the roots and periods of its linear sets.
Theorem 3.2.24 [Leroux 2012] Let (N, M0 ) be a Petri net and let M1 be a mark-
ing of N . If M1 is not reachable from M0 , then there exists a semilinear set M of
markings of N such that
(a) M0 ∈ M,
(b) if M ∈ M and M −t→ M′ for some transition t of N, then M′ ∈ M, and
(c) M1 ∉ M.
has no solution. Finally, checking (b) is more complicated, but reduces to checking
validity of a formula of a theory called Presburger arithmetic for which decision
procedures exist.
Now, Theorem 3.2.24 can be used to give an algorithm for Reachability consisting
of two semi-decision procedures, one that explores the reachability graph
breadth-first and stops if it finds the goal marking M1, and another one that
enumerates all semilinear sets, and stops if one of them satisfies (a)-(c). The two
procedures run in parallel, and, since one of the two is bound to terminate, together
they yield a decision procedure for Reachability.
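The "two semi-decision procedures in parallel" construction can be phrased with two generators that are advanced alternately; whichever one finds its certificate first decides the instance. A schematic Python sketch (not tied to any concrete enumeration of semilinear sets):

    def decide_reachability(forward_search, invariant_search):
        # both arguments are generators that yield None while still searching
        # and yield a certificate once they have found one; exactly one succeeds
        while True:
            if next(forward_search) is not None:
                return True     # M1 was reached by the forward exploration
            if next(invariant_search) is not None:
                return False    # a semilinear set satisfying (a)-(c) excludes M1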
Deadlock-freedom
We reduce Deadlock-freedom to Reachability. We proceed in two steps. First,
we reduce Deadlock-freedom to an auxiliary problem P, and then we reduce P to
reachability.
𝒮 = {R ⊆ S | ∀t ∈ T : •t ∩ R ≠ ∅}
that is, an element of 𝒮 contains, for every transition t, at least one of the input
places of t. We have
Suppose now that there is an algorithm that decides P. We can then decide Deadlock-
freedom as follows. For every R ∈ 𝒮 we use the algorithm for P to decide if some
reachable marking M satisfies M(s) = 0 for every s ∈ R. It follows from (2) that
(N, M0) is deadlock-free if the answer is negative in all cases. Since, by (1), we
only have to solve a finite number of instances of P, Deadlock-freedom is decidable.
Proof. Let (N, M0) be a Petri net where N = (S, T, F), and let R be a set
of places of N. We construct a new Petri net (N′, M0′) by adding new places,
transitions, and arcs to (N, M0). We proceed in two steps (see Figure 3.4). In the
first step, we add
• a new transition ts and arcs (s, ts), (r0, ts), (ts, r0) for every place s ∈ S \ R.
[Figure 3.4: the net (N′, M0′): new places s0 and r0, a new transition t0, and a garbage-collector transition ts for every place s ∈ S \ R]
Intuitively, these transitions are “garbage collectors”. The “garbage” are the
tokens in the places of S \ R. If r0 becomes marked, then the garbage collectors
can remove all tokens from these places. This concludes the definition of (N′, M0′).
Let Mr0 be the marking of N′ that puts one token on r0 and no tokens elsewhere.
We have
(1) If some reachable marking M of (N, M0) puts no tokens in R, then Mr0 is
a reachable marking of (N′, M0′).
To reach Mr0, we first fire a sequence of transitions of T leading from M0
to M, then we fire t0, and finally we fire the transitions {ts | s ∈ S \ R} until all
places of S \ R are empty.
Liveness
Liveness can also be reduced to Reachability, but the proof is more complex. We
sketch the reduction for the problem whether a given transition t of a Petri net
(N, M0 ) is live.
Let Et be the set of markings of N that enable t. Clearly, Et is upward closed.
By Lemma 3.2.16, the set pre*(Et) is also upward closed. Now, pre*(Et) is the
set of markings of N that enable some firing sequence ending with t. Let Dt be
the complement of pre*(Et), that is, the set of markings from which t cannot be
enabled anymore. We have: t is live in (N, M0) iff [M0⟩ ∩ Dt = ∅.
If Dt is a finite set of markings, and we are able to compute it, then we are
done: we have reduced the liveness problem to a finite number of instances of
Reachability. However, the set Dt may be infinite, and we do not yet know how
to compute it. We show how to deal with these problems.
Every upward-closed set of markings is semilinear (exercise). Using the back-
wards reachability algorithm, we can compute the finite set min(pre ∗ (Et )), and
from it we can compute a representation of pre ∗ (Et ) as a semilinear set. Now we
use a powerful result: the complement of a semilinear set is also semilinear; more-
over, there is an algorithm that, given a representation of a semilinear set X ⊆ N^k,
computes a representation of the complement N^k \ X. So we are left with the problem:
lem: given a Petri net (N, M0 ) and a semilinear set X, decide if some marking of
X is reachable from M0 .
This problem can be reduced to Reachability as follows (brief sketch). We
construct a Petri net that first simulates (N, M0 ), and then transfers control to an-
other Petri net which nondeterministically generates a marking of X on “copies”
of the places of N . This second net then transfers control to a third, whose tran-
sitions remove one token from a place of N and a token from its “copy”. If some
marking of X is reachable, then the first net can produce it, the second net can
produce the same marking, and the third net can then remove all tokens from the first
and second nets, reaching the empty marking. Conversely, if the net consisting of
the three nets together can reach the empty marking, then (N, M0 ) can reach some
marking of X.
3.2.4 Complexity
Unfortunately, all the problems we have seen so far have very high complexity.
We prove that all of them are EXPSPACE-hard. That is, the memory needed
by any algorithm solving one of these problems grows at least exponentially in
the size of the input Petri net. Rackoff’s algorithm shows that Coverability is
EXPSPACE-complete, that is, that exponentially growing memory suffices. The
same can be proved for Boundedness. For a long time it was conjectured that
Deadlock-freedom, Liveness, and Reachability were EXPSPACE-complete as
well. However, the conjecture was disproved in 2019: these three problems have
non-elementary complexity. To explain what this means, define inductively the
functions exp_k(x) as follows:
• exp_0(x) = x;
• exp_{k+1}(x) = 2^{exp_k(x)}.
The complexity class k-EXPSPACE contains the problems that can be solved by
a Turing machine using at most exp k (n) space for inputs of length n. The class of
elementary problems is defined as
⋃_{k=0}^{∞} k-EXPSPACE
A(m, n) = n + 1                        if m = 0
A(m, n) = A(m − 1, 1)                  if m > 0 and n = 0
A(m, n) = A(m − 1, A(m, n − 1))        if m > 0 and n > 0
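The definition translates directly into code; the naive recursion below is already hopeless for m = 4, since A(4, 2) = 2^65536 − 3 has 19729 decimal digits, which gives an idea of how fast such functions grow:

    def ackermann(m, n):
        # direct transcription of the recursive definition; feasible only for tiny m
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    # ackermann(2, 3) == 9 and ackermann(3, 3) == 61; ackermann(4, 2) is out of reach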
In particular, all the questions we asked about 1-safe Petri nets can be reformu-
lated for Petri nets, and turn out to have at least this space complexity. As in the
case of 1-safe Petri nets, this is a consequence of one single fundamental fact:
l: x := x + 1
l: x := x − 1
l: goto l1 unconditional jump
l: if x = 0 then goto l1 conditional jump
else goto l2
l: halt
Now, it suffices to show that a 2^{2^n}-bounded counter program of size O(n) can
be simulated by a Petri net of size O(n²). This is the goal of the rest of this section.
Since a direct description of the sets of places and transitions of the simulating
net would be very confusing, we introduce a net programming notation with a very
simple net semantics. It is very easy to obtain the net corresponding to a program,
and execution of a command corresponds exactly to the firing of a transition. So we
can and will look at the programming notation as a compact description language
for Petri nets.
A net program is rather similar to a counter program, but does not have the
possibility to branch on zero; it can only branch nondeterministically. However, it
has the possibility of transferring control to a subroutine. The basic commands are
as follows:
l: x := x + 1
l: x := x − 1
l: goto l1 unconditional jump
l: goto l1 or goto l2 nondeterministic jump
l: gosub l1 subroutine call
l: return end of subroutine
l: halt
[Figure: net semantics of the basic commands of net programs]
requires quite a bit of low-level programming. But the reward is worth the hacking
effort.
The notion of simulation is not as strong as in the case of 1-safe Petri nets.
In particular, net programs are nondeterministic, while counter programs are deter-
ministic. A net program N simulates a counter program C if the following property
holds: C halts (executes the command halt) if and only if some computation of N
halts (other computations may fail).
Each variable x of N (be it a variable from C or an auxiliary variable) has an
auxiliary complement variable x̄. N takes care of setting x̄ = 2^{2^n} at the beginning
of the program. We call the code that takes care of this Ninit(C).⁷ The rest of
N(C), called Nsim(C), simulates C and takes care of keeping the invariant
x̄ = 2^{2^n} − x.
We design Nsim(C) first. This program is obtained through replacement of
each command of C by an adequate net program. Commands of the form x :=
x + 1 (x := x − 1) are replaced by the net program x := x + 1; x̄ := x̄ − 1
(x := x − 1; x̄ := x̄ + 1).
⁷ Recall that by definition all variables of N have initial value 0. Therefore, if we need x̄ = 2^{2^n}
[Figure: net for a program in which the commands at labels 1 and 2 call a subroutine at label 4]
because the values of x and x̄ are swapped 0 times if x > 0 or twice if x = 0, and
so Testn has no side effects.
The key to the design of Test′n lies in the following observation: since x never
exceeds 2^{2^n}, testing x = 0 can be replaced by nondeterministically choosing
between decreasing x by 1 (guessing x > 0) and decreasing x̄ by 2^{2^n} (guessing x = 0).
If we choose wrongly, that is, if for instance x = 0 holds and we try to decrease
x by 1, then the program fails; this is not a problem, because we only have to
guarantee that the program may (not must!) terminate, and that if it terminates then
it provides the right answer.
Decreasing x by 1 is easy. Decreasing x̄ by 2^{2^n} is the difficult part. We leave it
for a routine Decn to be designed, which must satisfy the following specification:⁸
⁸ Executions leading to NONZERO must still be free of side-effects.
If the initial value of s is smaller than 2^{2^n}, then every execution of
Decn fails. If the value of s is greater than or equal to 2^{2^n}, then all
executions terminating with a return command have the same effect
as s := s − 2^{2^n}; s̄ := s̄ + 2^{2^n}; in particular, there are no side-effects.
All other executions fail.
It is easy to see that Test′n meets its specification: if x > 0, then we may choose
the nonzero branch and reach NONZERO. If x = 0, then x̄ = 2^{2^n}. After looping
2^{2^n} times on loop the values of x, x̄ and sn, s̄n have been swapped. The values of
sn and s̄n are swapped again by the subroutine Decn, and then the program moves
to ZERO. Moreover, if x = 0 then no execution reaches the NONZERO branch,
because the program fails at x := x − 1. If x > 0, then no execution reaches the
ZERO branch, because sn cannot reach the value 2^{2^n}, and so Decn fails.
The next step is to design Decn. We proceed by induction on n, starting with
Dec0. This is easy, because it suffices to decrease s by 2^{2^0} = 2. So we can take
Now we design Deci+1 under the assumption that Deci is already known. The
definition of Deci+1 contains two copies of a program Test′i, called with different
parameters. We define this program by substituting i for n everywhere in Test′n.
Test′i calls the routine Deci at the address deci. Notice that this is correct, because
we are assuming that the routine Deci has already been defined.
The key to the design of Deci+1 is that decreasing by 2^{2^{i+1}} amounts to
decreasing 2^{2^i} times by 2^{2^i}, because

2^{2^{i+1}} = (2^{2^i})² = 2^{2^i} · 2^{2^i}

So decreasing by 2^{2^{i+1}} can be implemented by two nested loops, each of which is
executed 2^{2^i} times, such that the body of the inner loop decreases s by 1. The loop
variables have initial values 2^{2^i}, and termination of the loops is detected by testing
the loop variables for 0. This is done by the Test′i programs.
Observe also that both instances of Test′i call the same routine at the same label.
It could seem that Deci+1 swaps the values of yi, ȳi and zi, z̄i, which would
be a side-effect contrary to the specification. But this is not the case. These swaps
are compensated by the side-effects of the ZERO branches of the Test′i programs!
Notice that these branches are now the inner exit and outer exit branches.
When the program leaves the inner loop, Test′i swaps the values of zi and z̄i. When
the program leaves the outer loop, Test′i swaps the values of yi and ȳi.
This concludes the description of the program Testn, and so the description of
the program Nsim(C). It remains to design Ninit(C). Let us first make a list of the
initializations that have to be carried out. Nsim(C) contains
• the variables x1, . . . , xl of C with initial value 0; their complementary variables
x̄1, . . . , x̄l with initial value 2^{2^n};
Ninit (C) uses only the variables in the list above; every successful
execution leads to a state in which the variables have the correct initial
values.
Ninit (C) calls programs Inci (v1 , . . . , vm ) with the following specification:
These programs are defined by induction on i, and are very similar to the family of
Deci programs. We start with Inc0 :
It is easy to see that these programs satisfy their specifications. Now, let us
consider Ninit(C). Apparently, we face a problem: in order to initialize the variables
v1, . . . , vm to 2^{2^{i+1}} the variables yi and zi must have already been initialized
to 2^{2^i}! Fortunately, we find a solution by just carrying out the initializations in the
right order:
This concludes the description of N (C), and it is now time to analyze its size.
Consider Nsim (C) first. It contains two assignments for each assignment of C,
an unconditional jump for each unconditional jump in C, and a different instance
of Testk for each conditional jump. Moreover, it contains (one single instance of)
the routines Decn , Decn−1 , . . . , Dec0 (notice that Testn calls Decn , which calls
Decn−1 , etc.). Both Testn and the routines have constant length. So the number of
commands of Nsim (C) is O(n).
Ninit (C) contains (one single instance of) the programs Inci for 1 ≤ i ≤ n. The
programs Inc1 , . . . , Incn−1 have constant size, since they initialize a constant num-
ber of variables. The number of commands of Incn is O(n), since it initializes
O(n) variables.
So we have proved that N (C) contains O(n) commands. It follows that its cor-
responding Petri net has size O(n^2), which concludes our presentation of Lipton's
result.
Chapter 4
Semi-decision procedures
1 In practice we often use the Simplex algorithm, which has exponential worst-case complexity, but is very efficient for most instances.
Example 4.2.2 [Figure: a Petri net with places s1, . . . , s5 and transitions t1, . . . , t4.]
Its incidence matrix is

          t1   t2   t3   t4
    s1    −1    0    1    0
    s2    −1    0    0    1
    s3     1   −1    0    0
    s4     0    1   −1    0
    s5     0    1    0   −1
Example 4.2.5 In the previous net we have (11000) --t1 t2 t3--> (10001), and

    ( 1 )   ( 1 )   ( −1   0   1   0 )   ( 1 )
    ( 0 )   ( 1 )   ( −1   0   0   1 )   ( 1 )
    ( 0 ) = ( 0 ) + (  1  −1   0   0 ) · ( 1 )
    ( 0 )   ( 0 )   (  0   1  −1   0 )   ( 0 )
    ( 1 )   ( 0 )   (  0   1   0  −1 )
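The computation of Example 4.2.5 can be replayed mechanically. The following Python sketch assumes numpy is available and uses the incidence matrix of Example 4.2.2 as reconstructed above; it is illustrative only.

import numpy as np

# Incidence matrix of the net of Example 4.2.2 (rows s1..s5, columns t1..t4)
N = np.array([[-1,  0,  1,  0],
              [-1,  0,  0,  1],
              [ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [ 0,  1,  0, -1]])

M0 = np.array([1, 1, 0, 0, 0])   # the marking (11000)
X  = np.array([1, 1, 1, 0])      # Parikh vector of the sequence t1 t2 t3

print(M0 + N @ X)                # [1 0 0 0 1], i.e. the marking (10001)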
Proof. Let n be the optimal solution of the problem. Then n ≥ Σ_{s∈S} M(s) holds
for every marking M for which there exists a vector X such that M = M0 + N · X.
By Lemma 4.2.4 we have n ≥ Σ_{s∈S} M(s) for every reachable marking M , and so
n ≥ M(s) for every reachable marking M and every place s.
Exercise: Change the algorithm so that it checks whether a given place is bounded.
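The bound used in the proof can be computed with any LP solver. The sketch below uses scipy.optimize.linprog; the function name and the encoding of the Marking Equation are ours, not the text's, and a finite optimum only gives a sufficient condition for boundedness.

import numpy as np
from scipy.optimize import linprog

def marking_equation_bound(N, M0):
    # Maximize sum(M) subject to M = M0 + N·X, M >= 0, X >= 0 (over the rationals).
    # A finite optimum bounds the token count of every reachable marking.
    places, transitions = N.shape
    c = np.concatenate([-np.ones(places), np.zeros(transitions)])   # maximize sum(M)
    A_eq = np.hstack([np.eye(places), -N])                           # M - N·X = M0
    res = linprog(c, A_eq=A_eq, b_eq=M0, bounds=(0, None))
    return -res.fun if res.status == 0 else None    # None: the LP is unbounded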
M = M0 + N · X
Σ_{s∈•t} M(s) < |•t|   for every transition t.
Remark 4.2.10 The converses of these propositions do not hold (that is why they
are semi-algorithms!). Counterexamples are:
• To Proposition 4.2.7:
  [Figure: a small net with places s1, s2 and transition t1, together with the corresponding marking/solution vector.]
• To Proposition 4.2.8:
Peterson’s algorithm: the marking (p4 , q4 , m1 = true, m2 = true, hold =
1) is not reachable, but the Marking Equation has a solution (Exercise: find
a smaller example).
• To Proposition 4.2.9:
Peterson’s algorithm with an additional transition t satisfying • t = {p4 , q4 }
and t• = ∅. The Petri net is deadlock free, but the Marking Equation has a
solution for (m1 = true, m2 = true, hold = 1) that satisfies the conditions
of Proposition 4.2.9 (Exercise: find a smaller example).
The value of the expression I · M is therefore the same for every reachable
marking M , and so it constitutes an invariant of (N, M0 ).
Figure 4.1: [a net with places s1, s2, s3, s4 and transitions t1, t2, t3]
Proposition 4.3.4 The S-invariants of a net form a vector space over the real num-
bers.
This definition of S-invariant is very suitable for machines, but not for humans,
who can only solve very small systems of equations by hand. There is an equiva-
lent definition which allows people to decide, even for nets with several dozens of
places, if a given vector is an S-invariant.
Proposition 4.3.5 I is an S-invariant of N = (S, T, F ) iff Σ_{s∈•t} I(s) = Σ_{s∈t•} I(s)
for every t ∈ T .
Figure 4.2: [a net with places s1, s2, s3, s4 and transitions t1, t2, t3]
Proof. I · N = 0 is equivalent to I · t = 0 for every transition t. So for every
transition t we have: I · t = Σ_{s∈t•} I(s) − Σ_{s∈•t} I(s).
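Proposition 4.3.5 gives two equivalent checks. The following sketch (illustrative names; the net is assumed to be given by its incidence matrix) verifies the matrix condition I · N = 0.

import numpy as np

def is_s_invariant(I, N):
    # I is an S-invariant iff the row vector I annihilates the incidence matrix N.
    return bool(np.all(np.asarray(I) @ np.asarray(N) == 0))

# For the net of Example 4.2.2 (matrix as reconstructed above):
N = np.array([[-1, 0, 1, 0], [-1, 0, 0, 1], [1, -1, 0, 0], [0, 1, -1, 0], [0, 1, 0, -1]])
print(is_s_invariant([1, 1, 2, 1, 1], N))   # True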
With the help of S-invariants we can give sufficient conditions for boundedness
and necessary conditions for liveness and for the reachability of a marking.
If M is reachable from M0 , then M = M0 + N · X for some X, and hence
I · M = I · M0 + (I · N) · X = I · M0 for every S-invariant I.
We also have the following consequences:

    M is reachable from L
        ⇓  (the converse implication does not hold)
    M = L + N · X has a solution X ∈ N^|T|
        ⇓  (the converse implication does not hold)
    M = L + N · X has a solution X ∈ Q^|T|
        ⇕
    M ∼ L
4.3.2 T-invariants
Definition 4.3.13 (T-invariants)
Let N = (S, T, F ) be a net. A vector J : T → Q is a T-invariant of N if N·J = 0.
Proposition 4.3.14 J is a T-invariant of N = (S, T, F ) iff Σ_{t∈•s} J(t) = Σ_{t∈s•} J(t)
for every s ∈ S.
Example 4.3.16 We compute the T-invariants of the net of Figure 4.1 as the solu-
tions of the system of equations

    (  1  −1   0 )   ( j1 )   ( 0 )
    (  0  −1   1 ) · ( j2 ) = ( 0 )
    ( −1   1   0 )   ( j3 )   ( 0 )
    (  0   1  −1 )            ( 0 )
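The system of Example 4.3.16 can be solved with any computer algebra tool. A sketch using sympy, with the matrix as reconstructed above:

import sympy as sp

N = sp.Matrix([[ 1, -1,  0],
               [ 0, -1,  1],
               [-1,  1,  0],
               [ 0,  1, -1]])

# T-invariants are the vectors J with N·J = 0, i.e. the null space of N.
print(N.nullspace())   # [Matrix([[1], [1], [1]])]: every T-invariant is a multiple of (1, 1, 1)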
Figure 4.3: [a net with places s1, s2, s3, s4 and transitions t1, t2, t3, t4, t5]
and
{s1, s2}• = s1• ∪ s2• = {t1} ∪ {t2, t3} = {t1, t2, t3}
Proof. Since • R ⊆ R• , the transitions that can mark R can only occur at markings
that already mark R.
Loosely speaking, a siphon that becomes unmarked (or “empty”) remains un-
marked forever.
We can easily check in polynomial time if this condition holds. For this we first
observe that, if R1 and R2 are siphons of N , then so is R1 ∪R2 (exercise). It follows
that there exists a unique largest siphon Q0 unmarked at M0 (more precisely, R ⊆
Q0 for every siphon R such that M0 (R) = 0). We claim that the condition holds if
and only if M (Q0 ) = 0.
• If the condition holds, then, since M0 (Q0 ) = 0 by definition, we get M (Q0 ) =
0.
• If the condition does not hold, then there is a siphon R such that M0 (R) = 0
and M (R) > 0. Since R ⊆ Q0 , we also have M (Q0 ) > 0.
The siphon Q0 can be determined with the help of the following algorithm,
which computes the largest siphon Q contained in a given set R of places—it suf-
fices then to choose R as the set of places unmarked at M0 .
begin
    Q := R;
    while there are s ∈ Q and t ∈ •s such that t ∉ Q• do
        Q := Q \ {s}
    endwhile
end
Exercise: Show that the algorithm is correct. That is, prove that the algorithm
terminates, and that after termination Q is the largest siphon contained in R.
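A direct implementation of the algorithm could look as follows. The sketch assumes the net is given by two dictionaries pre and post mapping each transition to its sets of input and output places; these names and the representation are ours.

def largest_siphon(pre, post, R):
    # Largest siphon contained in the place set R: repeatedly remove a place s
    # that has an input transition t (s in post[t]) taking no token from Q.
    Q = set(R)
    changed = True
    while changed:
        changed = False
        for s in list(Q):
            for t in post:
                if s in post[t] and not (pre[t] & Q):   # t in •s but t not in Q•
                    Q.discard(s)
                    changed = True
                    break
    return Q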
4.4.2 Traps
Definition 4.4.7 (Trap)
Let N = (S, T, F ) be a net. A set R ⊆ S of places is a trap if R• ⊆ •R. A trap
R is proper if R ≠ ∅.
So, loosely speaking, marked traps stay marked. Notice, however, that this
does not mean that the number of tokens of a trap cannot decrease. The number
can go up or down, just not become 0.
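The dual of the siphon algorithm computes the largest trap contained in a given set of places. A sketch, with the same (assumed) representation of the net as before:

def largest_trap(pre, post, R):
    # Largest trap contained in R: remove a place s with an output transition t
    # (s in pre[t]) that puts no token back into Q.
    Q = set(R)
    changed = True
    while changed:
        changed = False
        for s in list(Q):
            for t in pre:
                if s in pre[t] and not (post[t] & Q):   # t in s• but t not in •Q
                    Q.discard(s)
                    changed = True
                    break
    return Q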
M (p4 ) ≥ 1 ∧ M (q4 ) ≥ 1
⇒ {(2), (3)}
⇒ {(1)}
In the three sections of this chapter we study three classes of Petri nets: S-systems,
T-systems, and free-choice systems. The sections have a similar structure. After the
definition of the class, we introduce three theorems: the Liveness, Boundedness,
and Reachability Theorem. The Liveness Theorem characterizes the live Petri nets
in the class. The Boundedness Theorem characterizes the live and bounded sys-
tems. The Reachability Theorem characterizes the reachable markings of the live
and bounded systems. The proof of the theorems requires some results about the
structure of S- and T-invariants of the class, which we also present.
The theorems immediately yield decision procedures for Liveness, Bounded-
ness and Reachability whose complexity is much lower than those for general
Petri nets.
At the end of each section we present a final theorem, the Shortest Path Theorem,
which gives an upper bound for the length of the shortest firing sequence leading
to a given reachable marking.
The reader may ask why we study boundedness only for live Petri nets, and reach-
ability only for live and bounded Petri nets. A first reason is that, in many applica-
tion areas, a Petri net model of a correct system must typically be live and bounded,
and so, when one of these properties fails, it does not make much sense to further
analyze the model. The second reason is that, interestingly, the general characteri-
zation of the bounded systems or the reachable markings is more complicated and
less elegant than the corresponding characterization for live or live and bounded
Petri nets.
The proofs of the theorems are very easy for S-systems, a bit more involved for
T-systems, and relatively complex for free-choice systems. For this reason we just
sketch the proofs for S-systems, explain the proofs in some detail for T-systems,
and omit them for free-choice systems.
5.1 S-Systems
Definition 5.1.1 (S-nets, S-systems) A net N = (S, T, F ) is an S-net if |•t| = 1 =
|t•| for every transition t ∈ T . A Petri net (N, M0 ) is an S-system if N is an
S-net.
Proof. Every transition consumes one token and produces one token.
Proof. (Sketch.)
(⇒): If N is not strongly connected, then there is an arc (s, t) such that N has no
path from t to s. For every marked place s′ such that there is a path from s′ to s,
we fire the transitions of the path to bring the tokens in s′ to s, and then fire t to
empty s. We have then reached a marking from which no tokens can “travel” back
to s, and so a marking from which t cannot occur again. So (N, M0 ) is not live.
If M0 marks no places, then no transition can occur, and (N, M0 ) is not live.
(⇐): If N is strongly connected and M0 puts at least one token somewhere,
then the token can freely move, reach any other place, and so enable any transition
again.
Proof. Trivial.
Theorem 5.1.5 [Reachability Theorem] Let (N, M0 ) be a live S-system and let M
be a marking of N . M is reachable from M0 iff M0 (S) = M (S).
Proof.
Each transition t ∈ T has exactly one input place st and one output place s′t. So
we have

    Σ_{s∈•t} I(s) = I(st)   and   Σ_{s∈t•} I(s) = I(s′t)

and therefore

    I is an S-invariant
    ⇔  {Proposition 4.3.5 (alternative definition of S-invariant)}
    ∀t ∈ T : I(st) = I(s′t)
    ⇔  {N is connected}
    ∀s1, s2 ∈ S : I(s1) = I(s2)
    ⇔  {}
    ∃x ∈ Q ∀s ∈ S : I(s) = x.
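By the Reachability Theorem, reachability in a live S-system reduces to comparing total token counts. A trivial sketch, with markings represented as dictionaries (representation assumed):

def s_system_reachable(M0, M):
    # Theorem 5.1.5: in a live S-system, M is reachable from M0 iff the
    # total number of tokens is the same under both markings.
    return sum(M0.values()) == sum(M.values())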
5.2 T-systems
Definition 5.2.1 (T-nets, T-systems) A net N = (S, T, F ) is a T-net if |•s| = 1 =
|s•| for every place s ∈ S. A system (N, M0 ) is a T-system if N is a T-net.
5.2.1 Liveness
Theorem 5.2.3 [Liveness Theorem] A T-system (N, M0 ) is live iff M0 (γ) > 0 for
every circuit γ of N .
Proof.
(⇒) Let γ be a circuit with M0 (γ) = 0. By Proposition 5.2.2 we have M (γ) =
0 for every reachable marking M . So no transition of γ can ever occur.
(⇐) Let t be an arbitrary transition and let M be a reachable marking. We
show that some marking reachable from M enables t. Let SM be the set of places
s of N satisfying the following property: there is a path from s to t that contains
no place marked at M . We proceed by induction on |SM |. Basis: |SM | = 0. Then
M (s) > 0 for every place s ∈ • t, and so M enables t.
Step: |SM | > 0. By the fundamental property of T-systems, every circuit of N is
marked at M . So there is a path Π such that:
(1) Π leads to t;
(2) Π contains no place marked at M ;
(3) Π has maximal length (that is, no path longer than Π satisfies (1) and (2)).
Let u be the first element of Π. By (3) u is a transition and M marks all places of
•u. So M enables u. Moreover, we have u ≠ t because M does not enable t. Let
M --u--> M′. We show that SM′ ⊂ SM , and so that |SM′| < |SM |.
1. SM′ ⊆ SM
Let s ∈ SM′. We show s ∈ SM . There is a path Π′ = s . . . t containing
no place marked at M′. Assume Π′ contains a place r marked at M . Since
M′(r) = 0 and M --u--> M′ we have u ∈ r• and so {u} = r•. So u is
the successor of r in Π′. Since u ≠ t, M′ marks the successor of u in Π′,
contradicting the definition of Π′.
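The Liveness Theorem yields a simple polynomial check: every circuit is marked at M0 iff the net restricted to the transitions and the unmarked places is acyclic. A sketch follows; the representation of the net as node sets and an arc list is an assumption of ours.

def t_system_is_live(places, transitions, arcs, M0):
    # Liveness Theorem for T-systems: live iff every circuit contains a place
    # marked at M0, i.e. iff no cycle survives after deleting the marked places.
    unmarked = {s for s in places if M0.get(s, 0) == 0}
    nodes = unmarked | set(transitions)
    graph = {n: [] for n in nodes}
    for x, y in arcs:
        if x in nodes and y in nodes:
            graph[x].append(y)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in nodes}
    def has_cycle(n):                      # depth-first search for a cycle
        colour[n] = GREY
        for m in graph[n]:
            if colour[m] == GREY or (colour[m] == WHITE and has_cycle(m)):
                return True
        colour[n] = BLACK
        return False
    return not any(colour[n] == WHITE and has_cycle(n) for n in nodes)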
5.2.2 Boundedness
Theorem 5.2.4 [Boundedness Theorem] A place s of a live T-system (N, M0 ) is
b-bounded iff it belongs to some circuit γ such that M0 (γ) ≤ b.
We claim that (N, L) is not live. Otherwise there would be a firing sequence
L --σ--> L′ such that L′(s) > 0, and by the Monotonicity Lemma we would have
M --σ--> M′ for some marking M′ satisfying M′(s) = L′(s) + M (s) > M (s),
contradicting the maximality of M (s). By the Liveness Theorem some circuit γ is
unmarked at L but marked at M . Since L and M only differ in the place s, the cir-
cuit γ contains s. Further, s is the only place of γ marked at M . So M (γ) = M (s),
and since M (s) ≤ b we get M (γ) ≤ b.
5.2.3 Reachability
We need to have a closer look at the T-invariants of T-systems.
M = M0 + N · X                  (5.1)
N · (X + λJ) = N · X
for every place s, where {t1 } = • s and {t2 } = s• . Both M (s) and M0 (s)
are integers. By the definition of Y we get
SY = {s ∈ •⟨Y⟩ | M0 (s) = 0}
M1 + N(Y − t) = M
where
|Y − t| = |Y | − 1 < |Y |
By induction hypothesis we have M1 --*--> M . Since M0 --t--> M1 --*--> M ,
we get M0 --*--> M .
Theorem 5.2.8 Let N be a strongly connected T-net. For every marking M0 the
following statements are equivalent:
Proof. (1) ⇒ (2) ⇒ (3) follow immediately from the definitions. We show (3) ⇒
(1).
Let M0 --σ--> be an infinite firing sequence. We claim that every transition of N
occurs in σ. Since N is strongly connected, (N, M0 ) is bounded (Theorem 5.2.4).
Let σ = t1 t2 t3 . . ., and M0 --t1--> M1 --t2--> M2 --t3--> · · · . Since (N, M0 ) is bounded,
there are indices i and j with i < j such that Mi = Mj . Let σij be the subsequence
of σ containing the transitions between Mi and Mj . By the fundamental property
of T-invariants (Proposition 4.3.15) the Parikh vector of σij is a T-invariant. By Proposition 5.2.6 there
Proof. Since N is strongly connected, any marking that puts tokens on all places of
N is live, because it marks all circuits (Liveness Theorem), and bounded, because
all markings of N are (Corollary 5.2.5).
Let (N, M ) be live and bounded, but not 1-bounded. We construct another live
marking L of N satisfying the following two conditions:
By Theorem 5.2.4, at least one place of N has a smaller bound under L than under
M . Iterating this construction we obtain a 1-bounded marking of N .
Let s be a non-1-bounded place of (N, M ). Some reachable marking M′ sat-
isfies M′(s) ≥ 2. Let L be the marking that puts exactly one token in s, and as
many tokens as M elsewhere.
Since M is live, it marks all circuits of N . By construction L also marks all
circuits, and so L is also live. Condition (1) is a consequence of the definition of L.
Condition (2) holds for all circuits containing s (and there is at least one, because
N is strongly connected).
Lemma 5.2.11 Let (N, M0 ) be a T-system and let M0 --σ1 σ2 t--> for some sequences
σ1 , σ2 ∈ T* and some t ∈ T such that
• t ∉ A(σ1 ), and
• A(σ2 ) ⊆ A(σ1 ).
Then M0 --σ1 t σ2-->.
Proof. We only prove the case b = 1. The general case requires a slight general-
ization of Lemmas 5.2.11 and 5.2.12.
By repeated application of Lemma 5.2.12 there exists an occurrence sequence
M0 --σ1 σ2 · · · σn--> M such that
• σi ≠ ε for every 1 ≤ i ≤ n,
Figure 5.1
This definition is very concise and moreover symmetric with respect to places
and transitions. If the reader finds it cryptic, the following equivalent definitions
may help.
(t1 ≠ t2 ∧ •t1 ∩ •t2 ≠ ∅) ⇒ •t1 = •t2
Proof. Exercise.
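Checking the free-choice property with this characterization is straightforward. A sketch, again assuming pre[t] is the set of input places of transition t (an assumed representation):

def is_free_choice(pre):
    # (t1 != t2 and •t1 ∩ •t2 != ∅) implies •t1 = •t2
    ts = list(pre)
    return all(pre[t1] == pre[t2]
               for i, t1 in enumerate(ts)
               for t2 in ts[i + 1:]
               if pre[t1] & pre[t2])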
5.3.1 Liveness
We showed in the last chapter that a Petri net in which every siphon contains an ini-
tially marked trap is deadlock-free, but the converse does not hold. For free-choice
systems we obtain Commoner’s Theorem, a much stronger result characterizing
liveness.
We prove that if (N, M0 ) is not live, then some proper siphon of N does not
contain any trap marked at M0 . Let T be the set of transitions of N . Since (N, M0 )
is not live, by the definitions above there is a marking M reachable from M0
such that T = DM ∪ LM , that is, every transition is either live or dead at M , and
DM ≠ ∅.
We claim: for every transition t ∈ DM there exists st ∈ •t such that M (st ) = 0
and every t′ ∈ •st is dead at M .
Let St be the set of input places of t not marked at M . Since t ∈ DM , the set
St is nonempty. Since N is free-choice, for every s ∈ St every transition of s• is
dead at M (if some transition of s• could still occur, then by the free-choice property
so could t). So along any occurrence sequence starting at M the number of tokens
in each place of St does not decrease. Therefore, if every place of St had some
input transition live at M , we could reach a marking that marks all places of St .
But such a marking enables t, contradicting that t is dead at M . So at least one
place st ∈ St has only dead input transitions, and M (st ) = 0, which proves the claim.
Let now R = {st | t ∈ DM }. By the claim, and since DM ≠ ∅, the set R
is a siphon unmarked at M . If R contained a trap marked at M0 then, since
[Figure 5.3: a free-choice net with places s1, . . . , s8.]
marked traps remain marked, R would be marked at M . So R does not contain any
trap marked at M0 .
A siphon is minimal if it does not properly contain any proper siphon. Clearly,
the Liveness Theorem still holds if we replace “siphon” by “minimal siphon”.
The net of Figure 5.3 has four minimal siphons: R1 = {s1 , s3 , s5 , s7 }, R2 =
{s2 , s4 , s6 , s8 }, R3 = {s2 , s3 , s5 , s7 } and R4 = {s1 , s4 , s6 , s8 }. R1 , R2 , R3 and
R4 are also traps, and so, in particular, they contain traps. By the Liveness Theo-
rem, every marking that marks R1 , R2 , R3 and R4 is live.
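For small nets the condition of Commoner's Theorem can be checked by brute force: enumerate all place sets, test which of them are siphons, and ask whether the maximal trap inside each siphon is initially marked. The sketch below reuses the largest_trap sketch of Section 4.4.2 and the same assumed representation; the exponential enumeration is unavoidable in general, since liveness of free-choice systems has no polynomial algorithm unless P = NP.

from itertools import combinations

def commoner_condition(pre, post, places, M0):
    # Every proper siphon contains a trap marked at M0 (checked via maximal traps).
    # Assumes largest_trap(pre, post, R) as sketched in Section 4.4.2.
    def is_siphon(R):
        # •R ⊆ R•: every transition putting a token into R also takes one from R
        return all(not (post[t] & R) or bool(pre[t] & R) for t in pre)
    for k in range(1, len(places) + 1):
        for R in map(set, combinations(places, k)):
            if is_siphon(R):
                Q = largest_trap(pre, post, R)
                if not any(M0.get(s, 0) > 0 for s in Q):
                    return False    # a proper siphon without an initially marked trap
    return True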
We now proceed to prove the second part of the theorem. We have to show
that if some proper siphon R of a free-choice system (N, M0 ) does not contain
an initially marked trap, then (N, M0 ) is not live. If such a siphon exists, then
the maximal trap Q ⊆ R is unmarked at M0 , and so M0 can only mark places of
D := R \ Q. Loosely speaking, we construct a firing sequence that “empties” the
places of D without marking the places of Q. In this way we reach a marking at
which the siphon R is empty, which proves that (N, M0 ) is not live.
We need the notion of a cluster.
It follows from the definition that every node of a net belongs to exactly one
cluster, that is, the set of clusters is a partition of S ∪ T .
Figure 5.4 shows the clusters of the net of Figure 5.3.
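Clusters can be computed as the connected components of the bipartite graph linking every transition to its input places. A union-find sketch, using the standard closure definition of clusters and the same assumed representation as before:

def clusters(pre, places):
    # The cluster of a node is the smallest set containing it that is closed under
    # adding the input places of its transitions and the output transitions of its
    # places; equivalently, a connected component of the graph with an edge
    # between each transition t and every place of pre[t].
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for s in places:
        find(s)                             # isolated places form singleton clusters
    for t in pre:
        for s in pre[t]:
            union(t, s)
    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())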
The following proposition is easy to prove:
Lemma 5.3.7 Let N be a net, let R be a set of places of N , let Q be the
maximal trap included in R, and let D = R \ Q. Let C = {[t] | t ∈ D• }. There
exists a circuit-free allocation α : C → T such that α(C) ∩ •Q = ∅.
α(c) = t if t ∈ c, and α(c) = α′(c) otherwise.
(iv) α(C) ⊆ α(C′) ∪ {t}. (Use (iii) and the definition of α.)
To show that α is circuit-free, assume D ∪ α(C) contains a circuit γ. By (ii) and (iv)
we have D ∪ α(C) ⊆ D′ ∪ α′(C′) ∪ {t} ∪ •t. By induction hypothesis D′ ∪ α′(C′)
is circuit-free. So γ contains the transition t. Since all places of γ belong to R and
t ∉ •R, we have that γ contains no place of t•, contradicting that γ is a circuit.
To prove α(C) ∩ •Q = ∅ we first observe that α(C) ∩ •Q ⊆ (α(C′) ∪ {t}) ∩
•Q, which is equal to {t} ∩ •Q by induction hypothesis, and equal to ∅ because
t ∉ •R and •Q ⊆ •R.
We now prove that we can find an infinite occurrence sequence that “respects a
given allocation”. This part crucially requires the free-choice property.
Proof. Let R be a proper siphon of N , and let Q be the maximal trap included in
R. We prove M0 (Q) > 0.
Since (N, M0 ) is live, we have M0 (R) > 0 by Proposition 4.4.4. Let D =
R \ Q. If D• = ∅ then D is a trap and so D ⊆ Q, but then D = ∅ and we are done.
If D• ≠ ∅ then let C = {[t] | t ∈ D• }. By Lemma 5.3.7 there is an allocation α
with domain C, circuit-free for D, satisfying α(C) ∩ •Q = ∅. Let M0 --σ--> be
the occurrence sequence of Lemma 5.3.8. It is easy to see that
• Q cannot become marked during the occurrence of σ.
Because transitions of • Q are not allocated, and so do not occur in σ.
• Q is marked at some point during the occurrence of σ.
Since α is circuit-free, there is an allocated transition t that occurs infinitely
often in σ, and whose input places are not output places of any allocated
transition. So the input places of t must get tokens from transitions that do
not belong to the clusters of C. But these transitions are necessarily output
transitions of Q, and an output transition of Q can only occur when Q is marked.
Together, the two observations imply that Q must already be marked at M0 ,
that is, M0 (Q) > 0.
[Figure: a net constructed from a formula with variables x1, x2, x3 (nodes A1, A2, A3, literal places x1, x̄1, x2, x̄2, x3, x̄3, clause nodes C1, C2, C3, and a place False).]
[Figure: a net with places s1, . . . , s8.]
5.3.2 Boundedness
Definition 5.3.11 (S-component) Let N = (S, T, F ) be a net. A subnet N 0 =
(S 0 , T 0 , F 0 ) of N is an S-component of N if
Proof. Firing a transition either takes no tokens from a place of the component
and adds none, or it takes exactly one token and adds exactly one token.
Theorem 5.3.10 shows that there is no polynomial algorithm for Liveness (un-
less P = N P ). Now we ask ourselves what is the complexity of deciding if a
free-choice system is simultaneously live and bounded. We can of course first use
the decision procedure for liveness, and then, if the net is live, check the condi-
tion of the Boundedness Theorem. But there are more efficient algorithms. The
fastest known algorithm runs in O(n · m) time for a net with n places and m tran-
sitions. A not so efficient but simpler algorithm follows immediately from the next
theorem:
3. The rank of the incidence matrix (N) is equal to c − 1, where c is the number
of clusters of N .
Proof. Omitted.
Conditions (1) and (2) can be checked using linear programming, condition (3)
using well-known algorithms of linear algebra, and condition (4) with the algo-
rithm of Section 4.4.1.
5.3.3 Reachability
The reachability problem is NP-hard for live and bounded free-choice nets.
Theorem 5.3.16 Reachability is NP-hard for live and bounded free-choice nets.
Figure 5.7 shows the net N , the markings M0 and M , and the sets T=1 , T≥1 for
the formula x1 ∧ (x1 ∨ x2 ) ∧ (x1 ∨ x2 ). The formula has three clauses C1, C2, C3.
The black tokens correspond to M0 , and the white tokens to M . Intuitively, the
net chooses a variable xi , and assigns it a value by firing txi or f xi . This sends
tokens to the three modules at the bottom of the figure, one for each clause. More
precisely, for each clause the transition sends exactly one token to one of the two
transitions of the module: if the value makes the clause true, then the token goes
to the input place of the transition that belongs to T≥1 ; otherwise the token goes to
the input place of the other transition. The formula is satisfiable iff the Petri net has
a firing sequence that fires each transition of T=1 exactly once (this corresponds
to choosing a truth assignment) and each transition of T≥1 at least once (so that at
least one of the literals of each clause is true under the assignment).
Now we reduce the problem above to the reachability problem for live and
bounded free-choice nets. Given a net with sets T=1 , T≥1 ⊆ T , we “merge” each
transition of T≥1 with the transition t≥1 of a separate copy of the “module” shown
in Figure 5.8. Similarly, we merge each transition of T=1 with the transition t=1 of
a separate copy of the “module” shown in Figure 5.9.
The first module ensures that in order to reach the marking M the transition
t≥1 has to be fired at least once. The second module ensures that the transition t=1
has to be fired exactly once.
[Figure 5.7: Result of the reduction for the formula x1 ∧ (x1 ∨ x2 ) ∧ (x1 ∨ x2 ).
The net has a Start place, transitions tx1 , fx1 , tx2 , fx2 (labelled =1) for the variables
x1 , x2 , clause modules C1 , C2 , C3 with transitions labelled ≥1, and an End place;
the black tokens give the marking M0 and the white tokens the marking M .]
[Figure 5.8: the module merged with each transition t≥1 of T≥1 , with the markings M0 and M .]
[Figure 5.9: the module merged with each transition t=1 of T=1 , with the markings M0 and M .]
• T′ = U,
• S′ = •U ∪ U•, and
• F′ = F ∩ ((S′ × T′) ∪ (T′ × S′)).
• M = M0 + N · X, and
Proof. Omitted.
Proof. Omitted.
This result is only useful if we are able to check efficiently if a live and bounded
free-choice system is cyclic. The following theorem shows that this is the case:
Theorem 5.3.21 A live and bounded free-choice system (N, M0 ) is cyclic iff M0
marks every proper trap of N .
Proof. Omitted.
This gives a simpler proof that the reachability problem for live and bounded
free-choice nets is in NP: just guess in polynomial time an occurrence sequence
leading to M .