On The Approximation Method and The P Versus NP Problem
Norbert Blum
Institut für Informatik, Universität Bonn
Endenicher Allee 19a, D-53115 Bonn, Germany
email: blum@cs.uni-bonn.de
Abstract
First of all, we give some reasons why “natural proofs” are not a barrier to proving P ≠ NP using Boolean complexity. Then we investigate the approximation method with regard to its extension for proving super-polynomial lower bounds for the non-monotone complexity of suitable Boolean functions in NP, or for understanding why this is not possible. We give some evidence that the approximation method alone cannot be used to prove a super-linear lower bound for any function f ∈ Bn. Additionally, an overview of the methods for proving lower bounds for the non-monotone and the monotone complexity of Boolean functions is given. Finally, a personal opinion on how to proceed with research on the P versus NP problem, and on proving a super-linear lower bound for the non-monotone complexity of a Boolean function in NP, is given.
size at most the square of the number of steps [40, Chapter 3.9], a super-polynomial lower bound for the non-monotone network complexity of such a function would imply P ≠ NP. For the monotone complexity of such a function, exponential lower bounds are known [4, 6, 36, 2, 25, 20, 22, 7, 3, 21]. But until now, no one has been able to prove a super-linear lower bound for the non-monotone complexity of any Boolean function in NP. An obvious attempt to get a super-polynomial lower bound for the non-monotone complexity of the clique function is to extend the method which has led to the proof of an exponential lower bound for its monotone complexity. This is the so-called “method of approximation”, developed in 1984 independently by Andreev [4] and Razborov [36]. In 1989, at the 21st STOC, Razborov [38] presented a sketch of a proof that his approximation method cannot be used to prove better than quadratic lower bounds for the non-monotone complexity of any Boolean function. But Razborov uses a very strong distance measure in his proof of this limitation of the approximation method. As elaborated in [10], one can use the approximation method with a weaker distance measure to prove super-polynomial lower bounds. Our goal is the extension of the approximation method to non-monotone Boolean networks in order to prove a super-polynomial lower bound for the non-monotone complexity of a function in NP, or to understand why this is not possible.
Firstly, we give some basic definitions. Bn := {f | f : {0, 1}n → {0, 1}}
is the set of all n-ary Boolean functions. The ith variable is denoted by
xi : {0, 1}n → {0, 1}, 1 ≤ i ≤ n. Let Vn := {xi | 1 ≤ i ≤ n} and
V n := {¬xi | 1 ≤ i ≤ n}. Variables and negated variables are called
literals. A function m : {0, 1}n → {0, 1} which is the conjunction of some
literals is called a monomial. If we delete some literals from a monomial
m then we obtain a submonomial m′ of m and we write m′ ⊆ m. The
empty monomial ε is the constant function 1. The disjunction of monomials
is a formula in disjunctive normal form (DNF). The disjunction of some
literals is called a clause. If we delete some literals from a clause d then we
obtain a subclause of d. The empty clause is the constant function 0. The
conjunction of clauses is a formula in conjunctive normal form (CNF). A
monomial m is called an implicant of the function f if for all a ∈ {0, 1}n ,
m(a) = 1 implies f (a) = 1. An implicant m is a prime implicant of f if
no proper submonomial of m is an implicant of f . A clause d is called an
f -clause if for all a ∈ {0, 1}n , d(a) = 0 implies f (a) = 0. A prime clause
d of f is an f -clause where no proper subclause of d is an f -clause. Let
a := (a1 , a2 , . . . , an ), b := (b1 , b2 , . . . , bn ) ∈ {0, 1}n . We write a ≤ b iff
ai ≤ bi for 1 ≤ i ≤ n. A function f ∈ Bn is monotone iff a ≤ b implies
f (a) ≤ f (b) for all a, b ∈ {0, 1}n . Let Ω0 := {∧, ∨, ¬} and Ωm := {∧, ∨}.
For Ω ∈ {Ω0 , Ωm , B2 }, an Ω-network β is a directed, acyclic graph such that
each node has indegree at most two. The nodes g with indegree zero are
input nodes and are labelled with op(g) ∈ Vn ∪ {0, 1}. The nodes g with
indegree larger than zero are the gates of β. Each gate g is labelled with an
operator op(g) ∈ Ω where the indegree of g equals the number of operands
of op(g). A node with outdegree zero is an output node. For a node g in β let
pred(g) := {h | h → g is an edge in β} be the set of its direct predecessors.
With each node g, we associate a function resβ(g) : {0, 1}n → {0, 1} which is defined by

    resβ(g) :=  op(g)                          if g is an input node,
                ¬resβ(h1)                      if op(g) = ¬ and pred(g) = {h1},
                resβ(h1) op(g) resβ(h2)        otherwise, where pred(g) = {h1, h2}.
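For concreteness, the following Python sketch evaluates such an Ω-network given as a topologically ordered list of nodes; the node encoding and the function name eval_network are illustrative choices made here, not notation from the paper.

```python
# Minimal sketch: evaluating an Omega-network given in topological order.
# Each node is ('input', i), ('const', b), ('not', h), ('and', h1, h2) or ('or', h1, h2),
# where h, h1, h2 are indices of earlier nodes. Names and encoding are illustrative.

def eval_network(nodes, x):
    """Return res_beta(g) for every node g of the network on input x."""
    res = []
    for node in nodes:
        kind = node[0]
        if kind == 'input':            # op(g) = x_i
            res.append(x[node[1]])
        elif kind == 'const':          # op(g) in {0, 1}
            res.append(node[1])
        elif kind == 'not':            # op(g) = ¬, pred(g) = {h1}
            res.append(1 - res[node[1]])
        elif kind == 'and':            # op(g) = ∧, pred(g) = {h1, h2}
            res.append(res[node[1]] & res[node[2]])
        elif kind == 'or':             # op(g) = ∨, pred(g) = {h1, h2}
            res.append(res[node[1]] | res[node[2]])
    return res

# Example: a standard network computing x1 ∧ ¬x2 (0-indexed inputs).
nodes = [('input', 0), ('input', 1), ('not', 1), ('and', 0, 2)]
assert eval_network(nodes, [1, 0])[-1] == 1
assert eval_network(nodes, [1, 1])[-1] == 0
```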
of a “natural proof”. They say that the known proofs of lower bounds on the
complexity of explicit Boolean functions in non-monotone models fall within
their definition of natural. They have shown that natural proofs cannot be
used for separating P and NP unless hard pseudorandom generators do
not exist. Since the existence of such generators is widely believed, natural
proofs are widely accepted to be a barrier for proving P ≠ NP using Boolean complexity. We discuss this in what follows.
Firstly, we give their definition of a “natural proof”. This is a proof which uses a natural combinatorial property. A combinatorial property is a family {Cn ⊆ Bn | n ∈ N} of sets of Boolean functions. Cn is called natural if there is Cn∗ ⊆ Cn which satisfies:
The first property means that the characteristic function of Cn∗ can be com-
puted in polynomial time in the size of the truth table of the input function
f ∈ Bn . The second property says that a function randomly chosen from
Bn is contained in Cn∗ with non-negligible probability. P/poly is the set of
languages which are recognizable by a family of Boolean networks of polyno-
mial size. Note that P ⊆ P/poly. A combinatorial property is useful against
P/poly if the network complexity of any sequence f1 , f2 , . . . , fn , . . . where
fn ∈ Cn is super-polynomial; i.e., for every k ∈ N there is an nk ∈ N such that the network complexity of fn is larger than n^k for all n > nk. A proof that
a Boolean function does not have polynomial network complexity is natural
against P/poly if the proof uses a natural combinatorial property Cn which
is useful against P/poly.
Razborov and Rudich mention that “from experience it is plausible to
say that we do not yet understand the mathematics of Cn outside expo-
nential time (as a function of n) well enough to use them effectively in a
combinatorial style proof.” This means that the combinatorial properties used in today’s lower bound proofs are constructive. With respect to the largeness
property, they write: “In Section 5 we give some solid theoretical evidence
for largeness, by showing that any Cn based on a formal complexity measure
must be large.” We discuss now the “solid theoretical evidence for largeness”
given by Razborov and Rudich.
A formal complexity measure is a function µ : Bn → R+ such that
b) µ(f ∧ g) ≤ µ(f ) + µ(g) and µ(f ∨ g) ≤ µ(f ) + µ(g) for all f, g ∈ Bn .
The first problem which we have to investigate is the treatment of the
negated variables. In [11], I have tried to treat the negated variables in a
standard network which computes a given non-constant monotone Boolean
function f at its output node gt in such a way that we can use the approx-
imators developed for f with respect to monotone networks on standard
networks. The proof of Theorem 6 in [11] is wrong. The mistake in the
proof is explained in [12]. The motivation of the approach in [11] was the
avoidance of the explicit consideration of the negated variables. The conclu-
sion in [12] is that the negated variables have to be approximated as well.
Therefore, we have to consider the negated variables explicitly. Before
doing this, a lot of work has to be done.
We shall investigate the computation in standard networks in the next
section. In Section 3, an overview of the methods for proving lower bounds for the non-monotone and the monotone complexity of Boolean functions is given. We describe the general idea of the approximation method in Section 4. For monotone networks, two kinds of approximators are known, the CNF-DNF-approximators and the sunflower-approximators. Section 5 is devoted to the description of these approximators with respect to monotone networks. The extension of these approximators to standard networks is treated in Section 6. We give evidence that CNF-DNF- and also sunflower-approximators alone cannot be used to prove a super-linear lower bound for the standard complexity of any function in Bn. In Section 7, a personal opinion on how to proceed with research on the P versus NP problem, and on proving a super-linear lower bound for the non-monotone complexity of a Boolean function in NP, is given.
implicants in DNFβ (gt ). Similarly, it is useful to recognize the prime clauses
in CNFβ (gt ). For doing this, let p1 , p2 , . . . , pk be the prime implicants and
c1, c2, . . . , cs be the prime clauses of the function f. Each monomial mj in DNFβ(gt) = ⋁_{j=1}^{t0} mj is an implicant of the function f. Otherwise, there would exist an input (a1, a2, . . . , an) ∈ {0, 1}n such that f(a1, a2, . . . , an) = 0 but resβ(gt)(a1, a2, . . . , an) = 1. This means that each monomial in DNFβ(gt)
contains at least one prime implicant of the function f as a submonomial. If
a monomial mj contains l > 1 prime implicants then we add l − 1 copies of
mj to the DNF-representation. By separating, in each monomial containing a prime implicant pj, the prime implicant from the other literals, we can write

    resβ(gt) = ⋁_{j=1}^{k} ⋁_{i=1}^{lj} (pj ∧ m′ji),
3. If g is an ∧-gate with pred(g) = {h1, h2}, DNFβ(h1) = ⋁_{i=1}^{t1} mi and DNFβ(h2) = ⋁_{j=1}^{t2} m′j then

    DNFβ(g) := ⋁_{i=1}^{t1} ⋁_{j=1}^{t2} (mi ∧ m′j).
Each input a ∈ resβ (g)−1 (1) satisfies a monomial mj of DNFβ (g). Each
input b ∈ resβ (g)−1 (0) does not satisfy any monomial in DNFβ (g). Hence,
each monomial in DNFβ (g) contains a variable xi with bi = 0 or a negated
variable ¬xj with bj = 1.
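Phrased operationally, the DNF-construction rules can be sketched as follows in Python, with a monomial represented as a frozenset of literals (a positive index i for xi, a negative index −i for ¬xi); the encoding and the function names are illustrative assumptions, and, exactly as in the construction above, no simplification of the resulting formulas is performed.

```python
# Sketch of the bottom-up DNF construction: a monomial is a frozenset of literals,
# literal +i stands for x_i and -i for ¬x_i; a DNF is a list of monomials.
# Encoding and names are illustrative; contradictory monomials are not simplified away.

def dnf_input(literal):
    # Rule for an input node labelled x_i or ¬x_i.
    return [frozenset([literal])]

def dnf_or(dnf1, dnf2):
    # Rule for an ∨-gate: the disjunction of the two DNF-representations.
    return dnf1 + dnf2

def dnf_and(dnf1, dnf2):
    # Rule for an ∧-gate: all pairwise conjunctions m_i ∧ m'_j.
    return [m1 | m2 for m1 in dnf1 for m2 in dnf2]

# Example: DNF of (x1 ∨ x2) ∧ ¬x3.
dnf = dnf_and(dnf_or(dnf_input(1), dnf_input(2)), dnf_input(-3))
# dnf contains the monomials {x1, ¬x3} and {x2, ¬x3}.
```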
Starting at the input nodes, the network β constructs the CNF-formulas
in the following way:
1. If g is an input node with op(g) = xi or op(g) = ¬xi then
Each input b ∈ resβ (g)−1 (0) falsifies a clause dj of CNFβ (g). Each input
a ∈ resβ (g)−1 (1) does not falsify any clause in CNFβ (g). Hence, each clause
in CNFβ (g) contains a variable xi with ai = 1 or a negated variable ¬xj
with aj = 0.
The following theorem characterizes exactly the DNF-representation and
the CNF-representation of resβ (gt ) with respect to a standard network which
computes a Boolean function f ∈ Bn at its output node gt .
Theorem 2 Let β be a standard network which computes a Boolean func-
tion f ∈ Bn at its output node gt . Then the following hold:
a) DNFβ (gt ) contains only implicants of the function f . Furthermore,
for each a ∈ f −1 (1), DNFβ (gt ) contains an implicant ma of f such
that ma (a) = 1.
b) CNFβ (gt ) contains only f -clauses. Furthermore, for each b ∈ f −1 (0),
CNFβ (gt ) contains an f -clause db such that db (b) = 0.
Proof: Assume that DNFβ (gt ) contains a monomial m which is not an
implicant of f . Then, by the definition of an implicant of f , there exists b ∈
{0, 1}n such that m(b) = 1 but f (b) = 0. This contradicts the assumption
that β computes f at its output node gt . Hence, all monomials of DNFβ (gt )
are implicants of f .
Assume that there is a ∈ f −1 (1) such that m(a) = 0 for all implicants
m in DNFβ (gt ). Then resβ (gt )(a) = 0 but f (a) = 1. This contradicts
the assumption that β computes f at its output node gt . Hence, for each
a ∈ f −1 (1), DNFβ (gt ) contains an implicant ma of f such that ma (a) = 1.
This proves part a) of the theorem. Analogously, part b) of the theorem
can be proved.
Every DNF-formula can be transformed into an equivalent CNF-formula. To see this, let α = ⋁_{i=1}^{t0} mi be a DNF-formula which computes a Boolean function f ∈ Bn. To obtain an equivalent CNF-formula γ, we pick from each monomial mi, 1 ≤ i ≤ t0, one literal and perform the disjunction of all chosen literals. Then the conjunction of all clauses which can be constructed in this way is a CNF-formula γ = ⋀_{j=1}^{s0} dj which corresponds to the DNF-formula α. The following lemma shows that γ computes the function f.
Lemma 1 Let α = ⋁_{i=1}^{t0} mi be a DNF-formula which computes a Boolean function f ∈ Bn. Let γ = ⋀_{j=1}^{s0} dj be the CNF-formula constructed from α as described above. Then γ computes f.
Proof: Consider a ∈ f −1 (1). Then there is a monomial ml in α such that
ml (a) = 1. Since each clause of γ contains a literal of ml , the input a satisfies
all clauses in γ. Hence γ(a) = 1.
Let b ∈ f −1 (0). Then each monomial in α contains a literal which is not
satisfied by b. Consider a clause dl of γ which picks from each monomial a
literal which is not satisfied by b. Obviously, dl (b) = 0. Hence, γ(b) = 0.
Altogether, we have shown that γ computes f .
We call such a transformation of a DNF-formula to an equivalent CNF-
formula a DNF/CNF-switch. A DNF/CNF-switch can be organized as the
construction of a tree T in the following way:
1. Each edge in T is labelled by a literal. With each node w in T we
associate the clause d(w) which is obtained by the disjunction of the
literals on the unique path from the root of T to w. T is constructed
while expanding the monomials m0 , m1 , m2 , . . . , mt0 where m0 is the
empty monomial.
2. While expanding m0 , the root of T is created. The associated clause
is the empty clause.
After the construction of the tree T, the clauses corresponding to the paths from the root of T to the leaves are the clauses contained in the CNF-formula γ obtained from α = ⋁_{i=1}^{t0} mi by performing a DNF/CNF-switch.
Analogously, every CNF-formula can be transformed into an equivalent
DNF-formula. We call such a transformation of a CNF-formula to an equiv-
alent DNF-formula a CNF/DNF-switch.
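A brute-force Python sketch of a DNF/CNF-switch, following the construction used in Lemma 1: from each monomial one literal is picked in every possible way, and each choice yields a clause of γ. The representation and the names are illustrative, and no redundant clauses are removed.

```python
from itertools import product

# Sketch of a DNF/CNF-switch: a monomial/clause is a frozenset of literals
# (+i for x_i, -i for ¬x_i). The names are illustrative.

def dnf_to_cnf(monomials):
    """Pick one literal from each monomial in every possible way;
    each choice yields a clause of the resulting CNF-formula."""
    clauses = set()
    for choice in product(*[sorted(m) for m in monomials]):
        clauses.add(frozenset(choice))
    return clauses

# Example: the DNF (x1 ∧ x2) ∨ (¬x1 ∧ x3) yields the clauses
# {x1, ¬x1}, {x1, x3}, {x2, ¬x1}, {x2, x3}.
print(dnf_to_cnf([frozenset([1, 2]), frozenset([-1, 3])]))
```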
What happens if we consider Boolean functions with many outputs such as Boolean matrix multiplication or the Boolean convolution? With respect to the convolution, each output depends on nearly all variables. Moreover, as shown by Valiant [45], the graph of any network computing the convolution is an n-superconcentrator. An n-superconcentrator is a directed graph with n input and n output nodes such that for each subset of the input nodes and each subset of the output nodes of the same size r there are r mutually node-disjoint paths connecting the set of input nodes with the set of output nodes. Aho, Hopcroft and Ullman [1] have conjectured that an n-superconcentrator has at least n log n edges. But Valiant [45] himself has shown that there exist superconcentrators of linear size, destroying the hope of proving a super-linear lower bound using graph-theoretical arguments only. Also for functions with many outputs, no super-linear lower bound for their network complexity is known.
The inability to prove super-linear lower bounds for the non-monotone complexity of explicit Boolean functions has led to the consideration of restricted models of Boolean networks like monotone or bounded-depth Boolean networks. For both restricted models, exponential lower bounds for the complexity of an explicit Boolean function in NP are known. We are interested in proving a super-linear lower bound for the non-monotone complexity of a Boolean function in NP. Bounding the depth of the network to be constant seems to be a much harder restriction than allowing only monotone networks. There are functions of linear network complexity for which an exponential lower bound on the size of any constant-depth network computing the function has been proved. Techniques for proving super-linear lower bounds for the monotone complexity of functions which are also candidates for proving a super-linear lower bound for their non-monotone complexity seem to be more suitable for an extension to get a super-linear lower bound for the non-monotone complexity of a function in NP. Therefore, we are interested in the methods developed for the proof of lower bounds for the monotone complexity of Boolean functions.
The core of each super-linear lower bound proof for the monotone com-
plexity of a Boolean function is the successful application of certain replace-
ment rules. In a monotone network β computing a Boolean function f , a
replacement rule replaces a node u with resβ (u) = h by a node u′ which
computes a function h′ . In most cases, h′ depends on h. The first replace-
ment rules used to prove some lower bounds have the additional property
that the resulting monotone network β ′ still computes the function f . Such
a replacement rule is of Type 1 . For a presentation of replacement rules of
Type 1 see [28, 49].
Replacement rules of Type 1 in combination with the gate elimination
method are used to prove lower bounds for the monotone complexity of
Boolean sums [30, 33, 47, 29], Boolean matrix multiplication [35, 31, 28] and
generalized Boolean matrix multiplication [46]. Replacement rules of Type
1 are used in different ways. With respect to Boolean matrix multiplication
[31, 28], they are used for the characterization of an optimal monotone
network. For the generalized matrix product [46] and for Boolean sums
[47, 29] they are used explicitly; i.e., the gate u is replaced by a subnetwork
which computes the function h′ . This should be possible without additional
cost. To get this, Wegener [46] has introduced a technique, very common
in algebraic complexity, into Boolean complexity. Certain functions are
given for free as inputs of the network. A lower bound for such a network
implies the same lower bound for the monotone complexity of the considered
function. Wegener [46] uses this technique in combination with the gate
elimination method to prove an Ω(n^2/log^2 n) lower bound for the monotone complexity of the generalized Boolean matrix product. In [48], Wegener has introduced a further technique, improving this lower bound to Ω(n^2/log n). Instead of using the gate elimination method, he has defined a suitable value function to estimate the contribution of each ∧-gate to the computation of the outputs. At each ∧-gate, the value function distributes at most the value 1 among the prime implicants. Then he has proved the necessity of giving each prime implicant at least the value 1/2, obtaining a lower bound of half the number of prime implicants. The definition of the value function depends on
the structure of the function computed at the ∧-gate under consideration.
Important for the proof are the fact that the function has many outputs and also the structure of the prime implicants of the functions computed at the output nodes.
The first super-linear lower bound for the monotone complexity of an
explicit Boolean function has been proved by Neciporuk [30] in 1969 for
a function in Mn,n , a set of so-called Boolean sums. A Boolean sum fi
is the disjunction of a subset Fi ⊆ Vn of variables. He has considered
the monotone complexity of sets of Boolean sums which have “nothing in
common”. Nothing in common means that two distinct Boolean sums have
at most one variable in common. We say then that the set of Boolean sums
is (1, 1)-disjoint. One might think that ∧-gates cannot reduce the monotone complexity of a set of Boolean sums in comparison to networks which use
only ∨-gates. But Tarjan [43, 49] has given an example which shows that
using ∧-gates can reduce the monotone complexity. For (1, 1)-disjoint sets
of Boolean sums, Neciporuk has proved that optimal monotone Boolean
networks contain only ∨-gates. A well known construction of Kővári, Sós
and Turán [27] leads to an explicitly constructed (1, 1)-disjoint set f = (f1, f2, . . . , fn) of Boolean sums with Ω(n^{3/2}) prime implicants such that an Ω(n^{3/2}) lower bound for the monotone complexity of this function has been proved. Some years later, Pippenger [33] and Mehlhorn [29] have generalized
the approach of Neciporuk to sets of Boolean sums which are (h, k)-disjoint;
i.e., any h + 1 different Boolean sums have at most k variables in common.
Such a set of Boolean sums corresponds to a bipartite graph which does
not contain a Kh+1,k+1 as a subgraph where Kh+1,k+1 denotes the complete
bipartite graph with node sets of sizes h+1 and k +1, respectively. Bipartite
graphs can be represented by Boolean matrices. A Boolean matrix A is
(h, k)-free if it does not contain any (h+1)×(k+1) submatrix containing only
ones. A bipartite graph contains no Kh+1,k+1 iff the corresponding Boolean
matrix is (h, k)-free. The question about the maximal number of ones in a
(h, k)-free (n × n)-matrix is the famous problem of Zarankiewicz. Pippenger
and Mehlhorn showed that using ∧-gates for the computation of a set of
(h, k)-disjoint Boolean sums can save at most the factor max{h − 1, k − 1}.
Using a construction of Brown [13], a (2,2)-disjoint set of Boolean sums with Ω(n^{5/3}) prime implicants has been constructed such that an Ω(n^{5/3}) lower bound for the monotone complexity of this Boolean function has been proved. Subsequently, the explicit construction of further dense (h, k)-free Boolean matrices [5, 26] has led to larger lower bounds.
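To make the (h, k)-freeness condition concrete, the following brute-force Python check tests every (h+1) × (k+1) submatrix for an all-ones pattern; it is only an illustration of the definition, and the function name is an assumption made here.

```python
from itertools import combinations

def is_hk_free(A, h, k):
    """Return True iff the Boolean matrix A contains no (h+1) x (k+1)
    all-ones submatrix, i.e. the corresponding bipartite graph has no K_{h+1,k+1}."""
    rows, cols = len(A), len(A[0])
    for R in combinations(range(rows), h + 1):
        for C in combinations(range(cols), k + 1):
            if all(A[i][j] == 1 for i in R for j in C):
                return False
    return True

# Example: the identity matrix is (1,1)-free, the all-ones 2x2 matrix is not.
assert is_hk_free([[1, 0], [0, 1]], 1, 1)
assert not is_hk_free([[1, 1], [1, 1]], 1, 1)
```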
Boolean sums, the Boolean matrix multiplication and the generalized
Boolean matrix multiplication have some disjointness properties which the
convolution does not have. Therefore, to prove a lower bound for the convo-
lution, the situation becomes more difficult. The first approach for proving
a lower bound for the monotone complexity of the convolution uses graph-
theoretical properties of monotone networks realizing the convolution. Pip-
penger and Valiant [34] have studied shifting graphs and have proved that
each monotone network for the convolution has to be a shifting graph, obtaining an Ω(n log n) lower bound for the monotone complexity of the convolution.
To prove a lower bound of size Ω(n^{4/3}) for the number of ∧-gates needed in
a monotone network which computes the convolution, the author [9] has in-
troduced two further techniques into Boolean complexity. For the first time,
a replacement rule changing the function computed at the output nodes of
the network is used. An ∧-gate g such that the function computed at the
output of the gate g has a certain property is replaced by 0. Therefore, the
gate g is eliminated but, at the output nodes, the construction of some prime
implicants could be destroyed. Because of the property of the function com-
puted at the output of the gate g, the number of destroyed prime implicants
is bounded. Only for inputs such that an output has to be one, a wrong
value could be computed at the output node because of an application of
the replacement rule. Such a replacement rule is of Type 2 . To apply the re-
placement rule, we need that the monotone network has a certain structure.
With respect to a given proof technique, we call a monotone network which allows the application of the proof technique a normal form network. Given any monotone network β computing a given monotone Boolean function, the network is transformed into normal form first, and then the corresponding proof technique is applied. The transformation may increase the size of the network; this increase then has to be taken into consideration to obtain a lower bound for the monotone complexity from the lower bound for a
normal form network. Weiß [50] observed that on each path from the input node ai to an output node ck which depends on ai and has at least two prime implicants, there has to be a first ∨-gate such that some aj bl with j ≠ i is an implicant of the function. Using a replacement rule of Type 1, he showed that all these ∨-gates can be eliminated after setting ai to zero. Consider the assignment α which we obtain after setting, with respect to each such ∨-gate, the variables of the prime implicant aj bl to one. Then no output of the network depends on ai. Therefore, for each output function ck which depends on ai, the assignment α has to satisfy some prime implicant of ck. Since the Boolean convolution is semi-disjoint, each conjunction of a variable in A = {a0, a1, . . . , an−1} and a variable in B = {b0, b1, . . . , bn−1} is a prime implicant of exactly one output function. Therefore, at most p^2 prime implicants can be constructed if the assignment α is defined with respect to p first ∨-gates having the needed property. Since n output functions depend on ai, p^2 has to be at least n, such that an n^{3/2} lower bound for the number of ∨-gates needed in a monotone network which computes the convolution
could be proved. Grinchuk and Sergeev [17] have constructed (h, k)-disjoint Boolean circulant matrices with many ones. The complexity of the corresponding set of Boolean sums is Ω(n^2 log^{-6} n). Since circulant matrices are related to cyclic convolution and cyclic Boolean convolution can be reduced to Boolean convolution [17, 24], they obtain an Ω(n^2 log^{-6} n) lower bound for the number of ∨-gates in a monotone network computing the Boolean convolution. Hence, the proof technique used to prove this lower bound for the Boolean convolution was reduction.
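To make the semi-disjointness property used in Weiß’s argument concrete, the following Python sketch computes the Boolean convolution c_k = ⋁_{i+j=k} ai ∧ bj and checks by brute force, for n = 3, that each product ai ∧ bj is an implicant of the output c_{i+j} and of no other output; the code is only an illustration under this standard definition, not part of any of the cited proofs.

```python
from itertools import product

def boolean_convolution(a, b):
    # c_k = OR over i+j=k of (a_i AND b_j), 0 <= k <= 2n-2.
    n = len(a)
    c = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            c[i + j] |= a[i] & b[j]
    return c

# Brute-force check of semi-disjointness for n = 3: the monomial a_i ∧ b_j
# is an implicant of c_{i+j} and of no other output function.
n = 3
for i in range(n):
    for j in range(n):
        implied = set(range(2 * n - 1))
        for a in product([0, 1], repeat=n):
            for b in product([0, 1], repeat=n):
                if a[i] == 1 and b[j] == 1:
                    c = boolean_convolution(a, b)
                    implied &= {k for k in range(2 * n - 1) if c[k] == 1}
        assert implied == {i + j}
```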
Although since 1969 super-linear lower bounds for the monotone com-
plexity of explicit functions in Mn,m where m = Θ(n) have been proved,
before 1984, the largest lower bound for the monotone complexity of an ex-
plicit single output function was of size 4n [44]. All super-linear lower bound
proofs for the monotone complexity of functions in Mn,m strongly depend
on the property that a set of functions has to be computed. With respect
to single output monotone Boolean functions, no technique for counting a
super-linear number of gates has been developed before 1984. In 1984, An-
dreev [4, 6] and Razborov [36, 37] independently achieved the breakthrough.
They have proved super-polynomial lower bounds for certain single output
functions in NP. The functions resβ (g) computed at the gates g are replaced
by a function which approximates resβ (g). The main point was the introduc-
tion of replacement rules which change the value of the function computed at
the output node with respect to inputs in f −1 (0) where f is the considered
function. Such a replacement rule is of Type 3 . The so-called approximation
method was born. In the next section, we will describe the approximation
method in detail.
the approximation of DNFβ (gi−1 ) (DNFβ (gi )) and before the approximation
of DNFβ (gi ) (DNFβ (gi+1 )). We say that the approximator of the node gi
introduces an error with respect to the input c ∈ {0, 1}n if f1(c) = f(c) but f2(c) ≠ f(c). Note that f′(c) ≠ f(c) implies that there exists a node gi
in β such that the approximator of gi introduces an error with respect to
the input c. The approximators should be designed in a way such that the
following is fulfilled:
1. After the replacement of DNFβ (gt ), the number of inputs c ∈ {0, 1}n
with f′(c) ≠ f(c) is “large”.
Note that these properties imply that a monotone network computing the
function f has to contain “many” gates. How to approximate DNF-formulas
such that these properties are fulfilled?
The general idea is to bound the size of the monomials in the DNF-
formulas constructed at the nodes in β. The size of a monomial can be its
length; i.e., its number of distinct literals, or another measure. Let r be the
upper bound for the size of a monomial in an approximator with respect to
a node gi . An obvious way to bound the size of the monomials would be the
following:
The effect of the removal of monomials from DNFβ(gi) on the DNF-representation DNFβ(gt) of resβ(gt) is the removal of some monomials in DNFβ(gt).
Hence, an error could be introduced only for inputs c ∈ f −1 (1). Next we
describe the construction of the approximators more in detail.
For input nodes, the approximator and the original DNF-representation
are the same. For the comparison of the approximator DNF’β(gi) and the DNF-representation DNFβ(gi) of the current function computed at gi immediately before the approximation of DNFβ(gi), suppose that gi1 and gi2 are the direct predecessors of the gate gi. By construction, each monomial in DNF’β(gi1) = ⋁_{j=1}^{t1} mj and in DNF’β(gi2) = ⋁_{l=1}^{t2} m′l has size at most
r. Therefore, if gi is an ∨-gate, each monomial in the DNF-representation
DNFβ (gi ) of the current function computed at gi has size at most r. Hence,
we define DNF’β (gi ) := DNFβ (gi ). No error is introduced by the approxi-
mator DNF’β(gi). But the number of monomials in DNF’β(gi) could be double the number of monomials in DNF’β(gi1) or in DNF’β(gi2). If gi is an ∧-gate then

    DNFβ(gi) = ⋁_{j=1}^{t1} ⋁_{l=1}^{t2} (mj ∧ m′l).
To obtain DNF’β (gi ), we remove from DNFβ (gi ) all monomials mj m′l of size
larger than r. To get a large lower bound for the function f ∈ Bn , the
function f must have the following property:
A1 Only “few” monomials are removed to obtain DNF’β (gi ) from DNFβ (gi ).
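A minimal Python sketch of the approximation step at an ∧-gate described above: the two approximators of the predecessors are combined and every monomial whose size exceeds r is dropped. Here the size of a monomial is simply its number of distinct literals, one of the measures mentioned above; the representation and the names are illustrative.

```python
# Sketch: approximating DNF_beta(g_i) at an ∧-gate by removing all monomials
# of size larger than r. Here the size of a monomial is its number of literals.

def approx_and_gate(dnf1, dnf2, r):
    """dnf1, dnf2: approximators of the direct predecessors (lists of frozensets).
    Returns (approximator, removed) where removed collects the discarded monomials."""
    kept, removed = [], []
    for m1 in dnf1:
        for m2 in dnf2:
            m = m1 | m2
            (kept if len(m) <= r else removed).append(m)
    return kept, removed

# Example with r = 2: the monomial {x1, x2, x3} is removed.
kept, removed = approx_and_gate([frozenset([1, 2])], [frozenset([2]), frozenset([3])], 2)
# kept = [{1, 2}], removed = [{1, 2, 3}]
```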
5.1 CNF-DNF-Approximators
CNF-DNF-approximators are introduced implicitly by Haken [20] and ex-
plicitly by Jukna [22], Berg and Ulfberg [7] and Amano and Maruoka [3]. To
understand the idea of CNF-DNF-approximators let us consider the orga-
nization of a CNF/DNF-switch as the construction of a tree T as described
in Section 2. Obviously, the outdegree of an inner node w cannot be larger
than the number of literals in the clause which is expanded at the leaf w
during the construction of the tree T . Let m(w) denote the monomial as-
sociated with the path from the root to the node w. After the performance
of the CNF/DNF-switch consider any path P from the root to a leaf w in
T . Let v be any node on P . Obviously, the monomial m(v) is a prefix of
the monomial of m(w). If we can ensure that on each edge starting from
a node with outdegree at least two, the size of the corresponding mono-
mial increases by one then the number of different prefixes of size exactly r
could be bounded by l(k)^r where l(k) is the maximal number of literals in
a clause expanded during the CNF/DNF-switch. k will be an upper bound
for the size of a clause used in the construction of an approximator. Af-
ter the construction of T , all monomials of size larger than r are removed.
Each such a monomial has a prefix of size r which has to be fulfilled by
each input which fulfills the monomial. Therefore, an upper bound for the
number of different such prefixes can be used in a lower bound proof. We
call a CNF/DNF-switch followed by the elimination of all monomials of size
larger than r a CNF/DNF-approximator switch. Analogously, if l(r) is
the maximal number of literals in a monomial of size at most r expanded
during a DNF/CNF-switch then we obtain an upper bound of l(r)^k for the
number of different prefixes of size exactly k. After the construction of
T , all clauses of size larger than k are removed. Such a switch is called
DNF/CNF-approximator switch.
This observation yields the idea to approximate, with respect to each node gi in β, DNFβ(gi) by a DNF-formula DNF’β(gi) which contains only monomials of size at most r, and also CNFβ(gi) by a CNF-formula CNF’β(gi) which contains only clauses of size at most k. Note that the removal of clauses from CNFβ(gi) can introduce an error only for inputs c ∈ f−1(0). To get a large lower bound for the monotone complexity of
f using CNF-DNF-approximators, the function f must have the following
additional property:
F2 Only “few” inputs in f −1 (0) falsify a clause of size larger than k.
To get a large lower bound for the monotone complexity of f , the number
of inputs for which CNF’β (gt ) or DNF’β (gt ) compute the wrong value has
to be “large”. Therefore, the function f must have the following additional
property:
F3 At least one of the following two properties is fulfilled:
1. If CNF’β (gt ) is not the constant function one then “many” inputs
in f −1 (1) do not fulfill CNF’β (gt ).
2. If DNF’β (gt ) is not the constant function zero then “many” inputs
in f −1 (0) do not falsify DNF’β (gt ).
Instead of using the whole sets f −1 (1) and f −1 (0), more appropriate
subsets T1 ⊆ f −1 (1) and T0 ⊆ f −1 (0) could be used to prove the lower
bound. Now we are prepared to give a precise description of CNF-DNF-
approximators. We distinguish three cases.
Case 1: gi is an input node.
Then
1. There is a “small” upper bound n1 for the number of inputs in T1
which fulfill any monomial of size r + 1.
5.2 Sunflower-Approximators
To bound the number of monomials which could be removed during the
construction of an approximator, sunflower-approximators use the sunflower
lemma or a modification of the sunflower lemma. Since CNF-DNF-approxi-
mators do not use such a combinatorial lemma, they seem to be simpler
than sunflower-approximators. But the properties which a Boolean function
must have to get a large lower bound for its monotone complexity using the
methods are different. Both methods use the following property:
the application of the methods results in a large lower bound for the monotone complexity may be different. Note that with respect to the perfect matching function, we know a proof of a super-polynomial lower bound which uses a sunflower-approximator [37] but no such proof which uses a CNF-DNF-approximator. First of all, we will review the sunflower lemma and its use for proving a lower bound.
The sunflower lemma of Erdős and Rado [15] is the central combinato-
rial property to bound the number of monomials removed during the con-
struction of an approximator. The basis of our description is the excellent
presentation of Jukna in [23, 24].
A sunflower with p petals and core T is a collection S1, S2, . . . , Sp of p sets such that Si ∩ Sj = T for 1 ≤ i < j ≤ p. Note that p pairwise disjoint sets form a sunflower with empty core. The following sunflower lemma says that each family of nonempty sets which is large enough must contain a sunflower with p petals.
If we relax the property that the core T lies entirely in all sets S1, . . . , Sp, requiring only that the differences Si \ T, 1 ≤ i ≤ p, are non-empty and mutually disjoint, then we obtain a lemma proved by Füredi in 1978 [19]. The common part of p distinct finite sets S1, S2, . . . , Sp is the set T := ⋃_{i≠j} (Si ∩ Sj).
Razborov’s approximators [36, 37] are based on the sunflower lemma and on Füredi’s lemma. Andreev [4, 6] uses his own modification of the sunflower lemma. No matter which modification of the sunflower lemma is used, the essential properties of the approximators are the same. Therefore, we only describe approximators which use the sunflower lemma directly.
To use the sunflower lemma, a set S(m) of the same size as m is con-
structed for each monomial m. For example, if its length is the size of m,
S(m) is the set of all variables in m. The set S(m) corresponds to the mono-
mial m. The idea is to use an appropriate r as the upper bound for the size
of a monomial and l := r!(p − 1)^r as the upper bound for the number of
monomials in the approximator DNF’β (gi ), 1 ≤ i ≤ t. For the construction
of the approximators, the nodes in β are considered in a topological order
such that the approximators of the direct predecessors gi1 and gi2 are al-
ready constructed when DNF’β (gi ) is constructed. Note that all monomials
in DNF’β (gi1 ) and in DNF’β (gi2 ) have size at most r and the number of
monomials in both approximators is at most l.
After the construction of DNFβ (gi ) where gi is an ∨-gate, the number
of monomials in DNFβ (gi ) can exceed the upper bound l but is at most 2l.
Then, by the sunflower lemma, there are p monomials m1 , m2 , . . . , mp such
that the corresponding sets S(m1 ), S(m2 ), . . . , S(mp ) form a sunflower. The
core T of the sunflower corresponds to a monomial m(T ). Then in DNFβ (gi ),
the monomials m1 , m2 , . . . , mp are replaced by the single monomial m(T )
which is a submonomial of each monomial mj , 1 ≤ j ≤ p. This operation is
called a plucking. The effect of a plucking on DNFβ(gt) is the replacement of some monomials by a proper submonomial. Hence, a plucking can only introduce an error for inputs in f−1(0). Pluckings are performed as long as possible, leading to the approximator DNF’β(gi). By the sunflower lemma,
the number of monomials in DNF’β (gi ) is at most l. Since at the beginning,
the number of monomials is at most 2l, less than 2l pluckings are performed.
After the construction of DNFβ (gi ) where gi is an ∧-gate, the size of some
monomials can exceed the upper bound r. We remove all these monomials
first. Since DNF’β(gi1) and also DNF’β(gi2) contain at most l monomials, at most l^2 monomials are removed. Then, the plucking procedure is applied to the remaining monomials, obtaining the approximator DNF’β(gi). Since the approximators of the direct predecessors of gi contain at most l monomials, at most l^2 pluckings are performed.
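The plucking operation can be sketched in Python as follows; here the set S(m) is identified with the monomial itself (its set of variables), a sunflower with p petals is found by exhaustive search, and the p monomials are replaced by the monomial corresponding to the core. The brute-force search only illustrates the operation; it is not the counting argument behind the sunflower lemma, and the names are illustrative.

```python
from itertools import combinations

def find_sunflower(sets, p):
    """Brute force: return p indices whose sets form a sunflower (all pairwise
    intersections equal a common core T), or None. Illustration only."""
    for idx in combinations(range(len(sets)), p):
        core = sets[idx[0]] & sets[idx[1]]
        if all(sets[i] & sets[j] == core for i, j in combinations(idx, 2)):
            return list(idx), core
    return None

def pluck(monomials, p):
    """Perform pluckings as long as a sunflower with p petals exists:
    replace the p monomials by the monomial corresponding to the core."""
    monomials = list(monomials)
    while True:
        found = find_sunflower(monomials, p)
        if found is None:
            return monomials
        idx, core = found
        monomials = [m for i, m in enumerate(monomials) if i not in idx] + [core]

# Example: {x1,x2}, {x1,x3}, {x1,x4} form a sunflower with core {x1} for p = 3.
print(pluck([frozenset([1, 2]), frozenset([1, 3]), frozenset([1, 4])], 3))
```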
For the application of sunflower-approximators to a monotone function f ∈ Bn, in order to obtain a large lower bound for its monotone complexity, there have to exist large sets T1 ⊆ f−1(1) and T0 ⊆ f−1(0) of inputs such that the following properties are fulfilled:
(a) There is a constant d1 < 1 such that DNF’β (gt ) computes the
constant function one or DNF’β (gt ) is satisfied by at most d1 |T1 |
inputs in T1 .
(b) There is a constant d0 < 1 such that DNF’β (gt ) computes the
constant function zero or DNF’β (gt ) is falsified by at most d0 |T0 |
inputs in T0 .
To get a lower bound for the monotone complexity of the function f ,
sunflower approximators are used in the following way. Assume that the
constant d1 exists. If DNF’β (gt ) computes the constant function one then
for no input c ∈ T0 , the value f (c) is computed correctly. Only pluckings
introduce an error for an input in T0 . Since a single plucking introduces
an error for at most n0 inputs in T0, at least |T0|/n0 pluckings are performed. Since at most l^2 pluckings are performed at a gate in β, we obtain the lower bound |T0|/(l^2 n0) for the monotone complexity of f. Otherwise, there are at least (1 − d1)|T1| inputs in T1 which do not satisfy DNF’β(gt). Only the removal of a monomial at an ∧-gate can introduce an error for an input in T1. The removal of one monomial of size larger than r can introduce an error for at most n1 inputs in T1. Hence, at least (1 − d1)|T1|/n1 monomials are removed. Since at most l^2 monomials are removed at an ∧-gate, we obtain the lower bound (1 − d1)|T1|/(l^2 n1) for the monotone complexity of f. The case that d0 exists can be discussed analogously.
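Written out, the counting behind both bounds combines the two estimates just mentioned; the following display is merely a restatement in LaTeX of the argument above, with no new assumptions.

```latex
% Each plucking errs on at most n_0 inputs of T_0 and each gate performs at most
% l^2 pluckings; analogously for the removed monomials and the inputs in T_1.
\text{\#gates} \;\ge\; \frac{\#\text{pluckings}}{l^2}
  \;\ge\; \frac{|T_0|/n_0}{l^2} \;=\; \frac{|T_0|}{l^2\, n_0},
\qquad
\text{\#gates} \;\ge\; \frac{(1-d_1)\,|T_1|/n_1}{l^2}
  \;=\; \frac{(1-d_1)\,|T_1|}{l^2\, n_1}.
```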
Before discussing both cases, let us review the needed properties of the
function f such that a large lower bound could be proved for f using the
approximation method. Since the approximation method obtains DNF’β (gi )
by the removal of all monomials of size larger than a given bound r, the first
property is that the number of inputs c ∈ T1 which fulfill a monomial of
size larger than r is small enough. Since at the output node gt , the value
of many inputs c ∈ T1 ∪ T0 has to be computed incorrectly, the second
property is that the number of inputs c ∈ T1 ∪ T0 with DNF’β(gt)(c) ≠ f(c)
is large enough. The approximation method takes care that the number of
monomials which are removed from DNFβ (gi ) to obtain DNF’β (gi ) is small
enough. This is done in the following way:
CNF-DNF-approximators:
At each gate gi , the CNF-formula CNFβ (gi ) is approximated by a CNF-
formula CNF’β (gi ) as well. CNF’β (gi ) is obtained from CNFβ (gi ) by the
removal of all clauses of size larger than a given bound k. Therefore, the additional property that the number of inputs c ∈ T0 which falsify a clause of size larger than k is small enough is needed. Depending on r and k, upper bounds hm(r) and hc(k) for the number of variables in a monomial of size at most r and in a clause of size at most k, respectively, are derived. Then, upper bounds h1(k)^r and h2(r)^k for the number of different prefixes of size exactly r of the monomials removed at an ∧-gate gi and for the number of different prefixes of size exactly k of the clauses removed at an ∨-gate gi are estimated. To get a large lower bound, p := h1(k) has to be non-constant.
Sunflower-approximators:
To bound the number of monomials removed from DNFβ (gi ) to obtain
DNF’β(gi), sunflower-approximators use parameters 2 ≤ r, p ≤ n where r is a given upper bound for the size of the monomials in DNF’β(gi) and p is the number of petals with respect to the sunflower lemma. Using the sunflower lemma, the number of monomials in the approximators is bounded by l := r!(p − 1)^r. Hence, an upper bound of l^2 for the number of monomials removed at an ∧-gate gi is obtained. To get a large lower bound, p has to
be non-constant.
Now we will discuss the case that the size of a monomial depends on
both kind of literals. We will give some evidence that in this case, the
approximation method cannot be extended to prove a super-linear lower
bound for the standard complexity of any Boolean function f ∈ Bn .
As described above, with respect to each known approximation method,
the upper bound for the number of monomials removed at an ∧-gate gi
for obtaining DNF’β(gi) from DNFβ(gi) is at least p^r where r is the upper
bound for the size of the monomials in DNF’β (gi ) and p is non-constant. In
the case that both kinds of literals are approximated, such an upper bound
seems to be too large. We will explain this for sizes which depend on the
number of negated and the number of non-negated variables in a monomial
m. Let r0 (r1 ) be the upper bound for the number of negated (non-negated)
variables of a monomial m in DNF’β (gi ); i.e., each monomial which does
not fulfill both bounds is removed from DNFβ (gi ). Let r := r0 + r1 . Note
that each monomial of length larger than r cannot fulfill both bounds. The
following lemma shows that 2^r monomials of length r are sufficient such
that each input c ∈ f −1 (1) fulfills at least one of these monomials.
Proof: Fix any r variables xi1, xi2, . . . , xir. Consider any c ∈ f−1(1). Let m′c := yi1 ∧ yi2 ∧ · · · ∧ yir where for 1 ≤ j ≤ r

    yij := xij    if cij = 1,
           ¬xij   if cij = 0.

By construction, m′c(c) = 1 and |m′c| = r. There are 2^r monomials which use exactly the variables xi1, xi2, . . . , xir.
Note that each submonomial of m′c is fulfilled by c as well. Hence, at most 2^r monomials of length at most r could suffice such that their removal could introduce an error for each input c ∈ f−1(1). Since p^r > 2^r for non-constant p, the approximation of the DNF-formula with respect to one ∧-gate could destroy the correct computation of f(c) for all c ∈ f−1(1). This shows that the first property seems not to be fulfilled if both kinds of literals are approximated.
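A small brute-force illustration of this observation in Python: over r fixed variables there are exactly 2^r monomials of length r, and every input fulfills exactly one of them, so removing all of them already affects every input; the variable indices and names below are illustrative.

```python
from itertools import product

# Over r fixed variables there are 2^r monomials of length r (one per sign pattern),
# and every assignment c satisfies exactly the one matching its bits c_{i_1},...,c_{i_r}.

r = 3
fixed_vars = [0, 1, 2]                      # indices i_1, ..., i_r (illustrative)
monomials = [tuple(zip(fixed_vars, signs))  # a monomial as ((variable, required bit), ...)
             for signs in product([0, 1], repeat=r)]
assert len(monomials) == 2 ** r

for c in product([0, 1], repeat=5):         # any assignment to n = 5 variables
    satisfied = [m for m in monomials if all(c[v] == bit for v, bit in m)]
    assert len(satisfied) == 1              # exactly one monomial m'_c is fulfilled
```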
It remains to consider the case that the size of a monomial depends only on one kind of literal. We discuss CNF-DNF-approximators first.
any approximation, DNFβ (gt ) contains for each c ∈ f −1 (1) a monomial mc
such that mc (c) = 1. Only for c = (1, 1, . . . , 1), we can exclude that mc
contains any negated variable. Furthermore, CNFβ (gt ) contains for each
c ∈ f −1 (0) an f -clause dc such that dc (c) = 0. Before any approximation,
each f -clause d in CNFβ (gt ) contains a literal in mc for each c ∈ f −1 (1).
We have to define large sets T1 ⊆ f −1 (1) and T0 ⊆ f −1 (0) of inputs such
that the following properties are fulfilled:
But without any further information about the structure of the network it
seems to be impossible to fulfill the third property without destroying the
first or the second property. Since the third property divides into two cases,
we have to consider the two situations that CNF’β (gt ) is not the constant
function one and that DNF’β (gt ) is not the constant function zero.
If CNF’β (gt ) is not the constant function one then CNF’β (gt ) must con-
tain at least one clause d. Note that d is a subclause of an f-clause d′.
Let pc(d) be a prime clause which is a subclause of the f -clause d′ . For
each c ∈ f −1 (1) such that c fulfills a literal contained in d, it cannot be
excluded that f (c) is computed correctly by CNF’β (gt ). Therefore, for each
c ∈ f −1 (1) such that d is not satisfied by c, cj = 1 if ¬xj is contained in
the clause d. d can contain each ¬xj where the variable xj is not contained
in pc(d). To get the property that a clause of size k could be fulfilled by at
most d1 |T1 | inputs in T1 , the set T1 should only contain inputs c ∈ f −1 (1)
such that there is a prime clause d(c) with cj = 1 for all xj not contained in
d(c). But for such an input c ∈ T1 , a monomial of size r + 1 which is fulfilled
by c need not be a submonomial of a prime implicant which is fulfilled by
c. Therefore, a small upper bound n1 for the number of inputs in T1 which
fulfill any monomial of size r + 1 cannot be proved in the usual way. Note
that usually, the set T1 contains exactly those inputs which correspond to
the prime implicants of the function. Because of the structure of the prime
clauses of the considered functions, I have found no other way to prove a
small upper bound.
If DNF’β (gt ) is not the constant function zero then DNF’β (gt ) must
contain at least one monomial m. Note that m is a submonomial of an
implicant m′ of f . Let pi(m) be a prime implicant which is a submonomial
of the implicant m′ . For each c ∈ f −1 (0) such that c falsifies a literal
contained in m, it cannot be excluded that f (c) is computed correctly by
DNF’β (gt ). Therefore, for each c ∈ f −1 (0) such that m is satisfied by c,
cj = 0 if ¬xj is contained in the monomial m. m can contain each ¬xj
where the variable xj is not contained in pi(m). To get the property that a
monomial of size r could be falsified by at most d0 |T0 | inputs in T0 , the set
T0 should only contain inputs c ∈ f −1 (0) with the property that there is a
prime implicant p(c) such that cj = 0 for all xj not contained in p(c). For
such a set T0 , a small upper bound n0 for the number of inputs in T0 which
falsify any clause of size k + 1 cannot be proved in the usual way. Because
of the structure of the prime implicants of the considered functions, I have
found no other way to prove a small upper bound.
This gives evidence that with respect to CNF-DNF-approximators, the
needed properties cannot be fulfilled if only one kind of literals is approxi-
mated and no other properties of the network are used.
Altogether, we have given some evidence that extended CNF-DNF-approximators alone cannot be used to prove a super-linear lower bound for the standard complexity of any Boolean function f ∈ Bn.
    mj = mj0 mj1

where mj0 contains exactly the negated variables and mj1 contains exactly the non-negated variables of mj. If mj does not contain any negated (non-negated) variable then mj0 = ε (mj1 = ε). After doing this, we write for DNFβ(g) = ⋁_{j=1}^{s} mj

    DNFβ(g) = ⋁_{j=1}^{s} mj0 mj1.
Case 2: op(gi ) = ¬xj , j ∈ {1, 2, . . . , n}.
Let
M (gi ) := {ε} and Di (ε) := {¬xj }.
Then
DNF’β (gi ) := ¬xj ε.
Obviously, ¬xj and DNF’β (gi ) compute the same function.
Case 3: gi is an ∨-gate with direct predecessors gi1 and gi2.
Before the performance of any plucking we have

and

    M(gi) := M(gi) \ {m1, m2, . . . , mp} ∪ {m(T)}

and

    Di(m) := ⋃_{m1 ∈ M(gi1), m2 ∈ M(gi2): m1 m2 = m} Di(m, m1, m2).
Then these sets are modified because of the performed pluckings as described in Case 3. This finishes the description of the extended approximators.

A sunflower-approximator for standard networks consists of two components. One component is the approximated part, the other component is the non-approximated part. The combinatorial properties of the approximated part are the same as in sunflower-approximators for monotone Boolean networks. With respect to the non-approximated part, no combinatorial properties usable in a lower bound proof can be extracted without any further information about the structure of the network. Moreover, the non-approximated part causes arguments used in the lower bound proof for the monotone complexity to fail.
The main problem is caused by the performance of pluckings. A nec-
essary property is that the number of inputs in T0 for which a plucking
introduces an error can be bounded. Let m1 , m2 , . . . , mp be p monomials
where the corresponding sets V (m1 ), . . . , V (mp ) form a sunflower with core
T . With respect to monotone networks, an input c ∈ T0 for which an error is
introduced because of the performance of the corresponding plucking cannot
fulfill any monomial in {m1, m2, . . . , mp}. Otherwise, the error with respect to c would already exist before the performance of the plucking. Exactly
this fact is used to get the needed upper bound. But with respect to stan-
dard networks, an input c ∈ T0 for which an error is introduced because of
the performance of the corresponding plucking can fulfill some monomials
in {m1 , m2 , . . . , mp }. This comes from the non-approximated part of the
monomials in the approximators. To see this consider c ∈ T0 for which an
error is introduced. Then there is m ∈ Di (m(T )) such that c fulfills mm(T ).
By construction, there is l ∈ {1, 2, . . . , p} such that m ∈ Di (ml ). Obviously,
c cannot fulfill the monomial ml . Otherwise, c would fulfill mml such that
the plucking does not introduce an error with respect to the input c. But
with respect to each j ∈ {1, 2, . . . , p} with m ∉ Di(mj), c could fulfill
monomial mj if each monomial in Di (mj ) contains a literal not fulfilled by
c. Therefore, we have to estimate an upper bound for the number of inputs
c ∈ T0 with
1. c satisfies a monomial mm(T ) where m ∈ Di (m(T )) and
2. for all 1 ≤ j ≤ p, c does not fulfill mj or each monomial in Di (mj )
contains a literal not fulfilled by c.
Without any knowledge about the structure of the monomials in the non-
approximated part of the approximators, I see no way to prove an upper
bound for such inputs in T0 which is small enough. This gives evidence that
sunflower-approximators alone cannot be used to prove a super-linear lower
bound for the standard complexity of any Boolean function f ∈ Bn .
2. ⋁_{i=1}^{r} m′i = 1
To prove a lower bound for the size of a standard network which com-
putes a given Boolean function f , we can only use the structure of the
function f. We need an intuition about which properties of the prime implicants or the prime clauses of a given function force a standard network to use a certain number of gates. How can we get such an intuition? The structure of the prime implicants of the exact-clique function tells us directly how we can realize the function by a standard network of linear size. Knowledge about upper bounds may help us to get an intuition of what makes the function easy or difficult. Hence, before looking for properties which enable us to prove a lower bound, I would look for upper bounds for the function under consideration. Which functions should be chosen in the attempt to get the first proof of a super-linear lower bound for the non-monotone complexity of an explicit Boolean function?
As for monotone networks, I would first try to obtain a super-linear lower bound for a Boolean function with many outputs such as the Boolean convolution, Boolean matrix multiplication or (1,1)-disjoint Boolean sums. A non-
constant lower bound with respect to each output would result in a super-
linear lower bound for the function. Firstly, upper bounds for the chosen
function should be investigated to learn something about the usefulness of
negations with respect to the considered function. Can we adapt any method
developed for monotone networks to obtain a super-linear lower bound? The
paper of Weiß [50] could be a good starting point.
How should we proceed with respect to the P versus NP problem? Currently, I am convinced that we are far away from proving a super-polynomial lower bound for the non-monotone complexity of any explicit Boolean function. On the other hand, the strongest barrier towards proving P ≠ NP could be that P = NP holds. To ensure that the whole time spent working on the P versus NP problem is not used to prove an impossible theorem, I would switch to trying to develop a polynomial algorithm for the solution of an NP-complete problem. Moreover, even in the case that P ≠ NP, understanding why it is not possible to develop a polynomial algorithm for the solution of the considered NP-complete problem could help to prove a lower bound for the standard complexity of the corresponding Boolean function. What kind of NP-complete problem should be chosen? I think that a good candidate would be an NP-complete optimization problem, for the following reasons.
A general approach for the solution of an optimization problem is the
following: Start with a feasible solution of the optimization problem under
consideration. As long as possible, apply an improvement step to the current feasible solution. The improvement step replaces a part of the current
solution by a part which is outside of the current solution such that the
obtained solution is feasible and the value of the objective function is im-
proved. To get a polynomial algorithm for the optimization problem, the
following properties should be fulfilled:
A well known optimization problem where this approach has led to a poly-
nomial time algorithm is the maximum matching problem. Let G = (V, E)
be an undirected graph. M ⊆ E is a matching if no two edges in M have
a common node. A matching M is maximal if there is no edge e ∈ E \ M
such that M ∪ {e} is a matching. A matching M is maximum if there exists
no matching M ′ ⊆ E of larger size. A maximal matching M is minimum
if there is no maximal matching M ′ of G such that |M ′ | < |M |. Given an
undirected graph G = (V, E), the maximum matching problem is finding a
maximum matching M ⊆ E. The minimum maximal matching problem is
finding a minimum maximal matching M ⊆ E. Note that the minimum
maximal matching problem is NP-complete [16]. Let M ⊆ E be a matching
of G. A node v ∈ V is M -free iff v is not incident to an edge in M .
In 1891, Petersen [32] introduced the technique of alternating paths. A
path P = v0 , v1 , . . . , vk is M -alternating if it contains alternately edges in
M and in E \ M . Let P = v0 , v1 , . . . , vk be a simple M -alternating path.
P is M-augmenting if v0 and vk are M-free. M ⊕ P denotes the symmetric difference of M and P; i.e., M ⊕ P = (M \ P) ∪ (P \ M). If P is an M-augmenting path then M ⊕ P is a matching of G, and |M ⊕ P| = |M| + 1.
In 1957, Berge [8] proved that a matching M ⊆ E is maximum iff there
is no M-augmenting path in G. Until 1963, for non-bipartite graphs, only exponential time algorithms for the construction of M-augmenting paths had been known. Then, in 1963, Edmonds [14] showed how to construct an M-augmenting path in a non-bipartite graph in polynomial time if an M-augmenting path exists. This resulted in a polynomial algorithm for the
maximum matching problem.
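As an illustration of this improvement-step paradigm, here is the standard augmenting-path algorithm for maximum matching in bipartite graphs in Python (Kuhn’s algorithm); Edmonds’ blossom algorithm, which handles M-augmenting paths in non-bipartite graphs, is considerably more involved and is not reproduced here. The graph representation and the names are illustrative choices.

```python
# Improvement-step paradigm for maximum matching, sketched for bipartite graphs:
# start with the empty matching and augment along M-augmenting paths.
# adj[u] lists the right-hand neighbours of left vertex u; names are illustrative.

def maximum_bipartite_matching(adj, n_right):
    match_right = [None] * n_right          # match_right[v] = left partner of v, or None

    def augment(u, visited):
        # Try to find an M-augmenting path starting in the M-free left vertex u.
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            if match_right[v] is None or augment(match_right[v], visited):
                match_right[v] = u          # flip the edges along the path (M ⊕ P)
                return True
        return False

    size = 0
    for u in range(len(adj)):               # each successful augmentation increases |M| by one
        if augment(u, set()):
            size += 1
    return size, match_right

# Example: a path on two left and two right vertices has a matching of size 2.
print(maximum_bipartite_matching([[0], [0, 1]], 2))   # (2, [0, 1])
```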
Berge’s characterization theorem has resulted in the construction of an
improvement step. Edmonds has shown how this improvement step can be
performed in polynomial time. If we try to apply an analogous approach to
an NP-complete optimization problem, we need a problem which allows
the proof of a characterization theorem which can be used for the construc-
tion of an improvement step. Then we can try to develop a polynomial
implementation of the improvement step. I think that the minimum maxi-
mal matching problem could be a good candidate for such an NP-complete
problem.
Acknowledgment
I would like to thank Stasys Jukna for many helpful discussions in 2017 after
my mistake.
References
[1] Aho, A. V., Hopcroft, J. E., Ullman, J. D.: The Design and Analysis of Computer Algorithms, Addison-Wesley (1974), 12.22, 12.37.
[2] Alon, N., Boppana, R. B.: The monotone circuit complexity of Boolean
functions, Combinatorica 7 (1987), 1–22.
[3] Amano, K., Maruoka, A.: The potential of the approximation method,
SIAM J. Comput. 33 (2004), 433–447.
[4] Andreev, A. E.: On a method for obtaining lower bounds for the com-
plexity of individual monotone functions, Soviet Math. Dokl. 31 (1985),
530–534.
[6] Andreev, A. E.: A method for obtaining efficient lower bounds for monotone complexity, https://10.1007/BFO1978380, translated from Algebra and Logic 26:1 (1987), 1–18.
[7] Berg, C., Ulfberg, S.: Symmetric approximation arguments for mono-
tone lower bounds without sunflowers, Comput. Complex. 8 (1999),
1–20.
[8] Berge, C.: Two theorems in graph theory, Proc. Natl. Acad. Sci. U.S.A.
43 (1957), 842–844.
[9] Blum, N.: An Ω(n^{4/3}) lower bound on the monotone network complex-
ity of the nth degree convolution, TCS 36 (1985), 59–69. (A preliminary
version is in Proc. 22nd FOCS (1981), 101–108.)
[14] Edmonds, J.: Paths, trees, and flowers, Canad. J. Math. 17 (1965),
449–467.
[15] Erdős, P., Rado, R.: Intersection theorems for systems of sets, J. London Math. Soc. 35 (1960), 85–90.
[18] Find, M. G., Golovnev, A., Hirsch, E. A., Kulikov, A. S.: A better-than-3n lower bound for the circuit complexity of an explicit function, Proc. 57th FOCS (2016), 89–98.
[21] Harnik, D., Raz, R.: Higher lower bounds on monotone size, Proc. 32nd
STOC (2000), 191–201.
[22] Jukna, S.: Combinatorics of monotone computations, Combinatorica
19 (1999), 65–85.
[25] Karchmer, M.: On proving lower bounds for circuit size, Proc. 8th
Structure in Complexity Theory (1993), 112–118.
[26] Kollár, J., Rónyai, L., Szabó, T.: Norm-graphs and bipartite Turán
numbers, Combinatorica 16 (1996), 399–406.
[28] Mehlhorn, K., Galil, Z.: Monotone switching circuits and Boolean ma-
trix product, Computing 16 (1976), 99–111.
[31] Paterson, M. S.: Complexity of monotone networks for Boolean matrix
product, TCS 1 (1975), 13–20.
[32] Petersen, J.: Die Theorie der regulären Graphen, Acta Mathematica 15
(1891), 193–220.
[34] Pippenger, N., Valiant, L. G.: Shifting graphs and their applications, JACM 23 (1976), 423–432.
[37] Razborov, A. A.: A lower bound on the monotone network complexity
of the logical permanent, Math. Notes Acad. Sci. USSR 37 (1985),
485–493.
[39] Razborov A. A., Rudich, S.: Natural proofs, JCSS 55 (1997), 24–35.
[42] Tardos, É.: The gap between monotone and non-monotone circuit complexity is exponential, Combinatorica 8 (1988), 141–142.
[47] Wegener, I.: A new lower bound on the monotone network complexity
of Boolean sums, Acta Informatica 13 (1980), 109–114.
[50] Weiß, J.: An n^{3/2} lower bound on the monotone complexity of the Boolean Convolution, Information and Control 59 (1983), 184–188.