# Theory of the Maximum Flow Problem

📅November 09, 2020 | ☕️25 minutes read

In this blog post, I am going to develop a basic formalism of one of my favourite computer science problems - the problem of maximum flow in a flow network. The fact that this problem can be expressed in the relatively abstract language of graph theory makes it a very powerful tool in the algorithmic toolbox, because many seemingly different problems can be reduced to it. Apart from its practical importance, this problem setting can be used to prove other results in graph theory, for instance Hall's marriage theorem, in a very intuitive way. Other applications include counting edge-disjoint paths in polynomial time (yes, that is possible 😀) and computing a maximum bipartite matching.

In this post I am going to derive the basic formalism of this problem and prove the famous MaxFlow = MinCut theorem:

The value of a maximum flow in a flow network equals the capacity of a minimum cut.

However, before we can understand what this actually means, we need to rigorously develop the intuitive notions of flow networks and flows. Some proof techniques and ideas presented here are a mix of the CLRS MaxFlow chapter (Chapter 26) and lecture notes written by Prof. Mayr. All important proofs are mine, but the convenient definition of a flow network using the skew symmetry constraint is taken from the lecture notes and Wikipedia.

This post is going to be a rather long one and it is divided into three large parts:

• Theory of flow networks and network flows - in this section we carefully and rigorously build the definitions and a series of lemmas that give us a strong background in the difficult problem we are dealing with
• The Max-Flow Min-Cut Theorem - in this section we state and prove the core theorem which will make us confident that we can build a correct, 'fast enough' algorithm for the maximum flow problem. Moreover, the algorithm we develop will feel completely natural after the section Theory of flow networks and network flows. One of the reasons this post exists is simply the feeling that anyone who knows that theory could invent such an algorithm.
• Ford Fulkerson's Algorithm - we develop the famous Ford Fulkerson Algorithm and prove it is correct and 'fast enough'. Moreover, this algorithm is a canonical example of the algorithmic technique known as 'incremental improvement', which is a very general algorithm design paradigm.

## Theory of flow networks and network flows

### Definition of a Flow Network

A flow network is a four-tuple $(G, c, s, t)$ such that: $G=(V, E)$ is a directed graph with two distinguished vertices, a source vertex $s$ and a sink vertex $t$. $c: V \times V \to \mathbb{R}$ is a capacity function such that:

• $\forall (u, v) \in E, c(u,v) \ge 0$
• $\forall (u, v) \not \in E, c(u,v) = 0$

Furthermore, we assume that each vertex is on some path from $s$ to $t$. In other words, we require a flow network to be connected, but not necessarily strongly connected.

Usually, we visualise flow networks as directed graphs where each edge has a weight representing the value of the capacity function at that edge. For a real-world analogy, it is useful to think of flow networks as pipelines where each corner of the pipeline is a vertex of the network and each pipe represents an edge with some capacity.
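Before moving on, it may help to see the definition in code. The sketch below is my own minimal representation (not part of the formal development): the capacity function is stored sparsely in a dict, so every pair that is not an edge implicitly has capacity $0$, exactly as the definition requires.

```python
from collections import defaultdict

def make_network(edges):
    """Build a capacity function from (u, v, capacity) triples.

    Pairs that are not edges default to capacity 0, matching the definition."""
    c = defaultdict(int)
    for u, v, cap in edges:
        assert cap >= 0, "capacities of edges must be non-negative"
        c[(u, v)] = cap
    return c

# A small pipeline: source 's', sink 't', two intermediate corners.
c = make_network([("s", "a", 4), ("s", "b", 2),
                  ("a", "t", 3), ("b", "t", 3), ("a", "b", 1)])
print(c[("s", "a")], c[("a", "s")])  # 4 0  (there is no edge from 'a' back to 's')
```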

### Definition of a Network Flow

Let $N = (G=(V, E), c, s, t)$ be a flow network as defined above. Then a network flow $f$ is a function $f: V \times V \to \mathbb{R}$ satisfying following conditions:

Capacity constraint: $\forall (u, v) \in V \times V, f(u,v) \le c(u,v)$

Intuitively, the flow in some pipe should not exceed capacity of the pipe.

Skew symmetry: $\forall (u, v) \in V \times V, f(u,v) = -f(v, u)$

Intuitively, a positive flow in one direction induces a negative flow in the opposite direction.

Flow conservation: $\forall u \in (V - \{s, t\}), \sum_{v \in V}f(u, v) = 0$

Intuitively, the net inflow at a corner where pipes meet equals the net outflow from that corner, except at the source vertex, which provides flow to the whole network, and the sink vertex, which absorbs all incoming flow. Observe that, due to skew symmetry, this formulation of flow conservation captures the intuitive meaning that the net flow into $u$ equals the net flow out of $u$.

We define the value of the flow to be $|f| = \sum_{v \in V}f(s, v)$. Hence the value of the flow is the net flow from the source vertex into the entire network.
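The three constraints are easy to check mechanically. Below is a sketch (helper names are my own) for a flow stored as a Python dict over vertex pairs, where a missing pair means $f(u,v)=0$; it verifies all three properties and computes the flow value $|f|$.

```python
def is_flow(f, c, vertices, s, t):
    """Check the capacity constraint, skew symmetry and flow conservation."""
    get = lambda d, u, v: d.get((u, v), 0)
    for u in vertices:
        for v in vertices:
            if get(f, u, v) > get(c, u, v):      # capacity constraint
                return False
            if get(f, u, v) != -get(f, v, u):    # skew symmetry
                return False
    return all(sum(get(f, u, v) for v in vertices) == 0  # flow conservation
               for u in vertices if u not in (s, t))

def flow_value(f, vertices, s):
    """|f| = sum over v of f(s, v)."""
    return sum(f.get((s, v), 0) for v in vertices)

# Push one unit along s -> a -> t (both capacities 2).
V = ["s", "a", "t"]
c = {("s", "a"): 2, ("a", "t"): 2}
f = {("s", "a"): 1, ("a", "s"): -1, ("a", "t"): 1, ("t", "a"): -1}
print(is_flow(f, c, V, "s", "t"), flow_value(f, V, "s"))  # True 1
```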

### Definition of a Max Flow Problem

Given the flow network $N$, compute the flow function $f$ such that $|f|$ is maximal amongst all possible flows.

Before we give an algorithm, we will state and prove a series of lemmas which will help us understand the structure of the problem we are dealing with. Here is the first one:

#### Lemma 1 (Hygiene lemma)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Let $u, v \in V$. Then:

1. $f(u, u) = 0$.
2. For any $v \in (V-\{s,t\})$, $\sum_{u \in V}f(u, v) = 0$.
3. If $(u, v)\not \in E$ and $(v, u)\not \in E$ then $f(u, v) = f(v, u) = 0$.

Proof:

1. By the skew symmetry, $f(u, u) = -f(u, u)$. Then $2f(u,u) = 0$ so $f(u, u) = 0$.
2. By the skew symmetry, we have $\sum_{u \in V}f(u, v) = \sum_{u \in V}-f(v, u)$. Now rewrite as $\sum_{u \in V}f(u, v) = -\sum_{u \in V}f(v, u)$. By the flow conservation, $\sum_{u \in V}f(v, u) = 0$. But then $\sum_{u \in V}f(u, v)= 0$, as claimed.
3. Since $(u,v) \not \in E$, we have $c(u,v) = 0$. Since $(v,u) \not \in E$, we have $c(v,u) = 0$. By the skew symmetry, $f(u,v) = -f(v,u)$. Since $f$ is a flow, $f(u,v) \le c(u, v) = 0$. Since $f$ is a flow, $f(v, u) \le c(v, u) = 0$. Then $-f(v, u) \ge 0$. But then $f(u, v)=-f(v,u) \ge 0$. Since $f(u,v) \le 0$ we conclude $f(u, v)=0$. Hence $f(u, v)=-f(v,u)=0$, as claimed.

This completes the proof. $\blacksquare$

#### Definition of flows on sets

Now that we have demonstrated that some intuitive properties of flows hold, we will define the action of flows on sets. This will simplify later proofs a lot, because we will be able to avoid many double summations and the mess that would otherwise arise.

So, let $X,Y \subseteq V$. Then define $f(X,Y)=\sum_{u \in X}{\sum_{v \in Y}{f(u, v)}}$. Now we introduce a bit of notational abuse:

• Let $f(u, Y)$ mean $f(u, Y)=f(\{u\}, Y)$
• Let $f(X, y)$ mean $f(X, y)=f(X, \{y\})$

Observe that we can express flow conservation as a single statement:

$\forall u \in (V-\{s,t\}), f(u,V) = 0$.
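In code, the set notation is just a double sum. Here is a tiny sketch (the function name is mine), together with a check that skew symmetry lifts from pairs to sets:

```python
def set_flow(f, X, Y):
    """f(X, Y): the double sum of f(u, v) over u in X and v in Y."""
    return sum(f.get((u, v), 0) for u in X for v in Y)

# One unit along s -> a -> t; skew symmetry lifts: f(X, Y) = -f(Y, X).
f = {("s", "a"): 1, ("a", "s"): -1, ("a", "t"): 1, ("t", "a"): -1}
X, Y = {"s", "a"}, {"t"}
print(set_flow(f, X, Y), set_flow(f, Y, X))  # 1 -1
```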

Now, we will generalise observations stated above to the set notation.

#### Lemma 2 (Lifting lemma)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Let $X,Y,Z \subseteq V$. Then:

1. $f(X, X) = 0$.
2. $f(X, Y) = -f(Y,X)$.
3. If $X$ and $Y$ are disjoint then:

• $f(X \cup Y, Z) = f(X, Z) + f(Y, Z)$
• $f(X, Y \cup Z) = f(X, Y) + f(X, Z)$

Proof:

1. By statement 2 (proved below, independently of this one), $f(X,X) = -f(X,X)$. Adding $f(X,X)$ to both sides gives $2f(X, X)=0$, which implies $f(X, X)=0$.
2. We have:
\begin{aligned} f(X, Y) &= \sum_{u \in X}{\sum_{v \in Y}{f(u, v)}} & (\text{By definition})\\ &= \sum_{u \in X}{\sum_{v \in Y}{-f(v, u)}} & (\text{By the skew symmetry})\\ &= -\sum_{u \in X}{\sum_{v \in Y}{f(v, u)}} & \\ &= -\sum_{v \in Y}{\sum_{u \in X}{f(v, u)}} & (\text{By interchanging the summation order})\\ &= -f(Y,X) & (\text{By definition}) \end{aligned}
3. We have:
\begin{aligned} f(X \cup Y, Z) &= \sum_{u \in {X \cup Y}}{\sum_{z \in Z}{f(u, z)}} & (\text{By definition})\\ &= \sum_{u \in X}{\sum_{z \in Z}{f(u, z)}} + \sum_{v \in Y}{\sum_{z \in Z}{f(v, z)}} - \sum_{u \in {X \cap Y}}{\sum_{z \in Z}{f(u, z)}} & (\text{Inclusion-exclusion over } X \cup Y)\\ &= \sum_{u \in X}{\sum_{z \in Z}{f(u, z)}} + \sum_{v \in Y}{\sum_{z \in Z}{f(v, z)}} & (\text{Since X and Y are disjoint.}) \\ &= f(X, Z) + f(Y, Z) & (\text{By definition})\\ \end{aligned}

Now we can easily prove the last statement by combining the previous one with statement 2 of this lemma.

\begin{aligned} f(X, Y \cup Z) &= -f(Y \cup Z, X) & (\text{By the Lifting Lemma - 2})\\ &= - (f(Y, X) + f(Z, X)) & (\text{By the previous statement})\\ &= -f(Y, X) -f(Z, X) & \\ &= f(X, Y) + f(X, Z) & (\text{By the Lifting Lemma - 2})\\ \end{aligned}

This completes the proof. $\blacksquare$

Now we prove an important observation about the flow value. Given skew symmetry and flow conservation, it is intuitive that the flow value equals the net flow into the sink vertex.

#### Lemma 3 (Alternative definition of the flow value)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Then $|f| = f(V, t)$.

Proof:

\begin{aligned} |f| &= f(s,V) & (\text{By definition of flow value})\\ &= f(V,V) - f(V-\{s\}, V) & (\text{Since } V = (V-\{s\}) \cup \{s\}, \text{Lifting Lemma - 3})\\ &= - f(V-\{s\}, V) & (\text{By the Lifting Lemma - 1, } f(V,V) = 0) \\ &= - (f(V-\{s,t\}, V) + f(t, V)) & (\text{Since } V-\{s\}= ( V-\{s,t\}) \cup \{t\}, \text{Lifting Lemma - 3})\\ &= - f(t, V) & (\text{By flow conservation in set notation, } f(u, V) = 0 \text{ for every } u \in V-\{s,t\}) \\ &= f(V, t) & (\text{By the Lifting Lemma - 2}) \end{aligned}

This completes the proof. $\blacksquare$

Many maximum flow algorithms rely on the idea of a residual network induced by some flow $f$. The purpose of a residual network is to capture the potential amount of flow that can still be added after the flow $f$ is applied. It is therefore important to rigorously define what we mean by saying that 'potentially more flow' can be added to some edge.

#### Definition of Residual Networks

Let $N = (G=(V, E), c, s, t)$ be a flow network as defined above. Let $f$ be a network flow in $N$. Then the residual network $N_f$ is a four tuple $(G_f=(V, E_f), c_f, s, t)$ where:

1. $c_f$ is a function $c_f: V \times V \to \mathbb{R}$ defined as:

$\forall (u, v) \in V \times V, c_f(u, v) = c(u, v) - f(u, v)$

2. $E_f= \{ (u, v) \in V \times V \mid c_f(u, v) \gt 0 \}$

Observe that residual capacities can induce edges that do not exist in the original network $N$. Those edges are usually called back edges. An example where this happens is when $f(u,v)=c(u,v)$ and $(v, u)$ does not exist in $E$. By skew symmetry, $f(v,u) = -f(u,v)$. Now $c_f(v, u) = c(v, u) - f(v,u) = 0 - (-f(u,v)) = f(u,v)$. This implies that the residual network contains an edge of residual capacity $f(u,v)$ which did not exist in the original network.
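The construction of $c_f$ and $E_f$ is mechanical, so a short sketch may make it concrete (capacities and flows stored as dicts over vertex pairs; names are mine). Note how saturating the only forward edge creates a back edge in the residual network, exactly as described above.

```python
def residual(c, f, vertices):
    """Compute residual capacities c_f(u, v) = c(u, v) - f(u, v) and edge set E_f."""
    cf, Ef = {}, set()
    for u in vertices:
        for v in vertices:
            cf[(u, v)] = c.get((u, v), 0) - f.get((u, v), 0)
            if cf[(u, v)] > 0:
                Ef.add((u, v))
    return cf, Ef

# Saturate the single edge (u, v) of capacity 3.
c = {("u", "v"): 3}
f = {("u", "v"): 3, ("v", "u"): -3}   # skew symmetry
cf, Ef = residual(c, f, ["u", "v"])
print(cf[("u", "v")], cf[("v", "u")])  # 0 3
print(Ef)                              # {('v', 'u')} -- the back edge
```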

Moreover, many maximum flow algorithms rely on the idea of ‘adding’ or ‘augmenting’ flows to other previously computed flows. It is very important to make that idea precise.

#### Lemma 4 (Augmentation lemma)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Let $N_f$ be the residual network induced by flow $f$. Let $g: V \times V \to \mathbb{R}$ be a flow function defined with respect to the residual network $N_f$. Then $f+g : V \times V \to \mathbb{R}$ defined as $\forall (u, v) \in V \times V, (f + g)(u,v) = f(u,v) + g(u,v)$ is a network flow in the original network $N$ with value $|f+g|=|f| + |g|$.

Proof: In order to prove this claim, we need to prove two subclaims:

1. $f + g$ is a network flow in $N$
2. $|f+g|=|f| + |g|$

We will firstly prove 1.

We prove this by checking that the capacity constraint, skew symmetry and flow conservation properties hold in the original network $N$. Let $(u,v) \in V \times V$. Then we have:

\begin{aligned} (f + g)(u, v) &= f(u,v) + g(u,v) & (\text{By definition of f + g}) \\ &\le f(u, v) + c_f(u,v) & (\text{Since g is a flow in }N_f) \\ &= f(u, v) + c(u,v) -f(u,v ) & (\text{By definition of } c_f(u,v)) \\ &= c(u,v) \end{aligned}

This proves the capacity constraint.

Now check the skew symmetry:

\begin{aligned} (f + g)(u, v) &= f(u,v) + g(u,v) & (\text{By definition of f + g}) \\ &= -f(v,u) -g(v, u) & (\text{Since f is a flow in } N \text{ and } \text{g is a flow in }N_f) \\ &= -(f(v, u) + g(v,u)) & (\text{Factoring out the minus sign}) \\ &= -(f + g)(v, u) & (\text{By definition of f + g}) \\ \end{aligned}

This proves the skew symmetry.

For the flow conservation, start from definition. Let $u \in V - \{s, t\}$. Then:

\begin{aligned} \sum_{v \in V}{(f + g)(u, v)} &= \sum_{v \in V}{[f(u, v) + g(u,v)]} & (\text{By definition of f + g}) \\ &= \sum_{v \in V}{f(u, v)} + \sum_{v \in V}{g(u, v)} & (\text{Splitting summations}) \\ &= 0 + 0 & (\text{Since f is a flow in } N \text{ and } \text{g is a flow in }N_f) \\ &= 0 \end{aligned}

This proves the flow conservation property. Hence the function $f+g$ is indeed a flow in $N$.

Now let us compute its value:

\begin{aligned} |f + g| &= \sum_{v \in V}{(f + g)(s, v)} & (\text{By definition of the value of f + g}) \\ &= \sum_{v \in V}{[f(s, v) + g(s,v)]} & (\text{By definition of f + g}) \\ &= \sum_{v \in V}{f(s, v)} + \sum_{v \in V}{g(s, v)} & (\text{Splitting summations}) \\ &= |f| + |g| & (\text{Since f is a flow in } N \text{ and } \text{g is a flow in }N_f) \\ \end{aligned}

This completes the proof. $\blacksquare$

Now we introduce the important idea of augmenting paths which are the key part of many network flow algorithms. Augmenting paths capture the notion of computing the amount by which the current flow can safely be augmented.

#### Definition of Augmenting paths in flow networks

Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Let $N_f$ be the residual network induced by the flow $f$. Then an augmenting path for $f$ is a simple path $\pi$ from the source vertex $s$ to the sink vertex $t$ in the residual network $N_f$. The residual capacity of the augmenting path $\pi$ is the value $c_f(\pi)$ defined as $c_f(\pi) = \min\{ c_f(u,v) \mid (u,v) \in \pi \}$.

Remark: Observe that, by definition of $E_f$, $c_f(u,v) > 0$ for every $(u,v) \in \pi$. Hence $c_f(\pi) > 0$, and this observation will be very important in the following sections.
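For intuition about how such paths are found in practice, here is a sketch (the helper names are mine) that locates a simple $s \to t$ path in the residual graph by breadth-first search and computes its residual capacity:

```python
from collections import deque

def augmenting_path(cf, vertices, s, t):
    """Return a simple s -> t path in the residual graph as a list of edges, or None."""
    parent = {s: None}
    queue = deque([s])
    while queue and t not in parent:
        u = queue.popleft()
        for v in vertices:
            if v not in parent and cf.get((u, v), 0) > 0:
                parent[v] = u
                queue.append(v)
    if t not in parent:
        return None
    path, v = [], t
    while parent[v] is not None:        # walk the BFS tree back from t to s
        path.append((parent[v], v))
        v = parent[v]
    return list(reversed(path))

def path_capacity(cf, path):
    """c_f(pi): the minimum residual capacity along the path."""
    return min(cf[e] for e in path)

cf = {("s", "a"): 2, ("a", "b"): 1, ("b", "t"): 3}
pi = augmenting_path(cf, ["s", "a", "b", "t"], "s", "t")
print(pi, path_capacity(cf, pi))  # [('s', 'a'), ('a', 'b'), ('b', 't')] 1
```

BFS guarantees the path is simple, since each vertex is assigned a parent at most once.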

In the following very important lemma, we rigorously capture the idea of safely augmenting the current flow with the ‘safe’ value originating from the residual network.

#### Lemma 5 (Flow-Path Augmentation lemma)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Let $\pi$ be the augmenting path for the flow $f$. Then $f_{\pi} : V \times V \to \mathbb{R}$ defined as:

$f_{\pi}(u, v) = \begin{cases} c_f(\pi) & \text{if (u, v)} \in \pi \\ -c_f(\pi) & \text{if (v, u)} \in \pi \\ 0 & \text{otherwise} \\ \end{cases}$

is a flow in the residual network $N_f$ of value $c_f(\pi)$.

Proof:

In order to prove this claim, we need to prove two subclaims:

1. $f_{\pi}$ is a network flow in $N_f$
2. $|f_{\pi}|=c_f(\pi)$

Let us check the properties of network flow.

Let $(u, v) \in V \times V$. First note that $c_f(u, v) = c(u, v) - f(u, v) \ge 0$, by the capacity constraint for $f$ in $N$. If $(u, v) \in \pi$, then $f_{\pi}(u, v) = c_f(\pi) \le c_f(u, v)$, since $c_f(\pi)$ is the minimum residual capacity along $\pi$. Otherwise, $f_{\pi}(u, v) \le 0 \le c_f(u, v)$ by definition of $f_{\pi}$.

This establishes the capacity constraint.

Let us consider the skew symmetry constraint.

Suppose $(u, v) \in \pi$.

• Then $f_{\pi}(u, v) = c_f(\pi)$. Since $(u, v) \in \pi$, the second case in the definition gives $f_{\pi}(v, u) = -c_f(\pi)$. Therefore $f_{\pi}(u, v)=-f_{\pi}(v, u)$, as desired.

Suppose $(u, v) \not \in \pi$.

• Then either $(v, u) \in \pi$ or $(v, u) \not \in \pi$. If $(v, u) \in \pi$ the claim holds by symmetry to the previous case. Otherwise we have $f_{\pi}(u, v) = -f_{\pi}(v, u) = 0$ by definition of $f_{\pi}$.

This establishes the skew symmetry.

Let $u \in V-\{s, t\}$. Now we expand the definition of flow conservation:

\begin{aligned} \sum_{v \in V}{f_{\pi}(u, v)} &= \sum_{v \in V, (u,v) \in \pi}{c_f(\pi)} + \sum_{v \in V, (v,u) \in \pi}{(-c_f(\pi))} & (\text{By definition of } f_{\pi}; \text{ all remaining terms are } 0)\\ &= 0 & (\text{Since } \pi \text{ is simple and } u \ne s, t: \text{ either } u \text{ does not lie on } \pi \text{ and both sums are empty, or } u \text{ is internal on } \pi \text{ and exactly one edge of } \pi \text{ leaves } u \text{ and exactly one enters } u) \end{aligned}

This establishes the flow conservation property. Therefore $f_{\pi}$ is indeed a network flow in $N_f$.

Now we compute the flow value from definition.

\begin{aligned} |f_{\pi}| &= \sum_{v \in V}{f_{\pi}(s, v)} & (\text{By definition of the value of } f_{\pi}) \\ &= f_{\pi}(s, w) & (\text{Since } \pi \text{ is a simple path starting at } s, \text{ exactly one edge } (s, w) \text{ of } \pi \text{ leaves } s \text{ and no edge of } \pi \text{ enters } s) \\ &= c_f(\pi) & (\text{By definition of } f_{\pi}) \\ \end{aligned}

This completes the proof. $\blacksquare$

Now we rigorously introduce the idea that augmenting the current flow with a flow arising from an augmenting path can only improve it.

#### Lemma 6 (Fundamental Estimate of Augmenting flows)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Let $\pi$ be the augmenting path for the flow $f$. Then $f + f_{\pi} : V \times V \to \mathbb{R}$ is a flow in the original flow network $N$ with the value of $|f| + c_f(\pi) > |f|$.

Proof: By the Flow-Path Augmentation lemma, $f_{\pi}$ is a flow in the residual network $N_f$ with value $|f_{\pi}|=c_f(\pi)$. By the Augmentation lemma, $f + f_{\pi}$ is the flow in the original flow network $N$ with the value of $|f| + c_f(\pi) > |f|$. This completes the proof. $\blacksquare$

Finally, we introduce the last formal idea we need: the graph cut.

#### Definition of the graph cut

Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Then a cut of $N$ is a pair of sets $(S, T)$ such that:

1. $s \in S$ and $t \in T$
2. $V = S \cup T, S \cap T = \emptyset$

Now, using the definition of the capacity of an edge in $N$, we lift it to the capacity of a cut in the following way. We define the capacity of a cut $(S, T)$ to be $c(S, T)= \sum_{u \in S, v \in T}{c(u,v)}$.

It is useful to think about capacity of a cut $(S, T)$ as the pipe capacity through the border between two disjoint parts of the network, $S$ and $T$.
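Computing the capacity of a cut is a direct double sum over the crossing pairs. A quick sketch (function name is mine) on a small example network:

```python
def cut_capacity(c, S, T):
    """c(S, T): total capacity of edges going from S into T."""
    return sum(c.get((u, v), 0) for u in S for v in T)

c = {("s", "a"): 4, ("s", "b"): 2, ("a", "t"): 3, ("b", "t"): 3, ("a", "b"): 1}
print(cut_capacity(c, {"s"}, {"a", "b", "t"}))   # 6: edges s->a and s->b cross
print(cut_capacity(c, {"s", "a", "b"}, {"t"}))   # 6: edges a->t and b->t cross
```

Note that the edge $(a, b)$ contributes to neither cut, since both its endpoints lie on the same side each time.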

Now we are coming to the important lemma which will help us prove the MaxFlow MinCut Theorem.

#### Lemma 7 (Relationship between cuts and flows)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Let $(S,T)$ be the cut of a graph according to the previous definition. Then: $|f|=f(S, T)$.

Remark: Before we give the proof, it is useful to appreciate the power and generality of the claim we are stating. We are claiming that the flow across any cut of the graph 'precisely matches' the value of the flow across the whole network.

Proof: We will argue using the Lemma 3, Alternative Definition of flow value.

\begin{aligned} |f| &= f(V, t) & (\text{By the alternative definition of flow value}) \\ &= f(V, V) - f(V, V - \{t\}) & (\text{Since } V = (V - \{t\}) \cup \{t\}, \text{ Lifting Lemma - 3}) \\ &= - f(V, V - \{t\}) & (\text{By Lifting Lemma - 1, } f(V, V) = 0) \\ &= f(V - \{t\}, V) & (\text{By Lifting Lemma - 2, skew symmetry}) \\ &= f(S, V) + f(T - \{t\}, V) & (\text{Since } t \in T, \text{ we have } V - \{t\} = S \cup (T - \{t\}); \text{ Lifting Lemma - 3}) \\ &= f(S, V) & (\text{Every } u \in T - \{t\} \text{ lies in } V - \{s, t\}, \text{ so flow conservation gives } f(T - \{t\}, V) = 0) \\ &= f(S, S) + f(S, T) & (\text{Since } V = S \cup T, \text{ Lifting Lemma - 3}) \\ &= f(S, T) & (\text{By Lifting Lemma - 1, } f(S, S) = 0) \end{aligned}

This completes the proof. $\blacksquare$

Now we are ready to state and prove the ultimate bound on the flow value.

#### Lemma 8 (Fundamental estimate of the flow value)

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Then: $|f|$ is bounded above by the capacity of any graph cut.

Remark: Observe how general this claim is.

Proof: Let $(S,T)$ be any graph cut. Then by (Relationship between cuts and flows), we have $|f|=f(S, T)$.

Now expand this definition:

\begin{aligned} |f| &= f(S, T) \\ &= \sum_{u \in S}{\sum_{v \in T}{f(u, v)}} \\ &\le \sum_{u \in S}{\sum_{v \in T}{c(u, v)}} & (\text {Since f is a flow}) \\ &= c(S, T) & (\text {By definition of cut capacity}) \end{aligned}

Since $(S,T)$ was arbitrary, the claim holds for any cut. This completes the proof. $\blacksquare$

Now we state and prove the most important Theorem in this post.

### The Max-Flow Min-Cut Theorem

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network and let $f: V \times V \to \mathbb{R}$ be a flow function. Then the following three statements are equivalent:

1. $f$ is a maximum flow in $N$
2. Residual network $N_f$ contains no augmenting path
3. $|f|$ = $c(S, T)$ for some cut $(S,T)$

Proof: We will prove the chain of implications ($1 \implies 2$), ($2 \implies 3$), ($3 \implies 1$).

($1 \implies 2$) We will prove by contradiction.

Assume that $f$ is a maximum flow in $N$ but there exists an augmenting path of $f$, say $\pi$ inside the residual network $N_f$. Then, by (Fundamental Estimate of Augmenting Flows), there exists a network flow g inside $N$ such that: $|g| = |f| + c_f(\pi) > |f|$. This contradicts the fact that $f$ is a maximum flow in $N$. Therefore residual network $N_f$ cannot have an augmenting path.

($2 \implies 3$) Now assume that the residual network $N_f$ contains no augmenting path. We will deduce that there exists a cut $(S,T)$ such that $|f| = c(S, T)$. Firstly, observe that since there is no augmenting path in $N_f$, the sink $t$ is not reachable from $s$ in $N_f$.

Now define $S=\{ u \in V \mid \text{there exists a path from } s \text{ to } u \text{ in } N_f \}$ and let $T=V-S$. We claim that $(S,T)$ is a cut. Since every vertex is either reachable from $s$ in $N_f$ or not, we have $V=S \cup T$, and $S \cap T = \emptyset$ holds by construction, since $T$ is the complement of $S$.

Now we claim that $s \in S$ and $t \in T$. Trivially, $s$ is reachable from itself, so $s \in S$. Since there exists no augmenting path in $N_f$, the sink $t$ is not reachable from $s$ in $N_f$, so $t \not\in S$ and hence $t \in T$.

Now we claim that $\forall (u, v) \text{ such that } u \in S, v \in T, f(u, v) = c(u, v)$. We will prove this by contradiction. By the capacity constraint, $f(u, v) \le c(u, v)$, so assume, for the sake of contradiction, that $c(u, v) > f(u, v)$. Then by definition of $c_f$, we have that $c_f(u, v) = c(u, v) - f(u, v) \gt 0$. But then $(u, v) \in E_f$. However, this implies the following:

Since $u \in S$, there exists a path from $s$ $\rightsquigarrow$ $u$ in $N_f$. Now, since $(u, v) \in E_f$, there exists a path $u \rightsquigarrow v$ in $N_f$. Now by composing those two paths, we get that there exists the following path $s \rightsquigarrow u \rightsquigarrow v$ in $N_f$. Since, $v$ is reachable from $s$ we have $v \in S$. Since $(S, T)$ is a cut, we deduce $v \not \in T$. This is a contradiction, since $v \in T$ by assumption. Therefore, the assumption that $c(u, v) > f(u, v)$ is impossible, so we deduce $c(u, v) \le f(u, v)$. By capacity constraint, it must be the case that $c(u, v) = f(u, v)$.

Therefore, $\forall (u, v) \text{ such that } u \in S, v \in T, f(u, v) = c(u, v)$. Now consider the flow value $|f|$. By the Relationship between cuts and flows, we have $|f| = f(S,T)$.

Now:

\begin{aligned} |f| &= f(S,T) & (\text{By the Relationship between cuts and flows}) \\ &= \sum_{u \in S}{\sum_{v \in T}{f(u, v)}} & (\text{By definition of the flow from set to set}) \\ &= \sum_{u \in S}{\sum_{v \in T}{c(u, v)}} & (\text{Since } f(u, v) = c(u, v) \text{ for all } u \in S, v \in T) \\ &= c(S,T) & (\text{By definition of cut capacity}) \\ \end{aligned}

Therefore ($2 \implies 3$), as desired.

Now it is left to show that ($3 \implies 1$).

Assume that $|f| = c(S, T)$ for some cut $(S,T)$. By the Fundamental estimate of the flow value, every flow $g$ satisfies $|g| \le c(S, T) = |f|$. Hence $f$ is a maximum flow.

This completes the proof. $\blacksquare$
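Note that the construction in ($2 \implies 3$) is itself algorithmic: given a flow whose residual network contains no augmenting path, a graph search from $s$ over the residual edges produces a minimum cut. A sketch (names are mine):

```python
def min_cut_from_flow(c, f, vertices, s):
    """S = vertices reachable from s in the residual network, T = the rest."""
    cf = lambda u, v: c.get((u, v), 0) - f.get((u, v), 0)
    S, stack = {s}, [s]
    while stack:                      # depth-first search over residual edges
        u = stack.pop()
        for v in vertices:
            if v not in S and cf(u, v) > 0:
                S.add(v)
                stack.append(v)
    return S, set(vertices) - S

# A saturating flow on a single edge s -> t of capacity 1.
c = {("s", "t"): 1}
f = {("s", "t"): 1, ("t", "s"): -1}
S, T = min_cut_from_flow(c, f, ["s", "t"], "s")
print(S, T)  # {'s'} {'t'} -- a cut of capacity 1 = |f|
```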

## Ford-Fulkerson’s Algorithm

### Development of the Ford-Fulkerson algorithm

Now we summarise the insights about the maximum flow problem we have deduced so far:

• The Flow-Path Augmentation lemma (Lemma 5) tells us that every augmenting path $\pi$ in the residual network induced by a flow gives a valid flow $f_{\pi}$ in that residual network, of positive value $c_f(\pi)$.
• The Fundamental estimate of augmenting flows (Lemma 6) tells us that, as long as an augmenting path exists, we can strictly improve the current flow by simply augmenting it with $f_{\pi}$.
• The Max-Flow Min-Cut Theorem tells us that we can safely stop any computation once we reach a flow for which its induced residual network contains no augmenting path. By the same Theorem, we are sure that the current flow is indeed the maximum flow.

Now the intuition tells us that to solve the maximum flow problem, we can perform the following steps:

1. Initialize the current flow $f$ to be the trivial flow of value $0$.
2. While there exists an augmenting path in the residual network $N_f$, augment the current flow along it, using the construction of the Flow-Path Augmentation lemma.
3. When there exists no augmenting path in $N_f$, output the flow $f$ and its value $|f|$. The result is a maximum flow!

This is exactly the Ford-Fulkerson Algorithm.
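The three steps translate almost line-for-line into code. The sketch below is my own rendition (it finds each augmenting path by breadth-first search, the Edmonds-Karp flavour of the algorithm) and maintains the flow as a skew-symmetric dictionary, exactly as in the definitions above:

```python
from collections import defaultdict, deque

def ford_fulkerson(c, vertices, s, t):
    """Return the value of a maximum flow from s to t, given integral capacities."""
    f = defaultdict(int)                               # step 1: the zero flow
    cf = lambda u, v: c.get((u, v), 0) - f[(u, v)]     # residual capacity
    while True:
        # Step 2: look for an augmenting path in N_f via BFS.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in vertices:
                if v not in parent and cf(u, v) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            # Step 3: no augmenting path, so the current flow is maximum.
            return sum(f[(s, v)] for v in vertices)
        # Recover the path and its residual capacity c_f(pi).
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(cf(u, v) for u, v in path)
        # Augment f by f_pi, preserving skew symmetry.
        for u, v in path:
            f[(u, v)] += delta
            f[(v, u)] -= delta

c = {("s", "a"): 4, ("s", "b"): 2, ("a", "t"): 3, ("b", "t"): 3, ("a", "b"): 1}
print(ford_fulkerson(c, ["s", "a", "b", "t"], "s", "t"))  # 6
```

In the example network, the augmenting paths found are $s \to a \to t$ (3 units), $s \to b \to t$ (2 units), and $s \to a \to b \to t$ (1 unit), which together give the maximum flow of value 6.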

Now we state and prove the last theorem in this post (I promise 😉).

### Correctness and Running time of Ford-Fulkerson

Claim: Let $N = (G=(V,E), c, s, t)$ be a flow network. Assume that every capacity in the network is integral. Then the Ford-Fulkerson Algorithm terminates after at most $|f^{\star}|$ iterations and correctly computes the maximum flow $f^{\star}$, for a total running time of $O(|E| \cdot |f^{\star}|)$.

Proof:

Let $f^{\star}$ be the maximum flow and let $|f^{\star}|$ be its value.

Firstly, we need to prove that the Ford-Fulkerson Algorithm terminates. Observe that the while loop of step 2 performs exactly the construction of the Flow-Path Augmentation lemma (Lemma 5), so every iteration of the while loop results in a new valid network flow. Moreover, by the Fundamental estimate of augmenting flows, every resulting flow has a strictly greater value than the previous flow. Since all capacities are integral, every residual capacity is integral, so each iteration improves the flow value by at least 1. The maximum flow has value $|f^{\star}|$, hence the while loop executes at most $|f^{\star}|$ times and the algorithm terminates. Each iteration finds an augmenting path by a graph search over $N_f$, which costs $O(|E|)$ (recall every vertex lies on some path from $s$ to $t$, so $|V| = O(|E|)$), and the augmentation itself also costs $O(|E|)$. Therefore the total running time is $O(|E| \cdot |f^{\star}|)$.

Now we need to prove that step 3 outputs a maximum flow. Observe that the algorithm stops when there exists no augmenting path in the residual network induced by the current flow $f$. By The Max-Flow Min-Cut Theorem, $f$ must then be a maximum flow. Hence $|f|=|f^{\star}|$.

This completes the proof. $\blacksquare$

In one of the following posts, we will apply the algorithm and theory we developed to solve some seemingly unrelated problems by using reduction to the maximum flow.
