CS 411 Fall 2025
Outline & Supplemental Notes
for November 5, 2025
Outline
Spanning Trees II: Kruskal’s Algorithm & Union-Find (cont’d) [L 9.2]
- Optimizing a Disjoint-Set Forest
- Union by Rank optimization. Keep track of the rank of each rooted tree (for now, it’s height). Store the rank in the root vertex’s node. When doing a Union, point tree of smaller rank at tree of larger rank. If ranks are equal, then increment the rank in the new root.
- Path Compression optimization. When doing a Find, for each node visited, point its parent pointer at the root.
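The two optimizations above can be sketched together in a short class. This is an illustrative sketch, not course code; the names (`DisjointSets`, `find`, `union`) are assumptions.

```python
class DisjointSets:
    """Disjoint-set forest with union by rank and path compression (sketch)."""

    def __init__(self, n):
        # Each of the n items starts as the root of its own one-node tree.
        self._parent = list(range(n))
        self._rank = [0] * n          # rank is meaningful only at roots

    def find(self, x):
        # First pass: locate the root.
        root = x
        while self._parent[root] != root:
            root = self._parent[root]
        # Second pass (path compression): point each visited node at the root.
        while self._parent[x] != root:
            self._parent[x], x = root, self._parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return                    # already in the same set
        # Union by rank: point the tree of smaller rank at the larger.
        if self._rank[rx] < self._rank[ry]:
            rx, ry = ry, rx
        self._parent[ry] = rx
        if self._rank[rx] == self._rank[ry]:
            self._rank[rx] += 1       # equal ranks: new root's rank grows
```

Note that once path compression is in play, the stored rank is only an upper bound on a tree's height, which is why it is called "rank" rather than "height."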
- Kruskal’s Algorithm: Implementation
- Given: adjacency lists, weight matrix.
- Operation:
- Create list of all edges in graph, sorted by weight, ascending.
- Create Union-Find structure with a one-point set for each vertex.
- Initialize spanning tree (list of edges, starts empty).
- Iterate through edges, least weight to greatest weight. If an edge’s endpoints lie in different sets, then add the edge to the spanning tree, and union the endpoints of the edge.
- Simple optimization: stop when \(n-1\) edges have been added (\(n\) is the number of vertices in the graph).
- Return the spanning tree.
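The steps above can be sketched as follows. The edge-list format (weight, endpoint, endpoint) is an assumption, and the Union-Find structure is inlined; path halving stands in here for full path compression.

```python
def kruskal(num_vertices, edges):
    """Return a minimum spanning tree as a list of (w, u, v) edges (sketch)."""
    # Union-Find structure: a one-point set for each vertex.
    parent = list(range(num_vertices))
    rank = [0] * num_vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []                          # spanning tree: list of edges, starts empty
    for w, u, v in sorted(edges):      # iterate least weight to greatest
        ru, rv = find(u), find(v)
        if ru != rv:                   # endpoints lie in different sets
            tree.append((w, u, v))
            if rank[ru] < rank[rv]:    # union by rank
                ru, rv = rv, ru
            parent[rv] = ru
            if rank[ru] == rank[rv]:
                rank[ru] += 1
            if len(tree) == num_vertices - 1:
                break                  # simple optimization: stop early
    return tree
```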
- Kruskal’s Algorithm: Analysis & the Ackermann Function
- Analyzing Union-Find
- For an unoptimized Union-Find Structure (Quick Union), Find and Union are both \(\Theta(n)\).
- Adding the Union by Rank optimization makes Find and Union both \(\Theta(\log n)\).
- Adding both optimizations is as above, but the amortized time per operation is \(O\bigl(\alpha(n)\bigr)\), where \(\alpha\) is the extremely slow-growing inverse Ackermann function: not quite amortized constant time, but very close. (See the Supplemental Notes.)
- Analyzing Kruskal’s Algorithm. With the optimized Union-Find structure, building the tree takes close to \(\Theta(|E|)\) time. The most time-consuming operation is sorting the list of edges: \(\Theta(|E| \log |E|)\).
Shortest Path: Dijkstra’s Algorithm [L 9.3]
- Shortest Path
- Shortest-Path Problem: given a digraph with weights on the edges, and given vertices \(x\), \(y\) in the digraph, find the minimum-weight directed path (if any exists) from \(x\) to \(y\).
- Single-Source Shortest-Path Problem: given a digraph with weights on the edges, and given vertex \(x\) in the digraph, find, for each vertex \(y\) in the digraph, the minimum-weight directed path (if any exists) from \(x\) to \(y\).
- The union of all the paths from the SSSP problem forms a tree. If every vertex is reachable from \(x\), then it is a spanning tree. It is generally not a minimum spanning tree.
- All of the above generalizes easily to undirected graphs. Digraph algorithms typically work for graphs without modification.
- Dijkstra’s Algorithm: Idea
- Solves Single-Source Shortest-Path Problem.
- Like Prim’s Algorithm: reached and unreached vertices.
- Select next vertex to reach (\(y\)) to be a neighbor of a reached vertex \(x\) such that the distance from the start vertex to \(y\), along a path passing through \(x\), is minimized.
- Dijkstra’s Algorithm: Implementation
- If we select the next vertex to be reached using a priority queue holding arcs, then we can implement Dijkstra’s Algorithm as a simple modification of Prim’s Algorithm.
- Make an array that holds the distance of each vertex from the start vertex. Each item is initialized to \(+\infty\).
- When we reach a vertex, set its distance in the array.
- The priority of an arc in the priority queue is its weight plus the distance of its tail endpoint from the start vertex.
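The modification described above can be sketched as follows. The adjacency-list format (a dict mapping each vertex to a list of `(neighbor, weight)` pairs) is an assumption; `heapq` plays the role of the priority queue.

```python
import heapq

def dijkstra(adj, start):
    """Distances from start to every vertex; unreachable vertices stay at +inf (sketch)."""
    INF = float("inf")
    dist = {v: INF for v in adj}       # distance array, initialized to +infinity
    dist[start] = 0
    reached = {start}
    # Priority queue of candidate arcs out of reached vertices,
    # keyed by (distance of tail) + (arc weight).
    pq = [(w, y) for y, w in adj[start]]
    heapq.heapify(pq)
    while pq:
        d, y = heapq.heappop(pq)
        if y in reached:
            continue                   # stale arc: head already reached
        reached.add(y)
        dist[y] = d                    # when we reach a vertex, set its distance
        for z, w in adj[y]:
            if z not in reached:
                heapq.heappush(pq, (d + w, z))
    return dist
```

As with Prim's Algorithm, stale queue entries are simply skipped when popped, rather than being updated in place.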
- Dijkstra’s Algorithm: Analysis
- Our modifications of Prim’s Algorithm do not change the overall efficiency.
Supplemental Notes
The Ackermann Function
The Ackermann function [W. Ackermann 1928] is one of several mathematical functions that exhibit extremely fast increase.
Here is a standard formulation. Function \(A\) has two nonnegative integer parameters. Its value is a nonnegative integer.
\[ A(m,n) = \begin{cases} n+1, & \text{if \(m = 0\);}\\ A(m-1, 1), & \text{if \(m \gt 0\) and \(n = 0\);}\\ A(m-1, A(m, n-1)), & \text{if \(m,n \gt 0\).} \end{cases} \]
Function \(A\) is a standard example of a function that cannot be computed if the only repetition structure allowed is a loop in which the number of iterations is known beforehand. That is, function \(A\) is not primitive recursive.
We can think about \(A(m,n)\) as follows. Essentially, parameter \(m\) determines the operation to apply (addition, multiplication, exponentiation, etc.), while \(n\) determines the number it is applied to. Here are some values.
| \(m\) | \(A(m, n)\) |
|---|---|
| \(0\) | \(n+1\) |
| \(1\) | \(n+2=2+(n+3)-3\) |
| \(2\) | \(2n+3=2(n+3)-3\) |
| \(3\) | \(2^{n+3}-3\) |
| \(4\) | \(\underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{n+3}-3\) |
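The rows of the table can be checked against the recursive definition directly. Below is a straight transcription of the definition; it is usable only for very small arguments, since the values (and the recursion depth) blow up extremely fast.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)   # the recursion is deep even for small m, n

@lru_cache(maxsize=None)
def A(m, n):
    """The two-parameter Ackermann function, computed directly from its definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))
```

For example, \(A(2,5) = 2 \cdot 5 + 3 = 13\) and \(A(3,3) = 2^{6} - 3 = 61\), matching the table. Anything with \(m \ge 4\) and \(n \ge 1\) is hopeless to compute this way.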
We can define a single-parameter version of the above function as follows:
\[ f(n) = A(n,n). \]
Here are some values of this single-parameter Ackermann function.
| \(n\) | \(f(n)\) |
|---|---|
| \(0\) | \(1\) |
| \(1\) | \(3\) |
| \(2\) | \(7\) |
| \(3\) | \(61\) |
| \(4\) | \(2^{2^{2^{2^{2^{2^{2}}}}}}-3 = 2^{2^{2^{65536}}}-3\) |
| \(5\) | A value that is difficult to imagine |
Function \(f\) grows extremely quickly. Its inverse, often denoted by \(\alpha\), grows extremely slowly. Formally, we could define \(\alpha(k)\) to be the least integer \(n\) such that \(f(n) \ge k\). This is the inverse Ackermann function.
The value of \(\alpha(k)\) can be arbitrarily large. However, for every number \(k\) that matters in computing, or ever will, we have \(\alpha(k) \le 4\).
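Since \(f\) is increasing, computing \(\alpha(k)\) for any practically sized \(k\) only requires the handful of values \(f(0), \dots, f(3)\) from the table above; \(f(4)\) is far too large ever to be reached by an input that fits in a computer. A sketch, using those precomputed values:

```python
# f(0)..f(3), taken from the table of the single-parameter Ackermann function.
F_SMALL = [1, 3, 7, 61]

def alpha(k):
    """Least n with f(n) >= k, for any k a physical computer could hold (sketch)."""
    for n, fn in enumerate(F_SMALL):
        if fn >= k:
            return n
    return 4   # any larger k is still far below f(4)
```

So, for amortized-analysis purposes, \(\alpha(k)\) behaves like a small constant even though it is unbounded in principle.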