CS 411 Fall 2025
Outline & Supplemental Notes
for October 3, 2025
Outline
The material below was started on October 3, 2025 and finished on October 6, 2025.
Binary Tree Traversals [L 5.3]
- Traversals
- Traverse a data structure: visit all items (for a tree, all nodes).
- We have seen graph traversals.
- Binary Trees
- Composed of nodes.
- Possibly empty (no nodes).
- Nonempty Binary Tree has a unique root node.
- Each node has a data item and two subtrees: left and right—and the distinction between the two is important.
- Binary Tree Traversals
- Recursively defined; Divide and Conquer strategy.
- Three kinds
- Preorder:
visit root, preorder left subtree, preorder right subtree.
- Note: Special case of DFS.
- Inorder: inorder left subtree, visit root, inorder right subtree.
- Postorder: postorder left subtree, postorder right subtree, visit root.
- Implementation & Applications
- Easily implemented using recursive functions.
- Example of expression parse tree.
- Using different traversals and printing with different delimiters gives expression in various programming languages.
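The three traversals above can be sketched with short recursive functions. The following is a minimal sketch in Python; the `Node` class and the `(2 + 3) * 4` parse tree are illustrative assumptions, not part of the course materials.

```python
class Node:
    """A binary-tree node: a data item plus left and right subtrees."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def preorder(t, visit):
    """Visit root, then preorder left subtree, then preorder right subtree."""
    if t is not None:
        visit(t.data)
        preorder(t.left, visit)
        preorder(t.right, visit)

def inorder(t, visit):
    """Inorder left subtree, visit root, inorder right subtree."""
    if t is not None:
        inorder(t.left, visit)
        visit(t.data)
        inorder(t.right, visit)

def postorder(t, visit):
    """Postorder left subtree, postorder right subtree, visit root."""
    if t is not None:
        postorder(t.left, visit)
        postorder(t.right, visit)
        visit(t.data)

# Expression parse tree for (2 + 3) * 4
tree = Node('*', Node('+', Node('2'), Node('3')), Node('4'))

out = []
inorder(tree, out.append)
print(' '.join(out))    # infix order: 2 + 3 * 4 (parentheses lost)
out = []
postorder(tree, out.append)
print(' '.join(out))    # postfix (RPN): 2 3 + 4 *
```

Note how the inorder traversal of a parse tree prints the expression in familiar infix notation, while postorder yields postfix (reverse Polish) notation, as mentioned above.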
Multiplication [L 5.4]
- Thought: Multiplication of large objects can have unexpectedly efficient Divide and Conquer algorithms.
- Multiplication of Integers
- Arbitrarily large integers.
- Problem: Compute the product of two very large integers—say two \(n\)-digit integers.
- Basic operation: multiplying, adding, or subtracting two single-digit integers.
- Brute-force algorithm: \(\Theta(n^2)\).
- Karatsuba’s Algorithm
[A. Karatsuba 1962]
- Two \(2\)-digit numbers in any base (say \(b\)) can be multiplied using \(3\) multiplications, instead of \(4\).
- For two \(n\)-digit numbers, pretend they are \(2\)-digit numbers in base \(b^{n/2}\), and apply above idea recursively.
- Result: Multiply two \(n\)-digit numbers using about \(3^{\log_2 n} = n^{\log_2 3} \approx n^{1.585}\) basic ops.
- Other algorithms have been developed.
- Generally, algorithms that require fewer multiply operations require more add/subtract operations.
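The base-\(b^{n/2}\) splitting idea above can be sketched in Python as follows. This is a minimal illustration, assuming base 10 and using string length for the digit count; a production implementation would work with machine words.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x, y using 3 recursive
    multiplications per level instead of 4 (Karatsuba, 1962)."""
    if x < 10 or y < 10:                  # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y)))
    b = 10 ** (n // 2)                    # treat x, y as 2-digit numbers in base b
    x_hi, x_lo = divmod(x, b)
    y_hi, y_lo = divmod(y, b)
    p1 = karatsuba(x_hi, y_hi)            # product of high parts
    p2 = karatsuba(x_lo, y_lo)            # product of low parts
    p3 = karatsuba(x_hi + x_lo, y_hi + y_lo)   # one multiply covers both cross terms
    # x*y = p1*b^2 + (x_hi*y_lo + x_lo*y_hi)*b + p2,
    # and the middle coefficient equals p3 - p1 - p2.
    return p1 * b * b + (p3 - p1 - p2) * b + p2

print(karatsuba(1234, 5678))   # 7006652
```

The saving comes from computing the two cross terms with the single product `p3` and two extra additions/subtractions, which is the trade-off noted above.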
- Multiplication of Matrices
- Matrices.
- Problem: Compute the product of two square matrices. Say two \(n\times n\) matrices.
- Basic operation: multiplication, addition, or subtraction of numbers of whatever type the entries in the matrix are (or just multiplication).
- Brute-force algorithm: \(\Theta(n^3)\).
- Strassen’s Algorithm [V. Strassen 1969]
- Two \(2\times 2\) matrices can be multiplied using \(7\) multiplications, instead of \(8\).
- For two \(n\times n\) matrices, pretend they are \(2\times 2\) matrices with entries that are matrices, and apply above idea recursively.
- Result: Multiply two \(n\times n\) matrices using about \(7^{\log_2 n} = n^{\log_2 7} \approx n^{2.807}\) basic ops.
- As with integer multiplication, other algorithms have been developed (see Supplemental Notes).
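The quadrant-splitting idea above can be sketched in Python. This is a minimal sketch, assuming \(n\) is a power of 2 and representing matrices as lists of lists; the helper names (`add`, `sub`, `strassen`) are illustrative.

```python
def add(A, B):
    """Entrywise sum of two equal-size matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    """Entrywise difference of two equal-size matrices."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n-by-n matrices (n a power of 2) using 7 recursive
    multiplications per level instead of 8 (Strassen, 1969)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four h-by-h quadrants.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Strassen's 7 products (a naive split would need 8).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the quadrants of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```

The extra matrix additions/subtractions cost only \(\Theta(n^2)\) per level, so saving one of the eight recursive multiplications lowers the exponent from 3 to \(\log_2 7\).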
Supplemental Notes
Large Integer Multiplication in Practice
Multiplication of large integers has practical applications, particularly in cryptography. Integers multiplied in such applications can reach sizes where Karatsuba’s Algorithm is preferred to Brute-force integer multiplication.
Large-integer packages will typically include an implementation of Karatsuba’s Algorithm and perhaps other similar algorithms. They may choose the algorithm to be used for a particular multiplication based on the sizes of the numbers to be multiplied.
Matrix Multiplication Algorithms with a Lower Exponent
Strassen’s Algorithm is \(\Theta(n^{2.807})\), approximately. In 1990, a matrix-multiplication algorithm with a significantly lower exponent was found [D. Coppersmith & S. Winograd 1990]: about \(2.375477\). Over the years, other algorithms have been found, giving slightly smaller exponents, with minor improvements being offered by very recent research. The lowest exponent that I am currently aware of is that of an algorithm published by Alman et al. in 2024.
| Researchers | Publication Year | Exponent (approximate) |
|---|---|---|
| D. Coppersmith & S. Winograd | 1990 | \(2.375477\) |
| A. J. Stothers | 2010 | \(2.373\) |
| V. V. Williams | 2011 | \(2.3728642\) |
| F. Le Gall | 2014 | \(2.3728639\) |
| J. Alman & V. V. Williams | 2020 | \(2.37286\) |
| R. Duan, H. Wu, & R. Zhou | 2022 | \(2.371866\) |
| V. V. Williams, Y. Xu, Z. Xu, & R. Zhou | 2023 | \(2.371552\) |
| J. Alman, R. Duan, V. V. Williams, Y. Xu, Z. Xu, & R. Zhou | 2024 | \(2.371339\) |
Unlike Strassen’s Algorithm, the algorithms in the above table are generally considered to be impractical, because their actual running time is only better than Strassen’s Algorithm for extremely large matrices. Algorithms with this property—asymptotically fast, but only advantageous for impractically large problems—are sometimes called galactic algorithms.
And it is not uncommon for Brute-force matrix multiplication to be preferred even over Strassen’s Algorithm. The Brute-force method is typically faster for the sizes of matrices that are actually multiplied—particularly in view of its cache friendliness.
Some researchers think there might be an \(O(n^k)\) algorithm for matrix multiplication for every real number \(k > 2\). However, we seem to be far from figuring out how these might work.