
CS 311 Fall 2007
Midterm Exam Review Problems, Part 2

This is the second of three sets of review problems for the midterm exam. For all three sets, see the class web page.

Problems

Review problems are given below. Answers are in the Answers section of this document. Do not turn these in.

  1. Write a function div, declared as follows:
    double div(double a, double b);
    Function div should divide a by b and return the result. If this is impossible, because b is zero, it should throw an exception of type std::overflow_error, with an appropriate message string. Include relevant comments, preconditions, and postconditions. You may assume that all relevant standard header files have been included, but that no using statements have been executed.
     
  2. Write code (not a complete function) that attempts to allocate dynamically an array of 10 int’s, using the ordinary array form of new. If allocation is successful, a boolean flag should be set to true. Otherwise, the flag should be set to false and a warning message should be printed. Hint: Catch an exception.
     
  3. The Lucas numbers are defined as follows: L(0) = 2. L(1) = 1. For n ≥ 2, L(n) = L(n-2) + L(n-1). Thus, the first few Lucas numbers are: 2, 1, 3, 4, 7, 11, 18, 29, 47, 76.
    1. Write a recursive function to compute the nth Lucas number. Include preconditions and postconditions.
    2. Write a reasonably efficient iterative function to compute the nth Lucas number. Include preconditions and postconditions.

     
    1. Briefly describe the advantages and disadvantages of Binary Search, as compared with Sequential Search.
    2. Despite its advantages, we often do not use Binary Search for the “Find” operation in Insertion Sort, even when we can. Why not?

     
  4. If we want to jump a random-access iterator forward, we can add an integer to it.
    myIter += howFar;
    Or we can do the same thing with std::advance.
    std::advance(myIter, howFar);
    Why is std::advance often preferable?
     
    1. Explain the difference between “equality” and “equivalence”.
    2. How does this difference affect the way we do searching in sorted lists?

     
  5. Sometimes, we repeatedly call a function to compute the same value over and over again. In such situations, we can sometimes achieve great gains in efficiency by applying a certain technique. What is it called, and how does it work?
     
  6. What disadvantages does recursion have, compared with iteration?
     
    1. What is “tail recursion”?
    2. Why is tail recursion important?

     
  7. The following function is tail-recursive. Rewrite this function in iterative form, making only minimal changes.
    template <typename FDIter>
    void recursiveSelectionSort(FDIter first, FDIter last)
    {
        // BASE CASE
        // Empty list? Done.
        if (first == last)
            return;
    
        // RECURSIVE CASE
        // Find minimum item
        FDIter minItem = std::min_element(first, last);
        // Put it at the beginning
        std::iter_swap(first, minItem);
        // Find iterator to item after first
        FDIter afterFirst = first;
        ++afterFirst;
        // And recurse
        recursiveSelectionSort(afterFirst, last);
    }
    By the way, this code works. It does a Selection Sort backwards from the way we discussed it in class; the sorted list grows from the beginning, instead of the end. This is so the algorithm will work with forward iterators. Standard algorithms min_element and iter_swap are defined in the header <algorithm>.
     
  8. Explain the relationship between recursion and mathematical induction.
     
    1. Name and explain the three parts of an induction proof, as covered in class.
    2. What is the difference between “weak induction” and “strong induction”?

     
  9. We are interested in algorithms that work well when solving large problems on large, fast systems (here, “large” refers to things like memory and disk capacity).
    1. What word do we use to describe such algorithms?
    2. Why are we interested in algorithms with this property?

     
    1. In general, what is “efficiency” (in the context of algorithms and programming)?
    2. In practice, when we talk about the “efficiency” of an algorithm, we mean something very specific. What do we typically mean?

     
    1. How do we measure the (time) efficiency of an algorithm in a way that is independent of system and implementation details?
    2. What notation do we use to express how efficient an algorithm is?

     
    1. Suppose f(n) is some function of n. What does it mean for an algorithm to be “O(f(n))”?
    2. We generally use big-O to place an algorithm into one of a few well-known categories. List and name at least 5 of these.

     
  10. Rewrite each of the following using big-O.
    1. Linear time.
    2. Quadratic time.
    3. Constant time.
    4. Logarithmic time.
    5. Exponential time.
    6. Log-linear time.

     
  11. Consider the following function:
    // Dereferencing a valid Iterator must return an int
    template<typename Iterator>
    int sum_cubes(Iterator first, Iterator last)
    {
        int sum = 0;
        for (Iterator iter = first; iter != last; ++iter)
        {
            int cube = 1;
            for (int i = 0; i < 3; ++i)
                cube *= *iter;
            sum += cube;
        }
        return sum;
    }
    Break this function into the “obvious” 11 operations (ignore the parameter passing, and count the “*=” and “+=” lines each as a single operation). Let n be the length of the range [first, last).
    1. How many operations does this function perform?
    2. Express the time efficiency of this function using big-O.
    3. Using the definition of big-O, explain how you can arrive at your answer in part b.

     
  12. Consider the following function:
    void f2(int arr[], int n)  // n is the length of arr
    {
        for (int i = 0; i < n; ++i)
        {
            for (int j = i; j < n-1; ++j)
            {
                for (int k = 0; k < 10; ++k)
                {
                    for (int m = 2; m < j+1; ++m)
                    {
                        std::cout << arr[m] << std::endl;
                    }
                }
            }
        }
    }
    Without counting operations, determine the order of this function. Explain how you can arrive at your answer.
     
    1. What does it mean for an algorithm to be “scalable”?
    2. In practice, what are the slowest algorithms we would generally consider to be scalable?

     

Answers

  1. // div
    // Divides its parameters and returns the result: a/b.
    // Pre: None.
    // Post:
    //     Return value == a/b
    // Throws std::overflow_error if b == 0.
    double div(double a, double b)
    {
        if (b == 0.)
            throw std::overflow_error("Division by zero");
            // Must #include <stdexcept>
        return a/b;
    }
  2. // Must have #include'd <new> and <iostream>
    bool allocSucceeded; // true if alloc succeeds
    try
    {
        int * myArray = new int[10];
        allocSucceeded = true;
    }
    catch (std::bad_alloc & e)
    {
        allocSucceeded = false;
        std::cout << "Could not allocate" << std::endl;
    }
    Notes: Here is another version, which presets the flag to true and catches any exception, not just std::bad_alloc.
    // Must have #include'd <iostream>
    bool allocSucceeded = true; // set to false if alloc fails
    try
    {
        int * myArray = new int[10];
    }
    catch (...)
    {
        allocSucceeded = false;
        std::cout << "Could not allocate" << std::endl;
    }
    1. // lucas1
      // Given n, returns L(n) (the nth Lucas number).
      // L(0) = 2. L(1) = 1. For n >= 2, L(n) = L(n-2) + L(n-1).
      // Recursive.
      // Pre:
      //     n >= 0.
      //     L(n) is a valid int value.
      // Post:
      //     Return value is L(n).
      // Does not throw
      int lucas1(int n)
      {
          // Base case
          if (n <= 1)
              return 2 - n;
      
          // Recursive case
          return lucas1(n-2) + lucas1(n-1);
      }
    2. // lucas2
      // Given n, returns L(n) (the nth Lucas number).
      // L(0) = 2. L(1) = 1. For n >= 2, L(n) = L(n-2) + L(n-1).
      // Pre:
      //     n >= 0.
      //     L(n) is a valid int value.
      // Post:
      //     Return value is L(n).
      // Does not throw
      int lucas2(int n)
      {
          int luc1 = -1;  // Previous Lucas number [L(-1) = -1]
          int luc2 = 2;   // Current Lucas number
      
          for (int i = 0; i < n; ++i)
          {
              // Invariants:
              //     luc1 == L(i-1)
              //     luc2 == L(i)
      
              int next = luc1 + luc2;
              luc1 = luc2;
              luc2 = next;
          }
          return luc2;
      }
      • Binary Search is much faster than Sequential Search [O(log n) versus O(n)].
      • Binary Search requires sorted data. If data are not already sorted, and only one search (or only a few searches) are to be done, it is probably not worthwhile to sort the data for Binary Search; use Sequential Search instead.
      • To be efficient, Binary Search requires random-access data. It is not a good algorithm for data structures such as linked lists.
    1. Insertion Sort is used in situations in which the data are nearly sorted. If each item in a list is very close to its proper position, then a backwards Sequential Search is likely to find this position faster than a Binary Search. Thus, we usually implement Insertion Sort using Sequential Search, not Binary Search.
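    To make this concrete, here is a sketch (illustrative code, not the version from class) of Insertion Sort using a backward Sequential Search; on nearly sorted input, the inner loop usually exits after one or two comparisons.

    ```cpp
    #include <cstddef>  // for std::size_t
    #include <vector>

    // insertionSort
    // Sorts v in ascending order using Insertion Sort with a backward
    // Sequential Search to find each item's proper position.
    void insertionSort(std::vector<int> & v)
    {
        for (std::size_t i = 1; i < v.size(); ++i)
        {
            int item = v[i];       // item to insert into sorted v[0..i-1]
            std::size_t j = i;
            // Backward Sequential Search: scan left while items are
            // greater, shifting each one right to make room.
            while (j > 0 && item < v[j-1])
            {
                v[j] = v[j-1];
                --j;
            }
            v[j] = item;           // insert in its proper position
        }
    }
    ```
    
    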
  3. std::advance is often better, because, for a random-access iterator, it is just as fast as addition, but it also works with more general categories of iterators. Using std::advance results in more generic code.
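    For example (a small sketch, with our own variable names), the very same std::advance call works on both a vector iterator and a list iterator:

    ```cpp
    #include <iostream>
    #include <iterator>  // for std::advance
    #include <list>
    #include <vector>

    int main()
    {
        int data[] = { 10, 20, 30, 40 };
        std::vector<int> v(data, data+4);  // random-access iterators
        std::list<int> li(data, data+4);   // bidirectional iterators

        std::vector<int>::iterator vIter = v.begin();
        std::list<int>::iterator liIter = li.begin();

        // Same call for both categories: constant time for the vector,
        // linear time for the list -- but the code does not change.
        std::advance(vIter, 2);
        std::advance(liIter, 2);

        std::cout << *vIter << " " << *liIter << std::endl;  // prints "30 30"
        return 0;
    }
    ```
    
    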
    1. Equality of two objects means that they compare equal [a == b]. Equivalence of two objects means that neither is less than the other [!(a < b) && !(b < a)].
    2. Equality and equivalence may be different concepts for some types. Thus, in any given algorithm, we should use one or the other, not both. It may seem natural to search in a sorted list using “<” and then check whether we have found the right thing using “==”. However, this means we are using both criteria (equality and equivalence), which may result in problems. Conclusion: Since we search in a sorted list using “<” (thus, equivalence), we should also use “<” (and equivalence) to check whether we have found what we want. In short, a generic Binary Search should not use “==” at all.
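    For instance, a membership test over a sorted range can be written with std::lower_bound so that it uses only “<” and never “==” (a sketch; the name found is ours):

    ```cpp
    #include <algorithm>  // for std::lower_bound

    // found
    // Returns true if value occurs in the sorted range [first, last).
    // Uses only operator< (equivalence); never uses operator==.
    template <typename FDIter, typename T>
    bool found(FDIter first, FDIter last, const T & value)
    {
        // it points at the first item that is NOT less than value
        FDIter it = std::lower_bound(first, last, value);
        // value is present iff that item exists and is not greater
        // than value, that is, iff the two are equivalent
        return it != last && !(value < *it);
    }
    ```
    
    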
  4. When a function gets called repeatedly to compute the same value over and over, we can increase efficiency by using memoizing. A more general term for this is caching. That is, when the function is called, we check whether the value has been computed yet. If not, we compute it, save it, and return it. If it has already been computed, we do not need to recompute; we merely return the saved value.
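    As a sketch, here is memoizing applied to the recursive Lucas function from an earlier problem (the name lucasMemo and the choice of std::map as the cache are ours):

    ```cpp
    #include <map>

    // lucasMemo
    // Given n >= 0, returns L(n), memoizing each value as it is computed.
    // L(0) = 2. L(1) = 1. For n >= 2, L(n) = L(n-2) + L(n-1).
    int lucasMemo(int n)
    {
        static std::map<int, int> cache;  // n -> L(n), kept across calls
        std::map<int, int>::iterator it = cache.find(n);
        if (it != cache.end())
            return it->second;  // already computed; return the saved value

        int result;
        if (n <= 1)
            result = 2 - n;     // base cases: L(0) = 2, L(1) = 1
        else
            result = lucasMemo(n-2) + lucasMemo(n-1);
        cache[n] = result;      // save it for later calls
        return result;
    }
    ```
    
    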
  5. Recursion has two main disadvantages:
    1. Tail recursion means that a function makes exactly one recursive call, and it is the last operation the function performs.
    2. Tail recursion is important because it is easy to convert to iteration (either manually or automatically) without introducing a stack data structure, thus increasing efficiency and reliability (no stack overflows).
  6. Eliminating tail recursion is simple. Just put a “while (true)” loop around the function body, comment out the recursive call, and set up the parameters at the end of the loop.
    template <typename FDIter>
    void recursiveSelectionSort(FDIter first, FDIter last)
    {
        while (true)
        {
            // BASE CASE
            // Empty list? Done.
            if (first == last)
                return;
    
            // RECURSIVE CASE
            // Find minimum item
            FDIter minItem = std::min_element(first, last);
            // Put it at the beginning
            std::iter_swap(first, minItem);
            // Find iterator to item after first
            FDIter afterFirst = first;
            ++afterFirst;
            // And recurse
            //recursiveSelectionSort(afterFirst, last);
            first = afterFirst;
        }
    }
    We could improve this a little bit: get rid of the recursion comments, and eliminate the variable afterFirst, simply incrementing first instead.
    template <typename FDIter>
    void recursiveSelectionSort(FDIter first, FDIter last)
    {
        while (true)
        {
            // Empty list? Done.
            if (first == last)
                return;
    
            // Find minimum item
            FDIter minItem = std::min_element(first, last);
            // Put it at the beginning
            std::iter_swap(first, minItem);
            // Find iterator to item after first
            ++first;
        }
    }
  7. Recursion as an algorithm design technique and mathematical induction as a proof technique share the same structure. Both have a (usually simple) base case that is handled in a straightforward manner. Outside of the base case, both the algorithm and the proof make use of simpler versions of themselves.

    Thus, mathematical induction is the primary form of proof used to verify facts (e.g., correctness, efficiency) about recursive algorithms and code.

    1. In inductive proof, we are proving a statement for a large range of integers, often all nonnegative integers.
      • Basis. Here we prove the statement for small values (for example, for 0).
      • Inductive Hypothesis. We assume that the statement is true for a particular value (or for all values up to a particular value; see part b).
      • Inductive Conclusion. Based on the inductive hypothesis, we prove that the statement is true for the next value. We can then conclude, “by induction”, that the statement is true in general.
    2. The difference between weak induction and strong induction lies in the statement of the induction hypothesis. In weak induction, we assume the statement is true for a particular value; in strong induction, we assume the statement is true for all values up to a particular value.
    1. Algorithms that work well when solving large problems on large/fast systems are said to be scalable (or, they “scale well”).
    2. We want scalable algorithms because they are good at doing what people want done. In the real world, people solve ever larger problems on ever larger and faster systems. Scalable algorithms allow them to accomplish their goals in a reasonable amount of time.
    1. Efficiency means that an algorithm or function uses few resources.
    2. We are typically interested in time efficiency. Specifically, we want the worst-case number of operations performed by an algorithm to grow slowly as the size of its input increases.
    1. We measure time efficiency of an algorithm by dividing its work into “steps”. We count the maximum number of steps required to execute the algorithm for input of a given size (usually “n”).
    2. We express how efficient an algorithm is using “big-O” notation.
    1. An algorithm is O(f(n)) if there exist numbers k and n0 such that the algorithm requires no more than k×f(n) steps to solve a problem of size n, for every n ≥ n0.
    2. The major categories we use are as follows. Any five of these form a correct answer to this part.
      • O(1). Constant time.
      • O(log n). Logarithmic time.
      • O(n). Linear time.
      • O(n log n). Log-linear time.
      • O(n²). Quadratic time.
      • O(bⁿ), for some number b. Exponential time.
  8. I use the variable “n” below, but any variable is acceptable.
    1. Linear time: O(n).
    2. Quadratic time: O(n²).
    3. Constant time: O(1).
    4. Logarithmic time: O(log n). The base of the logarithm is irrelevant.
    5. Exponential time: O(bⁿ), for some b > 1. [For example, O(2ⁿ), O(10ⁿ), O(eⁿ), etc.]
    6. Log-linear time: O(n log n). The base of the logarithm is irrelevant.
    1. The code inside the inner braces does 13 operations, regardless of n. Total number of operations for each part:
      • int sum = 0. 1.
      • Iterator iter = first. 1.
      • iter != last. n+1.
      • ++iter. n
      • (Code inside inner braces). 13 operations, executed n times. Total: 13n.
      • return sum. 1.
      The total is 15n+4.
    2. This function is O(n).
    3. For large values of n (specifically, for n ≥ 4), we have 15n+4 ≤ 16n. Thus, with k = 16 (and n0 = 4), we see, by the definition of big-O, that this function is O(n).

      Note: Making k large does not hurt anything. We could just as well have used k = 100 or 1000000. However, k must be greater than 15.

  9. This function is O(n³). We can use the “Rule of Thumb” mentioned in class. Three of the nested loops (the i, j, and m loops) each run a number of times that grows like n. The k loop always executes a constant number of times (10), so it does not affect the order. The order of the function is n raised to the power of the number of significant loops: n³.
    1. An algorithm is scalable if it works well with (increasingly) large problems.
    2. Typically, the slowest algorithms we would consider to be scalable are the log-linear-time algorithms.


CS 311 Fall 2007: Midterm Exam Review Problems, Part 2 / Updated: 21 Oct 2007 / Glenn G. Chappell / ffggc@uaf.edu