
Section 25.9 Function Calls

Other than loops, the main thing we need to account for when analyzing the efficiency of a function is function calls. Consider the sample below. It has five lines of code and no loops. Does that mean its efficiency is \(O(1)\)? Not quite. Each function call does some amount of work, and we need to account for that work as part of running this code.
int size;
cin >> size;
printRow(size);
printGrid(size);
printRow(size * 2);
Here is a version of the code with the function definitions included so we can see what they do:
#include <iostream>
using namespace std;

void printRow(int size) {
    for (int i = 0; i < size; i++) {
        cout << "*";
    }
    cout << endl;
}

void printGrid(int size) {
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < size; j++) {
            cout << "*";
        }
        cout << endl;
    }
}

int main() {
    int size;
    cin >> size;
    printRow(size);
    printGrid(size);
    printRow(size * 2);
}
The variable size is passed into both functions. Each function uses that value to control one or more loops. So when we say \(n\) for this analysis, we will be talking about size. Because size is read from user input, it is truly variable: if we changed the value entered, that would change the amount of work done by both functions and thus the overall work of the program.
printRow has a single loop that runs \(O(n)\) times and does constant work on each iteration. It also does a constant-time operation outside of the loop. So the overall work of printRow is \(O(n)\text{.}\)
void printRow(int size) {
    // O(n) work where n = size
}
printGrid has a nested loop. The outer loop runs \(O(n)\) times and for each iteration of that loop, the inner loop runs \(O(n)\) times. So the overall work of printGrid is \(O(n^2)\text{.}\)
void printGrid(int size) {
    // O(n^2) work where n = size
}
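One way to convince ourselves of these bounds is to imagine counting versions of the two functions that tally loop steps instead of printing. (countPrintRowOps and countPrintGridOps below are hypothetical helpers invented for this sketch; they are not part of the program above.)

```cpp
// Hypothetical counting stand-ins for printRow and printGrid:
// each tallies the constant-time steps its loops perform
// instead of printing characters.
int countPrintRowOps(int size) {
    int ops = 0;
    for (int i = 0; i < size; i++) {
        ops++;              // stands in for cout << "*"
    }
    ops++;                  // stands in for cout << endl
    return ops;             // size + 1 steps: O(n)
}

int countPrintGridOps(int size) {
    int ops = 0;
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < size; j++) {
            ops++;          // stands in for cout << "*"
        }
        ops++;              // stands in for cout << endl
    }
    return ops;             // size * size + size steps: O(n^2)
}
```

Doubling size roughly doubles the count for countPrintRowOps but roughly quadruples it for countPrintGridOps, matching the \(O(n)\) and \(O(n^2)\) bounds.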
Now we can start to analyze the overall work of main. A call to printRow does \(O(n)\) work where \(n\) is the value passed to the function. A call to printGrid does \(O(n^2)\) work where \(n\) is the value passed to the function. The first two function calls in main both pass in size, which is \(n\text{.}\) So they are doing \(O(n)\) and \(O(n^2)\) work respectively:
#include <iostream>
using namespace std;

void printRow(int size) {
    // O(n) work where n = size
}

void printGrid(int size) {
    // O(n^2) work where n = size
}

int main() {
    int size;
    cin >> size;
    // O(n) work for printRow(size)
    // O(n^2) work for printGrid(size)
    printRow(size * 2);
}
The last call is passing in size * 2. So if size is \(n\text{,}\) then the function call does \(2 \cdot O(n)\) work. But that is still \(O(n)\) work. So that last call to printRow is also doing \(O(n)\) work:
#include <iostream>
using namespace std;

void printRow(int size) {
    // O(n) work where n = size
}

void printGrid(int size) {
    // O(n^2) work where n = size
}

int main() {
    // O(1) to get size
    // O(n) work for printRow(size)
    // O(n^2) work for printGrid(size)
    // O(n) work for printRow(size)
}

Insight 25.9.1.

A function call "costs" work equal to the work done by the function being called.
Now we can either list all the terms of work or just focus on the dominant term. The dominant term is \(O(n^2)\text{,}\) so the overall work of main is \(O(n^2)\text{.}\) We would get the same result if we listed all the terms and then simplified:
\begin{equation*} O(1) + O(n) + O(n^2) + O(n) \end{equation*}
Combining like terms and ordering them gives
\begin{equation*} O(n^2) + 2 \cdot O(n) + O(1) \end{equation*}
Which is just
\begin{equation*} O(n^2) + O(n) + O(1) \end{equation*}
And since only the dominant term matters:
\begin{equation*} O(n^2) \end{equation*}
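We can sanity-check that the \(O(n^2)\) term dominates by adding up exact step counts for the three calls. (totalOps is a hypothetical helper that plugs in the per-call counts worked out above: \(n + 1\) for printRow(n), \(n^2 + n\) for printGrid(n), and \(2n + 1\) for printRow(2n).)

```cpp
// Hypothetical total step count for the three calls in main,
// using the per-call counts derived in the text:
//   printRow(n)  -> n + 1 steps
//   printGrid(n) -> n * n + n steps
//   printRow(2n) -> 2n + 1 steps
long long totalOps(long long n) {
    long long rowOps  = n + 1;
    long long gridOps = n * n + n;
    long long row2Ops = 2 * n + 1;
    return rowOps + gridOps + row2Ops;
}
```

Doubling \(n\) roughly quadruples the total, so the quadratic term drives the growth, just as the simplification to \(O(n^2)\) predicts.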
Now, let’s briefly consider an alternate version of main:
int main() {
    int size;
    cin >> size;
    printRow(size);
    printGrid(10);
    printRow(size + 1);
}
In this version, the call to printGrid passes in a constant value of 10. That function does \(O(n^2)\) work where \(n\) is the value passed in. But since we pass in a constant, the function does \(O(10^2)\) work, which is a constant and thus \(O(1)\text{.}\) No matter what value size has, that function call always does the same amount of work.

Insight 25.9.2.

We need to consider how the value of \(n\) in the caller relates to the value of \(n\) in the called function. If we call a function with a constant value, that function does \(O(1)\) work regardless of the value of \(n\) in the caller.
The last function call passes in size + 1. If size is \(n\text{,}\) that call does \(O(n + 1)\) work, which is still just \(O(n)\text{.}\)
int main() {
    // O(1) to get size
    // O(n) work for printRow(size)
    // O(1) work for printGrid(10)
    // O(n) work for printRow(size + 1)
}
The overall work of this version of main is dominated by the \(O(n)\) terms, so it is \(O(n)\text{.}\)
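The same counting trick shows why this version is linear: printGrid(10) contributes a fixed 110 steps regardless of size. (totalOpsAlt is again a hypothetical helper, not part of the program.)

```cpp
// Hypothetical total step count for the alternate main:
//   printRow(n)     -> n + 1 steps
//   printGrid(10)   -> 10 * 10 + 10 = 110 steps, independent of n
//   printRow(n + 1) -> (n + 1) + 1 steps
long long totalOpsAlt(long long n) {
    long long rowOps  = n + 1;
    long long gridOps = 110;
    long long row2Ops = (n + 1) + 1;
    return rowOps + gridOps + row2Ops;
}
```

Here doubling \(n\) only roughly doubles the total, which is linear growth.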

Checkpoint 25.9.1.

Consider the following code:
for (int i = 0; i < n; i++) {
  foo(n);
}
Assume foo(n) is known to run in \(O(n)\) time. Pick the overall Big-O time complexity.
  • \(O(1)\)
  • \(O(n)\)
  • \(O(n^2)\)
    Correct: each iteration of the loop calls foo(n), which does \(O(n)\) work. The loop runs \(n\) times, so the overall work is \(n \cdot O(n) = O(n^2)\text{.}\)
  • \(O(n \log n)\)

Checkpoint 25.9.2.

Consider the following code:
for (int i = 0; i < n; i++) {
  foo(5);
}
Assume foo(n) is known to run in \(O(n)\) time. Pick the overall Big-O time complexity.
  • \(O(1)\)
  • \(O(n)\)
    Correct: each iteration of the loop calls foo(5), which does \(O(1)\) work since it is called with a constant. The loop runs \(n\) times, so the overall work is \(n \cdot O(1) = O(n)\text{.}\)
  • \(O(n^2)\)
  • \(O(n \log n)\)

Checkpoint 25.9.3.

Consider the following code:
foo(n/2);
int x = 10;
foo(n);
Assume foo(n) is known to run in \(O(n)\) time. Pick the overall Big-O time complexity.
  • \(O(1)\)
  • \(O(n)\)
    Correct: the first call, foo(n/2), does \(O(n/2)\) work, which is just \(O(n)\text{.}\) The second call, foo(n), does \(O(n)\) work. So the overall work is \(O(n) + O(n) = 2 \cdot O(n) = O(n)\text{.}\)
  • \(O(n^2)\)
  • \(O(n \log n)\)

Checkpoint 25.9.4.

Consider the following code:
foo(n);
for (int i = 0; i < n; i++) {
  foo(n/2);
  foo(n/2);
}
Assume foo(n) is known to run in \(O(n)\) time. Pick the overall Big-O time complexity.
  • \(O(1)\)
  • \(O(n)\)
  • \(O(n^2)\)
    Correct: the first call, foo(n), does \(O(n)\) work. The loop runs \(n\) times, and each iteration makes two calls to foo(n/2), each of which does \(O(n/2)\) work. So the loop does \(n \cdot 2 \cdot O(n/2) = n \cdot O(n) = O(n^2)\) work. The overall work is thus \(O(n) + O(n^2) = O(n^2)\text{.}\)
  • \(O(n \log n)\)
  • \(O(n^3)\)
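The arithmetic in that last explanation can also be verified by counting steps directly. (countOps is a hypothetical helper; it assumes foo(k) costs exactly k steps, a simplifying stand-in for its \(O(n)\) running time.)

```cpp
// Assume foo(k) costs exactly k steps. Then the checkpoint's snippet costs:
//   foo(n)                                  -> n steps
//   loop: n iterations, two foo(n/2) calls  -> n * 2 * (n/2) steps
long long countOps(long long n) {
    long long ops = n;                  // foo(n) before the loop
    for (long long i = 0; i < n; i++) {
        ops += n / 2;                   // first foo(n/2)
        ops += n / 2;                   // second foo(n/2)
    }
    return ops;
}
```

For even \(n\) this is exactly \(n + n^2\) steps, so doubling \(n\) roughly quadruples the work, confirming the \(O(n^2)\) answer.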