
Java, Java, Java: Object-Oriented Problem Solving, 2024E

Section 14.8 Chapter Summary

Subsection 14.8.1 Technical Terms

asynchronous, blocked, busy waiting, concurrent, critical section, dispatched, fetch-execute cycle, lock, monitor, multitasking, multithreaded, mutual exclusion, priority scheduling, producer/consumer model, quantum, queue, ready queue, round-robin scheduling, scheduling algorithm, task, thread, thread life cycle, time slicing

Subsection 14.8.2 Important Points

  • A sequential computer with a single central processing unit (CPU) can execute only one machine instruction at a time. A parallel computer uses multiple CPUs operating simultaneously to execute more than one instruction at a time.
  • Each CPU uses a fetch-execute cycle to retrieve the next machine instruction from memory and execute it. The cycle is controlled by the CPU’s internal clock, which on current processors typically runs at several gigahertz, where 1 gigahertz (GHz) is 1 billion cycles per second. Time slicing is the technique whereby several threads can share a single CPU over a given time period. Each thread is given a small slice of the CPU’s time under the control of some kind of scheduling algorithm.
  • In round-robin scheduling, each thread is given an equal slice of time, in a first-come–first-served order. In priority scheduling, higher-priority threads are allowed to run before lower-priority threads are run.
  • There are generally two ways of creating threads in a program. One is to create a subclass of Thread and implement a run() method. The other is to create a Thread instance and pass it a Runnable object, that is, an object that implements run(). (Both approaches are sketched in an example following this list.)
  • The sleep() method removes a thread from the CPU for a specified length of time, giving other threads a chance to run.
  • The setPriority() method sets a thread’s priority. Higher-priority threads are given more frequent and longer access to the CPU.
  • Threads are asynchronous. Their timing and duration on the CPU are highly sporadic and unpredictable. In designing threaded programs, you must be careful not to base your algorithm on any assumptions about the threads’ timing.
  • To improve the responsiveness of interactive programs, you could give compute-intensive tasks, such as drawing lots of dots, to a lower-priority thread or to a thread that sleeps periodically.
  • A thread’s life cycle consists of ready, running, waiting, sleeping, and blocked states. Threads start in the ready state and are dispatched to the CPU by the scheduler, an operating system program. If a thread performs an I/O operation, it blocks until the I/O is completed. If it voluntarily sleeps, it gives up the CPU.
  • According to the producer/consumer model, two threads share a resource, one serving to produce the resource and the other to consume the resource. Their cooperation must be carefully synchronized.
  • An object that contains synchronized methods is known as a monitor. Such objects ensure that only one thread at a time can execute a synchronized method. The object is locked until the thread completes the method or voluntarily sleeps. This is one way to ensure mutually exclusive access to a resource by a collection of cooperating threads.
  • The synchronized qualifier can also be used to designate a method as a critical section, which only one of the cooperating threads may be executing at any given time.
  • In designing multithreaded programs, it is useful to assume that if a thread can be interrupted at a certain point, it will be interrupted there. Thread coordination should never be left to chance.
  • One way of coordinating two or more cooperating threads is to use the wait/notify combination. One thread waits for a resource to be available, and the other thread notifies when a resource becomes available (see the buffer sketch following this list).
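To make the two thread-creation approaches concrete, here is a minimal sketch, not taken from the text; the class names GreetingThread, GreetingTask, and CreateThreadsDemo are purely illustrative.

// One way: define a subclass of Thread and override its run() method.
class GreetingThread extends Thread {
    public void run() {
        for (int k = 0; k < 5; k++)
            System.out.println("hello from the Thread subclass");
    }
}

// The other way: define a Runnable and pass it to a Thread constructor.
class GreetingTask implements Runnable {
    public void run() {
        for (int k = 0; k < 5; k++)
            System.out.println("hello from the Runnable");
    }
}

public class CreateThreadsDemo {
    public static void main(String[] args) {
        Thread t1 = new GreetingThread();
        Thread t2 = new Thread(new GreetingTask());
        t2.setPriority(Thread.MIN_PRIORITY); // t2 will tend to be scheduled less often
        t1.start();  // start() places the thread in the ready queue; the scheduler calls run()
        t2.start();
    }
}

Either way, it is the call to start(), not a direct call to run(), that places the thread in the ready queue so the scheduler can dispatch it.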
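The monitor and wait/notify points above can be illustrated with a minimal sketch of a one-message buffer shared by a producer and a consumer. This is an assumption-laden example, not the case study’s TakeANumber class; the class name MessageBuffer is hypothetical.

class MessageBuffer {
    private String message;
    private boolean empty = true;

    // Called by the producer thread; waits while the buffer is full.
    public synchronized void put(String msg) {
        while (!empty) {
            try {
                wait();              // release the lock and wait for the consumer
            } catch (InterruptedException e) {
                System.out.println(e.getMessage());
            }
        }
        message = msg;
        empty = false;
        notify();                    // wake up a waiting consumer
    }

    // Called by the consumer thread; waits while the buffer is empty.
    public synchronized String get() {
        while (empty) {
            try {
                wait();              // release the lock and wait for the producer
            } catch (InterruptedException e) {
                System.out.println(e.getMessage());
            }
        }
        empty = true;
        notify();                    // wake up a waiting producer
        return message;
    }
}

Because put() and get() are synchronized, only one thread at a time can be active inside the buffer, and each call to wait() releases the lock until a notify() signals that the buffer’s state may have changed.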

Subsection 14.8.3 Solutions to Self-Study Exercises

14.3 From the Java Library: java.lang.Thread
14.3.1 The Runnable Interface

Self-Study Exercise
14.3.1.1. Runnable.
Solution.
// A Runnable task that prints the odd numbers from 1 up to its bound.
class PrintOdds implements Runnable {
    private int bound;

    public PrintOdds(int b) {
        bound = b;
    }

    public void print() {
        for (int k = 1; k <= bound; k += 2)
            System.out.println(k);
    }

    public void run() {
        print();
    }
}

// A Runnable task that prints the even numbers from 0 up to its bound.
class PrintEvens implements Runnable {
    private int bound;

    public PrintEvens(int b) {
        bound = b;
    }

    public void print() {
        for (int k = 0; k <= bound; k += 2)
            System.out.println(k);
    }

    public void run() {
        print();
    }
}
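A short driver like the following, which is not part of the printed solution, could be used to run the two Runnable objects on separate threads; the class name PrintTest is illustrative.

public class PrintTest {
    public static void main(String[] args) {
        Thread odds  = new Thread(new PrintOdds(20));   // prints 1, 3, 5, ..., 19
        Thread evens = new Thread(new PrintEvens(20));  // prints 0, 2, 4, ..., 20
        odds.start();
        evens.start();   // output from the two threads may interleave unpredictably
    }
}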

14.3.4 Forcing Threads to Sleep

Self-Study Exercises
14.3.4.1. Equal Priority.
Solution.
On my system, with each thread printing its number after every 100,000 iterations, the following run() method yielded the output shown below:
    public void run() {
        for (int k = 0; k <= 2000000; k++) {
            if (k % 100000 == 0)
                System.out.print(num);
        } // for
    } // run()

1453253142513245314523451235421531245314534235132513
52413524135241354213421534215342514325432514321421421
This suggests that round-robin scheduling is being used.
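For reference, the experiment can be assembled along the following lines. The class names NumberThread and NumberTest and the choice of five threads (matching the digits 1 through 5 in the output above) are assumptions for illustration, not code quoted from the text.

class NumberThread extends Thread {
    private int num;

    public NumberThread(int n) {
        num = n;
    }

    public void run() {
        for (int k = 0; k <= 2000000; k++) {
            if (k % 100000 == 0)
                System.out.print(num);
        } // for
    } // run()
}

public class NumberTest {
    public static void main(String[] args) {
        // Create and start five equal-priority threads, numbered 1 through 5.
        for (int n = 1; n <= 5; n++)
            new NumberThread(n).start();
    }
}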
14.3.4.2. Thread Sleep.
Solution.
    public void run() {
        for (int k = 0; k < 100; k++) {
            try {
                Thread.sleep(50);   // sleep for 50 milliseconds, giving other threads a chance to run
            } catch (InterruptedException e) {
                System.out.println(e.getMessage());
            }
            if (k % 10 == 0)
                System.out.print(num);
        } // for
    } // run()

The output, with a space inserted after every five numbers, looks far less random:
41253 32415 43251 32451 45231 35421 53421 35241 25431 25431
14.3.4.3. Garbage collection.
Solution.
The garbage collector runs whenever the available memory drops below a certain threshold. It must have higher priority than the application, since the application cannot continue to run once memory is exhausted.

14.4 Thread States and Life Cycle
14.4.1 Thread Control

Self-Study Exercise
14.4.1.1. Thread Scheduling.
Solution.
Priority scheduling is used for I/O threads, which are given higher priority in order to improve the responsiveness of an interactive program.

14.5 Using Threads to Improve Interface Responsiveness
14.5.8 Advantages of Multithreaded Design

Self-Study Exercises
14.5.8.1. Minimum Priority.
Solution.
Setting Dotty’s priority to the minimum would improve responsiveness compared to the single-threaded version, but it would leave it up to the operating system’s scheduler to decide when to take control away from the drawing loop. Therefore, forcing the drawing loop to sleep on each iteration, as in our current multithreaded version, is the more responsive design.
14.5.8.2. Round robin scheduling.
Solution.
If round-robin scheduling is used, each thread will get a portion of the CPU’s time, so the I/O thread will eventually get its turn. But you don’t know how long it will be before the GUI thread gets its turn, so there might still be an unacceptably long wait before the user’s actions are handled.

14.6 CASE STUDY: Cooperating Threads
14.6.3 Design: The TakeANumber Class

Self-Study Exercise
14.6.3.1. Mutual exclusion.
Solution.
The take-a-number gadget “enforces” mutual exclusion by virtue of its design: There’s room for only one hand to grab the ticket and there’s only one ticket per number. If two customers got “bakery rage” and managed to grab the same ticket, it would rip in half and neither would benefit.

14.6.10 Creating a Critical Section

Self-Study Exercise
14.6.10.1. Critical Sections.
Solution.
One experiment to run would be to make the clerk’s performance very slow by using large sleep intervals. If the algorithm is correct, this should not affect the order in which customers are served. Another experiment would be to force the clerk to work fast but the customers to work slowly. This should still not affect the order in which the customers are served.

14.6.12 The wait/notify Mechanism

Self-Study Exercise
14.6.12.1. Busy Waiting.
Solution.
You should observe that the waiting line builds up as customers enter the bakery, but the clerk should still serve the customers in the correct order.