Processes tend to benefit greatly from multi-core processors if they are CPU bound (i.e. computationally intensive tasks). The actual speedup depends on the portion of the code that must remain sequential. The multi-core benefit diminishes for IO-bound processes, however, since hard drive performance becomes the limiting factor. Unlike solid state drives (SSDs), traditional hard drives are very good at handling large sequential reads/writes but handle random accesses poorly. In fact, parallelizing sequential read/write access can actually degrade performance: when sequential data is accessed by different threads, the access pattern of each thread becomes more random. So the performance of such IO-bound tasks is largely determined by the performance of the IO subsystem.
For tasks that involve both intensive IO operations and heavy CPU usage, however, it is desirable to parallelize the IO-bound and the CPU-bound portions of the code so that maximum parallelism can be achieved.
An example of such an IO- and CPU-intensive task is finding duplicate files in a file system. To find duplicate files, the files or their hashes must be compared. In this post, I will discuss a couple of different approaches to this problem using threading.
1. Sequential find
First let us take a look at how to find duplicate files sequentially (all the sample code can be downloaded from here):
Suppose we have a function that returns a list of files within a directory (see GetFileList in FileUtils). In the function FindDuplicates, we loop through the list of files and calculate the hash of each file. The file name and hash are used together as a key into a hash table. Whenever an existing key is indexed again, its count is incremented by one, indicating that a duplicate file has been found. Here is what the code looks like:
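Since the full listing ships with the downloadable sample, the version below is a minimal sketch rather than the original code: MD5 as the hash function, the filename-plus-hash key format, and a recursive GetFileList are all illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class FileUtils
{
    // Recursively collects all file paths under the given directory.
    public static List<string> GetFileList(string directory)
    {
        return new List<string>(
            Directory.GetFiles(directory, "*", SearchOption.AllDirectories));
    }
}

static class SequentialFinder
{
    // Hashes a file's content; MD5 is an arbitrary but fast choice here.
    public static string ComputeHash(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(md5.ComputeHash(stream));
        }
    }

    public static Dictionary<string, int> FindDuplicates(string directory)
    {
        var counts = new Dictionary<string, int>();
        foreach (var path in FileUtils.GetFileList(directory))
        {
            // File name and content hash together form the table key.
            var key = Path.GetFileName(path) + ":" + ComputeHash(path);
            int count;
            counts.TryGetValue(key, out count);
            counts[key] = count + 1; // any count above 1 marks a duplicate
        }
        return counts;
    }
}
```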
2. ThreadPool
We could improve the performance by moving the hash calculation to separate threads.
The first approach to parallelizing the task is to use a thread pool to queue the work of calculating file hashes. Since calculating a file hash involves reading in the whole file (IO bound) and performing some calculation on its content (CPU bound), there is some inherent parallelism, so using multiple threads could potentially improve performance.
The code below shows this approach:
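Again, only a sketch: it reuses FileUtils and ComputeHash from the previous listing, and the batch size is an assumption on my part. Each file is read and hashed on a ThreadPool thread, and a CountdownEvent joins the batch before the next one is queued, which is exactly the join that the next section improves on.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;

static class ThreadPoolFinder
{
    public static Dictionary<string, int> FindDuplicates(string directory)
    {
        var counts = new Dictionary<string, int>();
        var files = FileUtils.GetFileList(directory);
        const int batchSize = 8; // assumed; the original sample may differ

        for (int i = 0; i < files.Count; i += batchSize)
        {
            var batch = files.Skip(i).Take(batchSize).ToList();
            using (var done = new CountdownEvent(batch.Count))
            {
                foreach (var path in batch)
                {
                    ThreadPool.QueueUserWorkItem(_ =>
                    {
                        // Both the read and the hashing run on the pool thread.
                        var key = Path.GetFileName(path) + ":" +
                                  SequentialFinder.ComputeHash(path);
                        lock (counts) // the dictionary is not thread safe
                        {
                            int count;
                            counts.TryGetValue(key, out count);
                            counts[key] = count + 1;
                        }
                        done.Signal();
                    });
                }
                done.Wait(); // join the batch before queuing the next one
            }
        }
        return counts;
    }
}
```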
3. Producer and Consumer
Even though the previous approach improved the duplicate-finding performance, it is still not optimal. For example, even though multiple threads are used to calculate file hashes, they are joined at the end of each batch before another set of calculations is queued. Also, the IO operations are distributed across multiple threads, and on some older hard drives handling IO in multiple threads can actually slow things down. An ideal solution would be to use a single thread for IO operations, since disk IO is inherently sequential.
As it turns out, this is a classic producer-consumer problem. In this third approach, we will examine how to use the producer-consumer model to find the duplicates. Here, the producer thread searches for files within a given directory; it opens each file as it encounters it and reads the content into a ProducerConsumerQueue. At the same time, the consumer computes the hash from the bytes stored in the queue and determines whether a file is a duplicate. The code is slightly different from what you saw above, as files are being added to the queue and consumed from it simultaneously.
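Here is a sketch of the pipeline. The ProducerConsumerQueue class itself lives in the downloadable sample, so I stand in .NET's bounded BlockingCollection for it; the capacity of 16 and the Tuple payload are assumptions.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;
using System.Threading;

static class PipelineFinder
{
    public static Dictionary<string, int> FindDuplicates(string directory)
    {
        // Bounded queue: the producer blocks when it gets too far ahead.
        var queue = new BlockingCollection<Tuple<string, byte[]>>(16);
        var counts = new Dictionary<string, int>();

        // Producer: the single IO thread reads file contents sequentially.
        var producer = new Thread(() =>
        {
            foreach (var path in FileUtils.GetFileList(directory))
            {
                queue.Add(Tuple.Create(path, File.ReadAllBytes(path)));
            }
            queue.CompleteAdding(); // tell the consumer no more items are coming
        });

        // Consumer: hashes the queued bytes and counts duplicates (CPU bound).
        var consumer = new Thread(() =>
        {
            using (var md5 = MD5.Create())
            {
                foreach (var item in queue.GetConsumingEnumerable())
                {
                    var key = Path.GetFileName(item.Item1) + ":" +
                              BitConverter.ToString(md5.ComputeHash(item.Item2));
                    int count;
                    counts.TryGetValue(key, out count);
                    counts[key] = count + 1;
                }
            }
        });

        producer.Start();
        consumer.Start();
        producer.Join();
        consumer.Join();
        return counts;
    }
}
```

The bounded capacity matters: it lets the IO thread stay busy reading ahead while preventing it from filling memory with file contents the consumer has not hashed yet.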
If you run each version of the code, you will see that this approach yields the largest performance gain. It performs better than the ThreadPool approach since the IO thread always tries to keep the queue full. Further, concentrating the IO operations in a single thread helps the system optimize IO requests and thus improves IO subsystem performance.