Models of Distributed Processing
We spent a couple of days talking about Hadoop and the Map-Reduce model that it supports. And, we spent a day talking about processor allocation and process migration. These represent two different ways of getting work done in a distributed environment.
In the case of the Map-Reduce model, we achieved power by instantiating the computation on top of the data. In the case of our processor allocation and migration model, we did the processing wherever we could, and placed or migrated the data as necessary.
These two models aren't usually competing ways of solving the same problem. They are usually used to address different types of problems.
The Map-Reduce model is good at working with truly large quantities of data, where having the data in one place is impossible and moving it is impractical. This type of problem is especially characterized by the need to make and consolidate observations of the data, rather than to do significant tightly coupled computation. For this reason, we call this class of problem and solution "Data Intensive Scalable Computing (DISC)".
Other types of large distributed problems surely exist. For example, "Scientific Computing" is usually characterized by the need to perform a massive amount of somewhat tightly coupled computation on a large set of data. But, in the case of scientific computing, the data is often large, but not truly huge, and the computation is dominant and involves more than one piece of data at a time.
More general types of distributed system usage involve more computation and less input data. Consider, for example, the rendering of animations: given relatively few input parameters, these jobs do a great deal of computation to generate modestly large outputs (animation segments).
The infrastructure we use should be tuned to the type of job we want to do. We've got a project this semester using Hadoop, an example of a Map-Reduce system for DISC. Today we are going to talk about a few other systems for different types of problems.
OpenMP, MPI, and PVM
OpenMP and MPI are APIs used largely in scientific computing. They provide a model for managing the topology of a cluster, communicating among nodes, and synchronizing across the nodes. They are predominantly used in scientific computing settings because they cannot run commodity code. Instead, they run only software written specifically to take advantage of their specific APIs.
OpenMP and MPI are different, somewhat competing, APIs. I say somewhat competing, because each is best suited for a different type of cluster architecture. OpenMP provides a strong shared memory model, and little to worry about in terms of communication mumbo-jumbo, but requires SMP systems. MPI is really a communications protocol, but serves as a great set of building blocks for implementing shared memory in NUMA systems (specialized supercomputers with different memory characteristics).
PVM (Parallel Virtual Machine) is a framework designed to simulate a parallel processing computer over a network of workstations. It solves similar problems to MPI and OpenMP, but has its own API and model for parallel computing. It is somewhat of an older tool, with more of a Swiss Army knife feel to it than a narrow paradigm. Perhaps its greatest strength is its portability.
Condor and OpenPBS/TORQUE
Condor and OpenPBS are systems that allow jobs to be scheduled across workstations. They are capable of monitoring the load on a pool of systems and distributing jobs to them as they are able. Condor can be used to scavenge idle cycles from workstations, or for dedicated systems. Unlike OpenMP, MPI, and PVM, they aren't designed to solve the problem of parallelizing or distributing a single large task. Instead, they are designed to allow for the distribution of many tasks over a large number of computers. We've used all three, and presently use TORQUE, on campus to distribute animation and rendering jobs across dedicated high performance clusters, and to scavenge cycles from public workstations during low-use times.
Condor can schedule precompiled programs, or it can provide improved services, such as checkpointing and migratable I/O, for those programs compiled with its libraries. It can also support parallel processing when combined with MPI or PVM (see above).
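For a sense of how a precompiled program is handed to Condor, here is a sketch of a submit description file. The "vanilla" universe is the one for unmodified binaries; the executable name and file names here are made up for the example, and a real file would typically also specify requirements for matching machines.

```
# hypothetical submit description file, passed to condor_submit
universe   = vanilla        # run an unmodified, precompiled binary
executable = render_frame
arguments  = --frame 42
output     = frame42.out    # where stdout lands
error      = frame42.err    # where stderr lands
log        = frame42.log    # Condor's own event log for the job
queue                       # submit one instance
```

Programs that want the checkpointing and migratable I/O services mentioned above are instead relinked against Condor's libraries and run under its "standard" universe.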
OpenPBS is basically a scheduling system. It became a commercial product and is fading away as an open solution. It is the basis for TORQUE, which, in practice, seems to be its successor. Much like Condor, TORQUE provides a solution for the distributed scheduling of batch jobs. It also provides fault tolerance in the event of failed nodes.
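TORQUE jobs are ordinarily described by a shell script whose `#PBS` comment lines carry the resource request, submitted with `qsub`. The script below is a sketch; the queue name, resource limits, and program are placeholders that would be site-specific.

```
#!/bin/sh
# hypothetical TORQUE batch script; submitted with: qsub render.sh
#PBS -N render_job          # job name
#PBS -l nodes=1:ppn=4       # one node, four processors per node
#PBS -l walltime=01:00:00   # hard limit on run time
#PBS -q batch               # queue name (site-specific)

cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
./render_frame --frame 42
```

Because the directives are ordinary comments, the same script also runs unmodified as a plain shell script, which makes jobs easy to debug outside the scheduler.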