Introduction
PARALLEL COMPUTING
Parallel Computing is the practice of splitting a large task into smaller parts and executing them simultaneously so that the result is obtained faster. It is the model by which programs are executed on supercomputers. Our aim is to set up a parallel computing environment and demonstrate applications that can be executed in parallel.
A variety of methods can be used to set up a Parallel Computing environment. Our approach, the Beowulf Cluster, is chosen for being cost-effective and efficient. A Beowulf cluster is a kind of High Performance Cluster [HPC]: an interconnection of simple commodity PCs that can nevertheless achieve speeds comparable to those of many existing supercomputers.
An application that would take hours to complete can finish within minutes using Parallel Computing. But as said earlier, parallel computing involves splitting up a task, and the complexity of this splitting depends on the application; this is where parallel programming plays its role. The Message Passing Interface [MPI] is one such parallel programming model, in which the different parallel processes communicate by exchanging messages. This communication is also what allows the parallel processes to synchronize with one another.
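As a brief illustration of the message passing model, the following minimal C sketch uses the standard MPI_Send and MPI_Recv calls to pass a single value between two processes; the variable names and the value sent are purely illustrative.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id (rank)     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes    */

        if (size >= 2) {
            if (rank == 0) {
                int work = 42;  /* illustrative stand-in for a piece of a split task */
                MPI_Send(&work, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                int work;
                MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("process 1 received %d from process 0\n", work);
            }
        }

        MPI_Finalize();                        /* shut down the MPI runtime    */
        return 0;
    }

Such a program is typically compiled with mpicc and launched across the cluster nodes with mpirun (for example, mpirun -np 2 ./a.out).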
Parallel Computing is useful not only for long, time-consuming applications, but also for certain applications that cannot be performed on a single computer. One such example is building a Tiled Display, a cheap alternative to the large screens available today. A tiled display greatly increases the effective resolution for viewing scientific and other high-resolution visualizations. It is a collection of monitors arranged in an NxM matrix; the idea is to split the entire picture into NxM parts and send each part to the respective node's monitor for display.
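A rough sketch of this splitting, assuming one MPI process per monitor and a simple row-major mapping from process rank to tile position, follows; the tile grid dimensions (N, M), the picture size, and the rank-to-tile mapping are all assumptions chosen for illustration.

    #include <mpi.h>
    #include <stdio.h>

    #define N 2         /* tile rows    (assumed layout)           */
    #define M 3         /* tile columns (assumed layout)           */
    #define IMG_W 3840  /* full picture width in pixels (assumed)  */
    #define IMG_H 2160  /* full picture height in pixels (assumed) */

    int main(int argc, char **argv) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* one rank per monitor */

        if (rank < N * M) {
            int row = rank / M;  /* tile row of this node    */
            int col = rank % M;  /* tile column of this node */

            /* pixel region of the full picture this node displays */
            int x0 = col * (IMG_W / M);
            int y0 = row * (IMG_H / N);
            printf("node %d displays tile (%d,%d): %dx%d region at (%d,%d)\n",
                   rank, row, col, IMG_W / M, IMG_H / N, x0, y0);
        }

        MPI_Finalize();
        return 0;
    }

In a real tiled display the computed region would be cut from the source picture and rendered on the node's attached monitor; the sketch only shows how each node can derive its own tile independently from its rank.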