Matrix Multiplication: Results

[cluster5@server ~]# ./seq_mat
Generating two random matrices of size 800...
Multiplying the two matrices...

Clock time: 9.740000 seconds

[cluster5@server ~]# mpirun -np 5 ./par_mat

Generating two random matrices...
Multiplying the two matrices...

Clock time: 1.989466 seconds

Conclusion

The parallel implementation achieves significant speedup: the 800 x 800 multiplication drops from 9.74 seconds sequentially to about 1.99 seconds on 5 MPI processes, a speedup of roughly 4.9x. Ideally the reduction in time is proportional to the number of processor nodes available, though communication overhead keeps the measured speedup slightly below linear. As the size of the matrix increases, the gap between the time required on a single computer and on the parallel computer grows substantially.
Further speedup can be achieved using matrix multiplication algorithms that exploit cache locality, such as blocked (tiled) multiplication. These techniques apply to the many applications of matrix multiplication, saving a good deal of computation time on complex computational tasks.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-Share Alike 2.5 License.