An Optimal Upper Bound on the Minimal Completion Time in Distributed Supercomputing
|Document type:||Conference Papers|
|Author(s):||Lars Lundberg, Håkan Lennerstad|
|Title:||An Optimal Upper Bound on the Minimal Completion Time in Distributed Supercomputing|
|Conference name:||Proceedings of International Conference on Supercomputing '94|
|Pagination:||p. xii+439, 196-203|
|Publisher:||ACM; NY, USA|
|Organization:||Blekinge Institute of Technology|
|Department:||Dept. of Computer Science and Business Administration (Institutionen för datavetenskap och ekonomi)|
|Address:||Dept. of Computer Science and Business Administration, S-372 25 Ronneby, Sweden|
|Phone:||+46 455 780 00|
|Abstract:||We first consider an MIMD multiprocessor configuration with n processors. A parallel program, consisting of n processes, is executed on this system, one process per processor.
The program terminates when all processes have completed. Due to synchronizations, processes may be blocked waiting for events in other processes.
Associated with the program is a parallel profile vector ν; entry i (1 ≤ i ≤ n) of this vector gives the fraction of the total execution
time during which exactly i processes are executing. We then consider a distributed MIMD supercomputer with k clusters, each containing u processors.
The same parallel program, consisting of n processes, is executed on this system. Each process can only be executed by processors in its own cluster. Finding a
schedule with minimal completion time in this case is NP-hard. We are interested in the gain of using n processors compared to using k clusters of
u processors each. The gain is defined as the ratio between the minimal completion time using processor clusters and the completion
time using the schedule with one process per processor. We present the optimal upper bound for this ratio as an analytical expression
in n, ν, k, and u. We also demonstrate how this result can be used when evaluating heuristic scheduling algorithms. (12 refs.)|
|Subject:||Computer Science \ Distributed Computing|
|Keywords:||computational complexity, parallel machines, parallel programming, scheduling|
|Note:||This article was written within the project "Optimal Combinatorial Bounds Comparing Multiprocessor Architectures"|
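The abstract's parallel profile vector and completion-time ratio can be made concrete with a small sketch. This is an illustrative example only, not the paper's analytical bound: it computes the total processor work implied by a profile vector ν (normalizing the one-process-per-processor completion time to T = 1) and derives a crude work-based lower estimate of the ratio studied in the paper. The function names and the example profile are hypothetical.

```python
def total_work(v, T=1.0):
    """Processor-time consumed by the program.

    v[i-1] is the fraction of the completion time T (one process per
    processor, n processors) during which exactly i processes execute,
    so the work is T * sum over i of i * v[i-1].
    """
    assert abs(sum(v) - 1.0) < 1e-9, "profile fractions must sum to 1"
    return T * sum((i + 1) * f for i, f in enumerate(v))

def work_based_ratio_estimate(v, k, u, T=1.0):
    """Crude lower estimate of (minimal cluster completion time) / T.

    The k*u processors can perform at most k*u units of work per time
    unit, and clustering can never beat the one-process-per-processor
    schedule, so the ratio is at least max(1, work / (k*u*T)).
    This is NOT the optimal upper bound derived in the paper.
    """
    return max(1.0, total_work(v, T) / (k * u * T))

# Example: n = 4 processes; 4 processes run during half the execution
# time and only 1 process runs during the other half.
v = [0.5, 0.0, 0.0, 0.5]
print(total_work(v))                          # 2.5 units of work
print(work_based_ratio_estimate(v, k=2, u=1)) # 1.25
```

Under this profile, two clusters of one processor each (k = 2, u = 1) must take at least 1.25 times as long as the four-processor schedule simply on work-conservation grounds; the paper's contribution is the matching optimal upper bound on this ratio, expressed analytically in n, ν, k, and u.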