This paper presents an unbalanced tree search (UTS) benchmark designed to evaluate the performance and ease of programming for parallel applications requiring dynamic load balancing. We describe algorithms for building a variety of unbalanced search trees to simulate different forms of load imbalance. We created versions of UTS in two parallel languages, OpenMP and Unified Parallel C (UPC), using work stealing as the mechanism for reducing load imbalance. We benchmarked the performance of UTS on various parallel architectures, including shared-memory systems and PC clusters. We found it simple to implement UTS in both UPC and OpenMP, due to UPC's shared-memory abstractions. Results show that both UPC and OpenMP can support efficient dynamic load balancing on shared-memory architectures. However, UPC cannot alleviate the underlying communication costs of distributed-memory systems. Since dynamic load balancing requires intensive communication, performance portability remains difficult for applications such as UTS, and performance degrades on PC clusters. By varying key work stealing parameters, we expose important tradeoffs between the granularity of load balance, the degree of parallelism, and communication costs.
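The kind of unbalanced tree generation the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the actual UTS code: the parameters (`b0`, `q`, `m`), the SHA-1-based child derivation, and the function names are assumptions modeled on the benchmark's general design, in which each node's child count is derived deterministically from a cryptographic digest so that the same tree is reproducible on any machine and any number of processors.

```python
# Hedged sketch of hash-based unbalanced tree generation (not the official
# UTS implementation). Each node is identified by a SHA-1 digest; a node's
# number of children is a deterministic function of that digest, giving a
# reproducible but statistically unbalanced tree.
import hashlib

def child_digest(digest: bytes, i: int) -> bytes:
    # Child i's descriptor is the SHA-1 of the parent's digest plus the index.
    return hashlib.sha1(digest + i.to_bytes(4, "big")).digest()

def num_children(digest: bytes, q: float = 0.2, m: int = 4) -> int:
    # Binomial-style rule: interpret the leading 4 digest bytes as a uniform
    # value in [0, 1); with probability q the node has m children, else none.
    # q * m < 1 keeps the expected subtree size finite (subcritical).
    u = int.from_bytes(digest[:4], "big") / 2**32
    return m if u < q else 0

def tree_size(seed: int = 42, b0: int = 8) -> int:
    # Sequential depth-first traversal counting every node; the parallel
    # versions instead let idle threads steal subtrees from this stack.
    root = hashlib.sha1(seed.to_bytes(4, "big")).digest()
    stack = [child_digest(root, i) for i in range(b0)]
    count = 1  # the root itself
    while stack:
        d = stack.pop()
        count += 1
        for i in range(num_children(d)):
            stack.append(child_digest(d, i))
    return count
```

Because the child counts come from a hash rather than a stateful random number generator, any subtree can be regenerated from its root digest alone, which is what makes work stealing straightforward: a stolen node carries everything needed to expand its subtree.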