Foundations of Multithreaded, Parallel, and Distributed Programming covers, and then applies, the core concepts and techniques needed for an introductory course in this subject. Its emphasis is on the practice and application of parallel systems, using real-world examples throughout. Greg Andrews teaches the fundamental concepts of multithreaded, parallel, and distributed computing and relates them to implementation and performance. He presents an appropriate breadth of topics and supports the discussion with a consistent emphasis on performance.
Contents

1. The Concurrent Computing Landscape. The Essence of Concurrent Programming. Hardware Architectures. Single Processor Machines. Shared-Memory Multiprocessors. Multicomputers. Networks. Applications and Programming Styles. Iterative Parallelism: Matrix Multiplication. Recursive Parallelism: Adaptive Quadrature. Producers and Consumers: Unix Pipes. Clients and Servers: Remote Files. Peers: Distributed Matrix Multiplication. Summary of Programming Notation. Declarations. Sequential Statements. Concurrent Statements and Process Declarations. Comments.

I. SHARED VARIABLE PROGRAMMING

2. Processes and Synchronization. States, Actions, Histories, and Properties. Parallelization: Finding Patterns in Files. Synchronization: The Maximum of an Array. Atomic Actions and Await Statements. Fine-Grained Atomicity. Specifying Synchronization: The Await Statement. Finding Patterns in a File Revisited. A Synopsis of Axiomatic Semantics. Formal Logical Systems. A Programming Logic. Semantics of Concurrent Execution. Techniques for Avoiding Interference. Disjoint Variables. Weakened Assertions. Global Invariants. Synchronization. An Example: The Array Copy Problem Revisited. Safety and Liveness Properties. Proving Safety Properties. Scheduling Policies and Fairness.

3. Locks and Barriers. The Critical Section Problem. Critical Sections: Spin Locks. Test and Set. Test and Test and Set. Implementing Await Statements. Critical Sections: Fair Solutions. The Tie-Breaker Algorithm. The Ticket Algorithm. The Bakery Algorithm. Barrier Synchronization. Shared Counter. Flags and Coordinators. Symmetric Barriers. Data Parallel Algorithms. Parallel Prefix Computations. Operations on Linked Lists. Grid Computations: Laplace's Equation. Synchronous Multiprocessors. Parallel Computing with a Bag of Tasks. Matrix Multiplication. Adaptive Quadrature.

4. Semaphores. Syntax and Semantics. Basic Problems and Techniques. Critical Sections: Mutual Exclusion. Barriers: Signaling Events. Producers and Consumers: Split Binary Semaphores. Bounded Buffers: Resource Counting. The Dining Philosophers. Readers and Writers. Readers/Writers as an Exclusion Problem. Readers/Writers Using Condition Synchronization. The Technique of Passing the Baton. Alternative Scheduling Policies. Resource Allocation and Scheduling. Problem Definition and General Solution Pattern. Shortest-Job-Next Allocation. Case Study: Pthreads. Thread Creation. Semaphores. Example: A Simple Producer and Consumer.

5. Monitors. Syntax and Semantics. Mutual Exclusion. Condition Variables. Signaling Disciplines. Additional Operations on Condition Variables. Synchronization Techniques. Bounded Buffers: Basic Condition Synchronization. Readers and Writers: Broadcast Signal. Shortest-Job-Next Allocation: Priority Wait. Interval Timer: Covering Conditions. The Sleeping Barber: Rendezvous. Disk Scheduling: Program Structures. Scheduler as a Separate Monitor. Scheduler as an Intermediary. Scheduler as a Nested Monitor. Case Study: Java. The Threads Class. Synchronized Methods. Parallel Readers/Writers. Exclusive Readers/Writers. True Readers/Writers. Case Study: Pthreads. Locks and Condition Variables. Example: Summing the Elements of a Matrix.

6. Implementations. A Single-Processor Kernel. A Multiprocessor Kernel. Implementing Semaphores in a Kernel. Implementing Monitors in a Kernel. Implementing Monitors Using Semaphores.

II. DISTRIBUTED PROGRAMMING

7. Message Passing. Asynchronous Message Passing. Filters: A Sorting Network. Clients and Servers. Active Monitors. A Self-Scheduling Disk Driver. File Servers: Conversational Continuity. Interacting Peers: Exchanging Values. Synchronous Message Passing. Case Study: CSP. Communication Statements. Guarded Communication. Example: The Sieve of Eratosthenes. Case Study: Linda. Tuple Space and Process Interaction. Example: Prime Numbers with a Bag of Tasks. Case Study: MPI. Basic Functions. Global Communication and Synchronization. Case Study: Java. Networks and Sockets. Example: A Remote File Reader.

8. RPC and Rendezvous. Remote Procedure Call. Synchronization in Modules. A Time Server. Caches in a Distributed File System. A Sorting Network of Merge Filters. Interacting Peers: Exchanging Values. Rendezvous. Input Statements. Client/Server Examples. A Sorting Network of Merge Filters. Interacting Peers: Exchanging Values. A Multiple Primitives Notation. Invoking and Servicing Operations. Examples. Readers/Writers Revisited. Encapsulated Access. Replicated Files. Case Study: Java. Remote Method Invocation. Example: A Remote Database. Case Study: Ada. Tasks. Rendezvous. Protected Types. Example: The Dining Philosophers. Case Study: SR. Resources and Globals. Communication and Synchronization. Example: Critical Section Simulation.

9. Paradigms for Process Interaction. Managers/Workers (Distributed Bag of Tasks). Sparse Matrix Multiplication. Adaptive Quadrature Revisited. Heartbeat Algorithms. Image Processing: Region Labeling. Cellular Automata: The Game of Life. Pipeline Algorithms. A Distributed Matrix Multiplication Pipeline. Matrix Multiplication by Blocks. Probe/Echo Algorithms. Broadcast in a Network. Computing the Topology of a Network. Broadcast Algorithms. Logical Clocks and Event Ordering. Distributed Semaphores. Token-Passing Algorithms. Distributed Mutual Exclusion. Termination Detection in a Ring. Termination Detection in a Graph. Replicated Servers. Distributed Dining Philosophers. Decentralized Dining Philosophers.

10. Implementations. Asynchronous Message Passing. Shared-Memory Kernel. Distributed Kernel. Synchronous Message Passing. Direct Communication Using Asynchronous Messages. Guarded Communication Using a Clearing House. RPC and Rendezvous. RPC in a Kernel. Rendezvous Using Asynchronous Message Passing. Multiple Primitives in a Kernel. Distributed Shared Memory. Implementation Overview. Page Consistency Protocols.

III. PARALLEL PROGRAMMING. Speedup and Efficiency. Overheads and Challenges.

11. Scientific Computing. Grid Computations. Laplace's Equation. Sequential Jacobi Iteration. Shared Variable Program. Message Passing Program. Red/Black Successive Over-Relaxation. Multigrid Methods. Particle Computations. The Gravitational N-Body Problem. Shared Variable Program. Message Passing Programs. Approximate Methods. Matrix Computations. Gaussian Elimination. LU Decomposition. Shared Variable Program. Message Passing Program.

12. Languages, Compilers, Libraries, and Tools. Parallel Programming Libraries. Case Study: Pthreads. Case Study: MPI. Case Study: OpenMP. Parallelizing Compilers. Dependence Analysis. Program Transformations. Other Programming Models. Imperative Languages. Coordination Languages. Data Parallel Languages. Functional Languages. Abstract Models. Case Study: High Performance Fortran (HPF). Parallel Programming Tools. Performance Measurement and Visualization. Metacomputers and Metacomputing. Case Study: The Globus Toolkit.
Features

- Emphasizes how to solve problems, with correctness the primary concern and performance an important, but secondary, concern.
- Includes a number of case studies covering the Pthreads, MPI, and OpenMP libraries, as well as programming languages such as Java, Ada, High Performance Fortran, Linda, Occam, and SR.
- Provides examples using Java syntax and shows how Java handles monitors, sockets, and remote method invocation; a small illustrative sketch follows this list.
- Covers current programming techniques such as semaphores, locks, barriers, monitors, message passing, and remote invocation.
- Presents concrete examples as complete, executable programs, both shared-variable and distributed.
- Sample applications include scientific computing and distributed systems.
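As a taste of the monitor style the Java case studies cover, here is a minimal sketch, written for this listing rather than taken from the book, of a one-slot bounded buffer: the synchronized methods provide mutual exclusion, and wait/notifyAll provide condition synchronization.

    // Minimal sketch (not from the book): a one-slot bounded buffer in
    // Java's monitor style. synchronized gives mutual exclusion;
    // wait/notifyAll give condition synchronization.
    class Buffer {
        private int data;             // the single buffer slot
        private boolean full = false; // does the slot hold a value?

        // deposit blocks while the slot is occupied
        public synchronized void deposit(int value) throws InterruptedException {
            while (full)              // re-test the condition after every wakeup
                wait();
            data = value;
            full = true;
            notifyAll();              // wake any waiting consumer
        }

        // fetch blocks while the slot is empty
        public synchronized int fetch() throws InterruptedException {
            while (!full)
                wait();
            full = false;
            notifyAll();              // wake any waiting producer
            return data;
        }
    }

A producer thread calling deposit and a consumer thread calling fetch then alternate automatically; the book develops bounded buffers with semaphores (Chapter 4) and with monitor condition synchronization (Chapter 5).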

Product details

ISBN: 9780201357523
Published: 2000-01-13
Publisher: Pearson
Weight: 1260 g
Height: 235 mm
Width: 191 mm
Depth: 40 mm
Audience level: U, 05
Language: English
Format: Paperback
Pages: 688

Biographical note

Gregory Andrews received a B.S. degree in Mathematics from Stanford University in 1969 and a Ph.D. degree in Computer Science from the University of Washington in 1974. From 1974 to 1979 he was an Assistant Professor at Cornell University. Since 1979 he has been at the University of Arizona, where he is currently Professor of Computer Science. He chaired the department from 1986 to 1993, and in 1986 he received a distinguished teaching award.

Greg has been on the editorial board of Information Processing Letters since 1979. He was the general chair of the Twelfth ACM Symposium on Operating Systems Principles in 1989 and has served on the program committees of numerous conferences. From 1988 to 1992 he served on advisory committees for the computing directorate of the National Science Foundation, and since 1991 he has been on the Board of Directors of the Computing Research Association (CRA).

Greg's research interests include all aspects of concurrent programming. A long-term project has been the design and implementation of the SR programming language. Current work focuses on the development of Filaments, a software package that provides efficient fine-grain parallelism on a variety of parallel machines.
