Parallel processing
Parallel processing by the brain
Parallel processing is the ability of the brain to simultaneously process incoming stimuli of differing quality. It is most evident in vision, where the brain divides what it sees into four components: color, motion, shape, and depth. These components are analyzed individually and then compared with stored memories, which helps the brain identify what is being viewed. The brain then combines them into the single field of view that is seen and comprehended, a continual and seamless operation. Some experimental psychologists have linked parallel processing to the Stroop effect.
Parallel processing in computers
Parallel processing is the simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads. Ideally, parallel processing makes a program run faster because more engines (CPUs or cores) are running it. In practice, it is often difficult to divide a program in such a way that separate CPUs or cores can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several, multi-core processor chips are becoming the norm, and some computers have thousands of CPUs.
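As a minimal sketch of this idea (the choice of OpenMP as the threading API and the summation task are illustrative assumptions, not something the text specifies), the following C program divides the iterations of a loop across the available cores:

```c
/* Shared-memory parallelism sketch using OpenMP.
   Compile with: gcc -fopenmp sum.c -o sum */
#include <stdio.h>
#include <omp.h>

#define N 100000000L

int main(void) {
    double sum = 0.0;
    /* Each core executes a different portion of the loop; the
       reduction clause gives every thread a private partial sum,
       combined at the end, so the threads do not interfere. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        sum += 1.0 / (double)(i + 1);
    }
    printf("harmonic sum = %f (up to %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}
```

The reduction clause illustrates exactly the difficulty mentioned above: without it, all threads would update `sum` at once and interfere with each other, producing a wrong answer.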
With single-CPU, single-core computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires sophisticated software known as distributed processing software.
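As an illustrative sketch of such software (MPI, the Message Passing Interface, is assumed here as the distributed-processing layer; the text itself names no specific package), each networked computer runs one copy of the following C program and learns its role from its rank:

```c
/* Distributed processing sketch using MPI.
   Compile with: mpicc hello.c -o hello
   Run across networked machines, e.g.:
   mpirun -np 4 -hostfile hosts ./hello */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                 /* join the computation  */
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count   */
    printf("process %d of %d reporting\n", rank, size);
    MPI_Finalize();                         /* leave the computation */
    return 0;
}
```

In a real program, each rank would use its id to select the portion of the data it is responsible for.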
Note that parallelism differs from concurrency. Concurrency, a term used in the operating-systems and database communities, refers to the property of a system in which multiple tasks remain logically active and make progress at the same time: their execution order is interleaved, creating an illusion of simultaneously executing instructions.[1][2] Parallelism, on the other hand, is a term typically used by the supercomputing community to describe executions that physically occur simultaneously, with the goal of solving a problem in less time or solving a larger problem in the same time. Parallelism exploits concurrency.[1]
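The distinction can be made concrete in code. In the following C sketch (the two tasks are hypothetical stand-ins), the first half interleaves two tasks on a single thread, which is concurrency's illusion of simultaneity, while the second half creates two threads that the operating system may schedule onto separate cores, which is physical parallelism:

```c
/* Concurrency vs. parallelism sketch.
   Compile with: gcc -pthread demo.c -o demo */
#include <stdio.h>
#include <pthread.h>

void *task(void *name) {
    for (int i = 0; i < 3; i++)
        printf("%s step %d\n", (char *)name, i);
    return NULL;
}

int main(void) {
    /* Concurrency: one thread interleaves two logically active
       tasks; neither ever runs at the same instant as the other,
       yet both make progress. */
    for (int i = 0; i < 3; i++) {
        printf("task A step %d\n", i);
        printf("task B step %d\n", i);
    }

    /* Parallelism: two threads that can physically execute
       simultaneously on separate cores. */
    pthread_t a, b;
    pthread_create(&a, NULL, task, "task A");
    pthread_create(&b, NULL, task, "task B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

On a multi-core machine the second half's output may interleave unpredictably, which is the scheduler exploiting the concurrency of the two tasks.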
Parallel processing is also called "parallel computing." In the quest for cheaper computing alternatives, parallel processing provides a viable option: sophisticated distributed computing software can make effective use of idle processor cycles across a network. The term parallel processing covers a large class of techniques that perform data-processing tasks simultaneously in order to increase the computational speed of a computer system.
Advantages
- Faster execution time, and therefore higher throughput.
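How much faster can be quantified by Amdahl's law (an addition here for context; the article itself gives no formula): if a fraction s of a program is inherently serial and the rest parallelizes perfectly across p processors, the speedup is

```latex
% Amdahl's law: speedup on p processors with serial fraction s.
S(p) = \frac{1}{s + \frac{1 - s}{p}}
% Worked example: s = 0.1, p = 8 gives
% S(8) = 1 / (0.1 + 0.9 / 8) \approx 4.7,
% well short of the ideal 8x.
```

The worked example shows why dividing a program effectively, as discussed above, matters as much as adding hardware.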
Disadvantages
- More hardware is required, and with it more power.
- Not well suited to low-power and mobile devices.
Notes
- ^ a b Mattson, Timothy. "How to Sound like a Parallel Programming Expert: Introducing Concurrency and Parallelism". Intel, 2009.
- ^ Lin, Calvin; Snyder, Larry. Principles of Parallel Programming. Pearson, 2008. ISBN 9780321487902.