
https://en.wikipedia.org/wiki/Amdahl's_law

Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours using a single processor core, and a particular part of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (p = 0.95) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence, the theoretical speedup is limited to at most 20 times (1/(1 − p) = 20). For this reason, parallel computing with many processors is useful only for highly parallelizable programs.
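
To make the arithmetic concrete, here is a minimal C sketch of my own (not from the Wikipedia article) that evaluates the speedup S(N) = 1/((1 − p) + p/N) for the example above and shows it approaching the 1/(1 − p) = 20 ceiling:

    #include <stdio.h>

    /* Amdahl's law: overall speedup with N processors when a fraction p
       of the work can be parallelized and the rest stays serial. */
    static double amdahl_speedup(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        const double p = 0.95;               /* parallelizable fraction from the example */
        const int procs[] = { 1, 4, 16, 256, 4096 };
        for (int i = 0; i < 5; i++)
            printf("N = %4d   speedup = %5.2f\n", procs[i], amdahl_speedup(p, procs[i]));
        printf("limit as N grows: 1/(1 - p) = %.1f\n", 1.0 / (1.0 - p));
        return 0;
    }

No matter how large N gets, the serial 5% caps the speedup just below 20.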

Computer Systems: A Programmer's Perspective, Second Edition

Gene Amdahl, one of the early pioneers in computing, made a simple but insightful
observation about the effectiveness of improving the performance of one part
of a system. This observation has come to be known as Amdahl's law. The main
idea is that when we speed up one part of a system, the effect on the overall
system performance depends on both how significant this part was and how much
it sped up.
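
For reference (the excerpt applies it below without restating it), the general form of Amdahl's law in the notation the book uses: if a part that accounts for a fraction α of the original running time T_old is sped up by a factor of k, the new time is T_new = (1 − α)T_old + (α T_old)/k, so the overall speedup is S = T_old/T_new = 1/((1 − α) + α/k). Letting k grow without bound gives the limiting speedup 1/(1 − α).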
 
So, for example, if we can speed up 60% of the system to the point where it
requires close to no time, our net speedup will still only be 1/0.4 = 2.5. We saw
this performance with our dictionary program as we replaced insertion sort by
quicksort. The initial version spent 173.05 of its 177.57 seconds performing
insertion sort, giving α = 0.975. With quicksort, the time spent sorting becomes
negligible, giving a predicted speedup of 39.3. In fact, the actual measured
speedup was a bit less, 177.57/4.72 = 37.6, due to inaccuracies in the profiling
measurements. We were able to gain a large speedup because sorting constituted
a very large fraction of the overall execution time.
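
The numbers above can be checked with a short C sketch (mine, not the book's code) that evaluates S = 1/((1 − α) + α/k):

    #include <stdio.h>

    /* General Amdahl's law: fraction alpha of the time is sped up by factor k. */
    static double speedup(double alpha, double k) {
        return 1.0 / ((1.0 - alpha) + alpha / k);
    }

    int main(void) {
        /* Speeding up 60% of the system to "close to no time" (k very large). */
        printf("60%% case:             %.2f\n", speedup(0.60, 1e9));      /* ~2.50 */

        /* Dictionary program: 173.05 s of its 177.57 s spent sorting. */
        double alpha = 173.05 / 177.57;                                   /* ~0.975 */
        printf("predicted 1/(1 - a):  %.1f\n", 1.0 / (1.0 - alpha));      /* ~39.3 */
        printf("measured 177.57/4.72: %.1f\n", 177.57 / 4.72);            /* ~37.6 */
        return 0;
    }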
Amdahl’s law describes a general principle for improving any process. In
addition to applying to speeding up computer systems, it can guide a company
trying to reduce the cost of manufacturing razor blades, or a student trying to
improve his or her grade point average. Perhaps it is most meaningful in the world
of computers, where we routinely improve performance by factors of 2 or more.
Such high factors can only be achieved by optimizing large parts of a system.
5.15 Summary
Although most presentations on code optimization describe how compilers can
generate efficient code, much can be done by an application programmer to assist
the compiler in this task. No compiler can replace an inefficient algorithm or data
structure by a good one, and so these aspects of program design should remain
a primary concern for programmers. We also have seen that optimization blockers,
such as memory aliasing and procedure calls, seriously restrict the ability of
compilers to perform extensive optimizations. Again, the programmer must take
primary responsibility for eliminating these. These should simply be considered
parts of good programming practice, since they serve to eliminate unneeded work.
Tuning performance beyond a basic level requires some understanding of the
processor's microarchitecture, describing the underlying mechanisms by which
the processor implements its instruction set architecture. For the case of
out-of-order processors, just knowing something about the operations, latencies,
and issue times of the functional units establishes a baseline for predicting
program performance.
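
To illustrate the memory-aliasing blocker mentioned above, here is a tiny sketch of my own (not code from the book): because xp and yp may refer to the same object, the compiler cannot collapse the two updates into a single *xp += 2 * (*yp); it must perform both reads and both writes.

    /* Memory aliasing as an optimization blocker (illustrative sketch).
       If xp == yp, the second statement reads the value already updated
       by the first, so the "obvious" rewrite would change the result. */
    void add_twice(long *xp, long *yp) {
        *xp += *yp;
        *xp += *yp;
    }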
 
Amdahl’s law describes a general principle for improving any process. In
addition to applying to speeding up computer systems, it can guide a company
trying to reduce the cost of manufacturing razor blades, or a student trying to
improve his or her gradepoint average. Perhaps it is most meaningful in the world
of computers, where we routinely improve performance by factors of 2 or more.
Such high factors can only be achieved by optimizing large parts of a system.