Thus, Amdahl's Law can be used to calculate an upper bound on the speedup of a parallel program. The total execution time T(f, n, p_app) for different numbers of cores can also be estimated using Amdahl's law. It should also be noted that the execution time depends on the compression parameter q.
* Find articles and papers that mention Amdahl's law, including the original articles.
Section 2 presents an overview of Amdahl's law. In Section 3, we briefly describe the different types of multicore architectures.
Technically, discussions of a migration option can be couched in terms of Amdahl's law: keep the sequential sections of code in a common language such as C or C++, and express the parallel sections so they can be migrated should it prove necessary to do so in the future.
In the late 1980s, key concerns included whether Amdahl's Law would limit the number of processors that could be used efficiently to 100 or so (see the sidebar Promise and Limits of Amdahl's Law and Moore's Law).
Amdahl's Law was a state-of-the-art analytical model that guided software developers to evaluate the actual speedup that could be achieved by using parallel programs, or hardware designers to draw much more elaborate microarchitecture and components.
In a nutshell, this is the argument behind Amdahl's law, which notes that the speedup of a program using multiple processors is limited by the time spent in any sequential portions of the code.
These computers let programmers dodge hard problems by running them on one processor and blaming Amdahl's law for the resulting poor performance.
The performance obtained by this typical parallelization is limited according to Amdahl's law [8][9][10][11].
A danger with parallel programming, according to Amdahl's law [1], is that sequential bottlenecks prohibit full parallelization.