MultiCore - Introduction
Hi,
I have started working on multicore programming. The subject is fascinating because all modern supercomputers are multicore (obviously). Multicore is introducing a paradigm shift in computing. Earlier, clock frequencies were increased to achieve higher speed, but a point has been reached where raising the clock frequency gives diminishing returns because of increased leakage and heating.
So, to keep up with Moore's law, speedup is now achieved by parallelizing the program and running it on multiple cores (Amdahl's law). It is worth noting that most programs are serial by nature, and the parallelizable portion is often small compared to the serial portion, so one might argue that there will be no significant speedup. But consider a massively parallel machine that runs the same program on different sets of data: there is a definite advantage in running those computations simultaneously. That, I believe, is what multicore programming is trying to achieve.
Good reading: the paper "Amdahl's Law in the Multicore Era" (Hill and Marty).
AMDAHL'S LAW
Amdahl's law defines speedup as
speedup = original execution time / enhanced execution time
MODERN VERSION
If a fraction f of a computation is sped up by a factor S, the overall speedup (of the entire computation) is
Speedup(f, S) = 1 / ((1 - f) + (f / S))
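The formula is easy to play with in code. Below is a minimal sketch (the function name `speedup` is mine, not from the paper) that evaluates Amdahl's law for a given fraction f and enhancement factor S:

```python
def speedup(f, S):
    """Amdahl's law: overall speedup when a fraction f of the
    computation is sped up by a factor S."""
    return 1.0 / ((1.0 - f) + f / S)

# Speeding up 90% of a program by 10x gives only about 5.3x overall,
# because the remaining 10% still runs at the original speed.
print(speedup(0.9, 10))
```

Notice how the serial part dominates: even a 10x enhancement on 90% of the work yields nowhere near 10x overall.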
N-CORE SCENARIO
Suppose a fraction f of a computation is "infinitely" parallelizable with no scheduling overhead, and (1 - f) is sequential. If we use n parallel cores, the speedup is
Speedup(f, n) = 1 / ((1 - f) + (f / n))
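A quick sketch of the n-core case (names are mine) makes the ceiling visible: as n grows, f/n vanishes and the speedup approaches 1/(1 - f), no matter how many cores you add.

```python
def amdahl_ncore(f, n):
    """Speedup on n cores when fraction f is perfectly parallel
    and (1 - f) is strictly sequential."""
    return 1.0 / ((1.0 - f) + f / n)

# With f = 0.95, the speedup is capped at 1/(1 - 0.95) = 20x,
# and each doubling of cores buys less and less.
for n in (2, 8, 64, 1024):
    print(n, amdahl_ncore(0.95, n))
```

This is exactly the pessimistic reading of Amdahl's law: the sequential fraction sets a hard limit on scalability.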
Amdahl argued that typical values of (1 - f) were large enough to favour single processors. But a drawback of Amdahl's law is that it assumes the parallelizable fraction of a program remains fixed. John Gustafson said that Amdahl's law did not do justice to massively parallel machines. His argument: a machine with greater parallel computation ability lets the computation operate on larger data sets in the same amount of time.
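Gustafson's observation is usually stated as the scaled-speedup formula, Speedup = (1 - f) + f * n, where the problem size grows with the machine. A minimal sketch (function name mine) contrasting it with Amdahl's fixed-size view:

```python
def gustafson(f, n):
    """Gustafson's scaled speedup: the problem size grows with n,
    so the parallel fraction f keeps all n cores busy."""
    return (1.0 - f) + f * n

# Unlike Amdahl's fixed ceiling, scaled speedup grows linearly
# with the number of cores: f = 0.9, n = 100 gives 90.1x.
print(gustafson(0.9, 100))
```

This is why massively parallel machines still pay off in practice: we rarely run the same fixed-size problem, we run bigger ones.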