Parallelizing a Transition Fault Testing Algorithm
Hi,
I am working towards parallelizing one of my own sequential programs. The program generates test patterns for delay testing (testing of the rise and fall delays in a circuit). It takes a long time to run and is massively parallelizable, so parallelization is a good way to achieve speed-up. I gathered a few points on parallelizing the program from the book "Parallel Programming for Multicore and Cluster Systems" by Thomas Rauber and Gudula Rünger, and I present them here.
Aim: To parallelize the transition fault testing program so that it runs on more than one processor
Steps involved in parallelization
1. Decomposition of the computations:
GOAL: the goal of task decomposition is to keep all processors busy at all times.
a. The computations of the sequential algorithm are decomposed into tasks, and the dependencies between the tasks are determined. Tasks are the smallest units of parallelism.
b. A task may involve accesses to a shared address space or may execute message-passing operations.
GRANULARITY: the computation time of a task. This must be taken into consideration when dividing the work into tasks; the granularity must be coarse enough to compensate for the scheduling overhead.
The decomposition step must find a good compromise between the number of tasks and their granularity. A sketch of such a decomposition for the fault list is given below.
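For my program the natural decomposition is over the transition fault list: test generation for each fault is largely independent, so a group of faults can form one task. The C++ sketch below shows the idea; Fault, Task, decompose and the chunk size are placeholders I made up for illustration, not names from my actual code.

    // Hypothetical sketch: decomposing the fault list into coarse-grained tasks.
    #include <cstddef>
    #include <vector>

    struct Fault {
        int gate_id;   // location of the transition fault
        bool rising;   // true = slow-to-rise, false = slow-to-fall
    };

    struct Task {
        std::size_t first;  // index of the first fault in this task
        std::size_t last;   // one past the last fault in this task
    };

    // Group 'chunk' consecutive faults into one task, so each task is coarse
    // enough to amortize the scheduling overhead (the granularity trade-off).
    std::vector<Task> decompose(const std::vector<Fault>& faults, std::size_t chunk) {
        std::vector<Task> tasks;
        for (std::size_t i = 0; i < faults.size(); i += chunk) {
            std::size_t end = (i + chunk < faults.size()) ? i + chunk : faults.size();
            tasks.push_back(Task{i, end});
        }
        return tasks;
    }

Choosing the chunk size is exactly the compromise mentioned above: larger chunks mean less scheduling overhead, smaller chunks give more opportunities to balance the load.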
2. ASSIGNMENT OF TASKS TO PROCESSES OR THREADS
GOAL: partition the tasks among processes or threads so that good load balancing is achieved.
a. A process or a thread is a flow of control executed by a single processor or core.
Usually the number of processes or threads is the same as the number of cores.
b. The number of memory accesses and communication operations for data exchange must be taken into consideration. Example: assign two tasks that work on the same data set to the same thread, since this leads to good cache usage.
c. SCHEDULING: the assignment of tasks to processes or threads is called scheduling. It can be
static scheduling
dynamic scheduling
A short sketch contrasting the two follows below.
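With OpenMP the choice between static and dynamic scheduling is a one-word change in the loop directive. In this sketch, generate_test_for_fault() is a stand-in I invented for the real per-fault test generator, and the chunk size of 8 is just an example.

    // Hypothetical sketch: static vs. dynamic assignment of per-fault work
    // to threads using OpenMP.
    #include <omp.h>
    #include <vector>

    struct Fault { int gate_id; bool rising; };

    // Placeholder for the real transition fault test generator.
    void generate_test_for_fault(const Fault& f) { (void)f; }

    void run_static(const std::vector<Fault>& faults) {
        // Static scheduling: iterations are split into equal chunks once,
        // before execution; lowest overhead, but load imbalance if some
        // faults take much longer to process than others.
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < (long)faults.size(); ++i)
            generate_test_for_fault(faults[i]);
    }

    void run_dynamic(const std::vector<Fault>& faults) {
        // Dynamic scheduling: idle threads grab the next chunk of 8 faults
        // at run time; better load balance at the price of some overhead.
        #pragma omp parallel for schedule(dynamic, 8)
        for (long i = 0; i < (long)faults.size(); ++i)
            generate_test_for_fault(faults[i]);
    }

Since the effort needed to generate a test varies a lot from fault to fault (some faults are easy, some need long searches, some are untestable), dynamic scheduling is probably the better fit for my program.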
3. Mapping of processes or threads to physical cores - the mapping is usually done by the operating system, but it can be supported by program statements. The main goal is to get an equal utilization of the processors or cores while keeping the communication between the processors as small as possible. A sketch of how the mapping can be fixed from the program is given below.
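One way to influence the mapping from the program on Linux is to pin each thread to its own core; the sketch below does this for OpenMP threads with pthread_setaffinity_np (a Linux-specific call). The same effect can usually be requested without code through environment variables such as OMP_PROC_BIND.

    // Hypothetical sketch (Linux-specific): pin each OpenMP thread to one core
    // so the operating system keeps the thread-to-core mapping fixed.
    #include <omp.h>
    #include <pthread.h>
    #include <sched.h>

    void pin_threads_to_cores() {
        #pragma omp parallel
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(omp_get_thread_num(), &set);  // thread i runs on core i
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }
    }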
The next step is to analyze the program so as to implement these steps and to write detailed documentation. A job that is started properly is half done :)
Abishek Ramdas
NYU Poly