LAB No. 09
Collective Communication Operations
EXERCISE:
Program 1: Write a program that solves a logic gate circuit. Execute the program on slave
nodes using their host names.
CODE:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Basic logic gates */
int xor_gate(int a, int b) { return a ^ b; }
int or_gate(int a, int b)  { return a | b; }
int and_gate(int a, int b) { return a & b; }

int main(int argc, char* argv[]) {
    int rank, size;
    int A, B, C;
    int X1, X2, X3, X;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Requires 5 processes: rank 0 is the master, ranks 1-4 evaluate the gates. */
    if (rank == 0) {
        /* Master: read the inputs and send them to the gate processes. */
        printf("Enter values for A, B, and C (0 or 1): ");
        scanf("%d %d %d", &A, &B, &C);
        MPI_Send(&A, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&B, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Send(&A, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
        MPI_Send(&B, 1, MPI_INT, 2, 1, MPI_COMM_WORLD);
        MPI_Send(&C, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);
    }
    if (rank == 1) {
        /* First gate: X1 = A XOR B, forwarded to rank 3. */
        MPI_Recv(&A, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&B, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X1 = xor_gate(A, B);
        printf("Process %d: X1 = %d\n", rank, X1);
        MPI_Send(&X1, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);
    }
    if (rank == 2) {
        /* Second gate: X3 = A AND B, forwarded to rank 4. */
        MPI_Recv(&A, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&B, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X3 = and_gate(A, B);
        printf("Process %d: X3 = %d\n", rank, X3);
        MPI_Send(&X3, 1, MPI_INT, 4, 0, MPI_COMM_WORLD);
    }
    if (rank == 3) {
        /* Third gate: X2 = X1 OR C, forwarded to rank 4. */
        MPI_Recv(&X1, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&C, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X2 = or_gate(X1, C);
        printf("Process %d: X2 = %d\n", rank, X2);
        MPI_Send(&X2, 1, MPI_INT, 4, 0, MPI_COMM_WORLD);
    }
    if (rank == 4) {
        /* Final gate: X = X2 XOR X3. */
        MPI_Recv(&X2, 1, MPI_INT, 3, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&X3, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X = xor_gate(X2, X3);
        printf("Process %d: Final Output X = %d\n", rank, X);
    }

    MPI_Finalize();
    return 0;
}
OUTPUT:
mpicc -o lab9t1 lab9t1.c
mpirun -np 5 --hostfile hosts ./lab9t1
Enter values for A, B, and C (0 or 1): 1 0 1
Process 1: X1 = 1
Process 2: X3 = 0
Process 3: X2 = 1
Process 4: Final Output X = 1
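Because the lab's topic is collective communication, it is worth noting that the five MPI_Send calls issued by rank 0 above could be replaced by a single collective call. The snippet below is only a minimal sketch of that idea and is not part of the lab's listing; the array name inputs and the final print statement are illustrative assumptions.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank;
    int inputs[3];  /* inputs[0] = A, inputs[1] = B, inputs[2] = C */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        printf("Enter values for A, B, and C (0 or 1): ");
        scanf("%d %d %d", &inputs[0], &inputs[1], &inputs[2]);
    }

    /* One collective call replaces the five point-to-point sends on the root:
       after the broadcast every rank holds A, B, and C. */
    MPI_Bcast(inputs, 3, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d sees A=%d B=%d C=%d\n",
           rank, inputs[0], inputs[1], inputs[2]);

    MPI_Finalize();
    return 0;
}

With a broadcast, every process ends up with the same A, B, and C, so the gate processes no longer need matching MPI_Recv calls for the inputs.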
Program 2: Write a program that performs matrix multiplication. Execute the program on slave
machines.
CODE:
        /* Free the full matrices allocated on the root, then the per-process buffers. */
        free(A);
        free(B);
        free(C);
    }
    free(local_A);
    free(local_C);

    MPI_Finalize();
    return 0;
}
OUTPUT:
mpicc -o lab9t2 lab9t2.c
mpirun -np 4 ./lab9t2
Matrix A:
1.000000 2.000000
3.000000 4.000000
Matrix B:
1.000000 2.000000
3.000000 4.000000
Matrix C (Result of A * B):
7.000000 10.000000
15.000000 22.000000
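For reference, below is a minimal, self-contained sketch of how MPI_Scatter, MPI_Bcast, and MPI_Gather can divide such a multiplication across processes. The matrix size N, the sample fill values, and the even row split are assumptions made for illustration; only the names A, B, C, local_A, and local_C follow the listing above, and the sketch does not reproduce the 2 x 2 run shown in the output.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 4   /* assumed matrix size; must be divisible by the number of processes */

int main(int argc, char* argv[]) {
    int rank, size;
    double *A = NULL, *B = NULL, *C = NULL;
    double *local_A, *local_C;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                   /* rows handled by each process */
    B = malloc(N * N * sizeof(double));    /* every rank needs all of B */
    local_A = malloc(rows * N * sizeof(double));
    local_C = malloc(rows * N * sizeof(double));

    if (rank == 0) {
        /* Root builds A and B (filled with i + j as sample data). */
        A = malloc(N * N * sizeof(double));
        C = malloc(N * N * sizeof(double));
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i * N + j] = i + j;
                B[i * N + j] = i + j;
            }
    }

    /* Distribute rows of A and broadcast all of B. */
    MPI_Scatter(A, rows * N, MPI_DOUBLE, local_A, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(B, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Each process multiplies its block of rows by B. */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < N; j++) {
            local_C[i * N + j] = 0.0;
            for (int k = 0; k < N; k++)
                local_C[i * N + j] += local_A[i * N + k] * B[k * N + j];
        }

    /* Collect the partial results back into C on the root. */
    MPI_Gather(local_C, rows * N, MPI_DOUBLE, C, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Matrix C (Result of A * B):\n");
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++)
                printf("%f ", C[i * N + j]);
            printf("\n");
        }
        free(A);
        free(C);
    }
    free(B);
    free(local_A);
    free(local_C);

    MPI_Finalize();
    return 0;
}

Scattering rows of A and broadcasting all of B keeps each process's work independent, so the only communication is the initial distribution and the final gather.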
CONCLUSION:
In this lab we explored interprocess communication in MPI, with a focus on the collective
operations Broadcast, Scatter, and Gather. The first program distributed the evaluation of a
logic gate circuit across processes using point-to-point sends and receives, while the second
parallelized matrix multiplication. These exercises highlighted efficient data distribution and
aggregation across processes, enabling faster problem solving in parallel computing.