
NAME: ZAID BIN ASIM ROLL NO: 2021F-BCE-012

LAB no 09
Collective Communication Operations

OBJECT: Explain the concept of the collective communication commands (Broadcast, Scatter, and Gather).
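
As a quick illustration of the Broadcast concept before the exercises: MPI_Bcast copies a buffer from one root process to every other process in the communicator, replacing a loop of individual point-to-point sends. A minimal, self-contained sketch (the value 42 is only an example chosen for illustration):

/* Minimal MPI_Bcast sketch: rank 0 shares a single integer with all ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        value = 42;   /* example value; only the root sets it */
    /* Every rank calls MPI_Bcast; after the call, all ranks hold rank 0's value. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d received value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}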

EXERCISE:

Program 1: Write a program that solves the logic gate circuit. Execute the program on the slave
nodes using their host names.

CODE:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Basic two-input logic gates used by the circuit. */
int xor_gate(int a, int b) {
    return a ^ b;
}
int or_gate(int a, int b) {
    return a | b;
}
int and_gate(int a, int b) {
    return a & b;
}

int main(int argc, char* argv[]) {
    int rank, size;
    int A, B, C;
    int X1, X2, X3, X;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank 0 reads the circuit inputs and forwards them to the gate processes.
       The program expects exactly 5 processes (ranks 0-4). */
    if (rank == 0) {
        printf("Enter values for A, B, and C (0 or 1): ");
        scanf("%d %d %d", &A, &B, &C);
        MPI_Send(&A, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&B, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Send(&A, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
        MPI_Send(&B, 1, MPI_INT, 2, 1, MPI_COMM_WORLD);
        MPI_Send(&C, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);
    }
    /* Rank 1: first gate, X1 = A XOR B. */
    if (rank == 1) {
        MPI_Recv(&A, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&B, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X1 = xor_gate(A, B);
        printf("Process %d: X1 = %d\n", rank, X1);
        MPI_Send(&X1, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);
    }
    /* Rank 2: X3 = A AND B, sent to the final gate on rank 4. */
    if (rank == 2) {
        MPI_Recv(&A, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&B, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X3 = and_gate(A, B);
        printf("Process %d: X3 = %d\n", rank, X3);
        MPI_Send(&X3, 1, MPI_INT, 4, 0, MPI_COMM_WORLD);
    }
    /* Rank 3: X2 = X1 OR C, sent to the final gate on rank 4. */
    if (rank == 3) {
        MPI_Recv(&X1, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&C, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X2 = or_gate(X1, C);
        printf("Process %d: X2 = %d\n", rank, X2);
        MPI_Send(&X2, 1, MPI_INT, 4, 0, MPI_COMM_WORLD);
    }
    /* Rank 4: final output X = X2 XOR X3. */
    if (rank == 4) {
        MPI_Recv(&X2, 1, MPI_INT, 3, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&X3, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        X = xor_gate(X2, X3);
        printf("Process %d: Final Output X = %d\n", rank, X);
    }

    MPI_Finalize();
    return 0;
}
OUTPUT:
mpicc -o lab9t1 lab9t1.c
mpirun -np 5 --hostfile hosts ./lab9t1
Enter values for A, B, and C (0 or 1): 1 0 1
Process 1: X1 = 1
Process 2: X3 = 0
Process 3: X2 = 1
Process 4: Final Output X = 1
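
Since this lab's OBJECT is the Broadcast command, the input-sharing step of Program 1 could also be written with a single collective call instead of the five MPI_Send calls on rank 0. The sketch below is an alternative, not the code that produced the output above; it assumes Program 1's variables and gate functions, and the array name inputs is introduced here only for illustration.

/* Alternative input distribution for Program 1 using MPI_Bcast (sketch only). */
int inputs[3];                                  /* inputs[0] = A, inputs[1] = B, inputs[2] = C */
if (rank == 0) {
    printf("Enter values for A, B, and C (0 or 1): ");
    scanf("%d %d %d", &inputs[0], &inputs[1], &inputs[2]);
}
/* One collective call replaces the individual sends: every rank gets all three inputs. */
MPI_Bcast(inputs, 3, MPI_INT, 0, MPI_COMM_WORLD);
/* Each gate process can now read the inputs it needs directly; the intermediate
   results X1, X2, and X3 would still be exchanged with MPI_Send/MPI_Recv as before. */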

Program 2: Write a program that performs matrix multiplication. Execute the program on the slave
machines.

A = [ 1  2 ]      B = [ 1  2 ]
    [ 3  4 ]          [ 3  4 ]


CODE:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Multiply the local block of rows of A (local_rows x n) by B (n x p),
   storing the result in local_C (local_rows x p). */
void matrix_multiply(double *local_A, double *B, double *local_C, int local_rows, int n, int p) {
    for (int i = 0; i < local_rows; i++) {
        for (int j = 0; j < p; j++) {
            local_C[i * p + j] = 0;
            for (int k = 0; k < n; k++) {
                local_C[i * p + j] += local_A[i * n + k] * B[k * p + j];
            }
        }
    }
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int m = 2, n = 2, p = 2;
    double *A = NULL, *B = NULL, *C = NULL;
    double *local_A, *local_C;
    /* Each process handles an equal block of rows; m must be divisible by the
       number of processes (here m = 2, so run with 2 processes). */
    int local_rows = m / size;

    /* Rank 0 initializes the full matrices A and B; C will hold the gathered result. */
    if (rank == 0) {
        A = (double *)malloc(m * n * sizeof(double));
        B = (double *)malloc(n * p * sizeof(double));
        C = (double *)malloc(m * p * sizeof(double));
        A[0] = 1; A[1] = 2;
        A[2] = 3; A[3] = 4;
        B[0] = 1; B[1] = 2;
        B[2] = 3; B[3] = 4;
    } else {
        /* The other ranks only need space for B, which will be broadcast to them. */
        B = (double *)malloc(n * p * sizeof(double));
    }
    local_A = (double *)malloc(local_rows * n * sizeof(double));
    local_C = (double *)malloc(local_rows * p * sizeof(double));

    /* Broadcast B to every process, scatter the rows of A, multiply the local
       block, then gather the partial results back into C on rank 0. */
    MPI_Bcast(B, n * p, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Scatter(A, local_rows * n, MPI_DOUBLE, local_A, local_rows * n, MPI_DOUBLE, 0,
                MPI_COMM_WORLD);
    matrix_multiply(local_A, B, local_C, local_rows, n, p);
    MPI_Gather(local_C, local_rows * p, MPI_DOUBLE, C, local_rows * p, MPI_DOUBLE, 0,
               MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Matrix A:\n");
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                printf("%f ", A[i * n + j]);
            }
            printf("\n");
        }
        printf("Matrix B:\n");
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < p; j++) {
                printf("%f ", B[i * p + j]);
            }
            printf("\n");
        }
        printf("Matrix C (Result of A * B):\n");
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < p; j++) {
                printf("%f ", C[i * p + j]);
            }
            printf("\n");
        }
        free(A);
        free(C);
    }
    free(B);
    free(local_A);
    free(local_C);
    MPI_Finalize();
    return 0;
}

OUTPUT:
mpicc -o lab9t2 lab9t2.c
mpirun -np 2 ./lab9t2
Matrix A:
1.000000 2.000000
3.000000 4.000000
Matrix B:
1.000000 2.000000
3.000000 4.000000
Matrix C (Result of A * B):
7.000000 10.000000
15.000000 22.000000
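
One limitation worth noting: MPI_Scatter and MPI_Gather in Program 2 assume that the row count m divides evenly by the number of processes (here m = 2, so the example runs on 2 processes). If the rows did not divide evenly, MPI_Scatterv and MPI_Gatherv could be used instead. The following is a minimal sketch of building the per-rank counts and displacements; it reuses Program 2's variables (A, local_A, m, n, rank, size), and the names sendcounts and displs are introduced here only for illustration.

/* Sketch: variable-sized row blocks of A for MPI_Scatterv when m % size != 0. */
int *sendcounts = (int *)malloc(size * sizeof(int));
int *displs     = (int *)malloc(size * sizeof(int));
int offset = 0;
for (int r = 0; r < size; r++) {
    int rows_r = m / size + (r < m % size ? 1 : 0);  /* first (m % size) ranks take one extra row */
    sendcounts[r] = rows_r * n;                      /* number of doubles of A sent to rank r */
    displs[r] = offset;                              /* starting offset of rank r's block in A */
    offset += rows_r * n;
}
/* local_A must be allocated with sendcounts[rank] doubles before this call. */
MPI_Scatterv(A, sendcounts, displs, MPI_DOUBLE,
             local_A, sendcounts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

An analogous counts/displacements pair (based on rows_r * p) would then be passed to MPI_Gatherv to collect the result rows back into C on rank 0.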

CONCLUSION:
In this lab we explored collective communication in MPI, focusing on the Broadcast, Scatter, and
Gather operations. The first program distributed a logic gate circuit across five processes, with
rank 0 supplying the inputs and each gate evaluated on its own rank, while the second program used
Broadcast, Scatter, and Gather to perform matrix multiplication in parallel. These exercises showed
how data can be shared and aggregated efficiently across processes, which is what makes collective
operations valuable for speeding up problem-solving in parallel computing.

