c++: MPI blocks execution after a send when different workloads are associated with different processors

gcmastyq  asked 12 months ago  in Other

I am having trouble with some MPI code (I wrote it to test another program in which different workloads are associated with different processors). The problem is that when I use a number of processors other than 1 or arraySize (4 in this example), the program blocks during MPI_Send; in particular, when I run mpirun -np 2 MPItest, the program blocks during the call. I am not using a debugger at the moment; I just want to understand why the code works with 1 and 4 processors but not with 2 (two spots of the array per processor). The code is below:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    int rank, size;
    const int arraySize = 4;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // every processor has a different workload (one or more spots in the array to send to the other processors)
    // every processor sends to every other processor its designated spots

    int* sendbuf = new int[arraySize];
    int* recvbuf = new int[arraySize];

    int istart = arraySize/size * rank;
    int istop = (rank == size) ? arraySize : istart + arraySize/size;

    for (int i = istart; i < istop; i++) {
        sendbuf[i] = i;
    }

    std::cout << "Rank " << rank << " sendbuf :" << std::endl;
    //print the sendbuf before receiving its other values
    for (int i = 0; i < arraySize; i++) {
        std::cout << sendbuf[i] << ", ";
    }
    std::cout << std::endl;

    // sending designated spots of sendbuf to other processors
    for(int i = istart; i < istop; i++){
        for(int j = 0; j < size; j++){
            MPI_Send(&sendbuf[i], 1, MPI_INT, j, i, MPI_COMM_WORLD);
        }
    }

    // receiving the full array
    for(int i = 0; i < arraySize ; i++){
        int recvRank = i/(arraySize/size);
        MPI_Recv(&recvbuf[i], 1, MPI_INT, recvRank, i, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    // print the recvbuf after receiving its other values
    std::cout << "Rank " << rank << " recvbuf :" << std::endl;
    for (int i = 0; i < arraySize; i++) {
        std::cout << recvbuf[i] << ", ";
    }
    std::cout << std::endl;

    delete[] sendbuf;
    delete[] recvbuf;

    MPI_Finalize();
    return 0;
}

I am using the tag to distinguish the different spots in the array (maybe that is the problem?).
I tried different numbers of processors: with 1 processor the program works, with 4 processors it also works, with 3 processors it crashes, and with 2 processors it blocks. I also tried MPI_Isend, but it does not work either (the flag stays 0). The code modified to use MPI_Isend is below:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    int rank, size;
    const int arraySize = 4;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // every processor has a different workload (one or more spots in the array to send to the other processors)
    // every processor sends to every other processor its designated spots

    int* sendbuf = new int[arraySize];
    int* recvbuf = new int[arraySize];

    int istart = arraySize/size * rank;
    int istop = (rank == size) ? arraySize : istart + arraySize/size;

    for (int i = istart; i < istop; i++) {
        sendbuf[i] = i;
    }

    std::cout << "Rank " << rank << " sendbuf :" << std::endl;
    //print the sendbuf before receiving its other values
    for (int i = 0; i < arraySize; i++) {
        std::cout << sendbuf[i] << ", ";
    }
    std::cout << std::endl;

    // sending designated spots of sendbuf to other processors
    for(int i = istart; i < istop; i++){
        for(int j = 0; j < size; j++){
            MPI_Request request;
            //MPI_Send(&sendbuf[i], 1, MPI_INT, j, i, MPI_COMM_WORLD);
            MPI_Isend(&sendbuf[i], 1, MPI_INT, j, i, MPI_COMM_WORLD, &request);
            // check whether the send has completed
            int flag = 0;
            MPI_Test(&request, &flag, MPI_STATUS_IGNORE);
            const int numberOfRetries = 10;
            if(flag == 0){ // operation not completed
                std::cerr << "Error in sending, waiting" << std::endl;
                for(int k = 0; k < numberOfRetries; k++){
                    MPI_Test(&request, &flag, MPI_STATUS_IGNORE);
                    if(flag == 1){
                        break;
                    }
                }
                if(flag == 0){
                    std::cerr << "Error in sending, aborting" << std::endl;
                    MPI_Abort(MPI_COMM_WORLD, 1);
                }
                
            }
        }
    }

    // receiving the full array
    for(int i = 0; i < arraySize ; i++){
        int recvRank = i/(arraySize/size);
        MPI_Recv(&recvbuf[i], 1, MPI_INT, recvRank, i, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    // print the recvbuf after receiving its other values
    std::cout << "Rank " << rank << " recvbuf :" << std::endl;
    for (int i = 0; i < arraySize; i++) {
        std::cout << recvbuf[i] << ", ";
    }
    std::cout << std::endl;

  
    //MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    delete[] sendbuf;
    delete[] recvbuf;

    MPI_Finalize();
    return 0;
}


With this code, -np 4 does not work either.
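As an aside, the commented-out MPI_Alltoall line hints at a collective, but the pattern here — every rank sending the same slice to everyone — is exactly what MPI_Allgather does in a single call, with no manual tags and no send/receive ordering that could deadlock. The sketch below is a possible rewrite under the same assumption as the original (arraySize divisible by size), not a fix for the environment problem described in the answer below; it needs to be launched with mpirun like the other listings:

```cpp
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    int rank, size;
    const int arraySize = 4;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Assumes arraySize is divisible by size, as the original code does.
    const int chunk = arraySize / size;
    int* sendbuf = new int[chunk];     // only this rank's slice
    int* recvbuf = new int[arraySize]; // the assembled full array

    // Fill this rank's slice with its global indices istart..istart+chunk-1.
    int istart = chunk * rank;
    for (int i = 0; i < chunk; i++) {
        sendbuf[i] = istart + i;
    }

    // Every rank contributes `chunk` ints; every rank receives the
    // concatenation ordered by rank, replacing the manual send/recv loops.
    MPI_Allgather(sendbuf, chunk, MPI_INT, recvbuf, chunk, MPI_INT,
                  MPI_COMM_WORLD);

    std::cout << "Rank " << rank << " recvbuf :" << std::endl;
    for (int i = 0; i < arraySize; i++) {
        std::cout << recvbuf[i] << ", ";
    }
    std::cout << std::endl;

    delete[] sendbuf;
    delete[] recvbuf;
    MPI_Finalize();
    return 0;
}
```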

ztigrdn8

Since I have not received any answers to my question, I would like to add some insight into the problem, in case it helps anyone who finds themselves in the same situation.
I tested another piece of code to see whether OpenMPI works correctly at all on my laptop, because too many things that are correct according to the standard, even code examples from the internet, were not working on my laptop. I tested the code below, a very simple program that sends part of an array between two processes:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    int rank, size;
    const int arraySize = 5;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // initialize sendbuf
    int* sendbuf = new int[arraySize];
    for(int iteration = 0; iteration < 3; iteration++){

        if(rank){
            std::cout << "Rank " << rank << " sendbuf :" << std::endl;
            for (int i = 0; i < arraySize; i++) {
                std::cout << sendbuf[i] << ", ";
            }
            std::cout << std::endl;
        }

        // first process sends its first three elements to the second process
        if(rank == 0){
            for(int i = 0; i < 3; i++){
                sendbuf[i] = i;
            }
            MPI_Send(&sendbuf[0], 3, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else {
            for(int i = 3; i < 5; i++){
                sendbuf[i] = i;
            }
        }

        // receive the missing part of the array
        if(rank){
            // second process receive the first three elements from first process
            MPI_Recv(&sendbuf[0], 3, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        // print the full array
        if(rank){
            std::cout << "Rank " << rank << " sendbuf after:" << std::endl;
            for (int i = 0; i < arraySize; i++) {
                std::cout << sendbuf[i] << ", ";
            }
            std::cout << std::endl;
        }

        // reset the buffer for the next iteration
        for(int i = 0; i < arraySize; i++){
            sendbuf[i] = -1;
        }
        
    }

    MPI_Finalize();

}

I wanted to see whether a single send and a single receive would work in a loop on my laptop, and to my surprise (after two days of trying everything), the problem turned out to be my laptop and its OpenMPI installation. To rule out a hardware problem on my side, I also tested this code on a cluster where I have a working MPI implementation. The code works on the cluster, but not on my laptop.
Finally, this is the hardware I have:

  • Kernel: 6.6.1-arch1-1
  • Arch: x86_64
  • Bits: 64
  • Compiler: gcc
  • Model: Lenovo Legion 7 16IAX7
  • CPU: 12th Gen Intel(R) Core(TM) i7-12800HX
  • OpenMPI version: 4.1.5-5

This is not a solution, but it answers my question of why the code was not working.
